\section{Introduction} \label{sec:intro} \begin{figure*}[h] \centering \includegraphics[width=0.95\textwidth]{figures/figure1.pdf} \caption{(a) A fully recurrently connected Hopfield CAM network \cite{hopfield1982neural, hopfield1984neurons}. (b) Bipartite CAM networks: Bipartite Expander Hopfield Network \cite{chaudhuri2019bipartite}, Modern Hopfield Network \cite{krotov2020large}. (c) Overparameterized tail-biting autoencoder as a CAM~\cite{radhakrishnan2020overparameterized}. (d) Schematic of the memory cliff exhibited by most CAM models: addition of patterns beyond the critical capacity leads to catastrophic loss of all patterns. (e) Theoretical upper-bound on information storage for CAM networks with $N^2$ synapses (grey dashed line); existing networks each approach the envelope at one point (i.e., for a specific number of patterns), and otherwise remain far from it. (f) Tripartite architecture of MESH, our proposed model. (g) The desired CAM continuum: a single network with information storage paralleling the theoretical bound envelope regardless of the number of stored patterns. Gray arrows: predetermined fixed weights; black arrows: learned weights.} \label{fig:schematics} \end{figure*} Content-addressable memory (CAM) networks are attractive models of long-term human memory: Humans are experts at recognizing situations or items they have encountered before, and often fill in the details from partial or noisy information. A CAM likewise maintains memories that can be reconstructed when the network is presented with a partial or corrupted version of a previously seen input. Recurrently connected CAMs, for example the Hopfield network \cite{hopfield1982neural, hopfield1984neurons}, additionally encode the memorized states as fixed points of their dynamics: they use their dynamics to drive the input state to one of the nearest fixed points and keep it there. Dynamical fixed-point CAM networks can therefore also function as short-term memory networks for the acquired long-term memories, which makes CAMs powerful memory models for ANNs as well. Several network architectures support CAM dynamics, including the Hopfield network \cite{hopfield1982neural, hopfield1984neurons} (Fig. \ref{fig:schematics}a), several variants of the Hopfield network \cite{personnaz1985information, tsodyks1988enhanced, krotov2020large} (Fig. \ref{fig:schematics}a,b), and overparametrized autoencoders \cite{radhakrishnan2020overparameterized} (Fig. \ref{fig:schematics}c). However, all these CAM architectures exhibit a memory cliff, beyond which adding a single pattern leads to catastrophic loss of all patterns (Fig. \ref{fig:schematics}d). The total information content of CAM networks is bounded theoretically by $\mathcal{O}(N^2)$, the number of synapses in the network \cite{abu1989information, gardner1988space} (Fig. \ref{fig:schematics}e), defining a total information budget to be split between the number of stored patterns and the information per pattern. However, most CAM networks approach that bound only when storing a fixed, specific number of patterns (Fig. \ref{fig:schematics}e): different CAM networks (defined by their inputs, architecture, or weight and activity update rules) touch this total information envelope at different points, with some storing a small number of maximally detailed memory states, others storing a larger number of less-detailed states.
None of these models have the flexibility to span the memory envelope such that the information recalled per pattern is continuously traded off for increasing numbers of stored patterns in an online way, while preserving a constant total information that remains close to the information envelope. In this paper we propose a novel and biologically motivated memory architecture, Memory Scaffold with Heteroassociation (MESH), that generates a CAM continuum (see Fig. \ref{fig:schematics}f for a schematic of the network architecture). MESH breaks the problem of associative memory into two separate pieces: a part that handles memory through a pre-defined ``\emph{memory scaffold}'', and a part that handles association through a ``\emph{heteroassociative}'' step. Inspired by the Entorhinal-Hippocampal memory system in mammalian brains, MESH contains a bipartite attractor network that stabilizes a large dictionary of well-separated and pre-defined fixed points that serve as the memory scaffold. Arbitrary dense patterns are then stored by heteroassociatively linking them to the pre-defined scaffold states. This novel combination results in a CAM continuum (CAMC) that approaches the theoretical upper-bound on information storage in neural networks \cite{abu1989information, gardner1988space}, as shown schematically in Fig. \ref{fig:schematics}g. In this network, storage of information-dense patterns up to a critical capacity results in complete recovery of all patterns, while storage of a larger number of patterns results in partial reconstruction of the corresponding stored patterns. Partial reconstruction continues up to a number of patterns that is exponentially large as a function of the total number of neurons in the network, ending in correct recognition of exponentially many stored patterns. To our knowledge, this is the first model of a CAM that automatically trades off pattern number and pattern richness. It predicts that biological memory systems may exploit pre-existing scaffolds to acquire new memories, potentially consistent with the preplay of hippocampal sequences before they are used for representing new environments \cite{dragoi2011preplay}. In the next section, we discuss existing CAM models and their dynamics. In Section~\ref{sec:MESH_CAMC} we provide our central results on the memory continuum exhibited by MESH. In Sections~\ref{sec:memory_scaffold} and \ref{sec:arbitrary_patts}, we analyze how MESH works. In Section~\ref{sec:applications} we extend MESH to the case of continuous neural activations, apply it to a realistic dataset, and show that it continues to exhibit the memory continuum even when storing continuous patterns. \section{Existing CAM models lack a memory continuum} Here we review existing CAM architectures. Unless otherwise specified, we consider networks with $N$ neurons and dense binary activations (i.e., $N$-dimensional vectors with activations of 1 or -1 in each entry). Hopfield networks~\cite{hopfield1982neural} (Fig. \ref{fig:schematics}a) can store up to $\approx0.14N$ random binary patterns. Beyond this capacity, the network demonstrates a memory cliff~\cite{nadal1986networks, crisanti1986saturation, dominguez2007information} (Fig. \ref{fig:existing_nets}a). The recurrent weights in the Hopfield network may instead be set by a pseudoinverse learning rule~\cite{personnaz1985information}, with which the network is guaranteed to store up to $N$ linearly independent patterns.
However, storing more than $N/2$ patterns results in vanishing basins of attraction around each fixed point \cite{personnaz1986collective, kanter1987associative} (Fig. \ref{fig:existing_nets}b). Bounded synapse models~\cite{parisi1986memory, fusi2007limits, van2012soft}, on the other hand, do not exhibit a memory cliff in the same sense as the classic Hopfield network; however, attempted storage of a large number of patterns results in complete loss of a large fraction of the stored patterns, with only $\approx0.04N$ patterns correctly recalled (Fig. \ref{fig:existing_nets}c). Hopfield networks with sparse inputs store sparse $\{0,1\}$ binary patterns with a fraction $p$ of non-zero entries, instead of the usual dense $\{-1,1\}$ patterns~\cite{tsodyks1988enhanced}. They can store a larger number of sparse patterns, given by $(p|\ln(p)|)^{-1}N$, such that the product of the number of patterns times the information per pattern is constant. However, the tradeoff between pattern number and pattern information here is due to the input patterns rather than the network -- each pattern is still fully recalled, with a memory cliff at the pattern capacity (Fig. \ref{fig:existing_nets}d). The same network operating on a dataset of fixed sparsity does not exhibit a tradeoff between pattern number and pattern information. Sparse Hopfield networks, on the other hand, have sparse connectivity~\cite{dominguez2007information} but store dense $\{-1,1\}$ patterns. These networks present a narrow memory continuum (Fig. \ref{fig:existing_nets}e); however, they have a very low capacity. The bipartite expander Hopfield network~\cite{chaudhuri2019bipartite} can be used to perform robust label retrieval from noisy or partial pattern cues, for an exponentially large number of arbitrary patterns (Fig. \ref{fig:existing_nets}b). However, the nature of memory in this network is familiarity or labeling, not reconstruction. Thus the information per pattern is very small, regardless of the number of stored patterns. Dense (`Modern') Hopfield networks are recently proposed variants of the Hopfield model that involve higher-order interactions in place of the conventional pairwise interactions. These present a memory capacity that grows as $N^{K-1}$ or $\exp(N)$, depending on the order of the interactions~\cite{krotov2016dense,demircigil2017model,ramsauer2020hopfield}. Though a bipartite structure (Fig. \ref{fig:schematics}b) with pairwise interactions can approximate higher-order interactions \cite{krotov2020large, chaudhuri2019bipartite}, the capacity of a CAM with this structure remains merely linear, rather than exponential, in the number of hidden nodes~\cite{krotov2020large}. In fact, the number of hidden units must exactly equal the number of memories; thus, storage of a variable number of patterns requires a change of network architecture, rendering the network inflexible and hence unable to exhibit a memory continuum. Overparameterized autoencoders can also act as a CAM, with patterns stored as the fixed points of iterations of the learned map of the autoencoder~\cite{radhakrishnan2020overparameterized} (Fig. \ref{fig:schematics}c). A drawback of these CAMs is that autoencoders require extensive training through backpropagation, in contrast to the one-shot learning in associative memory models, including all CAM models described above and MESH. Moreover, similar to other CAM models, overparametrized autoencoders also exhibit a memory cliff (Fig. \ref{fig:existing_nets}f).
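To make the memory cliff concrete, the following minimal sketch (our own Python/NumPy illustration, not code from any of the cited works; the network size and pattern counts are arbitrary choices) stores random dense patterns in a classical Hopfield network with the Hebbian rule and measures the mean recall overlap, which stays near 1 below capacity and collapses for all patterns at once beyond $\approx0.14N$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 200                                 # number of neurons

def mean_recall_overlap(n_patts, n_steps=20):
    # Store random dense {-1,1} patterns with the Hebbian rule.
    P = rng.choice([-1.0, 1.0], size=(n_patts, N))
    W = P.T @ P / N
    np.fill_diagonal(W, 0.0)
    # Cue with the clean patterns and iterate the dynamics.
    X = P.copy()
    for _ in range(n_steps):
        X = np.where(X @ W >= 0, 1.0, -1.0)
    # Mean overlap between stored and recovered patterns.
    return np.mean(np.sum(P * X, axis=1)) / N

for n in [10, 20, 28, 40, 60]:          # the cliff sits near 0.14*N = 28
    print(n, round(mean_recall_overlap(n), 2))
\end{verbatim}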
\section{MESH exhibits near-optimal CAM continuum} \label{sec:MESH_CAMC} \begin{figure*}[h] \centering \includegraphics[width=0.95\textwidth]{figures/figure5.pdf} \caption{(a) Mutual information per input bit between the stored and recovered patterns in MESH, as a function of the number of patterns stored in the network. Here $N_L = 32$, $k=3$, $N_H = 200$, $N_F = 4960$. (b) Mutual information (per input bit) in existing networks relative to MESH (see Fig.~\ref{fig:appendix_MIrelative}b for mutual information as a function of total information stored per synapse). (c) Comparison of information per synapse across different networks relative to MESH. In (b) and (c) all networks are chosen to have $\approx 5\times 10^5$ synapses, with MESH layer sizes: $N_L = 18$, $N_H = 300$, $N_F = 816$, and $k = 3$ active bits in the label layer.} \label{fig:MESH_CAM_resutls} \end{figure*} We present MESH, a memory architecture in which a single network, \emph{without reparametrization or restructuring}, can store a few patterns with rich detail or increasingly more patterns with continuously decreasing detail, and thus inhabit the whole extent of the memory envelope, Fig.~\ref{fig:schematics}g. MESH consists of two components, Fig.~\ref{fig:schematics}f: 1) a predefined ``memory scaffold'' that generates a set of fixed points with large basins; the memory scaffold is a bipartite attractor network with an $N_L$-dimensional label layer and an $N_H$-dimensional hidden layer; and 2) a ``heteroassociative'' layer -- an $N_F$-dimensional input/readout layer whose arbitrarily specified patterns are hooked onto the memory scaffold via heteroassociative learning. The network construction is described in more detail in the following two sections. We first demonstrate the capabilities of MESH. To probe memory recovery in MESH, the feature layer is cued with a corrupted version of a stored pattern. The retrieval dynamics (Fig.~\ref{fig:schematics}f) of the network return a cleaned-up version of the cued pattern to the feature layer. MESH perfectly reconstructs up to $N_H$ arbitrary stored patterns ($N_{patts}\leq N_H$) of size $N_F$ bits each. When $N_{patts}$ is increased beyond this number, the network performs partial reconstruction of the stored patterns, with a smooth decay in the quality of reconstructed patterns, Fig. \ref{fig:MESH_CAM_resutls}a. We next quantify the information stored in MESH against theoretical bounds \cite{gardner1988space, abu1989information}. The theoretical bound for CAMs is given by the total number of learnable synapses. For the layer sizes in MESH, this bound is $MI_{total}^* = N_H(2N_F + N_L)$. In practice $N_L\ll N_F$, so this bound can be approximated as $2N_H N_F$. For patterns of length $N_F$, the CAM networks (Fig.~\ref{fig:existing_nets}) with a matched number of synapses may at best fully recall up to $2N_H$ patterns, but exhibit a memory cliff beyond that. However, if a CAM network were to exhibit an optimal memory continuum, it should saturate the total information bound regardless of the number of stored patterns, with information per pattern per bit theoretically bounded by: \begin{eqnarray} \label{Eqn:bound} MI_{perinbit}^* &=& \frac{MI_{total}^*}{N_{patts}\cdot \text{\# bits per pattern}} \\ &=& \frac{N_H (2N_F + N_L) }{N_{patts}N_F}\approx \frac{2 N_H}{ N_{patts}}. \end{eqnarray} Experimentally, we find that MESH nearly saturates this theoretical bound across a wide range in the number of stored patterns, Fig.
\ref{fig:MESH_CAM_resutls}a (bound in dashed gray), without any architectural or hyperparameter changes. The per-input-bit mutual information matches the best-performing CAM models when the number of stored patterns is smaller than the traditional CAM capacity, and is dramatically larger when the number of stored patterns is larger, Fig.~\ref{fig:MESH_CAM_resutls}b (also see Fig.~\ref{fig:appendix_MIrelative}b,c). The number of stored and partially retrievable patterns exceeds the traditional CAM pattern number capacity by orders of magnitude. Consistent with this result, the information per synapse at large pattern numbers in MESH is significantly larger than in existing CAM models, Fig.~\ref{fig:MESH_CAM_resutls}c. Asymptotically with an increasing number of stored patterns, the total information per synapse in MESH approaches a constant (Fig.~\ref{fig:MESH_CAM_resutls}c) -- demonstrating that the total information that can be successfully recovered by MESH in a network of fixed size is invariant to the number of stored patterns. This invariance dictates the smooth trade-off between MI per pattern (pattern richness) and the number of patterns. In sum, MESH stores a constant amount of total information that is invariant to the number of stored patterns; this total information content is proportional to the theoretical synaptic upper bound of total information storage in CAMs and is distributed across patterns so that the information per pattern degrades gracefully, with no memory cliff, as a function of the number of stored patterns (see Fig.~\ref{fig:appendix_MIrelative}a for the feature recovery error distribution). Next, to understand the underlying mechanisms that permit this flexible memory performance, we will examine the functional properties of the two components of MESH: the memory scaffold in Section~\ref{sec:memory_scaffold} and the heteroassociative learning in Section~\ref{sec:arbitrary_patts}. \section{Exponential memory scaffold} \label{sec:memory_scaffold} \begin{figure*}[h] \centering \includegraphics[width=0.95\textwidth]{figures/figure2.pdf} \caption{(a) Memory scaffold in our model --- label-hidden state attractor network. (b) Capacity of the memory scaffold (with 20\% input noise injected in the hidden layer, and allowing up to a 3\% recovery error measured via the Hamming distance between the stored and recovered patterns). Different curves show the capacity corresponding to different sizes of the label layer for labels with a constant number of active bits ($k=3$). (c) Exponential capacity of the memory scaffold, assuming a constant density ($k/N_L$) of stored labels.} \label{fig:mem_scaffold} \end{figure*} The memory scaffold is a network that recurrently stabilizes a large number of prestructured states with large basins of attraction, and performs denoising or clean-up of corrupted versions of these states. Specifically, we choose the predefined label layer states to be the set of $k$-hot patterns (each label state $l^\mu$ is a vector with exactly $k$ bits set to ``1'' and all other bits set to ``0'', where $\mu$ is the pattern index). These label states are defined on a small $N_L$-dimensional label (L) layer that projects with fixed dense random weights $W_{HL}$ to a much larger $N_H$-dimensional hidden (H) layer, Fig.~\ref{fig:mem_scaffold}a. These weights are drawn independently from a normal distribution with zero mean and unit variance: \begin{equation} {W_{HL}}_{ij} \sim \mathcal{N}(0,1).
\end{equation} Return projections from the hidden (H) to the label (L) layer are learned through pairwise Hebbian learning between the set of predetermined label layer states and the resulting hidden-layer activations $h^{\mu} = \sgn (W_{HL} l^{\mu})$, as given by Eq. \ref{Eqn:return_proj}, where $C$ is a normalization term given by the number of predefined patterns, $\binom{N_L}{k}$. We assume that the label layer implements attractor dynamics through $k$-winners-take-all dynamics imposed by local recurrent inhibition \cite{rutishauser2011collective, wang2003k, yang1997dynamic}, enforcing through its dynamics that states remain $k$-hot at all times through a ``Top-k'' nonlinearity (this Top-k nonlinearity can be replaced with a fixed threshold across all patterns; however, the threshold would then have to be varied with $N_H$, Fig.~\ref{fig:appendix_topk}). \begin{equation} \label{Eqn:return_proj} W_{LH} = \frac{1}{C}\sum_{\mu=1}^{C} l^{\mu} (h^{\mu})^T = \frac{1}{C}\sum_{\mu=1}^{C} l^{\mu} \sgn (W_{HL} l^{\mu})^T. \end{equation} Given a state $h(t)$, the memory scaffold states update as: \begin{align} l(t) &= \topk [W_{LH} h(t)], \label{eq:HtoL} \\ h(t+1) &= \sgn [W_{HL} l(t)].\label{eq:LtoH} \end{align} The essential features that we desire for a memory scaffold are: first, the scaffold should have a large number of fixed points as compared with the size of the network; and second, the basins of attraction for each of these fixed points must be sufficiently large to accommodate any perturbations induced while accessing the memory scaffold through the feature layer --- as we show, each of the $\binom{N_L}{k}$ predefined states will form robust fixed points of the network with maximally large basins of attraction. \begin{theorem}\label{thm:scaffoldFP} For $N_H$ larger than a critical number $N_H^{crit}$, all $\binom{N_L}{k}$ predefined $k$-hot label states are fixed points of the recurrent dynamics Eqs. (\ref{eq:HtoL},\ref{eq:LtoH}). \end{theorem} While we do not provide a rigorous proof of this theorem, we provide a heuristic justifying this result in Appendix \ref{apx:scaffoldproof}. Empirically, $N_H^{crit}\ll \binom{N_L}{k}$ is independent of $N_L$ and grows linearly with $k$ (Fig. \ref{fig:appendix_memscaffold}c). We obtain directly as a corollary (see Appendix \ref{sec:app_scaffold_converge} for the proof): \begin{corollary}\label{thm:onestep} For $N_H>N_H^{crit}$, any vector $h(0)$ maps to a predefined scaffold state $h^\mu$ for some $\mu$ within a single iteration. \end{corollary} As described in Sec. \ref{sec:MESH_CAMC}, the hidden layer $H$ serves as an access point onto which the arbitrary patterns in the feature layer are hooked. Thus, we will primarily be interested in the robustness of these fixed points to perturbations of the hidden layer states $h^\mu$. \begin{theorem}\label{thm:basinsize} For $N_H>N_H^{crit}$, all fixed points are stable, with equal-volume basins of attraction that are maximally large, i.e., the basin size is of the order of the size of the Voronoi cell of each pattern, $\text{Vol}[\{-1,1\}^{N_H}]/C$, where $C=\binom{N_L}{k}$ is the number of predefined scaffold states. \end{theorem} While we prove this result in Appendix \ref{sec:app_basinsize}, the presence of large-volume basins of attraction does not in itself imply robustness to perturbations. However, we additionally show that these basins are convex in Appendix \ref{apx:scaffoldconvex}, which then guarantees strong robustness to noise.
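To make the scaffold construction concrete, the following minimal sketch (our own Python/NumPy illustration; the layer sizes are arbitrary small choices, with $N_H$ chosen comfortably above the empirical $N_H^{crit}$ for $k=3$) builds $W_{HL}$ and $W_{LH}$ as above and denoises a corrupted hidden state in a single iteration, in the spirit of Corollary \ref{thm:onestep}:
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N_L, k, N_H = 12, 3, 150                 # small sizes for illustration

# All C = binom(N_L, k) predefined k-hot label states (rows of L).
L = np.array([[1.0 if i in idx else 0.0 for i in range(N_L)]
              for idx in combinations(range(N_L), k)])
C = len(L)                               # here C = 220

W_HL = rng.normal(0, 1, size=(N_H, N_L)) # fixed random projection
H = np.where(L @ W_HL.T >= 0, 1.0, -1.0) # rows are h^mu = sgn(W_HL l^mu)
W_LH = (L.T @ H) / C                     # Hebbian return weights

def topk(x, k):
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0        # k-winners-take-all
    return out

# Denoise a corrupted hidden state with one scaffold iteration.
mu = 5
h = H[mu] * np.where(rng.random(N_H) < 0.2, -1.0, 1.0)  # flip ~20% of bits
l = topk(W_LH @ h, k)                         # label-layer update
h_clean = np.where(W_HL @ l >= 0, 1.0, -1.0)  # hidden-layer update
print(np.array_equal(l, L[mu]), np.array_equal(h_clean, H[mu]))
\end{verbatim}
For these sizes, the final line should typically print True for both the label and the hidden state.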
We also note that the $k$-winners-take-all attractor dynamics of the label layer can recurrently maintain stability at the retrieved state, with an additional update equation of $l(t+\tau) = \topk[l(t)] = l(t)$ for $\tau\geq 1$. In this sense, the network is able to hold a retrieved state as a short-term memory, as in Hopfield networks. Corresponding to Theorem \ref{thm:scaffoldFP}, we experimentally observe that this bipartite memory scaffold can denoise states with high accuracy once the number of hidden neurons exceeds a critical value $N_{H}^{crit}$. For a fixed value of $k$, this critical number appears to be approximately independent of the number of label neurons $N_L$ (Fig.~\ref{fig:mem_scaffold}b), and thus does not depend on the number of stored patterns $C=\binom{N_L}{k}$. For constant $k$, one can therefore increase $N_L$ (while $N_L<N_H$) to obtain a capacity that at fixed $N_H$ grows rapidly with $N_L$ and $k$ as $\binom{N_L}{k}\sim (N_L)^k$. An even faster growth of large-basin scaffold states can be obtained by increasing the size of the label layer while holding the activity density ($d = k/N_L$) fixed (Fig.~\ref{fig:mem_scaffold}c). This results in a capacity that grows exponentially as $\binom{N_L}{k}\sim \exp(d N_L)$. Attaining this exponential growth in capacity requires an increase in $k$, which in turn requires a corresponding linear increase in $N_H^{crit}$ (Fig. \ref{fig:appendix_memscaffold}c). The number of large-basin stable states in this memory scaffold is far greater than the number of nodes and the number of synapses in this bipartite network, growing exponentially with the number of nodes. This does not violate CAM synaptic information bounds, since the stable states are predetermined rather than arbitrary, and thus cannot transmit any information beyond the pattern index. We next demonstrate that the hidden layer can be used as an access point between arbitrary patterns and the memory scaffold, to hook external patterns onto the scaffold states. \section{Heteroassociation of arbitrary patterns onto scaffold} \label{sec:arbitrary_patts} \begin{figure*}[h] \centering \includegraphics[width=0.95\textwidth]{figures/figure4.pdf} \caption{(a) Detailed architecture of our proposed model MESH, with numbers in red indicating the time step of information flow through the model. (b) Number of label, hidden and feature layer vectors that are perfectly recovered on cuing the MESH network with zero input noise (left) and 5\% input noise (right) in the feature states, and allowing zero recovery error. MESH constructed with $N_L=18$, $k=3$ and $N_F = \binom{N_L}{k}$. (c) Left: Overlap between the stored and the recovered patterns. Right: Information per synapse in MESH (number of stored patterns increases along the x-axis). (d) Blue and red curves: Mutual information (per input bit). Gray curve: Mutual information when $W_{HF}$ and $W_{FH}$ are trained using Hebbian learning. (e) Left: Voronoi diagram showing basins of attraction (Voronoi cells) for the stored patterns. Right: Even during partial reconstruction, the recovered pattern continues to lie in the correct Voronoi cell.} \label{fig:arbitrary_patterns} \end{figure*} The second component of MESH is bi-directional heteroassociative learning between the memory scaffold and inputs in the feature layer.
The feature layer is the input and output of MESH: patterns to be stored are presented as random dense patterns which are then ``hooked'' onto one of the large-basin fixed points of the memory scaffold, Fig. \ref{fig:arbitrary_patterns}a. The heteroassociative weights are set by presenting an arbitrary input vector $f^\mu$\footnote{In analytical derivations, we consider $f^\mu$ to be dense random binary $\{-1,1\}$ patterns, although in practice this is not necessary (Sec. \ref{sec:applications} shows examples of storage of continuous-valued features).} while activating one of the predefined scaffold states to generate a hidden layer state $h^\mu$. Weights are then set by the pseudoinverse learning rule \cite{personnaz1985information}, \begin{align} W_{HF} &= H F^+ \;\;\; \text{and} \;\;\; W_{FH} = F H^+, \label{eq:WFH} \end{align} where the columns of $H$ and $F$ are the predefined hidden layer states $h^\mu$ and input patterns $f^\mu$, respectively. Pseudoinverse learning can also be approximated through a (biologically plausible) online incremental learning mechanism \cite{tapson2013learning}, allowing weights to be learned as patterns are presented in an online or streaming setting. Note that the essential component of MESH is heteroassociative learning, not specifically the pseudoinverse rule. Heteroassociation through Hebbian learning, such that $W_{HF} = HF^T$ and $W_{FH} = FH^T$, also produces a CAM continuum in MESH, though, as seen in conventional Hopfield networks, pseudoinverse learning results in higher total stored information \cite{kanter1987associative, refregier1989improved, storkey1997increasing} (Fig. \ref{fig:arbitrary_patterns}d, gray curve). Furthermore, given a memory scaffold that perfectly recovers all hidden states, a single heteroassociative step through Hebbian learning is also sufficient for a continuum (see Appendix \ref{sec:MI_hebb_theory} for details). Presented with a noisy feature state $f(t)$ at time $t$, MESH dynamics are summarized as follows: \begin{align} h(t) &= \sgn[W_{HF} f(t)], \\ l(t) &= \topk[W_{LH} h(t)], \label{eq:memscaf1}\\ h(t+1) &= \sgn[W_{HL} l(t)], \label{eq:memscaf2}\\ f(t+1) &= \sgn[W_{FH} h(t+1)] \label{eq:frecon}. \end{align} Heteroassociative weights project noisy input patterns onto the hidden layer in the memory scaffold. The memory scaffold cleans up the received input by flowing to the nearest fixed point. This fixed point is decoded by the return projection to the feature layer, generating a non-noisy reconstruction of the input. To see why the heteroassociation with the memory scaffold allows for successful pattern storage and recovery, we examine the mapping from the feature layer to the memory scaffold, and then the recovery of the feature state from the scaffold. For the purpose of our arguments, we assume that the patterns being stored in the feature layer are random binary $\{-1,1\}$ patterns, and hence the matrix $F$ will be full rank. This allows the following results. \begin{theorem}\label{thm:Hcorrect} If the $N_F\times N_{patts}$ dimensional matrix $F$ is full rank, an input of clean feature vectors perfectly reconstructs the hidden layer states through heteroassociative pseudoinverse learning from the feature layer to the hidden layer, provided $N_{patts}\leq N_F$. \end{theorem} Theorem \ref{thm:Hcorrect}, which we prove in Appendix \ref{sec:app_hetero_ftoh}, implies that cuing the network with unperturbed features stored in the memory results in perfect reconstruction of the predefined hidden layer states.
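To make the full storage-and-retrieval pipeline concrete, the following minimal sketch (our own Python/NumPy illustration, under the same conventions as the scaffold sketch of Sec.~\ref{sec:memory_scaffold}; the layer sizes are arbitrary, and \texttt{np.linalg.pinv} plays the role of the pseudoinverse in Eq.~\ref{eq:WFH}) stores $N_{patts}\leq N_H$ random dense patterns and retrieves one of them from a corrupted cue:
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N_L, k, N_H, N_F = 12, 3, 150, 300
N_patts = 100                            # <= N_H: perfect-recovery regime

# Memory scaffold, as in the previous sketch.
L = np.array([[1.0 if i in idx else 0.0 for i in range(N_L)]
              for idx in combinations(range(N_L), k)])
W_HL = rng.normal(size=(N_H, N_L))
H_all = np.where(L @ W_HL.T >= 0, 1.0, -1.0)
W_LH = (L.T @ H_all) / len(L)

# Heteroassociation: columns of Fm / Hm are the patterns f^mu / h^mu.
Fm = rng.choice([-1.0, 1.0], size=(N_F, N_patts))
Hm = H_all[:N_patts].T
W_HF = Hm @ np.linalg.pinv(Fm)           # W_HF = H F^+
W_FH = Fm @ np.linalg.pinv(Hm)           # W_FH = F H^+

def topk(x, k):
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

def retrieve(f):                         # one pass of the MESH dynamics
    h = np.where(W_HF @ f >= 0, 1.0, -1.0)
    l = topk(W_LH @ h, k)
    h = np.where(W_HL @ l >= 0, 1.0, -1.0)
    return np.where(W_FH @ h >= 0, 1.0, -1.0)

mu = 7
f_noisy = Fm[:, mu] * np.where(rng.random(N_F) < 0.05, -1.0, 1.0)
print(np.array_equal(retrieve(f_noisy), Fm[:, mu]))  # expected: True
\end{verbatim}
Setting $N_{patts}>N_H$ in the same sketch should instead exhibit the graceful degradation quantified below.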
Following the results in Sec. \ref{sec:memory_scaffold}, if $N_H>N_H^{crit}$, reconstruction of the correct hidden states ensures that the correct predefined label states are also recovered, as shown in Fig. \ref{fig:arbitrary_patterns}b, left. Note that the number of successfully recovered features is equal to the number of hidden neurons $N_H$, consistent with our description in Sec. \ref{sec:MESH_CAMC}, a result that we now formalize. \begin{theorem}\label{thm:Nhpatts} Assuming correctly reconstructed predefined hidden layer states, heteroassociative pseudoinverse learning results in perfect reconstruction of up to $N_H$ patterns in the feature layer. \end{theorem} This theorem (proof in Appendix~\ref{sec:app_hetero_htof}) also demonstrates the importance of the expansion of the label layer to a hidden layer of size $N_H$ --- setting up the predefined fixed points of the memory scaffold in a space that is higher dimensional than the label layer allows for perfect reconstruction of patterns up to the hidden layer dimensionality. This allows the `knee' of the CAMC to be tuned as required by choosing an appropriate value of $N_H$ (Fig. \ref{fig:appendix_MIvarylayers}a). We now show that a CAM continuum exists for $N_{patts}>N_H$. We first show a result on the overlap between stored and recovered patterns before proving our main result on the mutual information of recovery of the CAM continuum. \begin{theorem}\label{thm:ha_ov_continuum} Assume that the memory scaffold has correctly reconstructed the predefined hidden layer states. Heteroassociative pseudoinverse learning from the hidden layer to the feature layer for $N_{patts}>N_H$ results in a partial reconstruction of stored patterns such that the overlap (normalized dot product) between the stored patterns and the reconstruction of the stored patterns without the sign nonlinearity is $N_H/N_{patts}$ when averaged across all patterns. \end{theorem} Theorem \ref{thm:ha_ov_continuum} (proof in Appendix \ref{sec:app_overlapscaling}), along with Theorem \ref{thm:Nhpatts}, demonstrates the existence of the memory continuum --- for $N_{patts}\leq N_H$ the stored patterns are recovered perfectly, and for $N_{patts}>N_H$ the recovered patterns deviate from the originally stored patterns in a smoothly varying fashion. However, Theorem \ref{thm:ha_ov_continuum} only accounts for the overlap before the application of the sign nonlinearity; the sign nonlinearity in the feature layer only serves to additionally error-correct the reconstructed patterns. This can be seen in Fig.~\ref{fig:arbitrary_patterns}c, left, where the gray curve presents the overlap before the application of the sign nonlinearity and is in close agreement with the theoretically expected result (dashed black curve). After this additional error correction, the mutual information recovered is then observed to asymptotically approach a $1/N_{patts}$ scaling as well, as seen in Fig. \ref{fig:MESH_CAM_resutls}a. This can alternately be viewed as the mutual information per synapse approaching a constant as larger amounts of information are stored, Fig. \ref{fig:arbitrary_patterns}c, right. Following Theorem \ref{thm:ha_ov_continuum}, we note that the overlap between the true features and the recovered features is only a function of $N_H$ and $N_{patts}$. Thus, varying $N_L$ does not affect the magnitude of mutual information recovered, and the corresponding curves for varying $N_L$ overlap with each other, Fig. \ref{fig:arbitrary_patterns}c,d.
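The scaling in Theorem \ref{thm:ha_ov_continuum} can be motivated by a brief heuristic (a sketch only; the rigorous argument is the appendix proof). Since $W_{FH} = FH^+$, the pre-sign reconstruction of pattern $\mu$ is $\hat{f}^\mu = FH^+h^\mu = FPe_\mu$, where $e_\mu$ is the $\mu$-th standard basis vector and $P = H^+H$ is the orthogonal projector onto the row space of $H$, with $\mathrm{tr}(P) = \mathrm{rank}(H) = N_H$ for $N_{patts}>N_H$. For random dense features, the cross-correlations $(f^\mu)^T f^\nu$ with $\mu\neq\nu$ are $\mathcal{O}(\sqrt{N_F})$, so $(f^\mu)^T F \approx N_F\, e_\mu^T$, and the overlap averaged across patterns becomes \begin{equation*} \frac{1}{N_{patts}}\sum_{\mu} \frac{(f^\mu)^T \hat{f}^\mu}{N_F} \approx \frac{1}{N_{patts}}\sum_{\mu} e_\mu^T P e_\mu = \frac{\mathrm{tr}(P)}{N_{patts}} = \frac{N_H}{N_{patts}}. \end{equation*}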
Since a CAM continuum exists in MESH, storing more than $N_H$ patterns results in a slow drift of the recovered patterns from the true state. This is shown schematically in Fig. \ref{fig:arbitrary_patterns}e, where the Voronoi cells around each stored pattern are marked (i.e., the region closer to the stored pattern than to any other pattern): when the number of stored patterns is less than $N_H$ (left), the recovered pattern corresponds exactly to the stored clean pattern; storage of additional patterns up to the maximal $\binom{N_L}{k}$ results in the recovered patterns drifting away from the clean pattern but remaining within the correct Voronoi cell (right). In this sense, the continuum in recovered mutual information implies that all recovered features remain within the correct Voronoi cell corresponding to the originally stored pattern, which we verify numerically as the black curve in Fig. \ref{fig:arbitrary_patterns}b, left. Until now, our theoretical results have only considered presentation of unperturbed versions of the stored patterns to the network. For up to $N_H$ patterns, presentation of corrupted versions of the stored features results in an approximate reconstruction of the hidden layer states, which through the memory scaffold dynamics then flows to the corresponding clean hidden and label state, Fig. \ref{fig:arbitrary_patterns}b, right. This is then mapped back to the perfectly recovered stored pattern following Theorem \ref{thm:Nhpatts}. For more than $N_H$ corrupted patterns, a similar process results in perfect recovery of the hidden layer and label layer states for a large number of patterns, although the capacity remains slightly smaller than the maximal number of patterns, $\binom{N_L}{k}$. \section{Continuous Patterns} \label{sec:applications} \begin{figure*}[h] \centering \includegraphics[width=0.95\textwidth]{figures/figure6.pdf} \caption{(a) Overlap (top) and mutual information (bottom) when MESH is trained on random continuous patterns. (b) Overlap when MESH (and the overparameterized autoencoder) is trained on the Fashion MNIST dataset. (c) Left: original images from the Fashion MNIST dataset. Right: images directly decoded by the decoder from the compressed feature representations. (d) Images decoded from the recovered feature representations stored in MESH trained on 300, 550, and 800 images respectively (left to right). MESH layer sizes: $N_L = 18$, $k = 3$, $N_H = 300$, $N_F = 500$. (e) Images decoded from the recovered feature representations stored in the overparameterized autoencoder (shown in Fig. \ref{fig:schematics}c) trained on 15, 300, and 550 images respectively (left to right). Layer sizes: 500, 300, 18, 300 and 500.} \label{fig:applications} \end{figure*} Next, we show that MESH exhibits a CAMC even when trained on continuous-valued input patterns. To store such continuous-valued patterns, the only change necessary to the architecture is removal of the sign nonlinearity in Eq. (\ref{eq:frecon}). Additionally, to compare the stored and recovered patterns, we normalize them to unit $L_2$ norm before calculation of pattern overlap and recovered mutual information. \newline \textbf{Random Continuous Patterns}: In this case we consider patterns $f^\mu$ such that for each $\mu$ and $i$, $f^\mu_i$ is independently sampled from a normal distribution with zero mean and unit variance.
Since Theorem \ref{thm:ha_ov_continuum} did not rely on any assumptions of pattern discreteness, the result extends to the case of continuous patterns. However, since we are normalizing the patterns before calculating the overlap, Eq. \ref{eq:normoverlap} dictates the scaling of the overlap as $\sqrt{N_H/N_{patts}}$, consistent with the numerical results in Fig.~\ref{fig:applications}a, top. Since the stored patterns were drawn from a normal distribution, and assuming the recovered patterns are also distributed normally, the mutual information can be computed from the overlap ($m$) using Eq.~\ref{Eqn:MI_rand_conts} (details in Appendix \ref{sec:mi_cont_randn}), which is again in close agreement with the numerical results shown in Fig. \ref{fig:applications}a (bottom), demonstrating the CAMC. Note that perfect reconstruction of continuous-valued patterns (as in the case of $N_{patts}\leq N_H$) results in an infinite mutual information. \begin{equation}\label{Eqn:MI_rand_conts} MI = -\log \left(1-m^2\right)/2 = -\log \left(1- N_H/N_{patts}\right)/2 \end{equation} \textbf{Fashion MNIST dataset}: To evaluate the performance of MESH on realistic images, we considered the toy problem of image storage from the Fashion MNIST dataset \cite{xiao2017fashion}. As the primary comparative model in this setting, we consider an equivalent overparameterized autoencoder (Fig. \ref{fig:schematics}c) with the same number of nodes and synapses in each layer. Since the images themselves have large pattern-pattern correlations, we found it beneficial for both MESH and the autoencoder to compress the dataset to extract lower-dimensional feature representations of the images through a separate large autoencoder (details in Sec. \ref{sec:fahion_mnist}). This large autoencoder was trained on all classes in the dataset except the ``shirts'' class. The encoder was then used to extract features of the ``shirts'' class, which were used as the set of patterns to be stored in the MESH network and the overparameterized autoencoder. Fig. \ref{fig:applications}b shows the mean-subtracted overlap between recovered and stored patterns --- MESH continues to show a continuum, whereas the overparameterized autoencoder has a memory cliff at a very small number of patterns. To visualize the memory storage, we pass the recovered patterns through the larger trained decoder to reconstruct the stored images of the shirts (Fig. \ref{fig:applications}c-e). Fig. \ref{fig:applications}c shows a few samples of original images as well as images reconstructed directly from the extracted feature representations using the larger trained decoder. Fig. \ref{fig:applications}d shows the images reconstructed through the decoder by using the feature representations recovered from MESH for varying numbers of stored patterns. When trained on $N_{patts}=N_H=300$ feature patterns, the images are reconstructed perfectly up to the quality of the larger decoder. As the number of stored feature patterns is increased ($N_{patts} > N_H$), the quality of image reconstruction gradually degrades.
While the overparametrized autoencoder (Fig.~\ref{fig:applications}e) also recovers the stored images with high accuracy when trained on only 15 patterns, training on a larger number of patterns results in the retrieved images becoming rapidly unrecognizable (corresponding to the memory cliff of Fig.~\ref{fig:applications}b). \section{Discussion} In this work, we have proposed a CAM network, MESH, that exhibits a memory continuum and can be used as a high-capacity pattern labeller for recognition/familiarity detection and locality-sensitive hashing. While the convergence time of the Hopfield network scales as $\mathcal{O}(N_F^\gamma)$ with $\gamma \ll 1$ \cite{kohring1990convergence, frolov2000convergence}, MESH converges in a single step through a $\topk$ nonlinearity, or within $\mathcal{O}(\log N_L)$ time when a $k$-winners-take-all attractor is used. Several neural networks use a key-value mechanism to store and read memories \cite{graves2014neural, graves2016hybrid, sukhbaatar2015end, vaswani2017attention, le2019neural, banino2020memo}. MESH provides a neurally plausible architecture for implementing them through a factorized key-value structure provided by the label and feature layers. MESH also maps naturally onto the entorhinal-hippocampal system in the brain, which factorizes sensory and spatial representations in the lateral (LEC) and medial (MEC) entorhinal cortices respectively \cite{manns2006evolution, eichenbaum2014can,Mulders2021.11.20.469406}, with the feature, label and hidden layers corresponding to LEC, MEC, and the hippocampus respectively. Adding a heteroassociatively trained recurrent connection to the hidden layer in MESH can enable reconstruction of sequences from any given starting state, in both forward and backward directions, potentially related to planning in the hippocampus through preplay and replay of hippocampal sequences \cite{dragoi2011preplay, pfeiffer2013hippocampal}. \clearpage
{ "timestamp": "2022-02-17T02:27:58", "yymm": "2202", "arxiv_id": "2202.00159", "language": "en", "url": "https://arxiv.org/abs/2202.00159" }
\section{Introduction} The vast majority of stars are known X-ray emitters, and the most massive ones are no exception. The cause of their high-energy emission lies in the material ejected by those stars. The stellar winds of OB stars are driven by the scattering of the UV radiation by metallic ions. This driving process is intrinsically unstable and the resulting shocks lead to the generation of X-rays \citep{fel97}. This wind emission is soft (plasma temperature of about 0.6\,keV) and rather faint ($\log(L_{\rm X}/L_{\rm BOL})\sim -7$). If a large-scale magnetic field is present, the wind flows can be channeled by the field and collide, generating hot plasma. This leads to an additional X-ray emission, generally harder in character \citep{udd16}. Alternatively, when two massive stars form a binary system, the two winds may collide and the strong shock may generate X-rays \citep{rau16}: some massive binary systems thus appear harder (plasma temperature of about 2\,keV) and brighter ($\log(L_{\rm X}/L_{\rm BOL})$ up to $-6$). Finally, a small subgroup of stars has emerged in the last decade: the $\gamma$\,Cas\ analogs \citep{smi16}. These stars also display thermal X-ray spectra, but much harder (plasma temperature $>5$\,keV) and brighter ($\log(L_{\rm X}/L_{\rm BOL})$ between $-6.2$ and $-4$) ones. Only X-ray binaries appear brighter in the X-ray range than those $\gamma$\,Cas\ stars. It must be noted that all of these objects are of the Oe/Be spectral type, i.e. they possess a decretion disk. However, up to now, the only difference spotted between the other Oe/Be stars and the $\gamma$\,Cas\ analogs resides in their X-ray properties \citep{naz20tess,naz21}. While the link between the outflows and the X-ray emission is well understood for the other cases (embedded wind shocks, magnetically confined winds, colliding winds), the origin of the peculiar $\gamma$\,Cas\ characteristics remains debated. The decretion disks of these stars, which are in Keplerian rotation, give us an important clue, but the exact role of the disk in the generation of the high-energy emission remains unclear. In this context, two broad classes of scenarios have been proposed to explain the $\gamma$\,Cas\ phenomenon. The first one relies on the presence of a companion. It could be a compact object (WD or NS), and the X-rays would then come from its accretion of material from the Be star and its disk \citep{mur86,pos17}. In such a case, if the disk dissipates, the source of material disappears, hence the X-ray emission should ultimately (i.e. after some travelling delay) stop. Another possibility is to consider a stripped He-star companion whose stellar wind collides with the peripheral regions of the disk, thereby leading to a possible emission of X-rays \citep{lan20}. Again, disappearance of the disk implies cutting the feeding of the X-ray source. In contrast, the second type of scenario requires no companion, as it involves magnetic interactions between the Be star and its inner disk \citep{rob02,mot15}. In such a case, if the disk fully dissipates, the interactions will no longer take place, hence the peculiar X-ray emission should disappear, with a faster reaction time than in the previous case. Similar arguments naturally apply if one considers a disk outburst rather than a disk dissipation. Following the behaviour of the X-ray emission in reaction to changes in the disk therefore has an important diagnostic value.
In this context, long-term monitoring campaigns are required, as they are the only way to assess the amplitude of the correlated X-ray response to an optical event (if any) and to derive the time lag between them (if any). In this paper, we present the results of such an exercise for two $\gamma$\,Cas\ stars, HD\,119682\ and V767\,Cen. The $\gamma$\,Cas\ nature of HD\,119682\ (B0Ve, $V$=7.9) was first reported by \citet{rak06} and \citet{saf07}. Indeed, its X-ray emission is both much harder (plasma temperature of $\sim$10\,keV, hard-to-soft flux ratio of 2.5) and brighter ($\log(L_{\rm X}/L_{\rm BOL})\sim -5.7$) than for ordinary OB stars \citep{naz18}. The star was also classified as a binary candidate in \citet{naz21}. The X-ray emission of V767\,Cen\ (HD\,120991, B2Ve, $V$=6.1) was studied in detail by \citet{naz18} thanks to an {\sc{XMM}}\emph{-Newton}\ archival exposure. The star displayed a $\log(L_{\rm X}/L_{\rm BOL})\sim -5.4$, a plasma temperature of 6.4\,keV, and a hard-to-soft flux ratio of 2, all pointing to a $\gamma$\,Cas\ nature. Unfortunately, no optical data were available for the two stars at the time of these discovery observations. However, our optical monitoring fills this gap and enlarges the view by revealing changes in their disks. X-ray observations were then triggered to assess the reaction of the high-energy emission to the disk variations. Section 2 presents the data used in this study, while Section 3 derives the results, Section 4 discusses them and Section 5 summarizes them. \section{Data} \subsection{Optical data} To ensure a regular monitoring of the H$\alpha$\ line, we set up a collaboration with amateur astronomers. Since 2019, four observers regularly monitored a set of southern Be stars, three of them (co-authors TB, BH, PMcG) based in Australia and one (co-author PC) in Brazil. Their instruments were 11--14 inch reflectors equipped with spectrographs (Gerlach LowSpec, Shelyak LHIRESIII, LISA, eShel), which provided spectral resolutions between 400 and 20\,000 (the most common value was $\sim$4000 for HD\,119682\ and $\sim$10\,000 for the brighter V767\,Cen). For HD\,119682, exposure times ranged from 5 minutes to 3.5\,hrs, leading to typical signal-to-noise ratios of 75; for V767\,Cen, exposure times were similar but the average signal-to-noise ratio was 50. The spectra were reduced in a standard way using ISIS\footnote{http://www.astrosurf.com/buil/isis-software.html} and finally normalized over the same set of continuum windows using low-order polynomials. No telluric correction could be made for the low resolution spectra (all HD\,119682\ data and a quarter of the V767\,Cen\ data), but it was applied to observations taken at higher resolution. All Australian amateur spectra were deposited in the Be Star Spectra (BeSS) open-access database\footnote{http://basebe.obspm.fr}. Note that HD\,119682\ belongs to a small group of stars, notably including CPD--62$^{\circ}$3559 (K\,2\,II/III) and HD\,119699 (A\,1\,II), but as their spectral types differ from that of HD\,119682\ and their separation is large enough, the amateur spectra of HD\,119682\ were not contaminated by these close neighbours. In parallel, a few spectra of both stars were obtained at the Cerro Paranal ESO Observatory for our ESO program ID 105.204D. They were taken with the Ultraviolet and Visual Echelle Spectrograph (UVES) in dichroic mode (covered regions: 3300--4560\,\AA\ at $R\sim 70\,000$ and 4730--6830\,\AA\ at $R\sim 100\,000$).
These spectra were already used and presented in \citet{naz21}, and we refer to that publication for details on this dataset. An additional X-Shooter spectrum of each star was also taken for the same program. Their spectral resolution is lower than that of UVES ($R\sim 20\,000$ in the visible range) but the signal-to-noise ratios are high ($\sim200$). These spectra were reduced in the same way as the UVES data. H$\alpha$ equivalent widths were estimated for all spectra using the first-order moment over a given velocity range (--600 to 600\,km\,s$^{-1}$). They are listed in Tables \ref{ew} and \ref{ew2} for HD\,119682\ and V767\,Cen, respectively. \begin{table} \caption{Equivalent widths measured on H$\alpha$ (in the --600 to +600\,km\,s$^{-1}$\ interval) for HD\,119682. \label{ew}} \setlength{\tabcolsep}{3.3pt} \begin{tabular}{lcc|lcc} \hline Date & ID & $EW$ (\AA) & Date & ID & $EW$ (\AA) \\ \hline 8564.993 &BH$^n$ & -4.29$\pm$0.12 & 9049.936 &TB & 1.25$\pm$0.02 \\ 8573.977 &TB & -3.81$\pm$0.03 & 9053.961 &PMcG$^l$& 1.12$\pm$0.02 \\ 8580.956 &TB & -3.53$\pm$0.03 & 9057.953 &PMcG$^l$& 1.22$\pm$0.03 \\ 8602.994 &BH$^n$ & -3.04$\pm$0.05 & 9060.885 &BH$^l$ & 1.12$\pm$0.04 \\ 8610.011 &BH$^n$ & -3.12$\pm$0.11 & 9061.954 &TB & 1.21$\pm$0.03 \\ 8661.063 &PMcG$^l$& 0.07$\pm$0.07: & 9074.952 &TB$^l$ & 0.80$\pm$0.03 \\ 8662.012 &TB$^l$ & -1.03$\pm$0.06 & 9080.974 &TB & 0.59$\pm$0.04 \\ 8676.913 &TB & -0.19$\pm$0.02 & 9089.902 &BH$^l$ & 0.00$\pm$0.07 \\ 8682.954 &TB & 0.14$\pm$0.02 & 9098.880 &BH$^l$ & 0.34$\pm$0.07 \\ 8698.991 &TB & 0.30$\pm$0.03 & 9185.837 &UVES & 1.650$\pm$0.003 \\ 8713.963 &TB & 1.06$\pm$0.03 & 9205.842 &UVES & 1.639$\pm$0.003 \\ 8719.949 &TB & 1.15$\pm$0.02 & 9218.850 &Xsh & 1.007$\pm$0.012 \\ 8721.940 &TB & 0.97$\pm$0.02 & 9234.113 &PMcG$^l$& 1.39$\pm$0.04 \\ 8724.925 &PMcG & 0.21$\pm$0.06 & 9242.753 &UVES & 1.186$\pm$0.003 \\ 8728.935 &TB & 0.99$\pm$0.03 & 9256.984 &TB & 1.25$\pm$0.05 \\ 8756.956 &PMcG$^n$& 1.42$\pm$0.27 & 9257.801 &UVES & 1.226$\pm$0.004 \\ 8837.225 &PMcG$^n$& -3.06$\pm$0.14 & 9274.662 &UVES & -0.294$\pm$0.003 \\ 8865.210 &PMcG$^n$& -2.68$\pm$0.13 & 9275.999 &TB & -0.53$\pm$0.03 \\ 8878.205 &PMcG & -1.98$\pm$0.06 & 9277.079 &PMcG$^l$& -0.27$\pm$0.04 \\ 8934.002 &PMcG$^n$& 1.92$\pm$0.17 & 9297.953 &TB & -0.88$\pm$0.03 \\ 8952.069 &TB & 1.06$\pm$0.03 & 9329.994 &TB & -0.74$\pm$0.02 \\ 8974.035 &PMcG$^l$& 1.36$\pm$0.07 & 9331.188 &PMcG$^l$& -0.60$\pm$0.05 \\ 8981.055 &TB & 1.75$\pm$0.02 & 9364.989 &BH$^l$ & 0.98$\pm$0.03 \\ 9017.003 &TB & 1.57$\pm$0.02 & 9371.045 &TB & 1.07$\pm$0.02 \\ 9037.950 &TB & 1.51$\pm$0.03 & 9394.925 &PMcG$^l$& 1.07$\pm$0.03 \\ 9044.996 &TB & 1.48$\pm$0.03 & 9424.977 &TB & 0.77$\pm$0.03 \\ 9045.021 &PMcG$^l$& 1.36$\pm$0.04 \\ \hline \end{tabular} {\scriptsize Dates are in the format HJD-2\,450\,000, and ':' indicates an uncertain value. The ID refers to the source of the spectrum: UVES or Xsh for ESO data, TB, BH, PMcG or PC for the amateur data (from their initials, see authors' list). Symbols $^n$ and $^l$ are added for the noisier spectra ($SNR<20$) and lower resolution data ($R<3000$ but $SNR>20$), respectively. Note that the errors on equivalent widths are computed from flux errors: they only reflect the $SNR$ and do not include the normalization errors. } \end{table} \begin{table} \caption{Same as in Table \ref{ew} for V767\,Cen, except that low resolution is here defined as $R<5000$. 
\label{ew2}} \setlength{\tabcolsep}{3.3pt} \begin{tabular}{lcc|lcc} \hline Date & ID & $EW$ (\AA) & Date & ID & $EW$ (\AA)\\ \hline 8570.959 & TB & -8.25$\pm$0.01 & 9116.885 & TB & -3.55$\pm$0.01 \\ 8578.070 & BH & -7.46$\pm$0.03 & 9127.873 & TB & -2.15$\pm$0.03 \\ 8590.038 & BH & -7.09$\pm$0.02 & 9191.842 & UVES & -3.981$\pm$0.002 \\ 8599.002 & TB & -7.15$\pm$0.01 & 9196.843 & Xsh & -3.889$\pm$0.013 \\ 8610.154 &PMcG$^l$& -8.94$\pm$0.03 & 9206.834 & UVES & -3.436$\pm$0.003 \\ 8624.000 & BH & -10.29$\pm$0.02 & 9231.859 & UVES & -2.502$\pm$0.003 \\ 8641.155 &PMcG$^l$& -7.68$\pm$0.05 & 9234.149 &PMcG$^l$& -2.64$\pm$0.03 \\ 8655.954 &PMcG$^l$& -8.07$\pm$0.08 & 9246.777 & UVES & -1.864$\pm$0.002 \\ 8668.932 & BH & -7.88$\pm$0.02 & 9298.965 & TB & -2.24$\pm$0.01 \\ 8683.984 &PMcG$^l$& -7.39$\pm$0.02 & 9335.978 & TB & -3.42$\pm$0.01 \\ 8684.995 & TB & -7.30$\pm$0.01 & 9351.943 & TB & -2.31$\pm$0.01 \\ 8685.943 & BH & -7.31$\pm$0.03 & 9364.964 & BH$^l$ & -0.96$\pm$0.03 \\ 8693.970 & TB & -7.31$\pm$0.01 & 9368.914 & BH & -0.91$\pm$0.04 \\ 8697.971 & TB & -7.76$\pm$0.02 & 9370.923 & TB & -1.62$\pm$0.01 \\ 8724.949 &PMcG$^l$& -6.85$\pm$0.03 & 9381.914 & BH & -0.62$\pm$0.03 \\ 8728.953 & TB$^l$ & -5.99$\pm$0.03 & 9386.933 & BH & -1.05$\pm$0.03 \\ 8865.177 &PMcG$^l$& -5.01$\pm$0.05 & 9394.989 &PMcG$^l$& -0.33$\pm$0.04 \\ 8934.037 &PMcG$^l$& -2.15$\pm$0.04 & 9398.501 & PC & -0.13$\pm$0.04 \\ 8937.978 & TB & -2.42$\pm$0.02 & 9399.534 & PC & -0.36$\pm$0.03 \\ 8948.992 & BH & -2.56$\pm$0.05 & 9401.876 &PMcG$^l$& -0.91$\pm$0.03 \\ 8953.044 &PMcG$^l$& -2.85$\pm$0.03 & 9403.964 & BH & -0.78$\pm$0.03 \\ 8960.996 & BH & -3.41$\pm$0.02 & 9406.467 & PC & -1.33$\pm$0.02 \\ 8962.024 & TB & -3.42$\pm$0.01 & 9406.977 & TB & -2.00$\pm$0.01 \\ 8972.955 & TB & -3.10$\pm$0.02 & 9407.505 & PC & -1.50$\pm$0.03 \\ 8974.080 &PMcG$^l$& -3.34$\pm$0.03 & 9408.465 & PC & -2.17$\pm$0.02 \\ 8975.994 & TB & -2.92$\pm$0.02 & 9410.462 & PC & -1.92$\pm$0.03 \\ 8984.037 & BH & -3.68$\pm$0.03 & 9421.462 & PC & -4.08$\pm$0.03 \\ 8995.951 & BH & -2.92$\pm$0.03 & 9422.959 & TB & -4.62$\pm$0.01 \\ 8998.034 & TB & -2.70$\pm$0.02 & 9424.973 & BH$^l$ & -4.05$\pm$0.02 \\ 9003.996 & TB & -2.46$\pm$0.02 & 9426.474 & PC & -3.76$\pm$0.03 \\ 9015.943 & TB & -1.61$\pm$0.02 & 9428.909 & BH$^l$ & -3.66$\pm$0.01 \\ 9018.024 & BH & -1.35$\pm$0.04 & 9429.473 & PC & -4.06$\pm$0.03 \\ 9029.970 & BH & -2.11$\pm$0.03 & 9435.442 & PC & -4.39$\pm$0.03 \\ 9031.971 & TB & -2.19$\pm$0.02 & 9437.976 & TB & -4.55$\pm$0.01 \\ 9044.996 & BH & -3.28$\pm$0.02 & 9441.447 & PC & -4.07$\pm$0.02 \\ 9045.074 &PMcG$^l$& -3.26$\pm$0.02 & 9445.908 & BH & -3.42$\pm$0.03 \\ 9062.973 & BH & -3.33$\pm$0.04 & 9446.994 & TB & -3.81$\pm$0.02 \\ 9066.997 & TB & -3.09$\pm$0.02 & 9456.907 & BH & -2.99$\pm$0.04 \\ 9071.891 & BH$^l$ & -2.46$\pm$0.03 & 9459.969 & TB & -3.49$\pm$0.02 \\ 9074.983 & TB & -2.80$\pm$0.02 & 9466.915 & TB & -2.99$\pm$0.02 \\ 9106.910 & BH$^n$ & -1.56$\pm$0.06 & 9481.916 & TB & -1.97$\pm$0.02 \\ 9111.496 & UVES & -2.312$\pm$0.002 & 9591.165 & PMcG & -1.50$\pm$0.02 \\ 9112.914 & BH$^n$ & -3.32$\pm$0.07 & \\ \hline \end{tabular} \end{table} Contemporaneous photometry was recorded by ASAS-SN\footnote{https://asas-sn.osu.edu/} for both stars. The targets are however brighter than the saturation limit of ASAS-SN. In such cases, a correction procedure using the bleed trails is implemented, but it is far from perfect and notably requires the targets to be isolated (ASAS-SN has a PSF of 15\arcsec\ FWHM). 
For HD\,119682, which lies in a crowded area, the data points are unrealistically spread over several magnitudes and the lightcurve is unusable. For V767\,Cen, the situation appears better, although a dispersion of $\sim 0.07$\,mag is present (which would not hide long-term trends, see e.g. a similar situation for $\pi$\,Aqr in \citealt{naz19piaqr}). Only $g$-band data cover the dates of our monitoring. A few outliers (having values deviating from the median by more than 3 times the median absolute deviation) were filtered out. \subsection{X-ray observations} Both our targets were observed with {\sc{XMM}}\emph{-Newton}. The first observation of HD\,119682\ was taken in August 2001 in full frame mode and with a thick filter to avoid contamination by UV/optical photons (40\,ks, Rev.\,315, ObsID=0087940201, PI Hughes). HD\,119682\ here appears off-axis as the observation targeted a nearby supernova remnant. The star was re-observed on-axis in March 2009 (54\,ks, Rev.\,1692, ObsID=0551000201, PI Motch), this time with a medium filter. Finally, we triggered our TOO program to monitor the X-ray emission of HD\,119682\ as its disk was disappearing, leading to four additional observations taken with the medium filter and in large window mode (PI Rauw; 10\,ks in August 2019 on Rev.\,3610, ObsID=0840310901; 20\,ks in January 2020 on Rev.\,3684, ObsID=0840311001; 10\,ks in July 2020 on Rev.\,3775, ObsID=0840311101; and 10\,ks in March 2021 on Rev.\,3890, ObsID=0840310801). {\sc{XMM}}\emph{-Newton}\ also observed V767\,Cen\ for 7\,ks in January 2007 (Rev. 1306, ObsID=0402121801, PI Favata) using a thick filter and the large window mode. Following the observation of a disk flaring (see below), we requested a TOO observation, which was taken in July 2021 with the same characteristics as the first pointing (Rev. 3967, ObsID=0891800801, PI Naz\'e). All {\sc{XMM}}\emph{-Newton}\ data were processed with the Science Analysis Software (SAS) v19.1.0 using calibration files available in June 2021 and following the recommendations of the {\sc{XMM}}\emph{-Newton}\ team\footnote{SAS threads, see \\ http://xmm.esac.esa.int/sas/current/documentation/threads/ }. The European Photon Imaging Camera (EPIC) observations were first processed with the pipeline and then filtered to keep only the best-quality data ({\sc{pattern}} 0--12 for MOS and 0--4 for pn). To assess whether contamination by background proton flares was present, we built global light curves for energies above 10\,keV and discarded time intervals corresponding to flares. Only the two older datasets and that of July 2020 were affected by such flares for HD\,119682, while the V767\,Cen\ data were flare-free. Source detection was then performed to assess the crowding in the fields-of-view. This allowed us to carefully choose extraction regions. The source regions were circles centered on the Simbad positions of the targets, generally with radii of 30\arcsec, while background regions were chosen as nearby circles devoid of sources, generally 50\arcsec\ in radius. Background-corrected lightcurves were calculated for the energy bands 0.5--10\,keV, 0.5--2\,keV, and 2--10\,keV. For HD\,119682, bins of 100\,s and 1\,ks were used, whereas shorter bins of 50\,s and 500\,s could be used for V767\,Cen\ to get the same lightcurve quality, since its X-ray flux is larger. The lightcurves were corrected for vignetting, off-axis angle, and bad pixels, and bins exposed for less than half the nominal bin length were eliminated.
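As an illustration of the outlier rejection applied to the ASAS-SN photometry above, a short Python sketch (our own illustration; the magnitude values below are purely hypothetical) of the clipping criterion, which rejects points deviating from the median by more than three times the median absolute deviation, is:
\begin{verbatim}
import numpy as np

def filter_outliers(mag, n_mad=3.0):
    # Reject points deviating from the median by more than
    # n_mad times the median absolute deviation (MAD).
    med = np.median(mag)
    mad = np.median(np.abs(mag - med))
    return mag[np.abs(mag - med) <= n_mad * mad]

# Hypothetical g-band magnitudes with one obvious outlier.
g = np.array([6.10, 6.12, 6.08, 6.95, 6.11, 6.09])
print(filter_outliers(g))   # the 6.95 point is rejected
\end{verbatim}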
For spectra, dedicated calibration matrices were built and a grouping was then applied to obtain an oversampling factor of at most five and a minimum signal-to-noise ratio of 3. For HD\,119682, a few {\it Chandra} observations were also available. These grating observations were taken in December 2008 (ObsID=8929 and 10834--6, PI Rakowski), totalling nearly 150\,ks. Individual zeroth order spectra as well as combined (order +1 and --1) grating spectra were extracted for HEG and MEG. The reduction of these observations was presented in \citet{naz18} and no further processing was applied for this paper. As mentioned in \citet{naz18}, additional {\it Chandra} observations of HD\,119682\ were taken with ACIS-I but suffer from pile-up, hence could not be used. For V767\,Cen, we further obtained an X-ray monitoring with the Neil Gehrels {\it Swift} observatory (ObsID=00014422001--8) during the second half of 2021 and early 2022. The X-ray telescope (XRT) was used in Windowed Timing mode because V767\,Cen, with its $V$=6.1, is too bright in the optical and UV ranges for the other observation mode. Since we were mostly interested in flux variations, exposures were of 2--3\,ks duration, allowing for a few hundred counts to be collected for the source. Note that exposures 00014422003--4, taken four days apart, were combined to reach the required signal-to-noise. Individual count rates and spectra were obtained with the {\it Swift} on-line tool\footnote{https://www.swift.ac.uk/user\_objects/}. Note that V767\,Cen\ is at the limit for WT observations: the background has a count rate similar to that of the source and the centroiding is difficult. Some slight systematic errors cannot be totally excluded, but the internal consistency of the data argues for a limited impact on our results. \begin{figure*} \begin{center} \includegraphics[width=8cm]{bestprof_hd119682.ps} \includegraphics[width=8cm]{ew_hd119682.ps} \end{center} \caption{{\it Left panels:} Evolution with time of the profile of the H$\alpha$ line observed in HD\,119682\ during our monitoring campaign. Dates in YYMMDD are provided to the right of the line. Spectra taken close to the time of an {\sc{XMM}}\emph{-Newton}\ observation are shown with a magenta dotted line, while vertical black dotted lines indicate the interval chosen for equivalent width determination. Note that the noisier ($SNR<20$) or lower-resolution ($R<3000$) spectra are not shown. {\it Right panel:} Evolution with time of the H$\alpha$ equivalent widths measured in the --600 to +600\,km\,s$^{-1}$\ interval and of the X-ray fluxes. The top axis provides the date in years while the bottom axis uses Julian dates. Red open triangles display the values obtained from the noisier spectra ($SNR<20$), black open squares those measured on the lower resolution data ($R<3000$ but $SNR>20$), green dots those derived from other amateur spectra ($R>3000$ and $SNR>20$), and blue symbols those measured on ESO spectra (stars for UVES and open pentagon for X-Shooter). Note the very good agreement between ESO and amateur data taken at similar dates. The vertical magenta dotted lines indicate the times of the {\sc{XMM}}\emph{-Newton}\ observations. } \label{profhd} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=8cm]{hd119682_xparam.ps} \end{center} \caption{X-ray fluxes and hardness ratios of HD\,119682\ over time (bottom) and compared to $EW$ (top).
} \label{xparhd} \end{figure} Spectra were then fitted in {\sc Xspec} v12.11.1 using absorbed optically thin thermal emission models with solar abundances from \citet{asp09}. For {\sc{XMM}}\emph{-Newton}, all EPIC spectra (pn, MOS1, MOS2) were fitted simultaneously; for {\it Chandra}, zeroth order spectra were fitted for individual exposures while a fit to the HEG and MEG grating spectra combining both orders and all exposures was also made. The chosen models were as in \citet{naz18}. Results are provided in Table \ref{fitsx}; slight differences with those reported in \citet{naz18} for the same exposures come from the improved atomic parameters of the fitting tool. For V767\,Cen, a single-temperature fit could a priori provide sufficient results for the {\it Swift} spectra. Nevertheless, two-temperature fits were also tried to ease the comparison with {\sc{XMM}}\emph{-Newton}\ results. As the {\it Swift} data have lower quality, especially at low energies, the temperatures were fixed to those found with {\sc{XMM}}\emph{-Newton}, which show little change between the two observations. However, even with fixed temperatures, the strength of the low-temperature component could not be constrained. We therefore further fixed the ratio between the normalization factors of the two thermal components to the value observed with {\sc{XMM}}\emph{-Newton}\ in 2021. \begin{table*} \caption{Best-fit models to the X-ray spectra. \label{fitsx}} \begin{tabular}{lcccccccccc} \hline ID & I & $HJD$ & $N_{\rm H}^{ISM}$ & $N_{\rm H}$ & $kT$ & $norm$ & $\chi^2$/dof & $F_{\rm X}^{obs}$ & $F_{\rm X}^{ISM-cor}$ & $HR$ \\ & & $-2450000.$ &\multicolumn{2}{c}{($10^{22}$\,cm$^{-2}$)} & (keV) & ($10^{-4}$\,cm$^{-5}$) & & \multicolumn{2}{c}{(tot, $10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$)} & \\ \hline \multicolumn{11}{l}{HD\,119682}\\ 0315 &x&2149.849 &0.2 &0.020$\pm$0.008& 7.8$\pm$0.4 &11.8$\pm$0.13 &518.71/468 &1.90$\pm$0.02 &2.09 &2.24$\pm$0.05\\ 8929 &c&4817.692 &0.2 &0.00$\pm$0.02 &10.3$\pm$2.2 &11.3$\pm$0.39 &49.52/56 &1.90$\pm$0.12 &2.10 &2.44$\pm$0.27\\ 10835 &c&4820.076 &0.2 &0.00 (fixed) & 9.4$\pm$1.9 &8.45$\pm$0.33 &49.68/45 &1.41$\pm$0.08 &1.55 &2.35$\pm$0.22\\ 10834 &c&4821.515 &0.2 &0.00$\pm$0.03 &12.5$\pm$2.7 &8.58$\pm$0.24 &106.86/89 &1.47$\pm$0.07 &1.60 &2.61$\pm$0.19\\ 10836 &c&4822.342 &0.2 &0.06$\pm$0.07 &12.9$\pm$6.0 &10.2$\pm$0.60 &39.87/51 &1.70$\pm$0.12 &1.84 &2.91$\pm$0.33\\ comb.
&c&4820.201 &0.2 &0.13$\pm$0.13 &12.5$\pm$8.2 &12.0$\pm$0.66 &128.98/444 &1.97$\pm$0.12 &2.12 &3.21$\pm$0.38\\ 1692 &x&4897.283 &0.2 &0.009$\pm$0.008& 8.1$\pm$0.5 &6.97$\pm$0.09 &467.69/448 &1.13$\pm$0.02 &1.25 &2.24$\pm$0.05\\ 3610 &x&8721.890 &0.2 &0.000$\pm$0.007& 8.3$\pm$0.8 &6.74$\pm$0.11 &283.72/268 &1.10$\pm$0.03 &1.22 &2.23$\pm$0.10\\ 3684 &x&8870.137 &0.2 &0.000$\pm$0.005& 8.3$\pm$0.5 &6.94$\pm$0.09 &396.86/363 &1.13$\pm$0.02 &1.26 &2.22$\pm$0.07\\ 3775 &x&9050.958 &0.2 &0.00 (fixed) & 7.8$\pm$0.7 &5.86$\pm$0.10 &270.94/265 &0.95$\pm$0.03 &1.05 &2.17$\pm$0.09\\ 3890 &x&9279.849 &0.2 &0.000$\pm$0.008& 8.8$\pm$1.1 &4.88$\pm$0.10 &265.48/234 &0.81$\pm$0.03 &0.89 &2.29$\pm$0.11\\ \hline \multicolumn{11}{l}{V767\,Cen}\\ 1306 &x&4126.327 &0.043&0.065$\pm$0.013& 0.27$\pm$0.03 &1.26$\pm$0.37 & 494.27/443 & 3.09$\pm$0.05 & 3.18 & 1.97$\pm$0.06\\ & & & & 6.48$\pm$0.33 &18.5$\pm$0.28 \\ 3967 &x&9434.160 &0.043&0.080$\pm$0.012& 0.28$\pm$0.03 &1.60$\pm$0.39 & 560.49/451 & 3.28$\pm$0.05 & 3.38 & 1.89$\pm$0.06\\ & & & & 6.02$\pm$0.37 &20.0$\pm$0.30 \\ 00014422001 &s&9409.837 &0.043&0.21$\pm$0.09& 0.28\&6.02 (fixed)& 25.3$\pm$2.5 &56.46/51 & 3.85$\pm$0.32 &3.93 & 2.37$\pm$0.32\\ 00014422002 &s&9428.756 &0.043&0.10$\pm$0.10& 0.28\&6.02 (fixed)& 24.5$\pm$3.3 &30.19/34 & 3.97$\pm$0.41 &4.08 & 1.96$\pm$0.35\\ 00014422003+4&s&9446.647 &0.043&0.00$\pm$0.10& 0.28\&6.02 (fixed)& 16.6$\pm$2.4 &37.16/36 & 2.88$\pm$0.43 &2.98 & 1.61$\pm$0.39\\ 00014422005 &s&9465.858 &0.043&0.09$\pm$0.10& 0.28\&6.02 (fixed)& 23.7$\pm$3.0 &36.50/38 & 3.88$\pm$0.36 &3.99 & 1.91$\pm$0.31\\ 00014422006 &s&9486.837 &0.043&0.00$\pm$0.08& 0.28\&6.02 (fixed)& 20.9$\pm$2.6 &50.01/48 & 3.62$\pm$0.40 &3.76 & 1.62$\pm$0.34\\ 00014422007 &s&9561.986 &0.043&0.00$\pm$0.11& 0.28\&6.02 (fixed)& 14.9$\pm$2.4 &51.76/39 & 2.59$\pm$0.36 &2.68 & 1.63$\pm$0.33\\ 00014422008 &s&9583.002 &0.043&0.00$\pm$0.07& 0.28\&6.02 (fixed)& 16.6$\pm$2.1 &35.38/32 & 2.88$\pm$0.38 &2.99 & 1.62$\pm$0.35\\ \hline \multicolumn{11}{l}{$\pi$\,Aqr}\\ 00010659001-39&s&8344.345 &0.036&0.53$\pm$0.06 &27.6$\pm$9.4 &105.5$\pm$4.6 &396.03/436 &16.0$\pm$0.4 &16.1 & 6.10$\pm$0.24\\ \hline \end{tabular} {\scriptsize Fitted models were of the form tbabs$\times$phabs$\times$apec, with the first absorption fixed to the interstellar value. ID refers to the revolution number (for {\sc{XMM}}\emph{-Newton}) or the ObsID (for {\it Chandra} and {\it Swift}); ``comb'' indicates the fitting of the fully combined (all observations, both orders) {\it Chandra} HEG and MEG grating spectra. Column I identifies the facility used (x for {\sc{XMM}}\emph{-Newton}, c for {\it Chandra}, s for {\it Swift}). The hardness ratios are defined by $HR = F_{\rm X}^{ISM-cor}(hard)/F_{\rm X}^{ISM-cor}(soft)$, with $F_{\rm X}^{ISM-cor}$ the flux after correction for interstellar absorption and the soft and hard energy bands being defined as 0.5--2.0 keV and 2.0--10.0 keV, respectively (the total band being 0.5--10.0 keV). Errors correspond to 1$\sigma$ uncertainties; the larger value is quoted when the error bar is asymmetric. For the {\it Swift} spectra of V767\,Cen, the normalization factor of the 0.28\,keV component is fixed to 0.08 times that of the 6.02\,keV component, as in the {\sc{XMM}}\emph{-Newton}\ data of Rev. 3967, and only the normalization of the hottest component is provided here.} \end{table*} \section{Results} \begin{figure*} \begin{center} \includegraphics[width=8.8cm]{bestprof1b_v767cen.ps} \includegraphics[width=8.8cm]{bestprof2b_v767cen.ps} \end{center} \caption{Same as left panel of Fig.
\ref{profhd} but for the profiles of V767\,Cen. Note that low resolution spectra here have $R<5000$. } \label{profvc} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=8cm]{ew_v767cen.ps} \end{center} \caption{{\it Top:} g-magnitudes of V767\,Cen\ recorded by ASAS-SN. {\it Bottom:} Same as the right panel of Fig. \ref{profhd} but for V767\,Cen. Note that low resolution spectra are here defined as $R<5000$. } \label{profvc2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{v767cen_xparam.ps} \end{center} \caption{Same as Fig. \ref{xparhd} but for V767\,Cen. } \label{xparvc} \end{figure} \subsection{HD\,119682} When the optical monitoring began in 2019, the H$\alpha$ line of HD\,119682\ displayed a double-peaked emission, with no trace of absorption. The equivalent width was moderate (--4.3\,\AA, a slightly lower value than in an older, May 2017, spectrum). Over the course of the year, the emission steadily decreased, with the underlying absorption becoming more and more clearly detectable. A moderate emission suddenly re-appeared in January 2020 ($EW\sim-3$\,\AA), but disappeared soon afterwards. The H$\alpha$ line then remained in absorption, although a small emission briefly re-appeared in December 2020 and in March--April 2021 (Fig. \ref{profhd}). Thus, during the monitoring, the disk did not undergo a monotonic disappearance. Rather, the behaviour appears somewhat erratic, with an overall disappearance trend superimposed on temporary disk reinforcements, probably corresponding to small mass ejection events. The complex emission line profile, with changing width and even multiple peaks as revealed by ESO spectra, also points towards a complicated disk geometry at those times: the ejected material does not seem to form a single blob slowly and smoothly mixing with the disk. The H$\alpha$ line profile was very different when the various {\sc{XMM}}\emph{-Newton}\ observations were taken (see magenta profiles in Fig. \ref{profhd}). In August 2019, the absorption was dominant and the emission weak. In January 2020, the emission was moderate while, in July 2020, there was no trace of emission. Finally, in March 2021, the emission was intermediate between the first two cases. There is unfortunately no information on the shape of the H$\alpha$ line profile at the time of the older {\sc{XMM}}\emph{-Newton}\ or {\it Chandra} observations. During each X-ray observation, short-term flux variations can be spotted in the lightcurves, as is common in $\gamma$\,Cas\ stars \citep[e.g.][]{smi12}. It should however be noted that the count rate does not allow us to build lightcurves with extremely short time bins (e.g. 1\,s), which are thought to probe the ``shot''/flaring emission typical of $\gamma$\,Cas\ stars; such timescales therefore remain unexplored for HD\,119682. The Appendix shows these {\sc{XMM}}\emph{-Newton}\ lightcurves. It reveals that the hardness of the emission, estimated from the ratio of hard (2--10\,keV) to soft (0.5--2\,keV) count rates, does not change in a significant way over the whole {\sc{XMM}}\emph{-Newton}\ dataset. With a Pearson correlation coefficient of only 0.33, the hardness also does not appear significantly correlated with the strength of the X-ray emission, estimated from the full band (0.5--10\,keV) count rate. Finally, it may be noted that the dispersion of the lightcurve points remains similar whether one considers a single exposure or the whole dataset, despite different values of the average count rate.
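These hardness and correlation estimates are straightforward to reproduce from the binned, background-corrected count rates; a minimal sketch (Python with numpy/scipy assumed; the function and variable names are ours) is:

\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def hardness_vs_flux(soft, hard):
    # soft: 0.5-2 keV and hard: 2-10 keV count rates per time bin
    soft, hard = np.asarray(soft), np.asarray(hard)
    hr = hard / soft              # hardness proxy per bin
    full = soft + hard            # full-band (0.5-10 keV) proxy
    r, pval = pearsonr(full, hr)  # e.g. r ~ 0.33 for HD 119682
    return hr, r, pval
\end{verbatim}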
Table \ref{fitsx} provides the results of the spectral fits. We can see that there is some systematic difference between {\sc{XMM}}\emph{-Newton}\ and {\it Chandra} results. This is in part due to the stellar variations, but also to remaining cross-calibration problems: even for a stable object, it is quite common not to obtain exactly the same modelling results from different facilities. Note that the comparison between the combined grating data and the individual 0th order spectra shows a good agreement, indicating that pile-up, which could potentially still affect the latter data, has little effect. The X-ray flux was about $2\times10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$ at the time of the oldest observations, but it then decreased by a factor of two in more recent years. Focusing on our four monitoring observations, the star did not appear at its lowest flux when the H$\alpha$ emission was the lowest (absorption only, July 2020), nor did it appear brighter in January 2020 or March 2021 when a small surge in disk emission occurred. Rather, there seems to be a slow, monotonic flux decline ($\sim$30\%) over the entire monitoring interval. Furthermore, restricting the comparison to {\sc{XMM}}\emph{-Newton}\ spectra to avoid any cross-calibration problems, we see that the ratio $HR$ between hard and soft fluxes (after correction for interstellar absorption) remained stable at all times, while the temperature and absorption also agree within errors. Clearly, the $\gamma$\,Cas\ characteristics did not disappear, nor did their relative strength change, even though the flux changed and the disk evolved. Figure \ref{xparhd} graphically displays this evolution of flux and hardness ratios over time. The top panel of the same figure compares the fluxes to the H$\alpha$ line strength: it confirms the lack of correlation between the X-ray parameters and the H$\alpha$ line. \subsection{V767\,Cen} At the start of the monitoring in March 2019, the star displayed a rather strong emission, with the line amplitude nearly reaching four times the continuum level (Figs. \ref{profvc} and \ref{profvc2}, $EW\sim-8$\,\AA). The line profile had a single peak, although with prominent shoulders (i.e. the line had a wine-bottle shape). The emission slowly decreased in April, then briefly returned to its initial level in May, before resuming its decrease. After this downward trend, the emission appeared to stabilize the next year. Only small variations were seen around an average equivalent width of $\sim$--3\,\AA. At that time, the line profile reached an amplitude of only 1--1.5 times the continuum level. Furthermore, the shoulders seemed to have disappeared while the broad stellar absorption started to appear in the high-velocity wings of the profile. The high-resolution ESO spectrum taken in September 2020 shows that the single peak seen in amateur data is actually made of two very close subpeaks. In December 2020, the high-resolution ESO data revealed that the emission became narrower but of larger amplitude ($EW=-4$\,\AA, amplitude twice the continuum level, with clear hints of the photospheric absorption outside the emission range). The emission gradually decreased over the next months, down to $EW=-2$\,\AA, showing that the disk re-building was only temporary. In parallel, the separation between peaks in the double-peaked H$\beta$ line profile changed from 18.5\,km\,s$^{-1}$\ in early December 2020 to 52\,km\,s$^{-1}$\ at the beginning of February 2021. Using Eq.
2 of \citet{zam19}, and since $R_d\propto\Delta V^{-2}$, this translates into a shrinking of the emission region by a factor of $\sim$8 in just two months. Such a variation in peak separation is typical of Be disks, as a larger disk reaches lower orbital velocities at its periphery than a smaller disk whose periphery lies closer to the star \citep{hum95,zam19}. The emission then stabilized in March--May 2021, to finally resume its slow decrease in June. The absorption wings were clearly visible from mid-May to early July. While one would have expected the emission to slowly disappear, the absorption wings suddenly filled up. This suggests an input of fresh material close to the star, where velocities are large. In such a case, it is expected that the material will gradually spread out over the whole disk: indeed, the core of the emission soon became broader and stronger, reaching an amplitude equal to the continuum level at the end of July (compared to an amplitude of one-half in mid-July). After this sudden event, the emission traced by the H$\alpha$ line resumed its decrease. Contrary to HD\,119682, the H$\alpha$ line was never largely dominated by absorption in our observations of V767\,Cen, a clear emission being visible in all spectra, even at the lowest $EW$. Both stars however display irregular decreases, often interrupted by short flaring episodes. Since the July 2021 flaring led to an equivalent width increase never seen before in our monitoring, we decided to trigger X-ray observations to follow the reaction to the reinforcement of the disk, rather than to its disappearance as for HD\,119682. {\it Swift} was the fastest to react, with the first exposure taken less than a week after the event, while {\sc{XMM}}\emph{-Newton}\ data were taken a month later. Comparing first the highest quality data, it is obvious that the 2007 and 2021 {\sc{XMM}}\emph{-Newton}\ spectra display very similar properties (absorption, temperatures, overall luminosity, hardness ratio, see Table \ref{fitsx}). Although the disk state in 2007 is not known, it would be quite a coincidence for it to be exactly the same as in 2021. In addition, each {\sc{XMM}}\emph{-Newton}\ lightcurve displays short-term flux variations, as found for HD\,119682\ and other $\gamma$\,Cas\ stars, but no hardness variations (see Appendix). Again, there seems to be no correlation between the overall count rate and the hardness in these lightcurves. Finally, while the disk emission was much lower in 2021 than at the beginning of the optical monitoring in 2019, the $\gamma$\,Cas\ character remained clear. The {\it Swift} data allow us to examine the behaviour of the star in 2021 over a longer timescale, albeit with lower quality data. V767\,Cen\ appears slightly brighter in the second exposure and slightly fainter in the last ones, but the changes remain within 2$\sigma$: again, the star seems to display a rather similar X-ray emission at all times. These results are shown in Fig. \ref{xparvc}, which also graphically demonstrates the absence of correlation between the X-ray flux and the strength of the H$\alpha$ line. Finally, {\it ASAS-SN} provided photometry over a long timescale\footnote{V767\,Cen\ was also observed by {\it TESS} during our monitoring campaign and these observations are reported in \citet{naz20tess}. Significant variations of about 0.2\,mag were recorded by {\it TESS} over its few weeks' observing window and they are confirmed by {\it ASAS-SN} data.} (see top panel of Fig. \ref{profvc2}).
Despite their noise, those data clearly show that the broad-band photometry remains rather stable while the H$\alpha$ line strongly changes. \section{Discussion} \subsection{Observed changes in $\gamma$\,Cas\ analogs} Amongst $\gamma$\,Cas\ analogs, only three stars had previously been followed through a transition of their disk. The Oe star HD\,45314, the hottest $\gamma$\,Cas\ analog known so far, was monitored extensively in the optical range \citep{rau18}. X-ray observations were obtained as the star displayed very different disk states, as traced by the H$\alpha$ line: strong emission (equivalent width close to --23\,\AA), shell phase, and very small emission (equivalent width of --7.9\,\AA). Changes in $V$-band photometry were also detected, indicating that even the inner parts of the disk were affected by the variation. Between the first and last observations, the X-ray flux was reduced by an order of magnitude and the hardness of the spectrum markedly decreased too ($HR$ changed from 4.3 to 1.8, see \citealt{naz18}). The $\gamma$\,Cas\ character was thus disappearing, suggesting a direct link between the disk and the generation of X-rays \citep{rau18}. In contrast, the monitoring of $\pi$\,Aqr painted a different picture \citep{naz19piaqr}. X-ray data were here also taken in two very different situations: as the disk had nearly completely disappeared (H$\alpha$ equivalent width of --1.7\,\AA) and as the emission associated with the disk was strong (equivalent width of --23\,\AA, with a disk size five times larger). Here too, simultaneous variations in broad-band photometry were recorded. The first X-ray observation was a single {\sc{XMM}}\emph{-Newton}\ snapshot but the second observing campaign corresponds to a set of short exposures taken by {\it Swift} over 250\,d (i.e. three orbital cycles). There was no obvious relation between the X-ray parameters derived from individual {\it Swift} exposures and the orbital phase, or between them and the disk fluctuations of $\pi$\,Aqr as traced by the H$\alpha$ line strength \citep{naz19piaqr}. To re-address this issue, we now combine the {\it Swift} exposures with the on-line tool\footnote{https://www.swift.ac.uk/user\_objects/} to get a single, higher-quality spectrum. We fit it in the same way as done for the {\sc{XMM}}\emph{-Newton}\ spectrum in \citet{naz18} (see Table \ref{fitsx} for results). While the short {\it Swift} exposures show individual fluxes varying by a factor of a few, as usual for the short-term intrinsic variations of $\gamma$\,Cas\ stars \citep{naz19piaqr}, the flux of the combined {\it Swift} dataset appears to be $\sim$50\% higher than in the {\sc{XMM}}\emph{-Newton}\ data. This change is due to the increase of the hard X-ray flux, leading to a variation of the hardness ratio $HR$ from 3.6 to 6.1. However, it is important to stress that the hard component was very clear and strong at all times. The $\gamma$\,Cas\ character thus never disappeared for $\pi$\,Aqr. The third $\gamma$\,Cas\ star that has been simultaneously monitored at optical and X-ray wavelengths is $\gamma$\,Cas\ itself. \citet{mot15} reported a good correlation, without any time lag, between seasonal averages of the X-ray fluxes measured by {\it RXTE} and of the disk brightness as traced by $V$-band magnitudes. Moreover, variations with timescales near 70\,d were found in both wavelength ranges. Recently, Rauw et al.
(in prep) reported on a set of {\sc{XMM}}\emph{-Newton}\ observations taken during an eruption event (with the H$\alpha$ equivalent width transitioning from --45\,\AA\ to nearly --55\,\AA\ and then back to the initial value). They compared the high-energy emission then observed to that recorded in older {\sc{XMM}}\emph{-Newton}\ observations, when the disk emission was significantly lower (H$\alpha$ equivalent width ranging from --27 to --35\,\AA). Both flux and hardness variations are observed in the X-ray range. Besides the usual short-term ``flaring'' activity, the mean flux of each {\sc{XMM}}\emph{-Newton}\ observation varied by a factor of two. The hardness ratio was similar in six out of the ten observations ($HR\sim 3$) and three more observations displayed values rather close to that, despite the different strength of the H$\alpha$ line at these nine dates. However, the X-ray spectrum was clearly hardest ($HR\sim 8$) and faintest at the time of the maximum emission. This variation stems from the soft X-rays (the soft flux changed by a factor of 3--4), most probably because of a larger absorption. Such characteristics are reminiscent of the localized absorbing events reported by \citet{ham16} and \citet{smi19}, of which this event would then be an extreme case. Indeed, the next exposure, taken only a month later when the H$\alpha$ emission was still very strong, did not display those features. It nevertheless cannot be totally excluded that the observed change is a direct high-energy reaction to the disk event. In any case, there was no obvious correlation between the H$\alpha$ emission and the X-ray properties during this event or at previous epochs of {\sc{XMM}}\emph{-Newton}\ observations. Finally, it may be noted that no significant change in $V$-band photometry was detected during this recent emission event. HD\,119682\ and V767\,Cen\ add to this picture of contrasting behaviours. The X-ray flux of HD\,119682\ displayed long-term variations by a factor of two, but the hardness of the emission remained stable ($HR\sim2.2$ in all {\sc{XMM}}\emph{-Newton}\ data). In particular, the $\gamma$\,Cas\ characteristics were still present when the optical spectrum showed no trace of emission (H$\alpha$ equivalent width of 1.3\,\AA). For V767\,Cen, the disk never disappeared entirely: when it seemed on the edge of doing so (at an H$\alpha$ equivalent width of $\sim-0.2$\,\AA), the H$\alpha$ emission suddenly increased. At X-ray wavelengths, however, the spectral properties show little change, both in flux and hardness. In summary, three disk disappearances (or near-disappearances) were monitored, for HD\,45314 (O9pe), $\pi$\,Aqr (B1Ve), and HD\,119682\ (B0Ve). In all cases, a lower X-ray flux than observed before was measured at these times. However, the amplitude of the flux change varied wildly (an order of magnitude for HD\,45314, about a factor of two for the other two). The hardness variations are even more diverse: HD\,45314 had such a low hard X-ray emission that it formally lost its $\gamma$\,Cas\ character, the hardness of $\pi$\,Aqr decreased but the star still fulfilled the criteria to remain a $\gamma$\,Cas\ analog, and HD\,119682\ kept the hardness measured before. This suggests the existence of a link, but a loose one, between the H$\alpha$ emission of the disk and the X-ray emission. In parallel, two disk ``flarings'' were monitored, for $\gamma$\,Cas\ and V767\,Cen.
They were qualitatively very different, as the disk of $\gamma$\,Cas\ was already strong when it became stronger, while that of V767\,Cen\ was on the edge of disappearance when a disk re-building was detected. However, apart from a single {\sc{XMM}}\emph{-Newton}\ exposure of $\gamma$\,Cas, there was no large change of X-ray properties recorded in either star. It is important to note that, for both stars, broad-band photometry showed no specific trace of flaring at the time of the recorded events (see above for V767\,Cen, and see Rauw et al. in prep for $\gamma$\,Cas). \subsection{The case of Be-XRBs} What do these monitoring results tell us about the generation of X-rays in $\gamma$\,Cas\ stars? To shed light on this issue, a comparison with the usual behaviour of Be stars in X-ray binaries (XRBs) is instructive. In such systems, the X-ray variability is often classified into two categories. Type I outbursts occur at specific orbital phases, when accretion is enhanced at periastron passages, while type II outbursts are rather linked to major disk changes, often after some reaction delay \citep[e.g.][]{gru07,cam12,lut12,alf17}. Most systems remain X-ray quiet when the Be disk is small or has disappeared \citep{neg01}, although a few systems have undergone type II outbursts even in such conditions \citep{mon17}. The X-ray observations of $\gamma$\,Cas\ analogs have not revealed any outburst up to now, hence their closest analogs amongst X-ray binaries may be the low-eccentricity systems such as X\,Per. From a long monitoring of this system, \citet{zam19} found a direct correlation, without time lag, between the X-ray flux and the equivalent width of the H$\alpha$ line. However, this occurred only when the emission was very strong, i.e. for a large disk. At lower emission levels, the situation was clearly different, with the $V$-band photometry varying first, then the $EW$(H$\alpha$), and at last the X-ray flux, after years of delay. This can be understood by recalling that the accreted material comes from the outer parts of the disk: if the disk is small, it takes some time for an inner variation to change the accretion conditions near the companion, while any variation at the periphery of a large disk, hence closer to the companion, has a direct impact on the accretion flow. \subsection{Constraints on the $\gamma$\,Cas\ phenomenon} The $\gamma$\,Cas\ monitorings provide two important clues. First, the $\gamma$\,Cas\ character remained for $\pi$\,Aqr, HD\,119682, and V767\,Cen\ even when the H$\alpha$ emission was very weak. Of course, the current H$\alpha$ data do not enable us to conclude that the disk had entirely disappeared, even in the case of HD\,119682. Indeed, though H$\alpha$ is certainly a powerful diagnostic for the presence of a circumstellar disk, the line forms over a rather wide radial extent and is much less sensitive to the innermost and outermost parts of the disk. However, it is certain that the disk size was much reduced at these times. In fact, the size of the disk $R_d$ can be evaluated from the peak separation $\Delta V$ in double-peaked profiles using $R_d=R_*\times (2\, v \sin i/\Delta V)^2$ \citep{hua72,hum95,zam19}. The projected rotational velocities $v \sin i$ are 100 and 200\,km\,s$^{-1}$\ for V767\,Cen\ and HD\,119682, respectively \citep{zor16}. The peak separations were measured each time the profile appeared double-peaked, and this yielded disk sizes of a few $R_*$ for HD\,119682\ and 10--30\,$R_*$ for V767\,Cen.
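As an illustration of this estimate (taking the February 2021 H$\beta$ peak separation of 52\,km\,s$^{-1}$\ quoted above; H$\beta$ and H$\alpha$ probe somewhat different regions, so this is only meant to show the scaling), one gets for V767\,Cen
\begin{equation*}
R_d \simeq R_*\left(\frac{2\times100}{52}\right)^2 \simeq 15\,R_*,
\end{equation*}
within the quoted range, while the early December 2020 separation of 18.5\,km\,s$^{-1}$\ would imply an emission region $(52/18.5)^2\simeq8$ times larger, the shrinking factor mentioned earlier.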
Comparable values were found for HD\,45314 \citep{rau18} and $\pi$\,Aqr \citep{naz19piaqr}. The disk sizes can also be evaluated from the $EW$s \citep[see Eq. (3) in][]{zam19}, but the application of the formula can be debated for such small $EW$ values (the largest $|EW|$ measured here is around 10\,\AA). However, \citet{rei16} also found that the disk sizes must be small for such low $EW$. In our cases, the disks are thus smaller than usually found in active states of Be-XRBs \citep[see also Fig. 5 of][]{zam19}. In any case, any compact companion would need to be very close for significant accretion (hence significant hard X-ray emission) to occur: if $|EW|<10$\,\AA\ then $P_{orb}<50$\,d, see \citet{coe15}. For a Be star mass of 12\,M$_{\odot}$ and an Oe star mass of 20\,M$_{\odot}$, a neutron star mass of 2\,M$_{\odot}$, and $P_{orb}<50$\,d, Kepler's third law implies radial velocity amplitudes larger than 15--20\,km\,s$^{-1}$\ for circular orbits, which is not detected for HD\,119682, V767\,Cen, or HD\,45314 \citep{rau18,naz21}. For $\pi$\,Aqr, the known orbital period is 84\,d, with a disk much smaller than the orbital separation \citep[and references therein]{naz19piaqr}. The current data thus seem to disfavour scenarios involving accreting companions. Second, the X-ray emission seems to react differently depending on the extent of the disk changes. Indeed, optical photometry and spectroscopy probe different zones of the disk: the stellar photometry is more sensitive to the densest and innermost parts of the disk, while the H$\alpha$ line rather probes the disk over a larger region, up to its periphery. In $\gamma$\,Cas, HD\,45314 and $\pi$\,Aqr, variations of the X-ray emission were spotted when both the broad-band photometry and the H$\alpha$ line profile changed. In contrast, the recent ``events'' in $\gamma$\,Cas\ and V767\,Cen\ were detected through H$\alpha$ measurements but both the X-ray properties and the optical broad-band photometry remained unaffected. All this may be a hint that the hard X-ray emission that characterizes the $\gamma$\,Cas\ phenomenon is born in the inner disks of the Be stars, rather than at their periphery. The observed optical and X-ray behaviours of $\gamma$\,Cas\ analogs therefore reveal that the $\gamma$\,Cas\ character may remain even if the disk size is much reduced, and that changes in X-rays are usually seen only if broad-band photometric variations occur. This is difficult to reconcile with a crucial role of distant companions, and seems to lend support to magnetic star-disk interaction scenarios, in which X-rays are generated closer to the Be star \citep{rob02}. The differences observed between $\gamma$\,Cas\ analogs could then result from a range of reasons such as varying inclinations, stellar rotation, and stellar temperatures. For example, the hotter Oe star HD\,45314 should have a stronger wind and a faster disk disappearance \citep{kee16}. In contrast, the disks of the cooler Be stars would take more time to disperse, hence re-building events would have time to occur and a full disk disappearance may be more difficult to achieve. With the inner parts of the disk still in place, the $\gamma$\,Cas\ character would then still be observable. Nevertheless, no full modelling of the star-disk interaction is available yet, hence it is difficult to quantify its adequacy exactly. Future modelling should explore the specific impact of geometry and stellar properties. \section{Summary and conclusion} The H$\alpha$ line in Be stars is considered one of the main probes of their disks.
We have monitored this line for two $\gamma$\,Cas\ analogs, HD\,119682\ and V767\,Cen, for several years. Both stars displayed decreasing line strengths, although interrupted by several short re-building events. The discovery of this behaviour triggered X-ray observations, to assess the impact of the disk changes on the peculiar high-energy emission of those stars. For HD\,119682, the H$\alpha$ line was fully in absorption in mid-July 2020. In parallel, the X-ray flux slightly decreased between August 2019 and March 2021, with no change in hardness. The flux level was comparable to that recorded a decade before in a previous X-ray exposure. The $\gamma$\,Cas\ character remained clear at all epochs. V767\,Cen\ was monitored during a disk re-building event, and no significant change in X-ray flux or hardness was detected. In parallel, ASAS-SN photometry in the visible range also appeared to remain stable. The limited reaction to large H$\alpha$ variations and the presence of the $\gamma$\,Cas\ character even with a weak H$\alpha$ line, coupled with stable photometry, seem to disfavour scenarios involving an X-ray source located far out in the disk, close to a companion, and rather hint at X-ray generation closer to the Be star. These results bring important clues regarding the $\gamma$\,Cas\ phenomenon, but of course much remains to be done. In particular, the optical and X-ray monitorings should continue, to exclude large time delays and/or to analyse the X-ray behaviour during long (and complete) disk disappearances. Also, while the disk clearly plays some role in the appearance of the $\gamma$\,Cas\ phenomenon (since all $\gamma$\,Cas\ stars are Be stars), it remains to be clarified why most Be stars are {\it not} $\gamma$\,Cas\ in character. \section*{Acknowledgements} We thank Dr N. Schartel for granting us a DDT observation of V767\,Cen, and the {\it Swift} team for their help. We also thank Myron Smith for his comments and useful discussions. Y.N. and G.R. acknowledge support from the Fonds National de la Recherche Scientifique (Belgium), the European Space Agency (ESA) and the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Programme (contracts linked to XMM-Newton and Gaia). ADS and CDS were used for preparing this document. \section*{Data availability} The ESO, {\it Swift, Chandra}, and {\sc{XMM}}\emph{-Newton}\ data used in this article are available in their respective public archives, while the Australian optical amateur spectra are available in the public BeSS database (http://basebe.obspm.fr/basebe/). The Brazilian amateur data are available upon reasonable request.
{ "timestamp": "2022-02-02T02:14:13", "yymm": "2202", "arxiv_id": "2202.00278", "language": "en", "url": "https://arxiv.org/abs/2202.00278" }
\section{Introduction} \label{sec:1} Over the past decade, interest in gravastars has increased greatly, and a large number of papers have investigated charged and non-charged gravastars (for example, see \cite{Pani2010,Chan2010,Kubo2016,Banerjee2020,Ghosh2020,Shamir2020,Abbas2020,Kuhfittig2020}). Gravastars (gravitational condensate stars) were first proposed in the works of Mazur and Mottola \cite{Mazur2001,Mazur2004}. These stars are a possible alternative to black holes. Gravastars usually have the following structure: \begin{itemize} \item Interior region $\mathcal{D}_1$ (from $r=0$ to $R_1$): de Sitter fluid with Equation of State (hereafter EoS) $p=-\rho$. \item Intermediate (shell) region $\mathcal{D}_2$ (from $R_1$ to $R_2$): stiff Zeldovich fluid with EoS $p=\rho$. \item Exterior region $\mathcal{D}_3$ (from $R_2$ to $r=\infty$): empty spacetime with Schwarzschild, Schwarzschild-de Sitter or Reissner-Nordstr\"om geometry and EoS $p=\rho=0$. \end{itemize} The exterior spacetime can be taken to be the Schwarzschild black hole (BH) metric. In Figure (\ref{fig:1}) we illustrate the geometry of the gravastar spacetime on a conformal diagram, where the static gravastar spacetime is conformally extended in the same way as the Reissner-Nordstr\"om spacetime. The metric potentials of the gravastar interior spacetime are non-singular, and therefore at $r=0$ the space is free of singularities. On the conformal diagram, triangles denote de Sitter (hereafter dS) spacetime (the interior gravastar region with EoS parameter $\omega=-1$). As well, $i_0$ stands for the infinitely distant spacelike point and $\mathscr{I}_{+/-}$ for the future/past null hypersurfaces. \begin{figure*}[!htbp] \centering \begin{tikzpicture}[thick,scale=1.0, mycirc/.style={circle,fill=red!70, minimum size=0.01cm}] \draw (0,0) -- (1.5,-1.5); \draw (3,0) -- (1.5,-1.5); \draw[] (0,0) -- (0,-6); \draw (1.5,-4.5) -- (0,-6); \draw (1.5,-4.5) -- (3,-6); \draw (0,-3) -- (1.5,-1.5); \draw (0,-3) -- (1.5,-4.5); \draw (1.5,-4.5) -- (3,-3); \draw (3,-3) -- (1.5,-1.5); \node[rotate=90] at (-0.2,-3) {\large $r=0$}; \node[rotate=45] at (0.7,-2) {\large $r=R$}; \node[rotate=135] at (0.7,-4) {\large $r=R$}; \draw (1.5,-1.5) node[] (C) {}; \draw (1.5,-4.5) node[] (D) {}; \draw[dashed] (C) to [bend left=-35] (D); \draw (1.5,-1.5) node[] (C) {}; \draw (1.5,-4.5) node[] (D) {}; \draw[dashed] (C) to [bend left=35] (D); \draw (0,-3) node[] (C) {}; \draw (3,-3) node[] (D) {}; \draw[dashed] (C) to [bend left=35] (D); \draw (0,-3) node[] (C) {}; \draw (3,-3) node[] (D) {}; \draw[dashed] (C) to [bend left=-35] (D); \draw (0,-3) node[] (C) {}; \draw (0,-6) node[] (D) {}; \draw[dashed] (C) to [bend left=25] (D); \draw (0,0) node[] (C) {}; \draw (0,-3) node[] (D) {}; \draw[dashed] (C) to [bend left=25] (D); \node[] at (3.2,0) {\Large $i_0$}; \node[] at (3.2,-3) {\Large $i_0$}; \node[] at (3.2,-6) {\Large $i_0$}; \node[] at (2.5,-1) {\Large $\mathscr{I}_-$}; \node[] at (2.5,-2) {\Large $\mathscr{I}_+$}; \node[] at (2.5,-4) {\Large $\mathscr{I}_-$}; \node[] at (2.5,-5) {\Large $\mathscr{I}_+$}; \end{tikzpicture} \caption{Conformal diagram of the static gravastar} \label{fig:1} \end{figure*} So far, research on gravastars has mainly been carried out within Einstein's General Theory of Relativity. However, although Einstein's theory still describes the universe quite well, relativistic systems cannot yet be quantized within it, and recent cosmological observations and theoretical works motivate modifying the classical Einstein-Hilbert action.
Many attempts have been made to properly modify GR, and one of the most viable theories of modified gravity is $f(\mathcal{R})$ theory, which replaces the Ricci scalar in the classical EH action by an arbitrary function of the Ricci scalar. This theory was originally proposed in \cite{Buchdahl1970}. One of the interesting features of this kind of modified gravity is that it can describe cosmological inflation \cite{Brooker2016,Huang2013,Starobinsky1980} as well as the late-time acceleration, thereby addressing the dark energy problem \cite{Capozziello2011,Nojiri2017}. Research on various topics has also been carried out in other kinds of MOGs. For example, Das et al. \cite{Das2017} derived exact gravastar solutions in $f(\mathcal{R},\mathcal{T})$ gravity. In this model, the pressure was defined as the negative of the energy density, the shell region was filled with an ultrarelativistic fluid, and the exterior region was assumed to be the vacuum, non-rotating Schwarzschild-de Sitter spacetime (with a $\Lambda$ term present). The gravastar solutions in this gravity were non-singular and exact. In $f(\mathcal{G},\mathcal{T})$ gravity, a gravastar model was first constructed by Shamir et al. \cite{Shamir2020}. In turn, the electromagnetic nature of gravastars was probed by \cite{Debnath2019} in $f(\mathcal{T})$ gravity with either $\mathcal{T}=0$ (traceless) or $f_{\mathcal{T}\mathcal{T}} = 0$. It was shown that with $\mathcal{T}=0$ there are no physically acceptable solutions, but with $f_{\mathcal{T}\mathcal{T}} = 0$ the authors constructed non-singular and exact solutions for the three gravastar regions. As we know, gravity is much weaker than the other three fundamental forces: the strong and weak nuclear forces and the electromagnetic force. In particle physics, this is known as the hierarchy problem. In an attempt to solve this problem, Randall and Sundrum proposed the RS-I model \cite{Randall1999,PhysRevLett.83.4690}. This model includes two $(3+1)$-dimensional branes with positive and negative tension in a $5D$ bulk (usually Anti-de Sitter). In RS braneworlds, only gravity can propagate freely through the bulk, while the other forces are confined to the branes. We will investigate the second model, namely RS-II, in which the second brane with negative tension is sent to infinity, so that only one positive-tension brane remains. At low energies, this model also recovers Newtonian gravity. During the last few decades, many papers were devoted to the investigation of braneworlds, and some of them were particularly aimed at gravastars (for example, see \cite{Banerjee:2015ipa,Sengupta2020,Arbanil:2019xfi} and references therein). In our paper we investigate non-charged Kuchowicz gravastars in the framework of braneworld gravity, namely the dimensionally reduced RS-II model with a five-dimensional bulk. We find analytical and numerical solutions for the different regions of the gravastar structure. We also investigate the physical aspects of the gravastars, such as the proper length, energy, entropy, and interior-region mass. Our letter is organised as follows: in Section (\ref{sec:1}) we provide a brief introduction to the topic of charged and non-charged gravastars and modified theories of gravity. In Section (\ref{sec:22}), we describe the formalism of braneworld gravity and derive the energy density and pressure for the spherically symmetric interior spacetime from the modified Einstein Field Equations.
In Section (\ref{sec:3}), we provide the effective Equation of State for the different gravastar regions and describe the Kuchowicz-like metric potential that we will use throughout the paper. In Section (\ref{sec:4}) we probe the physical aspects of the gravastars in the framework of modified gravity. Finally, we summarize our results in the last Section (\ref{sec:6}). \section{Braneworld formalism} \label{sec:22} In the braneworld theory of gravity, the Einstein-Hilbert (EH) action integral is modified as follows \cite{Maartens:2003tw}: \begin{equation} \begin{gathered} S_\mathrm{BWG} = \frac{1}{2\kappa^2_{4+d}} \int d^4x d^dy\sqrt{-^{(4+d)}g} \left[^{(4+d)}R- 2\Lambda_{4+d}\right]\\ + \frac{1}{2\kappa^2_{4}}\int d^4x\sqrt{-g}(-\sigma+\mathcal{L}_\mathrm{M}) \end{gathered} \end{equation} where $\mathcal{L}_\mathrm{M}$ is the Lagrangian of the matter fields. Then, by varying the EH action, we obtain the (modified) EFEs \cite{Sotiriou2010}: \begin{widetext} \begin{equation} \begin{gathered} {}^{(4+d)\!}G_{AB} \equiv \;{}^{(4+d)\!}R_{AB}-{1\over2} \;{}^{(4+d)\!} R \;{}^{(4+d)\!}g_{AB} = -\Lambda_{4+d} \;{}^{(4+d)\!}g_{AB}+ \kappa_{4+d}^2 \;{}^{(4+d)\!}T_{AB} \end{gathered} \label{eq:13} \end{equation} \end{widetext} where the $(4+d)$-dimensional energy-momentum tensor is related to the $4$-dimensional on-brane one through a Dirac delta function: \begin{equation} {}^{(4+d)}T_{AB}=-\Lambda_{4+d} \;{}^{(4+d)}g_{AB}+(-\sigma g_{\mu\nu}+T_{\mu\nu})\,\delta(y-y_0) \end{equation} In the equation above, $y_0$ denotes the location of the brane in the additional fifth bulk coordinate $y$. The aforementioned EFEs for the 5D Randall-Sundrum II braneworld configuration can be rewritten in a simpler form \cite{Maartens:2003tw}: \begin{widetext} \begin{eqnarray} {G}_{\mu\nu}&=&-{1\over2}{\Lambda}_5 g_{\mu\nu}+{2\over3} \kappa_5^2 \left[{}^{(5)}T_{AB}g_\mu{}^A g_\nu{}^B + \left( {}^{(5)}T_{AB}n^An^B-{1\over 4} \;{}^{(5)}T \right) g_{\mu\nu} \right] \nonumber \\ && + K K_{\mu\nu}-K_\mu{}^\alpha K_{\alpha\nu} + {1\over2}\left[K^{\alpha\beta}K_{\alpha\beta}-K^2 \right]g_{\mu\nu} - {\cal E}_{\mu\nu}, \label{ein} \end{eqnarray}% \end{widetext} where the four-dimensional quantities above are defined through the five-dimensional ones: \begin{equation} {\cal E}_{\mu\nu} = {}^{(5)\!}C_{ACBD} \, n^Cn^D g_\mu{}^A g_\nu{}^B, \end{equation} \begin{equation} \Lambda=\frac{1}{2}(\Lambda_5+\kappa^2_4\sigma)\xRightarrow{\Lambda=0}\Lambda_5=-\kappa^2_4\sigma \end{equation} \begin{equation} \kappa_4^2=\frac{1}{6}\sigma\kappa_5^4 \end{equation} Finally, the extrinsic curvature follows from the Israel-Darmois junction conditions: \begin{equation} K_{\mu\nu}=-\frac{1}{2}\kappa_5^2\bigg[T_{\mu\nu}+\frac{1}{3}(\sigma-T)g_{\mu\nu}\bigg] \end{equation} After some tedious algebra and the use of Israel's junction conditions, we arrive at the simplified form of the on-brane field equations \cite{PhysRevD.62.024012}: \begin{equation} \begin{gathered} G_{\mu\nu} = T_{\mu\nu}+\frac{6}{\sigma}S_{\mu\nu}+E_{\mu\nu} \end{gathered} \end{equation} where the auxiliary tensors are \begin{equation} \begin{gathered} S_{\mu\nu} = \frac{TT_{\mu\nu}}{12}-\frac{T_{\mu\alpha}T^{\alpha}_{\nu}}{4}+\frac{g_{\mu\nu}}{24}(3T_{\alpha\beta}T^{\alpha\beta}-T^2) \end{gathered} \end{equation} \begin{equation} \begin{gathered} E_{\mu\nu} = -\frac{6}{\sigma}\bigg[Uu_{\mu}u_\nu+P\chi_\mu \chi_\nu + h_{\mu\nu}\bigg(\frac{U-P}{3}\bigg)\bigg] \end{gathered} \end{equation} In the equations above, $\sigma$ is the $(3+1)$-dimensional brane tension, $G_{\mu\nu}$ is
the Einstein tensor, $T_{\mu\nu}$ is the brane stress-energy tensor ($T=g^{\mu\nu}T_{\mu\nu}$ is its trace), $U$ and $P$ are the bulk energy density and isotropic pressure, $u_\mu$ is the four-velocity, $\chi_\nu = 1/\sqrt{g_{rr}}\,\delta^\nu_r$ is the radial spacelike unit vector, and $h_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_\nu$. For simplicity, we will use the bulk Equation of State (EoS) $P=\omega U$ with $U=A\rho+B$, where $\rho$ is the brane energy density. We will study the $(3+1)$-dimensional gravastar geometry with the following interior spherically symmetric spacetime (the metric signature is $(-,+,+,+)$): \begin{equation} ds^2 = -e^{\nu(r)}dt^2 + e^{\lambda(r)}dr^2 + r^2d\theta^2+r^2\sin^2\theta d\phi^2 \label{eq:2.3} \end{equation} Using the line element above and the EFEs from Equation (\ref{eq:13}), we can derive the energy density and isotropic pressure \cite{Sengupta2020}: \begin{equation} e^{-\lambda}\left(\frac{\lambda'}{r}-\frac{1}{r^2}\right)+\frac{1}{r^2} =\left[\rho(r) \left( 1+\frac {\rho(r) }{2 \sigma} \right) +{\frac {6 U}{\sigma}}\right],\label{eq6} \end{equation} \begin{widetext} \begin{equation} e^{-\lambda}\left(\frac{\nu'}{r}+\frac{1}{r^2}\right) -\frac{1}{r^2} =\left[p \left( r \right) +{\frac {\rho \left( r \right) \left( p \left( r \right) +\frac{\rho(r)}{2} \right)} {\sigma}}+{\frac {2U}{\sigma}}+{\frac {4 P}{\sigma}}\right],\label{eq7} \end{equation} \begin{equation} e^{-\lambda}\left[\frac{\nu''}{2}-\frac{\lambda' \nu'}{4}+\frac{\nu'^2}{4}+\frac{\nu'-\lambda'}{2r}\right] = \Bigg[p(r)+{\frac{\rho(r) \left(p(r)+\frac{\rho(r)}{2}\right)}{\sigma}}+{\frac{2U}{\sigma}}-{\frac {{2 P}}{\sigma}}\Bigg].\label{eq8} \end{equation} \end{widetext} where we have used a stress-energy tensor of the form \cite{Shamir2020}: \begin{equation} T_{\mu\nu}=(\rho+p_t)u_\mu u_\nu -p_tg_{\mu\nu}+(p_r-p_t)\chi_\mu \chi_\nu \end{equation} Here, $p_r$ and $p_t$ are the radial and tangential pressures, respectively, $u_\mu$ is the timelike four-velocity and $\chi_\mu$ is the radial four-vector. We consider the isotropic case for simplicity, and therefore $p_r=p_t=p$. It is also necessary to define the ``effective'' energy density and isotropic pressure \cite{Arbanil:2019xfi}: \begin{equation} \rho^{\mathrm{eff}}=\rho\bigg(1+\frac{\rho}{2\sigma}\bigg)+\frac{6U}{\sigma} \end{equation} \begin{equation} p^{\mathrm{eff}}=p+\frac{1}{2\sigma}\bigg[\rho(\rho+2p)+4U\bigg] \end{equation} Finally, in braneworld gravity, for the given line element the energy conservation (the well-known Tolman-Oppenheimer-Volkoff) equation reads \cite{Oppenheimer1939,Poncede1993,Rahaman2014,Tolman1939}: \begin{equation} \frac{dp}{dr}+\frac{\nu'(r)}{2}(\rho+p)+ F_{\mathrm{ex}} = 0 \label{eq:7} \end{equation} where $F_{\mathrm{ex}}$ is an external force, present because of the discontinuity of the stress-energy tensor in the braneworld MOG. To properly analyze such a compact astrophysical object as a gravastar, one can assume a physically viable metric potential, namely the Kuchowicz-like metric potential of the form \cite{Kuchowicz1968}: \begin{equation} e^{\nu(r)} = e^{Cr^2+2\ln D} \end{equation} where $C$ and $D$ are arbitrary constants. An interior spacetime with this form of metric potential is often called a Kuchowicz spacetime.
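For later numerical work, the bulk ansatz and the effective fluid variables defined above translate directly into code. The following is a minimal sketch (Python is assumed; the function names are ours):

\begin{verbatim}
def bulk_U(rho, A, B):
    # Linear bulk ansatz U = A*rho + B, with P = omega*U
    return A * rho + B

def rho_eff(rho, sigma, A, B):
    # Effective energy density: rho(1 + rho/(2 sigma)) + 6U/sigma
    return (rho * (1.0 + rho / (2.0 * sigma))
            + 6.0 * bulk_U(rho, A, B) / sigma)

def p_eff(p, rho, sigma, A, B):
    # Effective pressure: p + [rho(rho + 2p) + 4U] / (2 sigma)
    return p + (rho * (rho + 2.0 * p)
                + 4.0 * bulk_U(rho, A, B)) / (2.0 * sigma)
\end{verbatim}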
\subsection{Junction conditions} A gravastar must satisfy the continuity conditions below, imposed at the surface $r=R$ of a gravastar of radius $R$ \cite{BHAR2021100879}: \begin{equation} \mathrm{Continuity\;of\;}g_{tt}:\quad 1-\frac{2M}{R}=e^{CR^2}D^2 \end{equation} \begin{equation} \mathrm{Continuity\;of\;}\frac{\partial g_{tt}}{\partial r}:\quad \frac{2M}{R^2}=2CRe^{CR^2}D^2 \end{equation} Solving these equations for the Kuchowicz constants yields \begin{equation} C=-\frac{M}{R^2 (2 M-R)} \end{equation} \begin{equation} D=\frac{\sqrt{M} e^{-\frac{C R^2}{2}}}{\sqrt{C} R^{3/2}} \end{equation} With these constants fixed, we can proceed to the gravastar solutions in the next section. \section{Gravastars in braneworld gravity} \label{sec:3} \subsection{Interior region} The gravastar spacetime is separated into three different regions. The first is the interior region, whose fluid has the following effective EoS \cite{Mazur2001,Mazur2004}: \begin{equation} p = -\rho \end{equation} Moreover, in the interior region \cite{Abbas2020,Sharif2020}: \begin{equation} p = -\rho = - \rho_c \end{equation} where $\rho_c$ is a constant energy density. Then, adopting the effective EoS $p=-\rho_c$, we can derive the remaining metric potential. For the five-dimensional RS-II braneworld, a gravastar with the Kuchowicz metric potential obeys \begin{equation} e^{-\lambda}\bigg(\frac{\lambda^{\prime}}{r}-\frac{1}{r^2}\bigg)+\frac{1}{r^2}= \bigg[\rho_c\bigg(1+\frac{\rho_c}{2\sigma}\bigg)+\frac{6}{\sigma}(A\rho_c+B)\bigg] \end{equation} \begin{widetext} \begin{equation} e^{-\lambda}\bigg(\frac{1}{r^2}+\frac{\nu^{\prime}}{r}\bigg)-\frac{1}{r^2}= \bigg[-\rho_c\bigg(1+\frac{\rho_c}{2\sigma}\bigg)+\frac{2}{\sigma}(A\rho_c+B)+\frac{4\omega}{\sigma}(A\rho_c+B)\bigg] \end{equation} \end{widetext} The remaining metric potential therefore has the solution \begin{equation} \begin{gathered} e^{-\lambda}=-\frac{6 A \rho_c r^3+6 B r^3+\rho_c r^3 \left(\frac{\rho_c}{2}+\sigma \right)-3 r \sigma +3 c_1}{3 r \sigma } \end{gathered} \end{equation} To obtain a solution regular at the origin we impose $c_1=0$, so that \begin{equation} e^{-\lambda}=1-\frac{r^2 \left(\rho_c\left(6 A+\frac{\rho_c}{2}+\sigma \right)+6 B\right)}{3 \sigma } \end{equation} Now we can also derive the brane tension from the continuity condition for $g_{rr}$: \begin{equation} \mathrm{Continuity\;of\;}g_{rr}:\quad \bigg(1-\frac{2M}{R}\bigg)^{-1}=e^{\lambda(R)} \end{equation} Using the equation above, the brane tension is \begin{equation} \sigma =\frac{R^3 (\rho_c (12 A+\rho_c)+12 B)}{12 M-2 \rho_c R^3} \end{equation} The brane tension above is positive (as expected) if $A>0$ for a relatively small constant density ($\rho_c\ll1$), and if $A<0$ for larger values of $\rho_c$. With this brane tension, we finally obtain the simplified form of the metric potential \begin{equation} e^{-\lambda}=1-\frac{2 M r^2}{R^3} \end{equation} Note that neither the energy density nor the isotropic pressure suffers from a central singularity, as is common for gravastar models with non-singular metric potentials. Consequently, having defined the stress-energy tensor components, we can calculate the total interior-region mass from the formula \cite{Ghosh2021}: \begin{equation} \mathcal{M} = \int^{R_1=R}_0 4\pi r^2 \rho dr=\frac{4\pi R^3}{3}\rho_c \label{eq:18} \end{equation} As we see, the interior-region mass is invariant under the change of gravitational framework, since the energy density is constant.
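Before turning to the active gravitational mass, the junction constants and the interior potential derived above admit a quick numerical sanity check. The following minimal sketch (Python with numpy assumed; the test values are simply the figure-caption numbers for PSR J1416-2230, treated here as consistent geometrized quantities, which is an assumption of the sketch) verifies the continuity conditions at $r=R$; both checks reduce to the exact identity $M/(CR^3)=1-2M/R$:

\begin{verbatim}
import numpy as np

M, R = 1.97, 9.69   # test mass and radius (figure-caption values)

# Junction constants of the Kuchowicz potential e^{nu} = D^2 exp(C r^2)
C = -M / (R**2 * (2.0 * M - R))
D = np.sqrt(M / (C * R**3)) * np.exp(-C * R**2 / 2.0)

def em_lambda_interior(r):
    # Interior potential e^{-lambda} = 1 - 2 M r^2 / R^3 derived above
    return 1.0 - 2.0 * M * r**2 / R**3

# g_tt and g_rr continuity with the Schwarzschild exterior at r = R
assert np.isclose(D**2 * np.exp(C * R**2), 1.0 - 2.0 * M / R)
assert np.isclose(em_lambda_interior(R), 1.0 - 2.0 * M / R)
\end{verbatim}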
We can also define the active gravitational mass of the interior region in terms of the effective energy density as follows: \begin{equation} \widetilde{M}= \int^{R_1=R}_0 4\pi r^2 \rho^{\mathrm{eff}} dr=\frac{4}{3} \pi R^3 \bigg(\frac{6 A \rho_c}{\sigma }+\frac{6 B}{\sigma}+\rho_c \bigg(\frac{\rho_c}{2 \sigma }+1\bigg)\bigg) \end{equation} We plot the gravitational mass for both the regular and effective energy densities in Figure (\ref{fig:12}). As we see, the active gravitational mass grows steeply (as $R^3$) as we approach the envelope (shell), which is expected (for example, the same behaviour was obtained for gravastars admitting conformal motion in $f(R,T^2)$ gravity \cite{SHARIF2021}). \begin{figure}[!htbp] \centering \includegraphics[width=0.7\columnwidth]{Mass.pdf} \caption{Active gravitational mass of the first (interior) region for the PSR J1416-2230 compact star with mass $M=1.97M_\odot$ and radius $R=9.69$\,km \cite{Demorest2010ATN}. Since the constant density in the interior region is considered to be relatively small ($\rho_c=0.01$), we assume that $A=1$, and therefore $\sigma=20.0762$.} \label{fig:12} \end{figure} \subsection{Intermediate region: shell} The shell of the gravastar is usually very thin, but finite. It separates the interior and exterior regions of the gravastar and contains all of the collapsing star's mass. We assume that the matter in the shell obeys the EoS $p=\rho$ (with $\omega=1$). Also, from the thin-shell approximation, $0<e^{-\lambda(r)}<1$ \cite{Abbas2020}. Thus, with the given EoS, the fluid in the shell is a stiff fluid (first considered by \cite{Zeldovich1972}). For this stiff-fluid equation of state, the braneworld EFEs in the thin-shell limit are rewritten as follows \cite{Sengupta2020}: \begin{equation} \label{eq16} \frac{e^{-\lambda}\lambda^{\prime}}{r}+\frac{1}{r^2}=\bigg[\rho\bigg(1+\frac{6A}{\sigma}\bigg)+\frac{\rho^2}{2\sigma}+\frac{6B}{\sigma}\bigg], \end{equation} \begin{equation} \label{eq17} -\frac{1}{r^2}=\bigg[\rho\bigg\{1+\bigg(\frac{1+2\omega}{\sigma}\bigg)2A\bigg\}+\frac{3\rho^2}{2\sigma}+\bigg(\frac{1+2\omega}{\sigma}\bigg)2B\bigg], \end{equation} \begin{widetext} \begin{equation} \label{eq18} -\frac{\lambda^{\prime}\nu^{\prime}}{4}e^{-\lambda}-\frac{e^{-\lambda}\lambda^{\prime}}{2r}= \bigg[\rho\bigg\{1+\bigg(\frac{1-\omega}{\sigma}\bigg)2A\bigg\}+\frac{3\rho^2}{2\sigma}+\bigg(\frac{1-\omega}{\sigma}\bigg)2B\bigg]. \end{equation} \end{widetext} Solving these EFEs and using $\rho=\rho_ce^{-\nu(r)}$ (which follows from the TOV equation (\ref{eq:7}) with $p=\rho$ and a vanishing external force) together with the Kuchowicz-like $\nu(r)$, we obtain the second unknown metric potential analytically: \begin{equation} e^{-\lambda}=\frac{3 A \rho_c e^{-C r^2}}{C D^2 \sigma }-\frac{3 B r^2}{\sigma }+\frac{\rho_c^2 e^{-2 C r^2}}{8 C D^4 \sigma }+\frac{\rho_c e^{-C r^2}}{2 C D^2}+\log (r)-c_1 \end{equation} Without loss of generality, it is convenient to assume that $c_1=0$. \subsection{Exterior region} The exterior geometry of the gravastar is well described by the Schwarzschild metric (with the effective EoS $\rho=p=0$): \begin{widetext} \begin{equation} ds^2 = -\bigg(1-\frac{2M}{r}\bigg)dt^2 + \bigg(1-\frac{2M}{r}\bigg)^{-1}dr^2 + r^2 d\theta^2 + r^2 \sin^2 \theta d\phi^2 \end{equation} \end{widetext} where $M$ is the total gravastar mass.
\subsection{Intermediate region: shell} The shell of the gravastar is very thin but of finite thickness. It separates the interior and exterior regions of the gravastar and contains all of the mass of the collapsing star. We assume that the matter in the shell obeys the EoS $p=\rho$ (i.e., $\omega=1$), so the shell fluid is a stiff fluid (first considered by Zel'dovich \cite{Zeldovich1972}). In addition, the thin-shell approximation implies $0<e^{-\lambda(r)}<1$ \cite{Abbas2020}. For a stiff-fluid EoS the general braneworld EFEs take the form \cite{Sengupta2020} \begin{equation} \label{eq16} \frac{e^{-\lambda}\lambda^{\prime}}{r}+\frac{1}{r^2}=\bigg[\rho\bigg(1+\frac{6A}{\sigma}\bigg)+\frac{\rho^2}{2\sigma}+\frac{6B}{\sigma}\bigg], \end{equation} \begin{equation} \label{eq17} -\frac{1}{r^2}=\bigg[\rho\bigg\{1+\bigg(\frac{1+2\omega}{\sigma}\bigg)2A\bigg\}+\frac{3\rho^2}{2\sigma}+\bigg(\frac{1+2\omega}{\sigma}\bigg)2B\bigg], \end{equation} \begin{widetext} \begin{equation} \label{eq18} -\frac{\lambda^{\prime}\nu^{\prime}}{4}e^{-\lambda}-\frac{e^{-\lambda}\lambda^{\prime}}{2r}= \bigg[\rho\bigg\{1+\bigg(\frac{1-\omega}{\sigma}\bigg)2A\bigg\}+\frac{3\rho^2}{2\sigma}+\bigg(\frac{1-\omega}{\sigma}\bigg)2B\bigg]. \end{equation} \end{widetext} Solving the EFEs with $\rho=\rho_ce^{-\nu(r)}$, where $\nu(r)$ is of Kuchowicz form, we obtain the second metric potential analytically: \begin{equation} e^{-\lambda}=\frac{3 A \rho_c e^{-C r^2}}{C D^2 \sigma }-\frac{3 B r^2}{\sigma }+\frac{\rho_c^2 e^{-2 C r^2}}{8 C D^4 \sigma }+\frac{\rho_c e^{-C r^2}}{2 C D^2}+\log (r)-c_1 \end{equation} Without loss of generality it is convenient to set $c_1=0$. \subsection{Exterior region} The geometry of the exterior region is well described by the Schwarzschild metric (with the vacuum EoS $\rho=p=0$): \begin{widetext} \begin{equation} ds^2 = \bigg(1-\frac{2M}{r}\bigg)dt^2 - \bigg(1-\frac{2M}{r}\bigg)^{-1}dr^2 - r^2 d\theta^2 - r^2 \sin^2 \theta d\phi^2 \end{equation} \end{widetext} where $M$ denotes the total gravastar mass. For the exterior spacetime, with the vacuum EoS, the EFEs reduce to the very simple form \begin{equation} e^{-\lambda}\bigg(\frac{\lambda'}{r}-\frac{1}{r^2}\bigg)+\frac{1}{r^2}=\frac{6B}{\sigma} \end{equation} whose solution is \begin{equation} e^{-\lambda}=1-\frac{2M}{r}-\frac{2B}{\sigma}r^2 \end{equation} To remove the effective brane cosmological constant $\Lambda=6B/\sigma$, we set $B=0$; the solution above then reproduces the $g_{rr}$ component of the Schwarzschild line element (the constant of integration is again taken to vanish for the sake of regularity). \section{Physical aspects of gravastars in modified gravity} \label{sec:4} \subsection{Proper length} The proper length of the gravastar shell is \begin{equation} \ell = \int^{R+\epsilon}_{R}\frac{dr}{\sqrt{e^{-\lambda(r)}}} \label{eq:23} \end{equation} Using the Kuchowicz metric potential within the braneworld framework, we evaluate the shell proper length numerically. \begin{figure}[!htbp] \centering \includegraphics[width=0.7\columnwidth]{proper.pdf} \caption{Shell proper length w.r.t.\ shell thickness $\epsilon$. As our model we use the compact star PSR J1614-2230 with mass $M=1.97M_\odot$ and radius $R=9.69$ km \cite{Demorest2010ATN}. For the numerical results we assume the same brane tension as in the interior region (namely $\sigma=20.0762$) and the same value of $A$.} \label{fig:32} \end{figure} In Figure (\ref{fig:32}) we numerically evaluate equation (\ref{eq:23}) with $A=1$, $B=0$ (vanishing effective brane cosmological constant), $R=9.69$ and varying shell thickness. As one may notice, the shell proper length $\ell$ grows as $\epsilon$ increases. \subsection{Energy} The energy of the gravastar shell is defined as \begin{equation} \mathcal{E} = \int ^{R+\epsilon}_{R} 4\pi r^2 \rho^{\mathrm{eff}} dr \label{eq:24} \end{equation} \begin{figure}[!htbp] \centering \includegraphics[width=0.7\columnwidth]{energy.pdf} \caption{Shell energy w.r.t.\ shell thickness $\epsilon$. As our model we use the compact star PSR J1614-2230 with mass $M=1.97M_\odot$ and radius $R=9.69$ km \cite{Demorest2010ATN}. We assume the same values of the brane tension and of the constant $A$ as in the interior region.} \label{fig:332} \end{figure} In Figure (\ref{fig:332}) we numerically evaluate Equation (\ref{eq:24}) for braneworld gravity with both the regular energy density $\rho$ and the effective energy density $\rho^{\mathrm{eff}}$. As expected, $\mathcal{E}$ grows with the shell thickness $\epsilon$. Moreover, the shell energy computed with the effective energy density is larger than that computed with the regular $\rho$.
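Both integrals are straightforward to evaluate numerically. A minimal sketch (SciPy assumed; parameters as in the figure captions, with $\rho=\rho_c e^{-\nu(r)}$ in the shell and the effective density read off from the form of the field equations above):
\begin{verbatim}
import math
from scipy.integrate import quad

M, R = 1.97, 9.69
rho_c, A, B, sigma = 0.01, 1.0, 0.0, 20.0762
C = -M / (R**2 * (2 * M - R))
D = math.sqrt(M) * math.exp(-C * R**2 / 2) / (math.sqrt(C) * R**1.5)

def exp_minus_lambda(r):        # shell metric potential, B = 0, c_1 = 0
    e = math.exp(-C * r**2)
    return (3 * A * rho_c * e / (C * D**2 * sigma)
            + rho_c**2 * e**2 / (8 * C * D**4 * sigma)
            + rho_c * e / (2 * C * D**2) + math.log(r))

def rho(r):                     # rho = rho_c * exp(-nu(r)), Kuchowicz nu
    return rho_c * math.exp(-C * r**2) / D**2

def rho_eff(r):                 # effective density on the brane
    return rho(r) * (1 + rho(r) / (2 * sigma)) + 6 * (A * rho(r) + B) / sigma

for eps in (0.1, 0.2, 0.3):
    ell = quad(lambda r: 1 / math.sqrt(exp_minus_lambda(r)), R, R + eps)[0]
    E = quad(lambda r: 4 * math.pi * r**2 * rho_eff(r), R, R + eps)[0]
    print(f"eps = {eps:.1f}: ell = {ell:.4f}, E_eff = {E:.4f}")
\end{verbatim}
Both quantities increase monotonically with the shell thickness, in agreement with the figures.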
\subsection{Entropy} Mazur and Mottola \cite{Mazur2001,Mazur2004} argued that the interior region of a gravastar must have zero entropy density, which is stable for a single-condensate region. The entropy on the shell, however, is generally non-zero. The entropy of the static relativistic gravastar is determined by \begin{equation} S = \int ^{R+\epsilon}_{R} 4\pi r^2 \frac{s(r)}{\sqrt{e^{-\lambda(r)}}}dr \end{equation} where \begin{equation} s(r) = \xi \frac{k_{\mathrm{B}}}{\hbar} \sqrt{\frac{p}{2\pi}} \end{equation} We assume $k_{\mathrm{B}}=\hbar$, so that the prefactor is unity. Figure (\ref{fig:111}) shows the variation of the entropy within the shell for the RS-II braneworld gravastar. Our numerical analysis shows that the shell entropy grows as the shell thickness increases. Moreover, with the effective pressure inserted in the definition of $s(r)$, the entropy is slightly larger than with the regular isotropic pressure $p$. \begin{figure}[!htbp] \centering \includegraphics[width=0.7\columnwidth]{entropy.pdf} \caption{Shell entropy w.r.t.\ shell thickness $\epsilon$. As our model we use the compact star PSR J1614-2230 with mass $M=1.97M_\odot$ and radius $R=9.69$ km \cite{Demorest2010ATN}. We also set $\rho_c=0.01$, $\xi=0.235$, $\sigma=20.0762$ and $A=1$.} \label{fig:111} \end{figure} \subsection{Surface redshift} The gravastar surface redshift is defined as \begin{equation} \mathcal{Z}_s = |g_{tt}|^{-1/2} - 1 = \frac{e^{-\frac{1}{2} C r^2}}{\bigl| D\bigr| }-1 \end{equation} For an isotropic compact-star fluid the surface redshift must not exceed $2$ (for spacetimes with a cosmological constant the bound is $5$). We plot the surface redshift in Figure (\ref{fig:1111}) for several compact stellar objects. Our numerical investigation shows that for each compact star $\mathcal{Z}_s$ does not exceed $2$ over the whole interior domain, as required. \begin{figure}[!htbp] \centering \includegraphics[width=0.7\columnwidth]{surface.pdf} \caption{Surface redshift w.r.t.\ radial coordinate $r$.} \label{fig:1111} \end{figure} \subsection{Adiabatic index} The dynamical stability of a relativistic stellar system against infinitesimal adiabatic perturbations can be checked following the pioneering work of Chandrasekhar \cite{Chandrasekhar1964}, who showed that stability requires the adiabatic index to exceed $4/3$. The adiabatic index is defined as \cite{Maurya2017}: \begin{equation} \Gamma = \frac{p+\rho}{p}\frac{dp}{d\rho} \end{equation} Then: \begin{itemize} \item For the interior region with the EoS $p=-\rho$, $\Gamma=0$. \item For the intermediate shell region with the EoS $p=\rho$, $\Gamma=2$. \end{itemize} We therefore conclude that, by the adiabatic-index criterion, the interior region of the braneworld gravastar is unstable while the shell region is stable.
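A quick numerical check of the redshift bound for the star used throughout (a sketch; $C$ and $D$ as derived from the junction conditions, same parameter assumptions as above):
\begin{verbatim}
import math

M, R = 1.97, 9.69
C = -M / (R**2 * (2 * M - R))
D = math.sqrt(M) * math.exp(-C * R**2 / 2) / (math.sqrt(C) * R**1.5)

Z = [math.exp(-C * r**2 / 2) / abs(D) - 1
     for r in (R * k / 100 for k in range(101))]
print(f"Z_s(0) = {Z[0]:.4f}, Z_s(R) = {Z[-1]:.4f}, max = {max(Z):.4f} < 2")
\end{verbatim}
The maximum is attained at the centre and stays well below the bound of $2$, consistent with Figure (\ref{fig:1111}).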
\section{Conclusions} \label{sec:6} In the present letter we have studied static, spherically symmetric, uncharged gravastars in the framework of braneworld gravity (assuming a five-dimensional bulk with an RS-II brane configuration) with the Kuchowicz metric potential. In this section we summarize the key results obtained in the paper. We have derived several physical parameters, namely the interior region mass, the proper length, the shell energy and entropy, the surface redshift and the adiabatic index, and we have discussed their behavior both analytically and graphically. From the numerical solutions we note the following: \begin{itemize} \item Interior region mass: the interior gravitational mass is shown in Figure (\ref{fig:1}) for both the regular and the effective energy density. The gravitational mass grows exponentially as the envelope is approached, which is the common behavior of the DE-like matter in the interior region. \item Proper length: the proper length of the gravastar shell $\ell$ is plotted against the shell thickness in Figure (\ref{fig:32}); it increases with growing shell thickness. \item Energy: the energy of the shell $\mathcal{E}$ is probed and illustrated in Figure (\ref{fig:332}); the shell energy behaves as expected. \item Entropy: the entropy density of the interior region vanishes, but the shell entropy is generally non-zero. We plot the entropy for the braneworld gravastar in Figure (\ref{fig:111}); the shell entropy grows monotonically with the shell thickness. \item Surface redshift: the values of the surface redshift $\mathcal{Z}_s$ indicate whether the compact object is stable. For an isotropic fluid (with $p_r=p_t$) the surface redshift must not exceed $2$, which is obeyed for the compact stars considered; the results are plotted in Figure (\ref{fig:1111}). \end{itemize} As noted above, a gravastar consists of three distinct regions: interior, shell and exterior. With the Kuchowicz metric potential, we have analytically derived the second (unknown) metric potential, i.e.\ the $g_{rr}$ component, from the equation of state of each region. We conclude that we have obtained a new, non-singular and horizonless gravastar model in braneworld gravity built on the Kuchowicz metric potential. Generally, obtaining physically acceptable solutions with a prescribed metric potential is more challenging; nevertheless, interest in the Tolman-Kuchowicz metric potentials for compact objects (exotic stars, for example) has grown over the past decade (see \cite{Rej2021,Shamir2020b,Biswas2020,Majid2020,Jasim2018,FarasatShamir2020}), and it is therefore important to test the Kuchowicz spacetime on static gravastars. \section*{Acknowledgment} PKS acknowledges National Board for Higher Mathematics (NBHM), No.: 02011/3/2022 NBHM(R.P.)/R\&D II/2152 Dt.14.02.2022, Govt. of India under Department of Atomic Energy (DAE). We are thankful to the anonymous referee for helpful comments, which have significantly improved our work in terms of research quality and presentation.
{ "timestamp": "2022-03-16T01:33:52", "yymm": "2202", "arxiv_id": "2202.00236", "language": "en", "url": "https://arxiv.org/abs/2202.00236" }
\section{Introduction} This paper is about a structure abbreviated as SA SFT, which stands for strongly aperiodic subshift of finite type, that exists on some groups and not on others (all groups considered are infinite and finitely generated). It has two parts: in Part~\ref{part:why} we discuss the reasons for constructing SA SFTs and explain what they are. In Part~\ref{part:how} we explain the basic ideas behind the construction which led to the theorem of D. B. Cohen, C. Goodman--Strauss, and the author~\cite{cohen_goodman-strauss_rieck_2021}: \begin{thm} \label{MainThm} A hyperbolic group admits an SA SFT if and only if it has at most one end. \end{thm} \noindent This paper is written for topologists and geometric group theorists, and we do not assume any familiarity with SA SFTs. We do assume familiarity with hyperbolic groups, which were introduced by Gromov in an extremely influential paper~\cite{gromov}. Gromov outlined the study of SFTs on hyperbolic groups in that paper, and a detailed investigation was carried out by Coornaert and Papadopoulos~\cite{MR1222644,MR1878587}. Our interest is in promoting the SFT to an SA SFT. \begin{remark} Much of this paper reviews old and well-known results. In order to facilitate the reading of~\cite{cohen_goodman-strauss_rieck_2021}, we have included many references to that work. This is \em not \em an indication that all the claims are original. \end{remark} {\bf Acknowledgement.} I would like to thank the organizers of the special session, Jos\'e Ayala Hoffman, Mario Eudave Mu\~noz, and Jennifer Schultens, for giving me this opportunity. I thank Chaim Goodman--Strauss for help preparing this paper and the beautiful illustrations. I am very grateful to the anonymous referee for very helpful suggestions. Yo'av Rieck was partially supported by the Simons Foundation (award number 637880). \part{Why do we do what we do?}\label{part:why} \section{The Geography of a Group} Fix, for the entirety of Part~\ref{part:why}, an infinite, finitely generated group \(G\), together with a finite generating set \(S = S^{-1}\). As usual, we view the group through its Cayley graph \(\Gamma\), which we define here for completeness. The vertices of \(\Gamma\) are the elements of \(G\), and the edges out of a vertex \(g\) have the form \((g,ga)\) for all \(a \in S\). The edges of \(\Gamma\) are labeled and directed: the edge \((g,ga)\) is labeled \(a\) and points from \(g\) to \(ga\). Since \(S = S^{-1}\), there is always an edge pointing in the opposite direction, namely \((ga,ga\cdot a^{-1}) = (ga,g)\), labeled \(a^{-1}\). Connectivity of \(\Gamma\) follows from the fact that \(S\) generates \(G\). Assigning length \(1\) to each edge induces a metric on \(\Gamma\). Left multiplication defines an action of \(G\) on \(\Gamma\) by graph isomorphisms that preserve the labels, directions, and distances. This action is transitive on the vertices. Since \(G\) acts transitively on the vertices of \(\Gamma\), we cannot use \(\Gamma\) to distinguish elements: for any \(g_{1},g_{2} \in G\) and any \(R>0\), the balls \(B(R,g_{1})\) and \(B(R,g_{2})\) are identical (left multiplication by \(g_{2}g_{1}^{-1}\) carries one to the other). How are we to know where we are? One way is to name each and every group element. For example, if the group is the integers \(\mathbb{Z}\), each integer has a name and we may use it. This will not do: when labeling group elements we may only use \em finitely many \em labels (imagine a small device attached to each group element that records the label), and the group \(G\) is, by assumption, infinite.
Here's another suggestion: label the identity by \(0\) and all other elements by \(1\). This poses a different problem: once the labeling of the group is completed we want to be able to \em guarantee \em that it is ``correct'', and this must be done locally. If we allow arbitrarily large balls to be labeled \(1\), what is to stop us from labeling the entire group \(1\)? This pinpoints our goal: to construct a finite set of labels and a collection of local rules that will allow us to distinguish elements of \(G\). Specifically, we would like to (and here we give an intuitive description of an SA SFT): \begin{enumerate} \item Choose a finite set \(A\). \item Construct \em local rules \em for labeling. \item Ensure that in \em any \em labeling that obeys the local rules, and for any \(g_{1} \neq g_{2} \in G\), there is \(R>0\) so that the labelings on \(B(R,g_{1})\) and \(B(R,g_{2})\) are distinct (\(R\) depends on \(g_{1}\) and \(g_{2}\), and this is unavoidable). \item Ensure that there is some labeling obeying the rules, so that condition~(3) is not vacuous. \end{enumerate} \begin{remark} \label{remark:SFT} A few remarks are in order: \begin{enumerate} \item A finite set carries no information other than its cardinality. (In practice, in order to work with \(A\), each label will carry useful information; a similar comment can be made about the states of a Turing machine.) \item The local rules are the key here. \em Local rules \em form an atlas, that is, a finite collection of {\it charts}, where each chart is a labeling of a ball of radius \(R\), for some fixed \(R\).\footnote{ The reader may have seen local rules defined on finite sets that are not necessarily balls, but there is no loss of generality in assuming that they are all balls of the same radius.} We say that a labeling satisfies the local rules if the labeling about each point \(g\) agrees with one of the charts (translated to be centered at \(g\)). \item From this point on we focus on labeling the vertices, but the fixed labeling of the edges plays a key role: since the edges are labeled and directed, once we center a chart at \(g\) the exact position of each vertex of the ball is determined (we cannot ``rotate'' the ball). \item An SFT is only required to satisfy the first two conditions above: a finite set, and local rules for labeling. The other two conditions may fail: on the one hand, the rules may allow for a labeling in which some (even all!) vertices look the same; on the other hand, the rules may not allow for \em any \em labeling at all (an empty SFT). Adding conditions~(3) and~(4) is the content of ``SA''. \end{enumerate} \end{remark} \section{Relation to Tilings} \label{tiling} An SFT on a group \(G\) is sometimes called a ``tiling'' of \(G\). This is very reasonable, as we now explain. Starting from the second point of Remark~\ref{remark:SFT}, we see that an SFT is given by an atlas of charts. Say there are \(n\) charts, and number them \(1\) through \(n\). Now the labeling about each \(g\), with labels from \(A\), must coincide with one of the charts; by recording at each \(g\) the number of a chart it coincides with, we obtain a function \(G \to \left\{1,\dots, n \right\}\). Conversely, any function \(G \to \left\{1,\dots, n\right\}\) gives a labeling \(G \to A\), provided that overlapping charts are compatible. An easy argument shows that if the labelings about neighboring vertices agree, then the entire labeling is consistent.
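To make the notion of ``satisfying the local rules'' concrete, here is a minimal sketch for the easiest group, \(\mathbb{Z}\). The rules below (forbidding two adjacent 1's, the so-called golden-mean shift) are a toy example chosen only for illustration, unrelated to any construction in this paper:
\begin{verbatim}
# Local rules on G = Z as an atlas of radius-1 charts: allowed triples
# (left, centre, right). A finite window of a configuration satisfies the
# rules iff every interior position matches some chart.
def satisfies(window, atlas, R=1):
    return all(tuple(window[i - R:i + R + 1]) in atlas
               for i in range(R, len(window) - R))

# Toy atlas: "no two adjacent 1's" (the golden-mean shift).
atlas = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)
         if (a, b) != (1, 1) and (b, c) != (1, 1)}

print(satisfies([0, 1, 0, 0, 1, 0, 1, 0], atlas))  # True
print(satisfies([0, 1, 1, 0, 0, 0, 1, 0], atlas))  # False
\end{verbatim}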
Here, then, is the tiling: each tile covers exactly one group element \(g\) and carries a label from \(\{1,\dots,n\}\) that corresponds to the chart used for the labeling about \(g\). Let \(T_{g}\) denote the tile that covers \(g\). We can visualize \(T_{g}\) as a ``polygon'' that covers only \(g\), where the boundary of \(T_{g}\) consists of finitely many ``edges'', one for each neighbor of \(g\), and ``edges'' can be glued together if the corresponding charts are compatible. Here ``edge'' means an ``edge'' of the ``polygon'', not an edge of the Cayley graph (in fact every ``edge'' of the glued ``polygons'' crosses exactly one edge of the Cayley graph and {\it vice-versa}). By construction, \(T_{g}\) depends only on the chart at \(g\); thus there are exactly \(n\) possible tile shapes. For obvious reasons this is called a \em nearest neighbor SFT\em. We conclude that for a group \(G\) the concepts of \em SFT \em and \em tiling\em\ are the same. The notions of ``aperiodicity'' (to be defined in the next section) carry over. We will focus on SFTs. \newpage One final remark: SFTs on \(\mathbb{Z}^{2}\) correspond to tilings of \(\mathbb{E}^{2}\) (where by \(\mathbb{E}^{2}\) we mean \(\mathbb{R}^{2}\) with the Euclidean metric), but the converse does not hold. There are tilings of \(\mathbb{E}^{2}\) (for example~\cite{radin}) that are not aligned along a lattice and hence do not induce an SFT on \(\mathbb{Z}^{2}\). Similarly, strongly aperiodic tiles for \(\mathbb{H}^{2}\) were first constructed by Goodman-Strauss in~\cite{MR2142334}, but do not provide an SFT on any group of isometries of \(\mathbb{H}^{2}\); the first SA SFT on a hyperbolic group was constructed by Cohen and Goodman--Strauss~\cite{MR3692905} 12 years later. \section{Formal Definitions} After reviewing the concepts, the formal definitions should be easy to read. For a finitely generated group \(G\) we define: \begin{enumerate} \item The full shift on \(G\) with labels in a finite set \(A\) is: \[ A^{G} := \left\{ \omega : G \to A\right\} \] In other words, these are all possible labelings of the elements of \(G\) by labels from \(A\). \item We endow \(A\) with the discrete topology and \(A^{G}\) with the product topology. This turns \(A^{G}\) into a compact space that carries a right \(G\)-action; the action of \(g \in G\) on \(\omega \in A^{G}\) is defined as follows (here, we evaluate \(\omega \cdot g \in A^{G}\) on \(h \in G\)): \[ (\omega \cdot g) (h) = \omega(gh) \] This action is by homeomorphisms. \item A \em subshift \em is a closed, invariant subset \(\Omega \subset A^{G}\). The action of \(G\) on \(A^{G}\) induces an action of \(G\) by homeomorphisms on the compact space \(\Omega\). Note that \(\Omega = \emptyset\) is a subshift. More interesting examples are given by the closure of the orbit of any \(\omega \in A^{G}\). \item Given a subshift \(\Omega\) we call \(\omega \in \Omega\) a \em configuration\em. Thus any configuration is a function \[ \omega: G \to A \] Consistent with the terminology above, we call the value \(\omega(g)\) the \em label \em at \(g\). \item A subshift is called \em strongly aperiodic (SA) \em if it is not empty and, for every \(\omega \in \Omega\), \(\stab[\omega]\) is trivial (here and below \(\stab[\cdot]\) stands for \em stabilizer\em). \item A non-empty subshift is called \em weakly aperiodic \em if \(\stab[\omega]\) has infinite index in \(G\) for every \(\omega \in \Omega\).
Since we have restricted ourselves to infinite groups, strong aperiodicity implies weak aperiodicity (though at this point it is not clear whether the converse holds). We will focus on strong aperiodicity. \item A \em subshift of finite type (SFT) \em is a subset of the full shift \(A^{G}\) that is defined by local rules, or an atlas of charts, as described above. Although it is not defined as a subshift satisfying an extra condition, it is in fact one: it is not hard to see that an SFT is closed and \(G\)-invariant, and hence a subshift. \item Combining~(5) and~(7), we define {\it a strongly aperiodic subshift of finite type (SA SFT)} to be an SFT in which the stabilizer of every configuration is trivial. \end{enumerate} \section{Origins} The origins of the theory, in the very early 60's, are closely linked to axiomatization and computational theory (as we shall see shortly). H. Wang~\cite{MR1112395} asked whether the {\it Domino Problem} is decidable for tilings of \(\mathbb{E}^{2}\): {\bf The Domino Problem.} Is there an algorithm that decides if a given finite set of tiles in \(\mathbb{E}^{2}\) can be used to tile the plane? Obviously, one can replace \(\mathbb{E}^{2}\) with other spaces of interest, for example \(\mathbb{H}^{2}\), \(\mathbb{E}^{n}\), and \(\mathbb{H}^{n}\). (We will not discuss higher dimensions.) Wang proposed an ``algorithm'': try! If the tiles do not tile \(\mathbb{E}^{2}\), an attempt to tile arbitrarily large balls will eventually fail (and one will certainly detect this). On the other hand, if they do tile, one ``should'' eventually find a domain whose tiling can be repeated periodically; in that case the tiling is a lift of a tiling of a torus. But does this actually work? Perhaps there is a set of tiles that tiles the plane but no torus? This seemed rather unlikely. Wang himself found such tiles, but they are \em seeded\em, which means that they contain a special tile (the seed) that must be used. With this, Wang constructed tiles that emulate an arbitrary Turing machine and tile the plane if and only if the given machine never halts; undecidability of the Domino Problem for seeded tiles then follows from undecidability of the Halting Problem. Figure~\ref{wangrun} indicates a Turing machine and seeded tiles that emulate it. Note that without the seed (in the second row, second column) copies of the blank tiles (in the first row) can always be used to tile \(\mathbb{E}^{2}\) (even a torus), and thus without the seed these tiles give nothing of interest.
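The ``torus'' half of Wang's proposed procedure is easy to make concrete. The sketch below (a toy, with a hypothetical two-tile set; tiles are (north, east, south, west) colour tuples) searches for a valid \(c \times c\) torus tiling by brute force; finding one certifies that the tile set tiles \(\mathbb{E}^{2}\) periodically:
\begin{verbatim}
# Wang's "try!" made concrete: search for a periodic (torus) tiling by
# brute force. A tiling of the c-by-c torus lifts to a periodic tiling
# of the plane.
def tiles_torus(tiles, c):
    grid = [[None] * c for _ in range(c)]
    def fits(t, i, j):
        left, up = grid[i][(j - 1) % c], grid[(i - 1) % c][j]
        return ((left is None or left[1] == t[3]) and
                (up is None or up[2] == t[0]))
    def place(k):
        if k == c * c:   # all placed; check the wrap-around seams
            return all(grid[i][c - 1][1] == grid[i][0][3] for i in range(c)) \
               and all(grid[c - 1][j][2] == grid[0][j][0] for j in range(c))
        i, j = divmod(k, c)
        for t in tiles:
            if fits(t, i, j):
                grid[i][j] = t
                if place(k + 1):
                    return True
                grid[i][j] = None
        return False
    return place(0)

# A hypothetical 2-tile set that tiles the 2x2 torus (colours 0/1).
tiles = [(0, 0, 1, 1), (1, 1, 0, 0)]
print(tiles_torus(tiles, 2))   # True
\end{verbatim}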
\psfrag{R}[ll][cc][.8]{$t=0$} \psfrag{S}[ll][cc][.8]{$t=1$} \psfrag{T}[ll][cc][.8]{$t=2$} \psfrag{U}[ll][cc][.8]{$t=3$} \psfrag{V}[ll][cc][.8]{$t=4$} \psfrag{W}[ll][cc][.8]{$t=5$} \psfrag{X}[ll][cc][.8]{$t=6$} \psfrag{1}[cc][cc][.8]{${\tt 1}$} \psfrag{0}[cc][cc][.8]{${\tt 0}$} \psfrag{A}[cc][cc][.8]{${\tt A}$} \psfrag{B}[cc][cc][.8]{${\tt B}$} \psfrag{C}[cc][cc][.8]{${\tt C}$} \psfrag{A0}[cc][cc][.8]{${\tt A 0}$} \psfrag{B0}[cc][cc][.8]{${\tt B 0}$} \psfrag{C0}[cc][cc][.8]{${\tt C 0}$} \psfrag{A1}[cc][cc][.8]{${\tt A 1}$} \psfrag{B1}[cc][cc][.8]{${\tt B 1}$} \psfrag{C1}[cc][cc][.8]{${\tt C 1}$} \psfrag{pA0}[cc][cc][.6]{$\phi({\tt A 0})$} \psfrag{pB0}[cc][cc][.6]{$\phi({\tt B 0})$} \psfrag{pC0}[cc][cc][.6]{$\phi({\tt C 0})$} \psfrag{pA1}[cc][cc][.6]{$\phi({\tt A 1})$} \psfrag{pB1}[cc][cc][.6]{$\phi({\tt B 1})$} \psfrag{pC1}[cc][cc][.6]{$\phi({\tt C 1})$} \psfrag{H}[cc][cc][.8]{${\tt H}$} \psfrag{!}[cc][cc][.6]{$\phi({\tt A 0})$} \psfrag{@}[cc][cc][.6]{$\phi({\tt B 0})$} \psfrag{#}[cc][cc][.6]{$\phi({\tt C 0})$} \psfrag{^}[cc][cc][.6]{$\phi({\tt A 1})$} \psfrag{&}[cc][cc][.6]{$\phi({\tt B 1})$} \psfrag{*}[cc][cc][.6]{$\phi({\tt C 1})$} \psfrag{,}[cc][cc][.6]{$\phi({\tt A 0})$} \psfrag{.}[cc][cc][.6]{$\phi({\tt B 0})$} \psfrag{/}[cc][cc][.6]{$\phi({\tt C 0})$} \psfrag{<}[cc][cc][.6]{$\phi({\tt A 1})$} \psfrag{>}[cc][cc][.6]{$\phi({\tt B 1})$} \psfrag{?}[cc][cc][.6]{$\phi({\tt C 1})$} \begin{figure} \centerline{{\includegraphics[]{./figures/wang_1}} \hspace{1in}{\includegraphics[]{figures/wang_2}}} \caption{Wang emulates the run of a Turing machine as a tiling problem.} \label{wangrun} \end{figure} A few years later, R. Berger showed how to emulate an arbitrary Turing machine with an {\it unseeded} set of tiles (see Figure~\ref{UnseededTiles}, which shows a much-simplified set of tiles due to R. Robinson). These tiles can be used to show two things: \begin{enumerate} \item The Domino Problem in \(\mathbb{E}^{2}\) is undecidable. \item There exists a set of strongly aperiodic tiles for \(\mathbb{E}^{2}\). This is indicated in Figure~\ref{UnseededTiles}. As suggested by that figure, the tiles assemble to form disjoint squares of arbitrarily large scale. Any element of the stabilizer must either map each of these squares off itself or fix it; since the squares occur at arbitrarily large scales, the stabilizer must be trivial. \end{enumerate} \begin{figure} \centerline{{\includegraphics{./figures/rob}}} \caption{Unseeded tiles.} \label{UnseededTiles} \end{figure} Later, J. Kari~\cite{MR1417578} gave a different and very simple construction that also implies the undecidability of the Domino Problem in \(\mathbb{E}^{2}\). \section{Other Geometries} The existence of SA SFTs has subtle connections to the geometry of the group. For example, D. Cohen~\cite{DBCohen} showed that for finitely presented groups, existence is a quasi-isometry invariant.\footnote{For the definition of quasi-isometry see, for example,~\cite{BridsonHaefliger}.} I find this quite surprising, because a quasi-isometry is very coarse and can destroy any local information, and SFTs are defined using local rules only. Here is a sample of results. Some show the existence of an SA SFT, and some show that the Domino Problem is undecidable (implying the existence of a weakly aperiodic set of tiles, but not the existence of an SA SFT).
More results can be found in~\cite{cohen_goodman-strauss_rieck_2021}. The 2018 survey~\cite{DominoSurvey} contains many results about the Domino Problem. All groups considered in this section are assumed to have decidable word problem and to be finitely presented (although some of the results require only finite generation). In 1997 Mozes~\cite{mozes} constructed strongly aperiodic tilings on any simple Lie group \(\Gamma\) of rank greater than one. The tiles themselves are Voronoi cells of a uniform lattice \(G \leq \Gamma\).\footnote{The Voronoi cell corresponding to \(g \in G\) consists of all \(x \in \Gamma\) satisfying \(d(x,g) \leq d(x,g')\) for all \(g' \in G\).} The Voronoi cells are then decorated with \em combinatorial \em information, given by the pattern of intersection they form with the Voronoi cells of a second, incompatible, uniform lattice. Mozes showed that the incompatibility of the lattices breaks all symmetry and yields a strongly aperiodic tiling of \(\Gamma\); for our purposes, it also gives an SA SFT on \(G\). (This situation is similar to the tiling shown in Figure~\ref{SA_SFT_H2}, although those tiles come from a different construction in \(\mathbb{H}^{2}\).) We remark that uniform lattices in rank-one simple Lie groups are hyperbolic and therefore admit an SA SFT by Theorem~\ref{MainThm}. Thus a uniform lattice in any simple Lie group admits an SA SFT. \begin{figure} \centerline{{\includegraphics[width=4.5in]{./figures/HGSFT_orbit_graph2}}} \caption{Two patterns of ``rectangles'' are shown, each rectangle having some predecessor above and some successors below. In the pattern drawn with dark lines, the number of rectangles doubles from row to row. In the gray pattern, light rectangles (which are all congruent) have one light and one dark rectangle as successors, and dark rectangles (which are all congruent) have one light and two dark successors.} \label{SA_SFT_H2} \end{figure} In~\cite{MR2142334} Goodman-Strauss constructed the first set of strongly aperiodic tiles in \(\mathbb{H}^{2}\). A simpler construction is shown in Figure~\ref{SA_SFT_H2}, where the grid ``rectangles'' are all isometric pentagons, each with five edges of length \(1\), four vertices of angle \(\pi/2\), and one vertex of angle \(\pi\) in the middle of the bottom edge. These overlay an incompatible shaded grid that has two types of tiles, hexagons and pentagons. We note, however, that these tiles do not correspond to a lattice and therefore do not provide an example of an SA SFT. In a recent preprint~\cite{ProdusctOfNonAmenable}, S. Barbieri, M. Sablik, and V. Salo showed that the product of two non-amenable groups with decidable word problem admits an SA SFT. In~2019 S. Barbieri~\cite{MR3964145} showed that the product of three infinite groups, each with decidable word problem, admits an SA SFT. In~\cite{MR3594268} Aubrun and Kari constructed SFTs on the groups \(\mathrm{BS}(1,n)\), which were shown to be SA by J. Esnay and E. Moutot in~\cite{esnay2021weakly}. Furthermore, in a recent paper~\cite{AUBRUN2021}, Aubrun and Kari showed that the Domino Problem is undecidable for all \(\mathrm{BS}(m,n)\). The reader may have noticed that not even one of the groups discussed above is hyperbolic. This changed in 2017, when Cohen and Goodman--Strauss showed that surface groups admit an SA SFT~\cite{MR3692905}. Since then, Aubrun, Barbieri, and Moutot~\cite{DominoForSurfaceGroups} have shown that the Domino Problem for surface groups is undecidable.
\section{Two Necessary Conditions} \label{section:TwoNecessaryConditions} From this point on we consider only finitely presented groups. There are two known necessary conditions for the existence of an SA SFT on a group \(G\): \begin{enumerate} \item \(G\) must have a decidable word problem. \item \(G\) must have at most one end. \end{enumerate} This leads us to ask: \begin{que} Does every finitely presented, one-ended group with decidable word problem admit an SA SFT? \end{que} The first condition, proved by E. Jeandel~\cite{jeandel}, fits well with the philosophy that an SA SFT gives us a way to ``address'' group elements. After all, being able to construct arbitrarily large balls of the Cayley graph is equivalent to having a decidable word problem. Here are a few details regarding the second condition. It was shown by Cohen~\cite{DBCohen} that no group with more than one end admits an SA SFT. We demonstrate this for \(\mathbb{Z}\), which is a well-known and easy case. Suppose we are given a non-empty SFT on \(\mathbb{Z}\) with label set \(A\) and local rules that are defined on sets of (say) \(n\) adjacent integers. Note that an interval of length \(n\) can be labeled in only finitely many ways. It follows that in any configuration there are two disjoint intervals of length \(n\) with identical labelings; say one starts at \(a\) and the other starts at \(b > a\). It is now an easy exercise to show that the labeling of the integers from \(a\) to \(b-1\) can be repeated to produce a periodic configuration.
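This pigeonhole argument can be made effective. A minimal sketch (with toy rules on pairs of adjacent integers, i.e. \(n=2\)): build the graph whose vertices are the allowed \((n-1)\)-blocks, with an edge whenever two blocks overlap in an allowed \(n\)-block; any directed cycle spells out one period of a periodic configuration.
\begin{verbatim}
# From local rules on n adjacent integers, build the overlap graph on
# allowed (n-1)-blocks; any directed cycle yields a periodic configuration.
def periodic_configuration(allowed):
    vertices = {w[:-1] for w in allowed}
    succ = {v: [w[1:] for w in allowed if w[:-1] == v and w[1:] in vertices]
            for v in vertices}
    for start in vertices:
        stack, seen = [(start, [start])], set()
        while stack:
            v, path = stack.pop()
            for u in succ[v]:
                if u == start:
                    return [p[0] for p in path]   # one period
                if u not in seen:
                    seen.add(u)
                    stack.append((u, path + [u]))
    return None                                    # the SFT is empty

# Toy rules on pairs (n = 2): "no two adjacent 1's".
allowed = [(0, 0), (0, 1), (1, 0)]
print(periodic_configuration(allowed))             # e.g. [0], or [1, 0]
\end{verbatim}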
Let us return for a moment to Wang's tiles. Recall that these tiles had a \em seed \em that we were required to use. The following example demonstrates the dramatic effect of seeding: \begin{itemize} \item \(A := \{\mathbf{C},\mathbf{L},\mathbf{R}\}\). The meaning of the labels: \(\mathbf{C}\) is the ``center'' of \(\mathbb{Z}\), and \(\mathbf{L}\) and \(\mathbf{R}\) mean ``left/right of \(\mathbf{C}\)''. \item Local rules (allowable configurations, defined on adjacent pairs): \begin{itemize} \item \(\mathbf{C}\)-\(\mathbf{R}\) \item \(\mathbf{L}\)-\(\mathbf{C}\) \item \(\mathbf{R}\)-\(\mathbf{R}\) \item \(\mathbf{L}\)-\(\mathbf{L}\) \end{itemize} \item The seed is \(\mathbf{C}\). \end{itemize} A moment's reflection shows that any allowable configuration has the form \[ \cdots\textrm{-} \mathbf{L}\textrm{-}\cdots\textrm{-}\mathbf{L} \textrm{-}\mathbf{C}\textrm{-} \mathbf{R}\textrm{-}\cdots\textrm{-}\mathbf{R} \textrm{-}\cdots \] Of course, the stabilizer of such a configuration is trivial, since any non-trivial translation moves \(\mathbf{C}\) to a point labeled \(\mathbf{L}\) or \(\mathbf{R}\). So the seeded SFT is SA. This is in contrast to \em unseeded \em SFTs: we saw above that \(\mathbb{Z}\) admits no SA SFT. Indeed, if we remove the seeding requirement we gain two periodic configurations: \[ \cdots\textrm{-} \mathbf{R}\textrm{-}\cdots\textrm{-}\mathbf{R} \textrm{-}\cdots \ \ \ \text{and} \ \ \ \cdots\textrm{-} \mathbf{L} \textrm{-} \cdots \textrm{-} \mathbf{L} \textrm{-} \cdots \] This concludes the first part, in which we attempted to do three things: explain what an SA SFT is, motivate our interest in them, and survey some of the many results. The reader should be aware that in recent years there has been a flurry of activity, and this discussion is far from complete. \part{How do we do what we do?}\label{part:how} In the second part of this paper we discuss the construction of an SA SFT on a 1-ended hyperbolic group \(G\), which we fix once and for all. We also fix a finite generating set \(S = S^{-1}\). We start with the construction of an SFT that is not required to be strongly aperiodic and has been known for a long time (\(\Omega_{S}\) below); we then enhance it to obtain the desired SA SFT. A finitely presented group \(G\) (with a fixed finite generating set \(S = S^{-1}\)) is called {\it hyperbolic} if there is some \(\delta>0\) so that every geodesic triangle in the Cayley graph of \(G\) is \(\delta\)-slim, that is, any point on one edge is within \(\delta\) of the union of the other two edges. We fix such a \(\delta>0\) for the remainder of this paper. Hyperbolicity, due to Gromov~\cite{gromov}, turns out to be both very natural (satisfied by many groups that appear in applications) and very useful. As a simple example, hyperbolicity implies a very efficient solution to the word problem known as \em Dehn's algorithm\em; surprisingly, this turns out to be equivalent to hyperbolicity. As a far more sophisticated example we mention that the isomorphism problem for hyperbolic groups is decidable (Z. Sela for torsion-free hyperbolic groups~\cite{MR1324134}, later extended to all hyperbolic groups by F. Dahmani and V. Guirardel~\cite{MR2795509}). These are, of course, just examples of what one can do with hyperbolic groups; many other beautiful results are known, and at this point the theory of hyperbolic groups seems very well understood. An excellent reference is~\cite{BridsonHaefliger}. Another very useful reference is~\cite{Word_processing_in_groups}, in particular for the {\it shortlex FSA}, which will be used extensively below. \section{Shortlex shellings} \label{section:Shortlex} In this section we explain how to construct a certain SFT, called the \em shortlex SFT \em and denoted \(\Omega_{S}\), on a hyperbolic group \(G\); this SFT will serve as the backbone for our main construction. The group \(G\) need not be one-ended here, and the resulting SFT is not necessarily strongly aperiodic (in fact, if \(G\) is infinite this SFT is necessarily not strongly aperiodic). The ideas presented here are similar to ideas that date back to Gromov's paper~\cite{gromov}; see Coornaert and Papadopoulos~\cite{MR1222644,MR1878587} for a detailed treatment. A key to our construction is {\it Cannon's shortlex FSA}. Fixing an arbitrary order on \(S\) induces a lexicographic order on the finite words \((S \cup S^{-1})^{*}\). A path in the Cayley graph is called \em shortlex \em if and only if it is a geodesic and is first in the lexicographic order among all geodesics with the given endpoints. It is clear that for each element \(g \in G\) there is a unique shortlex geodesic from the identity \(e\) to \(g\) (and this is true for any finitely generated group). It is far less obvious how to calculate shortlex representatives. In fact, shortlex representatives in a group \(G\) can be calculated if and only if \(G\) has a decidable word problem; see~\cite{book}. A hyperbolic group \(G\) admits an FSA,\footnote{For those unfamiliar, an FSA (finite state automaton) is a Turing machine without a tape; the ``memory'' is contained in its finitely many states.} described in~\cite{book}, that accepts a word if and only if it is a shortlex representative.
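As a toy illustration of shortlex representatives (not of the FSA itself), the sketch below computes them by breadth-first search in a group whose word problem is trivially decidable; we take \(G=\mathbb{Z}^{2}\) purely for illustration, with the generators ordered \(a < a^{-1} < b < b^{-1}\):
\begin{verbatim}
# Shortlex representatives by BFS: processing elements in lexicographic
# order of their representatives, each element is first reached along its
# shortlex geodesic from e. Capitals denote inverses (A = a^{-1}, etc.).
from collections import deque

GENS = [("a", (1, 0)), ("A", (-1, 0)), ("b", (0, 1)), ("B", (0, -1))]

def shortlex(radius):
    rep, queue = {(0, 0): ""}, deque([(0, 0)])
    while queue:
        g = queue.popleft()
        if len(rep[g]) == radius:
            continue
        for letter, (dx, dy) in GENS:   # generators in shortlex order
            h = (g[0] + dx, g[1] + dy)
            if h not in rep:
                rep[h] = rep[g] + letter
                queue.append(h)
    return rep

rep = shortlex(3)
print(rep[(1, 1)], rep[(-2, 0)], rep[(0, -1)])   # -> ab AA B
\end{verbatim}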
Denote the states of the FSA by \[ \left\{ s_{1},\dots,s_{n} \right\} \] We can now label each group element \(g\) with the following three labels: \[ \left( P(g), \ \mathrm{dist}(e,g), \ s_{i(g)} \right) \] defined as follows: \begin{enumerate} \item \(P(g)\) is the generator that points towards \(e\) along the shortlex geodesic to \(g\); in other words, the product \(gP(g)\) is the last group element on the shortlex geodesic before arriving at \(g\). \(P(e)\) is not defined. \(P\) is called the \em parent function\em. \item \(\mathrm{dist}(e,g)\) is the distance from \(e\) to \(g\). \item \(s_{i(g)}\) is the state of the FSA at \(g\) (so \(i(g) \in \{1,\dots,n\}\)). \end{enumerate} Of course, \(\mathrm{dist}(e,g)\) cannot be used as a label for an SFT, since it takes infinitely many values. We replace it with the function that records the \em difference \em between the distances to the origin of \(g\) and of its neighbors; that is, for each \(a \in S\), we define \[ \textrm{\dh}(g)(a) := \mathrm{dist}(e,g) - \mathrm{dist}(e,ga) \] Since \(g\) and \(ga\) are neighbors, this function takes only the values \(\pm 1\) and \(0\). Thus \(\textrm{\dh}\) is a function \[ \textrm{\dh}: G \to \{-1,0,1\}^{S} \] This completes the labeling of each group element. This labeling is \em not \em a configuration in any SFT, but rather a blueprint for constructing \(\Omega_{S}\), as we now explain. \bigskip\noindent We now define a full shift on \(G\) with labels \begin{equation} \label{labels_shellings} (\textrm{\dh}(g) ,s_{i(g)},P(g)) \in \{-1,0,1\}^{S} \times \{s_1,\dots,s_{n}\} \times S \end{equation} Let \(\Omega_{S}\) be the subshift consisting of all configurations satisfying the following condition: \begin{center} The labeling of any ball, of any radius, \\ coincides with the labeling above on some ball not containing \(e\). \end{center} It is not clear that: \begin{enumerate} \item This is an SFT. The requirement above refers to ``balls of any radius'', and we must ensure that it can be enforced by considering balls of a fixed radius. This is one of the many places where hyperbolicity is used. \item This SFT is not empty. \end{enumerate} The second point is actually not too hard: since we can label balls of arbitrary radius, a diagonalization argument shows that there is a labeling of all of \(G\); that is, the SFT is not empty. \begin{remark} \label{remark_notSA} It is known that \(\Omega_{S}\) is necessarily \em not \em SA. More needs to be done. \end{remark} \begin{remark} We give an example of a parent function \(P\) defined on \(F_{2}\), the free group on the generators \(a\) and \(b\). The Cayley graph of \(F_{2}\), a regular tree of valence 4, is shown in Figure~\ref{figure:papas}. At each vertex there are two gray edges (corresponding to \(a\) and \(a^{-1}\)) and two black edges (corresponding to \(b\) and \(b^{-1}\)). Each edge is marked with an arrow that points from \(g\) to its parent. As the figure suggests, the parent of \(g\) can be \em any \em neighbor of \(g\); the only rule is that at any \(g\) we see exactly one arrow pointing ``out'' (towards the parent) and three pointing ``in''. Picking any \(g \in F_{2}\) and following the arrows, we arrive at a point of \(\partial_{\infty} F_{2}\), and this point is independent of the choice of \(g\). \begin{figure} \centerline{{\includegraphics[width=4.5in]{./figures/FigOfTheParentFunction2}}} \caption{The function \(P\)} \label{figure:papas} \end{figure} \end{remark}
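For the free group the blueprint labels are easy to realize concretely. A minimal sketch (our own toy realization, representing elements by reduced words, with the parent pointing one step towards \(e\)):
\begin{verbatim}
# The blueprint labels on F_2: elements are reduced words over
# {a, A, b, B} (capitals are inverses). Deleting the last letter moves
# one step towards the identity e, so P(g) is the inverse of that letter.
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def mult(word, letter):
    """Right multiplication with free reduction."""
    if word and word[-1] == INV[letter]:
        return word[:-1]
    return word + letter

def parent_generator(word):
    """P(g): the generator pointing from g towards e (undefined at e)."""
    assert word, "P(e) is not defined"
    return INV[word[-1]]

def dh(word):
    """The distance-difference label: a -> dist(e,g) - dist(e,ga)."""
    return {x: len(word) - len(mult(word, x)) for x in INV}

g = "abA"
print(parent_generator(g), dh(g))
# -> 'a' {'a': 1, 'A': -1, 'b': -1, 'B': -1}: in a tree, exactly one
#    neighbor (the parent) is closer to e, and the value 0 never occurs.
\end{verbatim}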
\section{Relation to the Boundary and Weak Aperiodicity} \label{section:boundary_WA} We assume that the reader is familiar with \(\partial_{\infty} G\), the \em boundary at infinity \em of \(G\). Given any configuration \(\omega \in \Omega_{S}\) and any \(g \in G\), we construct the path \[ g, Pg, P^{2}g,\dots \] It follows from the definition of \(P\) that this path is a geodesic ray and hence defines a point at infinity. Hyperbolicity implies that this point is independent of \(g\), and we denote it by \(\xi\) (of course, \(\xi(\omega)\) depends on \(\omega\)). The association \(\omega \mapsto \xi\) defines a function \[ \Omega_{S} \to \partial_{\infty} G \] It is clear that this function is compatible with the actions of \(G\) on \(\Omega_{S}\) and on \(\partial_{\infty} G\). This means that the action of \(G\) on \(\partial_{\infty} G\) is a \em factor \em of an SFT, which is what motivated Gromov to study these SFTs. We would like to exploit the function \(\Omega_{S} \to \partial_{\infty} G\) differently: we will use it to show that \(\Omega_{S}\) is weakly aperiodic. The map \(\Omega_{S} \to \partial_{\infty} G\) shows that for any \(\omega \in \Omega_{S}\) we have \[ \stab[\omega] < \stab[\xi(\omega)] \] It is known that \(\stab[\xi]\) is virtually cyclic for any \(\xi \in \partial_{\infty} G\).\footnote{To see that \(\stab[\xi]\) is virtually cyclic use the following facts: (1) every non-torsion element of \(G\) fixes exactly two points on the boundary; (2) if two such elements share one fixed point then they share both fixed points; and (3) the elements that fix the same two points form a virtually cyclic group (this allows for a finite group, as finite groups are virtually trivial). See Sections~8.1 and~8.2 of Gromov~\cite{gromov} and the proof of Proposition~III.\(\Gamma\).3.20 (page~467) of~\cite{BridsonHaefliger}.} We conclude that \(\stab[\omega]\) is virtually cyclic (for any \(\omega \in \Omega_{S}\)). By assumption \(G\) is one-ended, and hence \(G\) is not virtually cyclic. This shows that \(\stab[\omega]\) has infinite index; in other words, \(\Omega_{S}\) is weakly aperiodic. \section{Horospheres} \label{section:horospheres} Fix a configuration \(\omega \in \Omega_{S}\). We may ``integrate'' \dh\ (notation as in~\eqref{labels_shellings} above) to get a function \(h:G \to \mathbb{Z}\), defined up to an additive constant. The level sets of \(h\) are called {\it horospheres}. Any \(g \in \stab[\omega]\) preserves \dh, and hence \(g\) preserves \(h\) up to an additive constant; that is, there exists an integer \(C\) so that for any \(x \in G\) we have \begin{equation} h(g \cdot x) = h(x) + C \end{equation} If \(g\) is a torsion element then \(C=0\). Conversely, if \(g\) has infinite order then \(C \neq 0\) (Lemma~9.1 of~\cite{cohen_goodman-strauss_rieck_2021}). This simple observation will prove quite useful. Since a hyperbolic group has only finitely many conjugacy classes of torsion elements, getting rid of the torsion in the stabilizers is very easy (Proposition~3.3 of~\cite{cohen_goodman-strauss_rieck_2021}). Thus we may assume that \(g\) has infinite order and \(C \neq 0\).
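The ``integration'' is concrete: summing \dh\ along the letters of any path determines \(h\) from its value at the start, and the result is independent of the path taken. A toy check, reusing \texttt{mult} and \texttt{dh} from the free-group sketch above (there \(h\) based at \(e\) is just the word length; in a genuine configuration there is no \(e\), which is why \(h\) is only defined up to a constant):
\begin{verbatim}
# h(ga) = h(g) - dh(g)(a): integrate dh along a path of generators.
def integrate(path, h0=0):
    g, h = "", h0
    for letter in path:
        h -= dh(g)[letter]
        g = mult(g, letter)
    return g, h

print(integrate("ab"))        # ('ab', 2)
print(integrate("aBbb"))      # ('ab', 2): a detour, same endpoint, same h
\end{verbatim}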
\section{The Divergence Graph} Fix \(\omega \in \Omega_{S}\). As we discussed in Section~\ref{section:boundary_WA}, for every \(g \in G\) the sequence \begin{equation} \label{past_geodesic} g, Pg, P^{2}g,\dots \end{equation} converges to \(\xi(\omega) \in \partial_{\infty} G\). For convenience we describe \(P\) as moving ``down'' towards \(\xi\). We now move up from \(g \in G\): \[ g, P^{-1}g, P^{-2}g,\dots \] This defines a sequence of sets that move away from \(g\); unlike the downward path, which limits onto \(\xi\), the limit of the sets \(P^{-n}g\) is more complex. It \em does \em depend on \(g\), and is often an uncountable subset of \(\partial_{\infty} G\). The union of these sets is called the {\it future cone} of \(g\), namely \[ P^{-*}g = \bigcup_{n=0}^{\infty} P^{-n}g \] \begin{remark}[growth rate] \label{remark:growth_rate} The hyperbolic group \(G\) has a well-defined growth rate, which we denote by \(\lambda > 0\). Given a configuration \(\omega \in \Omega_{S}\), the future cone of each \(g \in G\) is completely determined by the state of the FSA at \(g\), denoted \(s_{i(g)}\) in~\eqref{labels_shellings} above (this was analyzed in detail in~\cite{MR4015648}). In particular, \(s_{i(g)}\) determines the growth rate of the future cone, which we call the \em growth rate of the state\em. In what follows we only consider states whose future cone has growth rate \(\lambda\) (for a precise discussion see Definition~6.3 of~\cite{cohen_goodman-strauss_rieck_2021}). We denote the set of all elements of \(G\) satisfying this condition by \(G^{+}\). Note that the future cone of each \(g \in G^{+}\) accumulates on an uncountable subset of \(\partial_{\infty} G\). By Proposition~6.5 of~\cite{cohen_goodman-strauss_rieck_2021}, any \(2\delta\)-ball in any configuration \(\omega \in \Omega_{S}\) contains a point of \(G^{+}\). \end{remark} Back to our discussion: fix a configuration \(\omega \in \Omega_{S}\) and let \(h\) be as in Section~\ref{section:horospheres}. For \(i \in \mathbb{Z}\) set \(H_{i} := h^{-1}(i)\) and \(H^{+}_{i} := G^{+} \cap H_{i}\). On \(H^{+}_{i}\) we define the {\it divergence graph} as follows: \begin{enumerate} \item[\bf v] The vertices of the divergence graph are the elements of \(H^{+}_{i}\). \item[\bf e] Two vertices \(g_{1}, g_{2} \in H^{+}_{i}\) are connected by an edge if and only if their futures remain a bounded distance apart; in other words, for some \(K>0\), and for each integer \(n \geq 1\), there are \(v_{1} \in P^{-n}g_{1}\) and \(v_{2} \in P^{-n}g_{2}\) with \(\mathrm{dist}(v_{1},v_{2}) < K\). \end{enumerate} The following holds (see Lemma~7.4 of~\cite{cohen_goodman-strauss_rieck_2021} and its proof): \begin{itemize} \item The union of the limit sets of the future cones of all of \(H^{+}_{i}\) is \(\partial_{\infty} G - \{\xi\}\). \item \(g_{1}, g_{2} \in H^{+}_{i}\) are connected by a divergence graph edge if and only if the limit sets of their future cones intersect. \item We may therefore describe the divergence graph as a discrete approximation of \(\partial_{\infty} G\). \item Most importantly, \em the divergence graph is connected\em. This reflects the fact that \(\partial_{\infty} G - \{\xi\}\) is connected (the Cut Point Conjecture, proved by Swarup~\cite{MR1412948}). \end{itemize} The plan is now as follows. Any infinite-order element \(\phi \in \stab[\omega]\) translates the levels of \(h\) by some \(C \neq 0\), as explained in Section~\ref{section:horospheres}.
It is our goal to enhance \(\Omega_{S}\) by associating an integer \(\Delta(g)\) to each \(g \in G^{+}\) in a way that cannot be periodic; the new SFT will then no longer have \(\phi\) in its stabilizer. We enhance the labels of \(\Omega_{S}\) (compare with the labels presented in~\eqref{labels_shellings}): \begin{equation} \label{labels_populated_shellings} (\textrm{\dh}(g) ,s_{i(g)},P(g),\wp(g),\Delta(g),m(g)) \end{equation} The enhanced SFT is called the \em populated shelling \em and is denoted \(\Omega_{P}\). The name comes from the fact that \(\wp\) defines a ``population'' of ``villagers'' on each ``village'' \(g \in G\) (or, if the reader prefers, on each village \(v \in G^{+}\), since the population of any \(g \in G - G^{+}\) is zero anyway). \section{\(\wp\) and \(m\)} \label{section:PandM} The number of ``villagers'' defines the function \(\wp\), with \(\wp(g)\) being the population at \(g\): \[ \wp: G^{+} \to \mathbb{Z}_{\geq 1} \] This function is required to be bounded, with the population bound \(N\) fixed in advance. Clearly, we need something to relate \(\wp\) to the geometry of the group, for otherwise the population values would be arbitrary numbers. This is \(m\), which stands for \em matching\em; more precisely, \em parent-child matching\em. Each ``villager'' in \(v \in H^{+}_{i}\) has \(q^{\Delta_{i}}\) children (\(q^{\Delta_{i}}\) will be described in Section~\ref{section:Delta}; for now just take it to be ``some number''). We list the children as follows: \[ (v,j,k) \] where \(v\) is the village, \(1 \leq j \leq \wp(v)\) is the villager, and \(1 \leq k \leq q^{\Delta_{i}}\) is the child. The function \(m\) ``places'' this child as a villager in \(H^{+}_{i+1}\): \[ m(v,j,k) = (u,l) \] Here \(u \in H^{+}_{i+1}\) is a village and \(1 \leq l \leq \wp(u)\) is a villager. The geometry of the group comes into play when we force the child to be placed not too far from the parent. The precise condition is the following: the child may take up to 3 steps in the divergence graph of \(H^{+}_{i}\), and then move one step up. A succinct description is this: the child of a villager in \(v \in H^{+}_{i}\) is a villager in \(u \in H^{+}_{i+1}\) with \[ \mathrm{DivDist}(v,Pu) \leq 3 \] where \(\mathrm{DivDist}\) denotes the divergence-graph distance. \begin{remark} The reader probably finds the constant 3 rather arbitrary (not to say mysterious). It comes from an application of a theorem in graph theory, which states that if a graph is connected, then its cube admits a Hamiltonian path between any two vertices.\footnote{The \em cube \em of a graph is obtained by adding an edge between any two vertices of distance at most 3.} This is then used to construct a ``translation-like \(\mathbb{Z}\) action'' (in the sense of Seward~\cite{MR3158775}) whose {\it defect} is 3, and this is the origin of the constant. \end{remark}
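The graph-theoretic input is easy to test on small examples. The sketch below builds the cube of a small connected graph and brute-forces a Hamiltonian path between every pair of vertices (a toy check of the theorem on one instance, not part of the construction):
\begin{verbatim}
# The cube of a connected graph is Hamiltonian-connected: between every
# pair of vertices there is a Hamiltonian path. Brute-force check on a
# path graph, whose cube is far from complete.
from itertools import permutations

def cube(n, edges):
    dist = [[n + 1] * n for _ in range(n)]
    for v in range(n):
        dist[v][v] = 0
    for u, v in edges:
        dist[u][v] = dist[v][u] = 1
    for k in range(n):                      # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and dist[i][j] <= 3}

def ham_path(n, cube_edges, s, t):
    for perm in permutations(v for v in range(n) if v not in (s, t)):
        path = (s,) + perm + (t,)
        if all((path[i], path[i + 1]) in cube_edges for i in range(n - 1)):
            return path
    return None

n = 7
E3 = cube(n, [(i, i + 1) for i in range(n - 1)])   # path 0-1-...-6
assert all(ham_path(n, E3, s, t) for s in range(n) for t in range(n) if s != t)
print("cube of the path on", n, "vertices is Hamiltonian-connected")
\end{verbatim}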
\section{Straying Away and Coming Back Home} It would be quite natural to worry that we are being too loose with the geometry of the group here. We populate the group with the goal of considering the population after arbitrarily many generations, and descendants may stray 3 divergence-graph steps in each generation. Indeed, after many generations, a descendant of the villager \((v,j)\) may be in a village which is very far from the future cone \(P^{-*}v\). It may be worth emphasizing that we are using two distinct ``futures'' here: the future cone \(P^{-*}v\), which is the collection of \em villages \em \(u\) for which \[ v = P^{n}u \] for some \(n\); and, on the other hand, the \em villagers \em that are descendants of \((v,j)\), who may stray away from \(P^{-*}v\). Hyperbolic geometry comes to the rescue. As described above, let \((u,l)\) be a descendant of \((v,j)\) after, say, \(n\) generations. We write \(v=v_{0},v_1, \dots, v_{n}=u\) for the villages, so that \((u,l)\) is a descendant of \((v_{n-1},l_{n-1})\) (for some \(l_{n-1}\)), \((v_{n-1},l_{n-1})\) is a descendant of \((v_{n-2},l_{n-2})\) (for some \(l_{n-2}\)), and so on. It is not hard to show that an edge of a divergence graph connects vertices of Cayley distance at most \(2\delta\), so the apple doesn't fall too far from the tree: \[ \mathrm{CayDist}(v_{i},v_{i+1}) \leq 6\delta + 1 \] On the other hand, because \(v_{i} \in H^{+}_{i}\) and \(v_{j} \in H^{+}_{j}\), we get \[ \mathrm{CayDist}(v_{i},v_{j}) \geq |i-j| \] This produces a \em quasi-geodesic \em that can be compared with the geodesic \[ v_{n},Pv_{n},\dots,P^{n}v_{n} \] It is a feature of hyperbolic geometry that quasi-geodesics remain a bounded distance away from geodesics, which means that, for some fixed \(R>0\), we have: \[ \mathrm{CayDist}(v_{0},P^{n}v_{n}) \leq R \] A precise statement is given in Lemma~9.2 of~\cite{cohen_goodman-strauss_rieck_2021}. This allows sufficient control over the population growth, since it shows that all the descendants of villagers in a finite set \(F \subset H^{+}_{i}\) live in the future cone of \(\mathcal{N}(F)\), where \(\mathcal{N}(F)\) is the 3-neighborhood of \(F\) in the divergence graph on \(H^{+}_{i}\). \section{\(\Delta\)} \label{section:Delta} We finally describe \(\Delta\), focusing on \(G^{+}\); \(\Delta\) is extended to \(G\) by setting it to be zero on \(G - G^{+}\). It is \(\Delta\) that is ultimately responsible for aperiodicity. For each \(g \in G^{+}\), the growth rate of the population at \(g\) is controlled by \(\Delta(g)\): to be precise, every villager at \(g\) has exactly \(q^{\Delta(g)}\) children (with \(q\) to be chosen momentarily). We intentionally use a number \(q\) which is \em not \em compatible with \(\lambda\): \(q\) is an integer, which we may take to be either 2 or 3, such that \begin{equation} \frac{\log(q)}{\log(\lambda)} \not\in \mathbb{Q} \end{equation} The function \[ \Delta: G^{+} \to \left\{ \left\lfloor \log_{q}(\lambda) \right\rfloor, \left \lceil \log_{q}(\lambda)\right \rceil \right\} \] is required to satisfy the following conditions: \begin{enumerate} \item (And this is key.) \(\Delta\) is constant along the levels of \(h\); that is, for each integer \(i\), \(\Delta|_{H^{+}_{i}}\) is constant. We denote this value \(\Delta_{i}\). This defines the sequence \[ \left(\Delta_{i}\right)_{i \in \mathbb{Z}} \] \item The sequence \(\Delta_{i}\) approximates \(\lambda\) (in a sense made precise in Corollary~9.3 of~\cite{cohen_goodman-strauss_rieck_2021}). \item The conditions above, together with the incompatibility of \(\lambda\) and \(q\), guarantee that the sequence \(\Delta_{i}\) is not periodic (Corollary~9.4 of~\cite{cohen_goodman-strauss_rieck_2021}). \end{enumerate} We saw in Section~\ref{section:horospheres} that any infinite-order element in the stabilizer of a configuration \(\omega \in \Omega_{S}\) must translate the levels \(\{h=i\}\) by a nonzero amount. Once we enhance \(\Omega_{S}\) by populating \(G^{+}\), the function \(\Delta\) is not invariant under such an element. This shows that there is no infinite-order element in the stabilizer of any configuration in \(\Omega_{P}\), as desired.
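The mechanism behind condition (3) can be illustrated numerically. In the toy sketch below, \(\alpha\) is a concrete irrational standing in for \(\log_{q}(\lambda)\) (we take \(\alpha=\log_{2}3\) purely for illustration). A sequence taking only the values \(\lfloor\alpha\rfloor\) and \(\lceil\alpha\rceil\) whose partial sums track \(i\alpha\) cannot be periodic: averaging over one period \(p\) would give \(\alpha=(\text{integer})/p\), contradicting irrationality.
\begin{verbatim}
import math

# A Beatty-type sequence: Delta_i = floor((i+1)a) - floor(i*a) takes only
# the values floor(a) and ceil(a), and its partial sums track i*a.
alpha = math.log(3) / math.log(2)            # irrational
Delta = [math.floor((i + 1) * alpha) - math.floor(i * alpha)
         for i in range(1000)]

assert set(Delta) == {math.floor(alpha), math.ceil(alpha)}   # {1, 2}
for p in range(1, 51):                       # check small periods fail
    assert any(Delta[i] != Delta[i + p] for i in range(len(Delta) - p))
print("no period up to 50 in the first 1000 terms")
\end{verbatim}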
\begin{remark}[the role of the (necessary!) assumption of one-endedness of \(G\)] The question of the existence of an SA SFT is irrelevant for zero-ended groups (that is, finite groups), where the answer is always ``yes'', as well as for two-ended groups (that is, virtually-\(\mathbb{Z}\) groups), where the answer is always ``no''. So we ignore these groups in this remark and consider only one-ended and infinitely-ended groups. Our main result is that an SA SFT exists \em only \em for the former. This raises the question: where exactly was the one-endedness assumption used? The only place is in {\it imposing that \(\Delta\) be constant along each \(H^{+}_{i}\)}. This must be enforced via local rules, as part of the SFT, and what allows us to do so is the \em connectivity of the divergence graph\em. This is the one and only place where the assumption is used, as connectivity of the divergence graph is equivalent to \(G\) being one-ended. Swarup's resolution of the Cut Point Conjecture~\cite{MR1412948} plays a key role here; it states that \(\partial_{\infty} G \setminus \{\xi\}\) is connected (for any \(\xi \in \partial_{\infty} G\)). \end{remark} \section{One Last Issue} It is not the goal of this paper to give a complete proof of Theorem~\ref{MainThm}; our goal is to explain some of the elements that go into the proof in a way that will facilitate its reading. However, it is hard to ignore the fact that we have not addressed the following question: \begin{center} Does a populated shelling even exist? \end{center} This should not be taken lightly, as it is quite possible that we have defined the empty subshift here; in fact, if the population bound is too small this is probably the case. Proposition~8.5 of~\cite{cohen_goodman-strauss_rieck_2021} shows that the \(\Omega_{P}\) described above is indeed an SFT, and Proposition~9.5 shows that no configuration has an infinite-order element in its stabilizer, but neither addresses the existence of a configuration. Much of the work in~\cite{cohen_goodman-strauss_rieck_2021} is devoted to Proposition~8.12, showing that (for an appropriately chosen population bound) \(\Omega_{P}\) is indeed not empty. This is the most technical and longest part of the proof, and here is my attempt at explaining the idea. We proceed in three steps: \begin{enumerate} \item[\bf Level:] The first step is populating a single level \(H^{+}_{i}\). To discuss this we need to dig a little deeper into the maximal-growth states of the FSA.\footnote{ There are two distinct notions of growth rate at play: the growth rate of the population, controlled by \(\Delta_{i}\), and the growth rate of the group elements, which is \(\lambda\); here we discuss the latter. Having two incompatible growth rates is a little confusing, but it is the very thing that leads to strong aperiodicity.} In Remark~\ref{remark:growth_rate} we explained that the future cones of some states must have growth rate \(\lambda\) (the growth rate of the group itself) and denoted the set of vertices carrying such states by \(G^{+}\). From that point on we concentrated on \(G^{+}\) and on \(H^{+}_{i} := G^{+} \cap H_{i}\). In fact more is true: although the transitions of the FSA need not satisfy the assumptions of the Perron--Frobenius Theorem, it is possible to associate to them a measure that behaves just like a Perron--Frobenius eigenvector; see~\cite{MR4015648} or Section~6 of~\cite{cohen_goodman-strauss_rieck_2021}.
Now, in the first step, we populate a single level \(H^{+}_{i}\) so that the population of \em any \em finite subset approximates its total measure (it is not possible to make this exact; the ratio of the measures of distinct states is usually irrational). This is Lemma~8.7 of~\cite{cohen_goodman-strauss_rieck_2021} (for one level). The average ratio of population to total measure (called the population density) can be chosen freely within a reasonable range (Definition~8.6 and Lemma~8.7). \item[\bf Levels:] We then apply this to all of \(G\) (one level at a time, and still without matching the levels). This is given in Corollary~8.9 of~\cite{cohen_goodman-strauss_rieck_2021}. In order to be able to match parents and children, we ensure that the sequence of densities behaves well (it grows when small and shrinks when big). The precise description is given in Definition~8.10. \item[\bf Matching:] Having populated the group as described above, we apply Hall's matching theorem to prove the existence of a matching function \(m\) as required. This is Proposition~8.11 of~\cite{cohen_goodman-strauss_rieck_2021}. \item[\bf Ta da!] That's all, folks. \end{enumerate}
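To illustrate the final step: once each child has a small set of admissible slots (villagers at divergence-graph distance at most 3, one level up), finding \(m\) is a bipartite matching problem, and Hall's condition is exactly what the density control of the previous steps is designed to guarantee. A toy sketch with a hypothetical admissibility relation, using Kuhn's augmenting-path algorithm:
\begin{verbatim}
# Toy version of the matching step: children on level i are assigned to
# distinct villager slots on level i+1, each child having a small set of
# admissible slots. Kuhn's augmenting-path algorithm finds a matching
# saturating the children whenever Hall's condition holds.
def kuhn(admissible, n_slots):
    match = [None] * n_slots              # slot -> child
    def try_child(c, visited):
        for s in admissible[c]:
            if s not in visited:
                visited.add(s)
                if match[s] is None or try_child(match[s], visited):
                    match[s] = c
                    return True
        return False
    return all(try_child(c, set()) for c in range(len(admissible)))

# Hypothetical instance: 4 children, 5 slots.
admissible = [[0, 1], [1, 2], [0, 3], [3, 4]]
print(kuhn(admissible, 5))    # True: every child gets a distinct slot
\end{verbatim}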
{ "timestamp": "2022-02-02T02:10:32", "yymm": "2202", "arxiv_id": "2202.00212", "language": "en", "url": "https://arxiv.org/abs/2202.00212" }
\section{Introduction} Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version. \subsection{Language} All manuscripts must be in English. \subsection{Dual submission} By submitting a manuscript to 3DV, the authors assert that it has not been previously published in substantially similar form. Furthermore, no paper which contains significant overlap with the contributions of this paper either has been or will be submitted during the 3DV 2021 review period to {\bf either a journal} or any conference or any workshop. {\bf Papers violating this condition will be rejected.} If there are papers that may appear to the reviewers to violate this condition, then it is your responsibility to: (1)~cite these papers (preserving anonymity as described in Section 1.6 below), (2)~argue in the body of your paper why your 3DV paper is non-trivially different from these concurrent submissions, and (3)~include anonymized versions of those papers in the supplemental material. \subsection{Paper length} 3DV papers should be no longer than 8 pages, excluding references. The references section will not be included in the page count, and there is no limit on the length of the references section. Overlength papers will simply not be reviewed. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven. \subsection{The ruler} The \LaTeX\ style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-\LaTeX\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the \verb'\threedvfinalcopy' command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (e.g.\ this line is $097.5$), although in most cases one would expect that the approximate location will be adequate. \subsection{Mathematics} Please number all of your sections and displayed equations. It is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). 
All authors will benefit from reading Mermin's description of how to write mathematics. \subsection{Blind review} Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work---in fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for techreports.) Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith, it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper just asking to be rejected: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an acceptable paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith \etal [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors12final} as additional material and cite it as \begin{quote} [1] Authors. ``The frobnicatable foo filter'', F\&G 2021 Submission ID 324, Supplied as additional material {\tt fg324.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors12bfinal}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the 3DV70 audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \etal. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. 
Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... \end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus \etal, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. FAQ: Are acknowledgments OK? No. Leave them for the final copy. \begin{figure}[t] \begin{center} \fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}} \end{center} \caption{Example of caption. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Miscellaneous} \noindent Compare the following:\\ \begin{tabular}{ll} \verb'$conf_a$' & $conf_a$ \\ \verb'$\mathit{conf}_a$' & $\mathit{conf}_a$ \end{tabular}\\ See The \TeX book, p165. The space after \eg, meaning ``for example'', should not be a sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided \verb'\eg' macro takes care of this. When citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.) However, use it only when there are three or more authors. Thus, the following is correct: `` Frobnication has been trendy lately. It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.'' This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...'' because reference~\cite{Alpher03} has just two authors. If you use the \verb'\etal' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \etal. \begin{figure*} \begin{center} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \end{center} \caption{Example of a short caption, which should be centered.} \label{fig:short} \end{figure*} \section{Formatting your paper} All text must be in a two-column format. The total allowable width of the text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1.0 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page. \subsection{Margins and page numbering} All printed material, including text, illustrations, and charts, must be kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm) high. 
Page numbers should appear in the footer, centered, .75 inches from the bottom of the page, and should start at your assigned page number rather than the 4321 in the example. To do this, find the line (around line 23) \begin{verbatim} \setcounter{page}{4321} \end{verbatim} where the number 4321 is your assigned starting page. Make sure the first page is numbered by commenting out the command that leaves the first page empty (around line 47). \subsection{Type-style and fonts} Wherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times to which you have access. MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title. AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines. The ABSTRACT and MAIN TEXT are to be in a two-column format. MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm). Make sure your text is fully justified---that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 9-point Roman type as in Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centered. \noindent Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings. FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after. SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line. \subsection{Footnotes} Please use footnotes\footnote {This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. \subsection{References} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors12final}. Where appropriate, include the name(s) of editors of referenced books. \begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & Frobnability \\ \hline\hline Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \hline \end{tabular} \end{center} \caption{Results. 
Ours is better.} \end{table} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage[dvips]{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.eps} \end{verbatim} } \subsection{Color} Color is valuable, and will be visible to readers of the electronic copy. However, ensure that, when printed on a monochrome printer, no important information is lost by the conversion to grayscale. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. {\small \bibliographystyle{ieee_fullname} \section{Introduction} 3D object detection is an essential and fundamental problem in many robotics applications, such as autonomous driving~\cite{chen2020pano3d}, object manipulation~\cite{weng2020multi}, and augmented reality~\cite{10.1007/978-3-319-46484-8_10}. In recent years, many deep learning-based approaches for point cloud-based 3D object detection~\cite{yan2018second, lang2019pointpillars, zhu2019class} have emerged and achieved high performance on various benchmark datasets~\cite{sun2020scalability,caesar2020nuscenes,Geiger2013IJRR,360LiDARTracking_ICRA_2019}. Despite the impressive performance, most of the existing deep learning-based approaches for 3D object detection on point clouds are strongly supervised and require the availability of a large amount of well-annotated 3D data that is often time-consuming and expensive to collect. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/teaser.jpg} \caption{\textbf{Semi-supervised 3D Object Detection via Temporal Graph Neural Networks.} Our method utilizes the rich spatiotemporal information from point cloud videos to perform semi-supervised learning to train a single frame object detector. This detector can be used to generate better candidates, which will in turn lead to better pseudo labels. } \label{fig:teaser} \end{figure} Semi-supervised learning~\cite{teichman2012tracking} is a promising alternative to supervised learning for point cloud-based 3D object detection. This is because semi-supervised learning requires only a limited amount of labeled data, instead relying on large amounts of unlabeled data to improve performance. The challenge with semi-supervised learning is to determine how to make use of the unlabeled data to improve the performance of the detector. In most applications, point clouds are recorded over time as a data stream. A point cloud video contains richer spatiotemporal information than a single frame. Our insight is that this spatiotemporal information can be exploited to correct inaccurate predictions. For example, a false negative (missing) detection can be identified if the same object is detected in adjacent frames but is missing in the current frame. 
A false positive detection can be identified if the detection is isolated, \textit{i.e.}, a corresponding detection occurs neither in the previous nor the subsequent frame. A misalignment can be identified if the alignment differs significantly between successive video frames (see Figure~\ref{fig:pc_video} for examples). \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/temporal.jpg} \caption{\textbf{Detections in Point Cloud Videos.} 3D object detectors usually suffer from false negatives, false positives and misalignment due to the sparse nature of LiDAR data. However, these failures can often be identified by exploiting the rich spatiotemporal information in point cloud videos. For example, a false negative (red box in the top row) can be identified if the same object is detected in adjacent frames but is missing only in the current frame. A false positive (red box in the middle row) can be identified if the detection is isolated, \textit{i.e.}, a corresponding detection occurs neither in the previous nor the subsequent frame. A misalignment (red box in the bottom row) can be identified if the same object is detected in adjacent frames but the alignment differs significantly in the current frame. (Blue boxes show true positives, where the same object is detected in all adjacent frames).} \label{fig:pc_video} \end{figure} In this paper, we propose utilizing the rich spatiotemporal information from point cloud videos to perform semi-supervised learning to train a single frame object detector (Figure~\ref{fig:teaser}). Specifically, we first build video graphs using a large amount of unlabeled point cloud videos, where each node represents a candidate detection predicted by a pretrained detector and each edge represents the relationship between the connected nodes in different time frames. A graph neural network is then used to refine these candidate detections using spatiotemporal information. These refined detections are treated as pseudo labels and are used to re-train a detector. This retrained detector can be used to generate better detections, which can again be refined in an iterative process of continual improvement. A potential limitation of learning from pseudo labels generated from temporally smoothed detections is that these labels are not necessarily accurate. The detector may produce mistakes, and learning from these mistakes may lead to inferior 3D detections. We propose to explicitly tackle this problem by incorporating uncertainty into semi-supervised learning. Specifically, we propose to use the entropy of estimated detections as uncertainty weights for the semi-supervised loss. We demonstrate the efficacy of our method across a variety of large-scale benchmark datasets, including nuScenes~\cite{caesar2020nuscenes} and H3D~\cite{360LiDARTracking_ICRA_2019}, and show state-of-the-art detection performance, compared to baselines trained on the same amount of labeled data. The contributions of this paper are as follows: \begin{enumerate} \item We propose a novel framework for semi-supervised 3D object detection by leveraging the rich spatiotemporal information in 3D point cloud videos. \item We propose a \textit{3DVideoGraph} for spatiotemporal reasoning in 3D point cloud videos. \item We show that we can use the 3DVideoGraph with uncertainty loss weighting for semi-supervised training of 3D object detectors. 
\item We demonstrate our method on two large-scale benchmark datasets and show state-of-the-art detection performance, compared to baselines trained on the same amount of labeled data. \end{enumerate} \section{Related Work} \paragraph{3D Object Detection} 3D object detectors aim to predict 3D oriented bounding boxes around each object. A common approach to 3D object detection is to exploit the ideas that have been successful for 2D object detection~\cite{girshick2014rich, girshick2015fast, ren2015faster, he2017mask}, which first find category-agnostic bounding box candidates, then classify and refine them. Existing works can be roughly categorized into three groups: bird's-eye-view-based methods~\cite{yang2018pixor, liang2018deep}, voxel-based methods~\cite{zhou2018voxelnet, yan2018second}, and point-based methods~\cite{lang2019pointpillars,vora2020pointpainting}. Unlike these methods, our work focuses on semi-supervised learning to improve a detector's performance. \paragraph{3D Video Object Detection} A few recent papers have incorporated spatiotemporal reasoning in 3D video object detection. 3DVID~\cite{yin2020lidar} proposes an Attentive Spatiotemporal Transformer GRU (AST-GRU) to aggregate spatiotemporal information across time. Similarly, 3DLSTM~\cite{huang2020lstm} proposes a sparse LSTM to aggregate features across time. These works show great potential by utilizing the rich spatiotemporal information in point cloud videos. However, they use memory-intensive sequence models to utilize the temporal information; as a result, only three consecutive frames can be input to the model due to memory limitations~\cite{yin2020lidar}, which makes it hard to reason about complex spatiotemporal information. In contrast, our graph neural network representation can reason over much longer sequences. Further, we show how such spatiotemporal reasoning can be combined with uncertainty estimates for semi-supervised learning of a single-frame detector. \paragraph{Semi-supervised Learning} Many approaches have been proposed for semi-supervised learning (SSL), which learns from a small labeled dataset combined with a much larger unlabeled dataset. One approach uses pseudo labels, also known as self-training. Self-training has been successfully applied to improve the state of the art on many tasks, such as image classification~\cite{xie2020self}, object detection~\cite{liu2021unbiased}, and semantic segmentation~\cite{sun2020teacher}. These methods often involve a teacher which provides pseudo-labels for a student which learns from these pseudo-labels~\cite{arazo2020pseudo,pham2020meta}. However, these methods depend heavily on the performance of the teacher, which often makes incorrect predictions. Another promising direction for SSL is self-ensembling, which encourages consensus among ensemble predictions of unknown samples under small perturbations of inputs or network parameters~\cite{Zhao_2020_CVPR,miyato2018virtual, wang20213dioumatch}. The student learns to perform better than the teacher due to its robustness to corruption. However, the improvement is limited since the teacher and the student make predictions from the same data. In contrast, we propose to use spatiotemporal information to construct a better teacher, which is then used to train a student that has access only to single-frame information. \paragraph{Spatiotemporal Reasoning} Most efforts in spatiotemporal reasoning focus on 2D semantic segmentation~\cite{yang2019step, Nilsson_2018_CVPR, perazzi2017learning}. 
For example, Bao et al.~\cite{bao2018cnn} embed mask propagation into the inference of a spatiotemporal MRF model to improve temporal coherency. EGMN~\cite{lu2020video} employs an episodic memory network to store frames as nodes and capture cross-frame correlations by edges. However, these methods are computationally expensive even in 2D videos, which makes them infeasible to adapt to 3D videos. In contrast, our temporal GNN is computationally and memory efficient and can thus be applied to long sequences. \section{Approach} In this section, we elaborate on our method for semi-supervised 3D object detection. Our method consists of a teacher which performs spatiotemporal reasoning for 3D object detection; this teacher is then used to provide pseudo labels to train a student which takes as input only a single frame (see Figure~\ref{fig:pipeline}). We use uncertainty-aware training to handle incorrect pseudo labels produced by the teacher. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figures/pipeline.jpg} \caption{Overview of Semi-supervised 3D Object Detection via Temporal Graph Neural Networks: The teacher consists of a 3D object detector and a graph neural network for spatiotemporal reasoning. The 3D object detector takes a single point cloud frame as input and outputs candidate detections. The graph neural network takes candidate detections from a sequence of point clouds as input and outputs refined detection scores. These pseudo-labeled point clouds are then combined with labeled point clouds to train the student. The 3D object detection modules of the teacher and the student share the same architecture, and the parameters of the teacher's detection module are the exponential moving average of the continually updated student network parameters. The updated detector is used to generate better detections, which are further used to refine the spatiotemporal reasoning module. The iterative refinement of student and teacher leads to continual improvement.} \label{fig:pipeline} \end{figure*} \subsection{Teacher Network: GNN for Spatiotemporal Reasoning} \label{sec:teacher} We first describe our teacher network, which uses spatiotemporal reasoning to provide pseudo labels on unlabeled data. These pseudo labels are then used to train a student which takes as input single-frame data. Our teacher network consists of two modules: a 3D object detection module and a spatiotemporal reasoning module. The 3D object detection module can be any existing 3D object detector, which is initially trained on only a limited amount of labeled data. We then input each frame to the 3D object detector to find detected objects in the scene. We filter the output of this detector with a fairly low threshold (we use a confidence of 0.1 in our experiments) to create a large set of candidate detections in the scene. However, these predictions are often noisy and inaccurate. To smooth these detections and create more accurate pseudo labels, we propose a novel spatiotemporal reasoning module: we build a video graph over the frames of the 3D video (a sequence of point clouds), and use a Graph Neural Network (GNN) to output new scores for each detection node based on the principle of spatiotemporal consistency. This spatiotemporal reasoning module is trained on the same labeled data as the 3D object detection module. 
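Before specifying the node features, edge features, and connection rule (all given below), a minimal sketch of the graph construction may be helpful. The data layout and function names here are our own, and the thresholds shown are placeholders for the values reported later in the paper.

\begin{verbatim}
# Sketch of the video-graph construction (our notation; the exact features
# and thresholds are given in the following subsection).
import numpy as np

def build_video_graph(frames, score_thresh=0.1, dist_thresh=10.0, n_adj=4):
    """frames: list of frames; each frame is a list of dicts with keys
    'pos' (box center), 'vel', 'time', and 'score'."""
    nodes = [(t, i) for t, dets in enumerate(frames)
             for i, d in enumerate(dets) if d["score"] >= score_thresh]
    node_set, edges = set(nodes), []
    for (t, i) in nodes:
        d = frames[t][i]
        for dt in range(1, n_adj + 1):       # link to succeeding frames
            if t + dt >= len(frames):
                break
            for j, d2 in enumerate(frames[t + dt]):
                if (t + dt, j) not in node_set:
                    continue
                # project detection i to frame t+dt using its velocity
                p_hat = np.asarray(d["pos"]) + \
                    np.asarray(d["vel"]) * (d2["time"] - d["time"])
                if np.linalg.norm(p_hat - np.asarray(d2["pos"])) < dist_thresh:
                    edges.append(((t, i), (t + dt, j)))
    return nodes, edges
\end{verbatim}

Edges are only added toward succeeding frames here; since the graph is undirected, this covers the neighborhood from $t-N$ to $t+N$ described next.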
\label{sec:feature_extractor} Specifically, we extract a set of high-level features for each candidate detection as node features ($x$) for the 3D video graph. These high-level features include (1) the detection score, (2) the number of points in the detection box, and (3) the size of the detection box (width, length, and height). In total, our node features are 5-dimensional. \label{sec:gnn_reasoning} After feature extraction, we have a node feature vector for each candidate detection in each frame of a video. To connect nodes across frames, we use the detection's estimated velocity, which is often predicted by the 3D object detector~\cite{zhu2019class, chen2020pano3d, yin2021center}. We project the node to its neighboring frames based on its estimated velocity, as follows: \begin{equation} \hat{p}^i_{t+N} = p^i_t + v^i_t \cdot (T_{t+N} - T_t), \label{eq:forward} \end{equation} where $\hat{p}^i_{t+N}$ represents the predicted position of object $i$ in frame $t+N$, $p^i_t$ and $v^i_t$ represent the estimated position and velocity of object $i$ in frame $t$, and $T_t$ and $T_{t+N}$ represent the timestamps of frames $t$ and $t+N$. We then calculate the distance between the predicted position of object $i$ ($\hat{p}^i_{t+N}$) and all detections in frame $t+N$. All detections whose distance to the predicted position is smaller than a threshold are connected to node $i$. Each node is connected to all such neighboring nodes in the neighboring frames from $t-N$ to $t+N$. We also add the following edge features ($e$) to the 3D video graph: (1) the distance between the predicted center positions of each pair of detection boxes (1-dimensional), (2) the difference in the sizes along each dimension (3-dimensional), and (3) the difference in bounding box orientations (1-dimensional). In total, our edge features are 5-dimensional. Finally, our GNN takes the node features and edge features as input and outputs a new detection score $s_i$ for the $i^{th}$ candidate detection. The graph neural network operations can be written as: \begin{equation} x^k_i = \gamma^k(x^{k-1}_i, \cup_{j\in\mathcal{N}(i)}\phi^k(x^{k-1}_i,x^{k-1}_j,e_{j,i})), \label{eq:inference} \end{equation} where $\gamma^k$ denotes the $k^{th}$-step updating network, $\phi^k$ denotes the $k^{th}$-step message network (the $k^{th}$ layer of the graph neural network), $\cup$ denotes the message aggregation function, $x^{k-1}_i$ and $x^{k-1}_j$ denote the node features of nodes $i$ and $j$ in layer $(k-1)$, and $e_{j,i}$ denotes the edge features of nodes $j$ and $i$, which remain unchanged for all layers. The output of the final update is the new detection score $s_i$: \begin{equation} s_i = x_i^K, \label{eq:score} \end{equation} where $x_i^K$ represents the final update (output of the final layer) of node $i$. The GNN is trained end-to-end using the binary cross-entropy (BCE) loss. The spatiotemporal reasoning of the GNN allows the network to output more accurate detections than those of the original 3D object detector. These refined detections are treated as pseudo labels and are used to train a student (a 3D object detector). For simplicity, the 3D object detection modules of the teacher and the student share the same architecture, and the parameters of the teacher's detection module $\theta^T$ are the exponential moving average (EMA) of the continually updated student network parameters $\theta^S$: 
\begin{equation} \theta_t^T = \alpha \theta_t^S + (1-\alpha) \theta_{t-1}^T, \label{eq:weight_update} \end{equation} where the coefficient $\alpha$ controls the rate of weighting decrease and $\theta_t^S$ represents the student network parameters after iteration $t$. The updated detector is used to generate better detections, which are further used to refine the spatiotemporal reasoning module. The iterative refinement of student and teacher leads to continual improvement. Please refer to Section~\ref{sec:training_details} for more details on the training procedure. \subsection{Graph Augmentation} To prevent the graph neural network from overfitting, we use data augmentation. Note that all these augmentations are applied only to the limited amount of labeled data, since the spatiotemporal reasoning module is trained only on the labeled data. We propose three novel data augmentation techniques. \paragraph{Copy-and-Paste} A copy-and-paste augmentation scheme is widely used in many popular single-frame object detectors including SECOND~\cite{yan2018second}, PointPillars~\cite{lang2019pointpillars}, and CenterPoint~\cite{yin2021center}, which crops ground truth bounding boxes from other frames and pastes them onto the current frame's ground plane. Unlike previous methods, we propose trajectory-level copy-and-paste. We first assign each object an identity based on its center distance to the closest ground truth object. Then, detections with the same identity form a trajectory; a random clip of the trajectory is copied into a new video with a random starting frame. Similarly, we also randomly remove trajectories from the current video as a form of data augmentation. \paragraph{Random Trim} To further increase the diversity of our data, we propose to randomly select the first frame and the last frame from a video to form the sequence of frames used in the graph. \paragraph{Random Noise} We also propose to add random noise to the location, size, orientation and detection scores of individual detections. This is to simulate errors of a pre-trained object detector on more diverse unlabeled data. The noise values for the location and size are uniformly sampled from $\pm 10\%$ of the bounding box size, while the orientation noise is uniformly sampled from $\pm 10^{\circ}$. The detection score noise is uniformly sampled from $\pm0.15$ and is only applied to detections in the middle of a trajectory (not detections at the beginning or end of a trajectory). \subsection{Uncertainty-aware Semi-supervised Training} One of the most prominent challenges in training on pseudo labels is that they are not guaranteed to be accurate. Our solution is to leverage \textit{uncertainty} in the pseudo labels. For examples with high uncertainty, we discount the contribution of the corresponding example during training. This is intended to reduce the effect of incorrect labels in semi-supervised training. \label{sec:calibration} We propose a method for estimating the uncertainty of each pseudo label at training time. To estimate uncertainty, we first obtain the new detection score $s_i$ from the temporal GNN mentioned in Section~\ref{sec:gnn_reasoning}. However, these scores are not always well calibrated. In other words, the probability associated with the predicted label does not necessarily reflect its ground truth correctness likelihood. To calibrate the prediction score, we adopt histogram binning~\cite{zadrozny2001obtaining}, a simple non-parametric calibration method. 
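As a simplified illustration of this calibration and of the entropy-based uncertainty weight derived from it (the binning objective is stated precisely next), consider the following NumPy sketch; equal-width bins and the variable names are our own choices.

\begin{verbatim}
# Sketch of histogram-binning calibration and entropy-based uncertainty
# (equal-width bins assumed; fit on held-out labeled scores/labels).
import numpy as np

def fit_histogram_bins(scores, labels, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(scores, edges[1:-1])
    # theta_m = fraction of positives in bin m (the minimizer of the
    # bin-wise squared loss given below)
    theta = np.array([labels[idx == m].mean() if np.any(idx == m) else 0.5
                      for m in range(n_bins)])
    return edges, theta

def calibrate(scores, edges, theta):
    return theta[np.digitize(scores, edges[1:-1])]

def entropy_uncertainty(s_hat, eps=1e-12):
    s = np.clip(s_hat, eps, 1.0 - eps)
    return -s * np.log(s) - (1.0 - s) * np.log(1.0 - s)
\end{verbatim}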
In short, all uncalibrated predictions $s_i$ are divided into mutually exclusive bins $B_1, \ldots, B_M$. Each bin is assigned a calibrated score $\theta_m$; \textit{i.e.} if $s_i$ is assigned to bin $B_m$, then the calibrated score $\hat{s_i} = \theta_m$. More precisely, for a suitably chosen $M$, we first define bin boundaries $0=a_1\leq a_2 \leq ... \leq a_{M+1}=1$, where the bin $B_m$ is defined by the interval $(a_m, a_{m+1}]$. The predictions $\theta_m$ are chosen to minimize the bin-wise squared loss: \begin{equation} \min_{\{\theta_m\}} \sum_{m=1}^M \sum_{i=1}^n \mathbf{1}(a_m < s_i \leq a_{m+1})(\theta_m-s_i)^2, \label{eq:bin} \end{equation} where $\mathbf{1}$ is the indicator function. Given fixed bin boundaries, the solution to Eq.~\ref{eq:bin} results in $\theta_m$ that corresponds to the fraction of positive-class samples in bin $B_m$. We then use the entropy~\cite{shannon2001mathematical} of the calibrated score $\hat{s_i}$ as a measure of uncertainty $u_i$: \begin{equation} u_i = -\hat{s}_i \log(\hat{s}_i) - (1-\hat{s}_i) \log(1-\hat{s}_i) \end{equation} This uncertainty is then used to weight the pseudo labels while training the student -- any existing 3D object detector, which is typically composed of a classification branch and a regression branch. We first apply this uncertainty to the classification branch: \begin{equation} loss_{c}' = \left\{ \begin{array}{lr} -(1-u_i)^k \log (p_i), &\textrm{if}\, \hat{s}_i > 0.5\\ -(1-u_i)^k \log (1-p_i), &\textrm{if}\, \hat{s}_i < 0.5\\ \end{array} \right. \label{eq:clss_loss} \end{equation} where $k \geq 0$ is a focusing parameter which helps the model focus on the samples with low uncertainty, and $p_i$ is the prediction of our student neural network. We also apply the uncertainty to the bounding box regression branch: \begin{equation} loss_{r}' = (1-u_i) \times \sum_{b \in (x,y,z,w,l,h,\theta)} Dis (\Delta b) \end{equation} where $\Delta b$ denotes the regression residuals between pseudo-labels and the student's prediction, and $Dis$ denotes the distance metric (\textit{e.g.} smooth L1 loss). In total, the semi-supervised loss $loss_s$ is defined as a combination of the uncertainty-weighted classification loss and regression loss: \begin{equation} loss_s = loss_{c}' + loss_{r}' \end{equation} \subsection{Gradual Semi-supervised Training and Iterative Refinement} To avoid the student learning from a large amount of unreliable pseudo labels, we propose gradual semi-supervised training inspired by~\cite{kumar2020understanding, teichman2012tracking, hong2020learning}. In a nutshell, the student is trained with a mix of labeled and unlabeled data, while the amount of unlabeled data increases gradually in each iteration. After each iteration, we update the 3D object detection module of the teacher as an exponential moving average of the student. This updated detector is used to generate better detections. We then retrain the GNN on labeled data for better spatiotemporal reasoning. As the teacher keeps improving, the student can learn from a larger amount of more reliable pseudo labels in each iteration. Thus, combining gradual semi-supervised training with iterative refinement of student and teacher leads to continual improvement. \section{Experiments} We evaluate our approach on 3D object detection on the nuScenes dataset~\cite{caesar2020nuscenes} and the Honda 3D dataset (H3D)~\cite{360LiDARTracking_ICRA_2019}, which provide sequences of 3D LiDAR point clouds with annotated 3D bounding boxes. 
We also verify the effectiveness of each component of our method by performing an ablation analysis. \subsection{Dataset and Experiment Setup} To obtain enough unlabeled data for semi-supervised training, we re-split the nuScenes dataset as follows. We use 50 scenes from nuScenes train for supervised training and 500 scenes from nuScenes train for semi-supervised training. For semi-supervised training, instead of using the ground truth labels, we use the pseudo labels generated by our proposed teacher network. We also use 150 scenes from nuScenes train as a validation set. We pick the iteration with the best performance on our validation set and report its performance on 150 videos from nuScenes validation. For the H3D dataset, we use 50 scenes from H3D train for supervised training and 300 scenes from the HRI Driving Dataset (HDD) for semi-supervised training, since the H3D and HDD datasets have the same data distribution. We pick the iteration with the best performance on 30 scenes from H3D validation and report its performance on 80 videos from H3D test. We compare our method with baselines using the official metric: mean Average Precision (mAP). \subsection{Training Details} \label{sec:training_details} Our teacher is composed of two modules: a 3D object detection module and a spatiotemporal reasoning module. We choose CenterPoint~\cite{yin2021center}, a state-of-the-art 3D object detector, as the detection module for both the teacher and the student. We use a 4-layer GNN for the spatiotemporal reasoning and use the mean function for message aggregation (Eq.~\ref{eq:inference}). Node features and edge features are computed as described in Section~\ref{sec:teacher}. The distance threshold below which two detections in adjacent frames are connected is set to 10m. We use 4 preceding frames and 4 succeeding frames as adjacent frames ($N = 4$ as denoted in Section~\ref{sec:teacher}). Combined with the current frame, a node can be connected to 9 frames in total. The student network is initialized from the 3D object detection module of the teacher trained on a small amount of labeled data. During semi-supervised training, we add $20\%$ of the unlabeled data at each iteration, whose pseudo labels are generated by our proposed teacher. The student is trained with a mix of labeled and unlabeled data at each iteration (\textit{i.e.} a supervised loss on labeled data and the uncertainty-aware semi-supervised loss on unlabeled data). We use the Adam optimizer~\cite{kingma2015adam} for training the student with a batch size of 4 and a learning rate of $1\times10^{-3}$. We also use the Adam optimizer~\cite{kingma2015adam} for training the temporal GNN of the teacher with a batch size of 50 videos and a learning rate of $1\times10^{-3}$. We train the student for 5 epochs and the temporal GNN for 5 epochs in each iteration. \subsection{Results} We compare our method to the following baselines: \begin{itemize} \item \textit{Student (w/o Semi-supervised Training)}: We provide the performance of the original student initialized from the 3D object detection module of the teacher trained on a small amount of labeled data. \item \textit{Gradual Semi-supervised Training~\cite{kumar2020understanding, teichman2012tracking, hong2020learning}}: In gradual semi-supervised training, the teacher and student share the same architecture (\textit{i.e.} CenterPoint~\cite{yin2021center}), while the student is trained with a mix of labeled and unlabeled data. The amount of unlabeled data increases gradually in each iteration. 
Detections with the maximum predicted probability are used as the pseudo label for each unlabeled sample. \item \textit{SESS~\cite{Zhao_2020_CVPR}}: SESS is a self-ensembling semi-supervised 3D object detection framework. During training, labeled samples and unlabeled samples are perturbed and then input into the student and the teacher network, respectively. The student is trained with a supervised loss on labeled samples and a consistency loss with the teacher predictions on unlabeled samples. \item \textit{Oracle (Fully Supervised)}: To provide an upper bound on our performance, we also compare against using full supervision, i.e. the student trained on the same data points (labeled and unlabeled point cloud videos) as semi-supervised training, but provided with the ground truth labels of all data. This is the ideal case for semi-supervised training and achieves the oracle performance of our method. \end{itemize} For nuScenes, our method performs consistently better than each of the baselines in all categories (Table~\ref{tab:nuScenes}). For H3D, our method outperforms all baseline methods in the overall performance (Table~\ref{tab:h3d}). However, some baselines outperform our method on ``other vehicle''. The class of ``other vehicle'' includes different types of cars and is diverse, with only a small number of examples per vehicle type. Our method also performs marginally worse (within noise) than the baselines on the Car and Pedestrian categories. We generally find that classes which have more labeled data (car, pedestrian) benefit less from semi-supervised learning. With spatiotemporal reasoning, our method is able to exploit a large amount of unlabeled data to boost the original performance. However, there is still a large gap between the performance of semi-supervised and supervised training (last two rows). Furthermore, we show the performance of the student on nuScenes after each training iteration (Table~\ref{tab:iterative}). By iteratively refining the teacher and the student, and gradually adding a small batch of unlabeled data, the performance of the student keeps improving. \textbf{Ablations: } In order to determine the contributions of each component of our method, we evaluate six different versions of our method, changing one component at a time: \begin{itemize} \item Ours (Tracking~\cite{Weng2020_AB3DMOT}): Instead of using our proposed temporal graph neural networks, we adopt a Kalman-filter-based 3D multi-object tracker~\cite{Weng2020_AB3DMOT} to reason about the spatiotemporal information. We use the average detection score of each trajectory as its confidence score, and the most confident tracks are then used as pseudo labels to supervise the student network. \item Ours (Flicker~\cite{jin2018unsupervised}): Inspired by Flicker~\cite{jin2018unsupervised}, we propose to decrease the confidence of detections that are isolated in time, \textit{i.e.} that have no associated preceding or following detections, and we increase the confidence of detections that are near high-scoring detections in adjacent frames. The most confident tracks are then used as pseudo labels to supervise the student network. \item Ours (-Augmentations): We train the temporal GNN of the teacher without data augmentation; the node features and edge features remain unchanged. This ablation shows the value of data augmentation to avoid overfitting. 
\item Ours (-Gradually): Instead of training with more unlabeled samples gradually (adding a small batch of unlabeled data at each iteration), we use all unlabeled data at once for semi-supervised training. \item Ours (-Uncertainty): Our method trained without uncertainty weighting, where all pseudo labels are equally weighted in Eq.~\ref{eq:clss_loss} during semi-supervised training. This ablation shows the value of uncertainty-aware training. \item Ours (-Iterative): Instead of refining the teacher and student iteratively, we leave the teacher network unchanged, while the student is fine-tuned with a mix of labeled and unlabeled data. \end{itemize} \setlength{\tabcolsep}{6pt} \begin{table}[!tbp] \caption{Performance of the student after each training iteration on nuScenes. By iteratively refining the teacher and the student, and gradually adding a small batch of unlabeled data, the performance of the student keeps improving.} \fontsize{8}{6}\selectfont \begin{center} \begin{tabular}{m{1.2cm}<{\centering}||m{2.8cm}<{\centering}|m{2.8cm}<{\centering}} \toprule \midrule & Unchanged Teacher & Refined Teacher\\ \midrule Iter-0 & 23.17 & 23.17\\ Iter-1 & 28.97 & 35.61\\ Iter-2 & \textbf{30.44} & \textbf{36.46}\\ \bottomrule \end{tabular} \end{center} \vspace{-0.6cm} \label{tab:iterative} \end{table} \setlength{\tabcolsep}{6pt} \begin{table*}[!tbp] \caption{Comparison of Detection Performance (\%) on nuScenes Dataset} \fontsize{10}{8}\selectfont \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{m{3.2cm}<{\centering}||m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}| m{1.8cm}<{\centering}| m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}} \toprule \midrule Method & mAP & Car & Truck & Bus & Trailer & CV & Pedes & Motor & Bicycle & TC & Barrier\\ \midrule Student (w/o Semi) & 23.17 & 62.74 & 17.80 & 15.93 & 0.00 & 0.17 & 60.04 & 23.55 & 5.57 & 22.21 & 22.19\\ Gradual Semi & 25.58 & 58.09 & 19.54 & 17.26 & 3.83 & 0.09 & 64.73 & 19.2 & 5.31 & 36.88 & 30.07\\ SESS & 27.45 & 61.77 & 21.17 & 19.03 & 6.12 & 0.25 & 63.32 & 24.39 & 6.98 & 40.13 & 31.33\\ Ours & \textbf{36.46} & \textbf{74.94} & \textbf{29.86} & \textbf{31.32} & \textbf{10.14} & \textbf{3.76} & \textbf{75.64} & \textbf{31.91} & \textbf{12.29} & \textbf{49.09} & \textbf{45.61}\\ \midrule Supervised Training & 41.22 & 75.98 & 38.00 & 39.56 & 16.77 & 10.37 & 79.20 & 39.95 & 18.88 & 50.54 & 42.95\\ \bottomrule \end{tabular} } \end{center} \vspace{-0.6cm} \label{tab:nuScenes} \end{table*} \setlength{\tabcolsep}{6pt} \begin{table*}[!tbp] \caption{Comparison of Detection Performance (\%) on H3D Dataset} \fontsize{10}{8}\selectfont \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{m{3.2cm}<{\centering}||m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}| m{1.8cm}<{\centering}| m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}} \toprule \midrule Method & mAP & Car & Ped & Other vehicle & Truck & Bus & Motorcyclist & Cyclist & Animal\\ \midrule Student (w/o Semi) & 31.02 & 55.77 & 63.4 & 5.26 & 30.23 & 12.16 & 22.6 & 55.68 & 3.03\\ Gradual Semi & 35.23 & \textbf{56.25} & 63.03 & \textbf{16.56} & 35.39 & 14.49 & 23.72 & 63.28 & 9.09\\ SESS & 34.11 & 55.91 & \textbf{63.12} & 14.43 & 33.37 & 13.62 & 24.87 & 59.42 & 8.14\\ Ours & \textbf{38.99} & 56.22 & 63.01 & 11.83 & \textbf{35.81} & \textbf{14.67} & \textbf{27.26} & \textbf{64.10} & 
\textbf{9.09}\\ \midrule Supervised Training & 45.56 & 61.27 & 55.76 & 19.1 & 41.56 & 19.51 & 49.93 & 71.82 & 2.46\\ \bottomrule \end{tabular} } \end{center} \vspace{-0.6cm} \label{tab:h3d} \end{table*} \setlength{\tabcolsep}{6pt} \begin{table*}[!tbp] \caption{Ablation Study on nuScenes Dataset} \fontsize{12}{10}\selectfont \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{m{4.8cm}<{\centering}||m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}| m{1.8cm}<{\centering}| m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}|m{1.8cm}<{\centering}} \toprule \midrule Method & mAP & Car & Truck & Bus & Trailer & CV & Pedes & Motor & Bicycle & TC & Barrier\\ \midrule Ours (Tracking) & 29.69 & 65.43 & 23.14 & 21.56 & 7.65 & 3.14 & 64.23 & 27.89 & 7.32 & 42.01 & 34.52\\ Ours (Flicker) & 34.97 & 72.94 & 26.86 & 30.32 & \textbf{11.21} & \textbf{4.02} & 72.35 & 30.27 & 10.11 & 48.79 & 42.82\\ Ours (-Data Augmentation) & 33.30 & 70.51 & 23.03 & 30.12 & 8.34 & 3.21 & 71.98 & 30.32 & 10.11 & 43.07 & 42.32\\ Ours (-Gradually Training) & 32.20 & 64.73 & 28.09 & 27.34 & 8.75 & 3.18 & 71.73 & 28.11 & 10.01 & 45.41 & 34.67\\ Ours (-Uncertainty) & 30.61 & 62.34 & 26.51 & 15.01 & 4.32 & 3.61 & 70.21 & 29.97 & 9.81 & 44.03 & 40.32\\ Ours (-Iterative) & 30.44 & 61.34 & 28.51 & 18.01 & 7.32 & 3.87 & 68.32 & 27.92 & 7.56 & 43.23 & 38.32\\ Ours & \textbf{36.46} & \textbf{74.94} & \textbf{29.86} & \textbf{31.32} & 10.14 & 3.76 & \textbf{75.64} & \textbf{31.91} & \textbf{12.29} & \textbf{49.09} & \textbf{45.61}\\ \bottomrule \end{tabular} } \end{center} \vspace{-0.6cm} \label{tab:ablation} \end{table*} \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{figures/flicker.png} \caption{Qualitative Results: Ground truth detections are shown as black dashed boxes. Predicted detections are shown as blue boxes. We filter out detections with confidence lower than 0.5. a1) and b1) show the detection results from the initial student; a2) and b2) show the results of our method. a2) shows that our method removes false positive detections that are isolated in time (red circles). b2) shows that our method recovers false negative detections that are near high-scoring detections in adjacent frames (red circles).} \vspace{-0.6cm} \label{fig:flicker} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.4\textwidth]{figures/semi.png} \caption{Qualitative Results (Semi-supervised): Ground truth detections are shown as green boxes. Predicted detections are shown as blue boxes. We filter out detections with confidence lower than 0.5. a) shows the results from the initial student. b) shows the results from the learned student. After training with unlabeled data via semi-supervised training, our method improves the detection results.} \vspace{-0.6cm} \label{fig:semi} \end{center} \end{figure} The ablation study shows that each component contributes to the final improvement in performance. Comparing the first and last rows of Table~\ref{tab:ablation}, we find that using the tracking method instead of the GNN model yields worse performance, because the tracking method updates not only the confidence score but also the position of each detected object. The object positions predicted by the Kalman filter introduce new errors into the pseudo labels, which are then learned by the student. 
Ours (Flicker) indicates that the temporal GNN of the teacher extracts features and reasons about the relations among objects in the point cloud video more effectively than simple heuristic rescoring. Ours (-Augmentations) shows the importance of graph augmentation for the temporal GNN (comparing the third and last rows of Table~\ref{tab:ablation}). Without graph augmentation, the temporal GNN easily overfits to the limited amount of labeled data, which hurts the performance of the teacher and the student. Ours (-Gradually) indicates the importance of gradual training (comparing the fourth and last rows of Table~\ref{tab:ablation}). Since the initial student is trained with limited labeled data, the model is easily harmed by a large amount of pseudo labels with high uncertainty. Ours (-Uncertainty) shows that even with spatiotemporal reasoning, the teacher's predictions are still noisy. Using uncertainty to weight pseudo labels significantly reduces the effect of unreliable labels and helps the student learn more from good samples. Ours (-Iterative) shows that keeping the teacher updated is another key component for boosting detection performance, as a better teacher generates more reliable pseudo labels. \subsection{Qualitative Analysis} In this section, we qualitatively analyze our method. Figure~\ref{fig:flicker} a1) and a2) visualize examples where our method corrects false positive detections made by the initial detector. In a1), an object is falsely detected in frame $t$, but most of the neighboring frames do not contain a detection corresponding to it. These are examples of ``false positive flickers'', where a detection suddenly appears in one frame and disappears in the next. Since the GNN of the teacher aggregates information from neighboring frames, it reasons that it is unlikely that the object exists in frame $t$; a2) shows that our method successfully removes these false detections. Similarly, Figure~\ref{fig:flicker} b1) and b2) visualize examples where our method corrects false negatives. In b1), ground-truth objects fail to be detected in frame $t$, even though the detector detects them in most of the previous two and next two frames. These are examples of ``false negative flickers'', where an object is predicted to disappear in one frame and immediately reappear in the next frame. Our method takes temporal information into account and reasons that, because the neighboring frames have positive detections at the same location, it is highly likely that there should be a detection in frame $t$; b2) shows that our method increases the confidence of the flickered detections and correctly detects the missed objects. Figure~\ref{fig:semi} shows the comparison between a) the initial student and b) the student trained by our proposed semi-supervised approach in two different scenarios. The detections from the model pre-trained with limited data show many false positives and false negatives. After training on more unlabeled data, the student network produces more accurate detection results by learning from both positive and negative samples with low uncertainty. Overall, our approach combines temporal information, which reduces the uncertainty of the pseudo labels, with a semi-supervised training pipeline, which benefits from a larger set of data to improve the generalization and robustness of the model. 
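As a footnote to this analysis, the temporal rescoring idea behind flicker correction can be summarized in a few lines. The sketch below uses our own notation (the exact association rule and thresholds appear in the supplementary Flicker section): a detection's score is scaled by the average confidence of its velocity-associated neighbors in adjacent frames, so isolated flickers are suppressed.

\begin{verbatim}
# Sketch of temporal rescoring (our notation): s_new = C_avg * s_old, where
# C_avg averages, over adjacent frames, the best confidence of detections
# associated to this one by velocity projection. A frame with no associated
# detection contributes zero, so isolated detections are suppressed.
import numpy as np

def rescore_by_neighbors(det, adjacent_frames, dist_thresh=10.0):
    support = []
    for frame in adjacent_frames:          # list of detections per frame
        best = 0.0
        for d2 in frame:
            p_hat = np.asarray(det["pos"]) + \
                np.asarray(det["vel"]) * (d2["time"] - det["time"])
            if np.linalg.norm(p_hat - np.asarray(d2["pos"])) < dist_thresh:
                best = max(best, d2["score"])
        support.append(best)
    c_avg = float(np.mean(support)) if support else 0.0
    return c_avg * det["score"]
\end{verbatim}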
\section{Conclusion} In conclusion, we propose to leverage large amounts of unlabeled point cloud videos through semi-supervised learning of 3D object detectors via temporal graph neural networks. Our approach does not require a large amount of strong labels, which are often difficult to obtain. We show that a teacher equipped with a temporal GNN can generate more accurate pseudo labels to train the student. By incorporating uncertainty-aware semi-supervised training, gradual semi-supervised training, and iterative refinement, our method achieves state-of-the-art detection performance on the challenging nuScenes~\cite{caesar2020nuscenes} and H3D~\cite{360LiDARTracking_ICRA_2019} benchmarks, compared to baselines trained on the same amount of labeled data. We hope that our work points toward moving away from spending excessive effort on annotating labeled data and instead redirecting it to semi-supervised learning on large unlabeled datasets. \minisection{Acknowledgements.} The authors would like to thank members of R-pad for fruitful discussion and detailed feedback on the manuscript. The Carnegie Mellon effort has been supported by the Honda Research Institute USA and NSF S\&AS Grant No. IIS-1849154. \section{Training Details} \subsection{Graph Augmentation} While performing data augmentation, we copy and paste trajectories from other videos such that each pasted trajectory is at most 10m away from a candidate detection. We randomly copy $m$ trajectories ($m$ is uniformly sampled from 1--5) from other videos and randomly remove $m$ trajectories from the augmented video. We use 4 preceding frames and 4 succeeding frames as neighboring frames ($N=4$ as denoted in Section 3.1). Combined with the current frame, a node can be connected to 9 frames in total. \subsection{Teacher: Temporal Graph Neural Network} The graph neural network operations can be written as: \begin{equation} x^k_i = \gamma^k(x^{k-1}_i, \cup_{j\in\mathcal{N}(i)}\phi^k(x^{k-1}_i,x^{k-1}_j,e_{j,i})) \label{eq:gnn_supp} \end{equation} where $\gamma^k$ denotes the $k^{th}$-step updating network, $\phi^k$ denotes the $k^{th}$-step message network, $\cup$ denotes the message aggregation function, $x^{k-1}_i$ and $x^{k-1}_j$ denote the node features of nodes $i$ and $j$ in layer $(k-1)$, and $e_{j,i}$ denotes the edge features of nodes $j$ and $i$. We use a single fully connected layer for both the updating network and the message network at each step. We iterate the message aggregation and updating procedure 4 times, which forms a 4-layer GNN. The total number of hidden neurons (width) is 8. We use the mean function for message aggregation. We use the default initialization in PyTorch to initialize our network, which is a variant of Kaiming initialization~\cite{he2015delving}. The temporal GNN of the teacher is trained on 50 labeled videos of nuScenes. We use the Adam optimizer~\cite{kingma2015adam} for training the network with a learning rate of $1\times10^{-3}$; $\beta_1$ and $\beta_2$ are 0.9 and 0.999, respectively. We train the network for 50 epochs. \subsection{Semi-supervised Training} We use the original CenterPoint \cite{yin2021center} architecture and training procedure, but modify the loss to the new semi-supervised loss. To obtain the initial student model, we train the student's detector for 20 epochs on the labeled data. 
During semi-supervised training, we first obtain detection results on the unlabeled data from the student's detector, then use the student's GNN to adjust the detection scores according to temporal information, and apply histogram binning to calibrate the scores. Finally, we apply Equation 6 to the calibrated scores to estimate the uncertainty of each object. We use the mixed labeled and pseudo-labeled data to train the student's detector and GNN individually for 5 epochs as one iteration, and then use the same pseudo label generation process to obtain new pseudo labels for the next training iteration. The models were trained on a single NVIDIA Quadro V100 GPU. \subsection{Flicker} For each individual detection, we first calculate the average confidence $C_\text{avg}$ of the ``nearby'' detections in each adjacent frame, where ``nearby'' is defined according to the distance between the position forecasted from the estimated velocity and the actual position (i.e.\ the motion projection of Equation 1). The detection is then rescored as $s_\text{new} = C_\text{avg} \cdot s_\text{old}$, where $s_\text{old}$ is the original detection score. Specifically, we first project an object into its neighboring frames based on its estimated velocity using Equation 1. Detections whose distance to the predicted position of the object is less than 10 meters are considered ``nearby'' detections. We consider 4 preceding frames and 4 succeeding frames as adjacent frames. \section{Visualization} Please see the attached video for more visualizations. \section{Code} Please refer to the attached code to see our implementation. {\small \bibliographystyle{ieee_fullname}
{ "timestamp": "2022-02-02T02:09:09", "yymm": "2202", "arxiv_id": "2202.00182", "language": "en", "url": "https://arxiv.org/abs/2202.00182" }
\section{Introduction} Model Predictive Control (MPC) is an advanced control method that has found widespread use in many industrial areas, including aerospace, chemical processes, power electronics, and autonomous vehicles. The industrial success of MPC in these areas is due to several factors, including \begin{enumerate}[label=(\roman*)] \item the ability to incorporate complex systems and constraints in the controller, \label{en:mpc:cons} \item a simple and cross-disciplinary conceptual idea of the controller, and \label{en:mpc:simple} \item increased tooling and support for the design and implementation of the controller. \label{en:mpc:tooling} \end{enumerate} Factors~\ref{en:mpc:cons} and~\ref{en:mpc:simple} arise because MPC controllers are written as an optimization problem that seeks to minimize an objective subject to constraints that enforce the system dynamics model and the constraints from the designer. Switching between different applications is then conceptually as simple as changing the dynamics model, the constraints, and the objective in the optimization problem. Factor~\ref{en:mpc:tooling} then translates this conceptual simplicity into practice, with software tools such as FORCESPRO or MATLAB's Model Predictive Control Toolbox translating the high-level optimization problem into a working controller. \begin{table*}[t!] \centering \caption{Schedule of topics covered in the courses} \label{tab:courseContent} \begin{tabular}{c||c|c} \textbf{Week} & \textbf{2018, 2019, and 2020 Courses} & \textbf{2021 Course} \\ \hline 1 & Introduction to predictive control & Introduction to predictive control \\ 2 & Predicting the future & State-space modelling and differential equation solvers \\ 3 & Unconstrained LQR RHC & Numerical/automatic differentiation \& Discretization methods \\ \cline{3-3} 4 & Constrained LQR RHC & Constrained LQR RHC \\ \cline{2-2} 5 & Soft constraints \& Setpoint tracking & Soft constraints \& Max-type costs and constraints \\ 6 & Setpoint tracking \& Disturbance rejection & Robustness \& Constraint tightening \\ 7 & Disturbance rejection & Closed-loop stability and recursive feasibility \\ 8 & Rate constraints \& Move blocking & The real-time iteration scheme \\ \cline{2-2} 9 & Stability \& Robustness & Move blocking \\ 10 & Final project (2018, 2019)/NMPC direct collocation (2020) & External constraint handling methods \end{tabular} \end{table*} The growing use of MPC in industry is occurring alongside a shift in the expectations for university control courses. \citet{rossiterFirstCourseFeedback2019a} reported on a recent survey asking how a first course on control should be designed: respondents from industry ranked topics such as optimal control and optimal state feedback among the top 10 concepts to include, above topics such as PID and lead-lag controllers. In addition, industry respondents felt that pedagogical techniques such as concept-focused assessments and demonstrating control design in authentic simulation/implementation scenarios were as important as assessments focused purely on the mathematical concepts and theory. Designing a course covering MPC is not a simple task, owing to the need to balance the large set of topics that could be covered (e.g.\ stochastic, robust, nonlinear, economic) against the depth of coverage (e.g.\ optimization theory, stability, problem formulations).
Recently, \citet{faulwasserTeachingMPCWhich2021} discussed where and how a course on MPC can fit into the curriculum, concluding that a first course in MPC could fit into the 2nd or 3rd year of a Bachelor's curriculum, covering an introduction to the linear-quadratic problem formulation, numerical optimization, and stability, while highlighting application areas. Further topics such as nonlinear, distributed or economic MPC could then be covered in a graduate-level course along with a more in-depth discussion of the underlying theory. One possible course for undergraduates was described by \citet{honcTeachingPracticingModel2016}, with the course focusing on teaching the fundamentals of linear MPC together with topics such as model derivation, controller tuning and offset-free control, and a final project of applying MPC to the control of the water level in a tank. A slightly more advanced course for Masters-level students was described by \citet{kellerTeachingNonlinearModel2020}; it covered both linear and nonlinear MPC and related topics such as optimization theory, discretization methods, and stability theory. At the end of that course, the students were tasked with designing a nonlinear MPC controller for a diesel engine and implementing it in real-time on a laboratory test-bench. In this paper, we describe the predictive control course taught at Imperial College London between 2018 and 2021 to Masters-level (MSc and final year MEng) students. The majority of students will have completed an introductory control course on state space and transfer function methods (whose content will vary depending on their undergraduate institution), and may not have taken any prior courses on optimization. The course has evolved over these four years, starting with only linear-quadratic MPC and some extensions in 2018 and transitioning to nonlinear MPC and its dependencies in 2021. The course includes laboratory exercises utilizing a laboratory-scale gantry crane, and uses MATLAB Grader to provide checkpointing assessments for the students during the course. Instead of a written/exam-based summative assessment at the end of the course, we developed a specification-based controller design problem that allows for a more thorough assessment of the students' knowledge and understanding of MPC. In these summative assessments, the students are given only a specification describing a real-world problem: moving an overhead gantry crane to a target point in a limited time while avoiding obstacles. We developed a MATLAB/Simulink framework (initially described by the authors in \citet{McInerneyPredictiveControlAssessment2018}) that provides a closed-loop simulation environment in which the students write their own MATLAB functions to implement the target generator, state estimator and controller. The student controllers are automatically tested on a set of over 30 variations of the real-world problem, generated by changing the obstacles/constraints and adding uncertainty to the simulation model of the gantry crane. \section{Course Lectures} \label{sec:courseStructure} The course was taught over a 10-week period each year, with one 2-hour lecture session per week. The topics covered in the courses can be seen in Table~\ref{tab:courseContent}. The 2018 and 2019 courses focused exclusively on linear MPC, with the 2020 course adding a 1-lecture introduction to direct collocation-based nonlinear MPC in week 10.
The 2021 course was redesigned to focus primarily on nonlinear MPC and its associated prerequisites, and contained only a 1-lecture overview of linear MPC. \subsection{2018, 2019 \& 2020 courses} The main focus of the course for 2018--2020 was to teach how to apply linear predictive control to a system by using the Linear Quadratic Regulator (LQR) formulation of MPC as a Receding Horizon Controller (RHC). Notably, this course did not contain any lectures dedicated to optimization theory or optimization solvers; the students were instead instructed to use the \textit{mpcqpsolver} function when implementing the MPC controllers in the course. Instead of a single textbook, the students were given a reference list containing approximately 30 books and papers that covered the material taught in the course. \subsubsection{Course topics} The course was split into three parts, with the basics of linear MPC in the first, several extensions to linear MPC in the second, and more advanced topics in the third. The first part began by introducing feedback control and the idea of predicting the future trajectory of a system. Then the unconstrained finite-horizon LQR formulation was introduced along with the algorithms to construct the prediction matrix and the receding-horizon state feedback controller matrix. Finally, both the uncondensed and condensed constrained LQR formulations were introduced, along with the algorithms to construct the appropriate constraint matrices and Hessians. In the second part, the students were introduced to more advanced linear MPC concepts taken from various research papers in the MPC field, such as soft constraints \citep{scokaertFeasibilityIssuesLinear1999}, offset-free control \citep{pannocchiaOffsetfreeMPCExplained2015}, and move blocking \citep{cagienardMoveBlockingStrategies2007}. The final part of the course contained more advanced topics that were not included in either the checkpointing or summative assessments. In the 2018/2019 courses, the only advanced topic covered was stability theory for linear MPC, based on \citet{Mayne2000_StabilitySurvey}. In the 2020 course, an additional advanced topic introducing direct collocation-based nonlinear MPC was added, based on the tutorial paper by \citet{kellyIntroductionTrajectoryOptimization2017}. \subsubsection{Laboratory exercises} The course contained two physical laboratory activities that the students performed during weeks 3/4 and 5/6. These activities used the INTECO 3D overhead gantry crane \citep{intecoCrane} connected to a computer running MATLAB/Simulink. During the first laboratory, the students were given an unconstrained LQR controller that moved the crane from a starting point to a target point. The students were asked to modify the various controller parameters (i.e.\ cost matrices and horizon length) to see how they affect the closed-loop system response and gain an intuition about the tuning of the controller. In the second laboratory, the students were given a framework with a constrained linear MPC controller. The students were asked to modify the cost matrices, horizon length, and constraints (i.e.\ tighten/loosen them) to gauge their effect on the controller response. In this laboratory, the students were also introduced to the idea of the computational complexity of the controller, with the framework recording the time used by the optimization solver at each sample so that the students could see the effect of their changes on the computation required.
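For reference, the condensed formulation that this part of the course builds toward can be summarized as follows (a standard construction; the notation here is ours rather than the course's). For dynamics $x_{k+1} = Ax_k + Bu_k$ and horizon $N$, the stacked predictions satisfy
\begin{equation*}
X = Fx_0 + GU, \qquad X = \begin{bmatrix} x_1^T & \cdots & x_N^T \end{bmatrix}^T, \qquad U = \begin{bmatrix} u_0^T & \cdots & u_{N-1}^T \end{bmatrix}^T,
\end{equation*}
where $F = \begin{bmatrix} A^T & (A^2)^T & \cdots & (A^N)^T \end{bmatrix}^T$ and $G$ is lower block-triangular with $(i,j)$ block $A^{i-j-1}B$ for $j < i$ (rows indexed by $x_i$, $i=1,\dots,N$; columns by $u_j$, $j=0,\dots,N-1$). Substituting into the cost $J = X^T\bar{Q}X + U^T\bar{R}U$, with $\bar{Q}$ and $\bar{R}$ block-diagonal, gives the condensed quadratic program
\begin{equation*}
\min_U \; \tfrac{1}{2} U^T H U + q^T U, \qquad H = 2\left(G^T\bar{Q}G + \bar{R}\right), \qquad q = 2\,G^T\bar{Q}F x_0,
\end{equation*}
with the designer's linear inequality constraints stacked in the same fashion.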
\subsection{2021 course} The 2021 course was redesigned to focus on nonlinear MPC and utilized the textbook by \citet{rawlingsModelPredictiveControl2020} as a reference. The focus of this course was still on applying MPC; however, the switch to nonlinear MPC also required the introduction of more background material into the course content. In addition to the switch to nonlinear MPC, the COVID-19 pandemic necessitated redesigning the course as a virtual course with one weekly 1-hour video lecture slot. This was done using a flipped-classroom approach, where the students watched video lectures on each topic before attending the weekly session, and then participated in activities and short quizzes during the weekly session. \subsubsection{Course topics} This course consisted of two parts, the first covering the necessary background material to formulate a nonlinear MPC problem and the second covering various formulations/extensions of nonlinear MPC. The background material covered in the first part included numerical differential equation solvers (e.g.\ Runge-Kutta methods, collocation methods), discretization methods for nonlinear state-space equations (e.g.\ single/multiple shooting, direct collocation), and computing derivatives (i.e.\ numerical and automatic differentiation). In the second part of the course, the students were introduced to extensions of the nonlinear MPC formulation. These included soft constraints, constraint tightening and robustness \citep{saltikOutlookRobustModel2018}, the real-time iteration \citep{Gros2016}, move blocking \citep{chenEfficientMoveBlocking2020}, and external constraint handling \citep{nieExternalConstraintHandling2020}. \subsubsection{Laboratory exercises} Due to the COVID-19 pandemic, the course could not use the in-person laboratory equipment, so a new virtual laboratory was developed. The students were given a high-fidelity simulation model of the crane in feedback with a constrained linear MPC controller and were asked to perform the same experiments as in the previous years' in-person laboratories (i.e.\ modifying controller parameters to see the closed-loop response). This virtual laboratory was implemented inside a MATLAB Live Script that could be run using MATLAB Online, allowing students to do the laboratory without needing to have MATLAB installed on their computers. The use of the Live Script also allowed the instructions and formatted equations to be included in the same file as the code, helping the students more easily understand the lab.
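As an illustration of the solvers covered in the background lectures, the classical fourth-order Runge-Kutta method can be implemented in a few lines (a generic sketch in Python rather than the MATLAB used in the course; the function names and the pendulum example are ours):

\begin{verbatim}
import numpy as np

def rk4_step(f, t, x, h):
    """One step of the classical 4th-order Runge-Kutta method
    for the ODE dx/dt = f(t, x) with step size h."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: simulate a damped pendulum, x = [theta, theta_dot].
f = lambda t, x: np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1]])
x, t, h = np.array([0.5, 0.0]), 0.0, 0.01
for _ in range(500):
    x = rk4_step(f, t, x, h)
    t += h
\end{verbatim}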
\begin{table}[t] \centering \caption{Checkpointing assessments} \label{tab:assignments} \begin{threeparttable} \begin{tabular}{cc} \textbf{Topic} & \textbf{Problem} \\ \hline \hline \textbf{Modelling} & Gantry crane model derivation\\ \hline \multirow{3}{2.5cm}{\centering\textbf{Unconstrained RHC}} & Prediction matrix construction \\ & Cost function matrix construction \\ & Linear RHC law construction \\ \hline \multirow{5}{2.5cm}{\centering\textbf{Constrained RHC}} & Stage constraint matrix \\ & Trajectory constraint matrix \\ & QP constraint matrix \\ & Receding horizon controller \\ & Soft constraint matrices \\ \hline \multirow{3}{2.5cm}{\centering\textbf{Differential equations}\tnote{1}} & Runge-Kutta Methods \\ & Implicit/Explicit Euler \\ & Writing an ODE45 method \\ \hline \multirow{3}{2.5cm}{\centering\textbf{Linear equations}\tnote{1}} & Solution uniqueness \\ & Solving linear systems \\ & Least squares method \\ \hline \multirow{3}{2.5cm}{\centering\textbf{Quadrature methods}\tnote{1}} & Riemann sums \\ & Trapezoidal method \\ & Simpson's rule \\ \hline \multirow{2}{2.5cm}{\centering\textbf{Numerical\newline optimization}\tnote{1}} & fmincon w/ manual differentiation \\ & fmincon w/ auto differentiation \\ \hline \end{tabular} \begin{tablenotes} \item[1] Only in 2021 course \end{tablenotes} \end{threeparttable} \end{table} \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.25\textwidth} \centering \resizebox{1.0\textwidth}{!}{\input{images/wedge}} \caption{Shape 1: Wedge (2018 \& 2019)} \label{fig:shape:wedge} \end{subfigure}% \hspace*{4em} \begin{subfigure}[b]{0.25\textwidth} \centering \resizebox{1.0\textwidth}{!}{\input{images/cornercircle}} \caption{Shape 2: Circles on edge (2020)} \label{fig:shape:edge} \end{subfigure}% \hspace*{4em} \begin{subfigure}[b]{0.25\textwidth} \centering \resizebox{1.0\textwidth}{!}{\input{images/areacircles}} \caption{Shape 3: Circles in region (2021)} \label{fig:shape:circles} \end{subfigure}% \caption{Sample shapes the crane must stay inside. (The allowed region is in white, the forbidden region in hatched red, the starting point is the filled circle and the target point is the empty square).} \label{fig:shape} \end{figure*} \section{Checkpointing Assessments} \label{sec:formAssessments} During the course, the students were assigned several small formative assessments on MATLAB Grader to gauge their progress and understanding of the topics being taught. In the 2018--2020 courses, there were three main topics covered in the assessments: system modelling, unconstrained RHC, and constrained RHC. In the 2021 course, four new topics were added to cover the new background material needed for nonlinear MPC: differential equation solvers, linear equations, quadrature methods, and numerical optimization with \textit{fmincon}. Inside each topic, the students were presented with several MATLAB coding problems to test their ability to implement the ideas discussed in the lectures (see Table~\ref{tab:assignments} for the problems in each topic). For example, in the prediction matrix problem in the Unconstrained RHC topic, the students were asked to write a MATLAB function that took the system's state-space matrices and the desired horizon length as inputs and returned the fully formed prediction matrix. The functions the students wrote were then tested against several sets of function inputs to ensure that each function worked and generalized to other problems.
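As an illustration of what the prediction-matrix problem asks for, using the notation from the condensed formulation above, a solution might look as follows (sketched in Python/NumPy rather than the MATLAB used on MATLAB Grader; the function name and interface are ours):

\begin{verbatim}
import numpy as np

def prediction_matrices(A, B, N):
    """Build F and G such that the stacked prediction is
    X = F x0 + G U for x_{k+1} = A x_k + B u_k over horizon N."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for i in range(N):          # block row i predicts x_{i+1}
        for j in range(i + 1):  # block column j multiplies u_j
            G[i*n:(i+1)*n, j*m:(j+1)*m] = (
                np.linalg.matrix_power(A, i - j) @ B)
    return F, G
\end{verbatim}

The submitted function is then checked against several $(A, B, N)$ inputs, which rewards implementations that are general rather than hard-coded to one system.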
\section{Specification-based Summative Assessment} \label{sec:summAssessment} To assess the learning of the students in the course, we utilized two controller design summative assessments instead of the traditional end-of-course written exam. The first assessment was due roughly 50--60\% of the way through the course and tested the students' knowledge of the basic predictive control concepts. The second assessment was due at the end of the last week of term and challenged the students to apply the more advanced concepts taught in part 2 of the course. \subsection{Assessment Overview} In the controller design assessments, the students were tasked with designing a controller to move a gantry crane from a starting point to a target point while staying inside a specified region and navigating around obstacles. For the first assessment, the constrained region was a simple rectangle (e.g.\ the left half of the wedge in Figure~\ref{fig:shape:wedge}). The constrained region for the second assessment evolved during the four years of the course. In the 2018 and 2019 courses, the constrained region was the intersection of two rectangles at a 90$^\circ$ angle to form a wedge, as shown in Figure~\ref{fig:shape:wedge}. In the 2020 course, the region was changed to a base shape of a single rectangle, with circular regions centered on the edge of the rectangle added as obstacles for the crane to avoid, as shown in Figure~\ref{fig:shape:edge}. For the 2021 course, the shape was formed by a single rectangular region with up to 10 elliptical obstacles placed anywhere in the region, with an example region shown in Figure~\ref{fig:shape:circles}. For the 2018--2020 courses, the objective of the controller was simply to move the crane to the target point within a specified time limit without violating any of the constraints. With the switch to nonlinear MPC in the 2021 course, the assessment was updated with two major changes: (i) turning the control design problem into a fixed final time problem (i.e.\ the crane needed to be at the target point exactly at the specified time), and (ii) the addition of a constraint that limited the total work the controller could perform when moving the crane.
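The assessment brief specifies only the budget $W \leq W_{max}$ (see Definition~\ref{def:softComplete} below); how $W$ is evaluated is part of the provided framework. As a rough illustration only, under the assumption that the work is accumulated from the applied input force times the cart velocity, the check might be approximated as:

\begin{verbatim}
import numpy as np

def total_work(u, v, dt):
    """Approximate the work done by the cart: integrate |force . velocity|.
    u: (T, 2) applied inputs, v: (T, 2) cart velocities, dt: sample time.
    (The exact definition of W used by the marking framework is not given
    here; this discretization is an assumption.)"""
    power = np.abs(np.einsum('ij,ij->i', u, v))
    return float(np.sum(power) * dt)
\end{verbatim}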
\begin{table*}[t] \centering \caption{Number of students choosing each design option.} \label{tab:designchoice} \subfloat[2018 course (out of 30 students)\label{tab:designChoice:2018}]{ \begin{tabular}[t]{cc} \toprule \multicolumn{2}{c}{\textbf{Cost Function}} \\ \hline Quadratic cost & 30 \\ Stabilizing terminal penalty & 15 \\ \bottomrule \multicolumn{2}{c}{\textbf{Constraints}} \\ \hline Soft constraints & 14 \\ Multiple constraint sets & 29 \\ \bottomrule \multicolumn{2}{c}{\textbf{State Estimator}} \\ \hline Kalman filter & 15 \\ Other state estimator & 5 \\ \bottomrule \multicolumn{2}{c}{\textbf{Other Features}} \\ \hline Offset-free tracking & 18 \\ Move blocking & 1 \\ \bottomrule \end{tabular}}\hspace*{2cm} \subfloat[2021 course (out of 26 students)\label{tab:designChoice:2021}]{ \begin{tabular}[t]{cc} \toprule \multicolumn{2}{c}{\textbf{Setup - Path Planning}} \\ \hline MATLAB \textit{nlmpc} & 2 \\ MATLAB \textit{fmincon} & 4 \\ Other path planning & 3 \\ \bottomrule \multicolumn{2}{c}{\textbf{Setup - Nonlinearities}} \\ \hline Nonlinear cost & 2 \\ Nonlinear ellipses & 9 \\ \bottomrule \multicolumn{2}{c}{\textbf{Controller - Other Features}} \\ \hline Constraint tightening & 15 \\ Soft constraints & 4 \\ State estimator & 5 \\ \bottomrule \end{tabular} \begin{tabular}[t]{cc} \toprule \multicolumn{2}{c}{\textbf{Controller - Optimizer}} \\ \hline MATLAB \textit{nlmpc} & 4 \\ MATLAB \textit{fmincon} & 20 \\ Real-time iteration & 3 \\ \bottomrule \multicolumn{2}{c}{\textbf{Controller - Model}} \\ \hline Nonlinear/Time-varying & 9 \\ Linear & 15 \\ \bottomrule \end{tabular}} \end{table*} \subsubsection{Specification} In the assessment document, the students were only given a set of formal performance specifications that their controller must meet, and were then free to implement any MPC formulation they wished. These performance specifications consisted of two parts: the equilibrium condition for the crane at the target point and the definition of successful completion of a testcase, shown in Definitions~\ref{def:softEqu} and~\ref{def:softComplete}, respectively, for the 2021 course. \begin{definition}[Equilibrium] Obtaining equilibrium in the simulation run means that: \begin{itemize}[noitemsep,nolistsep] \item the $x$ and $y$ position states of the cart are within $\epsilon_{t}$ of the target point at $t=T_f$ seconds, \item the $x$ and $y$ position states of the payload are within $\epsilon_{t}$ of the target point at $t=T_f$ seconds, \item the velocity of the cart ($\dot{x}$ and $\dot{y}$) and angular velocity of the pendulum ($\dot{\theta}$ and $\dot{\psi}$) are within $\epsilon_{r}$ of $0$ at $t=T_f$ seconds, and \item the inputs are within $\epsilon_{r}$ of $0$ at $t=T_f$ seconds. \end{itemize} All comparisons are made using the infinity norm. \label{def:softEqu} \end{definition} \begin{definition}[Successful Completion] Successful completion of a testcase means that: \begin{itemize}[noitemsep,nolistsep] \item the system is at equilibrium (as defined in Definition~\ref{def:softEqu}) at $t=T_{f}$ seconds, \item the inputs remain in the interval $[-1, 1]$ during the entire time interval $[0,T_f]$, \item the work done by the cart over $T_f$ seconds is not more than $W_{max}$, i.e.\ $W\leq W_{max}$, and \item no constraints are violated over the time interval $[0,T_f]$.
\end{itemize} \label{def:softComplete} \end{definition} \subsubsection{Marking} The submitted controller designs were evaluated using both the laboratory gantry crane hardware and a high-fidelity simulation model of the gantry crane in MATLAB, and were tested against over 30 region shapes/sizes and sets of obstacles to gauge the generalizability of the student controllers. When testing against the simulation model, the students were given only one of the testcases used to mark the controllers (the default shape testcase) before the submission deadline. The remaining testcases were kept secret and only given to the students along with their results. The secret testcases were designed to expose the controllers to a variety of situations, and were generated by \begin{itemize} \item narrowing/widening the constrained region, \item moving the target point closer to the constraints, \item adding more elliptical obstacles, and \item perturbing the model parameters away from the nominal model. \end{itemize} To ensure fairness in marking, every student controller was tested using the same set of secret testcases. When evaluating the controller using the hardware gantry crane, each student was given a 30-minute timeslot on the actual hardware to experiment with and tune their submitted controller. Because the hardware gantry crane already includes unmodelled elements/disturbances, no additional disturbances/uncertainties were added, and the controllers were only marked using the default shape testcase. The students were required to write a short (1--2 page) report on the MPC controller they implemented. They also underwent an oral exam, where they were asked about their controller design and how it met the specifications given in the assessment document. \subsection{Assessment Framework} To administer the specification-based summative assessment, we developed a MATLAB/Simulink framework that allowed the students to write four MATLAB functions containing their solution: \textit{mySetup}, \textit{myMPController}, \textit{myTargetGenerator}, and \textit{myStateEstimator}. These functions were submitted through MATLAB Grader, which also ran preliminary tests to ensure the submitted code had no syntax errors. The marking framework iterated through all the testcases, running a closed-loop simulation of the crane system for each testcase and saving the resulting state and control trajectories for later analysis. Each of the student functions implemented a specific component of the closed-loop system, with the \textit{mySetup} function running before the simulation began to allow offline computation of variables used in the other three controller functions. After all student controllers were run on every testcase, the marking framework generated marks for the students by comparing the saved trajectories against the given specification and determining any violations that occurred. These violations were then fed into a marking rubric to turn the testcase results into actual course marks. The students then received a report containing plots of all the trajectories and a listing of all the specification violations that occurred. \section{Student Solutions and Observations} Overall, an analysis of the student controllers submitted for the final assessment in the course shows that the specification-based summative assessment framework provided a large degree of freedom to the students.
This can be seen in Table~\ref{tab:designchoice}, where we show the different design options chosen by students in both the 2018 and 2021 courses. In 2018, the students were limited to using only the \textit{mpcqpsolver} optimization function (due to limitations in MATLAB's real-time code generation), meaning all 30 students implemented a quadratic MPC formulation that simply changed the constraint sets in the optimizer as the crane moved around the wedge. However, there was more diversity in the other design choices, with half the students choosing to implement soft constraints, another half implementing state estimation and offset-free tracking, and one student choosing to implement move blocking. In the 2021 course, when there was no prescribed optimizer, the students implemented a diverse set of controllers, with the majority using \textit{fmincon}, but with 4 using the built-in MATLAB Model Predictive Control Toolbox \textit{nlmpc} function and another 3 implementing a custom real-time iteration scheme. Additionally, the framework provided the freedom to implement path planning to avoid the obstacles, which 9 students chose to do (with 3 of them implementing an A*-based path planner). The majority of the students in 2021 utilized the \textit{fmincon} optimizer only to add the nonlinear elliptical constraints to the optimization problem, and still utilized a linear dynamics model and a quadratic cost function. To add robustness to their designs, the 2021 cohort utilized constraint tightening more than soft constraints, and relatively few implemented state estimation. Based on our observations, the students engaged with and enjoyed the in-person lab components of the course, but sometimes spent too much time completing the final summative assessments. This appears to result from the large freedom in the design, which leads students to continually try out new and more advanced methods (and spend time debugging them), only to see marginal improvements in the performance of their controller. \section{Lessons Learned} During the past four years, we have encountered several issues when using the specification-based summative assessments in the course, with the two largest being ambiguity in the specification and errors in the student code submissions. \subsection{Students find ambiguities/loopholes} The largest issue we faced was properly and completely defining the specification given to the students in the assessment document so that it contained what we wanted to assess. When drafting the specification, we fell into the trap of reading it through our implicit assumptions about what the controller should do, rather than interpreting it as written. For example, in the 2018--2020 courses the definition of equilibrium contained conditions similar to \begin{quote} The $x$ and $y$ position states of the cart are within $\epsilon_{t}$ of the target point within 5 seconds, \end{quote} which, as written, means the student controller only needed to satisfy all the conditions at a single time instant to meet the specification. In reality, we were implicitly expecting the student controllers to enter \textit{and stay within} $\epsilon_{t}$ of the target. These types of errors in the specification are the hardest to fix, since the assessment criteria could not be modified once given to the students.
Instead, we modified the analysis framework to match the given specification, and in the 2021 course we redesigned the assessment as a fixed final time problem, removing this ambiguity. \subsection{Don't trust student code} While the framework was designed to be an automated system that could simply be started and then left to run, during our initial use of it we encountered many unexpected errors that crashed the framework. These errors mainly came from the student code and include: incorrect computations inside the controller functions causing MATLAB to throw an error, the student controller producing out-of-bound or invalid values that then cause the dynamical simulation to error, and internal errors in the numerical solvers that crash the entire MATLAB process. In response, we implemented four levels of error catching/handling in the framework to catch and gracefully handle any errors that occurred, including a command-line script to monitor and restart MATLAB if it crashed. \section{Conclusions} \balance In this paper, we presented the predictive control course taught by the authors over the past four years at Imperial College London. This course has evolved from teaching only linear MPC in 2018 to focusing on nonlinear MPC in 2021, and assesses student knowledge of predictive control concepts using a novel specification-based summative assessment framework. This framework gives the students freedom in their controller design to implement different MPC formulations, while also encouraging them to think about the robustness of their controller. While we observed an increase in the variety of controllers implemented by the students in the 2021 course when the focus changed to nonlinear MPC, there was still a tendency to utilize a linear model and quadratic cost instead of exploring more advanced concepts. Future updates to the course could explore ways to push students further towards the non-LQR formulations, possibly by introducing a different and more nonlinear system or changing the objective of the controller.
{ "timestamp": "2022-05-02T02:23:26", "yymm": "2202", "arxiv_id": "2202.00157", "language": "en", "url": "https://arxiv.org/abs/2202.00157" }
\section{Introduction} Environmental mapping is an essential function of autonomous systems, and LiDAR is one of the most common sensors used for mapping tasks owing to its ranging accuracy and reliability. Following recent visual SLAM studies, tightly coupled LiDAR-IMU fusion techniques have been widely studied in recent years \cite{Ye2019,Qin2020}. The tight coupling scheme fuses LiDAR and IMU measurements in a unified objective function and makes the sensor trajectory estimation robust to quick sensor motion as well as to feature-less environments, where sufficient geometrical constraints are not available. Furthermore, IMU measurements provide information on the direction of gravity, which enables a reduction of the estimation drift \cite{Qin2018}. However, the use of the LiDAR-IMU tight coupling scheme has mostly been limited to the frontend (i.e., odometry estimation) of the system in the context of LiDAR SLAM. This is because the backend (i.e., global optimization) of most existing methods relies on pose graph optimization, which uses approximated relative pose constraints constructed from the estimation result of the frontend, resulting in the separation of LiDAR- and IMU-based estimation. In this paper, we propose a real-time SLAM framework that employs a tightly coupled LiDAR-IMU fusion scheme for all estimation stages (i.e., from odometry estimation to global optimization). We use the voxel-based GICP matching cost factor, which can fully leverage GPU parallel processing and enables the creation of a factor graph that minimizes the scan matching error over the entire map \cite{koide_ral2021}. We combine the GPU-accelerated matching cost factor with the IMU preintegration factor to jointly consider the LiDAR and IMU constraints for global trajectory optimization. This approach enables us to accurately correct estimation drift in challenging environments while preserving the global consistency of the map. To the best of our knowledge, this is the first study to perform global trajectory optimization based on the tight coupling of LiDAR and IMU constraints. We also propose a keyframe-based LiDAR-IMU frontend algorithm with fixed-lag smoothing that enables efficient and low-drift sensor ego-motion estimation with a bounded computation cost. We show that the proposed framework enables highly accurate and robust trajectory estimation through experiments on the Newer College dataset \cite{Ramezani2020} and the KAIST urban dataset \cite{Jeong2018}. The proposed framework is distinct from existing LiDAR-IMU SLAM frameworks in several aspects. \begin{enumerate} \item It is based on the voxelized GICP matching cost factor \cite{koide_ral2021}, which uses a larger number of points to calculate the registration error than the commonly used scan matching based on line and plane point matching \cite{Ye2019}. This enables accurate and robust constraining of the sensor poses while fully leveraging GPU parallel processing. \item Its tightly coupled odometry estimation module employs a keyframe-based fixed-lag smoothing method inspired by \cite{Engel2018}, which enables low-drift trajectory estimation with a bounded computation cost. \item It also employs the tight coupling approach for the backend. The backend constructs a densely connected matching cost factor graph with the support of the IMU factors and exhibits outstanding accuracy. It also introduces the concept of {\it endpoints} of submaps to strongly constrain submaps created at large time intervals with IMU constraints.
\end{enumerate} \section{Related Work} \subsection{LiDAR-IMU frontend} Following recent progress in visual-inertial SLAM techniques \cite{Qin2018,Stumberg2018,Campos2021}, LiDAR-IMU fusion has become an important topic in LiDAR SLAM \cite{liosam2020shan}. The use of an IMU enables prediction of the sensor motion at a frequency of 100--1000 Hz, facilitating good initial estimates of the sensor pose and correction of the distortion of LiDAR points under quick sensor motion. Furthermore, IMU measurements provide information on the direction of gravity, enabling a reduction of the trajectory estimation drift in four DoFs by aligning the trajectory with this direction \cite{Qin2018}. One method for fusing IMU and LiDAR measurements is the loose coupling scheme, which considers LiDAR-based and IMU-based estimation separately and fuses the estimation results in the pose space using, for example, an extended Kalman filter \cite{Weiss2011} or a factor graph \cite{liosam2020shan,Indelman2013}. While the loose coupling scheme is computationally efficient, the alternative tight coupling scheme can theoretically be more accurate and robust \cite{Ye2019}. The tight coupling scheme fuses LiDAR and IMU measurements in a unified objective function. This approach enables robust estimation of the sensor trajectory in feature-less environments, where sufficient geometrical information is not available from the LiDAR data, because the IMU constraints help to constrain the sensor trajectory based on inertial information. Owing to their high accuracy and robustness, tightly coupled LiDAR-IMU methods have been widely studied in recent years \cite{Ye2019,Qin2020,Xu2021,Li2021}. Despite its theoretical advantages, the tight coupling approach considerably increases system complexity and computational cost and can be unstable in extreme situations. To avoid increasing system complexity, several methods employ an IMU-centric loose coupling approach to make the system robust in extreme environments (e.g., underground) \cite{Palieri2021,zhao2021super}. \subsection{LiDAR-IMU backend} While there are many LiDAR SLAM frontend methods based on LiDAR-IMU fusion, the use of IMU constraints is limited to the frontend (i.e., odometry estimation) in most existing methods \cite{liosam2020shan,Li2021,Shan2018}, because they use pose graph optimization, which minimizes errors in the pose space, for global trajectory optimization. As pose graph optimization uses SE3 relative pose constraints to constrain sensor poses, the separation of LiDAR-based and IMU-based estimation is unavoidable. Pose graph optimization also affects the consistency of the map when closing a large loop or constraining frames with small overlap, because it employs an approximated representation (i.e., a Gaussian distribution) of the relative pose constraints \cite{koide_ral2021}. The backend of the proposed framework is conceptually similar to Voxgraph \cite{Reijgwart2020}, which also considers point cloud registration errors for global trajectory optimization. Voxgraph uses the Euclidean signed distance field \cite{Oleynikova2017} to represent submaps and efficiently computes the registration error between submaps without requiring a costly nearest-neighbor search. However, the registration error minimization remained computationally expensive, and global optimization was conducted using a random subset of registration residuals with the support of SE3 relative pose factors.
The proposed method eliminates inaccurate SE3 relative pose factors and fully relies on matching cost factors for all optimization stages, resulting in globally consistent mapping results. Furthermore, it enables the construction of a tightly coupled global trajectory optimization, which greatly improves the robustness of the mapping process in severely feature-less environments. \section{Methodology} \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{figs/system.pdf} \caption{System overview.} \label{fig:system} \end{figure} Fig. \ref{fig:system} shows an overview of the proposed framework, comprising a preprocessing module and three estimation modules (odometry estimation, local mapping, and global mapping), all based on tightly coupled LiDAR-IMU fusion. The odometry estimation (i.e., frontend) module robustly estimates the sensor motion and provides an initial estimate of the latest sensor state. The estimated sensor states are refined by the subsequent local mapping module, and several local frames are merged into one submap. The global mapping module then optimizes the submap poses such that the global registration error is minimized while preserving the consistency of the map. We run these modules in parallel via multi-threading. We define the sensor state ${\bm x}_t$ to be estimated in the estimation modules as \begin{align} {\bm x}_t = [{\bm T}_t, {\bm v}_t, {\bm b}_t]^T, \end{align} where ${\bm T}_t = [{\bm R}_t | {\bm t}_t] \in SE(3)$ is the sensor pose, ${\bm v}_t \in \mathbb{R}^3$ is the velocity, and ${\bm b}_t = [{\bm b}_t^a, {\bm b}_t^{\omega}] \in \mathbb{R}^6$ comprises the IMU acceleration and angular velocity biases. We estimate the time series of sensor states from LiDAR point clouds $\mathcal{P}_t$ and IMU measurements (linear acceleration ${\bm a}_t$ and angular velocity ${\bm \omega}_t$). Note that we transform the LiDAR point clouds into the IMU coordinate frame and, for efficiency and simplicity, treat them as if they were in a unified sensor coordinate frame. In Sec. \ref{sec:matching_cost_factor} and \ref{sec:imu_preintegration_factor}, we first introduce the two types of factors, the LiDAR matching cost factor and the IMU preintegration factor, that are the main components of the factor graphs used in the proposed framework. We then explain each module of the proposed framework in Sec. \ref{sec:preprocess} to \ref{sec:global_mapping}. \subsection{LiDAR Matching Cost Factor} \label{sec:matching_cost_factor} The matching cost factor constrains two sensor poses (${\bm T}_i$ and ${\bm T}_j$) such that the matching cost between the corresponding point clouds ($\mathcal{P}_i$ and $\mathcal{P}_j$) is minimized. As the matching cost, we choose the voxelized GICP (VGICP) cost \cite{vgicp}, a variant of generalized ICP \cite{Segal2009} suitable for GPU computation. VGICP models each input point ${\bm p}_k \in \mathcal{P}_i$ as a Gaussian distribution ${\bm p}_k = ({\bm \mu}_k, {\bm C}_k)$, where the covariance matrix ${\bm C}_k$ is computed from the neighboring points of ${\bm p}_k$. It discretizes $\mathcal{P}_j$ into voxels and computes a Gaussian distribution for each voxel by aggregating the means and covariances of the points in the voxel.
Then, the matching cost $e^M$ between $\mathcal{P}_i$ and $\mathcal{P}_j$ is defined based on the GICP distribution-to-distribution distance: \begin{align} \label{eq:vgicp} e^M(\mathcal{P}_i, \mathcal{P}_j, {\bm T}_i, {\bm T}_j) &= \sum_{p_k \in \mathcal{P}_i} e^{\text{\it D2D}}({\bm p}_k, {\bm T}_i^{-1} {\bm T}_j), \\ e^{\text{\it D2D}} ({\bm p}_k, {\bm T}_{ij}) &= {\bm d}_k^T ({\bm C}'_k + {\bm T}_{ij}{\bm C}_k{\bm T}_{ij}^T)^{-1} {\bm d}_k, \end{align} where ${\bm p}_k' = ({\bm \mu}_k', {\bm C}_k')$ are the mean and covariance of the voxel corresponding to ${\bm p}_k$, obtained by looking up the voxel map of $\mathcal{P}_j$, and ${\bm d}_k = {\bm \mu}_k' - {\bm T}_{ij} {\bm \mu}_k$ is the residual between ${\bm \mu}_k$ and ${\bm \mu}'_k$. From the derivatives of Eq. \ref{eq:vgicp}, we obtain a Hessian factor that constrains the relative pose between ${\bm T}_i$ and ${\bm T}_j$. It is worth emphasizing that we re-evaluate and linearize $e^M$ at the current linearization point at every optimization iteration, which results in a more accurate constraint than the traditional SE3 relative pose constraint \cite{koide_ral2021}. \subsection{IMU Preintegration Factor} \label{sec:imu_preintegration_factor} We use the IMU preintegration technique \cite{Forster2017} to efficiently incorporate IMU constraints into the factor graph. Given an IMU measurement (${\bm a}_t$ and ${\bm \omega}_t$), the sensor state evolves as follows: \begin{align} \label{eq:imu_evol_R} {\bm R}_{t + \Delta t} &= {\bm R}_t \exp \left( \left( {\bm \omega}_t - {\bm b}_t^{\omega} - {\bm \eta}_t^{\omega} \right) \Delta t \right), \\ \label{eq:imu_evol_v} {\bm v}_{t + \Delta t} &= {\bm v}_t + {\bm g} \Delta t + {\bm R}_t \left( {\bm a}_t - {\bm b}_t^a - {\bm \eta}_t^a \right) \Delta t, \\ \label{eq:imu_evol_p} {\bm t}_{t + \Delta t} &= {\bm t}_t + {\bm v}_t \Delta t + \frac{1}{2} {\bm g} \Delta t^2 + \frac{1}{2} {\bm R}_t \left( {\bm a}_t - {\bm b}_t^a - {\bm \eta}_t^a \right) \Delta t^2, \end{align} where ${\bm g}$ is the gravity vector and ${\bm \eta}_t^a$ and ${\bm \eta}_t^{\omega}$ are white noise terms in the IMU measurement. The IMU preintegration factor integrates the system evolution between two time steps $i$ and $j$ to obtain relative body motion constraints (see \cite{Forster2017} for a detailed derivation): \begin{align} \label{eq:preint_R} \Delta {\bm R}_{ij} &= {\bm R}_i^T {\bm R}_j \exp \left( \delta {\bm \phi}_{ij} \right), \\ \Delta {\bm v}_{ij} &= {\bm R}_i^T \left( {\bm v}_j - {\bm v}_i -{\bm g} \Delta t_{ij} \right) + \delta {\bm v}_{ij}, \\ \Delta {\bm t}_{ij} &= {\bm R}_i^T \left( {\bm t}_j - {\bm t}_i - {\bm v}_i \Delta t_{ij} - \frac{1}{2} {\bm g} \Delta t_{ij}^2 \right) + \delta {\bm t}_{ij}, \end{align} where $\delta {\bm \phi}_{ij}, \delta {\bm v}_{ij}$, and $\delta {\bm t}_{ij}$ are white noise terms in the integration process. The IMU preintegration factor enables us to keep the factor graph well-constrained in environments where geometrical features are insufficient and LiDAR factors can be deficient. Furthermore, it provides information on the direction of gravity and reduces the estimation drift in 4 DoFs \cite{Qin2018}. \subsection{Preprocessing} \label{sec:preprocess} We first downsample the input point clouds with a voxel grid filter. For the subsequent deskewing process, we average not only the positions but also the timestamps of the points in each voxel.
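A minimal sketch of this timestamp-aware voxel filter is given below (our illustration in Python/NumPy; the actual implementation is GPU-based, and the timestamp-splitting rule described next is omitted here for brevity):

\begin{verbatim}
import numpy as np

def voxel_downsample(points, stamps, resolution=0.5):
    """Average point positions and timestamps per voxel.
    points: (N, 3) array of positions, stamps: (N,) per-point times."""
    keys = np.floor(points / resolution).astype(np.int64)
    # Group points that fall into the same voxel.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = int(inv.max()) + 1
    counts = np.bincount(inv, minlength=n_voxels)
    mean_pts = np.stack(
        [np.bincount(inv, weights=points[:, d], minlength=n_voxels)
         for d in range(3)], axis=1) / counts[:, None]
    mean_ts = np.bincount(inv, weights=stamps, minlength=n_voxels) / counts
    return mean_pts, mean_ts
\end{verbatim}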
If a point has a timestamp that differs significantly from that of the corresponding voxel (e.g., $|t^{\text{\it point}} - t^{\text{\it voxel}}| > \frac{d^{\text{\it scan}}}{10}$, where $d^{\text{\it scan}}$ is the scan duration), we assign the point to another new voxel to avoid fusing the first and last points of a scan. We then find the $k$ neighboring points of each point, as required for the subsequent point covariance estimation. We assume that the neighborhood relationships of points do not change significantly during the deskewing process and reuse the precomputed nearest neighbors for the covariance estimation, which is performed after deskewing. \subsection{Odometry Estimation} \label{sec:odometry} \begin{figure}[tb] \centering \includegraphics[width=1.0\linewidth]{figs/frontend_graph.pdf} \caption{Frontend factor graph. Only the factors related to the latest frame ($x_8$) are illustrated. Matching cost factors are created for the last $N$ frames and keyframes. If a keyframe is outside the fixed-lag smoothing window (is already marginalized out), we create a unary matching cost factor. IMU preintegration factors are created between consecutive frames.} \label{fig:frontend_graph} \end{figure} The odometry estimation module compensates for quick sensor motion and robustly estimates the sensor state by fusing LiDAR and IMU measurements. We first correct the distortion of the point cloud caused by the sensor motion by transforming the points into the IMU frame using a motion prediction based on the IMU dynamics. We then compute the covariance of each point using the precomputed neighboring points. Given the deskewed point clouds, we construct the factor graph shown in Fig. \ref{fig:frontend_graph}. To limit the computation cost and ensure that the system is real-time capable, we use a fixed-lag smoothing approach and marginalize out old frames. Inspired by direct sparse odometry \cite{Engel2018}, we introduce a keyframe mechanism for efficient and low-drift trajectory estimation. Keyframes are a set of frames selected such that they are spatially well-distributed while having sufficient overlap with the latest frame. We create a matching cost factor between the latest frame and every keyframe to efficiently reduce estimation drift. If a keyframe has already been marginalized out of the fixed-lag smoother, we consider the keyframe pose fixed and create a unary matching cost factor that constrains the latest sensor pose with respect to the fixed keyframe. To manage keyframes, we define the overlap rate between two frames $\mathcal{P}_i$ and $\mathcal{P}_j$ as the fraction of points in $\mathcal{P}_i$ that fall within a voxel of $\mathcal{P}_j$ \cite{koide_ral2021}. Every time a new frame arrives, we evaluate the overlap rate between that frame and the latest keyframe and, if the overlap is smaller than a threshold (e.g., 90\%), we insert the frame into the keyframe list. Similar to the keyframe marginalization strategy in \cite{Engel2018}, we remove redundant keyframes using the following strategy: \begin{enumerate} \item We remove keyframes that overlap with the latest keyframe by less than a certain threshold (e.g., 5\%). \item If more than $N^{\text{\it odom}}$ (e.g., 20) frames exist in the keyframe list, we remove the keyframe that minimizes the following score: \begin{align} s(i) = o(i, N^{\text{\it odom}}) \sum_{j \in [1, N^{\text{\it odom}}-1] \backslash \{i\}} \left( 1 - o(i, j) \right), \end{align} where $o(i, j)$ is the overlap rate between the i-th and j-th keyframes.
The score function is heuristically designed to keep the keyframes spatially well-distributed while retaining more keyframes close to the latest one. \end{enumerate} In addition to the keyframes, we create matching cost factors between the latest frame and the last few frames (e.g., the last three frames) to make the odometry estimation robust to quick sensor motion. We also create an IMU preintegration factor between consecutive frames for robustness in feature-less environments. \subsection{Local Mapping} \label{sec:local_mapping} \begin{figure}[tb] \centering \includegraphics[width=1.0\linewidth]{figs/backend_graph.pdf} \caption{Backend factor graph. The local mapping module merges several local frames into one submap using an all-to-all registration strategy. The global mapping module optimizes the submap poses such that the global registration error is minimized over the entire map. Both modules take advantage of IMU factors to stabilize the estimation in severely feature-less environments and reduce estimation drift.} \label{fig:backend_graph} \end{figure} Once a frame is marginalized out of the odometry estimation graph, it is fed to the local mapping module, with its marginalized state serving as an initial estimate of the sensor state. The local mapping module merges several local frames into one submap to reduce the number of optimized variables in the global mapping module. We first re-perform deskewing and covariance estimation with the marginalized state, which is expected to improve upon the initial prediction made at the beginning of the odometry estimation. We then evaluate the overlap rate between that frame and the latest frame in the submap and, if the overlap rate is smaller than a threshold (e.g., 90\%), insert the frame into the submap factor graph. As shown in Fig. \ref{fig:backend_graph}, we create a matching cost factor for every combination of frames in the submap (i.e., all-to-all registration). We also add an IMU preintegration factor between consecutive frames and, to better stabilize the submap optimization, add a prior factor on the velocity and bias of each frame based on the marginalized state. Once the number of frames in the submap reaches $N^{\text{\it sub}}$ (e.g., 15) or the overlap between the first and last frames becomes smaller than a threshold (e.g., 5\%), we perform factor graph optimization using the Levenberg-Marquardt optimizer \cite{Levenberg1944} and merge the frames into one submap based on the optimization result. \subsection{Global Mapping} \label{sec:global_mapping} The global mapping module corrects the estimation drift to obtain a globally consistent mapping result. We create a matching cost factor between every submap pair with an overlap rate exceeding a small threshold (e.g., 5\%). This results in an extremely dense factor graph: every submap is aligned not only with adjacent submaps on the graph but also with every revisited submap, which closes loops implicitly. Submaps are created at large time intervals (e.g., 10 s). If we simply created an IMU factor between submaps, its uncertainty would become too large to strongly constrain the relative pose between the submaps \cite{Stumberg2018,MurArtal2017a}. Furthermore, we would lose the information on the velocity and IMU bias estimated by the local mapping module. To address these problems, we introduce two states called {\it endpoints} (${\bm x}^i_L$ and ${\bm x}^i_R$) for each submap ${\bm x}^i$; they hold the states of the first and last frames in the submap with respect to the submap pose.
Given an estimate of the sensor states $[{\bm x}_0, \cdots, {\bm x}_{N^{\text{\it sub}}}]$ in a submap ${\bm x}^i$, we define the submap origin ${\bm T}^i$ as the sensor pose at the center, ${\bm T}_{N^{\text{\it sub}}/2}$. The sensor state ${\bm x}_t$ relative to the submap origin is then given as: \begin{align} \label{eq:relative_T} {\bm T}'_t &= \left( {\bm T}^i \right)^{-1} {\bm T}_t, \\ \label{eq:relative_v} {\bm v}'_t &= \left( {\bm R}^i \right)^{-1} {\bm v}_t, \\ \label{eq:relative_b} {\bm b}'_t &= {\bm b}_t. \end{align} We create relative state factors between a submap ${\bm x}^i$ and its endpoints ${\bm x}^i_L$ and ${\bm x}^i_R$, derived respectively from the first and last frames in the submap (${\bm x}_0$ and ${\bm x}_{N^{\text{\it sub}}}$), such that they satisfy the relative state relationship described by Eqs. \ref{eq:relative_T} - \ref{eq:relative_b}. We then create an IMU factor between ${\bm x}^i_R$ and ${\bm x}^{i+1}_L$. In this way, each IMU factor covers only a small time interval and can strongly constrain the submap poses while avoiding the loss of the velocity and bias information estimated by the local mapping module. Every few submaps (e.g., every five), the factor graph is incrementally optimized via the iSAM2 optimizer \cite{Kaess2011} in GTSAM\footnote{\url{https://gtsam.org/}}. \section{Evaluation} \subsection{Evaluation on the Newer College Dataset} \begin{table*}[tb] \centering \caption{Evaluation results on the Newer College dataset} \label{tab:result_newer} \begin{tabular}{c|ccccc} \toprule Metric & LIOM (odom) & LIO-SAM (odom) & LIO-SAM & Proposed (odom) & Proposed \\ \midrule RTE [m] & 2.224 $\pm$ 1.402 & 2.215 $\pm$ 1.376 & 2.156 $\pm$ 1.357 & {\bf 2.140} $\pm$ 1.348 & 2.160 $\pm$ 1.356 \\ ATE [m] & 3.392 $\pm$ 1.653 & 1.176 $\pm$ 0.641 & 0.529 $\pm$ 0.259 & 0.899 $\pm$ 0.595 & {\bf 0.276} $\pm$ 0.093 \\ \bottomrule \end{tabular} \end{table*} \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{figs/newer_trajs.pdf} \caption{Sensor trajectories estimated by the proposed framework and LIO-SAM for the {\it long\_experiment} sequence in the Newer College dataset. The color indicates the magnitude of the ATE.} \label{fig:newer_trajs} \end{figure} We conducted experiments on the Newer College dataset \cite{Ramezani2020}, recorded with an Ouster OS-1 64, which provides LiDAR point clouds at 10 Hz accompanied by synchronized IMU data at 100 Hz. We compared the proposed framework with two state-of-the-art LiDAR-IMU SLAM frameworks, LIO-mapping (LIOM) \cite{Ye2019} and LIO-SAM \cite{liosam2020shan}, on the {\it long\_experiment} sequence, which is the longest sequence in the Newer College dataset (3,060 m / 2,650 s). As the evaluation metrics, we used the absolute trajectory error (ATE) and the 100 m relative trajectory error (RTE) \cite{Zhang2018}. We used an Intel Core i9-9900K and an Nvidia RTX 2080 for all the experiments. Table \ref{tab:result_newer} presents the quantitative evaluation results. The proposed frontend algorithm showed an RTE (2.140 m) comparable to those of LIO-mapping and LIO-SAM (2.224 m and 2.215 m, respectively). This result suggests that the proposed keyframe-based odometry estimation enables low-drift trajectory estimation. With global optimization, the proposed framework greatly improved the trajectory consistency and demonstrated a significantly better ATE (0.276 m) than that of LIO-SAM (0.529 m). Fig. \ref{fig:newer_trajs} shows the trajectories estimated by the proposed framework and LIO-SAM.
LIO-SAM exhibited large errors on a large curve. We infer that this is because 1) the frame density is relatively low at the corner, so LIO-SAM failed to create sufficient relative pose constraints, and 2) LIO-SAM does not incorporate IMU factors in the global optimization, which resulted in the loss of gravity direction information. Meanwhile, the proposed framework showed an accurate and consistent trajectory estimation result owing to the global matching cost minimization scheme and the tight coupling of LiDAR and IMU constraints. It is worth mentioning that the proposed method is very robust to quick sensor motion. We confirmed that it successfully estimated the sensor trajectory in the {\it quad\_with\_dynamics} and {\it dynamic\_spinning} sequences, which presented aggressive sensor motion (up to 1.5 m/s and 3.5 rad/s)\footnote{See the supplementary video.}. \subsection{Evaluation on the KAIST Urban Dataset} \begin{figure}[tb] \centering \includegraphics[width=0.35\linewidth]{figs/kaist.pdf} \caption{Sensor configuration of the KAIST urban dataset.} \label{fig:kaist_urban} \end{figure} \begin{figure*}[tb] \centering \begin{minipage}[b]{0.44\linewidth} \centering \includegraphics[width=\linewidth]{figs/kaist_07_traj.pdf} \subcaption{Estimated map and trajectory} \end{minipage} \begin{minipage}[b]{0.44\linewidth} \centering \includegraphics[width=\linewidth]{figs/kaist_07_graph.pdf} \subcaption{Factor graph} \end{minipage} \caption{Mapping result for the KAIST07 sequence.} \label{fig:kaist_07} \end{figure*} \begin{figure*}[tb] \centering \begin{minipage}[b]{0.44\linewidth} \centering \includegraphics[width=\linewidth]{figs/kaist17_01.pdf} \subcaption{Constraints between frames with small overlap} \end{minipage} \begin{minipage}[b]{0.44\linewidth} \centering \includegraphics[width=\linewidth]{figs/kaist17_02.pdf} \subcaption{Feature-less highway environment} \end{minipage} \caption{Snapshots of the mapping process through the KAIST17 sequence. The orange points indicate the latest LiDAR scans.} \label{fig:kaist_17} \end{figure*} \begin{table}[tb] \centering \caption{Processing time through the KAIST07 sequence} \label{tab:proctime_kaist} \begin{tabular}{llc} \toprule Module & Process & Time [msec] \\ \midrule \midrule \multirow{3}{*}{Preprocess} & Downsampling & 4.4 $\pm$ 1.0 \\ & kNN search & 18.9 $\pm$ 3.1 \\ & Total & 23.3 $\pm$ 4.0 \\ \midrule \multirow{4}{*}{Odometry estimation} & Deskew \& Cov. & 6.7 $\pm$ 6.2 \\ & Optimization & 21.0 $\pm$ 11.7 \\ & Keyframe update & 13.5 $\pm$ 12.9 \\ & Total & 41.3 $\pm$ 18.3 \\ \midrule \multirow{5}{*}{Local mapping} & Deskew \& Cov. & 8.0 $\pm$ 7.6 \\ & Total (per-frame) & 8.9 $\pm$ 8.0 \\ \cmidrule{2-3} & Optimization & 117.0 $\pm$ 48.8 \\ & Merging frames & 5.0 $\pm$ 5.2 \\ & Total (per-submap) & 122.0 $\pm$ 49.7 \\ \midrule \multirow{3}{*}{Global mapping} & Factor creation & 22.2 $\pm$ 21.1 \\ & Optimization & 208.9 $\pm$ 120.0 \\ & Total & 242.5 $\pm$ 136.1 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[tb] \centering \includegraphics[width=0.95\linewidth]{figs/proctime.pdf} \caption{Number of submaps and matching cost factors and the processing time of the global optimization through the KAIST07 sequence.} \label{fig:proctime} \end{figure} To demonstrate that the proposed framework can robustly estimate the sensor trajectory in challenging situations, we conducted experiments on the KAIST urban dataset \cite{Jeong2018}. In this dataset, a LiDAR (Velodyne VLP-16) was vertically mounted on the vehicle (see Fig. 
\ref{fig:kaist_urban}), and thus consecutive LiDAR point clouds have only a small overlap when the vehicle is moving. It is often difficult to obtain sufficient geometrical constraints from the LiDAR point clouds, and tight coupling of the LiDAR and IMU constraints is indispensable. Fig. \ref{fig:kaist_07} shows a mapping result for the KAIST07 sequence. We can see that the proposed method was able to create a consistent environmental map in this challenging setting thanks to the tightly coupled LiDAR-IMU fusion scheme. We can also see that the proposed framework aggressively creates matching cost factors between frames with small overlap, and an extremely dense factor graph is constructed. Fig. \ref{fig:kaist_17} shows snapshots of a trial with the KAIST17 sequence. The proposed backend algorithm is very powerful and enables the creation of constraints between frames with small overlap, which helps correct trajectory estimation drift, as shown in Fig. \ref{fig:kaist_17} (a). It can also robustly estimate the sensor trajectory in a feature-less highway environment, as shown in Fig. \ref{fig:kaist_17} (b)\footnotemark[2]. Note that, without the IMU constraints, the global optimization failed in the highway environment due to insufficient geometrical features. Through the KAIST07 sequence, the proposed framework ran approximately twice as fast as real time (20 FPS). Table \ref{tab:proctime_kaist} summarizes the processing times of each module in the proposed framework. The preprocessing and odometry estimation modules took 23.3 ms and 41.3 ms per frame, respectively, and were sufficiently faster than the real-time requirement (100 ms per frame). The submap optimization, which was performed approximately every 2 s, took 122.0 ms on average. The global map optimization, which was performed approximately every 5 s, took 242.5 ms on average. Fig. \ref{fig:proctime} shows how the global optimization time grew as the number of submaps and matching cost factors increased. While a massive number of matching cost factors is created (over 6,000 factors), the global optimization converged in less than one second thanks to the incremental optimizer and GPU acceleration. \section{Conclusions} This paper presented a LiDAR-IMU mapping framework. The proposed framework comprises odometry estimation, local mapping, and global mapping modules, all based on tightly coupled LiDAR-IMU fusion. For odometry estimation, an efficient keyframe mechanism and a fixed-lag smoothing technique are used to achieve low-drift estimation with a bounded computation cost. A new factor graph structure for the backend was proposed to realize tightly coupled LiDAR-IMU fusion. We validated the efficiency and accuracy of the proposed framework using the Newer College dataset and the KAIST urban dataset. \balance \bibliographystyle{IEEEtran}
{ "timestamp": "2022-02-03T02:09:12", "yymm": "2202", "arxiv_id": "2202.00242", "language": "en", "url": "https://arxiv.org/abs/2202.00242" }
\section{Introduction} Since this is the fourth paper of the series \cite{Shuryak:2021fsu,Shuryak:2021hng,Shuryak:2021mlh}, it does not need an extensive introduction. Let us just state that its main goal is to bridge the gaps between subfields of hadronic physics, with our general direction being from (i) {\em the vacuum structure} in its Euclidean formulation (instantons and lattice), to (ii) {\em the hadronic structure} and quark-quark interactions and the resulting spectroscopy, to (iii) {\em the hadronic structure on the light front} with its novel Hamiltonians and wave functions. The connection between (i) and (ii) is provided by nonlocal gauge field correlators, such as the correlations of Wilson lines defining static quark potentials. Using lattice or semiclassical models of the vacuum fields, one can evaluate them. The connection between (ii) and (iii) is less developed, as neither spectroscopists nor people studying partonic observables have been inclined to study it. (The former community is now living through a deluge of new hadrons discovered lately, and is rather busy.) So, let us emphasize some of the reasons for its development. Standard spectroscopy (in the CM frame) uses rather different tools for states made of heavy and light quarks. There are two very different reasons for that. The first set consists of {\em kinematical} issues: while heavy quarkonia can be treated nonrelativistically, using the Schroedinger equation and perturbative effective theories like pNRQCD, light quark states are studied with relativistic tools such as the Bethe-Salpeter equation and the like. (In fact, even the standard approaches to heavy quarkonia are not as accurate as one might gather from textbooks. Say, for the charm quark, the typical velocity is not really small, $v\sim \frac 12$ or so.) More important are the {\em dynamical} differences between heavy and light quark interactions. Indeed, light quark physics is tightly bound to the issue of chiral symmetry breaking, and its root causes -- strong short-range effects described by NJL operators or the instanton-based 't Hooft Lagrangian. Most of that was well understood in the 1990's and need not be repeated here. However, as we have shown in \cite{Shuryak:2021fsu}, a dilute instanton ensemble is only one part of the vacuum fluctuations related to gauge topology at low resolution, and when one studies gauge field observables one finds larger effects at moderately higher resolution. Even for heavy quarkonia, we argued that a ``dense vacuum'' with instanton-antiinstanton pairs (incomplete tunneling through a topological barrier) contributes to Wilson line correlators, with and without magnetic fields, and generates a good fraction of the central and spin-dependent forces. This raises the question of how one can include those effects for light quarks. Fortunately, both these kinematical and dynamical issues are much less severe on the light front. The kinematics in this case is simply relativistic for all masses. There are no sudden changes as one goes from heavy to light quarks. Quark masses enter $H_{LF}$ in a very uniform way, and (as we have shown in the previous papers of the series \cite{Shuryak:2021fsu,Shuryak:2021hng,Shuryak:2021mlh}) one can consistently derive the mesonic properties from $\bar b b$ to light $\bar q q$ by the same tools. Indeed, in the first approximation, the transverse oscillator Hamiltonian generates near-linear Regge dependences of $M^2$ on the principal quantum number $n$ and the angular momentum $m$. Dynamical issues also get less severe. 
In particular, on the light front, even light quarks can be ``eikonalized'', as they move along approximately straight lines. \subsection{Baryons} Baryons, of course, are just another application of the tools developed along the lines mentioned above. There are important technical issues here as well: the barrier between the treatment of ``relative motion'' in mesons and in baryons is due to the difference between the obvious variables describing the relative motion of two particles and the nontrivial choice of variables for few-body quantum mechanics. We will address those below, but before that let us add some general introduction. Non-relativistic and semi-relativistic constituent quark models have been developed since the 1960's, and they exist in numerous versions. One well documented and widely used approach is that by Isgur and Karl \cite{Isgur:1979be}, which was updated for heavy quark states in \cite{0711.2492}. These authors treated confinement by an oscillatory potential, which is methodically close to our $H_{LF}$. A well-known problem with the model is its prediction of many more baryonic states than are actually observed. Recent years have seen the discoveries of many new hadrons in the so-called heavy-light sector, including $QQq$ baryons and tetraquarks of $\bar Q Q \bar q q$ and $QQ \bar q \bar q$ structure. Calculations for similar states with five and six quarks are being carried out by many groups. No doubt, this activity will shed more light on the issue of quark-quark interactions. Also, baryons too have a 5-quark sector, responsible for the {\em antiquark sea}, well studied experimentally in the case of the proton and neutron: their flavor structure has been discussed e.g. in the paper by one of us \cite{Shuryak:2019zhv}. And yet, we will not discuss multi-quark sectors or hadrons in this paper, but rather focus on basic baryons. Furthermore, for simplicity we will start with three-quark systems which are completely symmetric in flavor, such as $\Delta^{++}_{uuu}, \Omega_{sss}^-,\Omega^{++}_{ccc},\Omega_{bbb}^-$, although only half of them have been observed. General considerations are well summarized in the early note by Bjorken~\cite{Bjorken:1986xpa}. If the color part of the wave function is antisymmetric and the flavor part is symmetric, then Fermi statistics requires the spin-orbital part to be symmetric as well. The simplest one, with no orbital motion, then fixes the spins to be e.g. $\uparrow \uparrow \uparrow $ and the global quantum numbers to be $\frac 32^+$. We will focus on the sector with zero orbital momentum, thereby avoiding the inclusion of spin-orbit mixing (on which we focused in the previous paper \cite{Shuryak:2021mlh}). In Table \ref{tab_1} we show the quark and baryon masses, as well as the binding of the lowest $\frac 32^+$ states according to Ref.~\cite{0711.2492}. As one can see, from heavy to light quarks, the binding changes from negative to positive values. This is due to the attractive Coulomb interaction at small distances, which decreases as lighter states become larger in size. \begin{table}[htp] \caption{Baryon masses and binding energies (all in GeV) for different quark flavors. 
Two baryon masses in the last two rows are experimental; all other numbers are as used in Ref.~\cite{0711.2492}.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline & $m_Q$ & $M_{QQQ}^{3/2^+}$ & $M_{QQQ}^{3/2^+}-3m_Q$ \\ \hline b & 5.2019 & 14.834 & -0.7717 \\ c & 1.8182 & 4.965 & -0.4896 \\ s & 0.5553 & 1.672 & 0.006 \\ q & 0.2848 & 1.232 & 0.3776 \\ \hline \end{tabular} \end{center} \label{tab_1} \end{table} \begin{figure}[t!] \begin{center} \includegraphics[width=6cm]{Delta-of-n} \includegraphics[width=6cm]{Delta-of-J} \caption{Upper plot: red circles are the squared masses of Delta resonances $M^2_\Delta(n+1,3/2)\, (GeV)^2$ from the PDG tables versus the principal quantum number $n+1=1,2,3$. The brown triangle corresponds to the triple-strange baryon $M^2_\Omega(1,\frac 32)$. The lower plot shows the dependence of the Delta resonances $M^2_\Delta(1,J)$ on the angular momenta $J=\frac 32,\frac 72,\frac{11}2$. Both straight lines have the same slope $1.1 \, GeV^2$. } \label{fig_deltas_nj} \end{center} \end{figure} The dependence of the masses and wave functions of these {\em ground state} baryons on the quark mass is of course only one issue to be considered. Another is their spectrum, in particular the dependence on the {\em principal quantum number} $n$ and the {\em total angular momentum} $J$. It is well known that the existence of the confining strings leads to specific Regge behavior, both for mesons and baryons. For example, we show in Fig.~\ref{fig_deltas_nj} that the squared masses of various $\Delta$ resonances follow linear trajectories, versus the radial $n$ and angular $J$ quantum numbers. Two further remarkable observations are: (i) both plots have the same slope; and (ii) this slope is the same as for mesons. This leads to the well known difficulty of a ``star'' (or Y) configuration: a symmetric picture with a ``color junction'' at the baryon center leads to a different slope, as the tension of three strings is different from that of one in mesons. The qualitative resolution of this difficulty is also well known: it is a quark-diquark picture, with a single string between them. However, a dynamical justification of this configuration is still missing, especially as a function of the radial quantum number $n$. This notwithstanding, we still proceed methodically, starting from a ``basic'' symmetric baryon picture. As we did for the mesons in~\cite{Shuryak:2021hng}, we derive and solve the light front Hamiltonian $H_{LF}$ for baryons, by including first only the confining string coupled to a junction. As we will show, this problem is nontrivial but solvable. Additional forces will then be added as perturbations. \subsection{Relativistic semiclassical quantization in the rest frame} \label{sec_semiclassics} This subsection is a preliminary study, in the rest frame, of the problem to be addressed on the light front below: three quarks connected by strings to a junction, the ``star configuration''. In the rest frame the baryon with zero orbital momentum is spherically symmetric, but on the light front one explicitly separates the transverse and longitudinal degrees of freedom, and treats them separately whenever possible. Heavy quarks can be treated via the Schroedinger equation, but light quarks need to be addressed differently. This distinction however can be avoided in the semi-classical approach we will use here (and of course the light front approach is the same for light and heavy quarks). 
We will describe the Jacobi coordinates in detail below, and for now just present the Hamiltonian we will quantize \begin{eqnarray} \label{HRF} H=\frac 1{2m}\big(\vec p_\lambda^2+\vec p_{\rho}^2\big)+\sigma_T \sum_{i=1}^3|\xi_i(0)| +\bigg(\frac 32 \frac{m^2_Q}m+\frac 32 m\bigg)\nonumber\\ \end{eqnarray} Here $m_Q$ is the quark mass, $\sigma_T$ the string tension, and $m=1/2e$ is the variational effective mass, arising from the einbein trick used to unwind the relativistic square root. A similar Hamiltonian was obtained in~\cite{Simonov:1989ff}, using a different world-sheet embedding than the one we will present below (see (\ref{EMBEDDING})). A similar trick is used for the confining term \begin{widetext} \begin{eqnarray} \label{REDUCTION} \sum_{i=1}^3|\xi_i(0)|=\frac 12 \sum_{i=1}^3\bigg(\frac 1{E_i}+E_i|\xi_i(0)|^2\bigg)\rightarrow \frac 12 \bigg(\frac 3{E}+E (\vec r_\lambda^2+\vec r_\rho^2)\bigg)\rightarrow \sqrt{3}(\vec r_\lambda^2+\vec r_\rho^2)^{\frac 12} \end{eqnarray} \end{widetext} assuming symmetric extrema $E_i\rightarrow E$, for the star baryon with equal masses. (\ref{HRF}) then simplifies to the Hamiltonian of $one$ particle in a $D=6$-dimensional space \begin{eqnarray} \label{PMU} H\rightarrow &&\,\,\, \frac 1{2m}\big(\vec p_\lambda^2+\vec p_{\rho}^2\big)\nonumber\\ &&+\sqrt{3}\sigma_T (\vec r_\lambda^2+\vec r_\rho^2)^{\frac 12} +\bigg(\frac 32 \frac{m^2_Q}m+\frac 32 m\bigg)\nonumber\\ \rightarrow&&\,\,\, \frac {p_\mu^2}{2}+{\tilde{\sigma}_T}|Z_\mu| +\bigg(\frac 32 \frac{m^2_Q}m+\frac 32 m\bigg) \end{eqnarray} (\ref{PMU}) describes a non-relativistic and linearly confined particle of variational mass $m$, with coordinates $Z_\mu=(\lambda^i, \rho^i)$ in $D=6$ dimensions as per the last relation. We have rescaled the coordinate $\sqrt{m}Z\rightarrow Z$ and the string tension $\tilde{\sigma}_T=\sqrt{3}\sigma_T/\sqrt{m}$, for convenience. An estimate of the mass spectrum can be obtained using the WKB approximation, \begin{eqnarray} \int_{r_S}^{r_L}dr \bigg(2E-2\tilde\sigma_T r-\frac{l(l+D-2)}{r^2}\bigg)^{\frac 12}= \bigg(n+\frac 12\bigg) \pi\nonumber\\ \end{eqnarray} with the end points $r_{L,S}$ solutions to the cubic equation $$2\tilde\sigma_Tr^3-2Er^2+l(l+D-2)=0$$ For zero orbital motion $l=0$, the WKB radial energy levels are found to be \begin{eqnarray} E_{n0}(m)=\bigg(\frac{3\pi}{2\sqrt{2}}\bigg)^{\frac 23}\bigg(n+\frac 12\bigg)^{\frac 23}\tilde\sigma_T^{\frac 23} \equiv \frac{\tilde{E}_{0n}}{m^{\frac 13}}\nonumber\\ \end{eqnarray} Once combined with the extra terms in (\ref{PMU}), we can carry out the minimization in $m$, and set its value at the minimum. The ensuing WKB radial mass spectrum $M_{n0}$ of the star baryon in the rest frame reggeizes $linearly$ for large $n$ \begin{eqnarray} n\approx \frac{\alpha^\prime}{2\sqrt{3}} M_{n0}^2 \label{REG1} \end{eqnarray} We recall that the meson Regge trajectory $n=\alpha' M^2$ has slope $\alpha^\prime=1/2\pi\sigma_T$. So, in the same units, our ``star-shaped'' baryons have a slope smaller by a factor $2\sqrt{3}\approx 3.46$ compared to mesons. It is close but not equal to the number $3$, naively corresponding to the number of strings. This WKB radial Regge trajectory calculated in the rest frame has a similar but not identical slope to that derived from the light front (see (\ref{M2NN}) below). 
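The $l=0$ WKB levels above are easy to cross-check numerically. A minimal sketch follows (in Python; the rescaled tension $\tilde\sigma_T$ is set to unity for this illustration only), solving the quantization condition by root finding and comparing with the closed-form expression.
\begin{verbatim}
# Check of the l=0 WKB condition
#   int_0^{r_L} sqrt(2E - 2*sigma*r) dr = (n + 1/2)*pi
# against E_n = (3*pi/(2*sqrt(2)))**(2/3) * ((n+1/2)*sigma)**(2/3),
# with sigma denoting the rescaled tension tilde-sigma_T (= 1 here).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

sigma = 1.0

def action(E):
    # classically allowed region ends at r_L = E/sigma for l = 0
    f = lambda r: np.sqrt(max(2*E - 2*sigma*r, 0.0))
    return quad(f, 0.0, E/sigma)[0]

for n in range(4):
    target = (n + 0.5)*np.pi
    E_num = brentq(lambda E: action(E) - target, 1e-6, 100.0)
    E_wkb = (3*np.pi/(2*np.sqrt(2)))**(2/3)*((n + 0.5)*sigma)**(2/3)
    print(n, E_num, E_wkb)  # the two columns coincide
\end{verbatim}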
For large orbital excitations $l$, the motion is classical, and an estimate can be obtained by noting that for the confining potential the virial theorem gives $$E_{0l}\approx K+V=3K =\frac{3l^2}{2R^2}$$ with $R=(l^2/\tilde\sigma_T)^{\frac 13}$ fixed by the force equation. After fixing $m$ by minimization, the mass spectrum of the star baryon is seen to reggeize linearly in the orbital momentum $l$ as well, \begin{eqnarray} l\approx \frac{\alpha^\prime}{6/\pi} M_{0l}^2 \label{REG2} \end{eqnarray} with another slope, $\alpha^\prime/(6/\pi)$, smaller than the meson slope $\alpha^\prime$ by a factor $6/\pi\approx 1.91$; so the linear reggeization is not the same in $n$ and $l$! This is in disagreement with the experimental data for light baryons, as we have demonstrated above for the $\Delta$'s. \section{Transverse and longitudinal coordinates on the light-front } \subsection{Kinetic part of the Hamiltonian} The non-relativistic problem with three (equal mass) particles (e.g. tritium in nuclear physics) has a kinetic energy given by a sum of all momenta squared. In Jacobi coordinates (\ref{eqn_Jacobi}) it takes the form of $$\sum_i {\partial^2 \over \partial \vec r_i^2}\rightarrow {\partial^2 \over \partial \vec \lambda^2}+{\partial^2 \over \partial \vec \rho^2}$$ plus a center-of-mass term, to be ignored. The kinetic part of the LF Hamiltonian is unfortunately more complex, since the expansion of the energy of ultra-relativistic particles has the form $$\sum_i {\vec p_{i\perp}^2+m_i^2 \over 2p_{i\, long}}$$ including both the transverse and longitudinal momenta. As in our previous papers, we rewrite it in the following form \begin{eqnarray} &&{p_{1\perp}^2+m_Q^2 \over x_1}+{p_{2\perp}^2+m_Q^2 \over x_2}+{p_{3\perp}^2 +m_Q^2\over x_3}= \nonumber \\ && 3\,(p_{1\perp}^2+p_{2\perp}^2+p_{3\perp}^2+3m_Q^2 ) +\sum_i (p_{i\perp}^2+m_Q^2)\bigg({1 \over x_i}-3\bigg) \nonumber \end{eqnarray} keeping the first term in $H_0$ as a transverse oscillator, and the second term as a nonfactorizable ``potential'' $\tilde V$: how we calculate its matrix elements is explained in the Appendix. Note that using (\ref{eqn_inverse}), the first term simplifies to the nice symmetric combination $3(\vec p_{\lambda}^2+\vec p_{\rho}^2+3m_Q^2)$. \subsection{The confining part of the LF Hamiltonian} In nonrelativistic (Schroedinger equation) treatments, it is customary to write the confining potential as a sum of linear terms $$V_{conf}=\sigma_T \sum_{i=1}^3 | \vec r_i |$$ or a sum of quadratic terms, with coordinate vectors $\vec r_i=\vec \xi_i(0)$ (instead of their differences $| \vec r_i-\vec r_j |$). Again, a customary explanation for it is the existence of a {\em color junction} located at the origin. (We will not go into the discussion of whether this tradition is or is not well grounded.) For the confining part of the LF Hamiltonian we again introduce a parameter $a$, and re-write the linear potential as quadratic. As in our previous papers, we use the Hamiltonian in the momentum representation. The coordinate vectors are therefore interpreted as $\vec r=i \partial / \partial\vec p$, and the confining part plays the role normally attributed to the kinetic energy. Quadratic confinement thus leads to a second order Schroedinger-like equation for the eigenfunctions. The same logic applies to the transverse and longitudinal coordinates $\vec r_\perp, r_{long}$, so the immediate task is to write the Laplacian operator, both in the transverse Jacobi coordinates and in our curved longitudinal map (\ref{eqn_long_map}). 
Both tasks are performed as explained in Appendix~\ref{sec_app_st}. \subsection{Transverse oscillator in Jacobi coordinates} In \cite{Xu:2021wwj} the LFWFs are defined in a factorizable form for three quarks, by first ignoring the conditions on the total momentum \begin{eqnarray} && \vec p_\perp^{tot}=\vec p_{1\perp}+\vec p_{2\perp}+\vec p_{3\perp}=0\nonumber\\ && x_1+x_2+x_3=1 \end{eqnarray} This requires subtraction of the spurious CM motion, which is rather nontrivial. However, there is no need for this. These conditions can be satisfied by an appropriate change of variables, as is well known in few-body applications. The motion in the transverse direction is described by using Jacobi coordinates for three quarks. For reasons to become clear below, we consider in this work only baryons made of quarks with the same mass. The transformation from three constrained transverse momenta to two unconstrained relative momenta and the total momentum is defined via \begin{equation} \label{eqn_Jacobi} \vec p_\rho={1 \over \sqrt{2}} (\vec p_1-\vec p_2), \, \, \vec p_\lambda ={1 \over \sqrt{6}}(\vec p_1+\vec p_2-2 \vec p_3) \end{equation} with $\vec p_{tot}= \vec p_1+\vec p_2+\vec p_3$. Then $\vec p_{tot}$ is set to zero, leaving only $\vec p_\rho, \vec p_\lambda$ as the transverse momenta. Let us also mention the inverse relations \begin{eqnarray} \label{eqn_inverse} \vec p_1 &=& ( \sqrt{6}\vec p_\lambda+ 3 \sqrt{2} \vec p_\rho)/6, \nonumber \\ \vec p_2 &=& ( \sqrt{6} \vec p_\lambda- 3 \sqrt{2} \vec p_\rho)/6, \nonumber \\ \vec p_3 &=& - \sqrt{6} \vec p_\lambda/3 \end{eqnarray} The kinetic term will contain the combination $\vec p_\rho^2+ \vec p_\lambda^2$, and the confining part will reduce to a similar sum of the corresponding coordinates, $\vec r_\rho^2+\vec r_\lambda^2$, which together form what we call the ``transverse oscillator''. The corresponding basis functions are described in Appendix~\ref{sec_basis}. \subsection{Longitudinal momentum fractions in Jacobi coordinates} Let us use the same (modified) Jacobi coordinates in the longitudinal direction. The three momentum fractions are then defined via three coordinates $\lambda,\rho,X$ by \begin{eqnarray} \label{eqn_Jacobi_x} x_1&=&(\sqrt{6}\lambda+3\sqrt{2}\rho+2X)/6 \nonumber \\ x_2&=&(\sqrt{6}\lambda-3\sqrt{2}\rho+2X)/6 \nonumber \\ x_3&=&(-\sqrt{6}\lambda+X)/3 \end{eqnarray} Note that $X=x_1+x_2+x_3$, and unlike in the transverse direction, $X$ should be set to 1. This means that the three-dimensional cube $x_i\in[0,1]$ is cut by the plane $X=1$, leaving as a physical domain a triangle in the $\lambda, \rho$ coordinates, inside which the parton fractions are all positive, $x_i>0$. The three corners of the triangle correspond to parton configurations where one quark has fraction 1 and the other two have zero. The line element defining the metric tensor in the new coordinates is diagonal and simple \begin{equation} dl^2=d\lambda^2+d\rho^2+dX^2/3 \end{equation} The Laplacian (which we encounter in the confining term of the Hamiltonian) also takes a simple form in the new coordinates \begin{equation} \nabla^2= \sum_i {\partial^2 \over \partial x_i^2} \rightarrow {\partial^2 \over \partial \lambda^2}+ {\partial^2 \over \partial \rho^2}+3{\partial^2 \over \partial X^2} \end{equation} and we need the eigenfunctions of the first two terms. So, the first difficulty we encounter comes from the triangular shape of the physical domain in the $\lambda-\rho$ plane. The wave functions should be such as to lead to a non-singular Laplacian, or finite kinetic energy. 
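As a sanity check of these coordinates, the short sketch below (Python; conventions exactly those of (\ref{eqn_Jacobi}) and (\ref{eqn_Jacobi_x})) verifies that the centroid of the triangle corresponds to equal momentum fractions $x_i=1/3$ and that its corners correspond to a single quark carrying the whole longitudinal momentum.
\begin{verbatim}
# Longitudinal Jacobi map (eqn_Jacobi_x) at X = 1 and its inverse.
import numpy as np

def x_from_jacobi(lam, rho, X=1.0):
    x1 = (np.sqrt(6)*lam + 3*np.sqrt(2)*rho + 2*X)/6
    x2 = (np.sqrt(6)*lam - 3*np.sqrt(2)*rho + 2*X)/6
    x3 = (-np.sqrt(6)*lam + X)/3
    return np.array([x1, x2, x3])

def jacobi_from_x(x):
    return ((x[0] + x[1] - 2*x[2])/np.sqrt(6),
            (x[0] - x[1])/np.sqrt(2))

# centroid of the triangle <-> equal momentum fractions
assert np.allclose(x_from_jacobi(0.0, 0.0), [1/3, 1/3, 1/3])

# corners <-> parton configurations (1,0,0), (0,1,0), (0,0,1)
corners = [(1/np.sqrt(6),  1/np.sqrt(2)),
           (1/np.sqrt(6), -1/np.sqrt(2)),
           (-np.sqrt(2/3), 0.0)]
for corner, unit in zip(corners, np.eye(3)):
    assert np.allclose(x_from_jacobi(*corner), unit)
    assert np.allclose(jacobi_from_x(unit), corner)
\end{verbatim}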
As we will show below, for the equilateral triangle this problem can in fact be exactly solved. The second difficulty is related to the non-factorizable potential $\tilde V$. Its structure is schematically given by the combination \begin{equation} V=\bigg(\frac 1{x_1}+\frac 1{x_2}+\frac 1{x_3} -9\bigg)\,, \end{equation} which is singular at all boundaries of the triangle. Its contour plot is shown in Fig.~\ref{fig_V_in_Jacobi}. As one can see, the potential is small in the middle of the triangle, where $x_i\approx 1/3$, but it is large close to the edges: one may call it a ``triangular cup''. Quantum mechanics in $\tilde V$ cannot be solved analytically, and so we represent this term as a matrix in the eigenbasis of the Laplacian. The singular nature of $\tilde V$ at the boundaries leads to divergences in matrix elements, unless the wave functions vanish there. Therefore, the problem we set out to solve must have Dirichlet boundary conditions, $\psi_i(\lambda,\rho)=0$ at the boundaries, for all functions. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{V-in-Jacobi} \includegraphics[width=1cm]{V-in-Jacobi-scale} \caption{The contour plot of the ``triangular cup'' potential $V(\lambda,\rho)$ in the $\lambda,\rho$ plane.} \label{fig_V_in_Jacobi} \end{center} \end{figure} Before we proceed with more systematic methods, let us demonstrate how the problem works using a variational method, which in many problems can approximate the ground (and perhaps a few more) states. To exclude divergences, the wave function should vanish at the boundaries, so we simply include linear suppression factors and assume that \begin{equation} \label{JASTROW}\Psi(\lambda,\rho)=\big[ \prod_i x_i(\lambda,\rho)\big] \Phi(\lambda,\rho) \end{equation} with some regular $\Phi$. (This procedure is known in nuclear and condensed-matter many-body physics through the use of Jastrow-type wave functions.) Let us then take this regular function to be a Gaussian centered in the triangle \begin{equation} \label{GAUSS}\Phi(\lambda,\rho)= \exp\bigg( -A\bigg(\lambda^2 +\bigg(\rho - {1\over \sqrt{6}}\bigg)^2\bigg)\bigg) \end{equation} with a variational parameter $A$. We use (\ref{JASTROW}), evaluate the average of the Laplacian and of the potential $V$, and plot the result as a function of $A$ in Fig.~\ref{fig_L_V_av}. As expected, increasing $A$ -- that is, making the wave function better localized near the center -- leads to a growth of the mean Laplacian and a decrease of the mean $V$. Taking those two averages with proper coefficients, one finds a minimum of the total Hamiltonian. \begin{figure}[h!] \begin{center} \includegraphics[width=6cm]{L-av} \includegraphics[width=6cm]{V-av} \caption{The average Laplacian (upper) and $V(\lambda,\rho)$ (lower plot) versus the variational parameter $A$. See text.} \label{fig_L_V_av} \end{center} \end{figure} \subsection{Longitudinal momentum fractions in factorizable coordinates} The longitudinal motion can be treated in a different way, by a nonlinear but factorizable map into a new set of variables. This mapping was developed in~\cite{Shuryak:2019zhv} for any number of constituents; in particular, it was used for the 3 and 5 quark sectors of the baryons. 
Let us parameterize the three momentum fractions of the quarks using the following three parameters $s,t,u$ \begin{eqnarray} \label{eqn_long_map} x_1&=&u\big({1+s \over 2}\big)\big( {1+t \over 2}\big) , \nonumber \\ x_2&=&u\big( {1-s \over 2} \big)\big({1+t \over 2} \big), \nonumber \\ x_3&=& u\big({1-t \over 2} \big), \end{eqnarray} The longitudinal momentum constraint $x_1+x_2+x_3=u$ will be enforced later by setting $u\rightarrow 1$. The inverse map better explains the meaning of $s,t$ as ``asymmetries'' \begin{eqnarray} s&=&{x_1-x_2 \over x_1+x_2}, \nonumber \\ t&=&{x_1+x_2-x_3 \over x_1+x_2+x_3}, \nonumber \\ u&=& x_1+x_2+x_3 \end{eqnarray} The corresponding metric and Laplacian in the $s,t,u$ coordinates are listed in Appendix~\ref{sec_basis}. The main point is that the physical domain of the $s,t$ variables is a $square$, since both vary between -1 and 1. With the help of appropriate Jacobi polynomials, one can have factorized orthonormal basis functions, in terms of which the Hamiltonian matrix elements can be computed. With some model Hamiltonian (different from the one used in the present paper), the mass and wave function for the lowest $\Delta$ states have been evaluated in~\cite{Shuryak:2019zhv}, see Fig.~\ref{fig_Delta_s_t}. We show it in order to compare with the wave functions to be derived below. Note that the wave function is approximately Gaussian, with strong suppression near the edges of the physical domain. (It is rather different from that of the nucleon, see the original paper.) \begin{figure}[htbp] \begin{center} \includegraphics[width=6cm]{Delta-s-t} \caption{The wave function of the $\Delta(3/2)$ baryon, in $s,t$ coordinates, from \cite{Shuryak:2019zhv}. } \label{fig_Delta_s_t} \end{center} \end{figure} In this paper we will not use the $s,t$ coordinates and the Jacobi polynomial basis. Yet we note that, whatever coordinates or basis are used, one cannot simply postulate a convenient Hamiltonian in those coordinates, whatever the motivation. In particular, the Laplacian in the original coordinates should be re-written using the pertinent expressions from differential geometry. For the $s,t$ map given above, the Laplacian is involved; it is listed in Appendix~\ref{sec_app_st}. \section{Nambu-Goto string and confinement} \subsection{Confining light front Hamiltonian} Ignoring Coulomb and spin effects, we start by focusing on confinement by a relativistic string. The action in the first-quantized form can be written as \begin{eqnarray} \label{ACTION1} S[\theta]=&&\int_0^T d\tau \sum_{i=1}^3\bigg(e_i m_i^2+\frac 1{4e_i} \dot{x}_i^2\bigg)\\ &&+ \sigma_T\sum_{i=1}^3\int_0^T d\tau \int_0^1d\sigma_i \sqrt{\dot{X}^2_i{X}_i^{\prime 2}-(\dot{X}_i\cdot X_i^\prime)^2}\nonumber \end{eqnarray} In the first term, describing endpoint masses, we use the ``einbein trick'', which we use consistently throughout these papers to get rid of unwanted square roots. Note that if one performs minimization with respect to the three einbein parameters $e_i$, it yields back the standard free relativistic action for massive particles (in Euclidean signature). The Nambu-Goto world-sheet action includes derivatives over the internal coordinates $\tau,\sigma$, denoted by a dot and a prime, respectively. 
The world-sheets themselves can be described by so-called ``ruled surfaces'', parametrized by \begin{eqnarray} \label{EMBEDDING} X_i^\mu(\tau, \sigma_i; \theta)&=&z^\mu(\tau, \theta)+\sigma_i b_i^\mu \nonumber \\ b_i^\mu &=& (b_{i\perp}, b_{i3}, 0) \\ z^\mu(\tau, \theta)&=&(0_\perp, {\rm sin}\theta\tau, {\rm cos}\theta \tau) \nonumber \end{eqnarray} with $z^\mu(\tau, \theta)$ the world-line of the string junction. (Our notations for the coordinates are 1,2 for the transverse directions, 3 for the longitudinal beam direction, and 4 for time.) For baryons in the so-called {\it star configuration}, the string junction and the end-points follow parallel trajectories, sloped at angle $\theta$ with respect to the 4-direction. For $\theta=0$, the analysis corresponds to a star baryon in the rest frame. For arbitrary $\theta$ with subsequent analytical continuation $\theta\rightarrow -i\chi$, the analysis corresponds to a star baryon on the light front. As already explained above, to factor out spurious motion of the center of mass, we use Jacobi coordinates. For equal quark masses $$m_1=m_2=m_3=m_Q\ ,$$ the center of mass coincides with the location of the string junction $z^\mu$. Also, although the einbeins are arbitrary and fixed only by minimization for the free part, symmetry suggests that the minima are equal, $e_1=e_2=e_3=e$, with only $e$ to minimize, by steepest descent. This will be assumed throughout. Let us define Jacobi coordinates for the end-points \begin{eqnarray} b^\mu_1&=&\frac 1{\sqrt{6}}b_\lambda^\mu+\frac 1{\sqrt{2}}b_\rho^\mu\nonumber\\ b^\mu_2&=&\frac 1{\sqrt{6}}b_\lambda^\mu-\frac 1{\sqrt{2}}b_\rho^\mu\nonumber\\ b^\mu_3&=& -\frac{\sqrt{2}}{\sqrt{3}}b_\lambda^\mu \end{eqnarray} with a kinetic contribution \begin{eqnarray} \label{END1} \int_0^Td\tau \bigg(3em_Q^2+\frac 3{4e}+\frac 1{4e}(\dot{b_\lambda}^2+\dot{b_\rho}^2)\bigg) \end{eqnarray} in (\ref{ACTION1}). 
The Nambu-Goto string contribution is \begin{eqnarray} \label{NB2} \int_0^Td\tau\, {\sigma_T} \sum_{i=1}^3 |\xi_i(\theta)| \end{eqnarray} with the invariant distances $$|\xi_i(\theta)|=(b_{i\perp}^2+{\rm cos}^2\theta b_{i3}^{2})^{\frac 12}\ ,$$ or, in the Jacobi coordinates \begin{eqnarray} \label{NB3} \xi^2_1(\theta)=&&\frac 16 b_{\lambda \perp}^2+\frac 12 b_{\rho\perp}^2\nonumber\\ &&+\frac 1{\sqrt{3}} b_{\lambda\perp}\cdot b_{\rho \perp} +{\rm cos}^2\theta\bigg(\frac 1{\sqrt{6}}b_{\lambda 3}+\frac 1{\sqrt{2}}b_{\rho 3}\bigg)^2\nonumber\\ \xi^2_2(\theta)=&&\frac 16 b_{\lambda \perp}^2+\frac 12 b_{\rho \perp}^2\nonumber\\ &&-\frac 1{\sqrt{3}}b_{\lambda \perp}\cdot b_{\rho \perp} +{\rm cos}^2\theta\bigg(\frac 1{\sqrt{6}} b_{\lambda 3}-\frac 1{\sqrt{2}} b_{\rho 3}\bigg)^2\nonumber\\ \xi^2_3(\theta)=&&\frac 23 b_{\lambda \perp}^2+\frac 23 {\rm cos}^2\theta b_{\lambda 3}^2 \end{eqnarray} The full action (prior to analytical continuation) is (\ref{END1}) plus (\ref{NB2}) \begin{eqnarray} \label{FULLX} S[\theta]\rightarrow \int_0^Td\tau &&\bigg(3em_Q^2+\frac 3{4e}\nonumber\\ &&+\frac 1{4e}(\dot{b_\lambda}^2+\dot{b_\rho}^2) +\sigma_T \sum_{i=1}^3 |\xi_i(\theta)|\bigg)\nonumber\\ \end{eqnarray} \subsection{Going to the light front frame} For $\theta\rightarrow -i\chi$ and $T\rightarrow iT_M$, (\ref{FULLX}) analytically continues to the light front Hamiltonian or squared mass \begin{eqnarray} \label{HLF} H_{LF}=&&\sum_{i=1}^3\bigg(\frac{k^2_{i\perp}+m_Q^2}{x_i}\nonumber\\ &&+2\sigma_T \big(|i\partial /\partial x_i|^2+M^2b_{i\perp}^2\big)^{\frac 12}\bigg) \end{eqnarray} with the constraints: transverse $\sum_{i=1}^3k_{i\perp}=P_\perp=0$ and longitudinal $\sum_{i=1}^3 x_i=1$, with the standard momentum fractions $x_i=k_i^+/P^+$. \subsection{A digression to 1+1 space-time} The Hamiltonian derived above contains a non-factorizable interaction between the longitudinal and transverse coordinates, which makes the problem difficult. So, before we address it in full, let us discuss its longitudinal part alone. The Hamiltonian (\ref{HLF}) then reduces to \begin{eqnarray} \label{HLFL} H_{LF,L}=\sum_{i=1}^3\bigg(\frac{m_Q^2}{x_i} +2\sigma_T |i\partial /\partial x_i|\bigg) \end{eqnarray} For a baryon in the star configuration, (\ref{HLFL}) yields a longitudinal squared mass spectrum $M_n^2$, and parton amplitudes $\varphi_n[x]$ \begin{eqnarray} \label{HLF5} \sum_{i=1}^3\bigg(\frac{m_Q^2}{x_i} +2\sigma_T |i\partial /\partial x_i|\bigg)\varphi_n[x] = M_n^2 \varphi_n[x]\nonumber\\ \end{eqnarray} Modulo the effective string tension from the 3-dimensional reduction, (\ref{HLF5}) is similar to the baryonic equation derived in 2-dimensional QCD~\cite{Bars:1976nk,Durgut:1976bc}. (\ref{HLF5}) can be regarded as the eigenvalue problem for 3 identical particles with parton-$x$ coordinates, moving in a box $0\leq x_i\leq 1$. If one naively replaces the potential by vanishing (Dirichlet) boundary conditions $\varphi_n(x_i=0,1)=0$, the eigenstates are standing waves, e.g. \begin{eqnarray} \label{SLATER} \varphi_n[x]\approx {2^{\frac 32}}\, \bigg({\rm sin}(n_1\pi x_1){\rm sin}(n_2\pi x_2){\rm sin}(n_3\pi x_3)\bigg) \nonumber\\ \end{eqnarray} with eigenvalues \begin{eqnarray} \label{MASSPU} M_n^2\approx 2\pi\sigma_T (|n_1|+|n_2|+|n_3|) \end{eqnarray} that reggeize along the diagonal $n_{1,2,3}=n\gg 1$ as \begin{eqnarray} \label{SLOPESPURIOUS} n\approx \frac{\alpha^\prime}{3}M_n^2 \end{eqnarray} where $\alpha^\prime=1/2\pi\sigma_T$ is the meson Regge slope. (The factor 1/3 appears because in the star configuration there are three strings.) 
Unfortunately, this solution is very naive, for several reasons. The most obvious is that the independent quantization of three quarks in a box ignores the important momentum conservation constraint $$x_1+x_2+x_3=1$$ and therefore contains spurious center of mass motion. As already discussed in the previous section, one can use other coordinates which are center-of-mass free. In particular, the Jacobi coordinates lead to a problem with $two$ particles inside an equilateral triangle. To solve this problem, we proceed in two steps. First, we unwind the square roots by using the einbein trick once again \begin{eqnarray} \label{CONF2} \sum_{i=1}^3\bigg|\frac{i\partial}{\partial x_i}\bigg|&=&\sum_{i=1}^3\frac 12\bigg(\frac 1{e_{iL}}+e_{iL}\bigg(\frac{i\partial}{\partial x_i}\bigg)^2\bigg)\nonumber\\ &\rightarrow&\frac 12\bigg(\frac 3{e_{L}}+e_{L}\sum_{i=1}^3\bigg(\frac{i\partial}{\partial x_i}\bigg)^2\bigg) \end{eqnarray} and assume equal $e_{iL}=e_L$ at the extrema, in the steepest descent approximation. Second, we isolate the center of mass coordinate, using the Jacobi coordinates (\ref{eqn_Jacobi_x}). The 3-particle Laplacian in those coordinates is the sum of a 2-particle reduced Laplacian plus a derivative with respect to the center of mass variable \begin{eqnarray} \label{SUMX} \sum_{i=1}^3\bigg(\frac{i\partial}{\partial x_i}\bigg)^2= \bigg(\frac{i\partial}{\partial\lambda}\bigg)^2+\bigg(\frac{i\partial}{\partial\rho}\bigg)^2+3 \bigg(\frac{i\partial}{\partial X}\bigg)^2 \nonumber\\ \end{eqnarray} For fixed center of mass $X=1$, (\ref{eqn_Jacobi_x}) maps the confining box region $B=[0,1]^3$ of the coordinates $x_i$ onto an equilateral triangle $\Sigma(x)$ of side $L=\sqrt{2}$, with corners located at $$(\lambda, \rho)=\bigg(-\sqrt{\frac 23},0\bigg), \bigg(\frac{1}{\sqrt{6}}, \frac{1}{\sqrt{2}}\bigg), \bigg(\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{2}}\bigg)$$ The corners correspond to one particle carrying all the momentum, with the two others at rest. The eigensystem of the first two terms in the Laplacian (now free from the center of mass motion!) amounts to solving \begin{eqnarray} \label{REDX} -\bigg(\frac{\partial^2}{\partial\lambda^2}+\frac{\partial^2}{\partial\rho^2}\bigg) \varphi_{m_L,n_L}(\lambda, \rho)=e_{m_Ln_L}\varphi_{m_L,n_L}(\lambda,\rho)\nonumber\\ \end{eqnarray} inside the triangle $\Sigma$, with the Dirichlet boundary condition $\varphi_{m_L,n_L}(\partial \Sigma)=0$. Remarkably, although the solutions are not available for generic triangles, they are in fact known in closed form for equilateral triangles, as found in \cite{RICHENS1981495}. Their existence is due to the finite number of ray reflections, which form a closed set, as explained in Appendix \ref{app_triangle}. The spectrum of the Laplacian is given by \begin{eqnarray} \label{EMN} e^D_{m_Ln_L}=\bigg(\frac{4\pi}{3L}\bigg)^2\bigg(\bigg(m_L-\frac {n_L}2\bigg)^2+\frac 34 n_L^2\bigg)\equiv \tilde{e}^D_{m_Ln_L}\pi^2\nonumber\\ \end{eqnarray} with integer-valued longitudinal quantum numbers $m_L,n_L$, restricted by $m_L\geq 2n_L$. 
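To make the level counting explicit, the sketch below (Python) enumerates the lowest Dirichlet eigenvalues (\ref{EMN}) for $L=\sqrt{2}$, together with the degeneracy pattern detailed next (states with $m_L>2n_L$ come in pairs, while $m_L=2n_L$ is non-degenerate); the cumulative count can then be compared with the Weyl area rule mentioned below.
\begin{verbatim}
# Lowest Dirichlet eigenvalues e^D_{m,n} of the equilateral triangle,
# Eq. (EMN) with L = sqrt(2), m >= 2n, n >= 1.
import numpy as np

L = np.sqrt(2.0)

def e_dirichlet(m, n):
    return (4*np.pi/(3*L))**2*((m - n/2)**2 + 0.75*n**2)

levels = sorted((e_dirichlet(m, n), m, n, 1 if m == 2*n else 2)
                for n in range(1, 12) for m in range(2*n, 40))

for e, m, n, deg in levels[:8]:
    print(f"m={m:2d}  n={n:2d}  deg={deg}  e={e:8.3f}")
# ground state: m=2, n=1, e = 8*pi^2/3 ~ 26.32
# Weyl estimate for the cumulative count: N(e) ~ (Area/(4*pi))*e,
# with Area = sqrt(3)*L**2/4 for the equilateral triangle.
\end{verbatim}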
The states with $m_L>2n_L$ are doubly degenerate, with the normalized eigenstates~\cite{RICHENS1981495} \begin{eqnarray} \label{BER1} &&\varphi_{m,n}^{Dc}(\lambda, \rho)=\frac{4 }{L\,3^{\frac 34}}\bigg[{\rm cos}\bigg(\frac{2\pi(2m_L-n_L)\rho}{3L}\bigg) {\rm sin}\bigg(\frac{2\pi n_L\tilde\lambda}{\sqrt{3}L}\bigg)\nonumber\\ &&-{\rm cos}\bigg(\frac{2\pi(2n_L-m_L)\rho}{3L}\bigg) {\rm sin}\bigg(\frac{2\pi m_L\tilde\lambda}{\sqrt{3}L}\bigg)\nonumber\\ &&+{\rm cos}\bigg(\frac{2\pi(m_L+n_L)\rho}{3L}\bigg) {\rm sin}\bigg(\frac{2\pi (m_L-n_L)\tilde\lambda}{\sqrt{3}L}\bigg)\bigg]\nonumber\\ &&\varphi_{m,n}^{Ds}(\lambda, \rho)=\frac{4 }{L\,3^{\frac 34}}\bigg[{\rm sin}\bigg(\frac{2\pi(2m_L-n_L)\rho}{3L}\bigg) {\rm sin}\bigg(\frac{2\pi n_L\tilde\lambda}{\sqrt{3}L}\bigg)\nonumber\\ &&-{\rm sin}\bigg(\frac{2\pi(2n_L-m_L)\rho}{3L}\bigg) {\rm sin}\bigg(\frac{2\pi m_L\tilde\lambda}{\sqrt{3}L}\bigg)\nonumber\\ &&-{\rm sin}\bigg(\frac{2\pi(m_L+n_L)\rho}{3L}\bigg) {\rm sin}\bigg(\frac{2\pi (m_L-n_L)\tilde\lambda}{\sqrt{3}L}\bigg)\bigg]\nonumber\\ \end{eqnarray} with $\tilde\lambda=\lambda+L/\sqrt{3}$. Their symmetry properties include, e.g., the $\rho$ mirror symmetry \begin{eqnarray} \varphi_{m_L,n_L}^{Dc,s} (\lambda, -\rho)=\pm \varphi^{Dc,s}_{m_L,n_L}(\lambda, \rho) \end{eqnarray} The Dirichlet states with $m_L=2n_L$ are non-degenerate, with the normalized eigenstates~\cite{RICHENS1981495} \begin{eqnarray} \label{BER2} &&\varphi^D_{2n_L, n_L}(\lambda, \rho)=\nonumber\\ &&\frac{2^{\frac 32}}{L\,3^{\frac 34}}\bigg[ 2{\rm cos}\bigg(\frac{2\pi n_L\rho}{L}\bigg){\rm sin}\bigg(\frac{2\pi n_L\tilde\lambda}{\sqrt{3}L}\bigg) -{\rm sin}\bigg(\frac{4\pi n_L\tilde\lambda}{\sqrt{3}L}\bigg)\bigg]\nonumber\\ \end{eqnarray} Since (\ref{BER1}--\ref{BER2}) are separable in $(\lambda, \rho)$ and harmonic, they are readily seen to solve (\ref{REDX}). The proof that these solutions form an orthonormal set on the triangle is nontrivial, but we checked a number of cases explicitly. Implicitly, it follows from the observation that the mode counting following from (\ref{EMN}) saturates the so-called Weyl area rule~\cite{RICHENS1981495}. We identify the ground state from the tower of states (\ref{BER2}) with $n_L=1$, and its radial excitations with $n_L>1$. In Fig.~\ref{fig_state_N} we show the probability distributions for $n_L=1,2$. \begin{figure}[htbp] \begin{center} \includegraphics[width=5cm]{state-NL1} \includegraphics[width=5cm]{state-NL2} \caption{Probability distribution $| \varphi^D_{2n_L,n_L}|^2$ for $n_L=1$ (upper) and $n_L=2$ (lower) in the $\lambda,\rho$ plane, with manifest mirror symmetry in $\rho$. } \label{fig_state_N} \end{center} \end{figure} These states are shown to reggeize below. We further note that (\ref{BER2}) can be recast as three standing waves with three ``momenta'' $\tilde k$ \begin{eqnarray} \label{DSTANDING} &&\varphi^D_{2n_L,n_L}(\lambda, \rho)=\nonumber\\ &&\frac{2^{\frac 72}}{L3^{\frac 34}} {\rm sin}\bigg(\frac{2\pi n_L\tilde k_0}{\sqrt{3}L}\bigg){\rm sin}\bigg(\frac{\pi n_L\tilde k_+}{\sqrt{3}L}\bigg) {\rm sin}\bigg(\frac{\pi n_L\tilde k_-}{\sqrt{3}L}\bigg)\nonumber\\ \end{eqnarray} in the triangular domain limited by the sides $$\tilde k_0=\sqrt{3}L/2,\qquad \tilde k_\pm=0,\qquad {\rm where}\quad \tilde k_0\equiv\tilde\lambda,\quad \tilde k_\pm\equiv\tilde\lambda\pm \sqrt{3}\rho$$ Remarkably, in the original Bjorken-$x$ coordinates the standing waves (\ref{DSTANDING}) are identical to those in (\ref{SLATER}), for fixed $X=x_1+x_2+x_3=1$, i.e. 
\begin{eqnarray} \label{DSTANDINGX} &&\varphi^D_{2n_L,n_L}(x_1,x_2,x_3)=(-1)^{n_L+1}\frac{2^{3}}{X3^{\frac 34}}\nonumber\\ &&\times {\rm sin}\bigg(\frac{n_L\pi x_1}X\bigg){\rm sin}\bigg(\frac{n_L\pi x_2}X\bigg){\rm sin}\bigg(\frac{n_L\pi x_3}X\bigg) \nonumber\\ \end{eqnarray} (This observation perhaps allows for the extension of the Dirichlet standing states and their excitations to the states of more exotic hadrons with $N>2$ compact multi-quark content -- tetraquarks, pentaquarks, hexaquarks -- \begin{eqnarray} \varphi_{n_L}^D(x_1, ..., x_N)=\frac{C_N}{X}\prod_{i=1}^N{\rm sin}\bigg(\frac{n_L\pi x_i}{X}\bigg) \end{eqnarray} with $X=\sum_{i=1}^Nx_i$, and the normalization $C_N$ fixed by the polygonal volume, set by the longitudinal momentum constraint $X=1$. The meson case $N=2$, of course, requires a single standing wave, as we used in our previous papers.) Using (\ref{EMN}), the contribution of the Laplacian to the baryon spectrum is \begin{eqnarray} \label{RADFREE} \Delta M_{m_Ln_L}^2\approx&& 2\pi\sigma_T\sqrt{3\tilde{e}^D_{m_Ln_L}}\nonumber\\ \approx&& \bigg(\frac{4}{\sqrt{6}}\bigg)\bigg(2\pi\sigma_T\bigg(\bigg(m_L-\frac {n_L}2\bigg)^2+\frac 34 n_L^2\bigg)^{\frac 12}\bigg)\nonumber\\ \end{eqnarray} For large quantum numbers, it reggeizes into a linear dependence. For the ground state and its radial excitation series, with $m_L=2n_L$ in (\ref{BER2}), its contribution to the spectrum can be compared to the conventional Regge trajectory of the mesons, $n=\alpha' M^2$ with $\alpha^\prime=1/2\pi\sigma_T$ \begin{eqnarray} \label{M2NN} 2\sqrt{2} n_L\approx \alpha^\prime \Delta M_{2n_L,n_L}^2 \end{eqnarray} There is an additional factor of $2\sqrt{2}\approx 2.83$, to be also compared with the Regge slope from the spurious spectrum (\ref{SLOPESPURIOUS}), where this factor is just 3, the number of strings. \\ \\ Let us add the following consideration. Although the ``cup'' potential $V=\sum m_Q^2/x_i$ has not yet been included, its very existence -- especially for heavy masses $m^2_Q/2\sigma_T\gg 1 $ -- motivated us to look at standing waves that vanish at the cup's boundaries. In the opposite limit of light quarks, $m^2_Q/\sigma_T\leq 1$, we can treat the cup potential as a small perturbation, and thus ignore its end-point constraint. (Recall that $m_Q$ is not the ``Lagrangian'' quark mass but an effective one, including the constituent quark mass. So even for light quarks $m_Q\sim 350\, MeV$, while $ \sqrt{\sigma_T}\approx 400\, MeV$, so this limit may be of academic interest only.) In this case, it is perhaps more appropriate to use free end-point, or $Neumann$, boundary conditions $$\varphi^\prime_{n_L}(x_i=0,1)=0$$ so as to minimize the {\it kinetic} contribution for the excited states. Again, the eigenstates free of center of mass motion can also be sought using the ray reflection method, as we suggest in Appendix~\ref{app_triangle}. 
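Before turning to the Neumann analogue, a quick numerical check of the Dirichlet tower: the sketch below (Python) verifies by finite differences that the product form (\ref{DSTANDINGX}), rewritten in the Jacobi coordinates (\ref{eqn_Jacobi_x}) at $X=1$, solves (\ref{REDX}) with the eigenvalue $e^D_{2n_L,n_L}=8\pi^2 n_L^2/3$ following from (\ref{EMN}) at $L=\sqrt{2}$.
\begin{verbatim}
# Finite-difference check that sin(nL pi x1) sin(nL pi x2) sin(nL pi x3),
# with x_i(lambda, rho) from eqn_Jacobi_x at X = 1, satisfies
# -(d^2/dlam^2 + d^2/drho^2) phi = (8*pi^2*nL^2/3) phi.
import numpy as np

nL, h = 1, 1e-4

def phi(lam, rho):
    x1 = (np.sqrt(6)*lam + 3*np.sqrt(2)*rho + 2)/6
    x2 = (np.sqrt(6)*lam - 3*np.sqrt(2)*rho + 2)/6
    x3 = (-np.sqrt(6)*lam + 1)/3
    return np.sin(nL*np.pi*x1)*np.sin(nL*np.pi*x2)*np.sin(nL*np.pi*x3)

def minus_lap(lam, rho):
    return -(phi(lam + h, rho) + phi(lam - h, rho) + phi(lam, rho + h)
             + phi(lam, rho - h) - 4*phi(lam, rho))/h**2

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.dirichlet([2.0, 2.0, 2.0])     # random interior point
    lam = (x[0] + x[1] - 2*x[2])/np.sqrt(6)
    rho = (x[0] - x[1])/np.sqrt(2)
    print(minus_lap(lam, rho)/phi(lam, rho), 8*np.pi**2*nL**2/3)
\end{verbatim}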
In particular, the Neumann analogue of the tower of Dirichlet standing states (\ref{DSTANDING}) built on the ground state is readily found as \begin{eqnarray} \label{NSTANDING} &&\varphi^N_{2n_L,n_L}(\lambda, \rho)=\nonumber\\ &&\frac{2^{\frac 72}}{L3^{\frac 34}} {\rm cos}\bigg(\frac{2\pi n_L\tilde\lambda}{\sqrt{3}L}\bigg){\rm cos}\bigg(\frac{\pi n_L\tilde k_+}{\sqrt{3}L}\bigg) {\rm cos}\bigg(\frac{\pi n_L\tilde k_-}{\sqrt{3}L}\bigg)\nonumber\\ \end{eqnarray} or equivalently in the Bjorken-$x$ coordinates \begin{eqnarray} \label{NSTANDINGX} &&\varphi^N_{2n_L,n_L}(x_1,x_2,x_3)=(-1)^{n_L}\frac{2^{3}}{X3^{\frac 34}}\nonumber\\ &&\times {\rm cos}\bigg(\frac{n_L\pi x_1}X\bigg){\rm cos}\bigg(\frac{n_L\pi x_2}X\bigg){\rm cos}\bigg(\frac{n_L\pi x_3}X\bigg) \nonumber\\ \end{eqnarray} (\ref{NSTANDING}) satisfies the Neumann boundary conditions in the triangular domain by inspection, with the same spectrum and Regge trajectory as in the Dirichlet case for $m_L=2n_L$, i.e. $e_{2n_L,n_L}^N=e_{2n_L,n_L}^D$, but with $n_L=0$ now a priori allowed. \subsection{Spectrum of the diagonal part of the Hamiltonian $H_0$}~\label{sec_H0} Our light front Hamiltonian contains three parts $$H_{LF}\approx H_{0\perp}+ H_{0 x_i} +\tilde V_{LF}$$ The first two are the transverse oscillator and the longitudinal ``triangular cup''; for both of them we managed to find complete sets of eigenfunctions. The remaining residual part is not amenable to analytic treatment and will be handled numerically. Also, we use the einbein trick to get rid of the square roots in the confining term \begin{eqnarray} \label{HLF2} H_{LF}\approx &&\sum_{i=1}^3\frac{k^2_{i\perp}+m_Q^2}{x_i}\nonumber\\ &&+\sigma_T \bigg(3a +\frac 1a \sum_{i=1}^3\big(|i\partial/\partial x_i|^2+(3m_Q)^2b_{i\perp}^2\big)\bigg)\nonumber\\ \end{eqnarray} with $M\approx 3m_Q$ used on the right-hand side to close the mass-squared operator. Again, we assume equal einbeins $a_i\rightarrow a$ in (\ref{HLF2}), by steepest descent. To the first kinetic term we add and subtract its value at $x_i=\frac 13$, producing an oscillator with fixed frequency and a residual potential $\tilde V$ which is close to zero at the center of the triangular cup. In terms of the Jacobi coordinates, the diagonalizable part reads \begin{eqnarray} \label{HLF3} &&H_{0LF}=3(\vec p_\rho^2+\vec p_\lambda^2+3m_Q^2) \\ &&+\frac {\sigma_T}a \bigg(|i\partial/\partial\lambda|^2+|i\partial/\partial\rho|^2+(3m_Q)^2(\vec b_\lambda^2+\vec b_\rho^2)\bigg)\nonumber \end{eqnarray} where all the vectors are in the transverse plane, and $\vec b_\lambda, \vec b_\rho$ are the coordinates conjugate to the corresponding momenta. To elucidate the dependence on $a$ we rewrite it as \begin{eqnarray} \label{HLF4} &&M^2_0(n_\lambda, n_\rho, n_L, m_L)=(3m_Q)^2\nonumber\\ &&+\frac{\sigma_T}{\sqrt{a}}M_\perp^2(n_\lambda, n_\rho)+\frac{\sigma_T}a M_L^2(m_L, n_L)+3\sigma_T a\nonumber\\ \end{eqnarray} with \begin{eqnarray} &&M_L^2(m_L, n_L)=e^D_{m_Ln_L}\nonumber\\ &&M_\perp^2(n_\lambda, n_\rho)=\frac{6\sqrt{3}m_Q}{\sqrt{\sigma_T}}(n_\lambda+n_\rho+2) \end{eqnarray} The einbein in (\ref{HLF4}) minimizes the squared mass and is the solution of the quartic (Ferrari) equation $$6\,(\sqrt{a})^4-M_\perp^2\sqrt{a}-2M_L^2=0$$ For large longitudinal quantum numbers $n_L, m_L\gg 1$ the squared mass reggeizes, $$M^2_0\approx 2\sqrt{3}\sigma_T M_L$$ as we noted earlier. 
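This minimization is elementary to perform numerically. The sketch below (Python) finds the positive root of the quartic and confirms it against a direct scan of (\ref{HLF4}); the numerical inputs, chosen to mimic the charm case with our standard $\sigma_T$, are purely illustrative.
\begin{verbatim}
# Einbein minimization of Eq. (HLF4):
#   M0^2(a) = (3 mQ)^2 + sigma*Mperp2/sqrt(a) + sigma*ML2/a + 3*sigma*a,
# stationary where 6 s^4 - Mperp2*s - 2*ML2 = 0, with s = sqrt(a).
import numpy as np

sigma, mQ = 0.16, 1.8182                       # GeV^2, GeV (illustrative)
Mperp2 = 6*np.sqrt(3)*mQ/np.sqrt(sigma)*2.0    # n_lambda = n_rho = 0
ML2 = 8*np.pi**2/3                             # e^D_{2,1} (ground state)

def M0sq(a):
    return (3*mQ)**2 + sigma*Mperp2/np.sqrt(a) + sigma*ML2/a + 3*sigma*a

roots = np.roots([6.0, 0.0, 0.0, -Mperp2, -2.0*ML2])
s = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
a_star = s**2

a_grid = np.linspace(0.05, 50.0, 20000)
a_scan = a_grid[np.argmin(M0sq(a_grid))]
print(a_star, a_scan, M0sq(a_star))   # quartic root matches the scan
\end{verbatim}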
However, for large transverse quantum numbers $n_\lambda, n_\rho\gg 1$ the squared mass does not reggeize: \begin{eqnarray} M_0^2\approx 18\sigma_T\bigg(\frac{M_\perp^2}{6\sigma_T}\bigg)^{\frac 23} \end{eqnarray} Recall that the results following from $H_{0LF}$ are still to be modified by the additional residual contributions stemming from $\tilde V_{LF}$, to be added below, which are however independent of our variational parameter $a$. Therefore the minimization over $a$ can already be performed numerically. With our standard value for the string tension, $\sigma_T=(0.4\, GeV)^2$, and the quark masses for $b,c,s,q$, we show in Fig.~\ref{fig_M2_of_a} the dependence on $a$ of the lowest eigenvalue for each species. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{M2-of-a} \caption{The lowest eigenvalue of $H_{0LF}$ in $GeV^2$ versus the (dimensionless) einbein parameter $a$, for $b,c,s,q$ quarks. Using this plot we perform the minimization in $a$. } \label{fig_M2_of_a} \end{center} \end{figure} \section{The non-factorizable potential $\tilde V$} The non-factorizable part of the potential is \begin{eqnarray} \tilde V&=&{\vec p_1^2+m_Q^2\over x_1} + {\vec p_2^2 +m_Q^2\over x_2} \nonumber\\ &+& {\vec p_3^2+m_Q^2\over x_3} - 3 (\vec p_1^2 +\vec p_2^2 + \vec p_3^2)-9m_Q^2 \end{eqnarray} Using the Jacobi coordinates for the transverse and longitudinal momenta, we get \begin{eqnarray} \label{eqn_tilV} \tilde V=&& -\bigg(3 \big(-2 p_\lambda p_\rho (\sqrt{6} - 6 \lambda) \rho + 9 m_Q^2 (2 \lambda^2 + \sqrt{6} \lambda^3 \nonumber\\ &&\qquad+ 2 \rho^2 - 3 \sqrt{6} \lambda \rho^2) + p_\lambda^2 (9 \lambda^2 + 3 \sqrt{6} \lambda^3 + 3 \rho^2 \nonumber \\ &&\qquad+ \sqrt{6} \lambda (1 - 9 \rho^2)) + p_\rho^2 (3 \lambda^2 + 3 \sqrt{6} \lambda^3 + 9 \rho^2 \nonumber \\ &&\qquad- \sqrt{6} \lambda (1 + 9 \rho^2))\big) \bigg)\nonumber\\ &&\qquad\times\bigg( { 1 \over -2 + 9 \lambda^2 + 3 \sqrt{6} \lambda^3 + 9 \rho^2 - 9 \sqrt{6} \lambda \rho^2}\bigg)\nonumber\\ \end{eqnarray} For zero orbital motion, the two oscillators are independent, and the term $\langle p_\lambda p_\rho \rangle $ vanishes on average. $\langle p_\lambda^2\rangle$ and $ \langle p_\rho^2 \rangle $ are directly related to the numbers of quanta $n_\lambda,n_\rho$, and so one only has to calculate the matrix elements with respect to the longitudinal quantum numbers, $\langle n_L,m_L | \tilde V | n_L' m_L' \rangle$, see Appendix~\ref{sec_tilV}. \subsection{Masses of the states} With the evaluation of the matrix $\tilde V$ and its eigenvalues, our technical task is completed. We can now finally carry out the calculation of the full eigenvalues -- the squared masses of the flavor symmetric baryons -- for the four quark flavors $b,c,s,q$. Here we keep the longitudinal quantum numbers at their lowest values, $n_L=1,m_L=2$, and assume that the transverse oscillators are excited as a function of a single $n=(n_\rho+n_\lambda)/2 $. Our results are shown as black symbols in Fig.~\ref{fig_all_masses}. For comparison, we show the experimental masses as red hexagons; the blue hexagons are model predictions for the $ccc$ and $bbb$ baryons. Before we assess the lessons we infer from these results, it is worth recalling our previous findings for mesons~\cite{Shuryak:2021hng,Shuryak:2021mlh}:\\ \noindent (i) All squared masses $M^2$ were larger than the experimental ones, by about the same constant. This was not considered a problem, since confinement does not fix the absolute normalization of the potential. 
A constant can be arbitrarily added;\\ (ii) The dependence on the radial $n$ and orbital $l$ quantum numbers was found to be linear for each quark flavor, although with a flavor-dependent slope;\\ (iii) For light quark flavors, the slope was in agreement with the experimentally observed Regge slope $\alpha'=1/2\pi \sigma_T$, with the string tension taken from the static quark potentials well known from the lattice and from quarkonia. Now, looking at Fig.~\ref{fig_all_masses}, we see that (i) and (ii) are also true for the presented calculation of the baryon masses, but (iii) is strongly violated: the Regge slopes we obtain are now different. This feature can be anticipated from the simple observation that baryons with heavier quark masses are dominated by those masses, with less dependence on the quantum numbers. Note that this plot shows a small range of $n$, for which the large-$n$ asymptotics discussed in the text (e.g. in the semi-classical calculation presented in Section~\ref{sec_semiclassics}) does not apply. \begin{figure}[h] \begin{center} \includegraphics[width=7cm]{all-masses} \caption{Squared masses of baryons $M_{n+1}^2(Q,\frac 32)$ in $GeV^2$, versus the principal quantum number $n+1=1,\ldots,7$. The black circles, triangles, squares, and pentagons are the results of our calculations for the flavors $b,c,s,q$. The red hexagons are the experimental values of three $\Delta^{++}$ and one $\Omega^-$ masses, from the PDG. The two blue hexagons are model predictions for the masses of the $ccc$ and $bbb$ baryons, from Table I.} \label{fig_all_masses} \end{center} \end{figure} \section{Wave functions of the states} Our main results are not the masses of the $\frac 32^+$ states and their radial excitations, but the corresponding wave functions. The ground states in all channels have a transverse momentum dependence that is approximately Gaussian, e.g. $\psi(p_\perp) \sim \exp(-\beta^2 p_\perp^2/2)$. The scale parameter $\beta$ is related to the mass and frequency $\mu,\omega$ of the effective oscillator, \begin{equation} \beta= \sqrt{\mu \omega} = \bigg({a \over 3 m_Q^2 \sigma_T} \bigg)^{\frac 14} \end{equation} The mean square of the transverse momentum is approximately \begin{equation} \langle p_\perp^2 \rangle \approx \beta^{-2} \approx 0.942, 0.466, 0.183, 0.104 \, (GeV^2) \end{equation} for the $b,c,s,q$ $3/2^+$ flavor symmetric baryons, respectively. The longitudinal wave functions for heavy quark masses $m_Q$ are defined mainly by the $O(m_Q^2)$ part of the potential $\tilde V$. They are discussed in Appendix~\ref{app_triangle}, and illustrated in Fig.~\ref{fig_tilV_wf} by the solid black line. In general, since the Hamiltonian is $H_0$ plus $\tilde V$, the wave functions lie between their eigenfunctions, i.e., between the solid and dashed lines in Fig.~\ref{fig_tilV_wf}. We note that the difference is mostly around the maximal negative $\lambda$, i.e., when $x_3$ is close to 1. Since the triangle is equilateral and the wave function is symmetric, this implies that the suppression in fact occurs near all three corners of the triangle. \section{Conclusions} We start by recalling the chief goals of this series of papers: to bring the studies of light front observables -- DAs, PDFs, GPDs, etc. -- to the same logical and methodical structure as we have in atomic and nuclear physics. Quantum dynamics should be defined by a Hamiltonian, which is then diagonalized to find its physical states. 
The Hamiltonian can be simplified at first, and then improved with the addition of more complicated contributions (Coulomb, spin, etc.), but its basic properties (e.g. mutual orthogonality of states) are to be preserved. Any wave function is complete, in the sense that one can calculate any observable from it. In our attempts to $derive$ the light-front Hamiltonian $H_{LF}$ in QCD at {\it low resolution}, we use the theoretical and empirical information we have at the moment. It falls into two categories: (i) $confinement$ via relativistic QCD strings (flux tubes, generated perhaps from long P-vortices on the lattice); and (ii) further information on Wilson line correlators, based on semiclassical models of the Euclidean QCD vacuum built on instantons (as unraveled by current lattice cooling). In this paper, we only considered the confinement effect. The important distinctions between our approach and other versions of $H_{LF}$ in the literature are, among others: \\(i) $H_{LF}$ is derived from established lattice facts; \\(ii) We do not have any parameters which can be fitted to the experimental data. All we use are the standard values of the quark masses and the string tension $\sigma_T$;\\ (iii) We do not consider just a few states in each quark channel, but analyze as many states as feasible, relating the results to the experimentally observed Regge behavior. This paper is mostly technical in nature; in it we showed {\em how one can solve the quantum mechanical problem} of three identical quarks, given their interaction via a confining string. Unlike other approaches in the light front literature, we do not use more degrees of freedom than needed (and then try to ``subtract center of mass motion''). Jacobi coordinates are used throughout, in spite of the fact that in the longitudinal direction one has to solve a Schroedinger equation on a triangular manifold. The ``unfactorizable'' potential is defined, set into its minimal form, and its effect on the wave functions is evaluated by diagonalization in the appropriate basis. The setting assumed the so-called ``star'' configuration of flavor symmetric ($QQQ$) baryons with spin $\frac 32$, in which all quarks are connected by QCD strings to a featureless junction at the CM. It is the most symmetric setting, and that is why we decided to start with it. However, it is known that (at least light) baryons have a significant ``diquark'' component, which yields a baryon Regge slope equal to the meson one. Additional empirical effects follow as well. The physical effects leading to diquarks, and their role in baryon wave functions, will be discussed in the sequels of this series. \vskip 1cm {\bf Acknowledgements} This work is supported by the Office of Science, U.S. Department of Energy under Contract No. DE-FG-88ER40388.
{ "timestamp": "2022-02-02T02:08:27", "yymm": "2202", "arxiv_id": "2202.00167", "language": "en", "url": "https://arxiv.org/abs/2202.00167" }
\section{Introduction} \label{sec:intro} Many multi-robot applications such as search and rescue, environment monitoring, cooperative localization and mapping, target tracking, and entertainment have gained increasing attention in recent years \cite{queralta2020sarsurvey,shule2020mulituwbsurvey,rizk2019cooperative}. For these operations to succeed, it is important that the individual robots' poses and sensor measurements are expressed in a common reference frame. Therefore, the problems of acquiring the instantaneous relative pose or the initial relative frame transformation between the robots are of great interest. In this paper, we refer to the former problem as relative pose estimation (RPE) and the latter as relative transformation estimation (RTE), although the terms are sometimes used interchangeably in the literature. When an external reference system is accessible (e.g., GPS and compass, an a priori known map or layout of the environment, or UWB anchors with a known constellation), the robots' own poses are readily available and the relative poses can be easily computed. Hence, both the RPE and RTE problems are solved. However, the application will be limited to the area covered by the external system. For example, GPS would not be reliable in environments such as indoors, underground, underwater, or forests; no prior map would be available for an unknown environment; and localization with UWB anchors is limited to the area where the anchors are installed. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/sys_overview.png} \caption{a) Overview of the proposed system. Our goal is to estimate $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{T}}$. In general, the smaller the ratio $D/d_0$, the harder the problem. b-c) Examples of ``easier'' (b) and ``harder'' (c) trajectory configurations, where (c) is the initial section of (b). Our solutions outperform previous methods in both cases.} \label{fig:sys_overview} \end{figure} Thus, many studies on using sensor measurements to obtain the relative poses have been put forth. Depending on the exteroceptive sensor (e.g., camera, LiDAR, radar, UWB) equipped onboard the robot, the inter-robot relative measurements can be obtained in the form of relative bearing, distance, position, pose, or some combination thereof \cite{zhou2013relrangebearing,knuth2013colab}. However, one implicit assumption is that the neighbor robot has a known shape, size, 3D model, or markers. Furthermore, the complexity and hardware cost differ greatly depending on the sensor and should be taken into consideration \cite{de2017survey}. When using a camera or LiDAR, rich information regarding the neighbor robot as well as the environment can be extracted with sophisticated algorithms \cite{shenghai2021ussurvey}. In general, the performance can be limited by \cite{de2017survey}: 1) the sensor's field of view (FOV) and detection range, 2) environmental conditions such as lighting, rain, fog, etc., 3) the complexity of the framework, which includes data processing (target detection from image/point cloud) and target tracking and re-identification over time, and 4) robustness against false detection and misclassification.
In contrast, the UWB sensor offers several key advantages: 1) the measurement is omnidirectional, with cm-level accuracy and long range (up to hundreds of meters); 2) the measurement is unaffected by lighting conditions; 3) the ranging data is comparatively simple to model and process; 4) each UWB tag (and by extension, each robot) can be assigned a unique ID without any ambiguity. Furthermore, UWB also provides a communication network \cite{nguyen2018robust} and is a more affordable, small, and lightweight sensor for a team of mobile robots compared to LiDAR \cite{shenghai2021ussurvey}. On the other hand, the UWB sensor might suffer when line-of-sight (LOS) is unavailable or under multipath conditions, and it provides no information about the surrounding environment. A single ranging measurement is also not sufficiently informative for either the RPE or the RTE task. As a result, designing an appropriate framework that leverages the pros and alleviates the cons of UWB and existing localization systems is a promising direction that has attracted more attention recently \cite{yu2021applications}. In this paper, our focus is on the 4 degree-of-freedom (DoF) RTE problem (i.e., estimating the 3D position and relative heading between the local frames) for a number of reasons. Firstly, if the robots' odometry data are highly accurate, the RPE and RTE problems can be seen as equivalent, since the relative pose can be obtained from the initial frame transformation and the current local poses. Given that state-of-the-art simultaneous localization and mapping (SLAM) systems have achieved great progress in terms of accuracy \cite{shenghai2021ussurvey,nguyen2021viralfusion}, the RTE problem should be preferred over RPE since the state vector is much smaller. Secondly, the IMU is one of the most popular sensors for SLAM systems thanks to the complementary advantages of fusing the IMU with other sensors such as cameras, LiDAR, radar, wheel encoders, etc. \cite{mohamed2019odomsurvey}. One characteristic of IMU-based SLAM systems is that there are four unobservable directions \cite{huang2019visual} (three for global translation, one for the rotation angle about the gravity axis). Thus, the study of the 4-DoF RTE problem is useful to augment existing IMU-based SLAM pipelines. Thirdly, we are able to provide detailed theoretical analysis and interpretation for the 4-DoF case, which is difficult for the general 6-DoF case. The main contributions of this work include: \begin{itemize} \item Theoretical analysis of the 4-DoF RTE problem and related sub-problems, including the derivation and interpretation of the CRLB, the FIM and its determinant; \item Optimization-based approaches to solve the 4-DoF RTE problem, namely the QCQP method and its SDP relaxation, with analysis and experimental results to demonstrate the performance and tradeoffs; \item Our methods take into account practical issues, including rejecting outliers, the spatial-temporal offsets between the sensors, checking for singular configurations, and providing the uncertainty of the estimates; \item Unlike previous works, where the system was often tested with only one trajectory configuration, we provide comprehensive simulation and real-life experiment results. \end{itemize} This paper is structured as follows. In Sect. \ref{sec:lit_review}, we review the related works on the RTE problem and highlight our contributions. The preliminaries and problem formulation are presented in Sect. \ref{sec:sys_overview}. The theoretical analysis is then presented in Sect.
\ref{sec:TheorecticalAnalysis}, followed by the proposed optimization-based approaches in Sect. \ref{sec:main_approaches}. Next, simulation and experimental results in Sect. \ref{sec:exp} are used to verify the performance of our methods compared to the state of the art, as well as to compare the advantages and disadvantages of the proposed approaches. The concluding remarks are drawn in Sect. \ref{sec:conclusion}. \section{Literature Review} \label{sec:lit_review} In single-robot cases, if UWB anchors can be pre-installed in the environment, many methods have been introduced to fuse the UWB-based localization with existing onboard localization (using IMU, camera, LiDAR, etc.) to reduce the error and accumulated drift \cite{yu2021applications,nguyen2020ranging}. Recent works have relaxed the deployment requirements greatly, down to even one anchor at an unknown position \cite{cao2020vir,Thien2020AuRO}. In multi-robot cases without UWB anchors, solving the RPE task by combining UWB and vision has been studied in \cite{cao2020vir,xu2020decentralized}. However, these approaches still rely mainly on vision for detection and tracking of neighbor robots, which means the disadvantages regarding limited FOV and detection range still apply. If an array of UWB antennas is available on at least one robot, the range-based relative localization (RRL) problem can be solved instantaneously \cite{guler2021reloc,xianjia2021cooperative,shalaby2021relative,nguyen2018robust}. Thus, the camera or LiDAR can be used for other tasks such as inspection or landing target detection. Such systems, however, require that the robot is relatively large in size in order to accommodate the UWB antenna configuration. Alternatively, various cooperative control or motion scheduling methods have been proposed \cite{shule2020mulituwbsurvey,nguyen2019persistently,guo2019ultra,cornejo2015distributed}, which can achieve RRL at the cost of time and energy. In our system, we assume no coordination of the robots' motion during the mission. We only require each robot's ego-motion estimate and a single inter-robot ranging link, which are the most general assumptions in terms of practicality. Algebraic and analytical solutions for the RTE problem using any combination of range and bearing measurements have been proposed in \cite{zhou2013relrangebearing,knuth2014relanycombinations}. In noisy cases, the solution might be sub-optimal and a refinement step would be necessary. As a result, a common strategy in many subsequent works \cite{molina2019unique,li2020relSDP,jiang2020rel3D,trawny2010relplanar} is to use a non-iterative method to find an initial solution, which is then refined with nonlinear least squares (NLS) or a variant of NLS. Given that any method can be paired with an NLS refinement step, the quality of the initial guess obtained from the non-iterative method is the main contributing factor when tested on the same dataset. In this paper, we compare the proposed methods against the previous non-iterative methods without the NLS refinement step, as well as against NLS without an initial guess, to make the distinction between the different approaches clear.
\begin{table}[t] \begin{adjustbox}{width=\columnwidth} \begin{tabular}[t]{ c|c|c} \toprule Work & Estimation problem & Method \\ \hline \multirow{2}{*}{\cite{shariati2016recovering}} & 2D translation, rotation, & Sampling-based \\ & scale factors & convex optimization \\ \hline \cite{li2020relSDP} & 2D translation, heading & SDP \\ \hline \cite{molina2019unique} & 3D translation, heading & Linear \\ \hline \cite{ziegler2021distributed} & 3D translation, heading & NLS \\ \hline \cite{trawny2010rel3Dtransform} & 3D translation, rotation & Algebraic \\ \hline \cite{jiang2020rel3D} & 3D translation, rotation & SDP \\ \hline \textbf{Ours} & 3D translation, heading & QCQP, SDP \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Related works on RTE using onboard ego-motion and inter-robot range measurements. The methods in \cite{molina2019unique,ziegler2021distributed,trawny2010rel3Dtransform,jiang2020rel3D} are included in our comparison.} \label{table:lit_review} \end{table} The main related works are summarized in Table \ref{table:lit_review}. Among them, \cite{shariati2016recovering,li2020relSDP} only address the 2D problem, \cite{molina2019unique, ziegler2021distributed} tackle the same 4-DoF problem, and \cite{jiang2020rel3D, trawny2010rel3Dtransform} solve the full 6-DoF problem. However, the methods in \cite{molina2019unique, trawny2010rel3Dtransform} are susceptible to UWB noise, with translation errors on the order of meters when the distance standard deviation is on the order of tens of centimeters; \cite{ziegler2021distributed} relies on having a very good initial guess not too far from the true value; \cite{jiang2020rel3D} requires solving a large problem (relative to the related works) with $16$ variables and $14$ constraints, while also ignoring the first distance measurement $d_0$, which is important in more challenging scenarios. \cite{jiang2020rel3D} is the most similar to our method in the sense that an SDP problem is formulated from an original non-convex optimization problem. Lastly, there is a lack of theoretical analysis of the problem in the existing literature. Our work directly addresses all of these issues, improves the performance, and provides a comparison between the SDP relaxation and QCQP approaches which is useful for practical applications. \section{System Overview} \label{sec:sys_overview} \subsection{Notations} \label{subsec:notations} Let $\prescript{A}{}{\mathbf{p}} \in \mathbb{R}^3$ and $\prescript{A}{}{\mathbf{R}} \in SO(3)$ be a position vector and a rotation matrix in frame $\{A\}$. The corresponding quaternion of $\prescript{A}{}{\mathbf{R}}$ is $\prescript{A}{}{\mathbf{q}} \in \mathbb{H}$. $\prescript{A}{}{\mathbf{T}}$ is a homogeneous transformation matrix in frame $\{A\}$, which is defined as: \begin{equation} \prescript{A}{}{\mathbf{T}} \coloneqq \begin{bmatrix} \prescript{A}{}{\mathbf{R}} & \prescript{A}{}{\mathbf{p}} \\ \mathbf{0}^\top & 1 \end{bmatrix} \in SE(3). \end{equation} Denote $\prescript{A}{B}{\mathbf{T}}$, $\prescript{A}{B}{\mathbf{R}}$ as the transformation and rotation matrices from frame $\{B\}$ to $\{A\}$. Noisy measurements and estimated values are indicated as $(\tilde{.})$ and $(\hat{.})$, respectively. $\mathbb{E}[\cdot]$ denotes the expectation operator.
For simplicity, denote $\textrm{s}\alpha = \sin{\alpha}$, $\textrm{c}\alpha = \cos{\alpha}$, $\textrm{s}(\alpha{\pm}\beta) = \sin{(\alpha{\pm}\beta)}$, $\textrm{c}(\alpha{\pm}\beta) = \cos{(\alpha{\pm}\beta)}$, $\mathbf{T}_{i,j}$ as the $(i,j)$-th element of the matrix $\mathbf{T}$, and $x_i$ as the $i$-th element of the vector $\mathbf{x}$. For a position vector ${\mathbf{p}} \in \mathbb{R}^3$, denote its elements as ${\mathbf{p}} {\coloneqq} [p_x, p_y, p_z]^{\top}$. Lastly, let $t_k$ be the timestamp of the latest UWB range measurement $d_k$, and let $d_0$ be the distance between the origins of the two robots' local frames (Fig. \ref{fig:sys_overview}a), which is typically the first distance measurement ever received. \subsection{Problem Formulation} \label{subsec:prob_form} Fig. \ref{fig:sys_overview}a shows an overview of the system, which consists of a pair of robots denoted as $\mathcal{R}_n, n \in \{1,2\}$. Each robot is equipped with a UWB sensor and an IMU-based onboard odometry system. In this work, we specifically consider visual-inertial odometry (VIO), as VIO is used in our experiments; however, other IMU-based modalities \cite{shenghai2021ussurvey} apply equally. Let $\{\mathcal{B}_n\}$ and $\{\mathcal{L}_n\}$ be the IMU body frame and the local odometry frame, respectively. The z-axis of $\{\mathcal{L}_n\}$ aligns with gravity. During the operation, the collected data include odometry from each robot and inter-robot range measurements. At time $t_k$, the available data set is: \begin{equation} \mathcal{J}_k = \{( \tilde{d}_{i}, \prescript{\mathcal{L}_1}{\mathcal{B}_1}{\tilde{\mathbf{p}}}_{i}^\top, \prescript{\mathcal{L}_1}{\mathcal{B}_1}{\tilde{\mathbf{q}}}_{i}^\top, \prescript{\mathcal{L}_2}{\mathcal{B}_2}{\tilde{\mathbf{p}}}_{i}^\top, \prescript{\mathcal{L}_2}{\mathcal{B}_2}{\tilde{\mathbf{q}}}_{i}^\top )\}_{i=1,...,k}, \end{equation} where $\tilde{d}_i$ is the inter-robot UWB ranging measurement, and $\prescript{\mathcal{L}_n}{\mathcal{B}_n}{\tilde{\mathbf{p}}}_{i}$ and $\prescript{\mathcal{L}_n}{\mathcal{B}_n}{\tilde{\mathbf{q}}}_{i}$ are the poses of the robots in their respective local frames $\{\mathcal{L}_n\}$. In this work, the peer-to-peer two-way time-of-flight (TW-ToF) UWB ranging scheme is employed to avoid complicated clock synchronization between the sensors \cite{nguyen2020ranging}. Without loss of generality, we assign $\mathcal{R}_1$ as the host robot and $\mathcal{R}_2$ as the target robot. Our goal is to estimate the relative frame transformation in the world frame $\{\mathcal{L}_1\}$ \begin{equation} \prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{T}} \coloneqq \begin{bmatrix} \prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{R}}& \prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{p}}\\ \mathbf{0}^\top & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{C} & \mathbf{t}\\ \mathbf{0}^\top & 1 \end{bmatrix}, \end{equation} with $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{R}} \coloneqq \mathbf{C}$ and $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{p}} \coloneqq \mathbf{t}$ for simplicity. Since it has been shown that VIO systems have four unobservable directions \cite{huang2019visual}, $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{T}}$ can be parameterized by \begin{equation} \mathbf{\Theta} \coloneqq [\mathbf{t}^\top, \theta]^\top = [t_x, t_y, t_z, \theta]^{\top}, \end{equation} with $\theta$ as the relative yaw angle between $\{\mathcal{L}_1\}$ and $\{\mathcal{L}_2\}$ (Fig. \ref{fig:sys_overview}a), i.e.
$\mathbf{C}$ can be calculated as the basic 3D rotation matrix around the $z$-axis by an angle $\theta$: \begin{equation} \mathbf{C} = \begin{bmatrix} \cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1 \end{bmatrix}. \end{equation} \subsection{UWB measurement model} We use the same UWB measurement model as our previous works \cite{nguyen2021viralfusion,Thien2021RAL}, which takes into account: 1) the spatial offset of the UWB antenna in the body frame, and 2) the temporal offset between UWB and odometry data. Let $\prescript{\mathcal{B}_n}{a_n}{\mathbf{p}}$ be the position of the UWB antenna $a_n$ in the body frame $\{\mathcal{B}_n\}$, which is obtained from calibration. At time $t_k$, the UWB antenna position in the local odometry frame is: \begin{equation} \prescript{\mathcal{L}_n}{a_n}{\mathbf{{p}}_k} = \prescript{\mathcal{L}_n}{\mathcal{B}_n}{\mathbf{p}_k} + \prescript{\mathcal{L}_n}{\mathcal{B}_n}{\mathbf{R}_k} \prescript{\mathcal{B}_n}{a_n}{\mathbf{p}}. \end{equation} The relationship between the noisy distance measurement and the state vector can be written as \begin{equation} \tilde{d}_{k} = d_k + \eta_k = \norm{ \mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k } + \eta_k, \end{equation} where $d_k = \norm{\mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k}$ is the true inter-robot distance. The measurement is assumed to be corrupted by independent and identically distributed Gaussian noise $\eta_k \sim \mathcal{N}(0, \sigma_{\textrm{r}}^2)$. The antenna position at time $t_k$, $\prescript{\mathcal{L}_n}{a_n}{\mathbf{p}}_k \coloneqq [p^{nx}_k,\; p^{ny}_k,\; p^{nz}_k]^\top$, is obtained from linear interpolation between the two nearest odometry poses. If the spatial offset is negligible (i.e., $\prescript{\mathcal{B}_n}{a_n}{\mathbf{p}} \approx \mathbf{0}$), the UWB measurement model reduces to the one commonly used in the literature (i.e., $d_k = \norm{\mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{\mathcal{B}_2}{\mathbf{p}}_k - \prescript{\mathcal{L}_1}{\mathcal{B}_1}{\mathbf{p}}_k}$). However, in some applications the UWB antenna is located at the boundary of the robot platform to ensure LOS during the operation and/or a large baseline between the antennas \cite{nguyen2021viralfusion,xianjia2021cooperative}. In such cases, $\prescript{\mathcal{B}_n}{a_n}{\mathbf{p}}$ is non-negligible and should be taken into account.
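As a concrete illustration, a minimal sketch of this measurement model in Python with NumPy is given below; all function and variable names are ours, chosen for illustration. It predicts the noise-free range $d_k$ from the interpolated local poses, the calibrated antenna offsets, and a candidate $(\mathbf{t}, \theta)$.
\begin{verbatim}
import numpy as np

def yaw_rotation(theta):
    """The matrix C: a basic 3D rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def antenna_position(p_B, R_B, p_offset):
    """Lever-arm compensation: antenna position in the local
    odometry frame, p_a = p_B + R_B @ p_offset."""
    return p_B + R_B @ p_offset

def predicted_range(t, theta, p_a1, p_a2):
    """Noise-free inter-robot distance
    d_k = || t + C @ p_a2 - p_a1 ||,
    with p_a1 expressed in {L1} and p_a2 in {L2}."""
    return np.linalg.norm(t + yaw_rotation(theta) @ p_a2 - p_a1)
\end{verbatim}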
\section{Theoretical Analysis} \label{sec:TheorecticalAnalysis} In this section, we present the derivation of the FIM, the CRLB and the determinant of the FIM associated with the 4-DoF RTE problem formulated in Sect. \ref{subsec:prob_form}. These entities provide different perspectives for understanding the problem, independent of any specific estimator. \subsection{Fisher Information Matrix and Cramér-Rao Lower Bound} \label{subsec:CRLB_derivation} Denote the noise-free and noisy distance vectors respectively as \begin{equation} \begin{aligned} \mathbf{f}(\mathbf{\Theta}) &= [d_1, \; d_2, \; \dots, d_k]^{\top},\\ \tilde{\mathbf{d}} &= [\tilde{d}_1, \; \tilde{d}_2, \; \dots, \tilde{d}_k]^\top. \end{aligned} \end{equation} The probability density function of $\tilde{\mathbf{d}}$ is given by: \begin{equation} \begin{aligned} p(\tilde{\mathbf{d}}, \mathbf{\Theta}) &= \frac{1}{(2\pi)^{k/2} \sqrt{\operatorname{det}(\mathbf{\Sigma}_r)}}\\ &\cdot \operatorname{exp} \left[ - \frac{1}{2} (\tilde{\mathbf{d}} - \mathbf{f}(\mathbf{\Theta}))^\top \mathbf{\Sigma}_r^{-1} (\tilde{\mathbf{d}} - \mathbf{f}(\mathbf{\Theta})) \right] \end{aligned} \end{equation} where $\mathbf{\Sigma}_r = \sigma_r^2 \; \mathbf{I}_{k \times k}$. Let $\hat{\mathbf{\Theta}}$ be an estimate of ${\mathbf{\Theta}}$. The CRLB is the lowest bound for the error covariance matrix of an unbiased estimator, i.e. \begin{equation} \label{eq:CRLB_definition} \mathbb{E}\left[ (\hat{\mathbf{\Theta}} - \mathbf{\Theta}) (\hat{\mathbf{\Theta}} - \mathbf{\Theta})^{\top} \right] \geq \textrm{CRLB} = \mathbf{F}^{-1}, \end{equation} where $\mathbf{F}$ is the FIM. The FIM encodes the amount of information provided by the set of measurements to estimate the parameters. In general, the $(i,j)$-th element of the FIM is: \begin{equation} \mathbf{F}_{i,j} \coloneqq \mathbb{E}\left[ \frac{\partial}{\partial \Theta_i}\ln \left( p(\tilde{\mathbf{d}}, \mathbf{\Theta}) \right) \frac{\partial}{\partial \Theta_j}\ln \left( p(\tilde{\mathbf{d}}, \mathbf{\Theta}) \right) \right], \end{equation} where the natural logarithm of $p(\tilde{\mathbf{d}}, \mathbf{\Theta})$ is \begin{equation} \ln \left( p(\tilde{\mathbf{d}}, \mathbf{\Theta}) \right) = - \frac{1}{2} (\tilde{\mathbf{d}} - \mathbf{f}(\mathbf{\Theta}))^\top \mathbf{\Sigma}_r^{-1} (\tilde{\mathbf{d}} - \mathbf{f}(\mathbf{\Theta})) + c, \end{equation} with $c$ being a constant scalar. Under the i.i.d. zero-mean Gaussian noise assumption, the FIM takes the form \cite{zekavat2011handbook}: \begin{equation} \begin{aligned} \mathbf{F} = \left[ \frac{\partial \mathbf{f}(\mathbf{\Theta})}{\partial \mathbf{\Theta}} \right]^{\top} \mathbf{\Sigma}_r^{-1} \left[ \frac{\partial \mathbf{f}(\mathbf{\Theta})}{\partial \mathbf{\Theta}} \right] \end{aligned} \end{equation} which can be rewritten as \begin{equation} \begin{aligned} \mathbf{F} = \frac{1}{\sigma_r^2} \sum\limits_{i=1}^{k} \mathbf{G}_i^\top \mathbf{G}_i, \end{aligned} \end{equation} where \begin{equation} \label{eq:G_i_orginial} \begin{aligned} \mathbf{G}_i &= \left[ \frac{\partial f_i(\mathbf{\Theta})}{\partial t_x}, \; \frac{\partial f_i(\mathbf{\Theta})}{\partial t_y}, \; \frac{\partial f_i(\mathbf{\Theta})}{\partial t_z}, \; \frac{\partial f_i(\mathbf{\Theta})}{\partial \theta} \right] \\ &= \left[ \partial_x f_i, \; \partial_y f_i, \; \partial_z f_i, \; \partial_{\theta} f_i \right]. \end{aligned} \end{equation} The derivations of the above Jacobians can be found in Appendix \ref{appendix:FIM_full}. Written out in full, the FIM is \begin{equation}\label{eq:FIM_simplified} \begin{aligned} \mathbf{F} &= \frac{1}{\sigma_r^2} \sum\limits_{i=1}^{k} \begin{bmatrix} (\partial_x f_i)^2 & \dots & (\partial_x f_i)(\partial_{\theta} f_i) \\ \vdots & \ddots & \vdots \\ (\partial_{\theta} f_i)(\partial_x f_i) & \dots & (\partial_{\theta} f_i)^2 \end{bmatrix} \\ &= \frac{1}{\sigma_r^2} \sum\limits_{i=1}^{k} \mathbf{G}_i^\top \mathbf{G}_i = \frac{1}{\sigma_r^2} \mathbf{J}^\top \mathbf{J}, \end{aligned} \end{equation} where the $i$-th row of the matrix $\mathbf{J}$ is $\mathbf{G}_i$, i.e. $\mathbf{J} \coloneqq \begin{bmatrix} \mathbf{G}_1\\ \vdots\\ \mathbf{G}_k \end{bmatrix}$. From Eq. (\ref{eq:FIM_simplified}), it can be seen that the FIM is either positive definite or positive semi-definite, and that it depends on the number of available measurements ($k$), on the precision of the measurements ($\sigma_r$), and on the Jacobians ($\mathbf{G}_i$).
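For reference, the sketch below assembles $\mathbf{F}$ and the CRLB numerically from the analytic Jacobian rows $\mathbf{G}_i = [\mathbf{u}_{i}^\top \;\; \Phi_i]$ derived in the next subsection (a sketch under our notation; \texttt{yaw\_rotation} is the helper from the previous listing, and all other names are illustrative).
\begin{verbatim}
import numpy as np

def fim_and_crlb(t, theta, P_a1, P_a2, sigma_r):
    """P_a1, P_a2: (k, 3) arrays of antenna positions in {L1}, {L2}.
    Returns F = (1/sigma_r^2) J^T J, J, and the CRLB entries."""
    C = yaw_rotation(theta)
    u_z = np.array([0.0, 0.0, 1.0])
    J = np.zeros((len(P_a1), 4))
    for i, (p1, p2) in enumerate(zip(P_a1, P_a2)):
        w = t + C @ p2 - p1
        u = w / np.linalg.norm(w)          # df_i/dt     = u^T
        phi = np.cross(u_z, C @ p2) @ u    # df_i/dtheta = Phi_i
        J[i, :3], J[i, 3] = u, phi
    F = (J.T @ J) / sigma_r**2
    crlb = np.linalg.inv(F)                # requires det(F) != 0
    crlb_t = crlb[0, 0] + crlb[1, 1] + crlb[2, 2]   # translation
    crlb_theta = crlb[3, 3]                          # heading
    return F, J, crlb_t, crlb_theta
\end{verbatim}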
If the FIM is non-singular, i.e. $\det(\mathbf{F}) \neq 0$, the CRLB can be obtained as the inverse of the FIM ($\mathbf{F}^{-1}$). For evaluation purposes, we additionally define the translation and heading CRLBs as \begin{equation} \begin{aligned} \textrm{CRLB}_t = \sum\limits_{i=1}^{3} [\mathbf{F}^{-1}]_{i,i},\;\textrm{CRLB}_{\theta} = [\mathbf{F}^{-1}]_{4,4}. \end{aligned} \end{equation} $\textrm{CRLB}_t$ and $\textrm{CRLB}_{\theta}$ are used to compare the mean square error (MSE) of different methods against the theoretical bound. \subsection{Determinant of the FIM} \label{subsec:det_FIM} Applying the Cauchy-Binet formula to $\mathbf{F} {=} \frac{1}{\sigma_r^{2}} \mathbf{J}^\top \mathbf{J}$, we have \begin{equation}\label{eq:det_FIM_4dof} \begin{aligned} \det(\mathbf{F}) = \frac{1}{\sigma_r^8} \sum\limits_{1 \leq j_1 < j_2 < j_3 < j_4 \leq k}^{} (\det(\mathbf{\Lambda}))^2 \end{aligned} \end{equation} where $\mathbf{\Lambda}$ denotes the $4\times4$ matrix consisting of the $j_1$-, $j_2$-, $j_3$-, $j_4$-th rows of $\mathbf{J}$. As shown in Appendix \ref{appendix:FIM_full}, we can write \begin{equation} \label{eq:Lambda_def} \mathbf{\Lambda} \coloneqq \begin{bmatrix} \mathbf{G}_{j_1} \\[0.2em] \mathbf{G}_{j_2} \\[0.2em] \mathbf{G}_{j_3} \\[0.2em] \mathbf{G}_{j_4} \end{bmatrix} = \begin{bmatrix} \mathbf{u}_{j_1}^\top & \Phi_{j_1} \\[0.2em] \mathbf{u}_{j_2}^\top & \Phi_{j_2} \\[0.2em] \mathbf{u}_{j_3}^\top & \Phi_{j_3} \\[0.2em] \mathbf{u}_{j_4}^\top & \Phi_{j_4} \end{bmatrix}, \end{equation} where $\mathbf{u}_{j_i} {=} \frac{1}{d_{j_i}} \prescript{\mathcal{L}_1}{a_{1{,}2}}{\mathbf{p}}_{j_i}$, $\Phi_{j_i} {=} \left[ \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}) \right] \cdot \mathbf{u}_{j_i}$ ($i \in \{1, 2, 3, 4\}$), $\mathbf{u}_z = [0, 0, 1]^\top$ is the unit vector along the $z$ axis, and $\prescript{\mathcal{L}_1}{a_{1{,}2}}{\mathbf{p}}_{j_i}$ is the relative position vector between the UWB antennas on the two robots in the world frame at time $t_{j_i}$. By applying the Laplace expansion along the last column of $\mathbf{\Lambda}$ in Eq. (\ref{eq:Lambda_def}), we have: \begin{equation}\label{eq:det_Lambda_Laplace_expansion} \begin{aligned} \det(\mathbf{\Lambda}) = -\Phi_{j_1} T_1 + \Phi_{j_2} T_2 - \Phi_{j_3} T_3 + \Phi_{j_4} T_4 \end{aligned} \end{equation} where the minors are: \begin{equation}\label{eq:det_Lambda_minors_original} \begin{aligned} T_1 {=} \begin{vmatrix} \mathbf{u}_{j_2}^\top \\[0.2em] \mathbf{u}_{j_3}^\top \\[0.2em] \mathbf{u}_{j_4}^\top \end{vmatrix},\ T_2 {=} \begin{vmatrix} \mathbf{u}_{j_1}^\top \\[0.2em] \mathbf{u}_{j_3}^\top \\[0.2em] \mathbf{u}_{j_4}^\top \end{vmatrix},\ T_3 {=} \begin{vmatrix} \mathbf{u}_{j_1}^\top \\[0.2em] \mathbf{u}_{j_2}^\top \\[0.2em] \mathbf{u}_{j_4}^\top \end{vmatrix},\ T_4 {=} \begin{vmatrix} \mathbf{u}_{j_1}^\top \\[0.2em] \mathbf{u}_{j_2}^\top \\[0.2em] \mathbf{u}_{j_3}^\top \end{vmatrix} \end{aligned} \end{equation} For any vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c} \in \mathbb{R}^3$, we have \begin{equation} \begin{aligned} &\det \left( \begin{bmatrix} \mathbf{a}^\top \\ \mathbf{b}^\top \\ \mathbf{c}^\top \end{bmatrix} \right) = \det([\mathbf{a} \; \mathbf{b} \; \mathbf{c}]) = (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} \\ &= \norm{\mathbf{a}} \norm{\mathbf{b}} \norm{\mathbf{c}} \left| \sin \left(\measuredangle (\mathbf{a}, \mathbf{b})\right) \right| \cos \left( \measuredangle (\mathbf{a} \times \mathbf{b}, \mathbf{c}) \right).
\end{aligned} \end{equation} As such, $T_4$ can be written as \begin{equation} \label{eq:det_Lambda_T4} \begin{aligned} T_4 &= (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}) \cdot \mathbf{u}_{j_3} \\ &= \norm{\mathbf{u}_{j_1}} \norm{\mathbf{u}_{j_2}} \norm{\mathbf{u}_{j_3}} \left| \textrm{s}{\alpha_{j_1,j_2}} \right| \textrm{c}{\measuredangle (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}, \mathbf{u}_{j_3})}\\ &= \left| \textrm{s}{\alpha_{j_1,j_2}} \right| \textrm{s}{\beta_{j_1 j_2, j_3}}, \\ \end{aligned} \end{equation} where $\alpha_{j_1,j_2} = \measuredangle (\mathbf{u}_{j_1}, \mathbf{u}_{j_2})$, $\beta_{j_1 j_2, j_3} = \frac{\pi}{2} - \measuredangle (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}, \mathbf{u}_{j_3})$. Note that $\beta_{j_1 j_2, j_3}$ is defined only when $\mathbf{u}_{j_1} \times \mathbf{u}_{j_2} \neq \mathbf{0}$, i.e. $\mathbf{u}_{j_1}$ and $\mathbf{u}_{j_2}$ are non-parallel. Otherwise, if $\mathbf{u}_{j_1} \times \mathbf{u}_{j_2} = \mathbf{0}$ then $T_4 = 0$ and $\beta_{j_1 j_2, j_3}$ is not relevant. Similarly: \begin{equation} \label{eq:det_Lambda_T123} \begin{aligned} T_1 &= (\mathbf{u}_{j_2} \times \mathbf{u}_{j_3}) \cdot \mathbf{u}_{j_4} = \left| \textrm{s}{\alpha_{j_2, j_3}} \right| \textrm{s}{\beta_{j_2 j_3, j_4}},\\ T_2 &= (\mathbf{u}_{j_1} \times \mathbf{u}_{j_3}) \cdot \mathbf{u}_{j_4} = \left| \textrm{s}{\alpha_{j_1, j_3}} \right| \textrm{s}{\beta_{j_1 j_3, j_4}}, \\ T_3 &= (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}) \cdot \mathbf{u}_{j_4} = \left| \textrm{s}{\alpha_{j_1, j_2}} \right| \textrm{s}{\beta_{j_1 j_2, j_4}}. \\ \end{aligned} \end{equation} On the other hand, $\Phi_{j_4}$ can be rewritten as \begin{equation} \begin{aligned} \Phi_{j_4} &= \left[ \mathbf{u}_z \times (\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}) \right] \cdot \mathbf{u}_{j_4} \\ &= \norm{\mathbf{u}_z} \norm{\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}} \norm{\mathbf{u}_{j_4}} \\ &\left| \textrm{s} \measuredangle (\mathbf{u}_z, \mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}) \right| \textrm{c} \measuredangle \left( \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}), \mathbf{u}_{j_4} \right).\\ \end{aligned} \end{equation} Let $\gamma_{j_4} = \frac{\pi}{2} - \measuredangle \left( \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}), \mathbf{u}_{j_4} \right)$ and let $\rho_{j_4}$ be the length of the projection of $\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}$ on the $xy$ plane of $\{\mathcal{L}_2\}$; we have \begin{equation*} \begin{aligned} &\rho_{j_4} \coloneqq \norm{\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}} \left| \textrm{s} \measuredangle (\mathbf{u}_z, \mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}) \right|, \\ &\textrm{c} \measuredangle \left( \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}), \mathbf{u}_{j_4} \right) = \textrm{s} \gamma_{j_4}, \end{aligned} \end{equation*} which leads to \begin{equation} \label{eq:det_Lambda_Phi4} \Phi_{j_4} = \rho_{j_4} \textrm{s} \gamma_{j_4}. \end{equation} Similar to $\beta_{j_1 j_2, j_3}$, $\gamma_{j_4}$ is defined only when $\mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}) \neq \mathbf{0}$, i.e. $\mathbf{u}_z$ and $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_4}$ are non-parallel; otherwise $\Phi_{j_4} = 0$ and $\gamma_{j_4}$ is irrelevant.
Likewise: \begin{equation}\label{eq:det_Lambda_Phi123} \begin{aligned} \Phi_{j_1} &= \left[ \mathbf{u}_z \times (\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_1}) \right] \cdot \mathbf{u}_{j_1} = \rho_{j_1} \textrm{s} \gamma_{j_1}, \\ \Phi_{j_2} &= \left[ \mathbf{u}_z \times (\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_2}) \right] \cdot \mathbf{u}_{j_2} = \rho_{j_2} \textrm{s} \gamma_{j_2}, \\ \Phi_{j_3} &= \left[ \mathbf{u}_z \times (\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_3}) \right] \cdot \mathbf{u}_{j_3} = \rho_{j_3} \textrm{s} \gamma_{j_3}.\\ \end{aligned} \end{equation} \begin{figure}[t] \begin{subfigure}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{Figures/viz_vectors.png} \caption{The vectors used to derive $\det(\mathbf{F})$.} \label{fig:viz_vectors} \end{subfigure} \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=\linewidth]{Figures/viz_T_i.png} \caption{$T_i = (\mathbf{u}_{j_l} \times \mathbf{u}_{j_p}) \cdot \mathbf{u}_{j_q}$} \label{fig:viz_T_i} \end{subfigure} \hfill \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=\linewidth]{Figures/viz_Phi_ji.png} \caption{$\Phi_{j_i} = \left[ \mathbf{u}_z {\times} (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}) \right] \cdot \mathbf{u}_{j_i}$} \label{fig:viz_Phi_ji} \end{subfigure} \caption{Visualization of the vectors, the parallelepipeds defined by the three vectors $\{\mathbf{u}_{j_l},\mathbf{u}_{j_p},\mathbf{u}_{j_q}\}$ and $\{\mathbf{u}_z,\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i},\mathbf{u}_{j_i}\}$, as well as the angles $\alpha_{j_l,j_p}$, $\beta_{j_l j_p,j_q}$, $\gamma_{j_i}$ that comprise $\det(\mathbf{F})$.} \label{fig:analytical_coord_systems} \end{figure} Combining Eq. (\ref{eq:det_FIM_4dof}), (\ref{eq:det_Lambda_Laplace_expansion}), (\ref{eq:det_Lambda_T4}), (\ref{eq:det_Lambda_T123}), (\ref{eq:det_Lambda_Phi4}) and (\ref{eq:det_Lambda_Phi123}), we have \begin{equation}\label{eq:det_FIM_RTE} \begin{aligned} &\det(\mathbf{F}) = \frac{1}{\sigma_r^8} \sum\limits_{S}^{} \left[ \sum\limits_{i=1}^{4} (-1)^{i} \; \Phi_{j_i} \; T_i \right]^2 \\ &= \frac{1}{\sigma_r^8} \sum\limits_{S}^{} \left[ \mbox{\footnotesize $\displaystyle \sum\limits_{i=1}^{4} (-1)^{i} \left( \left[ \mathbf{u}_z {\times} (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}) \right] \cdot \mathbf{u}_{j_i} \right) \left[ (\mathbf{u}_{j_l} {\times} \mathbf{u}_{j_p}) \cdot \mathbf{u}_{j_q} \right]$} \right]^2\\ &= \frac{1}{\sigma_r^8} \sum\limits_{S}^{} \left[ \sum\limits_{i=1}^{4} (-1)^{i} \rho_{j_i} \left| \textrm{s}{\alpha_{j_l, j_p}} \right| \textrm{s}{\beta_{j_l j_p, j_q}} \; \textrm{s}{\gamma_{j_i}} \right]^2 \end{aligned} \end{equation} where ${(l,p,q) \in \{1,2,3,4\} \setminus i}$, ${l < p < q}$, and \begin{equation} \begin{aligned} &\alpha_{j_l, j_p} = \measuredangle (\mathbf{u}_{j_l}, \mathbf{u}_{j_p}), \\ &\beta_{j_l j_p, j_q} = \frac{\pi}{2} - \measuredangle (\mathbf{u}_{j_l} \times \mathbf{u}_{j_p}, \mathbf{u}_{j_q}), \\ &\gamma_{j_i} = \frac{\pi}{2} - \measuredangle \left( \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}), \mathbf{u}_{j_i} \right), \\ &S = \{ (j_1,j_2,j_3,j_4) \; \vert \; 1 \leq j_1 < j_2 < j_3 < j_4 \leq k \}. \end{aligned} \end{equation} The components that build up $\det(\mathbf{F})$ are illustrated in Fig. \ref{fig:analytical_coord_systems}. In the next subsection, we will discuss in more detail the physical meaning behind the angles $\alpha$, $\beta$ and $\gamma$.
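As a sanity check, Eq. (\ref{eq:det_FIM_4dof}) can be verified numerically for small $k$ (the sum runs over all $\binom{k}{4}$ row subsets, so this is not meant for large $k$). The sketch below, reusing the matrix $\mathbf{J}$ from the earlier listing, compares the Cauchy-Binet sum against the direct determinant:
\begin{verbatim}
from itertools import combinations
import numpy as np

def det_fim_check(J, sigma_r):
    """Compare det(F) computed directly with the Cauchy-Binet
    sum over all 4-row submatrices of J (small k only)."""
    direct = np.linalg.det((J.T @ J) / sigma_r**2)
    cb = sum(np.linalg.det(J[list(rows), :])**2
             for rows in combinations(range(J.shape[0]), 4))
    return direct, cb / sigma_r**8   # equal up to round-off
\end{verbatim}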
\begin{prop} \label{prop:sub_prob} From the general problem of 3D RTE with unknown $\theta$ (i.e., without a common heading reference between the robots), one can derive the following sub-problems, with the corresponding state vectors and determinants of the FIM: \begin{enumerate}[label=(\arabic*)] \item 3D RTE with known $\theta$, \begin{equation} \begin{aligned} &\mathbf{\Theta}_1 \coloneqq [t_x, t_y, t_z]^{\top},\\ &\det(\mathbf{F}_1) = \frac{1}{\sigma_r^6} \sum\limits_{S_1}^{} \textrm{s}^2{\alpha_{j_1,j_2}} \textrm{s}^2{\beta_{j_1 j_2, j_3}}, \end{aligned} \end{equation} where $S_1 = \{ (j_1,j_2,j_3) \; \vert \; 1 \leq j_1 < j_2 < j_3 \leq k \},$ \begin{equation*} \begin{aligned} \alpha_{j_1, j_2} = \measuredangle (\mathbf{u}_{j_1}, \mathbf{u}_{j_2}), \; \beta_{j_1 j_2, j_3} = \frac{\pi}{2} - \measuredangle (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}, \mathbf{u}_{j_3}). \end{aligned} \end{equation*} \item 2D RTE with unknown $\theta$, \begin{equation} \begin{aligned} &\mathbf{\Theta}_2 \coloneqq [t_x, t_y, \theta]^{\top}, \\ &\det(\mathbf{F}_2) = \frac{1}{\sigma_r^6} \sum\limits_{S_2}^{} \left[ \sum\limits_{i = 1}^{3} (-1)^{i+1} \rho_{j_i} \textrm{s}{\alpha_{j_l, j_p}} \textrm{s}{\gamma_{j_i}} \right]^2, \end{aligned} \end{equation} where $S_2 = \{(j_1,j_2,j_3) \; \vert \; 1 \leq j_1 < j_2 < j_3 \leq k \},$ \begin{equation*} \begin{aligned} &\alpha_{j_l,j_p} = \atantwo \left( [\mathbf{u}_{j_l} \times \mathbf{u}_{j_p}]_z, \; \mathbf{u}_{j_l} \cdot \mathbf{u}_{j_p} \right),\\ &{\gamma_{j_i} = \frac{\pi}{2} - \measuredangle \left( \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}), \mathbf{u}_{j_i} \right)}, \end{aligned} \end{equation*} with $(l,p) \in \{1,2,3\} \setminus i$, $l < p$, and $[.]_z$ extracting the $z$ element of the argument vector in $\mathbb{R}^3$. \item \label{case:2D_known_theta} 2D RTE with known $\theta$, \begin{equation} \begin{aligned} &\mathbf{\Theta}_3 \coloneqq [t_x, t_y]^{\top}, \\ &\det(\mathbf{F}_3) = \frac{1}{\sigma_r^4} \sum\limits_{S_3}^{} \textrm{s}^2{\alpha_{j_1,j_2}}, \end{aligned} \end{equation} where $S_3 = \{(j_1,j_2) \; \vert \; 1 \leq j_1 < j_2 \leq k \},$ \begin{equation*} \begin{aligned} \alpha_{j_1,j_2} = \atantwo \left( [\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}]_z, \; \mathbf{u}_{j_1} \cdot \mathbf{u}_{j_2} \right). \end{aligned} \end{equation*} \end{enumerate} \end{prop} \begin{proof} See Appendix \ref{appendix:sub_problems_derivation}. \end{proof} \subsection{Geometric interpretation of det(F)} \label{subsec:CRLB_geometric_interpretation} \begin{figure}[t] \begin{subfigure}[t]{.495\linewidth} \centering \includegraphics[width=\linewidth]{Figures/exp_degenerate_01.png} \caption{Robots move in parallel.} \label{fig:exp_singular_parallel} \end{subfigure} \hfill \begin{subfigure}[t]{.495\linewidth} \centering \includegraphics[width=\linewidth]{Figures/exp_degenerate_02.png} \caption{Robots move linearly on the same 2D plane.} \label{fig:exp_singular_planar} \end{subfigure} \begin{subfigure}[t]{0.495\linewidth} \centering \includegraphics[width=\linewidth]{Figures/exp_degenerate_03.png} \caption{Target robot is stationary or moves only on the $z$ axis.} \label{fig:exp_singular_static_target} \end{subfigure} \hfill \begin{subfigure}[t]{.495\linewidth} \centering \includegraphics[width=\linewidth]{Figures/exp_degenerate_04.png} \caption{Host robot is stationary.} \label{fig:exp_singular_static_host} \end{subfigure} \caption{Examples of singular configurations that lead to $\det(\mathbf{F}) = 0$.
The true and plausible solutions are depicted in solid and transparent colors, respectively.} \label{fig:exp_degenerate_configs} \end{figure} \begin{figure}[t] \begin{subfigure}[t]{.495\linewidth} \centering \includegraphics[width=\linewidth]{Figures/instant_spherical_coord_system.png} \caption{}\label{fig:instant_spherical_coord_system} \end{subfigure} \hfill \begin{subfigure}[t]{.495\linewidth} \centering \includegraphics[width=\linewidth]{Figures/special_case_degen_straight_line.png} \caption{}\label{fig:special_case_degen_straight_line} \end{subfigure} \caption{a) Spherical coordinate frame. b) Robots move on a straight line that goes through the initial positions.} \label{fig:special_case_frame_degen} \end{figure} We can interpret the components that build up $\det(\mathbf{F})$ in Eq. (\ref{eq:det_FIM_RTE}) as follows ($i \in \{1, 2, 3, 4\}$): $\mathbf{u}_{j_i}$ is the unit vector parallel to the relative position vector in the world frame $\{\mathcal{L}_1\}$ (Fig. \ref{fig:analytical_coord_systems}a). $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}$ is the local position vector of the target robot $\mathcal{R}_2$ as measured in the world frame. $T_i$ is the signed volume of the parallelepiped with the three vectors $\mathbf{u}_{j_l}$, $\mathbf{u}_{j_p}$, $\mathbf{u}_{j_q}$ as edges (Fig. \ref{fig:analytical_coord_systems}b). $\Phi_{j_i}$ is the signed volume of the parallelepiped with the three vectors $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}$, $\mathbf{u}_{j_i}$ and $\mathbf{u}_z$ as edges (Fig. \ref{fig:analytical_coord_systems}c). For every four measurements at different time instances $j_1,\cdots,j_4$, $\det(\mathbf{\Lambda})$ is a combination of both $T_i$ and $\Phi_{j_i}$. Finally, $\det(\mathbf{F})$ is computed over all possible sets of four measurements $S$. The larger $\det(\mathbf{F})$ is, the smaller the uncertainty volume \cite{ly2017FIMtutorial}. Intuitively, $T_i = \abs{\textrm{s}\alpha_{j_l, j_p}} \textrm{s}\beta_{j_l j_p, j_q}$ represents the information gain regarding the translation parameters ($t_x, t_y, t_z$) and is directly correlated with the volume occupied by the relative position vectors (encoded by $\alpha_{j_l, j_p}$ and $\beta_{j_l j_p, j_q}$), but not with the absolute positions of the robots. This suggests that in order to improve the estimates of $\mathbf{t}$, one should focus on enlarging the angles between the measurements, which is exactly the strategy presented in \cite{guo2017reloc} for a static target. On the other hand, $\Phi_{j_i} = \rho_{j_i} \textrm{s}\gamma_{j_i}$ represents the information gain regarding the relative heading parameter ($\theta$) and is directly influenced by the horizontal displacement of the target robot in its local frame ($\rho_{j_i}$) and by how perpendicular the relative position vector $\mathbf{u}_{j_i}$ is to the vertical plane that contains $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}$ (with normal $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i} \times \mathbf{u}_z$), as measured by $\textrm{s}\gamma_{j_i}$. For a given configuration, if $T_i = 0$, or $\Phi_{j_i} = 0$, or $\textstyle \sum_{i=1}^{4} (-1)^{i} \; \Phi_{j_i} T_i {=} 0$ for the whole set $S$, then $\det(\mathbf{F}) = 0$, in which case the associated parameters are at critical points of $\mathbf{f}(\mathbf{\Theta})$ and their uncertainty cannot be reduced regardless of the number of measurements. For example: \begin{enumerate} \item If the robots move in parallel, so do the relative position vectors, i.e.
$\mathbf{u}_{j_l} \: \| \: \mathbf{u}_{j_p}$ or $\alpha_{j_l,j_p} = 0 \; \forall l \neq p$; then $T_i = 0 \; \forall i$ and $\det(\mathbf{F})=0$. In this case, the relative translation $\mathbf{t}$ cannot be fully resolved (Fig. \ref{fig:exp_singular_parallel}). \item If the trajectories of the robots are linear and on the same 2D plane, then their relative position vectors are also on the same 2D plane, which leads to $\mathbf{u}_{j_l} \times \mathbf{u}_{j_p} \perp \mathbf{u}_{j_q}$, or equivalently $\beta_{j_l j_p, j_q} = 0 \; \forall l,p,q$; hence $T_i = 0 \; \forall i$ and $\det(\mathbf{F})=0$. In this case, the solution can only be recovered up to a flip ambiguity (Fig. \ref{fig:exp_singular_planar}). \item If the target robot is stationary or only moves on the $z$ axis, i.e. $\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i} = \mathbf{0}$ or $\rho_{j_i} = 0$, then $\Phi_{j_i} = 0 \; \forall j_i \in S$ and $\det(\mathbf{F})=0$. In this case, the relative heading $\theta$ cannot be fully resolved (Fig. \ref{fig:exp_singular_static_target}). \item If the host robot is stationary, then $\textstyle \sum_{i=1}^{4} (-1)^{i} \; \Phi_{j_i} T_i = 0$ and $\det(\mathbf{F})=0$. In this case, neither $\mathbf{t}$ nor $\theta$ can be fully resolved, since the solution is invariant to rotation around the host robot (Fig. \ref{fig:exp_singular_static_host}). \end{enumerate} These examples with clear physical interpretations have also been stated separately in \cite{van2019board,cornejo2015distributed}. Additionally, we provide simulation results in Sect. \ref{subsec:UncertaintyEval} to demonstrate the above observations. Although the study of optimal configurations (which maximize $\det(\mathbf{F})$, or equivalently minimize the uncertainty volume) is outside the scope of this paper, it is an interesting subject for future work. \section{Proposed Approaches} \label{sec:main_approaches} In this section, the proposed methods are presented in detail. First, we briefly summarize the squared distances weighted least squares (SD-WLS) problem \cite{trawny2010relplanar}. Then, the SD-WLS problem is reformulated as a non-convex QCQP problem and further as an SDP relaxation problem. \subsection{4-DoF squared distances weighted least squares} \label{subsec:main_approaches_SD_WLS} The true inter-robot distance at time $t_k$ can be written as \begin{equation} d_k = \norm{\mathbf{w}_k} = \sqrt{\mathbf{w}_k^{\top} \mathbf{w}_k}, \end{equation} where $\mathbf{w}_k \coloneqq \mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k$. The ranging error vector is \begin{equation} \mathbf{e}_r \coloneqq \left[ \norm{\mathbf{w}_1} - \tilde{d}_1,\;\; \norm{\mathbf{w}_2} - \tilde{d}_2,\; \cdots,\; \norm{\mathbf{w}_k} - \tilde{d}_k \right]^\top. \end{equation} The squared distance measurement \begin{equation} \tilde{d}_k^2 = d_k^2 + 2 d_k \eta_k + \eta_k^2 \end{equation} has the noise term $\nu_k \coloneqq 2 d_k \eta_k + \eta_k^2$, which is not zero-mean Gaussian. However, this non-Gaussian noise can be approximated by a Gaussian \cite{trawny2010relplanar}: \begin{equation} \begin{aligned} &\tilde{s}_k = d_k^2 + \eta'_k = \mathbf{w}_k^\top \mathbf{w}_k + \eta'_k,\\ &\tilde{s}_k \simeq \tilde{d}_k^2 - \bar{\nu}_k, \;\; \bar{\nu}_k \coloneqq \mathbb{E}[\nu_k] = \left({\mathbf{\Sigma}_r}\right)_{k,k},\\ &\bm{\eta}' = [\eta'_1, \dots, \eta'_k] \sim \mathcal{N}(\bm{\eta}'; \mathbf{0}, \mathbf{\Sigma}_s).
\end{aligned} \end{equation} The elements of the covariance matrix $\mathbf{\Sigma}_s$ are computed as \begin{equation} \begin{aligned} \left(\mathbf{\Sigma}_s\right)_{i,i} &\coloneqq \mathbb{E}[(\nu_i - \bar{\nu}_i)^2] = \left({\mathbf{\Sigma}_r}\right)_{i,i} \left[4 \tilde{d}_i^2 + 2 \left({\mathbf{\Sigma}_r}\right)_{i,i}\right],\\ \left({\mathbf{\Sigma}_s}\right)_{i,j} &\coloneqq \mbox{\footnotesize $\displaystyle \mathbb{E}[(\nu_i - \bar{\nu}_i)(\nu_j - \bar{\nu}_j)] = \left({\mathbf{\Sigma}_r}\right)_{i,j} \left[4 \tilde{d}_i \tilde{d}_j + 2\left({\mathbf{\Sigma}_r}\right)_{i,j}\right] $}. \end{aligned} \end{equation} The 4-DoF SD-WLS problem is defined as \begin{equation}\label{eq:prob_SD_WLS} \min\limits_{\mathbf{\Theta}} \frac{1}{2} \mathbf{e}_s^{\top} \mathbf{\Sigma}_s^{-1} \mathbf{e}_s \end{equation} where $\mathbf{e}_s$ is the vector of squared distance errors \begin{equation} \mathbf{e}_s \coloneqq [ \mathbf{w}_{1}^{\top} \mathbf{w}_{1} - \tilde{s}_1, \;\; \mathbf{w}_{2}^{\top} \mathbf{w}_{2} - \tilde{s}_2, \dots , \; \mathbf{w}_{k}^{\top} \mathbf{w}_{k} - \tilde{s}_k]^{\top}. \end{equation} The original SD-WLS \cite{trawny2010relplanar} was established for the 2D case; here, we change the parameters to those of the 4-DoF case. \subsection{Non-convex QCQP} \label{subsec:main_approaches_QCQP} Expanding and simplifying $\mathbf{w}_{k}^{\top} \mathbf{w}_{k}$ leads to: \begin{equation}\label{eq:squared_dist_full_form} \begin{aligned} \mathbf{w}_{k}^{\top} \mathbf{w}_{k} &= (\mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k)^{\top} (\mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k)\\ &= \mathbf{t}^{\top} \mathbf{t} + \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k^{\top} \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k + \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k^{\top} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k\\ &+ 2 (\mathbf{t} - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k)^{\top} \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_k - 2 \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_k^{\top} \mathbf{t}.
\end{aligned} \end{equation} Let $\mathbf{x} = [x_1,x_2,\dots,x_9]^\top$ be the state vector: \begin{equation} \label{eq:x_state_vector} \begin{aligned} \mathbf{x} \coloneqq [ &t_x, \;\;\;\; t_y, \;\;\;\; t_z, \;\;\;\; \textrm{c}\theta,\;\;\;\; \textrm{s}\theta,\;\;\;\; t_x \textrm{c}\theta + t_y \textrm{s}\theta, \\ &t_y \textrm{c}\theta - t_x \textrm{s}\theta, \;\;\;\; t_x^2 + t_y^2 + t_z^2, \;\;\;\; 1]^{\top} \in \mathbb{R}^{9 \times 1}, \end{aligned} \end{equation} each row of $\mathbf{e}_s$ can be rearranged as $\mathbf{w}_i^\top \mathbf{w}_i - \tilde{s}_i = \mathbf{A}_i \mathbf{x}$, where \begin{equation} \begin{aligned} \mathbf{A}_i &\coloneqq [ {-}2\prescript{}{}{p}^{1x}_i, \qquad {-}2\prescript{}{}{p}^{1y}_i, \qquad 2(\prescript{}{}{p}^{2z}_i {-} \prescript{}{}{p}^{1z}_i), \\ &\; {-}2(\prescript{}{}{p}^{1x}_i\prescript{}{}{p}^{2x}_i {+} \prescript{}{}{p}^{1y}_i\prescript{}{}{p}^{2y}_i), \qquad 2(\prescript{}{}{p}^{1x}_i\prescript{}{}{p}^{2y}_i {-} \prescript{}{}{p}^{2x}_i\prescript{}{}{p}^{1y}_i),\\ &\; \;\;\;2\prescript{}{}{p}^{2x}_i, \qquad 2\prescript{}{}{p}^{2y}_i, \qquad 1, \qquad \varepsilon_i] \in \mathbb{R}^{1 \times 9},\\ \varepsilon_i &\coloneqq \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_i^{\top} \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_i + \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i^{\top} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i - 2\prescript{}{}{p}^{1z}_i\prescript{}{}{p}^{2z}_i - \tilde{s}_i,\\ \end{aligned} \end{equation} with $i = 1,\dots,k$. $\mathbf{e}_s$ can then be simplified as \begin{equation} \mathbf{e}_s = \begin{bmatrix} \mathbf{A}_1\\ \vdots\\ \mathbf{A}_k \end{bmatrix} \mathbf{x} = \mathbf{B} \mathbf{x}, \end{equation} which allows the SD-WLS cost function (\ref{eq:prob_SD_WLS}) to be reformulated into a quadratic form \begin{equation} \begin{aligned} \frac{1}{2} \mathbf{e}_s^{\top} \mathbf{\Sigma}_s^{-1} \mathbf{e}_s = \frac{1}{2} \mathbf{x}^{\top} \overbrace{ \mathbf{B}^{\top} \mathbf{\Sigma}_s^{-1} \mathbf{B} }^{\mathbf{P}_0} \mathbf{x} = \frac{1}{2} \mathbf{x}^{\top} \mathbf{P}_0 \mathbf{x}. \end{aligned} \end{equation} Finally, the 4-DoF SD-WLS problem (\ref{eq:prob_SD_WLS}) is reformulated as a non-convex QCQP problem \begin{equation}\label{eq:prob_QCQP} \begin{aligned} \min\limits_{\mathbf{x}} \;\; & \mathbf{x}^{\top} \mathbf{P}_0 \mathbf{x}\\ \textrm{s.t.} \;\; &\mathbf{x}^{\top} \mathbf{P}_i \mathbf{x} = r_i,\;\;i=1,\dots,5.\\ \end{aligned} \end{equation} The constraints are drawn from the relations between the parameters of $\mathbf{x}$ in Eq. 
(\ref{eq:x_state_vector}) as follows: \begin{equation} \begin{aligned} &\textrm{c}^2\theta + \textrm{s}^2\theta = 1 \Leftrightarrow x_4^2 + x_5^2 = 1 \Rightarrow \mathbf{x}^{\top} \mathbf{P}_1 \mathbf{x} = r_1,\\ &t_x \textrm{c}\theta {+} t_y \textrm{s}\theta {=} x_6 x_9 \Leftrightarrow x_1 x_4 {+} x_2 x_5 {-} x_6 x_9 {=} 0 \Rightarrow \mathbf{x}^{\top} \mathbf{P}_2 \mathbf{x} = r_2,\\ &t_y \textrm{c}\theta {-} t_x \textrm{s}\theta {=} x_7 x_9 \Leftrightarrow x_2 x_4 {-} x_1 x_5 {-} x_7 x_9 {=} 0 \Rightarrow \mathbf{x}^{\top} \mathbf{P}_3 \mathbf{x} = r_3,\\ &t_x^2 {+} t_y^2 {+} t_z^2 {=} x_8 x_9 \Leftrightarrow x_1^2 {+} x_2^2 {+} x_3^2 {-} x_8 x_9 {=} 0 \Rightarrow \mathbf{x}^{\top} \mathbf{P}_4 \mathbf{x} = r_4,\\ &t_x^2 + t_y^2 + t_z^2 = \norm{\mathbf{t}}^2 = \tilde{d}_0^2 \Leftrightarrow x_1^2 + x_2^2 + x_3^2 = \tilde{d}_0^2 \Rightarrow \mathbf{x}^{\top} \mathbf{P}_5 \mathbf{x} = r_5, \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} &\mathbf{P}_1 = \operatorname{sparse}([4,5],[4,5],[1,1],9,9), r_1 = 1,\\ &\mathbf{P}_2 = \operatorname{sparse}([1,2,9],[4,5,6],[1,1,-1],9,9), r_2 = 0,\\ &\mathbf{P}_3 = \operatorname{sparse}([2,1,9],[4,5,7],[1,-1,-1],9,9), r_3 = 0,\\ &\mathbf{P}_4 = \operatorname{sparse}([1,2,3,8],[1,2,3,9],[1,1,1,-1],9,9), r_4 = 0,\\ &\mathbf{P}_5 = \operatorname{sparse}([1,2,3],[1,2,3],[1,1,1],9,9), r_5 = \tilde{d}_0^2. \end{aligned} \end{equation*} Here, we use the MATLAB operator $\operatorname{sparse}(\mathbf{a},\mathbf{b},\mathbf{c},m,n)$, which generates an $m \times n$ sparse matrix $\mathbf{P}$ from the vectors $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ such that $\mathbf{P}_{a_k,b_k} = c_k$. \begin{rem}\label{rem:SD_WLS_d0} While it is possible to reduce the number of parameters by exploiting $\tilde{d}_0$ \cite{li2020relSDP}, we found that in real-life scenarios $\tilde{d}_0$ might not exist (e.g., if one robot starts moving when the other robot is not ready for operation, or if the starting points are not within line-of-sight). Hence, $\tilde{d}_0$ is used only for constraint $i{=}5$, which is removed if $\tilde{d}_0$ is not available. \end{rem} \subsection{SDP Relaxation} \label{subsec:main_approaches_SDP} \subsubsection{Problem reformulation} We follow the steps outlined in \cite{luo2010sdp} to obtain the SDP relaxation of the QCQP problem (\ref{eq:prob_QCQP}). First, observe that \begin{equation} \mathbf{x}^{\top} \mathbf{P}_i \mathbf{x} = \operatorname{Tr}( \mathbf{x}^{\top} \mathbf{P}_i \mathbf{x}) = \operatorname{Tr}( \mathbf{P}_i \mathbf{x} \mathbf{x}^{\top}). \end{equation} By introducing a new variable $\mathbf{X} = \mathbf{x} \mathbf{x}^{\top}$, an equivalent formulation of (\ref{eq:prob_QCQP}) can be written as: \begin{equation}\label{eq:prob_equivalent_QCQP} \begin{aligned} \min\limits_{\mathbf{X}} \;\; &\operatorname{Tr}( \mathbf{P}_0 \mathbf{X})\\ \textrm{s.t.} \;\; &\operatorname{Tr}( \mathbf{P}_i \mathbf{X} ) = r_i,\;\;i=1,\dots,5\\ &\; \mathbf{X} \succeq 0,\;\operatorname{rank} (\mathbf{X}) = 1, \end{aligned} \end{equation} where $\mathbf{X} \succeq 0$ indicates that $\mathbf{X}$ is positive semidefinite. By dropping the nonconvex rank constraint, we obtain the SDP relaxation problem as \begin{equation}\label{eq:prob_SDP} \begin{aligned} \min\limits_{\mathbf{X}} \;\; &\operatorname{Tr}( \mathbf{P}_0 \mathbf{X})\\ \textrm{s.t.} \;\; &\operatorname{Tr}( \mathbf{P}_i \mathbf{X} ) = r_i,\;\;i=1,\dots,5\\ &\; \mathbf{X} \succeq 0. \end{aligned} \end{equation}
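A minimal sketch of this relaxation in Python with CVXPY is given below; the \texttt{sparse9} helper mirrors the MATLAB-style definitions above, the recovery step follows the rank-one procedure described next, and all names are illustrative rather than part of our released implementation.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def sparse9(rows, cols, vals):
    """Dense 9x9 analogue of sparse(a, b, c, 9, 9), 1-indexed."""
    P = np.zeros((9, 9))
    for r, c, v in zip(rows, cols, vals):
        P[r - 1, c - 1] = v
    return P

def solve_sdp_relaxation(P0, d0=None):
    """SDP relaxation of the 4-DoF QCQP; P0 = B^T Sigma_s^{-1} B."""
    P = [sparse9([4, 5], [4, 5], [1, 1]),
         sparse9([1, 2, 9], [4, 5, 6], [1, 1, -1]),
         sparse9([2, 1, 9], [4, 5, 7], [1, -1, -1]),
         sparse9([1, 2, 3, 8], [1, 2, 3, 9], [1, 1, 1, -1])]
    r = [1.0, 0.0, 0.0, 0.0]
    if d0 is not None:       # d0 constraint, dropped if unavailable
        P.append(sparse9([1, 2, 3], [1, 2, 3], [1, 1, 1]))
        r.append(d0**2)
    X = cp.Variable((9, 9), PSD=True)
    cons = [cp.trace(Pi @ X) == ri for Pi, ri in zip(P, r)]
    cp.Problem(cp.Minimize(cp.trace(P0 @ X)), cons).solve()
    lam, Q = np.linalg.eigh(X.value)       # rank-one recovery
    x = np.sqrt(max(lam[-1], 0.0)) * Q[:, -1]
    return x if x[8] > 0 else -x           # enforce x_9 > 0
\end{verbatim}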
The advantage of the SDP problem (\ref{eq:prob_SDP}) over the original NP-hard problems (\ref{eq:prob_QCQP}) and (\ref{eq:prob_equivalent_QCQP}) is that it can be solved in polynomial time. In the noiseless or low-noise cases, the SDP relaxation can be expected to be tight, i.e. the relaxed and the original problems have the same optimal solution \cite{luo2010sdp}. A problem-specific explanation, which is also applicable to our case, can be found in Lemma 1 of \cite{li2020relSDP}. \subsubsection{Recovering the original solution} Let $\overset{*}{\mathbf{x}}$ and $\overset{*}{\mathbf{X}}$ be the solutions of (\ref{eq:prob_QCQP}) and (\ref{eq:prob_SDP}), respectively. If $\textrm{rank}(\overset{*}{\mathbf{X}}) = 1$, then the rank-one decomposition $\overset{*}{\mathbf{X}} = \overset{*}{\mathbf{x}} (\overset{*}{\mathbf{x}})^{\top}$ provides the feasible and optimal solution $\overset{*}{\mathbf{x}}$. If $\textrm{rank}(\overset{*}{\mathbf{X}}) > 1$, the rank-one decomposition process in \cite{luo2010sdp} can be used to extract $\overset{*}{\mathbf{x}}$. Using SVD to decompose $\overset{*}{\mathbf{X}}$ as \begin{equation} \overset{*}{\mathbf{X}} = \sum\limits_{i=1}^{n} \lambda_i \mathbf{q}_i \mathbf{q}_i^{\top},\; \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n > 0, \end{equation} where $\lambda_i$ and $\mathbf{q}_i$ are the eigenvalues and respective eigenvectors, the best rank-one approximation is \begin{equation} \overset{*}{\mathbf{X}}_1 = \lambda_1 \mathbf{q}_1 \mathbf{q}_1^{\top}, \end{equation} from which the candidate solution is recovered as $\overset{*}{\mathbf{x}} = \sqrt{\lambda_1}\, \mathbf{q}_1$. Lastly, since we expect $x_9$ to be positive from the way $\mathbf{x}$ is constructed (Eq. (\ref{eq:x_state_vector})), the sign of the final solution is flipped if ${\overset{*}{x}_9}$ is negative. \subsection{Practical issues consideration} \subsubsection{UWB outlier rejection} \label{subsubsec:good_practice_uwb_outlier} We employ the same UWB outlier rejection scheme as our previous work \cite{Thien2021multiviro}. The variance of the UWB data in a sliding window consisting of the $K$ latest samples is computed as \begin{equation} \hat{\sigma}_k^2 = \frac{1}{K}\sum\limits_{i=k-K+1}^{k}(\tilde{d}_i - \bar{d}_k)^2, \end{equation} where $\bar{d}_k$ is the mean value of the $K$ samples. When a new measurement $\tilde{d}_{k+1}$ is received, if the new $\hat{\sigma}_{k+1}$ exceeds a pre-defined threshold, the measurement is discarded. In our real-life experiments, we set $K=20$ and the threshold as $0.005$.
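A sketch of this sliding-window test is given below (here applied to the window variance $\hat{\sigma}^2$, with $K$ and the threshold set to the values above; the class name is illustrative):
\begin{verbatim}
from collections import deque
import numpy as np

class UWBOutlierFilter:
    """Sliding-window variance test for incoming UWB ranges."""
    def __init__(self, K=20, threshold=0.005):
        self.window = deque(maxlen=K)
        self.threshold = threshold

    def accept(self, d_new):
        """Reject d_new if it inflates the variance of the K
        latest samples beyond the pre-defined threshold."""
        K = self.window.maxlen
        candidate = (list(self.window) + [d_new])[-K:]
        if len(candidate) > 1 and np.var(candidate) > self.threshold:
            return False                    # outlier: discard
        self.window.append(d_new)
        return True
\end{verbatim}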
Because unobservable parameters produce such zero rows and columns, singular configurations manifest in the form of the FIM losing rank in the noiseless case, or of the condition number of the FIM becoming extremely large in noisy cases. Our detection scheme for singular configurations works as follows. Firstly, we evaluate the FIM using the analytical formula in Eq. (\ref{eq:FIM_simplified}) at the latest estimates $\hat{\mathbf{\Theta}}_k$. Then, we compute the condition number of the estimated FIM $\kappa(\hat{\mathbf{F}}_k)$. A configuration is deemed singular if $\kappa(\hat{\mathbf{F}}_k)$ is larger than a threshold, which can be determined empirically.

In practice, before the first optimization we check that the sample variance based on the recent positions of the robots is higher than a threshold (empirically set as $0.05\si{m}$ for the last $100$ positions in our experiments); otherwise the optimization process is skipped. This simple check ensures that 1) both robots are moving, and 2) there are motions on all axes, which is sufficient to avoid simple singular cases such as when one robot is static or when both robots move in parallel. Additionally, it was observed that if the poses only change marginally in the $x$ and $y$ directions, such as when the quadrotors are hovering, the estimates do not improve regardless of how many new measurements are added. As such, we keep performing this check during the mission and skip any unnecessary updates. \begin{rem} In the noiseless case, the singular configurations will also manifest in the form of the matrix $\mathbf{P}_0$ in Eq. (\ref{eq:prob_QCQP}) losing rank, as the optimal solution of the QCQP problem is not unique. As a result, the singular cases can be detected by checking whether $\mathbf{P}_0$ is rank-deficient, which avoids computing the FIM entirely. However, in our real-life experiments the noise generally makes $\mathbf{P}_0$ full rank even when the robots move only slightly. Hence, this method was not used. \end{rem} \subsubsection{Uncertainty of the estimates} Let $\hat{\textrm{CRLB}}$ be the CRLB computed with the estimated $\hat{\mathbf{\Theta}}$ instead of the true $\mathbf{\Theta}$. Following \cite{ly2017FIMtutorial}, the 95\%-confidence interval for each parameter of $\mathbf{\Theta}$ is \begin{equation} CI_i = \left[ \hat{\Theta}_i - 1.96 \sqrt{\hat{\textrm{CRLB}}_{i,i}} \;, \; \hat{\Theta}_i + 1.96 \sqrt{\hat{\textrm{CRLB}}_{i,i}} \right] \end{equation} for $i \in \{1,\cdots,4\}$. Additionally, $\hat{\textrm{CRLB}}$, or generally the inverse of the observed FIM, can be used as the error covariance matrix of the Maximum Likelihood Estimator \cite{ponda2009trajopt}. In essence, at time $t_k$, our framework reports the current estimates $\hat{\mathbf{\Theta}}$ and the corresponding uncertainty metrics, including the condition number $\kappa(\hat{\mathbf{F}}_k)$ and the standard error $\sigma_{\hat{\Theta}_i} \coloneqq \sqrt{\hat{\textrm{CRLB}}_{i,i}}$ for each parameter ($i \in \{1,\cdots,4\}$). If $\kappa(\hat{\mathbf{F}}_k)$ and $\sigma_{\hat{\Theta}_i}$ are sufficiently large, the trajectory configuration can be recognized as singular, with $\Theta_i$ being the unobservable parameter. \subsection{Extension to multi-robot scenario} \label{subsec:general_multi_robot} To extend our method to the general multi-robot case, simply copying the same estimator to each pair of robots is one valid option \cite{cornejo2015distributed}. However, the communication bandwidth and the TW-ToF UWB measurement frequency would be reduced, which would adversely affect the performance of the system.
To mitigate these issues, one potential approach would be to exchange only measurements that significantly contribute to the solution. From Sect. \ref{subsec:CRLB_geometric_interpretation}, the simple motions that do not improve one or multiple parameters can be identified (for quadrotors: hovering, yawing while hovering, ascending/descending, etc.). As such, during these motions the robot can stop sending data and the estimation results would not be affected. Nonetheless, the most effective approach would be to incorporate recent advances in distributed optimization for multi-robot systems \cite{halsted2021survey} into the problem formulation. Each robot would then only need its local observations to compute the global solution, while only minimal information is exchanged periodically. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Figures/compare_all_methods_d0_vs_errors.png} \caption{Estimation errors of different methods with varying $d_0$. Top: translation error $e_t$. Bottom: heading error $e_{\theta}$. All simulations are run with $D = 2\si{m}$, $\sigma_r = 0.1\si{m}$ and $\sigma_o = 0.001\si{m}$. As $d_0$ increases, the scale of the errors increases noticeably.} \label{fig:compare_all_d0_vs_errors} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Figures/compare_all_methods_r12_vs_errors.png} \caption{Estimation errors of different methods with varying $D$. Top: translation error $e_t$. Bottom: heading error $e_{\theta}$. All simulations are run with $d_0 = 1\si{m}$, $\sigma_r = 0.1\si{m}$ and $\sigma_o = 0.001\si{m}$. As $D$ decreases, the scale of the errors increases noticeably.} \label{fig:compare_all_radius_vs_errors} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=\textwidth]{Figures/compare_all_methods_sigmas_vs_both_trans_head.png} \caption{Average translation $\bar{e}_t (\si{m})$ and heading $\bar{e}_{\theta} (\si{rad})$ errors.} \label{fig:compare_all_sigmas_vs_both_errors} \end{subfigure} \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=\textwidth]{Figures/compare_all_methods_sigmar_0.001_sigmav_10.png} \caption{$\sigma_r = 0.001\si{m}$, $\sigma_o = 10\si{m}$.} \label{fig:compare_all_max_sigma_o} \end{subfigure} \hfill \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=\textwidth]{Figures/compare_all_methods_sigmar_10_sigmav_0.001.png} \caption{$\sigma_r = 10\si{m}$, $\sigma_o = 0.001\si{m}$.} \label{fig:compare_all_max_sigma_r} \end{subfigure} \hfill \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=\textwidth]{Figures/compare_all_methods_sigmar_10_sigmav_10.png} \caption{$\sigma_r = 10\si{m}$, $\sigma_o = 10\si{m}$.} \label{fig:compare_all_max_both_sigmas} \end{subfigure} \caption{Effects of UWB noise ($\sigma_r$) and odometry noise ($\sigma_o$). All simulations are run with $d_0 = 50\si{m}$ and $D = 20\si{m}$.} \label{fig:compare_all_sigmas_vs_errors} \end{figure*} \section{Experimental Results} \label{sec:exp} Video of the simulations and experiments can be found at \url{https://youtu.be/eLDQ4q7mm_s}. \subsection{Implementation details and performance metrics} Among the 3D methods listed in Table \ref{table:lit_review}, we include the algebraic \cite{trawny2010rel3Dtransform}, linear \cite{molina2019unique}, nonlinear least squares (NLS) \cite{ziegler2021distributed} and previous SDP \cite{jiang2020rel3D} methods for comparison.
None of these methods provide open-source code, and in our experience the implementation details can affect the performance noticeably. As such, we strive for a fair comparison as follows: \begin{itemize} \item For the methods in \cite{trawny2010rel3Dtransform,jiang2020rel3D} which are designed for full 6-DoF, we reduce the estimation problem to the same 4-DoF as ours by setting the extra parameters as constants and removing/adjusting the related constraints. \item For simulation, all methods are implemented in MATLAB with the SeDuMi\footnote{\url{https://github.com/sqlp/sedumi}} solver for SDP methods. For real-life experiments, the methods are implemented in C++ with the following solvers: CSDP\footnote{\url{https://github.com/coin-or/Csdp}} for SDP methods, Gurobi\footnote{\url{https://www.gurobi.com/}} for the QCQP method, Eigen\footnote{\url{https://eigen.tuxfamily.org/}} for the algebraic and linear methods, and Ceres\footnote{\url{http://ceres-solver.org/}} for the NLS method. \item The timing comparison takes into account only the solver time, i.e., the time taken to find the optimal solution, and excludes other data processing and preparation steps (construction and inversion of matrices, etc.). \item The initial guess is always set to $\mathbf{0}$ since we assume no prior knowledge of the problem. The NLS method is the most affected by the quality of the initial guess. \item All methods are presented with the same processed input data (degenerate configurations checked, outlier rejection scheme applied, spatial-temporal offsets compensated). \end{itemize} We evaluate the translation and heading parameters separately, with the estimated translation and heading errors denoted as $e_t = \norm{\hat{\mathbf{t}} - \mathbf{t}}$ and $e_{\theta} = \abs{\hat{\theta} - \theta}$, respectively. The average errors over all runs $\bar{e}_t$ and $\bar{e}_{\theta}$, the root mean square errors $\textrm{RMSE}_t$ and $\textrm{RMSE}_{\theta}$, the mean square errors (MSE) against the CRLBs, and the solver time are the subjects of comparison. \subsection{Simulation} The simulations are designed to evaluate the methods with different factors in isolation. We focus on two of the most important aspects that can affect the performance of the estimator: the trajectory configuration and the measurement noise. As previously mentioned, $d_0 = \norm{\mathbf{t}}$ is the true initial relative distance between the robots. Let $D$ be the maximum moving radius of the robots from their initial positions, i.e., all possible trajectories are confined to a sphere centered at the local origin with radius $D$ (Fig. \ref{fig:sys_overview}a). The standard deviations of the UWB and odometry noise are denoted as $\sigma_r$ and $\sigma_o$, respectively. In previous works, the simulations are done for a specific scenario: the trajectories' shape and size are fixed, and often only one value of $\mathbf{\Theta}$ is tested. This might undermine the generalizability of the observations as well as the conclusions. In this work, we aim for more universal and comprehensive results. To this end, for each method, we perform $100$ Monte-Carlo simulations with the true relative translation $\mathbf{t}$ uniformly sampled on a sphere centered at $\mathbf{0}^{3\times1}$ with radius $d_0$ and the true relative heading $\theta$ uniformly sampled in $[-\pi,\pi)$. Then, in each simulation, the robots' trajectories are generated randomly. Each trajectory consists of $20$ poses with the distance to the origin no larger than $D$.
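A minimal sketch of this sampling scheme is given below (Python with numpy; variable names are hypothetical, and drawing points uniformly inside the ball of radius $D$ is one concrete choice that satisfies the stated distance bound).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_ground_truth(d0):
    # t uniform on the sphere of radius d0; theta uniform in [-pi, pi)
    v = rng.standard_normal(3)
    t = d0 * v / np.linalg.norm(v)
    theta = rng.uniform(-np.pi, np.pi)
    return t, theta

def sample_trajectory(D, n_poses=20):
    # random poses whose distance to the local origin is at most D
    dirs = rng.standard_normal((n_poses, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = D * rng.uniform(0.0, 1.0, n_poses) ** (1.0 / 3.0)
    return dirs * radii[:, None]
\end{verbatim}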
Finally, the odometry data and UWB data are generated from the noise-free data by adding Gaussian noise. In this manner, the results should be indicative of all possible trajectory configurations (shape, size, and relative transform) that can be generated from a given condition specified by $d_0$ and $D$. \subsubsection{Effect of trajectory configuration} \label{subsubsec:effect_config} Fig. \ref{fig:compare_all_d0_vs_errors} and \ref{fig:compare_all_radius_vs_errors} show the related results. It can be seen that both $d_0$ and $D$ affect the performance of all methods. The main observations are: the larger the $d_0$, the larger the errors, while $D$ has the opposite effect; $d_0$ affects the translation error $e_t$ more noticeably than the heading error $e_{\theta}$, while it is the reverse for $D$; and the methods perform more similarly when $d_0$ is smaller or $D$ is larger. For $d_0 = 50\si{m}$ (Fig. \ref{fig:compare_all_sigmas_vs_errors}), the NLS and previous SDP methods fail to find the solution, which can be attributed to the fact that these methods do not limit the search space for $\mathbf{t}$ using the constraint $\norm{\mathbf{t}} = d_0$ while the rest do (in different forms). Overall, the proposed QCQP and SDP methods outperform previous methods, especially in more challenging scenarios (large $d_0$ and small $D$), with the best method being QCQP.

In relation to our theoretical findings: the combination of $d_0$ and $D$ limits the maximum relative angles between successive measurements (which consequently limits $\alpha$ and $\beta$), while $D$ limits the maximum displacement of the target robot on the horizontal plane (which consequently limits $\rho$). As such, $d_0$ and $D$ together affect $T_i$ and consequently the relative translation $\hat{\mathbf{t}}$, while $D$ directly changes $\Phi_i$ and consequently the relative heading $\hat{\theta}$. In principle, we have $\sin\alpha_{\max} \simeq D / d_0$. From the problem-solving perspective, a situation with larger $d_0$ or smaller $D$ is more difficult. Generally, either decreasing $d_0$ or increasing $D$ would improve the relative translation estimates, while increasing $D$ for the target robot would improve the relative heading estimates.

\subsubsection{Effect of noise levels} For this evaluation, all simulations are run with $d_0 = 50\si{m}$ and $D = 20\si{m}$, which is similar to an outdoor scenario. The UWB noise $\sigma_r$ and odometry noise $\sigma_o$ vary from ground-truth ($\sim 0.001\si{m}$) to GPS ($\sim 10\si{m}$) levels of accuracy. Fig. \ref{fig:compare_all_sigmas_vs_both_errors} illustrates the general influence of $\sigma_r$ and $\sigma_o$ on the translation and heading estimation errors. Fig. \ref{fig:compare_all_max_sigma_o}-\ref{fig:compare_all_max_both_sigmas} show the detailed results at the extremes of the tested noise levels. We note that the scale of the errors can change depending on $d_0$ and $D$, as shown previously in Sect. \ref{subsubsec:effect_config}. The results show that the NLS and previous SDP methods could not obtain usable solutions regardless of the noise levels. Hence, the disadvantages of not having an initial guess for the NLS method and of not using the first distance $d_0$ for the SDP method are demonstrated in this scenario. Among the other methods, $\sigma_r$ and $\sigma_o$ affect the methods in a similar manner but at different scales.
For the translation error $e_t$, our methods are noticeably more robust at higher $\sigma_r$, but only marginally better at higher $\sigma_o$. The same can be said regarding the heading error $e_{\theta}$. However, it should be noted that the range of the heading error is mostly the same between methods since, unlike $\mathbf{t}$, the value of $\theta$ is limited to $[-\pi, \pi)$ and thus $e_{\theta}$ is bounded. Overall, our methods are significantly better at higher UWB noise $\sigma_r$ and slightly better at higher odometry noise $\sigma_o$. As such, there is still room for further improvement in our methods regarding inaccurate odometry data. The proposed QCQP and SDP methods have mostly identical results, which indicates that the relaxation is tight under the tested conditions. In most situations where $\sigma_r < 1\si{m}$ and $\sigma_o < 0.5\si{m}$, we believe both methods can be considered useful for practical applications. \subsubsection{Drift-correction capability} While the odometry from SLAM methods can provide accurate short-term ego-motion, it suffers from long-term drift due to accumulated errors, which is particularly prevalent in large-scale missions \cite{shenghai2021ussurvey,nguyen2021viralfusion}. This drift can be modelled as Gaussian noise acting on the relative frame transformation $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{T}}$ \cite{ziegler2021distributed}. As such, an RTE estimator can monitor and correct the drift by continuously estimating $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\hat{\mathbf{T}}}_k$ in a sliding-window fashion. Let $\sigma_{\mathbf{t}}$ and $\sigma_{\theta}$ be the noise on the translation ($x$, $y$, $z$ axes) and heading $\theta$ parts of $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\mathbf{T}}$, respectively. The larger $\sigma_{\mathbf{t}}$ and $\sigma_{\theta}$, the more significant the drift (Fig. \ref{fig:drift_correction_overview}a). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/drift_correction_overview.png} \caption{a) The trajectories and UWB measurements. QCQP-$\mathcal{R}_2$ refers to the global trajectory of the target robot as computed by our QCQP method, with $\sigma_{\mathbf{t}} = \sigma_{\theta} = 0.1$. Note that only a subset of all $d_k$ is illustrated to improve clarity.
b) Translation errors of different methods with $\sigma_{\mathbf{t}} = \sigma_{\theta} = 0.1$.} \label{fig:drift_correction_overview} \end{figure} \begin{table}[t] \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}[t]{ @{\hskip3pt}c@{\hskip3pt}| @{\hskip3pt}c@{\hskip3pt}| @{\hskip2pt}c@{\hskip2pt}| @{\hskip2pt}c@{\hskip2pt}| c|c|c|c} \toprule $\sigma_{\theta}$ & \multirow{2}{*}{NC} & Algebra & Linear & NLS & SDP & \multicolumn{2}{c}{Proposed} \\ \cline{7-8} (deg) & & \cite{trawny2010rel3Dtransform} & \cite{molina2019unique} & \cite{ziegler2021distributed} & \cite{jiang2020rel3D} & SDP & QCQP \\ \hline \multicolumn{8}{c}{$\sigma_{\mathbf{t}} = 0.01 \si{m}$} \\ \hline $0.01$ & \underline{2.28} & 84.36 & 94.35 & 6.81 & 4.86 & 2.36 & \textbf{0.51} \\ \hline $0.05$ & 11.75 & 88.06 & 108.83 & 7.59 & 4.91 & \underline{2.37} & \textbf{0.42} \\ \hline $0.1$ & 21.82 & 87.86 & 150.11 & 13.96 & 5.02 & \underline{2.40} & \textbf{0.58} \\ \hline \multicolumn{8}{c}{$\sigma_{\mathbf{t}} = 0.05 \si{m}$} \\ \hline $0.01$ & 2.71 & 82.07 & 153.26 & 7.05 & 4.94 & \underline{2.37} & \textbf{0.59} \\ \hline $0.05$ & 11.71 & 83.14 & 160.39 & 7.50 & 4.88 & \underline{2.36} & \textbf{0.53} \\ \hline $0.1$ & 23.19 & 81.16 & 173.13 & 7.06 & 4.97 & \underline{2.39} & \textbf{0.43} \\ \hline \multicolumn{8}{c}{$\sigma_{\mathbf{t}} = 0.1 \si{m}$} \\ \hline $0.01$ & 3.48 & 84.86 & 166.32 & 9.03 & 4.73 & \underline{2.39} & \textbf{0.53} \\ \hline $0.05$ & 11.09 & 85.01 & 170.38 & 9.04 & 4.82 & \underline{2.38} & \textbf{0.54} \\ \hline $0.1$ & 23.06 & 100.53 & 199.29 & 10.15 & 4.92 & \underline{2.40} & \textbf{0.61} \\ \bottomrule \end{tabular} \end{adjustbox} \caption{RMSE of the translation error ($\si{m}$) of the target robot's aligned trajectory with different drift conditions ($\sigma_{\mathbf{t}}$ and $\sigma_{\theta}$). The \textbf{first} and \underline{second} best results are ranked for each row. NC: no correction.} \label{table:results_drift_correction} \end{table} We simulate a scenario with two robots exploring a large environment (Fig. \ref{fig:drift_correction_overview}a) over a period of $10\si{mins}$. The host robot is equipped with a highly accurate localization system, such as RTK-GPS or LiDAR-based SLAM. The target robot is equipped with a VIO system that will drift away from the ground truth as time goes on. The host robot performs a sinuous trajectory to ensure the observability of the data within the sliding window, while the target robot performs a simple trajectory to scan the area. The sliding window contains the latest $50$ measurements, which are collected in $5$ seconds. All measurements are corrupted by $\sigma_o = 0.001\si{m}$ and $\sigma_r = 0.1\si{m}$. No initial guess was provided, meaning that all methods must 1) estimate the relative frame transformation $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\hat{\mathbf{T}}}_0$ and 2) track the changes of $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\hat{\mathbf{T}}}_k$, using only recent measurements. The global position of the target robot in the world frame, i.e. $\prescript{\mathcal{L}_1}{a_2}{\hat{\mathbf{p}}}_k \coloneqq \hat{\mathbf{t}}_k + \hat{\mathbf{C}}_k \prescript{\mathcal{L}_2}{a_2}{\hat{\mathbf{p}}}_k$, is the final output that we are most interested in. Table \ref{table:results_drift_correction} shows the RMSE of the translation error of $\prescript{\mathcal{L}_1}{a_2}{\hat{\mathbf{p}}}_k$.
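The computation of this output from the sliding-window estimate is straightforward; a minimal sketch (Python with numpy; names are hypothetical) is given below.
\begin{verbatim}
import numpy as np

def rot_z(theta):
    # 4-DoF relative rotation: yaw about the z axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def global_position(t_hat, theta_hat, p_local):
    # p_hat = t_hat + C_hat p_local: target position in the host frame
    return t_hat + rot_z(theta_hat) @ p_local

def rmse(p_est, p_true):
    # RMSE of the translation error over a window (k x 3 arrays)
    return np.sqrt(np.mean(np.sum((p_est - p_true) ** 2, axis=1)))
\end{verbatim}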
The smaller the RMSE, the better the system performs in both finding and monitoring $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\hat{\mathbf{T}}}_k$. The NC column shows the original accuracy of the onboard odometry without any correction, which serves as a baseline to assess improvements. Fig. \ref{fig:drift_correction_overview}b demonstrates the translation errors in the case with the most drift. It is noticeable that the algebraic and linear methods actually worsen the position estimates. The reason is that, with only the data in a short sliding window, the situation is similar to the ``hard'' cases described in the previous section (small $D$, large $d_0$). Hence, these methods could not obtain a good estimate for $\prescript{\mathcal{L}_1}{\mathcal{L}_2}{\hat{\mathbf{T}}}_k$ during the whole operation. The other optimization-based methods show more consistent and improved results in cases with larger drifts. Overall, our QCQP and SDP methods are the best and second-best, with a significant improvement over the other methods as well as the baseline. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/multi_viro_platforms.png} \caption{The quadrotor platforms used in our experiments.} \label{fig:exp_real_platforms} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/real_exp_trajs.png} \caption{The trajectories in the real-life flight tests.} \label{fig:exp_real_traj} \end{figure} \subsection{Real-life experiments} \begin{table}[t] \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}[t]{ @{\hskip3pt}c@{\hskip3pt}| @{\hskip3pt}c@{\hskip3pt}| @{\hskip2pt}c@{\hskip2pt}| @{\hskip2pt}c@{\hskip2pt}| c|c|c|c} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{RMSE} & Algebra & Linear & NLS & SDP & \multicolumn{2}{c}{Proposed} \\ \cline{7-8} & & \cite{trawny2010rel3Dtransform} & \cite{molina2019unique} & \cite{ziegler2021distributed} & \cite{jiang2020rel3D} & SDP & QCQP \\ \hline \multirow{2}{*}{01} & $\mathbf{t} (\si{m})$ & 0.759 & 0.469 & 0.333 & \textbf{0.258} & 0.310 & \underline{0.296} \\ & ${\theta} (\si{rad})$ & 0.057 & 0.311 & 0.053 & \underline{0.047} & \textbf{0.044} & 0.062 \\ \hline \multirow{2}{*}{02} & $\mathbf{t} (\si{m})$ & 0.384 & 0.456 & 0.295 & 0.379 & \underline{0.203} & \textbf{0.201} \\ & ${\theta} (\si{rad})$ & 0.156 & 0.285 & \textbf{0.072} & 0.102 & \underline{0.082} & \textbf{0.072} \\ \hline \multirow{2}{*}{03} & $\mathbf{t} (\si{m})$ & 0.570 & 0.339 & 1.855 & \underline{0.130} & 0.131 & \textbf{0.124} \\ & ${\theta} (\si{rad})$ & 0.131 & 0.217 & 0.846 & \underline{0.067} & 0.072 & \textbf{0.059} \\ \hline \multirow{2}{*}{04} & $\mathbf{t} (\si{m})$ & 0.315 & 0.397 & 0.162 & 0.531 & \underline{0.124} & \textbf{0.108} \\ & ${\theta} (\si{rad})$ & 0.103 & 0.160 & 0.114 & 0.061 & \underline{0.021} & \textbf{0.014} \\ \hline \multirow{2}{*}{05} & $\mathbf{t} (\si{m})$ & 1.265 & 1.125 & 0.195 & 0.459 & \underline{0.184} & \textbf{0.178} \\ & ${\theta} (\si{rad})$ & 0.211 & 0.483 & 0.132 & 0.138 & \textbf{0.119} & \underline{0.125} \\ \bottomrule \end{tabular} \end{adjustbox} \caption{RMSE of translation ($\mathbf{t}$) and heading (${\theta}$) in real-life experiments. Each result is averaged over 3 runs.
The \textbf{first} and \underline{second} best results are ranked for each row.} \label{table:results_real_life} \end{table} \begin{figure}[t] \centering \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=\textwidth]{Figures/flight02_results_errors.png} \caption{Estimation error} \label{fig:flight_02_results_errors} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=\textwidth]{Figures/flight02_results_benchmark.png} \caption{MSE against CRLB} \label{fig:flight_02_results_benchmark} \end{subfigure} \caption{Estimation results in one run for flight 02.} \label{fig:flight_02_estimation_results} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[t]{0.49\linewidth} \centering \includegraphics[width=\linewidth]{Figures/uncertainties_flight_02.png} \caption{Flight 02.} \label{fig:uncertainties_flight_02} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\linewidth} \centering \includegraphics[width=\linewidth]{Figures/uncertainties_flight_05.png} \caption{Flight 05.} \label{fig:uncertainties_flight_05} \end{subfigure} \caption{Uncertainty metrics in two real flight tests.} \label{fig:uncertainty_metrics_real_flights} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.195\textwidth} \centering \includegraphics[width=\textwidth]{Figures/uncertainties_singular_01_parallel.png} \caption{Parallel motion (Fig. \ref{fig:exp_singular_parallel})} \label{fig:uncertainties_singular_parallel} \end{subfigure} \hfill \begin{subfigure}[t]{0.195\textwidth} \centering \includegraphics[width=\textwidth]{Figures/uncertainties_singular_02_planar.png} \caption{Planar motion (Fig. \ref{fig:exp_singular_planar})} \label{fig:uncertaintis_singular_planar} \end{subfigure} \hfill \begin{subfigure}[t]{0.195\textwidth} \centering \includegraphics[width=\textwidth]{Figures/uncertainties_singular_03_static_target.png} \caption{Static target (Fig. \ref{fig:exp_singular_static_target})} \label{fig:uncertainties_singular_static_target} \end{subfigure} \hfill \begin{subfigure}[t]{0.195\textwidth} \centering \includegraphics[width=\textwidth]{Figures/uncertainties_singular_04_static_host.png} \caption{Static host (Fig. \ref{fig:exp_singular_static_host})} \label{fig:uncertainties_singular_static_host} \end{subfigure} \hfill \begin{subfigure}[t]{0.195\textwidth} \centering \includegraphics[width=\textwidth]{Figures/uncertainties_normal_no_singular.png} \caption{Normal case (Fig. \ref{fig:sys_overview}b)} \label{fig:uncertainties_normal_case} \end{subfigure} \caption{Estimation errors (top row) and uncertainty metrics (bottom row) in simulations. We can see that: 1) the estimated uncertainty metrics are clearly much larger in unobservable cases (Fig. \ref{fig:uncertainties_singular_parallel}-\ref{fig:uncertainties_singular_static_host}) than in observable cases (Fig. \ref{fig:uncertainties_normal_case}), and 2) if the standard error $\sigma_i$ or the estimation error $e_i$ does not improve over time, the parameter $\Theta_i$ is likely to be unobservable in that configuration.} \label{fig:uncertainty_metrics_simulations} \end{figure*} Fig. \ref{fig:exp_real_platforms} shows the hardware platforms in our real-life experiments. Each quadrotor is equipped with a Humatics P440 UWB\footnote{\url{https://fccid.io/NUF-P440-A}}, a UP2 mini computer\footnote{\url{https://up-board.org/}}, and an Intel Realsense T265 VIO sensor\footnote{\url{https://www.intelrealsense.com/tracking-camera-t265/}}.
The UWB antenna positions in the body frames are $\prescript{\mathcal{B}_1}{a_1}{\mathbf{p}} {=} [-0.02, 0.1, -0.05]^\top$ and $\prescript{\mathcal{B}_2}{a_2}{\mathbf{p}} {=} [-0.05, 0.15, -0.15]^\top$. The UWB data is generated at $37\si{Hz}$ while the VIO data arrives at $200\si{Hz}$. The noise standard deviations are set as $\sigma_r = 0.1\si{m}$ and $\sigma_o = 0.002\si{m}$. We collect the data over 5 flight tests with various trajectory configurations (Fig. \ref{fig:exp_real_traj}) in a VICON room that provides ground truth poses at $100\si{Hz}$. All methods are run on the same Intel NUC i7 mini computer. Table \ref{table:results_real_life} shows the average results over 3 runs. The algebraic and linear methods achieve a similar level of accuracy in most cases, but the results are not desirable. The NLS and previous SDP methods work well only in some scenarios. Our QCQP method surpasses the other methods in most cases while also providing consistent results, with our SDP method being a close second. Fig. \ref{fig:flight_02_results_errors} illustrates the evolution of the estimates in one run. It is evident that the proposed methods outperform previous approaches. This result correlates with the performance benchmark in Fig. \ref{fig:flight_02_results_benchmark}, where our methods approach the lower bound faster than others. However, it can be seen that as the operation continues, the estimates can become biased. The reasons might be the accumulated drift in the odometry data, errors in the calibration, or unmodelled effects in the UWB data during the flight. \subsection{Uncertainty estimation} \label{subsec:UncertaintyEval} As stated in Sect. \ref{subsubsec:UncertaintyEst}, the uncertainty of the configuration is quantified by the condition number of the estimated FIM, $\kappa(\hat{\mathbf{F}})$, whereas the uncertainty of each parameter is measured by the standard error $\sigma_{\hat{\Theta}_i}$. Fig. \ref{fig:uncertainty_metrics_real_flights} and \ref{fig:uncertainty_metrics_simulations} demonstrate these values in real-life experiments and simulations, respectively. All simulations are done with $\sigma_r = 0.1\si{m}$, $\sigma_o = 0.001\si{m}$, $D = 1\si{m}$ and $d_0 = 3\si{m}$. All results are obtained from our QCQP method. In typical observable situations (Fig. \ref{fig:uncertainties_flight_02}, \ref{fig:uncertainties_flight_05}, \ref{fig:uncertainties_normal_case}), the estimated uncertainties follow the trend of the actual errors: as more measurements are incorporated, the errors as well as the uncertainties decrease. In simulation, the robots' movements cover all directions, and the uncertainties on the $x$, $y$, $z$ axes behave similarly. On the other hand, in real flights the motion on the $z$ axis is much more limited than on the other axes due to safety concerns and the platforms' capability. As such, the rate of improvement for $\sigma_z$ is noticeably slower. In contrast, when the configuration is unobservable (Fig. \ref{fig:uncertainties_singular_parallel}-\ref{fig:uncertainties_singular_static_host}), firstly notice that the value of the condition number $\kappa(\hat{\mathbf{F}})$ is substantially larger and tends to only increase. Secondly, the particular parameters that are unobservable (shown in Fig. \ref{fig:exp_degenerate_configs}) have considerably larger standard errors that do not improve over time, while the others behave similarly to the observable cases. Hence, these configurations can be recognized as singular with the unobservable DoFs identified.
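The uncertainty reporting described above amounts to a few lines of numerical linear algebra. A minimal sketch (Python with numpy) is given below; the Jacobian rows $\mathbf{G}_i$ are assumed stacked into a $k \times 4$ array, and the threshold value is a hypothetical placeholder.
\begin{verbatim}
import numpy as np

def uncertainty_metrics(J, sigma_r, theta_hat, kappa_max=1e6):
    # FIM from stacked Jacobian rows: F = J^T J / sigma_r^2
    F = (J.T @ J) / sigma_r**2
    kappa = np.linalg.cond(F)        # singularity indicator
    if kappa > kappa_max:            # empirical threshold
        return kappa, None, None     # configuration deemed singular
    crlb = np.linalg.inv(F)          # estimated CRLB
    se = np.sqrt(np.diag(crlb))      # standard errors sigma_i
    ci = np.stack([theta_hat - 1.96 * se,
                   theta_hat + 1.96 * se], axis=1)  # 95% CIs
    return kappa, se, ci
\end{verbatim}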
\subsection{Computational demands} \begin{figure}[t] \centering \begin{subfigure}[t]{0.95\linewidth} \centering \includegraphics[width=\linewidth]{Figures/all_sims_aver_solver_time_main_figure.png} \caption{Solver time of all simulations in Fig. \ref{fig:compare_all_d0_vs_errors}.} \label{fig:solver_time_sim_only} \end{subfigure} \hfill \begin{subfigure}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{Figures/all_flights_aver_solver_time.png} \caption{Average solver time in real-life experiments in Table \ref{table:results_real_life}.} \label{fig:solver_time_real_only} \end{subfigure} \caption{Comparison of solver time (ms).} \label{fig:solver_time_both_sim_real} \end{figure} Fig. \ref{fig:solver_time_sim_only} and \ref{fig:solver_time_real_only} show the solver times in all simulations of Fig. \ref{fig:compare_all_d0_vs_errors} and in the real-life experiments of Table \ref{table:results_real_life}, respectively. Overall, most methods can run in real-time. The linear method is consistently the fastest, closely matched by the algebraic method. Our QCQP method is the slowest in simulation. Since the QCQP method's solver time often correlates with the hardness of the problem (it takes longer with smaller $D/d_0$ or larger noise) and the simulation covers all cases from easy to hard, its variation is much larger than that of the other methods. In real-life experiments, our SDP method runs faster than our QCQP method with an almost $50\%$ improvement, while the previous SDP method is the slowest and not real-time. The reason might be that the QCQP solver library directly supports our problem formulation, while we needed to adapt the public SDP solver library for the formulation of \cite{jiang2020rel3D}. Hence, how well the solver library is optimized for the formulation would be the main reason for the improved speed of our SDP method. However, as the scale of our real-life experiments is limited, these results are not as indicative as the simulation results. \section{Conclusion} \label{sec:conclusion} We study the 4-DoF RTE problem using local odometry and inter-robot UWB range measurements. A theoretical analysis of the problem is put forth, including the CRLB, the FIM, and the determinant of the FIM. Based on these findings, insights into the geometric interpretation of the information gain for each parameter, as well as methods to detect singular configurations and to measure the uncertainty of the estimates, are provided. To solve the problem, optimization-based solutions are introduced, which consist of a QCQP approach and the corresponding SDP relaxation. Our system outperforms previous methods in both simulations and real-life experiments, especially in challenging scenarios, and is more robust to large UWB noise. While both proposed approaches can run in real-time on mini computers, the QCQP method generally provides the most accurate results but takes longer than its SDP counterpart. Finding the full unobservable conditions, finding the optimal trajectory configuration using $\det(\mathbf{F})$, as well as extending the system to the general case with $N$ robots are interesting topics for future work. \section*{Acknowledgements} We would like to thank Dr. Cao Kun and Mr. Cao Muqing for the fruitful discussions regarding previous works.
\begin{appendices} \section{}\label{appendix:FIM_full} After computing the Jacobian, we have \begin{equation} \label{eq:G_i_original} \begin{aligned} \mathbf{G}_i = \left[ \partial_x f_i, \partial_y f_i, \partial_z f_i, \partial_{\theta} f_i \right] = \frac{1}{d_i} \left[ \prescript{}{1}{g}_i,\;\prescript{}{2}{g}_i,\;\prescript{}{3}{g}_i,\;\prescript{}{4}{g}_i \right], \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} d_i &= \norm{ \mathbf{t} + \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i - \prescript{\mathcal{L}_1}{a_1}{\mathbf{p}}_i } {=} \norm{ \begin{bmatrix} t_x {+} p^{2x}_{i} \textrm{c} \theta - p^{2y}_{i} \textrm{s} \theta - p^{1x}_{i} \\ t_y {+} p^{2x}_{i} \textrm{s} \theta + p^{2y}_{i} \textrm{c} \theta - p^{1y}_{i} \\ t_z + p^{2z}_{i} - p^{1z}_{i} \end{bmatrix} }\\ &= \sqrt{\prescript{}{1}{g}_i^2 + \prescript{}{2}{g}_i^2 + \prescript{}{3}{g}_i^2},\\ \prescript{}{1}{g}_i &= \prescript{}{1}{g}_i (t_x, \theta) = t_x + p^{2x}_{i} \textrm{c} \theta - p^{2y}_{i} \textrm{s} \theta - p^{1x}_{i},\\ \prescript{}{2}{g}_i &= \prescript{}{2}{g}_i (t_y, \theta) = t_y + p^{2x}_{i} \textrm{s} \theta + p^{2y}_{i} \textrm{c} \theta - p^{1y}_{i},\\ \prescript{}{3}{g}_i &= \prescript{}{3}{g}_i (t_z) = t_z + p^{2z}_{i} - p^{1z}_{i},\\ \prescript{}{4}{g}_i &{=} \prescript{}{4}{g}_i (t_x, t_y, \theta) {=} \prescript{}{1}{g}_i(-p^{2x}_{i} \textrm{s} \theta {-} p^{2y}_{i} \textrm{c} \theta) {+} \prescript{}{2}{g}_i(p^{2x}_{i} \textrm{c} \theta {-} p^{2y}_{i} \textrm{s} \theta). \end{aligned} \end{equation*} Let $\mathbf{u}_i = \left[\partial_x f_i, \; \partial_y f_i, \; \partial_z f_i \right]^\top = \frac{1}{d_i}\left[\prescript{}{1}{g}_i,\; \prescript{}{2}{g}_i,\;\prescript{}{3}{g}_i\right]^\top$. Then it is clear that $\norm{\mathbf{u}_i} = 1$. Notice that $\prescript{}{1}{g}_i,\prescript{}{2}{g}_i,\prescript{}{3}{g}_i$ respectively correspond to the displacements along the $x,y,z$ axes between the positions of the two robots at time $t_i$. Hence, $\mathbf{u}_i$ is a unit vector parallel to the relative position vector $\prescript{\mathcal{L}_1}{a_{1{,}2}}{\mathbf{p}}_i$ between the UWB antennas $a_1$ and $a_2$ at time $t_i$ in the world frame. We can simplify $\partial_{\theta} f_i = \prescript{}{4}{g}_i / d_i$ as \begin{equation} \label{eq:df_dtheta_simplified} \begin{aligned} \partial_{\theta} f_i &= \frac{1}{d_i} \left[ \prescript{}{1}{g}_i(-p^{2x}_{i} \textrm{s} \theta - p^{2y}_{i} \textrm{c} \theta) + \prescript{}{2}{g}_i(p^{2x}_{i} \textrm{c} \theta - p^{2y}_{i} \textrm{s} \theta )\right]\\ &= \left( \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times \begin{bmatrix} p^{2x}_{i} \textrm{c} \theta - p^{2y}_{i} \textrm{s} \theta \\ p^{2x}_{i} \textrm{s} \theta + p^{2y}_{i} \textrm{c} \theta \\ p^{2z}_{i} \end{bmatrix} \right) \cdot \begin{bmatrix} \frac{\prescript{}{1}{g}_i}{d_i} \\ \frac{\prescript{}{2}{g}_i}{d_i} \\ \frac{\prescript{}{3}{g}_i}{d_i} \end{bmatrix} \\ &= \left[ \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i) \right] \cdot \mathbf{u}_i, \end{aligned} \end{equation} where $\mathbf{u}_z = [0,0,1]^\top$. Finally, Eq. (\ref{eq:G_i_original}) can be rewritten as \begin{equation} \label{eq:G_i_vector} \begin{aligned} \mathbf{G}_i = \left[ \mathbf{u}_i^\top, \; \Phi_i \right], \end{aligned} \end{equation} where $\Phi_i \coloneqq \partial_{\theta} f_i = \left[ \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i) \right] \cdot \mathbf{u}_i$.
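For reference, one row $\mathbf{G}_i$ of the Jacobian can be evaluated numerically as follows (a Python sketch with numpy; names are hypothetical, and the antenna positions are expressed in their respective local frames).
\begin{verbatim}
import numpy as np

def jacobian_row(t, theta, p1, p2):
    # t = (tx, ty, tz); p1, p2: antenna positions in frames L1, L2 at t_i
    c, s = np.cos(theta), np.sin(theta)
    C = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    g = t + C @ p2 - p1       # relative antenna position in the world frame
    d = np.linalg.norm(g)     # predicted range f_i
    u = g / d                 # unit vector u_i
    phi = np.cross([0.0, 0.0, 1.0], C @ p2) @ u  # [u_z x (C p2)] . u_i
    return np.concatenate([u, [phi]])            # G_i = [u_i^T, Phi_i]
\end{verbatim}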
\section{}\label{appendix:sub_problems_derivation} \subsection{3D RTE with a common heading reference} \label{app_subsec:3D_known_theta} In this case, the state vector is $\mathbf{\Theta}_1 \coloneqq [t_x, t_y, t_z]^{\top}$, the Jacobian reduces to $\prescript{}{1}{\mathbf{G}}_i = [\partial_x f_i,\partial_y f_i,\partial_z f_i]$, and the FIM is $\mathbf{F}_1 = \sigma_r^{-2} {\mathbf{J}^\top_1} {\mathbf{J}_1}$, where the $i$-th row of ${\mathbf{J}_1}$ is $\prescript{}{1}{\mathbf{G}}_i$. The Cauchy-Binet formula gives \begin{equation} \begin{aligned} \det(\mathbf{F}_1) &= \frac{1}{\sigma_r^6} \sum\limits_{S_1}^{} \left(\det \left( \begin{matrix} \mathbf{u}_{j_1}^\top \\[0.2em] \mathbf{u}_{j_2}^\top \\[0.2em] \mathbf{u}_{j_3}^\top \end{matrix} \right) \right)^2 \\ &= \frac{1}{\sigma_r^6} \sum\limits_{S_1}^{} \left(\det \left( [\mathbf{u}_{j_1} \; \mathbf{u}_{j_2} \; \mathbf{u}_{j_3}] \right) \right)^2 \end{aligned} \end{equation} where $S_1 = \{ (j_1, j_2, j_3) \; \vert \; 1 \leq j_1 < j_2 < j_3 \leq k \}$. Since ${\det([\mathbf{a} \; \mathbf{b} \; \mathbf{c}]) = (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}}$, we have \begin{equation} \begin{aligned} &\det(\mathbf{F}_1) = \frac{1}{\sigma_r^6} \sum\limits_{S_1}^{} \left[ (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}) \cdot \mathbf{u}_{j_3} \right]^2 \\ &= \frac{1}{\sigma_r^6} \sum\limits_{S_1}^{} \norm{\mathbf{u}_{j_1}}^2 \norm{\mathbf{u}_{j_2}}^2 \norm{\mathbf{u}_{j_3}}^2 \textrm{s}^2 \alpha_{j_1, j_2} \textrm{s}^2 \beta_{j_1 j_2, j_3} \\ &= \frac{1}{\sigma_r^6} \sum\limits_{S_1}^{} \textrm{s}^2 \alpha_{j_1, j_2} \textrm{s}^2 \beta_{j_1 j_2, j_3}, \end{aligned} \end{equation} where $\alpha_{j_1,j_2} = \measuredangle (\mathbf{u}_{j_1}, \mathbf{u}_{j_2})$, $\beta_{j_1 j_2, j_3} = \frac{\pi}{2} - \measuredangle (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}, \mathbf{u}_{j_3})$. \subsection{2D RTE without a common heading reference} \label{subsec:2D_RTE_no_heading} In this case, the state vector is $\mathbf{\Theta}_2 \coloneqq [t_x, t_y, \theta]^{\top}$. To simplify the analysis, we still define the local odometry vectors $\prescript{\mathcal{L}_n}{a_n}{\mathbf{p}}_i$ and the unit relative position vector $\mathbf{u}_i$ in $\mathbb{R}^3$ but with zero $z$ elements. The target rotation matrix $\mathbf{C} \in SO(3)$ is the same. The length of the projection of $\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i$ on the $xy$ plane of $\{\mathcal{L}_2\}$ is $\rho_i = \norm{\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_i}$. The range measurement model is \begin{equation} \begin{aligned} f_i = \norm{ \begin{bmatrix} t_x + p^{2x}_{i} \textrm{c}\theta - p^{2y}_{i} \textrm{s}\theta - p^{1x}_{i} \\ t_y + p^{2x}_{i} \textrm{s}\theta + p^{2y}_{i} \textrm{c}\theta - p^{1y}_{i} \end{bmatrix} }, \end{aligned} \end{equation} and the Jacobians are $\prescript{}{}{\mathbf{H}}_i = \frac{1}{d_i} [\prescript{}{1}{h}_{i}, \prescript{}{2}{h}_{i}, \prescript{}{3}{h}_{i}]$, where \begin{equation}\label{eq:hi123} \begin{aligned} d_i &= \sqrt{\prescript{}{1}{h}_{i}^2 + \prescript{}{2}{h}_{i}^2}, \\ \prescript{}{1}{h}_{i} &= t_x + p^{2x}_{i} \textrm{c}\theta - p^{2y}_{i} \textrm{s}\theta - p^{1x}_{i}, \\ \prescript{}{2}{h}_{i} &= t_y + p^{2x}_{i} \textrm{s}\theta + p^{2y}_{i} \textrm{c}\theta - p^{1y}_{i}, \\ \prescript{}{3}{h}_{i} &= \prescript{}{1}{h}_{i}(-p^{2x}_{i} \textrm{s} \theta - p^{2y}_{i} \textrm{c} \theta) + \prescript{}{2}{h}_{i}( p^{2x}_{i} \textrm{c} \theta - p^{2y}_{i} \textrm{s} \theta).
\end{aligned} \end{equation} The FIM is $\mathbf{F}_2 = \sigma_r^{-2} {\mathbf{J}^\top_2} {\mathbf{J}_2}$, where the $i$-th row of ${\mathbf{J}_2}$ is $\prescript{}{}{\mathbf{H}}_i$. The Cauchy-Binet formula gives \begin{equation} \label{eq:det_F2_full} \begin{aligned} &\det(\mathbf{F}_2) = \frac{1}{\sigma_r^6} \sum\limits_{S_2}^{} \left(\det \left( \begin{bmatrix} \partial_x f_{j_1} & \partial_y f_{j_1} & \partial_{\theta} f_{j_1} \\ \partial_x f_{j_2} & \partial_y f_{j_2} & \partial_{\theta} f_{j_2} \\ \partial_x f_{j_3} & \partial_y f_{j_3} & \partial_{\theta} f_{j_3} \\ \end{bmatrix} \right) \right)^2 \\ &= \frac{1}{\sigma_r^6} \sum\limits_{S_2}^{} \left( \sum\limits_{i=1}^{3} (-1)^{i+3} \; \partial_{\theta} f_{j_i} \det \left( \begin{bmatrix} \partial_x f_{j_l} & \partial_y f_{j_l} \\ \partial_x f_{j_p} & \partial_y f_{j_p} \end{bmatrix} \right) \right)^2 \end{aligned} \end{equation} where $S_2 {=} \{ (j_1, j_2, j_3) \vert 1 \leq j_1 < j_2 < j_3 \leq k \}$ and $\{l,p\} = \{1,2,3\} \setminus \{i\}$ with $l < p$. From Eq. (\ref{eq:hi123}), we can write $\partial_{\theta} f_{j_i}$ ($i \in \{1,2,3\}$) as \begin{equation} \label{eq:2d_no_theta_dtfi} \begin{aligned} \partial_{\theta} f_{j_i} &= \frac{1}{d_{j_i}} \left[ \prescript{}{1}{h}_{j_i}(-p^{2x}_{j_i} \textrm{s} \theta - p^{2y}_{j_i} \textrm{c} \theta) + \prescript{}{2}{h}_{j_i}(p^{2x}_{j_i} \textrm{c} \theta - p^{2y}_{j_i} \textrm{s} \theta )\right]\\ &= \left( \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times \begin{bmatrix} p^{2x}_{j_i} \textrm{c} \theta - p^{2y}_{j_i} \textrm{s} \theta \\ p^{2x}_{j_i} \textrm{s} \theta + p^{2y}_{j_i} \textrm{c} \theta \\ 0 \end{bmatrix} \right) \cdot \begin{bmatrix} \frac{\prescript{}{1}{h}_{j_i}}{d_{j_i}} \\ \frac{\prescript{}{2}{h}_{j_i}}{d_{j_i}} \\ 0 \end{bmatrix} \\ &= \left[ \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}) \right] \cdot \mathbf{u}_{j_i} \end{aligned} \end{equation} where $\mathbf{u}_{j_i} {=} [\frac{\prescript{}{1}{h}_{j_i}}{d_{j_i}}, \frac{\prescript{}{2}{h}_{j_i}}{d_{j_i}}, 0]^\top$. Since the vector $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}$ resides on the $xy$ plane, the angle between $\mathbf{u}_z$ and $\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}$ is always $\pi / 2$. Also, notice that $\norm{\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}} {=} \norm{\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}} {=} \rho_{j_i}$. Let ${\gamma_{j_i} = \frac{\pi}{2} - \measuredangle \left( \mathbf{u}_z \times (\mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}), \mathbf{u}_{j_i} \right)}$. Then $\partial_{\theta} f_{j_i}$ in Eq. (\ref{eq:2d_no_theta_dtfi}) can be simplified as \begin{equation} \label{eq:det_2d_dfi_simplified} \begin{aligned} \partial_{\theta} f_{j_i} &= \norm{\mathbf{u}_z} \norm{\mathbf{C}\prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}} \norm{\mathbf{u}_{j_i}} \left| \textrm{s} \measuredangle (\mathbf{u}_z, \mathbf{C} \prescript{\mathcal{L}_2}{a_2}{\mathbf{p}}_{j_i}) \right| \textrm{s} \gamma_{j_i} \\ &= \rho_{j_i} \textrm{s} \gamma_{j_i}.
\end{aligned} \end{equation} Next, we have \begin{equation} \label{eq:det_flfp_2d} \begin{aligned} &\det \left( \begin{bmatrix} \partial_x f_{j_l} & \partial_y f_{j_l} \\ \partial_x f_{j_p} & \partial_y f_{j_p} \end{bmatrix} \right) = \det \left( \begin{bmatrix} \partial_x f_{j_l} & \partial_x f_{j_p} \\ \partial_y f_{j_l} & \partial_y f_{j_p} \end{bmatrix} \right) \\ &= \det \left( \begin{bmatrix} \partial_x f_{j_l} & \partial_x f_{j_p} & 0 \\ \partial_y f_{j_l} & \partial_y f_{j_p} & 0 \\ 0 & 0 & 1 \end{bmatrix} \right) = (\mathbf{u}_{j_l} \times \mathbf{u}_{j_p}) \cdot \mathbf{u}_z. \end{aligned} \end{equation} Since both vectors $\mathbf{u}_{j_l}$ and $\mathbf{u}_{j_p}$ reside on the $xy$ plane, their cross product aligns with $\mathbf{u}_z$. The signed angle from $\mathbf{u}_{j_l}$ to $\mathbf{u}_{j_p}$ can be computed as \begin{equation} \alpha_{j_l,j_p} = \atantwo \left( [\mathbf{u}_{j_l} \times \mathbf{u}_{j_p}]_z, \; \mathbf{u}_{j_l} \cdot \mathbf{u}_{j_p} \right), \end{equation} where $[.]_z$ denotes the $z$ element of the argument vector in $\mathbb{R}^3$. As such, $\mathbf{u}_{j_l} \times \mathbf{u}_{j_p} = \norm{\mathbf{u}_{j_l}} \norm{\mathbf{u}_{j_p}} \textrm{s}\alpha_{j_l,j_p} \mathbf{u}_z = \textrm{s}\alpha_{j_l,j_p} \mathbf{u}_z$. Eq. (\ref{eq:det_flfp_2d}) can then be written as \begin{equation} \label{eq:det_2d_fxfy_simplified} \det \left( \begin{bmatrix} \partial_x f_{j_l} & \partial_y f_{j_l} \\ \partial_x f_{j_p} & \partial_y f_{j_p} \end{bmatrix} \right) = \textrm{s}\alpha_{j_l,j_p} \norm{\mathbf{u}_z}^2 = \textrm{s}\alpha_{j_l,j_p}. \end{equation} Substituting Eq. (\ref{eq:det_2d_dfi_simplified}) and (\ref{eq:det_2d_fxfy_simplified}) into (\ref{eq:det_F2_full}), we have \begin{equation} \det(\mathbf{F}_2) = \frac{1}{\sigma_r^6} \sum\limits_{S_2}^{} \left[ \sum\limits_{i = 1}^{3} (-1)^{i+1} \rho_{j_i} \textrm{s}{\alpha_{j_l, j_p}} \textrm{s}{\gamma_{j_i}} \right]^2. \end{equation} \subsection{2D RTE with a common heading reference} In this case, the state vector is $\mathbf{\Theta}_3 \coloneqq [t_x, t_y]^{\top}$. Building upon the notations and equations in Appendix \ref{app_subsec:3D_known_theta}, the determinant of the FIM is \begin{equation} \begin{aligned} &\det(\mathbf{F}_3) = \frac{1}{\sigma_r^4} \sum\limits_{S_3}^{} \left(\det \left( \begin{bmatrix} \partial_x f_{j_1} & \partial_y f_{j_1} \\ \partial_x f_{j_2} & \partial_y f_{j_2} \\ \end{bmatrix} \right) \right)^2 \\ &= \frac{1}{\sigma_r^4} \sum\limits_{S_3}^{} \left( (\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}) \cdot \mathbf{u}_z \right)^2 = \frac{1}{\sigma_r^4} \sum\limits_{S_3}^{} \textrm{s}^2\alpha_{j_1,j_2}, \end{aligned} \end{equation} where $S_3 = \{(j_1,j_2) \; \vert \; 1 \leq j_1 < j_2 \leq k \}$ and $\alpha_{j_1,j_2} = \atantwo \left( [\mathbf{u}_{j_1} \times \mathbf{u}_{j_2}]_z, \; \mathbf{u}_{j_1} \cdot \mathbf{u}_{j_2} \right)$. This completes the proof. \end{appendices} \balance \bibliographystyle{IEEEtran}
{ "timestamp": "2022-02-02T02:14:16", "yymm": "2202", "arxiv_id": "2202.00279", "language": "en", "url": "https://arxiv.org/abs/2202.00279" }
\section{Introduction} The notion of the Terwilliger algebra was introduced by Paul Terwilliger in \cite{terwilliger}. Since its introduction, a number of related works have appeared. Other works discussing the Terwilliger algebra can be found in \cite{fernadezmiklavic,HamidTerwilliger}. For recent papers, the reader can see \cite{balmacedareyes,hanaki}. The computations in this paper are done using \cite{sagemath}. The main objective of this paper is to obtain the Terwilliger algebras of the group association schemes of the groups $G_I$, $G_{II}, G_{III},G_{IV}$. Let $X \in \lbrace \text{I}, \text{ II}, \text{ III}, \text{ IV} \rbrace$. It is known that a weight enumerator of Type $X$ codes is $G_X$-invariant. The reader who is interested in the Terwilliger algebra can go directly to Section \ref{sec: Terwilliger algebra}. The section shows that \begin{align*} T(G_{I}) & \cong \mathcal{M}_1 \oplus \mathcal{M}_1 \oplus \mathcal{M}_2 \oplus \mathcal{M}_3 \oplus \mathcal{M}_7, \\ T(G_{II}) & \cong \mathcal{M}_4 \oplus \mathcal{M}_8 \oplus \mathcal{M}_{12} \oplus \mathcal{M}_{16} \oplus \mathcal{M}_{24} \oplus \mathcal{M}_{32}, \\ T(G_{III}) & \cong \mathcal{M}_2 \oplus \mathcal{M}_{10} \oplus \mathcal{M}_{16}, \\ T(G_{IV}) & \cong \mathcal{M}_2 \oplus \mathcal{M}_2 \oplus \mathcal{M}_{6}. \end{align*} From the coding-theoretic point of view, the invariant theory of finite groups connects number theory to coding theory. Continuing this point of view, it is shown that the ring of the weight enumerators of Type $X$ codes can be generated by Eisenstein polynomials (E-polynomials for short) associated to Type $X$ codes. In other words, it is obtained that \[ \mathfrak{R}^{G_{I}} = \mathbb{C}[\varphi_2, \varphi_8] \] \[ \mathfrak{R}^{G_{II}} = \mathbb{C}[\varphi_8, \varphi_{24}] \] \[ \mathfrak{R}^{G_{III}} = \mathbb{C}[\varphi_4, \varphi_{12}] \] \[ \mathfrak{R}^{G_{IV}} = \mathbb{C}[\varphi_2, \varphi_6] \] where $\mathfrak{R}^{G_X}$ denotes the invariant ring for $G_X$. Some results in Section \ref{sec:E-polynomials} are not new. For example, the E-polynomials related to $G_{\text{II}}$ were discussed in \cite{ourapoly}. \section{Preliminaries} We follow \cite{MiezakiOura} for the notations. Let \[ G_I = \left\langle \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \right\rangle, \] \[ G_{II} = \left\langle \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \right\rangle, \] \[ G_{III} = \left\langle \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & \exp(2 \pi i / 3) \end{pmatrix} \right\rangle, \] \[ G_{IV} = \left\langle \frac{1}{2} \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \right\rangle. \] The orders and the numbers of conjugacy classes of these groups can be seen in Table \ref{tab:order of G}. \begin{table} \centering \begin{tabular}{ccc} $G$ & $|G|$ & Conjugacy Classes \\ \hline $G_I$ & 16 & 7 \\ $G_{II}$ & 192 & 32 \\ $G_{III}$ & 48 & 14 \\ $G_{IV}$ & 12 & 6 \end{tabular} \caption{Orders and numbers of conjugacy classes of the groups} \label{tab:order of G} \end{table} Next, some terms from coding theory are given. Let $\mathbb{F}_q$ be the field of $q$ elements. A linear code $C$ of length $n$ is a linear subspace of $\mathbb{F}_q^n$. The number of nonzero components of a codeword $\mathbf{c} \in C$ is called the weight of $\mathbf{c}$ and denoted by $wt(\mathbf{c})$.
The weight enumerator $w_C (x,y)$ of a code $C$ is defined by \[ w_C (x,y) := \sum_{\mathbf{c} \in C} x^{n- wt(\mathbf{c})} y^{wt(\mathbf{c})}. \] The inner product $(\mathbf{x}, \mathbf{y})$ of two elements $\mathbf{x}, \mathbf{y} \in \mathbb{F}_q^n$ is defined by \[ (\mathbf{x}, \mathbf{y}) := x_1 y_1 + \cdots + x_n y_n, \] computed in $\mathbb{F}_q$. The dual of $C$, denoted by $C^{\perp}$, is defined by \[ C^{\perp} := \lbrace \mathbf{y} \in \mathbb{F}_q^n \ \vert \ (\mathbf{x},\mathbf{y})=0, \forall \mathbf{x} \in C \rbrace. \] We say $C$ is self-dual if $C=C^\perp$ holds. A self-dual code $C$ is said to be of \begin{enumerate} \item Type I if it is defined over $\mathbb{F}_2^n$ with all weights multiples of 2; \item Type II if it is defined over $\mathbb{F}_2^n$ with all weights multiples of 4; \item Type III if it is defined over $\mathbb{F}_3^n$ with all weights multiples of 3; and \item Type IV if it is defined over $\mathbb{F}_4^n$ with all weights multiples of 2. \end{enumerate} It is necessary to note that the weight enumerator $w_C(x,y)$ of a Type $X$ code is in the invariant ring of $G_X$ for $X \in \lbrace \text{I}, \text{ II}, \text{ III}, \text{ IV} \rbrace$. Let $\mathfrak{R}^{G_X}$ be the invariant ring of $G_X$. That is, \[ \mathfrak{R}^{G_X} = \lbrace f \in \mathbb{C}[x,y] \ \vert \ f^g = f, \ \forall g \in G_X \rbrace. \] Here $f^g$ means the action of $g$ on $f$. In this paper, the dimension formula $\mathcal{I}(G_X)$ for $\mathfrak{R}^{G_X}$ is written as the formal series \[ \mathcal{I}(G_X) = \sum_{k=0}^\infty \dim \mathfrak{R}^{G_X}_k t^k. \] The dimension formulas for $\mathfrak{R}^G$ where $G=G_I, G_{II},G_{III},G_{IV}$ are: \[ \mathcal{I}(G_{I}) = \frac{1}{(1-t^2)(1-t^8)}, \] \[ \mathcal{I}(G_{II}) = \frac{1}{(1-t^8)(1-t^{24})}, \] \[ \mathcal{I}(G_{III}) = \frac{1}{(1-t^4)(1-t^{12})}, \] \[ \mathcal{I}(G_{IV}) = \frac{1}{(1-t^2)(1-t^6)}. \] \section{E-polynomials} \label{sec:E-polynomials} This section discusses E-polynomials. Before proceeding, some information about the generators of the invariant rings mentioned before is provided. We refer to \cite{MiezakiOura} for the generators. \begin{enumerate} \item $\mathfrak{R}^{G_I} = \mathbb{C}[f,g], f=x^2+y^2, g=x^2 y^2(x^2-y^2)^2$ \item $\mathfrak{R}^{G_{II}} = \mathbb{C}[f,g], f=x^8 + 14 x^4 y^4 + y^8, g=x^4 y^4(x^4-y^4)^4$ \item $\mathfrak{R}^{G_{III}} = \mathbb{C}[f,g], f=x^4+8xy^3, g=y^3(x^3-y^3)^3$ \item $\mathfrak{R}^{G_{IV}} = \mathbb{C}[f,g], f=x^2 + 3y^2, g=y^2(x^2-y^2)^2$ \end{enumerate} It will be shown that $\mathfrak{R}^{G_X}$ can be generated by the E-polynomials related to Type $X$ codes. Let $\bar{x}$ be the column vector of $x$ and $y$. An E-polynomial $\varphi_k$ of degree $k$ for the group $G_X$ with respect to Type $X$ codes is defined by \[ \varphi_k (\bar{x})=\varphi_k(x,y)= \frac{1}{|G_X|} \sum_{\sigma \in G_X} (\sigma_1 \bar{x})^k \] where $\sigma_1$ is the first row of $\sigma$. It is not difficult to show that an E-polynomial for $G_X$ belongs to $\mathbb{C}[x,y]^{G_X}$. Let $\mathfrak{E}^{G_X}$ be the ring of E-polynomials for the group $G_{X}$. By obtaining the generators of $\mathfrak{E}^{G_X}$, the following theorem is obtained. \begin{thm} \label{thm:epoly generated} \[ \mathfrak{E}^{G_I} = \mathbb{C}[\varphi_2, \varphi_8] \] \[ \mathfrak{E}^{G_{II}} = \mathbb{C}[\varphi_8, \varphi_{24}] \] \[ \mathfrak{E}^{G_{III}} = \mathbb{C}[\varphi_4, \varphi_{12}] \] \[ \mathfrak{E}^{G_{IV}} = \mathbb{C}[\varphi_2, \varphi_{6}] \] \end{thm} \begin{proof} This is done by computation.
The proof is similar to \cite[Theorem 4.2.]{Hamid}. \end{proof} Having found the generators of $\mathfrak{E}^{G_X}$, we examine whether they generate the invariant ring $\mathfrak{R}^{G_X}$. The following theorem shows that the invariant rings of the groups $G_I, G_{II}, G_{III}, G_{IV}$ can be generated by the E-polynomials for each group. \begin{thm} \label{thm:invariant generator} The following hold: \[ \mathfrak{R}^{G_{I}} = \mathbb{C}[\varphi_2, \varphi_8] \] \[ \mathfrak{R}^{G_{II}} = \mathbb{C}[\varphi_8, \varphi_{24}] \] \[ \mathfrak{R}^{G_{III}} = \mathbb{C}[\varphi_4, \varphi_{12}] \] \[ \mathfrak{R}^{G_{IV}} = \mathbb{C}[\varphi_2, \varphi_6] \] \end{thm} \begin{proof} We give the proof for the $G_{III}$ case. The proofs of the $G_{I}, G_{II},$ and $G_{IV}$ cases are similar. The explicit forms of $\varphi_4$ and $\varphi_{12}$ are \begin{align*} \varphi_4 & = \frac{1}{3} \left(x^{4} + 8 x y^{3} \right), \\ \varphi_{12} & = 243 \left( 61 x^{12} + 440 x^{9} y^{3} + 14784 x^{6} y^{6} + 28160 x^{3} y^{9} + 1024 y^{12} \right). \end{align*} Then $f$ and $g$ for the $G_{III}$ case can be expressed as \begin{align*} f = & 3 \varphi_4, \\ g = & \frac{1}{1024} \left(1647 \varphi_4^3 - 243 \varphi_{12} \right). \end{align*} In the same way, the other cases can be proved. This completes the proof. \end{proof} Theorem \ref{thm:invariant generator} shows that the invariant rings of $G_I,G_{II}, G_{III}, G_{IV}$ can be generated by the E-polynomials related to them. For the details, the explicit forms of the generators taken from the E-polynomials are given below (the $G_{III}$ case appears in the proof above). \begin{align*} G_{I} : & \ \varphi_2 = \frac{1}{2} \left( x^2 + y^2 \right), \\ &\ \varphi_8 = \frac{1}{32} \left( 9 x^{8} + 28 x^{6} y^{2} + 70 x^{4} y^{4} + 28 x^{2} y^{6} + 9 y^{8} \right).\\ G_{II} : & \ \varphi_8 = \frac{1}{24} \left( 5 x^{8} + 70 x^{4} y^{4} + 5 y^{8} \right), \\ & \ \varphi_{24}=\frac{1}{6144} ( x^{24} + 10626 x^{20} y^{4} + 735471 x^{16} y^{8} + 2704156 x^{12} y^{12} \\ & \ \ \ \ \ \ \ \ + 735471 x^{8} y^{16} + 10626 x^{4} y^{20} + 1025 y^{24} ) \\ \end{align*} \begin{align*} G_{IV} : & \ \varphi_2 = \frac{1}{2} \left( x^{2} + 3 y^{2}\right), \\ & \ \varphi_{6}=\frac{1}{32} ( 11 x^{6} + 45 x^{4} y^{2} + 405 x^{2} y^{4} + 243 y^{6}). \\ \end{align*} \section{Terwilliger Algebra} \label{sec: Terwilliger algebra} Before continuing the investigation of the Terwilliger algebra, the group association scheme needs to be defined. \begin{df}\label{defGroupAssociation} Let $G$ be a finite group and $C_0,C_1,\ldots,C_d$ be the conjugacy classes of $G$ in a fixed ordering. Define the relations $R_i(i=0,1,\ldots,d)$ on $G$ by $$(x,y)\in R_i \Longleftrightarrow yx^{-1}\in C_i.$$ Then $\mathfrak{X}(G)=(G,\lbrace R_i \rbrace_{0\leq i \leq d})$ forms a commutative association scheme of class $d$ called the \textit{group association scheme of $G$}. \end{df} We associate to the relation $R_i$ the matrix $A_i$ defined by \begin{equation*} (A_i)_{x,y} = \begin{cases} 1 & \text{if} \left(x,y\right)\in R_i,\\ 0 & \text{otherwise}. \end{cases} \end{equation*} Then, \begin{equation*} A_i A_j = \sum_{k=0}^{d}{p_{ij}^k A_k} \end{equation*} and the matrices $A_0,\ldots, A_d$ generate an algebra $\mathfrak{A}$ called the Bose-Mesner algebra. The intersection numbers $p_{ij}^k$ of the group association scheme $\mathfrak{X}$ are given by $$\vert \lbrace \left(x,y\right) \in C_i \times C_j \ \vert \ xy=z \rbrace \vert$$ for a fixed $z\in C_k$. For each $i=0,\ldots,d,$ let $E_i^*$ be the diagonal matrix of size $n \times n$ which is defined as follows.
\begin{equation*} \left( E_i^* \right)_{x,x} = \begin{cases} 1, & \text{if } x \in C_i\\ 0, & \text{if } x \notin C_i \end{cases} \qquad \left(x\in G \right). \end{equation*} Then $\mathfrak{A^*} = \langle E_0^*,\ldots,E_d^* \rangle$ is a commutative algebra called the dual Bose-Mesner algebra. The intersection numbers provide information about the structure of the Terwilliger algebra. The following relation is from \cite{terwilliger}. \begin{equation*} \begin{array}{c c c c} E_i^* A_j E_k^* = 0 & \text{iff} & p_{ij}^k=0 & (0\leq i,j,k \leq d) \end{array} \end{equation*} \begin{df} Let $G$ be a finite group. The Terwilliger algebra $T(G)$ is the subalgebra of $Mat_G (\mathbb{C})$ generated by $\mathfrak{A}$ and $\mathfrak{A}^*$. \end{df} The Terwilliger algebra is in general a noncommutative algebra. It is also semisimple since it is closed under the conjugate-transpose map. The investigation of this algebra is undertaken by obtaining its properties, namely its dimension, primitive central idempotents, and structure. From \cite{bannaimunemasa}, the bounds on the dimension of $T(G)$ turn out to be \[ \vert \lbrace i,j,k \vert p_{ij}^k \neq 0 \rbrace \vert \leq \dim T \leq \sum_{i=0}^d \frac{|G|}{|C_i|}. \] The dimensions of $T(G_X)$ are provided below. \begin{thm} \label{thm:dimension of T} The dimensions of $T(G_I),T(G_{II}),T(G_{III}),$ and $T(G_{IV})$ are given as follows: \begin{center}\label{thdim} \begin{tabular}{l l l} 1. & $T(G_I)$ & 64\\ 2. & $T(G_{II})$ & 2808 \\ 3. & $T(G_{III})$ & 300 \\ 4. & $T(G_{IV})$ & 44 \\ \end{tabular} \end{center} \end{thm} \begin{proof} The dimension in each case is obtained by determining a basis for each algebra. A set $\mathcal{B}$ of linearly independent elements spanning the set $\lbrace E_i^* A_j E_k^*, E_i^* A_j E_k^* \cdot E_k^* A_l E_m^* \rbrace$ is found. The computation shows that $\mathcal{B}$ generates the set $\lbrace E_i^* A_j E_k^* \cdot E_k^* A_l E_m^* \cdot E_m^* A_n E_p^* \rbrace$. This completes the proof. The details of the distribution of the basis elements related to each conjugacy class are given in Appendix \ref{sec:appendix2}. \end{proof} Theorem \ref{thm:dimension of T} shows that $T(G_{IV})$ satisfies the condition \[ \dim T = \sum_{i=0}^d \frac{|G_{IV}|}{|C_i|} \] where $C_i$ $(i=0,\ldots,d)$ are the conjugacy classes of $G_{IV}$. Readers who want to know how the dimension of $T(G_X)$ is obtained can see Appendix \ref{sec:appendix2}. After providing the dimension of $T(G_X)$, the primitive central idempotents need to be obtained. We denote by $Z(T)$ the center of $T$. From \cite{balmaceda}, $Z(T)$ consists of block diagonal matrices. Hence, it can be written as follows. \[ Z(T) \subseteq \oplus_{i=0}^d Z(E_i^* T E_i^*). \] Thus, to obtain the center of $T$, it is sufficient to consider the basis elements which are related to the $(C_i,C_i)$ positions. \begin{lem} The dimensions of the centers of $T(G_{I}), T(G_{II}), T(G_{III})$ and $T(G_{IV})$ are the following: \begin{enumerate} \item $\dim Z(T(G_I)) = 5$. \item $\dim Z(T(G_{II})) = 6$. \item $\dim Z(T(G_{III})) = 3$. \item $\dim Z(T(G_{IV})) = 3$. \end{enumerate} \end{lem} \begin{proof} The result is obtained by determining a basis of the center, that is, by computing the dimension of the solution space of the linear system $\lbrace x_i y = y x_i \rbrace$, where $y=\sum {c_j b_j}$ and $b_j$, $x_i$ run over the basis of $T$.
\end{proof} We next determine the primitive central idempotents, that is, the set $\lbrace \varepsilon_i \ | \ 1 \leq i \leq s \rbrace$ satisfying $\varepsilon_i^2 = \varepsilon_i \neq \mathbf{0}$, $\varepsilon_i \varepsilon_j = \delta_{ij}\varepsilon_i$, $\sum_{i=1}^s \varepsilon_i = I$, and $\varepsilon_i \in Z(T)$, together with the degrees of the irreducible complex representations they afford. These are obtained using the method described in \cite{balmaceda}. Let $e_1, e_2, \ldots, e_s$ be a basis for $Z(T(G))$, and write \[ e_i e_j = \sum_k r_{ij}^k e_k. \] Define matrices $B_i$ by \[ B_i := (r_{ij}^k)_{j,k}, \quad 1 \leq i \leq s. \] As the matrices $B_i$ mutually commute, they can be simultaneously diagonalised by a nonsingular matrix. Thus, there is a matrix $P$ such that \begin{equation} \label{eq:diagonal} P^{-1} B_i P \end{equation} is a diagonal matrix for $i=1, \ldots, s$. Let $v_1(i), \ldots, v_s(i)$ be the diagonal entries of (\ref{eq:diagonal}). Define a matrix $M$ by \[ M_{ij} := v_i(j). \] Then the primitive central idempotents $\varepsilon_1, \ldots, \varepsilon_s$ of $T(G)$ can be obtained by \[ (\varepsilon_1, \ldots, \varepsilon_s) = (e_1, \ldots, e_s) M^{-1}. \] Using the primitive central idempotents, the following result is obtained. \begin{thm}\label{thpmt} The degrees of the irreducible complex representations afforded by each primitive central idempotent are given below. \begin{tabular}{c l c c c c c c c c c} (1)&$T(G_I)$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & $\varepsilon_4$ & $\varepsilon_5$ & & &\\ & & \text{deg}$\varepsilon_i$ & 1 & 1 & 2 & 3 & 7 & & & \\ (2)&$T(G_{II})$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & $\varepsilon_4$ & $\varepsilon_5$ & $\varepsilon_6$ & & \\ & & \text{deg}$\varepsilon_i$ & 4 & 8 & 12 & 16 & 24 & 32 & & \\ (3)&$T(G_{III})$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & & & & &\\ & & \text{deg}$\varepsilon_i$ & 2 & 10 & 16 & & & & & \\ (4)&$T(G_{IV})$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & & & & &\\ & & \text{deg}$\varepsilon_i$ & 2 & 2 & 6 & & & & & \\ \end{tabular} \end{thm} \begin{proof} To determine the degree $d_i$ afforded by each $\varepsilon_i$, we use the fact that $T \varepsilon_i \cong \mathcal{M}_{d_i}(\mathbb{C})$. Thus $d_i^2 = \dim T\varepsilon_i$, which equals the dimension of the span of $\lbrace x_j \varepsilon_i\rbrace$, where the $x_j$ are the basis elements of $T$. \end{proof} The degrees afforded by the primitive central idempotents enable us to obtain the following structure theorem, in which $\mathcal{M}_i$ denotes the full matrix algebra over $\mathbb{C}$ of degree $i$. \begin{cor}[Structure Theorem for $T(G_X)$ ] \begin{enumerate} \item $T(G_{I}) \cong \mathcal{M}_1 \oplus \mathcal{M}_1 \oplus \mathcal{M}_2 \oplus \mathcal{M}_3 \oplus \mathcal{M}_7 $. \item $T(G_{II}) \cong \mathcal{M}_4 \oplus \mathcal{M}_8 \oplus \mathcal{M}_{12} \oplus \mathcal{M}_{16} \oplus \mathcal{M}_{24} \oplus \mathcal{M}_{32} $. \item $T(G_{III}) \cong \mathcal{M}_2 \oplus \mathcal{M}_{10} \oplus \mathcal{M}_{16} $. \item $T(G_{IV}) \cong \mathcal{M}_2 \oplus \mathcal{M}_2 \oplus \mathcal{M}_{6}$. \end{enumerate} \end{cor}
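The dimension computations above were carried out for the specific groups $G_X$. To make the construction concrete, the following minimal Python sketch (our illustration, not the computation used in this paper) builds the group association scheme of the symmetric group $S_3$, forms the matrices $E_i^* A_j E_k^*$, and compares the dimension of the algebra they span (including length-two products, as in the proof of Theorem \ref{thm:dimension of T}) with the two bounds quoted above. For $S_3$ the lower and upper bounds coincide, so the dimension is immediate; for the groups $G_X$ the bounds differ and the longer products matter.
\begin{verbatim}
# A minimal illustration (not the computation used in this paper): the group
# association scheme of S_3, the matrices E_i^* A_j E_k^*, and the dimension
# of the algebra they span, compared with the two bounds quoted above.
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))      # S_3 as permutation tuples
def mul(p, q):                                  # composition (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# conjugacy classes C_0, ..., C_d with C_0 = {identity}
classes, seen = [], set()
for g in G:
    if g not in seen:
        cls = {mul(mul(h, g), inv(h)) for h in G}
        seen |= cls
        classes.append(sorted(cls))
classes.sort(key=lambda c: (c != [tuple(range(3))], len(c)))
n, d = len(G), len(classes) - 1
idx = {g: i for i, g in enumerate(G)}

# adjacency matrices A_i: (A_i)_{x,y} = 1 iff y x^{-1} in C_i
A = [np.zeros((n, n)) for _ in range(d + 1)]
for x in G:
    for y in G:
        c = next(i for i, C in enumerate(classes) if mul(y, inv(x)) in C)
        A[c][idx[x], idx[y]] = 1.0

# dual idempotents E_i^*: diagonal indicator matrices of the classes
Es = [np.diag([1.0 if g in C else 0.0 for g in G]) for C in classes]

triples = [Es[i] @ A[j] @ Es[k]
           for i in range(d + 1) for j in range(d + 1) for k in range(d + 1)]
lower = sum(1 for t in triples if np.any(t))    # #{(i,j,k) : p_ij^k != 0}
upper = sum(n // len(C) for C in classes)       # sum_i |G|/|C_i|

# span of the triple products together with their length-two products
span = [t.ravel() for t in triples if np.any(t)]
span += [(s @ t).ravel() for s in triples for t in triples]
dim = np.linalg.matrix_rank(np.array([v for v in span if np.any(v)]))
print(lower, "<=", dim, "<=", upper)            # prints: 11 <= 11 <= 11
\end{verbatim}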
{ "timestamp": "2022-02-02T02:12:07", "yymm": "2202", "arxiv_id": "2202.00241", "language": "en", "url": "https://arxiv.org/abs/2202.00241" }
\section{Introduction} A century after the birth of Einstein's theory of general relativity (GR) \cite{Einstein1916, Einstein1918}, the first direct observation of gravitational waves (GWs) was achieved with the golden event GW150914. GR is not fully consistent with quantum physics and string-theoretical viewpoints. It is thus important to probe new physics beyond GR \cite{Will, LVK, KAGRA}. In a four-dimensional spacetime, general metric theories allow at most six GW polarization states (two spin-0, two spin-1 and two spin-2) \cite{Eardley}. Once the transverse-traceless (TT) spin-2 polarizations are detected, it will be of great importance to probe the extra polarizations beyond GR. The two scalar modes, called breathing (B) and longitudinal (L), are degenerate in interferometry, because the antenna pattern functions for the B and L modes take the same form but with the opposite sign \cite{Nishizawa2009}. Therefore, a direct test of each polarization state needs five or more ground-based detectors. For a merger event associated with an electromagnetic counterpart, the GW source sky position is known from multi-messenger astronomy. For such multi-messenger events in particular sky regions, the minimum requirement becomes four ground-based detectors including KAGRA \cite{Hagihara2018, Hagihara2019, Hagihara2020, PTEP}. The GW150914 data fit well with a binary black hole merger in GR \cite{LIGO2016}, though this test is inconclusive because the number of GW polarization states in GR is equal to the number of aLIGO detectors. The addition of Virgo to the aLIGO detectors for GW170814 enabled the first informative test of GW polarizations. According to that analysis, the GW data are described much better by the pure tensor modes than by pure scalar or pure vector modes \cite{LIGO2017}. A range of tests of GR for GW170817, the first observation of GWs from a binary neutron star inspiral \cite{GW170817}, was performed by aLIGO and Virgo \cite{LIGO2019}. The tests include one similar to Ref. \cite{LIGO2017}, performing a Bayesian analysis of the signal properties with the three detector outputs, using the tensor, vector, or scalar response functions, though the signal-to-noise ratio in Virgo was much lower than those in the two aLIGO detectors. The prospects for polarization tests have also been discussed (e.g. \cite{Hayama, Isi2015,Isi2017,Takeda}). GW signals are a linear combination of different polarization modes, where the coefficient of each mode is called the antenna pattern function; it depends on the polarization state as well as on the source direction \cite{ST, GT, CYC, PW, Book-Creighton, Book-Maggiore}. For the merger events observed so far, the signal is so short that the antenna pattern is effectively instantaneous. As a result, the required minimum number of detectors must equal the number of independent polarization states if we wish to separate all the possible polarization states directly. With GEO600, which has recently placed a tight constraint on a scalar dark matter \cite{GEO-2021}, LIGO also continues to search for continuous GWs from pulsars \cite{Jaranowski, LIGO-pulsar-2004}. With great efforts on substantial improvement of the detector sensitivity and the data analysis methods, LIGO has placed upper bounds not only on the TT modes of GWs from known pulsars \cite{LIGO-pulsar-2007, LIGO-pulsar-2017, LIGO-pulsar-2019, Papa2019, LIGO-pulsar-2022}, especially including the Crab pulsar \cite{LIGO-pulsar-2008}, but also on those from unknown pulsars in all-sky surveys, from which upper bounds are placed on the small ellipticity of neutron stars \cite{Papa2021}.
The present paper assumes a pulsar for which the spin period and sky position are precisely known. The antenna pattern for continuous GWs changes significantly with time owing to the Earth's rotation, whereas the GWs from the pulsar are periodic. How does the Earth's rotation affect the separability of GW polarization states from the known pulsar? The main purpose of the present paper is to show that the Earth's rotation separates out all the possible polarization states of the pulsar GWs. This paper is organized as follows. Section II briefly summarizes expressions for the antenna pattern functions and the strain outputs. Section III discusses the cyclic averaging of the antenna patterns in order to demonstrate the separability of different polarization modes. Section IV discusses future prospects and possible subdominant effects. Section V is devoted to the conclusion. \section{Antenna patterns and GW signals} In a four-dimensional spacetime, a general metric theory allows six polarizations at most \cite{Eardley}; $h_B(t)$ for the spin-0 B mode, $h_L(t)$ for the spin-0 L mode, $h_V(t)$ and $h_W(t)$ for the two spin-1 modes, $h_+(t)$ for the plus mode and $h_{\times}(t)$ for the cross mode. For a laser interferometer, the antenna pattern function for each polarization is denoted as $F^{I}(t)$, where $I = B, L, V, W, +, \times$ \cite{PW,ST}. It depends on the GW source direction $\theta$ and $\phi$ as well as on the polarization angle $\Psi$. The latitude and longitude of a GW source are functions of time $\theta(t)$ and $\phi(t)$, whereas they are effectively instantaneous for a merger or burst event. For brevity, we omit $\theta(t)$ and $\phi(t)$ in the notation. The strain output at the detector is written as \cite{Nishizawa2009, PW, Book-Creighton, Book-Maggiore} \begin{align} S(t) =& F^S(t) h_S(t) + F^V(t) h_V(t) + F^W(t) h_W (t) \notag\\ &+ F^+(t) h_+(t) + F^{\times}(t) h_{\times}(t) + n(t) \nonumber\\ =& \sum_{I = S, V, W, +, \times} F^I(t) h_I(t) + n(t) , \label{Sa} \end{align} where we define $F^S(t) \equiv F^B(t) = - F^L(t)$, we denote $h_S(t) \equiv h_B(t) - h_L(t)$, and $n(t)$ denotes the noise. In the rest of this paper, the index $I \in \{S, V, W, +, \times\}$ is denoted simply as $I$. For the LIGO-Virgo merger events, the duration is roughly $\sim 1-1000$ milliseconds ($\ll T_E$), where $T_E$ is the Earth rotation period $\sim 24$ hours. The time variation of $F^I(t)$ is then negligibly small, so the instantaneous antenna pattern can safely be used in the data analysis. The dependence on time is discussed e.g. by Takeda et al. \cite{Takeda2019}. On the other hand, the antenna pattern changes significantly with time over a day. \section{$N$-cycle Averaging for periodic GWs} We consider periodic GWs with period $T_P$ ($\sim 1-1000$ msec.) as \begin{align} h_I(t) = h_I(t+n T_P) , \label{hI} \end{align} where $n$ is an integer. It is sufficient to consider $h_I(t)$ only for $t \in [0, T_P)$ because of the periodicity. \begin{figure} \includegraphics[width=7.5cm]{fig-1.pdf} \caption{ Schematic figure for each cycle of periodic GWs. } \label{figure-period} \end{figure} For the sake of simplicity, we focus on one day as the observational duration, where the number of GW cycles in one day is $N \equiv [T_E/T_P]$ with the Gauss symbol $[ \: ]$, namely the integer part. Note that $h_I(t)$ is cyclic with period $T_P$, while $F^I(t)$ has the different period $T_E$. For the $N$ cycles, the signals can be expressed in terms of the periodic function $h_I(t)$. We divide the total $N$ cycles into single cycles $t \in [(a-1) T_P, a T_P)$, where $a = 1, 2, \cdots, N$ is an integer.
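To illustrate this division concretely, the following minimal Python sketch (an illustration with made-up sampling parameters, not part of the analysis of this paper) folds one day of a sampled toy strain into the segments $S_a(t)=S(t+(a-1)T_P)$. If the antenna pattern were constant, a plain average over the cycles would already suppress Gaussian noise by $1/\sqrt{N}$; the time-varying $F^I(t)$ instead requires the least-squares treatment developed below.
\begin{verbatim}
# A minimal sketch (illustration only) of the cycle division: one day of
# sampled strain S(t) is folded into the N segments S_a(t) = S(t+(a-1)T_P).
import numpy as np

T_E = 86400.0            # Earth rotation period [s]
T_P = 1.0                # assumed pulsar GW period [s]
fs = 128                 # samples per cycle (illustrative sampling)
N = int(T_E // T_P)      # N = [T_E/T_P], the Gauss symbol

rng = np.random.default_rng(0)
t = np.arange(N * fs) / fs                       # time samples over N cycles
S = np.sin(2 * np.pi * t / T_P) + rng.normal(0.0, 2.0, t.size)  # toy strain

S_fold = S.reshape(N, fs)                        # row a-1 holds S_a(t)

# with a constant antenna pattern, a plain cycle average already suppresses
# Gaussian noise by 1/sqrt(N):
h = np.sin(2 * np.pi * np.arange(fs) / fs)       # injected single cycle
resid = S_fold.mean(axis=0) - h
print(resid.std(), 2.0 / np.sqrt(N))             # both ~ 7e-3
\end{verbatim}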
The strain output in each cycle is \begin{align} S_{1}(t) \equiv& S(t) \notag\\ =& \sum_I F^I(t) h_I(t) + n(t) , \notag \\ S_{2}(t) \equiv& S(t+T_P) \notag\\ =& \sum_I F^I(t+T_P) h_I(t+T_P) + n(t+T_P) , \notag \\ & \cdots \notag \\ S_{N}(t) \equiv& S(t+(N-1)T_P) \notag\\ =& \sum_I F^I(t+(N-1)T_P) h_I(t+(N-1)T_P) \notag\\ & + n(t+(N-1)T_P) , \label{S} \end{align} where we denote $S_a(t) \equiv S(t + (a-1) T_P)$. To apply the least-squares method, let us define $A(t)$ by \begin{align} A(t) \equiv& \left(S_{1}(t) - \sum_I F^I_{1}(t) h_I(t)\right)^{2} \notag\\ &+ \cdots + \left(S_{N}(t) - \sum_I F^I_{N}(t) h_I(t)\right)^{2} \notag \\ =& \sum_{a=1}^{N} \Bigl(S_{a}(t) - \sum_I F^I_a(t) h_I(t)\Bigr)^{2} , \label{A} \end{align} where Eqs. (\ref{hI}) and (\ref{S}) are used and $F^I_a(t) \equiv F^I(t + (a-1) T_P)$. In the rest of the paper, the $N$-cycle sum $\sum\limits_{a=1}^N$ is abbreviated as $\sum\limits_a$. In the least-squares method, the best-fit $h_I(t)$ must satisfy the five equations $\partial A(t)/\partial h_I(t) = 0$, one for each $I$. The coupled equations for $h_I(t)$ are rearranged in a vectorial form as \begin{align} M(t) \vec{H}(t) = \vec{L}(t) , \label{eq-vec} \end{align} where we define \begin{widetext} \begin{align} \vec{H}(t) &\equiv \begin{pmatrix} h_{+}(t) \\ h_{\times}(t) \\ h_V(t) \\ h_W(t) \\ h_S(t) \end{pmatrix} , \label{H} \\ \vec{L}(t) &\equiv \begin{pmatrix} \sum\limits_a F^{+}_{a}(t) S_{a}(t) \\ \sum\limits_a F^{\times}_{a}(t) S_{a}(t) \\ \sum\limits_a F^{V}_{a}(t) S_{a}(t) \\ \sum\limits_a F^{W}_{a}(t) S_{a}(t) \\ \sum\limits_a F^{S}_{a}(t) S_{a}(t) \end{pmatrix} , \label{L} \\ M(t) &\equiv \begin{pmatrix} \sum\limits_a [F^{+}_{a}(t)]^{2} & \sum\limits_a F^{+}_{a}(t) F^{\times}_{a}(t) & \sum\limits_a F^{+}_{a}(t) F^{V}_{a}(t) & \sum\limits_a F^{+}_{a}(t) F^{W}_{a}(t) & \sum\limits_a F^{+}_{a}(t) F^{S}_{a}(t) \\ \sum\limits_a F^{\times}_{a}(t) F^{+}_{a}(t) & \sum\limits_a [F^{\times}_{a}(t)]^{2} & \sum\limits_a F^{\times}_{a}(t) F^{V}_{a}(t) & \sum\limits_a F^{\times}_{a}(t) F^{W}_{a}(t) & \sum\limits_a F^{\times}_{a}(t) F^{S}_{a}(t) \\ \sum\limits_a F^{V}_{a}(t) F^{+}_{a}(t) & \sum\limits_a F^{V}_{a}(t) F^{\times}_{a}(t) & \sum\limits_a [F^{V}_{a}(t)]^{2} & \sum\limits_a F^{V}_{a}(t) F^{W}_{a}(t) & \sum\limits_a F^{V}_{a}(t) F^{S}_{a}(t) \\ \sum\limits_a F^{W}_{a}(t) F^{+}_{a}(t) & \sum\limits_a F^{W}_{a}(t) F^{\times}_{a}(t) & \sum\limits_a F^{W}_{a}(t) F^{V}_{a}(t) & \sum\limits_a [F^{W}_{a}(t)]^{2} & \sum\limits_a F^{W}_{a}(t) F^{S}_{a}(t) \\ \sum\limits_a F^{S}_{a}(t) F^{+}_{a}(t) & \sum\limits_a F^{S}_{a}(t) F^{\times}_{a}(t) & \sum\limits_a F^{S}_{a}(t) F^{V}_{a}(t) & \sum\limits_a F^{S}_{a}(t) F^{W}_{a}(t) & \sum\limits_a [F^{S}_{a}(t)]^{2} \end{pmatrix} . \label{M} \end{align} \end{widetext} The solution for $h_I(t)$ is thus \begin{align} \vec{H}(t) = M^{-1}(t) \vec{L}(t) , \label{sol} \end{align} where $M^{-1}(t)$ is the inverse matrix of $M(t)$. We refer to $M(t)/N$ as the cyclically averaged antenna matrix (CAAM), because the operation $\frac1N \sum_a$ is an average over the $N$ cycles. One may ask whether $M(t)/N$ corresponds to a covariance matrix. This is not the case, because the cycle average of $F^I(t)$, $\frac1N \sum_a F^I_a(t)$, does not vanish. The formal solution, Eq. (\ref{sol}) with Eqs. (\ref{H})-(\ref{M}), clearly shows the existence and uniqueness of the solution of this inverse problem, provided that $M(t)$ is invertible (the degenerate case $\det M = 0$ is discussed later).
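As a concrete illustration of Eqs. (\ref{eq-vec})-(\ref{sol}), the following minimal Python sketch (our illustration; the antenna patterns here are toy stand-ins for the true sidereal-time expressions $F^I(t)$) injects five waveforms, builds $M(t)$ and $\vec{L}(t)$ by summing over the cycles, and solves the linear system per time sample. Note that the sketch solves the system directly instead of forming $M^{-1}(t)$, which is also the practical choice discussed below.
\begin{verbatim}
# A minimal sketch (not the authors' code) of the CAAM reconstruction,
# Eqs. (eq-vec)-(sol), with toy antenna patterns standing in for the true
# sidereal-time expressions F^I(t).
import numpy as np

modes = ["+", "x", "V", "W", "S"]
N, fs = 8640, 64                  # cycles and samples per cycle (reduced toy)
T_P = 1.0                         # GW period; the "day" here is N*T_P
t = np.arange(fs) / fs * T_P      # time within one cycle
rng = np.random.default_rng(1)

# toy antenna patterns: one slow harmonic of the daily phase per mode
abs_t = t[None, None, :] + np.arange(N)[None, :, None] * T_P
k = np.arange(1, 6)[:, None, None]
F = np.cos(2 * np.pi * k * abs_t / (N * T_P) + k)   # shape (5, N, fs)

# injected waveforms h_I(t) and the strain S_a(t) with Gaussian noise
h_true = np.array([np.sin(2*np.pi*t/T_P), np.cos(2*np.pi*t/T_P),
                   0.1*np.sin(4*np.pi*t/T_P), 0.1*np.cos(4*np.pi*t/T_P),
                   0.1*np.sin(6*np.pi*t/T_P)])
S = np.einsum("mas,ms->as", F, h_true) + rng.normal(0.0, 2.0, (N, fs))

# build M(t) and L(t) by summing over the N cycles, then solve M H = L
M = np.einsum("mas,nas->smn", F, F)                 # Eq. (M), per sample
L = np.einsum("mas,as->sm", F, S)                   # Eq. (L), per sample
H = np.linalg.solve(M, L[..., None])[..., 0].T      # Eq. (sol)

for name, h, hr in zip(modes, h_true, H):
    print(name, float(np.abs(h - hr).max()))        # errors shrink ~ 1/sqrt(N)
\end{verbatim}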
In practical calculations, however, we do not need to obtain $M^{-1}(t)$ explicitly, since numerically inverting a matrix is rather time-consuming; it is sufficient, and even more convenient, to solve the linear system Eq. (\ref{eq-vec}) directly by a more sophisticated algorithm. It is an open problem whether or not the solution is globally unique in an inverse problem even for realistic nonlinear noise. In addition, known pulsars show secular changes in their spin period, for which $dT_P/dt$ and $d^2 T_P/dt^2$ should be taken into account. A generalization based on such realistic pulse modeling in pulsar astronomy \cite{Stairs} is interesting. These issues are beyond the scope of the present paper. \begin{figure*} \includegraphics[width=8.5cm]{fig-2a.pdf} \includegraphics[width=8.5cm]{fig-2b.pdf} \includegraphics[width=8.5cm]{fig-2c.pdf} \includegraphics[width=8.5cm]{fig-2d.pdf} \caption{ Polarization separation: from $S(t)$ to $h_I(t)$ by Eq. (\ref{sol}). The unit of the vertical axis is arbitrary. Top left: $N = 21600$, Top right: $N = 43200$, Bottom left: $N = 64800$, Bottom right: $N = 86400$, corresponding to $\sim 6, 12, 18, 24$ hours, respectively. The LIGO-Hanford detector and the sky location of the Crab pulsar are assumed, where the GW waveforms follow sine functions, indicated by solid lines. For illustrative exaggeration, the GW amplitude of the extra polarizations (S, V, W) is chosen as 0.1. The noise $n(t)$ obeys a Gaussian distribution with a standard deviation of 2. For $N = 21600$ cycles ($\sim 6$ hours), the TT modes are well reconstructed, whereas the S, V and W modes are hardly distinguishable from the noise. As $N$ increases, the noise is effectively reduced as $n_{eff}(t) \propto 1/\sqrt{N}$. As a result, the S, V and W modes are well separated out for $N=86400$, for instance. } \label{figure-h} \end{figure*} Figure \ref{figure-h} shows numerical results of separating out the GW polarization states, where a one-day observation and a pulsar GW period of 1000 milliseconds are assumed, corresponding to $N \sim 8.6 \times 10^4$; this relatively long period is chosen because of the limited computational resources. For illustrative exaggeration, this figure assumes that the amplitudes of the plus and cross modes are equal to each other, that the standard deviation of the noise $\bar n$ is twice the TT amplitude, and that the amplitude of the extra polarizations is one tenth of the TT amplitude. The amplitude of $h_+(t)$ and $h_{\times}(t)$ is denoted simply as $h_{TT}$. In the same way, we denote the amplitudes of the $S$, $V$ and $W$ modes as $h_S, h_V, h_W$, respectively. In Figure \ref{figure-h}, $h_S = h_V = h_W = h_{TT}/10$ and $\bar{n} =2 h_{TT}$ are chosen. For $N$ cycles, the noise contribution $n(t)$ is effectively reduced to $n_{eff}(t) \equiv \frac1N \sum_a n_a(t) \sim \bar{n}/\sqrt{N}$ when the noise obeys a Gaussian distribution and $N$ is large, where we denote $n_a(t) \equiv n(t + (a-1)T_P)$. Namely, $n_{eff}(t)$ decreases as $N^{-1/2}$ with increasing $N$. In Figure \ref{figure-h}, the typical size of $n_{eff}(t)$ is roughly $\sim n(t)/140, n(t)/210, n(t)/250, n(t)/290$, respectively, for $N \sim 2.1 \times 10^4, 4.3 \times 10^4, 6.5 \times 10^4, 8.6 \times 10^4$. This is consistent with Figure \ref{figure-h}. \section{Future prospects and possible subdominant effects} In this section, we briefly discuss subdominant effects on the current method and result. First, the expected continuous GW signal is much smaller than the current detector noise. Namely, $\bar{n} \gg h_{TT}$.
A large $N$ is thus required. For three months ($\sim 100$ days) and for twelve years, for example, the effective noise $n_{eff}(t)$ becomes $n(t)/3000$ and $n(t)/20000$, respectively. From a twelve-year observation, the upper bound on the extra polarizations can be placed at the $O(10^{-4})$ level of $\bar{n}$, though this is optimistic, as mentioned in the next paragraph. The third-generation detectors such as the Cosmic Explorer (CE) and the Einstein Telescope (ET) aim at a sensitivity of $\sim 2-8 \times10^{-25}$ in the range of 100-500 Hz according to their white papers \cite{CE, ET}. For a twelve-year observation of a pulsar with $T_P \sim 10$ milliseconds (corresponding to $\sim 10^2$ Hz), $N$ is $\sim 4 \times 10^{10}$. CE and ET will thus be able to put a stringent upper bound $\sim (2 \times 10^{-25})/(2 \times10^5) = 10^{-30}$ on the extra polarization amplitudes. This means that, if the TT modes are detected at the level $\sim 10^{-26}$, the upper bound on the extra modes is lower by four orders of magnitude or more. For a millisecond pulsar, $N$ becomes ten times larger, and thereby the upper bound is tighter by a factor of $\sim 3$ ($\sqrt{10}$). In any case, this test is robust, because waveform templates are not used. This possible bound will be much stronger than the existing indirect test by the orbital decay observation of the binary pulsar B1913+16, where the radiation energy in extra polarizations has been limited to less than $\sim 0.1\%$ \cite{Will, Weisberg2016}. Very recently, Kramer et al. have reported that the double pulsar PSR J0737-3039A/B validates the prediction of GR more precisely, at the level of $\sim 1 \times 10^{-4}$ \cite{Kramer2021}. A second comment is related to the first one. For a very long observation such as three months or twelve years, the simple periodic model in this paper is not sufficient \cite{Stairs}. In addition to the Earth's rotation, we have to take account of the orbital motion of the Earth as well as geophysical disturbances. These subdominant effects do not affect $h_I(t)$ but modify the time dependence of $F^I(t)$. Hence, the existence and uniqueness following from Eq. (\ref{sol}) still hold, where $M(t)$ is calculated from the accordingly modified $F^I(t)$. On the other hand, we may need to take account of the modulation in the pulsar spin period \cite{Jaranowski}, which affects both the amplitude and the period of the GWs. Hence, some modification of Eq. (\ref{sol}) is needed. Up to this point, we have assumed $\det M \neq 0$, where $\det$ denotes the determinant of a matrix. What is meant by $\det M = 0$? The detector cannot distinguish the GW polarizations from a pulsar that satisfies $\det M = 0$, whose locus describes a curve in the sky, because the CAAM is degenerate there. Finally, we mention the propagation speed of the extra GW modes \cite{LIGO2019}. A possible arrival time difference between the TT and extra modes does not change the current discussion, because only the GW period matters; a constant time translation does not affect the $N$-cycle averaging. \section{Conclusion} We considered the daily variation of the antenna patterns of a ground-based GW detector due to the Earth's rotation. By defining the CAAM for continuous GWs from a known pulsar, we showed that the different polarization states can be separated out from a given set of strain outputs at a single detector. With the planned third-generation GW detectors such as the Cosmic Explorer and the Einstein Telescope, a stringent constraint of $\sim 10^{-30}$ can be placed on the extra GW polarization amplitudes at $\sim100$ Hz.
Further detailed simulations are needed if we wish to take account of possible subdominant effects, including the Earth's orbital motion around the barycenter, geophysical disturbances, and secular changes in the pulsar period. These are left for future work. \begin{acknowledgments} We would like to thank Atsushi Nishizawa for fruitful conversations. We wish to thank Yousuke Itoh, Nobuyuki Kanda, Hideyuki Tagoshi and Seiji Kawamura for stimulating discussions. We thank Yuuiti Sendouda, Ryuichi Takahashi, Naoya Era, Yuki Hagihara, Daisuke Iikawa, Naohiro Takeda, Ryuya Kudo, Ryousuke Kubo, and Shou Yamahira for useful conversations. This work was supported in part by Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research, No. 20K03963 (H.A.), and in part by Ministry of Education, Culture, Sports, Science, and Technology, No. 17H06359 (H.A.). \end{acknowledgments}
{ "timestamp": "2022-02-02T02:08:31", "yymm": "2202", "arxiv_id": "2202.00171", "language": "en", "url": "https://arxiv.org/abs/2202.00171" }
\tableofcontents \section{Introduction} \label{sec:introduction} A photon sphere, the sphere composed of circular photon orbits, is known to play a key role in observations of a black hole shadow, for example, in the Schwarzschild spacetime. Concerning black hole shadow observations, there are two important aspects. First, the photon sphere radius is a threshold for photons coming from distant light sources to escape to infinity or fall into the black hole. In the observer's sight, there exists a region from which photons cannot come in principle. This dark region is called a black hole shadow. Second, the photon sphere accumulates photons near its radius. If light sources emit photons for a long time, an enormous number of photons orbit around the sphere and eventually escape to infinity. The observer then observes a very bright shadow edge corresponding to the photon sphere, as actually observed by the Event Horizon Telescope~\cite{eht}. \par In static and spherically symmetric cases, we can analyze the escaping photon behaviors, and what we will observe around a black hole, by use of the conserved quantities, the energy and the angular momentum. In the Schwarzschild case, the first analysis was given by Synge~\cite{synge}. Pande and Durgapal gave the analysis for a generic spherically symmetric configuration~\cite{Pande_1986}. From these analyses, we can see that the radius of circular photon orbits, namely the photon sphere, is important for black hole shadow formation. \par In dynamical cases, it is challenging to define a photon sphere as a structure that shapes a black hole shadow even in spherical symmetry. This is because there are not so many exact solutions to the Einstein equation that are physically reasonable, and the geodesic equation does not reduce to a one-dimensional radial potential problem owing to the absence of a conserved energy. Although several generalized notions of a photon sphere have been proposed from various points of view~\cite{claudel,siino_2019,siino_2021,yoshino_tts,yoshino_dtts,kobialko_2020}, not so many examples in dynamical cases are known yet. The aim of this paper is to specify dynamical photon spheres that shape black hole shadows in specific cases. \par One good model for this problem is the Vaidya spacetime, an exact solution to the Einstein equation with accreting null dust~\cite{Vaidya_1951}. The spacetime metric looks like the Schwarzschild spacetime in Eddington-Finkelstein coordinates $(v,r,\theta,\phi)$ with the mass $M$ replaced by an arbitrary mass function $m(v)$. We can model an accreting black hole and gravitational collapse by specifying the mass function appropriately. As a preceding work, Mishra, Chakraborty, and Sarkar~\cite{Mishra_2019} investigated photon spheres of the Vaidya spacetime with several mass functions and showed their evolution in the future characteristic time regions. Solanki and Perlick~\cite{Solanki_2022} investigated the Vaidya spacetime by assuming a linearly increasing mass function over the entire time region and specified the photon sphere analytically by using the self-similarity of the spacetime. See also Ref.~\cite{Sarkar_2021} for works on variations of the Vaidya spacetime. \par In this paper, we investigate photon spheres and black hole shadows of the Vaidya spacetime, focusing on the first of the two aspects mentioned above.
That is, we suppose that black hole shadows are formed not due to the redshift of photons, but due to the absence of null geodesics that emanate from a distant light source and reach the corresponding points on the celestial sky of a distant observer. We assume the mass function to be exactly constant in the past and future, so that the global structure becomes as simple as that of the Schwarzschild spacetime, and we define the photon sphere from the causal point of view. As our main focus, we show the evolution of the photon sphere for the entire time region and clarify what the appropriate boundary condition is. Specifying the photon sphere as analytically as possible, we also discuss the relation between our photon sphere and several generalized notions of a photon sphere. \par In Sec.~\ref{sec:review}, reviewing the photon sphere and the black hole shadow of the Schwarzschild spacetime, we clarify what is the photon sphere shaping a black hole shadow in the Vaidya spacetime. In the current work, we focus our attention on eternal black holes that are static in the past and future time domains. We specify the photon spheres and the behaviors of the null geodesic motions corresponding to the edge of the black hole shadows in the following three cases. In Sec.~\ref{sec:case1}, we consider the case where the black hole increases its mass moderately and numerically show the photon sphere shaping the black hole shadow. The result briefly shows the existence of a photon sphere relevant to a black hole shadow in a dynamical case. In Sec.~\ref{sec:analytical}, we consider a linearly increasing mass in the dynamical time domain. The photon sphere is explicitly specified in terms of the parameters of the spacetime, such as the initial and final masses. In Sec.~\ref{sec:analytical-shell}, null dust shell accretion is considered. In Sec.~\ref{sec:discussion}, we discuss the relation between our photon sphere and recently proposed generalized notions of a photon sphere. In Sec.~\ref{sec:summary}, we summarize our results. We use units in which $G=1$ and $c=1$. \afterpage{\clearpage} \newpage \section{Review and Preliminary} \label{sec:review} Reviewing the photon sphere and the black hole shadow in the Schwarzschild spacetime, we clarify what we investigate as a photon sphere and a black hole shadow in the Vaidya spacetime. \subsection{Schwarzschild photon sphere and the black hole shadow} The spacetime is given by the metric, \begin{equation} g=-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2+r^2d\Omega_2^2. \end{equation} A null geodesic obeys the one-dimensional potential problem, \begin{equation} \dot{r}^2+V(E,L;r)=0,\;\; V(E,L;r):=L^2\left(1-\frac{2M}{r}\right)r^{-2}-E^2, \end{equation} where $\dot{}:=d/d\lambda$ is the derivative with respect to the affine parameter $\lambda$, $E:=-g(\partial_t,k)$ and $L:=g(\partial_\phi,k)$ are the energy and the angular momentum, and $k$ is the null geodesic tangent. We have assumed that the null geodesics are in the equatorial plane $\theta=\pi/2$, without loss of generality. Normalizing the affine parameter as $\lambda\to\lambda/E$, the equation reduces to \begin{equation} \dot{r}^2+V(b;r)=0,\;\; V(b;r):=V(1,b;r), \label{eq:effpotential2} \end{equation} where $b:=L/E$ is the impact parameter. Null geodesics are drawn as horizontal lines in the $r$-$b$ plane (Fig.~\ref{fig:schwarzschild-potential}). They are reflected by the effective potential when they touch the boundary of the forbidden region, $V(b;r)>0$. The extremum of $V(b;r)$ is at $r=3M$.
This is the Schwarzschild photon sphere. \begin{figure}[h] \centering \includegraphics[width=200pt]{figures/bVplot_axes.pdf} \caption{ \label{fig:schwarzschild-potential} The $r$-$b$ plane for null geodesics in the Schwarzschild spacetime. The region of $V(b;r)>0$ (shaded region) is the forbidden region. Null geodesics with $b>b_\text{c}$ coming from infinity are eventually reflected by the potential at the radii corresponding to $V(b;r)=0$. } \end{figure} \par Suppose an observer looking toward the black hole at $r_{\mr{obs}}\gg2M$ and a spherical light source at $r_{\mr{src}}>r_{\mr{obs}}$. There is the critical impact parameter, $b_\text{c}=3\sqrt{3}M \simeq 5.196 M$. For $|b|>b_\text{c}$, null geodesics emanating from the source are reflected by the potential and eventually reach $r_{\mr{obs}}$. For $|b|<b_\text{c}$, null geodesics from the source fall into the black hole. For the null geodesics reaching $r_{\mr{obs}}$, the incident angle $\alpha$ to the observer is given by \begin{equation} k=\beta(e_0+\cos\alpha e_1+\sin\alpha e_2), \end{equation} where $\beta$ is a constant and the tetrad $\{e_\mu\}$ for the equatorial plane is given by \begin{equation} e_0=\left(1-\frac{2M}{r}\right)^{-1/2}\partial_t,\;\; e_1=\left(1-\frac{2M}{r}\right)^{1/2}\partial_r,\;\; e_2=r^{-1}\partial_\phi. \end{equation} Using the impact parameter, we have \begin{equation} \tan\alpha = \frac{g(k,e_2)}{g(k,e_1)} =\frac{b\sqrt{\left(1-2M/r_{\mr{obs}}\right)r_{\mr{obs}}^{-2}}}{\sqrt{1-b^2\left(1-2M/r_{\mr{obs}}\right)r_{\mr{obs}}^{-2}}} \end{equation} or, for large $r_{\mr{obs}}$, \begin{equation} \label{eq:shadow-size} \alpha\simeq\frac{b}{r_{\mr{obs}}}. \end{equation} Since $|b|>b_\text{c}$ for null geodesics reaching the observer, the smallest impact parameter $b_\text{c}$ determines the apparent angular size of the dark region, i.e., the black hole shadow, as $\alpha_{\mr{sh}}=b_\text{c}/r_{\mr{obs}}$.\footnote{ If we take $\{X,Y\}$ as Cartesian coordinates of the observer's celestial sky with the origin corresponding to the line of sight to the black hole, the incident angle corresponds to the radius, $\sqrt{X^2+Y^2}=|\alpha|$. The rescaling, $X\to r_{\mr{obs}}X, Y\to r_{\mr{obs}}Y$, gives the relation, $\sqrt{X^2+Y^2}=|b|$. In this sense, the shadow size is often said to be $b_\text{c}$.} Furthermore, since the near-critical null geodesics with $|b|=b_\text{c}+0$ are orbits asymptoting to $r=3M$, the photon sphere is said to shape the black hole shadow. Note that the formula, Eq.~\eqref{eq:shadow-size}, is also valid for other asymptotically flat spacetimes. Fig.~\ref{fig:Shadowimage_Sch} shows the image of the black hole shadow for a distant observer surrounded by a spherical light source. Even in dynamical black hole spacetimes, the impact parameter of the marginally escaping null geodesics determines the shadow size. \begin{figure}[t] \includegraphics[width=100pt]{figures/Shadowimage_Sch.pdf} \caption{Image of the black hole shadow in the Schwarzschild spacetime. The bright and dark regions in the observer's sight are shown as the orange and black regions, respectively. The distance from the center corresponds to the impact parameter, and the shadow edge (red dashed line) corresponds to the critical impact parameter. } \label{fig:Shadowimage_Sch} \end{figure} We consider a null geodesic which is emitted from a distant point, reflected at the turning point whose radius is $r_{\rm min}$, and reaches a distant observer.
From $L = r^2 d\phi/d\lambda$ and Eq.~\eqref{eq:effpotential2}, we obtain \begin{align} r^2 \frac{d\phi}{dr} = \pm \left(b^{-2} - r^{-2}\left(1-\frac{2M}{r}\right)\right)^{-1/2}. \end{align} Integrating this equation, we can define the winding number as $n = \Delta \phi/(2\pi)$, where $\Delta \phi$ is \begin{align} \Delta \phi = 2 \int_0^{1/r_{\rm min}} \frac{du}{\sqrt{b^{-2} - u^2(1- 2 M u)}} -\pi, \end{align} and $u := 1/r$. We can regard $\Delta \phi$ as a function of the impact parameter $b$. In Fig.~\ref{fig:deltaphi_Sch}, the winding number is plotted; it diverges at $b = b_{\text{c}}$. It is known that this divergent behavior is logarithmic, $n \sim -\ln(b-b_\text{c})$~\cite{1959RSPSA.249..180D, Luminet:1979nyg, Bozza:2002zj}. \begin{figure}[t] \includegraphics[width=200pt]{figures/b-phi_graph_Sch_line_rename_axes_revision.pdf} \caption{Winding number of null geodesics in the Schwarzschild spacetime as a function of the impact parameter $b$. The red dashed line denotes the critical impact parameter $b = b_\text{c} = 3 \sqrt{3}M \simeq 5.196 M$. We set $M = 1$ in the numerical calculation. } \label{fig:deltaphi_Sch} \end{figure} \subsection{Schwarzschild photon sphere from the causal point of view} From the causal point of view, the Schwarzschild photon sphere can be characterized as a hypersurface generated by null geodesics from $i^-$ to $i^+$. This fact is important for the photon sphere to be a structure that shapes a black hole shadow for the following reason. \par As a simple setup for a black hole shadow observation, one may suppose a distant observer looking toward the black hole at $r=r_{\mr{obs}}\gg2M$ and a distant light source filling a sphere of $r=r_{\mr{src}}>r_{\mr{obs}}$~\cite{Cunha:2018acu, Gralla_2019, Okabayashi_2020}. They are described as timelike curves from $i^-$ to $i^+$ in the Penrose diagram, as in Fig.~\ref{fig:shadow-setup-gralla}. What the observer observes is understood from the behavior of the past-directed null geodesics from each point of the observer's world line. That is, the orbits of observed photons correspond to the null geodesics connecting the observer's and the source's world lines, and the mapping of their impact parameters onto the observer's celestial sky gives the shadow image. Note that photons are supposed to be observed if they enter the observer's sight from the front, i.e., photons are outgoing when observed. \begin{figure}[t] \includegraphics[width=200pt]{figures/shadow_setup_gralla.pdf} \caption{\label{fig:shadow-setup-gralla} The photon sphere (PS) and null geodesics (dashed lines) from a distant spherical light source to a distant observer (solid lines from $i^-$ to $i^+$) in the Schwarzschild spacetime. } \end{figure} \par Here we further idealize the situation from the causal point of view by taking the limit $r_{\mr{src}},\;r_{\mr{obs}}\to \infty$. As we are concerned with null geodesics from the light source to the observer, the observer is supposed to be in the future of the light source, and therefore, we identify $\mathscr{I}^+$ and $\mathscr{I}^-$ with the idealized observer and source, respectively. Note that, from the assumption that photons are outgoing when observed, we can ignore the case where the light source is on $\mathscr{I}^+$. The past-directed null geodesics from $\mathscr{I}^+$ are classified into two types: those going to $\mathscr{I}^-$ and those going to $\mathscr{H}^-$.
\footnote{ Strictly speaking, there is only one null geodesic going to $i^-$ for each point on $\mathscr{I}^+$, ignoring those that can be identified by the rotations arising from the spherical symmetry of the spacetime. Such a null geodesic has exactly the critical impact parameter $b=b_\text{c}$. } The photon sphere then works as the boundary of the set of null geodesics connecting $\mathscr{I}^+$ and $\mathscr{I}^-$ (Fig.~\ref{fig:shadow-setup-causal}). In particular, the null geodesics corresponding to the shadow edge (i.e., those with $b=b_\text{c}+0$ in terms of the impact parameter) are the orbits that asymptote to the photon sphere. Therefore, the causal feature of the photon sphere is indeed important for black hole shadow formation. \begin{figure}[h] \includegraphics[width=200pt]{figures/shadow_setup_causal.pdf} \caption{\label{fig:shadow-setup-causal} The photon sphere (PS) and null geodesics (dashed lines) from the idealized light source, $\mathscr{I}^-$, to the idealized observer $\mathscr{I}^+$. } \end{figure} \subsection{Photon sphere of a dynamical eternal black hole} Even in a dynamical spacetime, the above causal argument for the photon sphere would hold if the causal structure is the same and the geometrical structure is not so different. In this paper, we consider the Vaidya spacetime~\cite{Vaidya_1951}, \begin{equation} g=-f(v,r)dv^2+2dvdr+r^2d\Omega^2,\;\; f(v,r)=1-\frac{2m(v)}{r}, \label{eq:vaidyametric} \end{equation} where the mass function $m(v)$ is arbitrary. The Vaidya spacetime is the spherically symmetric black hole solution to the Einstein equation with null dust accretion. By setting the mass function appropriately, we consider dynamical and eternal black hole cases. We assume that the mass function $m(v)$ is initially constant ($m(v)=M_1$ for $v\le v_1$), increases monotonically for $v_1<v\le v_2$, and is finally constant ($m(v)=M_2$ for $v>v_2$). This configuration guarantees the null energy condition and avoids the appearance of a naked singularity and the absence of future null infinity~\cite{Hiscock_1982,Kuroda_1984}. Outside the horizons $\mathscr{H}_\pm$, the spacetime has the same causal structure as the Schwarzschild spacetime, as shown in Fig.~\ref{fig:eternal-vaidya}. \begin{figure}[h] \includegraphics[width=400pt]{figures/eternal_vaidya.pdf} \caption{\label{fig:eternal-vaidya} The causal structure of the Vaidya spacetime with temporal accretion for $v_1<v\le v_2$. The apparent horizon (AH) is given by $r=2m(v)$~\cite{nielsen} and does not coincide with the future event horizon $\mathscr{H}_+$ for $v\le v_2$. } \end{figure} \par We investigate the photon sphere of the dynamical black hole spacetime, especially focusing on its property as a structure that shapes the black hole shadow. We define the dynamical photon sphere as a hypersurface generated by null geodesics from $i^-$ to $i^+$. Trivial examples of such hypersurfaces are also shown in Appendix~\ref{sec:conf-sch}; however, we do not investigate them in detail because their causal structures are out of our scope. Concerning the black hole shadow, we define {\it shadow edge orbits} as follows. For each point on $\mathscr{I}^+$, consider the set of all the past-directed null geodesics emanating from the point and going to $\mathscr{I}^-$. Among this set, we call the null geodesic having the smallest impact parameter at the point on $\mathscr{I}^+$ {\it a shadow edge orbit}. \footnote{ This definition relies on the spherical symmetry and asymptotic flatness.
If the spacetime asymptotes to the Kerr metric in the far region, for example, we should define the shadow edge orbit in terms of the impact parameter associated with the rotational Killing vector and the Carter constant associated with the Killing tensor. } If every shadow edge orbit asymptotes to the photon sphere, we can say that the photon sphere is the structure shaping the black hole shadow. Note that this behavior of shadow edge orbits around the photon sphere implies that the photon sphere corresponds to {\it an unstable photon sphere} of a static case rather than {\it a stable photon sphere}, or {\it an anti-photon sphere}. Although a stable photon sphere may also go from $i^-$ to $i^+$~\cite{gibbons_2016}, we focus on the dynamical photon sphere corresponding to an unstable photon sphere. \footnote{Since the spacetimes of black holes formed by gravitational collapse have a quite different causal structure, it is not trivial whether our considerations and expectations for the dynamical photon spheres and shadow edge orbits remain valid.} When the spacetime is dynamical, it is generally difficult to treat the radial geodesic equation analytically, and it is often solved numerically. Hence, in a dynamical spacetime, finding the geodesic that goes to timelike infinity is a difficult problem in general. However, in our case, since the spacetime in the past and future is the Schwarzschild spacetime, we just need to find a geodesic that has the critical impact parameter $b=b_{\mr{c}1}:=3\sqrt{3}M_1$ at $v=v_1$ and $b=b_{\mr{c}2}:=3\sqrt{3}M_2$ at $v=v_2$. Hence, we can find the geodesic from $i^-$ to $i^+$, i.e., the dynamical photon sphere, with arbitrary accuracy by using the shooting method in the dynamical region. By contrast, the shadow edge orbits are obtained as follows. We numerically solve the null geodesic equation in the past direction from each time $v$ and some large $r$ for various impact parameters. The solutions going to $r\to \infty$ for $v\to -\infty$ are the photons observed by the observer, and among them, the one that has the smallest impact parameter is the shadow edge orbit for each $v$. Then we see that the shadow edge orbits asymptote to the photon sphere. \afterpage{\clearpage} \newpage \section{Numerical investigation of Vaidya photon sphere} \label{sec:case1} First we give an example of the photon sphere and the shadow edge orbits by solving the geodesic equation numerically. We consider the Vaidya spacetime, Eq.~\eqref{eq:vaidyametric}, with the mass function, \begin{equation} \label{eq:massfunction-cos} m(v)=\left\{ \begin{array}{cc} M_1 & v\le v_1\\ M_1+\dfrac{1}{2}(M_2-M_1)\left(1-\cos\left(\dfrac{v-v_1}{v_2-v_1}\pi\right)\right) & v_1<v\le v_2\\ M_2 & v>v_2 \end{array} \right. . \end{equation} The spacetime is isometric to the Schwarzschild spacetime with mass $M_1$ in the past time domain $v\le v_1$ and with mass $M_2>M_1$ in the future time domain $v> v_2$. In the intermediate dynamical domain, the mass increases monotonically. The mass function increases moderately and is of class $C^1$, as shown in Fig.~\ref{fig:mass-cos}. \begin{figure}[h] \includegraphics[width=200pt]{figures/mcosplot_axes.pdf} \caption{\label{fig:mass-cos} The mass function~\eqref{eq:massfunction-cos} increases monotonically from $M_1$ to $M_2$. } \end{figure} The photon sphere generator is a null geodesic with finite radius in the far past and future. In the current case, this implies that the null geodesic must asymptote to or coincide with $r=3M_1$ in the past and $r=3M_2$ in the future.
This is the boundary condition for the photon sphere generator. We calculated the photon sphere generator satisfying the boundary condition by numerical integration of the null geodesic equation. We found that (i) the generator is uniquely determined, (ii) it asymptotes to $r=3M_1$ from outside in the past direction, and (iii) it asymptotes to $r=3M_2$ from inside in the future direction. The generator is shown in Fig.~\ref{fig:psgenerator-numerical} in the $r$-$v$ plane, together with shadow edge orbits that closely approach the generator. Although the Vaidya spacetime with the mass function~\eqref{eq:massfunction-cos} is locally isometric to Schwarzschild spacetimes in the past and future, the photon sphere does not coincide with those of the Schwarzschild spacetimes. Even where the spacetime is locally static, the photon sphere is not static there. The shadow edge orbits are obtained by integrating the null geodesic equation in the past direction from each point of the observer's world line at $r=300$. Each orbit determines the apparent shadow size at each time $v$. Note that, for an observer at constant radius, the time interval of the ingoing null coordinate $v$ is the same as that of the outgoing coordinate $u$, which is called ``an observer's time" for an observer on $\mathscr{I}^+$. Fig.~\ref{fig:psgenerator-numerical-logplot} shows the shadow edge orbit radius minus the photon sphere generator radius. We can see that the shadow edge orbits asymptote to the photon sphere generator exponentially in time. The time evolution of the shadow image is shown in Fig.~\ref{fig:shadowimage-numerical}. The shadow radius increases in time, and the image is very close to that for the Schwarzschild spacetime with $M=M_1$ at early times and $M=M_2$ at late times. Fig.~\ref{fig:vo-bo_graph-cos} shows the corresponding time evolution of the shadow edge. \begin{figure}[h] \includegraphics[width=200pt]{figures/dynamicalphotonspere_v2_sin_M1-3_v-0-100_axes.pdf} \caption{\label{fig:psgenerator-numerical} The orbits of the photon sphere generator (red line) and the shadow edge orbits that once approach the generator (blue lines) in the Vaidya spacetime with the mass function~\eqref{eq:massfunction-cos}. We set the parameters as $M_1=1$, $M_2=3$, $v_1=0$, and $v_2=100$. The photon sphere generator asymptotes to $r=3M_1+0$ in the past and $r=3M_2-0$ in the future. } \end{figure} \begin{figure}[h] \includegraphics[width=180pt]{figures/shadowedge_logplot_cos_frame.pdf} \caption{\label{fig:psgenerator-numerical-logplot} The shadow edge orbit radius minus the photon sphere generator radius. We took the same parameters as in Fig.~\ref{fig:psgenerator-numerical}. This shows that the shadow edge orbits asymptote to the photon sphere generator exponentially in time. } \end{figure} \begin{figure}[h] \includegraphics[width=280pt]{figures/Shadowimage_cos_revision.pdf} \caption{\label{fig:shadowimage-numerical} Image of the black hole shadow observed at $r=300$ for $v=500, 550, 600, 650, 700, 750, 800, 850,$ and $900$ in the Vaidya spacetime with the mass function~\eqref{eq:massfunction-cos}. The parameters are the same as those in Fig.~\ref{fig:psgenerator-numerical}. The distance from the center corresponds to the impact parameter observed at $r=300$, and the red dashed lines are $b=3 \sqrt{3} M_1$ and $b=3 \sqrt{3}M_2$ for the inner and the outer, respectively.
} \end{figure} \begin{figure}[h] \includegraphics[width=200pt]{figures/vo-bographic_cos_line_static_axes.pdf} \caption{\label{fig:vo-bo_graph-cos} Time evolution of the shadow edge observed at $r=300$. The parameters are the same as those in Fig.~\ref{fig:psgenerator-numerical}. } \end{figure} In Fig.~\ref{fig:psgenerator-numerical2}, we plot the photon sphere orbit and the orbit $r = 3 m(v)$ for comparison. This shows that the photon sphere orbit does not coincide with the orbit $r = 3 m(v)$, but the qualitative behaviors of the two orbits are similar. As will be shown later, for the weak linear accretion case, the deviation of the photon sphere orbit from $r = 3 m(v)$ is of first order in the accretion rate. \begin{figure}[h] \includegraphics[width=170pt]{figures/dynamicalphotonspere_compare_cos_axes.pdf} \caption{\label{fig:psgenerator-numerical2} The red and blue dashed lines denote the photon sphere orbit and the orbit $r=3m(v)$, respectively. The parameters of the spacetime are the same as those in Fig.~\ref{fig:psgenerator-numerical}. } \end{figure} \par We can also discuss the winding number $n=\Delta \phi / 2\pi$, where $\Delta \phi$ is the total change of $\phi$ minus $\pi$ for a null geodesic which comes from a light source and reaches an observer. Fig.~\ref{fig:deflection_angle-numerical} shows the time evolution of the winding number as a function of the impact parameter $b$. The winding number diverges at the impact parameter $b_{\text{edge}}$ which corresponds to the shadow edge orbit. In fact, similarly to the Schwarzschild case, this divergent behavior is also logarithmic. A typical case is plotted in Fig.~\ref{fig:deflection_angle-log_plot}. We note that the logarithmic divergence at the impact parameter corresponding to the shadow edge orbit can also be seen in the linear and shell accretion cases studied later. \begin{figure}[h] \includegraphics[width=390pt]{figures/b-phi_graph_cos_line_axes.pdf} \caption{\label{fig:deflection_angle-numerical} The time evolution of the winding number $n=\Delta \phi/2\pi$ of null geodesics emitted from $r=350$ and observed at $r=300$ as a function of the impact parameter $b_o$. We took the same parameters as in Fig.~\ref{fig:psgenerator-numerical}. The gray dashed lines are $b=3\sqrt{3}M_1$ and $b=3\sqrt{3}M_2$ for the left and the right, respectively, and the red dashed line is the impact parameter of the shadow edge orbit $b_{\text{edge}}$ at $v=v_\text{o}$. } \end{figure} \begin{figure}[h] \includegraphics[width=200pt]{figures/b-phi_graph_cos_linearlog_vo700_line_axes.pdf} \caption{\label{fig:deflection_angle-log_plot} Semilogarithmic version of Fig.~\ref{fig:deflection_angle-numerical} for $v_\text{o}=700$. The horizontal axis is $b_o-b_{\text{edge}}$ and the vertical axis is $\Delta \phi/2\pi$. } \end{figure} In the following two sections, we investigate cases where the photon spheres can be derived more analytically. The shadow edge orbits that asymptote to the photon spheres are also shown numerically. \afterpage{\clearpage} \newpage \section{Analytical investigation: linear accretion} \label{sec:analytical} Here we consider another case of the Vaidya spacetime, Eq.~\eqref{eq:vaidyametric}, with the mass function, \begin{equation} \label{eq:massfunction-linear} m(v)=\left\{ \begin{array}{cc} M_1 & v\le v_1\\ M_1+\mu\left(v-v_1\right) & v_1<v\le v_2\\ M_2=M_1+\mu\left(v_2-v_1\right) & v>v_2 \end{array} \right. .
\end{equation} The static time domains correspond to Schwarzschild spacetimes with masses $M_1$ and $M_2$. The mass increases linearly in $v$ in the intermediate dynamical time domain. We assume $M_1>0$ and $0<\mu<1/16$. The causal structure of the dynamical region is given as a part of the diagram in Fig.~1 of Ref.~\cite{Hiscock_1982}. Thus, our spacetime is an eternal black hole, and the Penrose diagram is given by Fig.~\ref{fig:causal-structure}. \begin{figure}[t] \includegraphics[width=400pt]{figures/causal_structure.pdf} \caption{\label{fig:causal-structure} The Penrose diagram of the Vaidya spacetime with the mass function, Eq.~\eqref{eq:massfunction-linear}, for $\mu<1/16$. The apparent horizon (AH) of the Vaidya spacetime is given by $r=2m(v)$~\cite{nielsen}. The event horizon of the black hole ($\mathscr{H}_+$) is the null hypersurface that matches $r=2M_2$ in the future time domain $v>v_2$. The analysis of the dynamical domain is given in Sec.~\ref{sec:dynamical_domain}. } \end{figure} \par Because the mass function, Eq.~\eqref{eq:massfunction-linear}, mimics that of Eq.~\eqref{eq:massfunction-cos}, we can expect that the photon sphere of this spacetime also satisfies boundary conditions similar to those in the previous section. Indeed, from numerical integration of the null geodesic equation, we can see similar behaviors of the shadow edge orbits and of the photon sphere generator as their limiting surface (Fig.~\ref{fig:psgenerator-linear}). The corresponding shadow images and shadow edges are shown in Fig.~\ref{fig:Shadowimage-linear} and Fig.~\ref{fig:vo-bo_graph-self_similar}, respectively (see also Appendix~\ref{sec:linearlargeaccretion} for various accretion rates $\mu$). As in the previous case, the photon sphere generator asymptotes to $r=3M_1+0$ in the past and $r=3M_2-0$ in the future. These boundary conditions seem to be generic for this kind of eternal black hole spacetime. \begin{figure}[t] \includegraphics[width=200pt]{figures/dynamicalphotonspere_selfsimila_mu1-20_M1-2_revision.pdf} \caption{\label{fig:psgenerator-linear} The orbits of the photon sphere generator (red line) and the shadow edge orbits that once approach the generator (blue lines) for the Vaidya spacetime with the linear accretion mass function~(\ref{eq:massfunction-linear}). We set the parameters as $M_1=1$, $M_2=2$, $\mu=1/20$, $v_1=0$ and $v_2=40$. The photon sphere generator asymptotes to $r=3M_1+0$ in the past and $r=3M_2-0$ in the future. } \end{figure} \begin{figure}[t] \includegraphics[width=280pt]{figures/Shadowimage_selfsimilar_mu1-20_M1-2_v2_revision.pdf} \caption{\label{fig:Shadowimage-linear} Image of the black hole shadow observed at $r=50$ for $v=20, 40, 60, 80, 100, 120, 140, 160$ and $180$ in the Vaidya spacetime with the linear accretion mass function~(\ref{eq:massfunction-linear}). We took the same parameters as in Fig.~\ref{fig:psgenerator-linear}. The distance from the center corresponds to the impact parameter observed at $r=50$, and the red dashed lines are $b=3 \sqrt{3} M_1$ and $b=3 \sqrt{3}M_2$ for the inner and the outer, respectively. } \end{figure} \begin{figure}[h] \includegraphics[width=200pt]{figures/vo-bographic_selfsimilar_mu1-20_M1-2_line_static_axes.pdf} \caption{\label{fig:vo-bo_graph-self_similar} Time evolution of the shadow edge observed at $r=50$. We took the same parameters as in Fig.~\ref{fig:psgenerator-linear}.
} \end{figure} \par In the following, we find the photon sphere generator more analytically by assuming that the generator asymptotes to $r=3M_1+0$ in the past and to $r=3M_2-0$ in the future. It is explicitly shown that the deviation of the photon sphere from the hypersurfaces $r=3M_1$ and $r=3M_2$ depends on the parameters of the geometry, $M_1$, $M_2$, $\mu$, and $v_2-v_1$. \subsection{Null geodesics in the static domains} The static time domains, $v\le v_1$ and $v>v_2$, of the Vaidya spacetime are isometric to Schwarzschild spacetimes of masses $M_1$ and $M_2$, respectively. We analyze null geodesics in each region in the usual way. \par Consider the Hamiltonian for the null geodesic equation, \begin{equation} \mathcal{H}=g^{\mu\nu}k_\mu k_\nu =0, \end{equation} for the null geodesic tangent $k^\mu=dx^\mu/d\lambda=\dot{x}^\mu$ with the affine parameter $\lambda$. We assume that the null geodesic lies in the equatorial plane $\theta=\pi/2$ without loss of generality. Since the basis $\partial_v$ is locally a Killing vector in each static domain and $\partial_\phi$ is globally a Killing vector, we have a locally conserved energy and a globally conserved angular momentum, \begin{equation} \label{eq:energy-angular-momentum} E:=-g(k,\partial_v),\;\;\; L:=g(k,\partial_\phi). \end{equation} The null geodesic equation in each domain then reduces to \begin{equation} \label{eq:potential-static} \dot{r}^2+V_i(E,L;r)=0,\;\;V_i(E,L;r):=L^2f_i(r)r^{-2}-E^2, \end{equation} where the functions $f_i(r)=1-(2M_i)/r$ $(i=1,2)$ are the metric component $-g_{vv}$ of each domain. \par The effective potentials $V_i$ have maxima at $r=3M_i$. From the condition $\dot{r}=0$ at $r=3M_i$, we can see that the null geodesics staying at these radii have the critical impact parameters $b_{\mr{c}i}^2=27M_i^2$, where the impact parameter is defined by $b^2=L^2/E^2$. The null geodesics that asymptote to $r=3M_i$ also have the critical impact parameters $b_{\mr{c}i}^2$. Specifically, the null geodesic that asymptotes to $r=3M_1$ from outside toward past infinity has the critical impact parameter $b_{\mr{c}1}^2$ and satisfies $r>3M_1$ and $\dot{r}>0$ for $v\le v_1$. The one that asymptotes to $r=3M_2$ from inside toward future infinity has the critical impact parameter $b_{\mr{c}2}^2$ and satisfies $r<3M_2$ and $\dot{r}>0$ for $v> v_2$. \subsection{Null geodesics in the dynamical domain} \label{sec:dynamical_domain} The spacetime locally has a homothetic vector in the dynamical domain~\cite{nielsen,Hiscock_1982}. Therefore, for $v_1<v\le v_2$, we can take conformally static coordinates via the coordinate transformation $\{v,r\}\to\{T,R\}$, \begin{equation} \label{eq:coordtrans-to-TR} T(v,r)=\ln (v+v_0)-R^*(R(v,r)),\;\; R^2(v,r)=\frac{r}{v+v_0}, \end{equation} where \begin{eqnarray} R^*(R)&:=&\int F^{-1}(R)dR\nonumber\\ &=&-\frac{1}{2(R_{\mr{H}+}^2-R_{\mr{H}-}^2)}\left[R_{\mr{H}+}^2\ln |R_{\mr{H}+}^2-R^2|-R_{\mr{H}-}^2\ln |R^2-R_{\mr{H}-}^2|\right],\nonumber\\ F(R)&:=&\frac{1}{2R}\left(f(R)-2R^2\right)=\frac{1}{2R}\left(1-2\mu R^{-2}-2R^2\right),\nonumber \end{eqnarray} $v_0:=(M_1-\mu v_1)/\mu$, and $f(v,r)=1-2\mu (v+v_0)/r=1-2\mu /R^2=:f(R)$ for $v_1<v\le v_2$. The time coordinate basis $\partial_T$ is the homothetic vector, and $\partial_R$ is the radial basis orthogonal to $\partial_T$.
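As a quick cross-check of this transformation, the following minimal Python sketch (our illustration only; the parameters are those of Fig.~\ref{fig:psgenerator-linear}) evaluates $R^*(R)$ in the closed form above, maps a point $(v,r)$ of the dynamical domain to $(T,R)$, and locates the roots of $F(R)$, which are the radii $R_{\mathrm{H}\pm}$ introduced just below.
\begin{verbatim}
# A minimal sketch (illustration only): the conformally static coordinates
# (T, R) of the dynamical domain, Eq. (coordtrans-to-TR), linear accretion.
import numpy as np

mu = 1.0 / 20.0                     # accretion rate, mu < 1/16
M1, v1 = 1.0, 0.0                   # initial mass and start of accretion
v0 = (M1 - mu * v1) / mu            # so that m(v) = mu * (v + v0)

# F(R) = 0 is equivalent to u^2 - u/2 + mu = 0 with u = R^2
RHm2, RHp2 = sorted(np.roots([1.0, -0.5, mu]).real)

def Rstar(R):
    """Closed-form integral of 1/F(R) given in the text."""
    u = R * R
    return -(RHp2 * np.log(abs(RHp2 - u)) - RHm2 * np.log(abs(u - RHm2))) \
           / (2.0 * (RHp2 - RHm2))

def to_TR(v, r):
    """Map (v, r) in the dynamical domain to the conformally static (T, R)."""
    R = np.sqrt(r / (v + v0))
    return np.log(v + v0) - Rstar(R), R

# example: the point r = 3*m(v) at v = 10 lies at R^2 = 3*mu
T, R = to_TR(10.0, 3.0 * mu * (10.0 + v0))
print(R**2, 3.0 * mu)                # both equal 0.15
print(np.sqrt(RHm2), np.sqrt(RHp2))  # conformal Killing horizon radii R_H-+
\end{verbatim}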
The metric in the dynamical time domain is then given by \begin{eqnarray} \label{eq:conformally-static-metric} ds^2&=&\Omega^2\left[\frac{2}{R}\left(-FdT^2+F^{-1}dR^2\right)+R^2d\Omega^2\right],\\ \Omega^2&=&(v+v_0)r=e^{2(T+R^*)}R^2. \end{eqnarray} The radii $R_{\mr{H}\pm}$ given by \begin{equation} \label{eq:RH+-} R_{\mr{H}\pm}:=\frac{1}{2}\sqrt{1\pm\sqrt{1-16\mu}} \end{equation} are the solutions of the equation $F(R)=0$. They are the conformal (homothetic) Killing horizons in the sense that $\partial_T$ becomes null there. We can adopt the coordinates $\{T,R\}$ in each of the regions $0<R<R_{\mr{H}-}$, $R_{\mr{H}-}<R<R_{\mr{H}+}$, and $R_{\mr{H}+}<R<\infty$. However, only the region $R_{\mr{H}-}<R<R_{\mr{H}+}$ is conformally static, because $\partial_T$ is timelike there. The causal structure is investigated in Ref.~\cite{Hiscock_1982} for the maximal extension, corresponding to changing the range $v\in(v_1,v_2)$ to $v\in(v_1-M_1/\mu,\infty)$. \par The basis $\partial_T$ is the homothetic vector, which is timelike in the conformally static region. Thus, for a null geodesic with the tangent $k^\mu=dx^\mu/d\lambda$ in the dynamical domain, we have a locally conserved ``energy", \begin{equation} \label{eq:def-conformal-energy} C:=-g(k,\partial_T), \end{equation} in addition to the globally conserved angular momentum $L$. Then the null geodesic equation reduces to \begin{equation} \Omega^4\dot{R}^2+\frac{1}{2}FR^{-1}L^2-\frac{R^2}{4}C^2=0. \end{equation} The parameter transformation $\lambda\to\widetilde{\lambda}(\lambda)$ given by \begin{equation} \frac{d\lambda}{d\widetilde{\lambda}}=\Omega^2 \end{equation} further reduces the equation to \begin{equation} \label{eq:conformal-pontial-problem} {R'}^2+U(C,L;R)=0,\;\;U(C,L;R):=\frac{1}{2}FR^{-1}L^2-\frac{R^2}{4}C^2, \end{equation} where ${}'=d/d\widetilde{\lambda}$. \par The behaviors of null geodesics are characterized in terms of the conformal impact parameter, $D:=L/C$, and the rescaled potential, $U(D;R):=C^{-2}U(C,L;R)=U(1,D;R)$. The null geodesics are given as horizontal lines in the $R$-$D$ plane in Fig.~\ref{fig:conformal-potential}, where the forbidden region $U(D;R)>0$ is shown as the shaded region. The solution of $R$ to $U(D;R)=0$ and $\frac{dU}{dR}(D;R)=0$ is given by \begin{equation} R_{\mr{ex}}:=\frac{1}{\sqrt{2}}\sqrt{1- \sqrt{1-12\mu}}. \end{equation} The corresponding critical impact parameter is given by \begin{equation} D_\text{c}^2=\frac{(1-\sqrt{1-12\mu})(1-\sqrt{1-12\mu}-6\mu)}{2(8\mu-(1-\sqrt{1-12\mu}))}. \end{equation} For the critical orbit, the radius $R_{\mr{ex}}$ corresponds to the maximum of the potential, i.e., $\frac{d^2U}{dR^2}(D_\text{c};R_{\mr{ex}})<0$. As can be seen from Fig.~\ref{fig:conformal-potential}, null geodesics with an impact parameter $D^2<D_\text{c}^2$ are not reflected by the potential. Another important radius is $R=\sqrt{3\mu}$, which corresponds to $r=3M_1$ at $v=v_1$ and $r=3M_2$ at $v=v_2$. In the case $\mu<1/16$, the conformal Killing horizons $R_{\mr{H}\pm}$ exist, and these characteristic radii of the dynamical domain satisfy the relations \begin{equation} 2\mu<R_{\mr{H}-}^2<3\mu<R_{\mr{ex}}^2<R_{\mr{H}+}^2, \end{equation} as shown in Fig.~\ref{fig:causal-structure}. \begin{figure}[h] \centering \includegraphics[width=200pt]{figures/DUplot_axes.pdf} \caption{ \label{fig:conformal-potential} The $R$-$D$ plane for null geodesics in the dynamical region of the Vaidya spacetime with the mass function, Eq.~\eqref{eq:massfunction-linear}.
The region of $U(D;R)>0$ (shaded region) is the forbidden region. The vertical dashed lines are $R=R_{\mr{H}-}$ and $R=R_{\mr{H}+}$. The accretion rate is set to $\mu=1/32<1/16$ as an example. } \end{figure} \subsection{Transition among the potential problems} From the one-dimensional potential problems in each time domain, Eqs.~\eqref{eq:potential-static} and~\eqref{eq:conformal-pontial-problem}, the null geodesic equation of the spacetime is formulated as a piecewise potential problem. At $v=v_1$ and $v=v_2$, the potential problems are transformed into one another by the coordinate transformation, $\{v,r\}\leftrightarrow\{T,R\}$. \par At $v=v_i\; (i=1,2)$, the energies and radial velocities are transformed as \begin{eqnarray} \label{eq:EtoC} C&=&-g(k,\partial_T)=-g\left(k,\frac{\partial T}{\partial v}\partial_v+\frac{\partial T}{\partial r}\partial_r \right)\nonumber\\ &=&\frac{(v_i+v_0)f_i(r)-r}{f_i(r)}E_i-\frac{r}{f_i(r)}\dot{r},\\ R' &=&\frac{d\lambda}{d\widetilde{\lambda}}\left(\frac{\partial R}{\partial v}\dot{v}+\frac{\partial R}{\partial r}\dot{r}\right)\nonumber\\ &=&\frac{R}{2C}\left[-\frac{r}{f_i(r)}E_i+\frac{(v_i+v_0)f_i(r)-r}{f_i(r)}\dot{r}\right], \end{eqnarray} for $\{v,r\}\rightarrow\{T,R\}$ and \begin{eqnarray} E_i&=&-g(k,\partial_v)=-g\left(k,\frac{\partial v}{\partial T}\partial_T+\frac{\partial v}{\partial R}\partial_R \right)\nonumber\\ &=&e^{-(T+R^*(R))}\frac{f(R)C}{f^2(R)-2R^2}\left[f(R)-R^2+2RR'\right],\\ \dot{r}&=&\left(\frac{\partial r}{\partial T}\dot{T}+\frac{\partial r}{\partial R}\frac{d\widetilde{\lambda}}{d\lambda}R'\right)\nonumber\\ &=&-e^{-(T+R^*(R))}\frac{f(R)C}{f^2(R)-2R^2}\left[R^2+\frac{2(f(R)-R^2)}{R}R'\right], \end{eqnarray} for $\{T,R\}\rightarrow\{v,r\}$, where the angular momentum $L$ is globally conserved and the coordinate values are transformed according to Eq.~\eqref{eq:coordtrans-to-TR}. Note that $\dot{r}=\pm\sqrt{-V(E_i,L;r)}$ and $R'=\pm\sqrt{-U(C,L;R)}$ are obtained more easily from the potentials once the energies are known; however, the information about the signs is then lost. Note also that $e^{-(T+R^*(R))}=\mu/M_i$ at $v=v_i$. \subsection{Photon sphere generator} \label{sec:ps-generator} Here we identify the conditions for a null geodesic to be the photon sphere generator. Without loss of generality, we assume that the generator lies on the equatorial plane $\theta=\pi/2$. First we define {\it a past critical orbit} and {\it a future critical orbit} as null geodesics that asymptote to $r=3M_1$ from outside in the past direction and to $r=3M_2$ from inside in the future direction, respectively. Then we investigate the conditions for them to be connected successfully in the dynamical domain. \par A past critical orbit is given as a null geodesic $\gamma_1$ satisfying the condition, \begin{equation} \label{eq:gamma1-condition} r|_{v=v_1}=r_1:=3M_1(1+\epsilon_1),\;\; \dot{r}|_{v=v_1}=+\sqrt{-V(E_1,L_1;r_1)},\;\; k_v=-E_1,\;\; L_1^2=b_{\mr{c}1}^2E_1^2, \end{equation} where $\epsilon_1>0$ and the subscript $1$ denotes the quantities of $\gamma_1$. From Eqs.~\eqref{eq:coordtrans-to-TR} and~\eqref{eq:EtoC}, we have \begin{eqnarray} \label{eq:R1-T1} R^2&=&R_1^2:=3\mu(1+\epsilon_1),\;\; T=T_1:=\ln\frac{M_1}{\mu}-R^*(R_1),\\ \label{eq:critical-C1} C_1 &=&\frac{E_1M_1}{\mu}\left[1-9\mu\frac{(1+\epsilon_1)^2}{1+3\epsilon_1}\left(1+\sqrt{1-\frac{1+3\epsilon_1}{(1+\epsilon_1)^3}}\right)\right].
\end{eqnarray} In the same manner, a future critical orbit is given as a null geodesic $\gamma_2$ satisfying \begin{equation} \label{eq:gamma2-condition} r|_{v=v_2}=r_2:=3M_2(1-\epsilon_2),\;\; \dot{r}|_{v=v_2}=+\sqrt{-V(E_2,L_2;r_2)},\;\; k_v=-E_2,\;\; L_2^2=b_{\mr{c}2}^2E_2^2, \end{equation} where $\epsilon_2>0$. From Eqs.~\eqref{eq:coordtrans-to-TR} and~\eqref{eq:EtoC}, we have \begin{eqnarray} \label{eq:R2-T2} R^2&=&R_2^2:=3\mu(1-\epsilon_2),\;\; T=T_2:=\ln\frac{M_2}{\mu}-R^*(R_2),\\ \label{eq:critical-C2} C_2 &=&\frac{E_2M_2}{\mu}\left[1-9\mu\frac{(1-\epsilon_2)^2}{1-3\epsilon_2}\left(1+\sqrt{1-\frac{1-3\epsilon_2}{(1-\epsilon_2)^3}}\right)\right]. \end{eqnarray} \par The critical orbits $\gamma_1$ and $\gamma_2$ are successfully connected if they satisfy \begin{equation} \label{eq:jc} \gamma_1^\mu|_{v=v_2}=\gamma_2^\mu|_{v=v_2},\;\;\dot\gamma_1^\mu|_{v=v_2}=\dot\gamma_2^\mu|_{v=v_2}. \end{equation} The latter condition is equivalent to the conditions for the conserved quantities, \begin{equation} \label{eq:1stjc} C_1=C_2,\;\;L_1=L_2, \end{equation} because each tangent vector has only two independent components, due to the fact that the null geodesics are supposed to be on the equatorial plane and satisfy $\mathcal{H}=0$. Since $L_1=b_{\mathrm{c}1}E_1=3\sqrt{3}M_1E_1$ and $L_2=b_{\mathrm{c}2}E_2=3\sqrt{3}M_2E_2$, the equation $L_1=L_2$ implies \begin{equation} \label{eq:reduced-1stjc-a} E_1M_1=E_2M_2. \end{equation} From Eqs.~\eqref{eq:critical-C1},~\eqref{eq:critical-C2}, and~\eqref{eq:reduced-1stjc-a}, the equation $C_1=C_2$ reduces to \begin{equation} \label{eq:reduced-1stjc-b} \frac{(1+\epsilon_1)^2}{1+3\epsilon_1}\left(1+\sqrt{1-\frac{1+3\epsilon_1}{(1+\epsilon_1)^3}}\right) =\frac{(1-\epsilon_2)^2}{1-3\epsilon_2}\left(1+\sqrt{1-\frac{1-3\epsilon_2}{(1-\epsilon_2)^3}}\right). \end{equation} This relation between $\epsilon_1$ and $\epsilon_2$ is independent of the parameters of the spacetime itself, $\mu$, $v_1$, $v_2$, $M_1$, and $M_2$. \par The former condition in Eq.~\eqref{eq:jc} is obtained explicitly by extending $\gamma_1$ from $v=v_1$ to $v=v_2$ through integration of the null geodesic equation. For simplicity, we assume that, for $v\in (v_1,v_2)$, $\gamma_1$ and $\gamma_2$ are in the conformally static region spanned by the coordinates $\{T,R\}$. Then the condition is expressed as \begin{equation} \label{eq:2ndjc} T_2-T_1=\int^{\lambda_2}_{\lambda_1}\frac{dT(\lambda)}{d\lambda}d\lambda, \;\;R_2-R_1=\int^{\lambda_2}_{\lambda_1}\frac{dR(\lambda)}{d\lambda}d\lambda, \end{equation} where $T(\lambda)$ and $R(\lambda)$ are the coordinates of $\gamma_1(\lambda)$, and $\lambda_1$ and $\lambda_2$ are the parameter values satisfying $R(\lambda_1)=R_1$ and $R(\lambda_2)=R_2$, respectively. Further assuming that $\dot{R}(\lambda)<0$ for $\lambda\in[\lambda_1,\lambda_2]$, which is verified in Appendix~\ref{sec:Rdotsign}, the first equation is transformed into the form \begin{equation} T_2-T_1=\int^{R_2}_{R_1}\frac{dT/d\lambda}{dR/d\lambda}dR=\int^{R_2}_{R_1}\frac{dT/d\widetilde{\lambda}}{dR/d\widetilde{\lambda}}dR, \end{equation} while the second equation holds automatically by the definition of $\lambda_1$ and $\lambda_2$. Using Eq.~\eqref{eq:coordtrans-to-TR} and the fact that $v_1+v_0=M_1/\mu$ and $v_2+v_0=M_2/\mu$, the left-hand side becomes \begin{equation} T_2-T_1=\ln\frac{M_2}{M_1}-R^*(R_2)+R^*(R_1).
\end{equation} Using Eqs.~\eqref{eq:def-conformal-energy} and~\eqref{eq:conformal-pontial-problem} and the fact that $U(C,L;R)=C^2U(1,D;R)$, the right-hand side reduces to \begin{equation} \int^{R_2}_{R_1}\frac{dT/d\widetilde{\lambda}}{dR/d\widetilde{\lambda}}dR=-\int^{R_2}_{R_1}\frac{R}{2F(R)}\left[-U(1,D_1;R)\right]^{-1/2}dR, \end{equation} where \begin{equation} D_1:=\frac{L_1}{C_1}=\frac{3\sqrt{3}M_1E_1}{C_1} =-3\sqrt{3}\mu f_1(r_1)\left[R_1^2-f_1(r_1)+R_1^2\sqrt{1-b_{\mr{c}1}^2f_1(r_1)r_1^{-2}}\right]^{-1}. \end{equation} Then we have \begin{equation} \ln\frac{M_2}{M_1}-R^*(R_2)+R^*(R_1) =-\int^{R_2}_{R_1}\frac{R}{2F(R)}\left[-U(1,D_1;R)\right]^{-1/2}dR. \end{equation} Finally, using Eqs.~\eqref{eq:gamma1-condition},~\eqref{eq:R1-T1},~\eqref{eq:gamma2-condition}, and~\eqref{eq:R2-T2}, we obtain the equation \begin{eqnarray} \label{eq:reduced-2ndjc} \ln\frac{M_2}{M_1} &=&R^*\left(3\mu(1-\epsilon_2)\right)-R^*\left(3\mu(1+\epsilon_1)\right) +\int^{3\mu(1+\epsilon_1)}_{3\mu(1-\epsilon_2)}\frac{R}{2F(R)}\left[-U(1,D_1;R)\right]^{-1/2}dR,\nonumber\\ D_1&=&3\sqrt{3}\mu \left[1-9\mu\frac{(1+\epsilon_1)^2}{1+3\epsilon_1}\left(1+\sqrt{1-\frac{1+3\epsilon_1}{(1+\epsilon_1)^3}}\right)\right]^{-1}, \end{eqnarray} as the former condition of Eq.~\eqref{eq:jc}. Independently of Eq.~\eqref{eq:reduced-1stjc-b}, Eq.~\eqref{eq:reduced-2ndjc} gives a second relation between $\epsilon_1$ and $\epsilon_2$, which depends on $M_1$, $\mu$, and $v_2-v_1$; note that $M_2=M_1+\mu(v_2-v_1)$ is determined by these parameters. \par In summary, if there exist critical orbits $\gamma_1$ and $\gamma_2$ with the parameters $\epsilon_1$ and $\epsilon_2$, respectively, that satisfy Eqs.~\eqref{eq:reduced-1stjc-a},~\eqref{eq:reduced-1stjc-b}, and~\eqref{eq:reduced-2ndjc}, they form the photon sphere (PS) generator. Eq.~\eqref{eq:reduced-1stjc-a} determines the parameter scaling of $\gamma_2(\lambda)$ relative to $\gamma_1(\lambda)$. The combination of Eqs.~\eqref{eq:reduced-1stjc-b} and~\eqref{eq:reduced-2ndjc} determines $\epsilon_1$ and $\epsilon_2$ for given $M_1$, $\mu$, and $v_2-v_1$. \subsection{Results} The numerical results of Eqs.~\eqref{eq:reduced-1stjc-b} and~\eqref{eq:reduced-2ndjc} for various values of the parameters $\mu$ and $v_2-v_1$ are shown in Fig.~\ref{fig:epsilon-results}. The parameter $M_2$ is given by $M_2=M_1+\mu(v_2-v_1)$, and we can take the other parameters as $M_1=1$, $E_1=1$, and $v_1=0$ without loss of generality. We have investigated the region $\mu<1/18$ because, for $\mu>1/18$, we have $R_2=\sqrt{3\mu(1-\epsilon_2)}<\sqrt{3\mu}<R_{\mathrm{H}-}$, which implies that the past and future critical orbits corresponding to the PS generator are connected outside the conformally static region $R\in(R_{\mathrm{H}-},R_{\mathrm{H}+})$. This violates the assumption mentioned above Eq.~\eqref{eq:2ndjc}, under which we have derived Eq.~\eqref{eq:reduced-2ndjc}. \begin{figure}[h] \centering \includegraphics[width=200pt]{figures/v2-epsilon_mu-1-200_axes.pdf} \includegraphics[width=200pt]{figures/v2-epsilon_mu-1-100_axes.pdf} \includegraphics[width=200pt]{figures/v2-epsilon_mu-1-30_axes.pdf} \includegraphics[width=200pt]{figures/v2-epsilon_mu-1-20_axes.pdf} \caption{ \label{fig:epsilon-results} The dots show the numerical results of Eqs.~\eqref{eq:reduced-1stjc-b} and~\eqref{eq:reduced-2ndjc} for varying $v_2-v_1$ with the fixed values $\mu=1/200$, $1/100$, $1/30$, and $1/20$. The other parameters are chosen so that $M_1=1$, $E_1=1$, and $v_1=0$.
} \end{figure} \par For smaller values of $v_2-v_1$, we can determine $\epsilon_1$ and $\epsilon_2$ analytically by linear approximation. Eq.~\eqref{eq:reduced-1stjc-b} is expanded in $\epsilon_1$ and $\epsilon_2$ as \begin{equation} -1+9\mu+9\left(\sqrt{3}-1\right)\mu\epsilon_1+\mathcal{O}\left(\epsilon_1^2\right) =-1+9\mu+9\left(\sqrt{3}+1\right)\mu\epsilon_2+\mathcal{O}\left(\epsilon_2^2\right). \end{equation} Equating the linear-order terms, we have \begin{equation} \label{eq:epsilon-ratio} \epsilon_1=\frac{\sqrt{3}+1}{\sqrt{3}-1}\epsilon_2. \end{equation} Eq.~\eqref{eq:reduced-2ndjc} is expanded as \begin{equation} \ln\frac{M_2}{M_1}+\frac{9}{2}\frac{\mu^2}{(R_{\mr{H}+}^2-3\mu)(3\mu-R_{\mr{H}-}^2)}({\epsilon}_1+{\epsilon}_2) =\frac{1-9\mu}{1-18\mu}({\epsilon}_1+{\epsilon}_2)+\mathcal{O}({\epsilon}_1^2,{\epsilon}_2^2). \end{equation} For $\ln M_2/M_1=\mathcal{O}({\epsilon}_1,{\epsilon}_2)$, we have \begin{equation} {\epsilon}_1+{\epsilon}_2 =\left[\frac{1-9\mu}{1-18\mu}-\frac{9}{2}\frac{\mu^2}{(R_{\mr{H}+}^2-3\mu)(3\mu-R_{\mr{H}-}^2)}\right]^{-1}\ln\frac{M_2}{M_1} =\ln\frac{M_2}{M_1}, \end{equation} where we have used Eq.~\eqref{eq:RH+-} in the second equality. Using Eq.~\eqref{eq:epsilon-ratio}, we finally obtain \begin{equation} \label{eq:linear-epsilon} {\epsilon}_1=\frac{3+\sqrt{3}}{6}\ln\frac{M_2}{M_1},\;\; {\epsilon}_2=\frac{3-\sqrt{3}}{6}\ln\frac{M_2}{M_1}.
\end{equation} The result is valid for the case $\epsilon_1,\;\epsilon_2\ll1$, corresponding to the condition $\delta M/M_1:=(M_2-M_1)/M_1=\mathcal{O}(\epsilon)\ll1$. Indeed, this coincides with the values of $\epsilon_1$ and $\epsilon_2$ for the smaller values of $v_2-v_1$ in Fig.~\ref{fig:epsilon-results}. In the limit $M_2\to M_1$, the photon sphere coincides with the Schwarzschild photon sphere, $r=3M_1$. The photon sphere is described in the Penrose diagram in Fig.~\ref{fig:ps-penrose}. \begin{figure}[t] \includegraphics[width=400pt]{figures/ps_penrose.pdf} \caption{\label{fig:ps-penrose} The dynamical photon sphere (red dashed line, PS) in the temporally accreting Vaidya spacetime.} \end{figure} \par From Eqs.~\eqref{eq:R1-T1} and~\eqref{eq:R2-T2}, the radius of the photon sphere is in the range $R^2\in(3\mu(1-\epsilon_2),3\mu(1+\epsilon_1))$. The radius $R^2=3\mu$ corresponds to $r=3m(v)$. Thus, the dynamical photon sphere deviates from, but for weak accretion is approximately given by, three times the Misner-Sharp mass. In the globally self-similar case of the Vaidya spacetime~\cite{Solanki_2022}, a photon sphere is specified as $R^2=R_{\mr{ex}}^2$, i.e., the maximum of the effective potential $U(C,L;R)$ in Eq.~\eqref{eq:conformal-pontial-problem}. The photon sphere in our temporarily self-similar case also differs from this one. \afterpage{\clearpage} \newpage \section{Analytical investigation: shell accretion} \label{sec:analytical-shell} In the previous section, we mainly focused on the weak accretion case. In this section, as another interesting case that can also be studied analytically, we discuss the null dust thin shell limit. If we consider the limit $\mu \to \infty$ and $v_2 - v_1 \to 0$ with $\mu(v_2-v_1) = \delta M$ kept finite and $v_1=0$, the mass function $m(v)$ in Eq.~\eqref{eq:massfunction-linear} becomes \begin{align} m(v) = M_1 + \delta M \Theta(v), \label{eq:massfuncshell} \end{align} where $\delta M = M_2-M_1 (\ge 0)$ and $\Theta(v)$ is the Heaviside step function \begin{align} \Theta(v) = \begin{cases} 0 & ({\rm for}~v \le 0) \\ 1 & ({\rm for}~v > 0). \end{cases} \end{align} The metric Eq.~\eqref{eq:vaidyametric} with Eq.~\eqref{eq:massfuncshell} describes the Schwarzschild spacetime with mass $M_1$ for $v<0$ and with mass $M_2$ for $v>0$, respectively, and there is a null dust thin shell at $v = 0$. To study the photon sphere, we discuss null geodesics in this spacetime. The tangent of a null geodesic on the equatorial plane \begin{align} k = k^v(\lambda) \partial_v + k^r(\lambda) \partial_r + k^\phi(\lambda) \partial_\phi, \end{align} satisfies the geodesic equations $k^\nu \nabla_\nu k^\mu = 0$. Using $L = g(\partial_\phi, k)$, the $\phi$ component of the geodesic equations can be solved. The only non-trivial component of the geodesic equations is \begin{align} \frac{dk^v}{d\lambda} - \frac{L^2}{r^3} + \frac{(k^v)^2(M_1 + \delta M \Theta(v))}{r^2} =0. \end{align} This equation implies that $k^v$ is continuous when the null geodesic goes through the null shell at the $v = 0$ surface.\footnote{ If $k^v$ were not continuous, $dk^v/d\lambda$ would contain a Dirac delta function, and the equation could not be satisfied.
} {}From the null condition for $k$ \begin{align} 2 k^r k^v + \frac{L^2}{r^2} + \frac{(k^v)^2 (2 M_1 - r + 2 \delta M \Theta(v))}{r} = 0, \end{align} we obtain the condition for $k^r$ just before and after the null geodesic goes through the null shell \begin{align} k^r|_{+0} - k^r|_{-0} = - \frac{\delta M k^v}{r_0}, \label{eq:discontinuity-kr} \end{align} where $k^r|_{\pm 0} = \lim_{v \to \pm 0}k^r$ and $r_0$ is the radius of the intersection point of the null geodesic and the null shell. We wish to find a geodesic that asymptotes to $r=3 M_1$ for $v \to -\infty$ and to $r=3 M_2$ for $v \to \infty$; such a geodesic has the critical impact parameters $b_{\mr{c}1} = 3\sqrt{3}M_1$ for $v < 0$ and $b_{\mr{c}2} = 3\sqrt{3}M_2$ for $v>0$. Because $L$ is globally conserved, the relations \begin{align} E_1 &= \frac{L}{b_{\mr{c}1}} = \frac{L}{3\sqrt{3}M_1}, \\ E_2 &= \frac{L}{b_{\mr{c}2}} = \frac{L}{3\sqrt{3}M_2}, \end{align} hold, where $E_1 = -g(\partial_v, k)$ for $v<0$ and $E_2 = -g(\partial_v, k)$ for $v>0$. {}From the definition of the energy in the $v<0$ and $v >0$ regions, we have \begin{align} k^v = \begin{cases} \dfrac{r (\sqrt{3}L + 9 M_1 k^r)}{9M_1(r - 2 M_1) } & ({\rm for~~} v <0) \\ \dfrac{r (\sqrt{3}L + 9 M_2 k^r)}{9M_2(r - 2 M_2) } & ({\rm for~~} v >0). \end{cases} \label{eq:kvbeforeafter} \end{align} The continuity of $k^v$ at the null shell implies \begin{align} \frac{\sqrt{3}L + 9 M_1 k^r|_{-0}}{M_1(r_0 - 2 M_1 )} = \frac{\sqrt{3}L + 9 M_2 k^r|_{+0}}{M_2(r_0 - 2 M_2)}. \label{eq:continuity-kv} \end{align} From Eqs.~\eqref{eq:discontinuity-kr}, \eqref{eq:kvbeforeafter} and \eqref{eq:continuity-kv}, we can show $3 M_1 < r_0 < 3 M_2$, $k^r|_{-0}>0$ and $k^r|_{+0}>0$.\footnote{ Eqs.~\eqref{eq:discontinuity-kr}, \eqref{eq:kvbeforeafter} and \eqref{eq:continuity-kv} lead to the conclusion that $k^r|_{+0}k^r|_{-0}$ becomes negative if and only if $r_0$ satisfies $3 M_1 +\delta M < r_0 < 3 M_1 + 2 \delta M (< 3 M_2)$. If $k^r|_{-0}>0$ and $k^r|_{+0}<0$ for $r_0 < 3 M_2$, the geodesic goes to the black hole horizon. Thus, we only need to consider the possibility $k^r|_{-0}>0$, $k^r|_{+0}>0$, and $3 M_1 < r_0 < 3 M_2$. } Thus, the relations \begin{align} k^r = \begin{cases} \dfrac{L(r-3 M_1)\sqrt{r + 6 M_1}}{3\sqrt{3}M_1 r^{3/2}} > 0 & ({\rm for~} v <0) \\ \dfrac{L(3 M_2 - r)\sqrt{r + 6 M_2}}{3\sqrt{3}M_2r^{3/2}} > 0 & ({\rm for~} v >0), \end{cases} \label{eq:krbeforeafter} \end{align} are satisfied. Eqs.~\eqref{eq:continuity-kv} and \eqref{eq:krbeforeafter} determine the value of $r_0$ for the desired null geodesic corresponding to the photon sphere\footnote{ If we remove the square roots in Eq.~\eqref{eq:eqforr0}, we obtain the simple equation $r_0^4 - 2(M_1+M_2)r_0^3 + 27 M_1^2M_2^2=0$. We should be careful that a solution of this equation may not satisfy the original equation~\eqref{eq:eqforr0}. } \begin{align} \frac{r_0^{3/2}+(r_0-3 M_1)\sqrt{r_0+6 M_1}}{M_1(r_0-2 M_1)} = \frac{r_0^{3/2}-(r_0-3 M_2)\sqrt{r_0+6 M_2}}{M_2(r_0-2 M_2)}. \label{eq:eqforr0} \end{align} The solution of Eq.~\eqref{eq:eqforr0} is given by \begin{align} r_0 = \frac{1}{2}\left( (M_1 + M_2)(1 + \alpha_2) + \sqrt{ \frac{2(M_1+M_2)^2(1+\alpha_2)}{\alpha_2} - \frac{3 M_1 M_2 (4+\alpha_1^2)}{\alpha_1} } \right), \label{eq:exactsolr0} \end{align} with \begin{align} \alpha_1 &=2^{1/3} \left( \frac{ (M_1+M_2)^2 + (M_2-M_1)\sqrt{M_1^2 + 6 M_1 M_2 + M_2^2} }{M_1 M_2} \right)^{1/3}, \\ \alpha_2 &= \sqrt{1 + \frac{3 M_1 M_2(4+\alpha_1^2)}{(M_1 + M_2)^2\alpha_1}}.
\end{align} We should note that Eq.~\eqref{eq:exactsolr0} also satisfies Eq.~\eqref{eq:discontinuity-kr}. To understand the properties of Eq.~\eqref{eq:exactsolr0}, it is convenient to introduce $\delta r_0$ as \begin{align} \label{eq:dev-3M1} r_0 = 3M_1 \left(1 + \delta r_0 \frac{\delta M}{M_1}\right), \end{align} so that $\delta r_0$ represents the deviation of the dynamical photon sphere radius from $3M_1$ at $v=0$. We note that $\delta r_0$ is a function of $\delta M$ and satisfies $0 \le \delta r_0 \le 1$. Eq.~\eqref{eq:dev-3M1} can also be written as \begin{align} \label{eq:dev-3M2} r_0=3M_2\left(1-(1-\delta r_0)\frac{\delta M}{M_2}\right), \end{align} so that $1-\delta r_0$ represents the deviation from $3M_2$ at $v=0$. If $\delta M/M_1 \ll 1$, $\delta r_0$ behaves approximately as \begin{align} \label{eq:delta-r0} \delta r_0 &= \frac{3+\sqrt{3}}{6} - \frac{1}{18} \left(\frac{\delta M}{M_1}\right) + \frac{18+\sqrt{3}}{648} \left(\frac{\delta M}{M_1}\right)^2 - \frac{32+3\sqrt{3}}{1944} \left(\frac{\delta M}{M_1}\right)^3 +{\cal O}(\delta M^4), \notag\\&\simeq 0.7887 - 0.05556 \left(\frac{\delta M}{M_1}\right) + 0.03045 \left(\frac{\delta M}{M_1}\right)^2 - 0.01913 \left(\frac{\delta M}{M_1}\right)^3 +{\cal O}(\delta M^4). \end{align} If $\delta M/M_1 \gg 1$, $\delta r_0$ behaves approximately as \begin{align} \delta r_0 &= \frac{2}{3} + \frac{1}{3}\left(\frac{\delta M}{M_1}\right)^{-1} - \frac{9}{8}\left(\frac{\delta M}{M_1}\right)^{-2} + \frac{9}{2}\left(\frac{\delta M}{M_1}\right)^{-3} + {\cal O}(\delta M^{-4}). \end{align} For general cases, the behavior of $\delta r_0$ is plotted in Fig.~\ref{fig:nullshell}. The dynamical photon sphere, whose generator asymptotes to $3M_1+0$ and $3M_2-0$ in the far past and future, respectively, is shown in the Penrose diagram in Fig.~\ref{fig:ps-penrose-shell}. In Fig.~\ref{fig:psgenerator-nullshell}, the photon sphere generator and the null geodesics that asymptote to it in the past direction are plotted, and the discontinuous behavior of $k^r$, Eq.~\eqref{eq:discontinuity-kr}, at $v = 0$ can be seen. The corresponding shadow images and shadow edges are shown in Fig.~\ref{fig:Shadowimage-nullshell} and Fig.~\ref{fig:vo-bo_graph-shell}, respectively. While the spacetime changes suddenly at $v = 0$ due to the shell accretion and the photon sphere generator is continuous but not smooth (see Fig.~\ref{fig:psgenerator-nullshell}), the shadow image for a distant observer changes continuously and smoothly in time. \begin{figure}[h] \centering \includegraphics[width=150pt]{figures/fignullshell_axes.pdf} \caption{ \label{fig:nullshell} Behavior of $\delta r_0$ as a function of $\delta M$. } \end{figure} \begin{figure}[t] \includegraphics[width=400pt]{figures/ps_penrose_shell.pdf} \caption{\label{fig:ps-penrose-shell} The dynamical photon sphere (red dashed line, PS) in the Vaidya spacetime with shell accretion.} \end{figure} \begin{figure}[t] \includegraphics[width=200pt]{figures/dynamicalphotonspere_nullshell_dM_1_axes.pdf} \caption{\label{fig:psgenerator-nullshell} The orbits of the photon sphere generator (red line) and shadow edge orbits that once approach the generator (blue line) for the Vaidya spacetime with the shell accretion mass function~\eqref{eq:massfuncshell}. We took the parameters of the spacetime as $M_1=1$, $M_2=2$. The photon sphere generator asymptotes to $r=3M_1+0$ in the past and $r=3M_2-0$ in the future. We can see the discontinuous behavior of $k^r$, Eq.~\eqref{eq:discontinuity-kr}, at $v = 0$.
} \end{figure} \begin{figure}[t] \includegraphics[width=280pt]{figures/Shadowimage_nullshell_dM_1_revision.pdf} \caption{\label{fig:Shadowimage-nullshell} Image of the black hole shadow observed at $r=50$ for $v=10, 40, 70, 100, 110, 120, 130, 140$, and $150$ in the Vaidya spacetime with the shell accretion mass function (\ref{eq:massfuncshell}). We took the same parameters as in Fig.~\ref{fig:psgenerator-nullshell}. The distance from the center corresponds to the impact parameter observed at $r=50$, and the red dashed lines are $b=3 \sqrt{3} M_1$ and $b=3 \sqrt{3}M_2$ for the inner and outer circles, respectively. The shadow image for a distant observer changes continuously in time even in the shell accretion case. } \end{figure} \begin{figure}[h] \includegraphics[width=200pt]{figures/vo-bographic_nullshell_dM_1_line_static_axes.pdf} \includegraphics[width=200pt]{figures/vo-bographic_nullshell_dM_1_line_1derivative_static_axes.pdf} \caption{\label{fig:vo-bo_graph-shell} Time evolution of the shadow edge (left) and its time derivative (right) observed at $r=50$. We took the same parameters as in Fig.~\ref{fig:psgenerator-nullshell}. While the photon sphere generator is continuous but not smooth (see Fig.~\ref{fig:psgenerator-nullshell}), the shadow edge for a distant observer changes continuously and smoothly in time. } \end{figure} Finally, we make a remark on the expressions of Eqs.~\eqref{eq:dev-3M1} and~\eqref{eq:dev-3M2}. Using Eq.~\eqref{eq:delta-r0}, the equations can be written as \begin{eqnarray} r_0&=&3M_1\left(1+\frac{3+\sqrt{3}}{6}\ln\frac{M_2}{M_1}\right)+\mathcal{O}(\delta M^2)=3M_1(1+\epsilon_1)+\mathcal{O}(\delta M^2),\nonumber\\ r_0&=&3M_2\left(1-\frac{3-\sqrt{3}}{6}\ln\frac{M_2}{M_1}\right)+\mathcal{O}(\delta M^2)=3M_2(1-\epsilon_2)+\mathcal{O}(\delta M^2), \end{eqnarray} where we quoted the parameters $\epsilon_1$ and $\epsilon_2$ from the result of the weak accretion limit of the linear accretion case, Eq.~\eqref{eq:linear-epsilon}. Therefore, although our analysis of the linear accretion case (Sec.~\ref{sec:analytical}) depends on the assumption $\mu<1/18$, the expressions for the photon sphere radius, $r|_{v=v_1}=3M_1(1+\epsilon_1)$ and $r|_{v=v_2}=3M_2(1-\epsilon_2)$, with Eq.~\eqref{eq:linear-epsilon} are also valid in the shell accretion case corresponding to $\mu\to\infty$. \afterpage{\clearpage} \newpage \section{Relation to photon sphere generalizations} \label{sec:discussion} We have specified the photon sphere that shapes the black hole shadow. Our photon sphere should coincide with, be included in, or have some relation to the recently proposed notions generalizing a photon sphere. \subsection{Photon surface} In 2001, Claudel, Virbhadra, and Ellis proposed {\it a photon surface} as a geometrical generalization of the Schwarzschild photon sphere~\cite{claudel}: \begin{definition} \label{definition:photonsurface} A photon surface of a spacetime $(M, {g})$ is an immersed, nowhere-spacelike hypersurface $S$ of $(M, {g})$ such that, for every point $p\in S$ and every null vector ${k}\in T_pS$, there exists a null geodesic $\gamma\colon (-\epsilon,\epsilon) \to M$ of $(M, {g})$ such that $\dot{\gamma}(0) ={k},~ |\gamma|\subset S$. \end{definition} An excellent feature of a photon surface is that, in the timelike case, it is a totally umbilic hypersurface~\cite{claudel,perlick}. The surface is completely characterized by a local geometrical quantity, the extrinsic curvature being pure trace.
See Refs.~\cite{cederbaum,cederbaum_maxwell,yazadjiev_psuniqueness,rogatko_psuniqueness,koga3,tsuchiya,koga:psf-wht,Koga_2021,Kobialko_2021} for various investigations of photon surfaces. \par In a spherically symmetric spacetime, there are an infinite number of spherically symmetric photon surfaces, or equivalently $SO(3)$-invariant photon surfaces, even in a dynamical case, because they are given as solutions to a second-order ordinary differential equation~\cite{claudel}. In other words, any null geodesic with non-zero angular momentum is tangent to some spherically symmetric timelike photon surface. In this sense, our photon sphere of the Vaidya spacetime is the special one among these many photon surfaces, namely the one that goes to both $i^+$ and $i^-$. \par As an application of a photon surface to a black hole shadow, the notion of ``stability'' is also important~\cite{koga_2019}. That is, an unstable photon surface generalizes the usual photon sphere, which is relevant to a black hole shadow, whereas a stable one generalizes the anti-photon sphere, which is irrelevant. In Appendix~\ref{sec:psf-stability}, we show that our photon sphere is actually an unstable photon surface and is therefore the photon sphere relevant to the black hole shadow. \subsection{Wandering set} In the Schwarzschild spacetime, a null geodesic on the photon sphere is a circular orbit: it comes from the past timelike infinity and goes to the future timelike infinity. This means that although the geodesics on the photon sphere are null, they neither fall into the black hole nor escape to the null infinity. We call such a null geodesic a `neutral' null geodesic. Since the generator of the event horizon is also a neutral geodesic, to exclude it, Siino defines {\it the wandering null geodesic} as follows: \begin{definition}[\cite{siino_2019,siino_2021}] A future (past) wandering null geodesic from $p$ is a future (past) complete null geodesic with an infinite number of conjugate points starting from $p$ in the future (past) direction. A totally wandering null geodesic is a future and past complete null geodesic with an infinite number of conjugate points in both the future and past directions. \label{wandering_geodesic} \end{definition} \noindent The set of the totally wandering null geodesics is called a {\itshape wandering set}, and it is a generalization of the Schwarzschild photon sphere. For the Vaidya spacetime discussed in this paper, first, according to the Penrose diagram in Fig.~\ref{fig:ps-penrose}, the null generators of the dynamical photon sphere are complete. Next, let us consider two null generators of the photon sphere starting from the north pole. We assume that these two geodesics have slightly different azimuthal angles. When one of the geodesics reaches the south pole, the other geodesic also reaches the same point due to the spherical symmetry. Repeating this argument, we find that these two geodesics intersect infinitely many times, so there is an infinite number of conjugate points. To make this intuitive explanation precise, we consider the future directed null geodesic, $k^\mu=dx^\mu/d\lambda$, asymptoting from $r=3M_{1}$ to $r=3M_{2}$ obtained in the previous sections. We here do not restrict the null geodesic motion to the equatorial plane, $\theta=\pi/2$.
In the future static region, which is described by the Schwarzschild metric, \begin{align} ds_2^2= -f_2(r) dt_2^2 +\frac{dr^2}{f_2(r)}+r^2(d\theta^2+\sin^2 \theta d\phi^2), \end{align} where $f_{2}(r)=1-2M_2/r$ and $dt_2= dv-f_2^{-1}(r)dr$, the conserved energy $E_2$, the conserved angular momentum $L$, and the Carter constant $Q$ are given by \begin{align} E_2=f_2(r)\frac{dt_2}{d\lambda}, \quad L=r^2 \sin^2 \theta \frac{d\phi}{d\lambda}, \quad Q=r^4 \left(\frac{d\theta}{d\lambda}\right)^2+ L^2 \cot^2 \theta. \label{eq:conservation_ene_ang_cater} \end{align} From the null condition, we have \begin{align} \frac{d r}{d \lambda}= E_2 \sqrt{1-\frac{ f_{2}(r)}{r^{2}} \frac{Q+L^2}{E_2^2} }. \end{align} Since the null geodesic asymptotes to the spherical photon orbit with $r=3M_{2}$, we have $(Q+L^2)/E_2^2 = 27 M_2^2$, and the polar angle $\theta$ varies in the range $\theta_{\rm min} \le \theta \le \theta_{\rm max}$, where \begin{align} \theta_{\rm min}=\arctan\left( \frac{|L|}{ \sqrt{Q} } \right) \quad {\rm and} \quad \theta_{\rm max}=\pi -\theta_{\min}. \end{align} Hence, the expansion $\tilde{\Theta}$ of the null congruence consisting of nearby null geodesics with the same $E_2$, $L$, and $Q$ is given by \begin{align} \tilde{\Theta} =k^{\mu}_{~;\mu} =-\frac{ E_2}{r^2} \frac{2r^2 +6M_2r -9M_2^2 }{ \sqrt{r(r+6M_2)}} + \epsilon_{\theta} \frac{ 27M_2^2 E_2^2 }{r^2 \sqrt{ Q(\tan^2 \theta - \tan^2 \theta_{\mr{max,min}}) } }, \end{align} where $\epsilon_{\theta}=\pm 1$ according to the direction of motion in $\theta$. Since the polar angle $\theta$ repeatedly takes the values $\theta_{\rm min}$ and $\theta_{\rm max}$ in a finite interval of the affine parameter, the expansion $\tilde{\Theta}$ repeatedly becomes singular.\footnote{ The case of $L=0$ corresponds to the intuitive explanation. } \footnote{ Since we choose the null congruence with the specific conserved quantities in Eq.~(\ref{eq:conservation_ene_ang_cater}), the expansion becomes singular at $\theta=\theta_{\rm min},~\theta_{\rm max}$, but due to the spherical symmetry, there exists a congruence whose expansion becomes singular at $\theta=\theta_{0},~\pi-\theta_{0}$ for any $\theta_0$. } This means that the future directed orbit asymptoting from $r=3M_{1}$ to $r=3M_{2}$ is a future wandering null geodesic. The same conclusion holds in the past direction. Thus, the dynamical photon sphere derived in this paper is a wandering set. \subsection{Dynamically transversely trapping surfaces} Yoshino, Izumi, Shiromizu, and Tomikawa introduced the transversely trapping surface in static and stationary spacetimes as a generalization of the static photon surface by using local quantities~\cite{yoshino_tts}. Further, they defined the dynamically transversely trapping surface (DTTS) as a concept applicable to dynamical spacetimes~\cite{yoshino_dtts}. The definition of the dynamically transversely trapping surface is given as follows: \begin{definition}[\cite{yoshino_dtts}] Suppose $\Sigma$ to be a smooth spacelike hypersurface of a spacetime $\mathcal{M}$.
A closed orientable two-dimensional surface $\sigma_0$ in $\Sigma$ is a dynamically transversely trapping surface if and only if there exists a timelike hypersurface $S$ in $\mathcal{M}$ that intersects $\Sigma$ precisely at $\sigma_0$ and satisfies the following three conditions at arbitrary points on $\sigma_0$: \begin{align} \bar{k} &=0, \\ \max \left( \bar{K}_{a b} k^{a} k^{b} \right) & = 0 , \\ ^{(3)}\bar{\mathcal{L}}_{\bar{n}} \bar{k} &\leq 0, \end{align} where $\bar{k}$ is the trace of the extrinsic curvature of $\sigma_0$ in the surface $S$, $\bar{K}_{a b}$ is the extrinsic curvature of $S$, $k^a$ are arbitrary future-directed null vectors tangent to $S$, $\bar{n}^a$ is the future-directed unit normal in $S$, and $^{(3)}\bar{\mathcal{L}}_{\bar{n}}$ is a Lie derivative in $S$. The quantity $^{(3)}\bar{\mathcal{L}}_{\bar{n}}\bar{k}$ is evaluated with a time coordinate in $S$ whose lapse function is constant on $\sigma_0$. \label{DTTS} \end{definition} \noindent The region in which DTTSs exist is said to be a {\itshape dynamically transversely trapping region}. If the outer boundary of a dynamically transversely trapping region satisfies the condition $^{(3)}\bar{\mathcal{L}}_{\bar{n}} \bar{k} = 0$, then it is said to be a {\itshape marginally DTTS}, which is a generalization of the Schwarzschild photon sphere. To discuss whether the dynamical photon sphere in this paper is a marginally DTTS or not, we consider a null geodesic on the equatorial plane in the static regions, which are described by Schwarzschild metrics with different masses: \begin{align} ds^{2}_{i}&= -f_{i}(r) d t_i^{2}+\frac{d r^{2}}{f_{i} (r)}+r^{2}(d \theta^{2}+\sin ^{2} \theta d \phi^{2}), \end{align} where $f_i(r) = 1-2M_i/r~ (i=1,2)$ and $dt_i = dv - f_i^{-1}dr$. In this case, we have locally conserved energies and a globally conserved angular momentum, Eq.~(\ref{eq:energy-angular-momentum}): \begin{align} E_i &= f_{i}(r) \frac{dt_i}{d\lambda}, \quad L = r^{2} \frac{d\phi}{d\lambda} = E_{i} b_{i}. \end{align} From the null condition, we have \begin{align} \frac{d r}{d t_i}=\pm f_{i}(r) \sqrt{1-\frac{b_{i}^{2}}{r^{2}} f_{i}(r)}. \end{align} If we obtain a solution of a radial geodesic $r(t_i)$, a photon surface can be constructed due to the spherical symmetry. This photon surface is denoted by $S$. Then, the induced metric on the photon surface $S$ is given by \begin{align} d s^{2}_{i}=-\alpha_{i}^{2} d t_i^{2}+r^{2}(d \theta^{2}+\sin ^{2} \theta d \phi^{2}), \end{align} where the lapse function is given by \begin{align} \alpha_{i}=\frac{b_{i}}{r} f_{i}(r). \end{align} We take a spacelike hypersurface $\Sigma_{t_i}$ such that the time coordinate is constant, and the intersection of $\Sigma_{t_i}$ and $S$ is written as $\sigma_{t_i}$, which is a closed two-dimensional surface. The future-directed unit normal to $\sigma_{t_i}$ in the hypersurface $S$ and the outward spacelike unit normal to $S$ are denoted by $\bar{n}^a$ and $\bar{r}^a$, respectively. Then, $\bar{k}$ and $ ^{(3)}\bar{\mathcal{L}}_{\bar{n}} \bar{k}$ are given by \begin{align} \bar{k}=\frac{2}{b_{i} f_{i}(r)} \frac{d r}{d t_i} \quad \text{and} \quad ^{(3)}\bar{\mathcal{L}}_{\bar{n}} \bar{k} =\frac{2}{r^{2}}\left(1-\frac{3 M_{i}}{r}\right). \end{align} If we choose the impact parameter of null geodesics from $\sigma_{t_i}$ such that $\bar{k}=0$, i.e., $b_i^2=r^2/f_i(r)$, then the first condition in Definition~\ref{DTTS} is satisfied.
Since the hypersurface $S$ is a photon surface, there is a null geodesic $\gamma\colon (-\epsilon,\epsilon) \to M$ of $(M, {g})$ such that $\dot{\gamma}(0) ={k},~|\gamma|\subset S$. Hence, we obtain \begin{align} \bar{K}_{a b} k^{a} k^{b}=\bar{r}_{a; b} k^{a} k^{b}=(\bar{r}_{a} k^{a})_{;b}k^{b}=0, \end{align} along $\gamma$, and then the second condition is also satisfied. By contrast, whether the third condition is satisfied or not depends on the radius of $\sigma_{t_i}$. The radius of the dynamical photon sphere in this paper is larger than $3M_1$ in the past region and less than $3M_2$ in the future region. Hence, the time slice of the dynamical photon sphere is a DTTS in the future region, while it is not a DTTS in the past region. Since the marginally DTTS is located at $r=3M_2$ in the future region and $r=3M_1$ in the past region, the time slice of the dynamical photon sphere is not a marginally DTTS if a spacelike hypersurface $\Sigma_{t_i}$ is taken such that the time coordinate $t_i$ is constant. The dynamical photon sphere in this paper clearly depends on the past and future masses, and hence it is determined by a global geometrical structure, while the DTTS is defined by local geometrical quantities. Therefore, as with the relation between the event horizon and the apparent horizon, the dynamical photon sphere in this paper does not necessarily coincide with a DTTS in general. Note that, since the definition of the DTTS depends on a choice of a hypersurface, there may exist a hypersurface on which the dynamical photon sphere in this paper is a DTTS. \afterpage{\clearpage} \newpage \section{Summary and Discussion} \label{sec:summary} We have investigated dynamical photon spheres that shape the black hole shadows in the Vaidya spacetime from the causal point of view. The spacetime has been assumed to be static in the past and future time domains, i.e., isometric to the Schwarzschild spacetime with the masses $M_1$ and $M_2$, respectively. As a result, we have obtained the photon spheres as hypersurfaces generated by null geodesics that asymptote to $r\to3M_1+0$ and $r\to3M_2-0$ in the past and future, respectively. Remarkably, the radii of the photon spheres deviate from the Schwarzschild photon sphere radii $3M_1$ and $3M_2$ even in the static domains. \par We have also derived the photon sphere analytically in the case where the evolution of the black hole is linear in the time coordinate $v$ by using the self-similarity of the spacetime there. The result shows that the photon sphere radius also deviates from the maximum of the conformal effective potential $U(C,L;R)$, as opposed to the entirely self-similar case in Ref.~\cite{Solanki_2022}. In the weak accretion limit, $\delta M/M_1\ll1$, the deviations of the photon sphere radius from $3M_1$ and $3M_2$ have been derived as $r|_{v=v_1}=3M_1(1+\frac{3+\sqrt{3}}{6}\ln (M_2/M_1))$ and $r|_{v=v_2}=3M_2(1-\frac{3-\sqrt{3}}{6}\ln (M_2/M_1))$, respectively. In the shell accretion limit, the dynamical photon sphere is also located at a radius between $3M_1$ and $3 M_2$. Remarkably, in the weak accretion limit of the shell case, i.e., $\mu\to\infty$ and $v_2-v_1\to0$ but $\delta M/M_1\ll1$, the expression $r_0=r|_{v=v_1}=3M_1(1+\frac{3+\sqrt{3}}{6}\ln (M_2/M_1))=r|_{v=v_2}=3M_2(1-\frac{3-\sqrt{3}}{6}\ln (M_2/M_1))$ holds. Therefore, we conclude that a dynamical photon sphere shaping a black hole shadow is not determined by local geometry only. Rather, it depends on global information about the spacetime if one adopts our definitions of a photon sphere and a shadow.
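These quantitative statements are easy to check numerically. The following minimal Python sketch (illustrative only; it transcribes Eqs.~\eqref{eq:linear-epsilon} and~\eqref{eq:exactsolr0}, the function names are ours, and nothing beyond the standard \texttt{math} module is assumed) confirms that the exact shell-accretion radius $r_0$ and the weak-accretion expressions agree up to $\mathcal{O}(\delta M^2)$:
\begin{verbatim}
import math

def eps_linear(M1, M2):
    """Weak-accretion deviations eps1, eps2 of Eq. (linear-epsilon)."""
    lg = math.log(M2 / M1)
    return (3 + math.sqrt(3)) / 6 * lg, (3 - math.sqrt(3)) / 6 * lg

def r0_shell(M1, M2):
    """Exact shell-accretion crossing radius r_0 of Eq. (exactsolr0)."""
    s = math.sqrt(M1**2 + 6*M1*M2 + M2**2)
    a1 = 2**(1/3) * (((M1 + M2)**2 + (M2 - M1)*s) / (M1*M2))**(1/3)
    a2 = math.sqrt(1 + 3*M1*M2*(4 + a1**2) / ((M1 + M2)**2 * a1))
    return ((M1 + M2)*(1 + a2)
            + math.sqrt(2*(M1 + M2)**2*(1 + a2)/a2
                        - 3*M1*M2*(4 + a1**2)/a1)) / 2

M1, M2 = 1.0, 1.01                   # weak accretion: delta M / M1 = 0.01
e1, e2 = eps_linear(M1, M2)
print(r0_shell(M1, M2))              # about 3.0236
print(3*M1*(1 + e1), 3*M2*(1 - e2))  # both about 3.0236: O(dM^2) agreement
\end{verbatim}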
\par We have discussed the relation between our photon sphere and several notions that generalize a photon sphere. We have concluded that our photon sphere is an unstable photon surface~\cite{claudel,koga_2019} and a wandering set~\cite{siino_2019,siino_2021}. Concerning the DTTS~\cite{yoshino_dtts}, we have not found a coincidence with our photon sphere; this, however, is to be expected from the difference in the viewpoints of the definitions. \par It is still challenging to propose, for generic dynamical spacetimes, a generalized definition of a photon sphere as a structure that shapes a black hole shadow. One approach to this problem is to gather many examples in specific cases and to study their essential points. Then one can check whether an existing generalization of a photon sphere is consistent with them, or define a new notion so that it is consistent with them. Our numerical and analytical results would be good examples in a spherically symmetric spacetime whose dynamics is clearly understood in a physical sense. As a further investigation, it is important to study a dynamical photon sphere in a non-spherically symmetric spacetime. For example, the photon spheres of the Kastor-Traschen spacetime~\cite{Kastor:1992nn}, a spacetime of two colliding black holes, and their relation to the shadows investigated in Refs.~\cite{Okabayashi_2020, Yumoto:2012kz} are interesting. \par Let us apply our results, Eqs.~\eqref{eq:gamma1-condition}, \eqref{eq:gamma2-condition}, and~\eqref{eq:linear-epsilon}, to the observation of M87. According to Ref.~\cite{Kuo_2014}, the current mass and the accretion rate are estimated as $M_1=3\times 10^9 M_{\odot}$ and $\mu=10^{-3}M_{\odot}\, \mr{year}^{-1}$. From $M_2=\mu (v_2-v_1)+M_1$, the radius of the photon sphere after accretion for the observation period $v_2-v_1$ becomes $r|_{v=v_2} \simeq 3M_1(1+(3+\sqrt{3})\delta/6)$, where $\delta=\mu(v_2-v_1)/M_1=0.33\times10^{-12}\,(v_2-v_1)/(1\,\mr{year})$. After a few decades, the photon sphere radius evolves only by a fractional amount $\sim 10^{-11}$. For black holes with much more efficient accretion, we might be able to observe the time evolution. \begin{acknowledgments} The authors are grateful to T. Harada, T. Ishii, K. Nakao, M. Siino, K. Toma, C. Yoo and H. Yoshino for their fruitful discussions. This work was supported by JSPS KAKENHI Grant Nos. JP21K20367 (Y.K.), JP21J15676 (K.O.), and 20H04746 (M.K.) from the Japan Society for the Promotion of Science. \end{acknowledgments} \afterpage{\clearpage} \newpage
{ "timestamp": "2022-03-10T02:26:31", "yymm": "2202", "arxiv_id": "2202.00201", "language": "en", "url": "https://arxiv.org/abs/2202.00201" }
\section{The greedy underapproximation algorithm} An \emph{Egyptian fraction}\index{Egyptian fraction} is a fraction of the form $1/x$, where $x$ is a positive integer. Let $\theta \in (0,1]$. A finite sequence $(x_i)_{i=1}^n$ of integers is an \emph{$n$-term Egyptian underapproximation sequence}\index{Egyptian underapproximation} of $\theta$ if \[ 2 \leq x_1 \leq x_2 \leq \cdots \leq x_n \] and \[ \sum_{i=1}^n \frac{1}{x_i} < \theta. \] For example, $(2,3,7,43)$ is a 4-term underapproximation sequence of 1. If $x$ is an integer such that $x > n/\theta$, then \[ \sum_{i=1}^n \frac{1}{x+i} < \theta \] and $(x+i)_{i=1}^n$ is an $n$-term Egyptian underapproximation sequence of $\theta$. An infinite sequence $(x_i)_{i=1}^{\infty}$ of integers is an \emph{infinite Egyptian underapproximation sequence} of $\theta$ if the finite sequence $(x_i)_{i=1}^n$ is an $n$-term Egyptian underapproximation sequence of $\theta$ for all $n \geq 1$. For all $\theta \in (0,1]$, there is a unique positive integer $G(\theta) = a$ such that \[ a \geq 2 \qqand \frac{1}{a} < \theta \leq \frac{1}{a-1}. \] Thus, $G(\theta)$ is the smallest positive integer such that the Egyptian fraction $1/G(\theta)$ underapproximates $\theta$. Equivalently, \[ a \leq \frac{1}{\theta} + 1 < a+1 \] and so\footnote{ The \emph{greatest integer function}\index{greatest integer function} of the real number $w$, also called the \emph{floor} of $w$, is the unique integer $\ell$ such that $\ell \leq w < \ell+1$. We write $\lfloor w \rfloor = \ell$. The \emph{ceiling} of $w$, denoted $\lceil w \rceil$, is the unique integer $m$ such that $m \geq w > m-1$. Define the interval $(t_1, t_2] = \{ t \in \ensuremath{\mathbf R}: t_1 < t \leq t_2 \}$.} \[ G(\theta) = \left\lfloor \frac{1}{\theta} \right\rfloor + 1. \] For all $\theta \in (0,1]$, the \emph{greedy underapproximation algorithm}\index{greedy underapproximation algorithm} \index{underapproximation algorithm} applied to $\theta$ constructs an infinite sequence of integers $(a_i)_{i=1}^{\infty}$ as follows: \begin{equation} \label{Egyptian:underapproxIneqality-0} a_1 = G(\theta) \geq 2 \end{equation} and, for all $i \geq 1$ and integers $a_1,a_2,\ldots, a_i$, \[ a_{i+1} = G\left( \theta - \sum_{j =1}^i \frac{1}{a_j} \right). \] Thus, \begin{equation} \label{Egyptian:underapproxIneqality-1} \frac{1}{a_{i+1}} < \theta - \sum_{j=1}^i \frac{1}{a_j} \leq \frac{1}{a_{i+1} -1}. \end{equation} Equivalently, \[ \sum_{j =1}^{i+1} \frac{1}{a_j} < \theta \leq \sum_{j =1}^{i} \frac{1}{a_j} + \frac{1}{a_{i+1} - 1}. \] We call $(a_i)_{i=1}^{\infty}$ the \emph{infinite greedy underapproximation sequence}\index{greedy underapproximation sequence!infinite} of $\theta$ and $(a_i)_{i=1}^n$ the \emph{$n$-term greedy underapproximation sequence}\index{greedy underapproximation sequence!$n$-term} of $\theta$. The rational number $\sum_{i=1}^n 1/a_i$ is the \emph{$n$-term greedy underapproximation} of $\theta$. Let $ i \geq 1$. Inequality~\eqref{Egyptian:underapproxIneqality-1} implies that \begin{align*} \frac{1}{a_{i+1}} & < \theta - \sum_{j=1}^i \frac{1}{a_j} = \left( \theta - \sum_{j=1}^{i-1} \frac{1}{a_j} \right) - \frac{1}{a_{i} } \\ & \leq \frac{1}{a_{i} -1} - \frac{1}{a_{i} } = \frac{1}{a_{i} (a_{i} -1)} \end{align*} and so \begin{equation} \label{Egyptian:underapproxIneqality-2} a_{i+1} \geq a_i^2 - a_i + 1.
\end{equation} It follows from~\eqref{Egyptian:underapproxIneqality-1} and~\eqref{Egyptian:underapproxIneqality-2} that $(a_i)_{i=1}^{\infty}$, the infinite greedy underapproximation sequence of $\theta$, is a strictly increasing sequence of positive integers and that \[ \sum_{i=1}^{\infty} \frac{1}{a_i} = \theta. \] Here is a classical example of Egyptian underapproximation. \emph{Sylvester's sequence}~\cite{sylv80} is the sequence of positive integers $(s_i)_{i=1}^{\infty}$ constructed recursively by the following rule: \begin{equation} \label{Egyptian:Sylvester} s_1 = 2 \qqand s_{i+1} = \prod_{j=1}^i s_j \ + 1 \end{equation} for all $i \geq 1$. We have \begin{align*} s_1 & = 2 \\ s_2 & = 3 \\ s_3 & = 7 \\ s_4 & = 43 \\ s_5 & = 1807 \\ s_6 & = 3263443 \\ s_7 & = 10650056950807 \\ s_8 & = 113423713055421844361000443 \\ s_9 & = 12864938683278671740537145998360961546653259485195807. \end{align*} Sylvester's sequence is sequence A000058 in the OEIS. By Corollary~\ref{Egyptian:corollary:Sylvester}, Sylvester's sequence $(s_i)_{i=1}^{\infty}$ is the infinite greedy underapproximation sequence of $\theta = 1$. The following theorem constructs a set of rational numbers whose infinite greedy underapproximation sequences generalize Sylvester's sequence. \begin{theorem} \label{Egyptian:theorem:pq-greedy-sequence} Let $\theta = p/q \in (0,1]$, where $p$ and $q$ are positive integers such that $p$ divides $q+1$, and let $(a_i)_{i=1}^{\infty}$ be the infinite greedy underapproximation sequence of $\theta$. Then \[ a_1 = \frac{q+1}{p} \] and, for all $k \geq 1$, \[ a_{k+1} = q\prod_{i=1}^k a_i + 1 \] and \[ \frac{p}{q} = \sum_{i=1}^k \frac{1}{a_i} + \frac{1}{q\prod_{i=1}^k a_i}. \] \end{theorem} \begin{proof} The proof is by induction on $k$. Let $q+1 = pt$. We have \[ \frac{1}{t} = \frac{p}{q+1} < \frac{p}{q} = \frac{p}{pt-1} \leq \frac{1}{t-1} \] and so \[ a_1 = G\left( \frac{p}{q} \right) = t = \frac{q+1}{p}. \] It follows that \[ \frac{p}{q} - \frac{1}{a_1} = \frac{p}{q} - \frac{1}{t} = \frac{1}{qt} = \frac{1}{qa_1} \] and so \[ a_2 = G\left( \frac{p}{q} - \frac{1}{a_1} \right) = qa_1 + 1. \] We obtain \[ \frac{p}{q} - \frac{1}{a_1} - \frac{1}{a_2} = \frac{1}{qa_1} - \frac{1}{qa_1 + 1} = \frac{1}{q a_1 (qa_1 + 1)} = \frac{1}{q a_1 a_2 } \] and so \[ a_3 = G\left( \frac{p}{q} - \frac{1}{a_1} - \frac{1}{a_2} \right) = qa_1 a_2 + 1. \] Let $k \geq 2$. If \[ \frac{p}{q} - \sum_{i=1}^k \frac{1}{a_i} = \frac{1}{q\prod_{i=1}^k a_i} \] then \[ a_{k+1} = G\left( \frac{p}{q} - \sum_{i=1}^k \frac{1}{a_i}\right) = q\prod_{i=1}^k a_i + 1 \] and \begin{align*} \frac{p}{q} - \sum_{i=1}^{k+1} \frac{1}{a_i} & = \frac{p}{q} - \sum_{i=1}^{k} \frac{1}{a_i} - \frac{1}{a_{k+1}} = \frac{1}{q\prod_{i=1}^k a_i} - \frac{1}{ q\prod_{i=1}^k a_i + 1} \\ & = \frac{1}{q\prod_{i=1}^k a_i \left( q\prod_{i=1}^k a_i + 1\right) } \\ & = \frac{1}{ q \prod_{i=1}^{k+1} a_i}. \end{align*} This completes the proof. \end{proof} \begin{corollary} \label{Egyptian:corollary:Sylvester} Sylvester's sequence is the infinite greedy underapproximation sequence for $\theta = 1$. \end{corollary} \section{A criterion for greedy underapproximation} \label{Egyptian:section:criterion} \begin{theorem} \label{Egyptian:theorem:greedy-n-term-condition} Let $(a_i)_{i=1}^n$ be a sequence of integers such that \[ a_1 \geq 2 \qqand a_{i+1} \geq a_i^2 - a_i + 1 \] for all $i = 1,\ldots, n-1$.
The sequence $(a_i)_{i=1}^n$ is the $n$-term greedy underapproximation sequence of the real number $\theta$ if and only if \begin{equation} \label{Egyptian:greedy-n-term-interval} \theta \in \left( \sum_{i=1}^n \frac{1}{a_i} , \ \sum_{i=1}^{n-1} \frac{1}{a_i} + \frac{1}{a_n -1} \right]. \end{equation} \end{theorem} \begin{proof} If $(a_i)_{i=1}^n$ is the $n$-term greedy underapproximation sequence of $\theta$, then \[ \frac{1}{a_n} < \theta - \sum_{i=1}^{n-1} \frac{1}{a_i} \leq \frac{1}{a_n-1} \] and so $\theta$ is in the interval~\eqref{Egyptian:greedy-n-term-interval}. To prove the converse, we observe that, for all $i = 1,\ldots, n-1$, the inequality $a_{i+1} \geq a_i^2 - a_i + 1$ implies that \[ \frac{1}{a_i} + \frac{1}{a_{i+1}-1} \leq \frac{1}{a_i-1}. \] It follows that, for all $k = 1, \ldots, n$, we have \begin{align*} \sum_{i=1}^{n-1} \frac{1}{a_i} + \frac{1}{a_n -1} & = \sum_{i=1}^{n-2} \frac{1}{a_i} + \frac{1}{a_{n -1}} + \frac{1}{a_n -1} \leq \sum_{i=1}^{n-2} \frac{1}{a_i} + \frac{1}{a_{n -1} -1} \\ & \leq \cdots \leq \sum_{i=1}^{k-1} \frac{1}{a_i} + \frac{1}{a_k-1} \end{align*} and so \[ \sum_{i=1}^{k} \frac{1}{a_i} \leq \sum_{i=1}^{n} \frac{1}{a_i} < \sum_{i=1}^{n-1} \frac{1}{a_i} + \frac{1}{a_n -1} \leq \sum_{i=1}^{k-1} \frac{1}{a_i} + \frac{1}{a_k-1}. \] If $\theta$ is in the interval~\eqref{Egyptian:greedy-n-term-interval}, then for all $k = 1, \ldots, n$ we have \[ \sum_{i=1}^{k} \frac{1}{a_i} < \theta \leq \sum_{i=1}^{k-1} \frac{1}{a_i} + \frac{1}{a_k-1}. \] Equivalently, \[ \frac{1}{a_k} < \theta - \sum_{i=1}^{k-1} \frac{1}{a_i} \leq \frac{1}{a_k-1} \] and \[ a_k = G\left( \theta - \sum_{i=1}^{k-1} \frac{1}{a_i} \right). \] Thus, $(a_i)_{i=1}^n$ is the $n$-term greedy underapproximation sequence of $\theta$. This completes the proof. \end{proof} \begin{corollary} \label{Egyptian:corollary:GUAsequence-1} Let $\theta \in (0,1]$. The pair of integers $(a_1,a_2)$ with $2 \leq a_1 \leq a_2$ is the 2-term greedy underapproximation sequence of $\theta$ if and only if $a_2 \geq a_1^2 - a_1 + 1$ and \[ \frac{1}{a_1} + \frac{1}{a_2} < \theta \leq \frac{1}{a_1} + \frac{1}{a_2-1}. \] \end{corollary} \begin{corollary} \label{Egyptian:corollary:TheConverseAlgorithm} Let $(a_i)_{i=1}^{\infty}$ be a sequence of integers such that \[ a_1 \geq 2 \qqand a_{i+1} \geq a_i^2 - a_i + 1 \] for all $i \geq 1$. The infinite series \[ \sum_{i=1}^{\infty} \frac{1}{a_i} \] converges to a number $\theta \in (0,1]$, and $(a_i)_{i=1}^{\infty}$ is the infinite greedy underapproximation sequence of $\theta$. \end{corollary} \section{Best Egyptian approximation} \label{Egyptian:section:BestApproximation} Let $E_n$ be the set of all sequences $(x_i)_{i=1}^n$ of integers such that \[ 2 \leq x_1 \leq x_2 \leq \cdots \leq x_n. \] For $\theta \in (0,1]$, let $U_n(\theta)$ be the set of all $n$-term Egyptian underapproximation sequences of $\theta$. Thus, \[ U_n(\theta) = \left\{ (x_i)_{i=1}^n \in E_n: \sum_{i=1}^n \frac{1}{x_i} < \theta \right\}. \] Let \[ u_n(\theta) = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n(\theta) \right\}. \] We call $u_n(\theta)$ the \emph{best $n$-term Egyptian underapproximation of $\theta$}. If $(x_i)_{i=1}^n \in U_n(\theta)$, then $\sum_{i=1}^n 1/x_i < \theta$ and so $u_n(\theta) \leq \theta$. We shall prove (Theorem~\ref{Egyptian:theorem:BestUnderapproximation}) that there is a sequence $(b_i)_{i=1}^n \in U_n(\theta)$ such that $u_n(\theta) = \sum_{i=1}^n 1/b_i$ and so $u_n(\theta)$ is a rational number that is strictly less than $\theta$.
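The greedy algorithm and the best 2-term underapproximation are easy to experiment with computationally. The following minimal Python sketch (an illustration only; the helper names are ours, nothing beyond the standard \texttt{fractions} module is assumed, and the brute-force search terminates using the bound $1/x_1+1/x_2\le 2/x_1$) computes greedy sequences with exact rational arithmetic and finds $u_2(\theta)$ by exhaustive search:
\begin{verbatim}
from fractions import Fraction

def G(theta):
    """Smallest integer a >= 2 with 1/a < theta <= 1/(a-1)."""
    return 1 // theta + 1          # floor(1/theta) + 1; exact for Fraction input

def greedy(theta, n):
    """n-term greedy underapproximation sequence of theta."""
    seq, rem = [], Fraction(theta)
    for _ in range(n):
        a = G(rem)
        seq.append(a)
        rem -= Fraction(1, a)
    return seq

def u2(theta):
    """Brute-force best 2-term Egyptian underapproximation u_2(theta)."""
    best, arg, x1 = Fraction(0), None, G(theta)   # need 1/x1 < theta
    while Fraction(2, x1) > best:                 # 1/x1 + 1/x2 <= 2/x1: cutoff
        x2 = max(x1, G(theta - Fraction(1, x1)))  # best feasible second term
        s = Fraction(1, x1) + Fraction(1, x2)
        if s > best:
            best, arg = s, (x1, x2)
        x1 += 1
    return arg, best

print(greedy(Fraction(1), 5))      # [2, 3, 7, 43, 1807]: Sylvester's sequence
print(u2(Fraction(31, 58)))        # ((2, 30), Fraction(8, 15))
\end{verbatim}
For $\theta=1$ the greedy sequence reproduces Sylvester's sequence, and for $\theta=31/58$ the greedy pair $(2,30)$ already attains $u_2(\theta)=8/15$; examples of this kind are constructed below.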
We shall also construct examples to prove that the $n$-term greedy underapproximation of $\theta$ is not necessarily the best $n$-term Egyptian underapproximation, and that the sequence attaining the best $n$-term Egyptian underapproximation of $\theta$ is not necessarily unique. \begin{theorem} \label{Egyptian:theorem:BestUnderapproximation} Let $\theta \in (0,1]$. For all $n \geq 1$, there is a sequence $ (b_i)_{i=1}^n \in U_n(\theta)$ such that \[ u_n(\theta) = \sum_{i=1}^n \frac{1}{b_i} < \theta. \] Thus, the best $n$-term underapproximation $u_n(\theta)$ is rational. \end{theorem} \begin{proof} For $n=1$, we have \[ U_1(\theta) = \left\{(x_1) : x_1 \geq a_1 = G(\theta) \right\}. \] Setting $b_1 = a_1$ gives $u_1(\theta) = 1/a_1 = 1/b_1 < \theta$. Let $n \geq 2$. Choose an $n$-tuple $\left( c^{(1)}_i \right)_{i=1}^n \in U_n(\theta)$. We have \[ 2 \leq c^{(1)}_1 \leq c^{(1)}_2 \leq \cdots \leq c^{(1)}_n \qqand \sum_{i=1}^n \frac{1}{c^{(1)}_i} < \theta. \] If $\left( x_i \right)_{i=1}^n \in U_n(\theta)$ and \[ x_1 \geq nc^{(1)}_1 = x_1^* \] then the inequality $x_1 \leq x_2 \leq \cdots \leq x_n$ implies that \[ \sum_{i=1}^n \frac{1}{x_i} \leq \frac{n}{x_1} \leq \frac{1}{c^{(1)}_1} < \sum_{i=1}^n \frac{1}{c^{(1)}_i} < \theta. \] Thus, $\left( c^{(1)}_i \right)_{i=1}^n$ gives a larger $n$-term Egyptian underapproximation of $\theta$ than $\left( x_i \right)_{i=1}^n$. Let \[ U_n^{(1)}(\theta) = \left\{ \left( x_i \right)_{i=1}^n \in U_n(\theta) : x_1 < x_1^* \right\}. \] We have \begin{align*} u_n(\theta) & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n(\theta) \right\} \\ & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n(\theta) \text{ and } x_1 < x_1^* \right\} \\ & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(1)}(\theta) \right\}. \end{align*} Let $k \in \{1,\ldots, n-1\}$ and let $x_1^*,\ldots, x_k^*$ be positive integers such that \[ u_n(\theta) = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : \left( x_i \right)_{i=1}^n \in U_n(\theta) \text{ and } x_i < x_i^* \text{ for all } i=1,\ldots, k \right\}. \] Let \[ U_n^{(k)}(\theta) = \left\{ \left( x_i \right)_{i=1}^n \in U_n(\theta) : x_i < x_i^* \text{ for all } i=1,\ldots, k \right\}. \] Thus, \[ u_n(\theta) = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(k)}(\theta) \right\}. \] Let $\ensuremath{ \mathcal Y}(k,n)$ be the finite set of all $k$-tuples of positive integers $\ensuremath{ \mathbf y} = \left( y_i \right)_{i=1}^k$ such that \begin{enumerate} \item[(i)] $y_i < x_i^*$ for all $i=1,\ldots, k$, and \item[(ii)] there exists an $n$-tuple $\left( x_i \right)_{i=1}^n \in U_n^{(k)}(\theta)$ such that $x_i = y_i$ for all $i = 1,\ldots, k$. \end{enumerate} For each $k$-tuple $\ensuremath{ \mathbf y} = \left( y_i \right)_{i=1}^k \in \ensuremath{ \mathcal Y}(k,n)$, let $U_n^{ (\ensuremath{ \mathbf y})}(\theta)$ be the nonempty set of all $n$-tuples $\left( x_i \right)_{i=1}^n \in U_n^{(k)}(\theta)$ such that $x_i = y_i$ for all $i = 1,\ldots, k$. We have \[ U_n^{(k)}(\theta) = \bigcup_{\ensuremath{ \mathbf y} \in \ensuremath{ \mathcal Y}(k,n)} U_n^{ (\ensuremath{ \mathbf y})}(\theta). \] For all $\ensuremath{ \mathbf y} \in \ensuremath{ \mathcal Y}(k,n)$, choose an $n$-tuple $\left(c^{(\ensuremath{ \mathbf y})}_i \right)_{i=1}^n \in U_n^{(\ensuremath{ \mathbf y})}(\theta)$.
If $\left( x_i \right)_{i=1}^n \in U_n^{(\ensuremath{ \mathbf y})}(\theta)$ and \[ x_{k+1} \geq (n-k)c_{k+1}^{(\ensuremath{ \mathbf y})} \] then $x_i = y_i = c^{(\ensuremath{ \mathbf y})}_i $ for all $i=1,\ldots, k$ and \begin{align*} \sum_{i=1}^n \frac{1}{x_i} & = \sum_{i=1}^k \frac{1}{ c^{(\ensuremath{ \mathbf y})}_i } + \sum_{i=k+1}^n \frac{1}{x_i} \\ & \leq \sum_{i=1}^k \frac{1}{ c^{(\ensuremath{ \mathbf y})}_i } + \frac{n-k}{x_{k+1}} \\ & \leq \sum_{i=1}^{k+1} \frac{1}{ c^{(\ensuremath{ \mathbf y})}_i } \leq \sum_{i=1}^{n} \frac{1}{ c^{(\ensuremath{ \mathbf y})}_i } \\ & < \theta \end{align*} and so the $n$-term Egyptian underapproximation of $\theta$ by $\left( x_i \right)_{i=1}^n$ is no larger than the $n$-term Egyptian underapproximation of $\theta$ by $\left(c^{(\ensuremath{ \mathbf y})}_i \right)_{i=1}^n$. Therefore, \begin{align*} \sup & \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(\ensuremath{ \mathbf y})}(\theta) \right\} \\ & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(\ensuremath{ \mathbf y})}(\theta) \text{ and } x_{k+1} < (n-k)c_{k+1}^{(\ensuremath{ \mathbf y})}\right\}. \end{align*} Let \[ x_{k+1}^* = \max\left\{(n-k)c_{k+1}^{(\ensuremath{ \mathbf y})} : \ensuremath{ \mathbf y} \in \ensuremath{ \mathcal Y}(k,n) \right\}. \] It follows that \begin{align*} \sup & \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(k)}(\theta) \right\} \\ & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(k)}(\theta) \text{ and } x_i < x_i^* \text{ for all } i = 1,\ldots, k+1\right\}. \end{align*} Continuing inductively, we obtain positive integers $x_1^*,\ldots, x_n^*$ such that \begin{align*} u_n(\theta) & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n(\theta) \right\} \\ & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n(\theta) \text{ and } x_i < x_i^* \text{ for all } i = 1,\ldots, n \right\} \\ & = \sup \left\{ \sum_{i=1}^n \frac{1}{x_i} : (x_i)_{i=1}^n \in U_n^{(n)}(\theta) \right\} \end{align*} where \[ U_n^{(n)}(\theta) = \left\{ (x_i)_{i=1}^n \in U_n(\theta) : x_i < x_i^* \text{ for all } i = 1,\ldots, n \right\}. \] The set $U_n^{(n)}(\theta)$ is finite and so there exists $ (b_i)_{i=1}^n \in U_n^{(n)}(\theta) \subseteq U_n(\theta)$ such that \[ u_n(\theta) = \sum_{i=1}^n \frac{1}{b_i} < \theta. \] This completes the proof. \end{proof} \section{When greedy is best} \label{Egyptian:section:bb} It had been conjectured by Miller~\cite{mill19} and Kellogg~\cite{kell21} and then proved by Curtiss~\cite{curt22} and Takenouchi~\cite{take21} that, for every positive integer $n$, the $n$-tuple of Sylvester numbers $(s_i)_{i=1}^n$ is the unique best $n$-term Egyptian fraction underapproximation of 1. Equivalently, if $(x_1,\ldots, x_n) \in U_n(1)$ and \[ \sum_{i=1}^n \frac{1}{s_i} \leq \sum_{i=1}^n \frac{1}{x_i} < 1 \] then $x_i = s_i$ for all $i=1,\ldots, n$. There is also a recent proof by Soundararajan~\cite{soun05}. In this section we generalize this result. We construct an infinite set of rational numbers whose infinite greedy underapproximation sequences can be explicitly computed, and for which, for every $n$, the $n$-term greedy underapproximation sequence is the unique best $n$-term underapproximation by Egyptian fractions. We use the method of Soundararajan~\cite{soun05}, which is based on the following inequality.
\begin{theorem} \label{Egyptian:theorem:MuirheadCorollary} If $(x_i)_{i=m+1}^n$ and $(a_i)_{i=m+1}^n$ are increasing sequences of positive numbers such that $(x_i)_{i=m+1}^n \neq (a_i)_{i=m+1}^n$ and \[ \prod_{i=m+1}^{m+k} a_i \leq \prod_{i=m+1}^{m+k} x_i \] for all $k = 1,\ldots, n-m$, then \[ \sum_{i= m+1}^n \frac{1}{x_i} < \sum_{i= m+1}^n \frac{1}{a_i} . \] \end{theorem} \begin{proof} This inequality is a corollary of Muirhead's inequality (see Nathanson~\cite{nath22}). A nice direct proof due to Ambro and Barc\u{a}u~\cite{ambr-barc15} is given in the Appendix. \end{proof} \begin{theorem} \label{Egyptian:theorem:pq-greedy} Let $\theta = p/q \in (0,1]$, where $p$ and $q$ are positive integers such that $p$ divides $q+1$, and let $(a_i)_{i=1}^{\infty}$ be the infinite greedy underapproximation sequence of $\theta$. For every positive integer $n$, if $(x_i)_{i=1}^n$ is an $n$-term Egyptian underapproximation sequence of $\theta$ such that \begin{equation} \label{Egyptian:pq-greedy} \sum_{i=1}^n \frac{1}{a_i} \leq \sum_{i=1}^n \frac{1}{x_i} < \frac{p}{q} \end{equation} then $x_i = a_i$ for all $i = 1,\ldots, n$. \end{theorem} \begin{proof} The proof is by induction on $n$. For $n=1$, the greedy algorithm gives \[ \frac{1}{a_1} \leq \frac{1}{x_1} < \theta \leq \frac{1}{a_1 -1} \] and so $x_1 = a_1$. Thus, the Theorem is true for $n=1$. Let $n \geq 2$ and assume that the Theorem is true for all increasing sequences $(x_i)_{i=1}^{m}$ with $m < n$. Let $(x_i)_{i=1}^n$ be an increasing sequence that satisfies~\eqref{Egyptian:pq-greedy}. Inequality~\eqref{Egyptian:pq-greedy} and Theorem~\ref{Egyptian:theorem:pq-greedy-sequence} give \[ 0 < \frac{p}{q} - \sum_{i=1}^n \frac{1}{x_i} \leq \frac{p}{q} - \sum_{i=1}^n \frac{1}{a_i} = \frac{1}{q\prod_{i=1}^n a_i}. \] A common denominator of the $n+1$ fractions $p/q$, $1/x_1$, \ldots, $1/x_n$ is $q\prod_{i=1}^n x_i$, and so there is a positive integer $r$ such that \[ 0 < \frac{1}{q\prod_{i=1}^n x_i} \leq \frac{r}{q\prod_{i=1}^n x_i} = \frac{p}{q} - \sum_{i=1}^n \frac{1}{x_i} \leq \frac{1}{q\prod_{i=1}^n a_i}. \] This implies \[ \prod_{i=1}^n a_i \leq \prod_{i=1}^n x_i. \] Let $m$ be the largest integer $\leq n-1$ such that \begin{equation} \label{Egyptian:m+1-product} \prod_{i=m+1}^n a_i \leq \prod_{i=m+1}^n x_i. \end{equation} We shall prove that \begin{equation} \label{Egyptian:j-product} \prod_{i=m+1}^{m+j} a_i \leq \prod_{i=m+1}^{m+j} x_i \end{equation} for all $j \in \{1,\ldots, n-m-1\}$. If not, then there exists $k \in \{1,\ldots, n-m-1\}$ such that \[ \prod_{i=m+1}^{m+k} x_i <\prod_{i=m+1}^{m+k} a_i. \] It follows from~\eqref{Egyptian:m+1-product} that \[ \prod_{i=m+k+1}^n a_i \leq \frac{ \prod_{i=m+1}^n x_i}{\prod_{i=m+1}^{m+k} a_i} = \left( \frac{ \prod_{i=m+1}^{m+k} x_i}{\prod_{i=m+1}^{m+k} a_i} \right) \prod_{i=m+k+1}^n x_i < \prod_{i=m+k+1}^n x_i \] which contradicts the maximality of $m$. This proves~\eqref{Egyptian:j-product}. Suppose that $a_i \neq x_i$ for some $i \in \{ m+1,\ldots, n\}$. Applying Theorem~\ref{Egyptian:theorem:MuirheadCorollary} to the distinct increasing sequences $(a_i)_{i=m+1}^n$ and $(x_i)_{i=m+1}^n$, we obtain \begin{equation} \label{Egyptian:m-product} \sum_{i=m+1}^{n} \frac{1}{x_i} < \sum_{i=m+1}^{n} \frac{1}{a_i}. 
\end{equation}
From inequality~\eqref{Egyptian:pq-greedy} we have $1 \leq m \leq n-1$, and so
\begin{align*}
\sum_{i=1}^{m} \frac{1}{a_i} & \leq \sum_{i=1}^{m} \frac{1}{x_i} - \left( \sum_{i=m+1}^{n} \frac{1}{a_i} - \sum_{i=m+1}^{n} \frac{1}{x_i} \right) \\
& < \sum_{i=1}^{m} \frac{1}{x_i} \leq \sum_{i=1}^{n} \frac{1}{x_i} < \frac{p}{q}.
\end{align*}
The induction hypothesis implies $x_i = a_i$ for all $i = 1,\ldots, m$, which is absurd, since equality for $i = 1,\ldots, m$ contradicts the strict inequality $\sum_{i=1}^{m} 1/a_i < \sum_{i=1}^{m} 1/x_i$ just obtained. Thus, $x_i = a_i$ for all $i = m+1,\ldots, n$, and
\[
\sum_{i=1}^m \frac{1}{a_i} \leq \sum_{i=1}^m \frac{1}{x_i} < \sum_{i=1}^n \frac{1}{x_i} < \frac{p}{q}.
\]
The induction hypothesis again implies $x_i = a_i$ for all $i = 1,\ldots, m$. This completes the proof.
\end{proof}

\section{When is greedy best?} \label{Egyptian:section:cc}
It is a critical observation that the $n$-term greedy underapproximation of a real number $\theta \in (0,1]$ is not always the unique best $n$-term Egyptian underapproximation, nor even a best $n$-term Egyptian underapproximation. Here are two examples for the case $n=2$.
The inequality
\[
\frac{1}{2} + \frac{1}{30} = \frac{8}{15} < \frac{31}{58} = \frac{1}{2} + \frac{1}{29}
\]
proves that $(2,30)$ is the 2-term greedy underapproximation sequence for all $\theta$ in the interval
\[
\frac{8}{15} < \theta \leq \frac{31}{58}.
\]
We prove (Theorem~\ref{Egyptian:theorem:underapproximation-a1=2}) that $(2,30)$ is a best 2-term Egyptian underapproximation sequence for all $\theta$ in this interval. The equation
\[
\frac{1}{2} + \frac{1}{30} = \frac{1}{3} + \frac{1}{5} = \frac{8}{15}
\]
shows that the best 2-term Egyptian underapproximation is not unique.
Similarly, the inequality
\[
\frac{1}{3} + \frac{1}{17} = \frac{20}{51} < \frac{19}{48} = \frac{1}{3} + \frac{1}{16}
\]
proves that $(3,17)$ is the 2-term greedy underapproximation sequence for all $\theta$ in the interval
\[
\frac{20}{51} < \theta \leq \frac{19}{48} .
\]
The inequality
\[
\frac{1}{3} + \frac{1}{17} < \frac{1}{4} + \frac{1}{7} = \frac{11}{28} < \theta \leq \frac{19}{48}
\]
proves that $(3,17)$ is not a best 2-term Egyptian underapproximation of $\theta$ for all $\theta$ in the interval
\[
\frac{11}{28} < \theta \leq \frac{19}{48} .
\]
Theorem~\ref{Egyptian:theorem:underapproximation-a1=3} shows that $(4,7)$ is the best 2-term Egyptian underapproximation of $\theta$ for all $\theta$ in this interval.

\section{Best 2-term Egyptian underapproximations}
In this section we describe best 2-term Egyptian underapproximations for $\theta \in (0,1]$. For all integers $a_1 \geq 2$ we have the \emph{harmonic interval}\index{harmonic interval}
\[
I(a_1) = \left(\frac{1}{a_1}, \frac{1}{a_1-1} \right] = \left(\frac{1}{a_1}, \frac{1}{a_1} + \frac{1}{a^2_1- a_1} \right].
\]
The intervals $I(a_1)$ are pairwise disjoint and
\[
(0,1] = \bigcup_{a_1=2}^{\infty} \left(\frac{1}{a_1}, \frac{1}{a_1-1} \right].
\]
For all integers $a_1 \geq 2$ and $a_2 \geq a_1^2-a_1+1$, we have the \emph{harmonic subinterval}\index{harmonic subinterval}
\[
J(a_1,a_2) = \left(\frac{1}{a_1} + \frac{1}{a_2}, \frac{1}{a_1} + \frac{1}{a_2-1} \right].
\]
By Corollary~\ref{Egyptian:corollary:GUAsequence-1}, the pair $(a_1,a_2)$ is the 2-term greedy underapproximation of $\theta$ for all $\theta \in J(a_1,a_2)$. We have
\[
J(a_1,a_2) \subseteq \left(\frac{1}{a_1}, \frac{1}{a_1-1} \right] = I(a_1).
\]
The intervals $J(a_1,a_2)$ are pairwise disjoint.
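Given $\theta \in (0,1]$, the pair $(a_1, a_2)$ with $\theta \in J(a_1,a_2)$ is easy to compute with exact rational arithmetic. The following Python sketch is an illustration added for concreteness (it is not part of the original development) and assumes that $\theta$ is supplied as an exact fraction:
\begin{verbatim}
from fractions import Fraction

def greedy_2term(theta: Fraction):
    # a1 = G(theta) is the least integer with 1/a1 < theta,
    # so theta lies in the harmonic interval I(a1)
    a1 = 1 // theta + 1           # floor(1/theta) + 1, exact for Fraction
    r = theta - Fraction(1, a1)   # remainder, 0 < r <= 1/(a1^2 - a1)
    # a2 is the least integer with 1/a2 < r, so theta lies in J(a1, a2);
    # the bound on r guarantees a2 >= a1^2 - a1 + 1
    a2 = 1 // r + 1
    return a1, a2

print(greedy_2term(Fraction(31, 58)))   # (2, 30)
print(greedy_2term(Fraction(19, 48)))   # (3, 17)
\end{verbatim}
The two test values recover the pairs $(2,30)$ and $(3,17)$ from the examples of Section~\ref{Egyptian:section:cc}.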
It follows from the identity
\[
\left(\frac{1}{a_1} + \frac{1}{a_1^2 - a_1+1}, \frac{1}{a_1} + \frac{1}{a_1^2 - a_1} \right] = \left(\frac{1}{a_1} + \frac{1}{a_1^2 - a_1+1}, \frac{1}{a_1-1} \right]
\]
that
\begin{align*}
I(a_1) & = \bigcup_{a_2 = a_1^2-a_1+1}^{\infty} J(a_1,a_2) = \bigcup_{a_2 = a_1^2-a_1+1}^{\infty} \left(\frac{1}{a_1} + \frac{1}{a_2}, \frac{1}{a_1} + \frac{1}{a_2-1} \right].
\end{align*}
Thus,
\[
(0,1] = \bigcup_{a_1=2}^{\infty} \quad \bigcup_{a_2 = a_1^2-a_1+1}^{\infty} \left(\frac{1}{a_1} + \frac{1}{a_2}, \frac{1}{a_1} + \frac{1}{a_2-1} \right].
\]
The pair of integers $(x_1,x_2)$ with $2 \leq x_1 \leq x_2$ is not the 2-term greedy underapproximation sequence of some $\theta \in (0,1]$ if and only if $x_2 \leq x_1^2-x_1$.
The pair $(a_1,a_2)$ is not a best 2-term underapproximation sequence of $\theta \in J(a_1,a_2)$ if and only if there exists a pair of positive integers $(x_1,x_2) $ with $x_1 \leq x_2$ such that
\[
\frac{1}{a_1} + \frac{1}{a_2} < \frac{1}{x_1} + \frac{1}{x_2} < \theta \leq \frac{1}{a_1} + \frac{1}{a_2-1}.
\]
The following Lemmata enable us to compute, for all integers $a_1 \geq 2$, the set of real numbers $\theta$ in the harmonic interval $I(a_1) = (1/a_1, 1/(a_1 -1)]$ for which the 2-term greedy underapproximation is not the unique best 2-term Egyptian underapproximation.
\begin{lemma} \label{Egyptian:lemma:calculate-1}
Let $a_1$ and $a_2$ be integers such that
\[
a_1 \geq 2 \qqand a_2 \geq a_1(a_1-1)+1.
\]
If $x_1$ and $x_2$ are integers such that
\[
2 \leq x_1 \leq x_2 \qqand (x_1,x_2) \neq (a_1, a_2)
\]
and
\begin{equation} \label{Egyptian:estimate-x1-x2-0}
\frac{1}{a_1} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{a_1} + \frac{1}{a_2-1}
\end{equation}
then
\begin{equation} \label{Egyptian:estimate-x1-x2}
a_1 + 1 \leq x_1 \leq 2a_1-1 \leq x_2 < \frac{a_1x_1}{x_1-a_1}
\end{equation}
and
\begin{equation} \label{Egyptian:greedy-a2x2-2}
x_2 \leq a_2-1.
\end{equation}
\end{lemma}
\begin{proof}
We have
\[
\frac{2}{x_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{a_1} + \frac{1}{a_2-1} \leq \frac{1}{a_1-1}
\]
and so $2a_1 - 1 \leq x_2$. Similarly,
\[
\frac{1}{x_1} < \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{a_1-1}
\]
implies $ a_1 \leq x_1$. If $a_1 = x_1$, then from~\eqref{Egyptian:estimate-x1-x2-0} we obtain
\[
\frac{1}{a_2} \leq \frac{1}{x_2} < \frac{1}{a_2-1}
\]
and so $a_2 = x_2$, which contradicts $(x_1,x_2) \neq (a_1, a_2)$. It follows that $a_1 +1 \leq x_1$. If $x_1 \geq 2a_1$, then
\[
\frac{1}{a_1} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} \leq \frac{2}{x_1} \leq \frac{1}{a_1} < \frac{1}{a_1} + \frac{1}{a_2}
\]
which is absurd. Therefore,
\[
a_1 + 1 \leq x_1 \leq 2a_1-1 \leq x_2.
\]
The inequality
\[
\frac{1}{a_1} < \frac{1}{a_1} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2}
\]
implies
\[
\frac{x_1-a_1}{a_1 x_1} = \frac{1}{a_1} - \frac{1}{x_1} < \frac{1}{x_2}
\]
and so
\[
2a_1-1 \leq x_2 < \frac{a_1 x_1} {x_1-a_1}.
\]
This finishes the proof of~\eqref{Egyptian:estimate-x1-x2}. Finally, $a_1 < x_1$ implies
\[
\frac{1}{a_1} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{a_1} + \frac{1}{x_2}
\]
and so $x_2 \leq a_2 -1$, which is~\eqref{Egyptian:greedy-a2x2-2}. This completes the proof.
\end{proof}
\begin{lemma} \label{Egyptian:lemma:calculate-2}
For all integers $a_1\geq 2$, there are $a_1-1$ integers $x_1$ that satisfy
\[
a_1 + 1 \leq x_1 \leq 2a_1-1.
\]
For each such integer $x_1$ there are
\[
\left\lceil \frac{a_1x_1}{x_1-a_1}\right\rceil - 2a_1 + 1 \geq 1
\]
integers $x_2$ that satisfy
\[
2a_1-1 \leq x_2 < \frac{a_1x_1}{x_1-a_1}.
\]
For all integers $a_1 \geq 2$, the set
\begin{equation} \label{Egyptian:condition-2-term}
X(a_1) = \left\{ (x_1,x_2) \in \ensuremath{ \mathbf N }^2: a_1 + 1 \leq x_1 \leq 2a_1-1 \leq x_2 < \frac{a_1x_1}{x_1-a_1} \right\}
\end{equation}
is nonempty.
\end{lemma}
\begin{proof}
If $a_1\geq 2$, then $a_1+1 \leq 2a_1-1$. There are $a_1-1 \geq 1$ integers $x_1$ such that $a_1+1 \leq x_1 \leq 2a_1-1$. If $x_1 \leq 2a_1-1$, then
\[
(a_1-1)x_1 \leq (a_1 -1)(2a_1 -1) < a_1(2a_1-1).
\]
Equivalently,
\[
(2a_1-1) (x_1 - a_1) < a_1 x_1
\]
and so
\[
2a_1-1 < \frac{a_1x_1}{x_1-a_1}.
\]
It follows that there are
\[
\left\lceil \frac{a_1x_1}{x_1-a_1}\right\rceil - 2a_1 + 1 \geq 1
\]
integers $x_2$ such that
\[
2a_1-1 \leq x_2 < \frac{a_1x_1}{x_1-a_1}
\]
and so the set $X(a_1)$ is nonempty. This completes the proof.
\end{proof}
\begin{lemma} \label{Egyptian:lemma:calculate-3}
Let $a_1 \geq 2$. If $(x_1,x_2) \in X(a_1)$ and
\begin{equation} \label{Egyptian:calculate-a2}
a_2 =\left\lceil \left( \frac{1}{x_1} + \frac{1}{x_2} - \frac{1}{a_1} \right)^{-1} \right\rceil
\end{equation}
then
\begin{equation} \label{Egyptian:condition-2-term-delete}
\frac{1}{a_1} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{a_1} + \frac{1}{a_2 -1}
\end{equation}
and the pairs $(a_1,a_2)$ and $(x_1, x_2)$ are 2-term underapproximation sequences of $\theta$ for all
\[
\theta \in \left( \frac{1}{x_1} + \frac{1}{x_2}, \frac{1}{a_1} + \frac{1}{a_2 -1} \right].
\]
Moreover,
\[
\frac{1}{a_1} + \frac{1}{a_2}= \frac{1}{x_1} + \frac{1}{x_2}
\]
if and only if
\[
a_2 = \left( \frac{1}{x_1} + \frac{1}{x_2} - \frac{1}{a_1} \right)^{-1}.
\]
\end{lemma}
\begin{proof}
If $(x_1,x_2) \in X(a_1)$, then
\[
a_2 -1 < \left( \frac{1}{x_1} + \frac{1}{x_2} - \frac{1}{a_1} \right)^{-1} \leq a_2
\]
and
\[
\frac{1}{a_1} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{a_1} + \frac{1}{a_2-1}.
\]
This proves~\eqref{Egyptian:condition-2-term-delete}. The remaining statements are immediate consequences.
\end{proof}
It is important to note that the integer $a_2$ computed from~\eqref{Egyptian:calculate-a2} does not necessarily satisfy the inequality $a_2 \geq a_1^2 - a_1 + 1$. Thus, $(x_1,x_2)$ is an equal or better 2-term underapproximation than $(a_1,a_2)$ for all $\theta > 1/x_1 + 1/x_2$, but $(a_1,a_2)$ is not necessarily a 2-term greedy underapproximation.
Here are three examples in the case $a_1 = 5$. We have $a_1^2-a_1+1=21$ and
\[
\left( \frac{1}{5} , \frac{1}{4} \right] = I(5) = \bigcup_{a_2 = 21}^{\infty} J(5,a_2) = \bigcup_{a_2 = 21}^{\infty} \left( \frac{1}{5} + \frac{1}{a_2}, \frac{1}{5} + \frac{1}{a_2-1} \right].
\]
For all $a_2 \geq 21$, the pair $(a_1,a_2) = (5,a_2)$ is the 2-term greedy underapproximation of $\theta$ for all $\theta$ in the harmonic subinterval $J(5,a_2)$. From~\eqref{Egyptian:condition-2-term} we obtain the inequality that determines the set $X(5)$:
\[
6 \leq x_1 \leq 9 \leq x_2 < \frac{5x_1}{x_1-5}.
\]
The set $X(5)$ contains the pairs $(x_1,x_2) = (7,10)$, $(9,11)$, and $(6,9)$. The pair $(7,10) \in X(5)$ generates the integer
\[
a_2 = 24 > \left( \frac{1}{7} + \frac{1}{10} - \frac{1}{5} \right)^{-1} = \frac{70}{3} > 23
\]
and $24 = a_2 \geq 21$. The pair $(5,24)$ is the 2-term greedy underapproximation sequence of all $\theta \in J(5,24)$.
We have
\[
\frac{29}{120} = \frac{1}{5} + \frac{1}{24} < \frac{1}{7} + \frac{1}{10} = \frac{17}{70} < \frac{1}{5} + \frac{1}{23} = \frac{28}{115}.
\]
Thus, the pair $(7,10)$ is a better 2-term underapproximation sequence of $\theta$ than the 2-term greedy underapproximation sequence $(5,24)$ for all
\[
\theta \in \left( \frac{17}{70}, \frac{28}{115} \right] \subseteq \left( \frac{29}{120}, \frac{28}{115} \right] = J(5,24).
\]
The pair $(9,11) \in X(5)$ generates the integer
\[
a_2 = 495 = \left( \frac{1}{9} + \frac{1}{11} - \frac{1}{5} \right)^{-1}
\]
and $495 = a_2 \geq 21$. The pair $(5,495)$ is the 2-term greedy underapproximation sequence for all $\theta \in J(5,495)$. For all
\[
\theta \in \left( \frac{20}{99},\frac{499}{2470} \right]
\]
we have
\[
\frac{1}{5} + \frac{1}{495} = \frac{1}{9} + \frac{1}{11} = \frac{20}{99} < \theta \leq \frac{499}{2470} = \frac{1}{5} + \frac{1}{494}
\]
and the pairs $(5,495)$ and $(9,11)$ give equal 2-term underapproximations.
The pair $ (6,9) \in X(5)$ generates the integer
\[
a_2 = 13 = \left\lceil \frac{90}{7} \right\rceil = \left\lceil \left( \frac{1}{6} + \frac{1}{9} - \frac{1}{5} \right)^{-1} \right\rceil > \frac{90}{7}.
\]
However, $a_2 = 13 < 21$ and $(5,13)$ is not a 2-term greedy underapproximation sequence.

\section{Best 2-term underapproximations for $a_1=2$ and $a_1 = 3$} \label{Egyptian:section:dd}
In this section we compute all real numbers $\theta$ in the harmonic intervals $I(2)$ and $I(3)$ whose 2-term greedy underapproximation sequences do not give best, or unique best, 2-term underapproximations.
\begin{theorem} \label{Egyptian:theorem:underapproximation-a1=2}
Let $a_1=2$ and $a_2 \geq 3$. The 2-term greedy underapproximation sequence $(2,a_2)$ is a best 2-term Egyptian underapproximation sequence of $\theta$ for all $\theta$ in the harmonic subinterval
\[
J(2,a_2) = \left( \frac{1}{2} + \frac{1}{a_2}, \frac{1}{2} + \frac{1}{a_2-1} \right].
\]
Consider the harmonic subintervals
\begin{align*}
J(2,6) & = \left( \frac{2}{3},\frac{7}{10} \right] , \qquad J(2,12) = \left( \frac{7}{12},\frac{13}{22} \right], \qquad J(2,30) = \left( \frac{8}{15},\frac{31}{58} \right].
\end{align*}
\begin{enumerate}
\item[(i)] For all $\theta \in J(2,6)$, the pairs $(2,6)$ and $(3,3)$ are best 2-term underapproximations of $\theta$, and are the only best 2-term underapproximations of $\theta$.
\item[(ii)] For all $\theta \in J(2,12) $, the pairs $(2,12)$ and $(3,4)$ are best 2-term underapproximations of $\theta$, and are the only best 2-term underapproximations of $\theta$.
\item[(iii)] For all $\theta \in J(2,30) $, the pairs $(2,30)$ and $(3,5)$ are best 2-term underapproximations of $\theta$, and are the only best 2-term underapproximations of $\theta$.
\item[(iv)] For all $\theta \in I(2) = (1/2,1]$ such that $\theta \notin J(2,6) \cup J(2,12) \cup J(2,30) $, the pair $(2,a_2)$ is the unique best 2-term underapproximation of $\theta$.
\end{enumerate}
\end{theorem}
\begin{proof}
If $a_1=2$, then inequality~\eqref{Egyptian:estimate-x1-x2} is simply
\[
3 = x_1 \leq x_2 < 6
\]
and so $x_2 = 3, 4$, or 5. If $x_2 = 3$, then
\[
a_2 = \left( \frac{1}{3} + \frac{1}{3} - \frac{1}{2} \right)^{-1} = 6
\]
and
\[
\frac{1}{2} + \frac{1}{6} = \frac{1}{3} + \frac{1}{3} = \frac{2}{3}.
\]
If $x_2 = 4$, then
\[
a_2 = \left( \frac{1}{3} + \frac{1}{4} - \frac{1}{2} \right)^{-1} = 12
\]
and
\[
\frac{1}{2} + \frac{1}{12} = \frac{1}{3} + \frac{1}{4} = \frac{7}{12}.
\]
If $x_2 = 5$, then
\[
a_2 = \left( \frac{1}{3} + \frac{1}{5} - \frac{1}{2} \right)^{-1} = 30
\]
and
\[
\frac{1}{2} + \frac{1}{30} = \frac{1}{3} + \frac{1}{5} = \frac{8}{15}.
\]
The only solutions $(x_1,x_2) \neq (2,a_2)$ of the diophantine inequality
\begin{equation} \label{Egyptian:2term-ineq}
\frac{1}{2} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{2} + \frac{1}{a_2 -1}
\end{equation}
are $(x_1,x_2) = (3,3)$, $(3,4)$, and $(3,5)$. This completes the proof.
\end{proof}
\begin{theorem} \label{Egyptian:theorem:underapproximation-a1=3}
Let $a_1=3$ and let $\theta \in I(3) = (1/3,1/2]$. The 2-term greedy underapproximation of $\theta$ is a best 2-term Egyptian underapproximation if and only if
\[
\theta \notin \left( \frac{9}{20} , \frac{11}{24} \right] \cup \left( \frac{11}{28} , \frac{19}{48} \right].
\]
The 2-term greedy underapproximation of $\theta$ is a best 2-term Egyptian underapproximation but not the unique best 2-term Egyptian underapproximation if and only if $\theta \in J(3,a_2)$ for
\[
a_2 \in \{12, 15, 24, 30, 36, 60, 105, 132 \}.
\]
\end{theorem}
\begin{proof}
For $a_1=3$, inequality~\eqref{Egyptian:estimate-x1-x2} gives
\[
4 \leq x_1 \leq 5 \leq x_2 < \frac{3x_1}{x_1-3}.
\]
Thus, a complete list of the 10 solutions $(3,a_2) \neq (x_1,x_2)$ of the diophantine inequality
\begin{equation} \label{Egyptian:underapproximation-3-equal}
\frac{1}{3} + \frac{1}{a_2} \leq \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{3} + \frac{1}{a_2 -1}
\end{equation}
is the following:
\begin{equation} \label{Egyptian:underapproximation-3-table}
\begin{tabular}{|r | r | r | r | r | r | r | r | r | r | r |} \hline
$x_1$ & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 5 & 5 & 5 \\
$x_2$ & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 5 & 6 & 7 \\
$a_2$ & 9 & 12 & 17 & 24 & 36 & 60 & 132 & 15 & 30 & 105 \\ \hline
\end{tabular}
\end{equation}
We have strict inequality
\[
\frac{1}{3} + \frac{1}{a_2} < \frac{1}{x_1} + \frac{1}{x_2} < \frac{1}{3} + \frac{1}{a_2 -1}
\]
only if either $a_2 = 9 $ and $(x_1,x_2) = (4,5)$ or $a_2 = 17$ and $(x_1,x_2) = (4,7)$. Note that
\[
\frac{1}{4} + \frac{1}{5} = \frac{9}{20} \qqand \frac{1}{4} + \frac{1}{7} = \frac{11}{28}.
\]
The pair $(4,5)$ is the unique best 2-term underapproximation for all $\theta$ such that
\[
\theta \in \left( \frac{9}{20} , \frac{11}{24} \right] \subseteq \left( \frac{4}{9}, \frac{11}{24} \right] = J(3,9).
\]
The pair $(4,7)$ is the unique best 2-term underapproximation for all $\theta$ such that
\[
\theta \in \left( \frac{11}{28} , \frac{19}{48} \right] \subseteq \left( \frac{20}{51} , \frac{19}{48} \right] = J(3,17).
\]
The 8 solutions $(3,a_2) \neq (x_1,x_2)$ with $a_2 \geq 7$ and $4 \leq x_1 \leq x_2$ of the diophantine equation
\[
\frac{1}{3} + \frac{1}{a_2} = \frac{1}{x_1} + \frac{1}{x_2}
\]
are
\[
\frac{1}{3} + \frac{1}{12} = \frac{1}{4} + \frac{1}{6} = \frac{5}{12}
\]
\[
\frac{1}{3} + \frac{1}{15} = \frac{1}{5} + \frac{1}{5} = \frac{2}{5}
\]
\[
\frac{1}{3} + \frac{1}{24} = \frac{1}{4} + \frac{1}{8} = \frac{3}{8}
\]
\[
\frac{1}{3} + \frac{1}{30} = \frac{1}{5} + \frac{1}{6} = \frac{11}{30}
\]
\[
\frac{1}{3} + \frac{1}{36} = \frac{1}{4} + \frac{1}{9} = \frac{13}{36}
\]
\[
\frac{1}{3} + \frac{1}{60} = \frac{1}{4} + \frac{1}{10} = \frac{7}{20}
\]
\[
\frac{1}{3} + \frac{1}{105} = \frac{1}{5} + \frac{1}{7} = \frac{12}{35}
\]
\[
\frac{1}{3} + \frac{1}{132} = \frac{1}{4} + \frac{1}{11} = \frac{15}{44}.
\]
This completes the proof.
\end{proof}

\section{Open Problems}
\begin{enumerate}
\item Consider real numbers $\theta \in (0,1]$ whose infinite greedy underapproximation sequence $(a_i)_{i=1}^{\infty}$ has the property that $(a_i)_{i=1}^n$ is the unique best underapproximation of $\theta$ for all positive integers $n$. By Theorem~\ref{Egyptian:theorem:pq-greedy}, every rational number of the form $p/q$ where $p$ divides $q+1$ has this property. Do other rational numbers have this property? Do there exist irrational numbers with this property?
\item Let $\theta \in (0,1]$, let $n \geq 3$, and let $(a_i)_{i=1}^n \in U_n(\theta)$ be the $n$-term greedy underapproximation sequence of $\theta$.
\begin{enumerate}
\item Do there exist sequences $(x_i)_{i=1}^n \in U_n(\theta)$ such that $(a_i)_{i=1}^n \neq (x_i)_{i=1}^n$ and
\[
\sum_{i=1}^n \frac{1}{a_i} < \sum_{i=1}^n \frac{1}{x_i} < \theta?
\]
How many such sequences are there?
\item Do there exist sequences $(x_1,\ldots, x_n) \in U_n(\theta)$ such that $(a_1,\ldots, a_n) \neq (x_1,\ldots, x_n)$ and
\[
\sum_{i=1}^n \frac{1}{a_i} = \sum_{i=1}^n \frac{1}{x_i} < \theta?
\]
How many such sequences are there?
\item Can we identify and understand counterexamples to unique best $n$-term underapproximation by the greedy algorithm?
\end{enumerate}
\item Let $n \geq 3$. Is there an efficient algorithm to compute the best $n$-term underapproximation sequence of a real number $\theta \in (0,1]$?
\item Let $\theta \in (0,1]$. Erd\H os and Graham~\cite[p.31]{erdo-grah80} asserted (without proof or reference to any publication) that for every rational number $\theta$ there exists an integer $n_0 = n_0(\theta)$ such that, for all $n \geq n_0 + 1$,
\[
u_n(\theta) = u_{n_0}(\theta) + u_{n-n_0}\left( \theta - u_{n_0}(\theta) \right)
\]
and the best $(n-n_0)$-term underapproximation $u_{n-n_0}\left( \theta - u_{n_0}(\theta) \right)$ is always constructed by the greedy algorithm. They also wrote, ``It is not difficult to construct irrationals for which the result fails.'' Prove or disprove these statements.
\item Let $A$ be a nonempty set of positive integers and let
\[
\frac{1}{A} = \left\{ \frac{1}{x} : x \in A \right\}
\]
be the set of Egyptian fractions with denominators in $A$. An \emph{$n$-term $A$-underapproximation}\index{Egyptian underapproximation!$A$} of $\theta$ is a sum of $n$ not necessarily distinct Egyptian fractions in $1/A$ that is strictly less than $\theta$. Let
\[
u_{n,A}(\theta) = \sup\left\{ \sum_{i=1}^n \frac{1}{x_i}: (x_1,\ldots, x_n) \in A^n, x_1 \leq \cdots \leq x_n, \sum_{i=1}^n \frac{1}{x_i} < \theta \right\}.
\]
An $n$-term $A$-underapproximation $ \sum_{i=1}^n \frac{1}{x_i} < \theta$ is \emph{best}\index{Egyptian underapproximation!best} if
\[
u_{n,A}(\theta) = \sum_{i=1}^n \frac{1}{x_i}.
\]
For what real numbers $\theta \in (0,1)$ does the greedy algorithm restricted to $A$ give a best $n$-term underapproximation?
\end{enumerate}
{ "timestamp": "2022-02-04T02:23:30", "yymm": "2202", "arxiv_id": "2202.00191", "language": "en", "url": "https://arxiv.org/abs/2202.00191" }
\section{Introduction}
\label{sec:int}
Cyclic models of biodiversity describe nonhierarchical predator-prey interactions among species that promote the richness of ecosystems in nature \cite{ecology,Nature-bio}. The remarkable outcomes from experiments with the bacterium \textit{Escherichia coli}, for example, revealed a cyclic dominance among three bacterial strains, successfully described by the spatial rock-paper-scissors game rules \cite{Coli,Allelopathy}. However, the experiment revealed that the cyclic dominance ensures coexistence only if organisms interact locally, leading to the formation of departed spatial domains \cite{Coli,Allelopathy}. Other authors have also found plenty of evidence that spatial segregation of species is crucial to the formation and stability of ecosystems (for example, in systems with lizards and coral reefs \cite{lizards,Extra1,BUCHHOLZ2007401}).

Stochastic simulations of the spatial rock-paper-scissors model have been widely employed to investigate biological systems where individuals interact locally in a cyclic way \cite{Szolnoki_2020, Szolnoki-JRSI-11-0735}. Considering that both predation and mobility interactions can be influenced by evolutionary behaviour, stochastic simulations have unveiled the mechanisms leading to the emergence of spatial patterns, impacting population dynamics and coexistence in cyclic models \cite{Moura,doi:10.1021/ja01453a010,Volterra,PhysRevE.78.031906,0295-5075-121-4-48003,PhysRevE.82.066211,weakest}. It has been shown that the stability of cyclic models depends on the strength with which the species dominate one another \cite{weakest}: if one species is weaker than the others, in terms of predation capacity, this species predominates \cite{uneven,PedroWeak,Weak4,AVELINO2022111738}. However, local instabilities, which depend primarily on the initial conditions, can lead to the extinction of two species; in this case, the weaker species is more likely to survive \cite{weakest}.

The main characteristic of the uneven rock-paper-scissors models studied in the literature is that the organisms' intrinsic weakness is not caused by local circumstances but results from an evolutionary condition or some external cause. For example, all organisms of one of the species are affected by a disease outbreak that makes them less efficient at catching prey, independent of their spatial position \cite{PedroWeak,weakest}. But in many biological systems, organisms face resistance due to a collective prey self-protection strategy \cite{ContraAtacck2,strategy3}. It has been reported that the effect of the antipredator behaviour is a drop in predation probability, which depends on the prey group size surrounding the predator \cite{manyeyes,dilution1, dilution2}. This means that each predator may be affected differently according to its neighbourhood. Furthermore, the success of the local antipredator response depends on the organisms' physical and cognitive abilities to detect a nearby enemy and on the strength of the self-preservation tactic \cite{Cost2,LizardB1,detection}.

In this work, we study a cyclic nonhierarchical tritrophic system whose predator-prey interactions are unbalanced by a local antipredator response performed by organisms of one of the species. Considering that the antipredator response diminishes the organisms' predation capacity, we aim to answer the question of whether a locally weakened species predominates, as occurs in the widely studied uneven rock-paper-scissors models, where the weakening of one of the species has nonlocal causes.
We introduce a conditioning factor parameter that indicates the fraction of individuals apt to join the collective tactic, meaning the percentage of organisms with the necessary physical and cognitive ability to learn and properly execute the antipredator strategy. Furthermore, we assume a maximal distance over which an organism can influence a predator's attack, and a given strength of the antipredator reaction. Our goal is to comprehend how the local antipredator response unbalances the pattern formation and the species densities. For this purpose, we follow a numerical implementation recently presented for local antipredator response in rock-paper-scissors models \cite{Anti1,anti2}, and we explore the emergence of spatial patterns in regions where the cyclic model is locally unbalanced. Besides discovering which species predominates in a locally unbalanced cyclic model, we also focus on the effects of the local unevenness in jeopardising biodiversity, exploring the coexistence probability for a range of mobility probabilities.
\begin{figure} \centering
\includegraphics[width=46mm]{figure1}
\caption{Illustration of the cyclic predator-prey interactions in the locally unbalanced rock-paper-scissors model. Red, purple, and light blue arrows represent the dominance of organisms of species $1$, $2$, and $3$, respectively. The dashed arrow illustrates the local reduction in the predation capacity of organisms of species $3$ caused by the antipredator behaviour of organisms of species $1$. Orange bars indicate that mobility interactions among organisms of every species occur with the same probability.}
\label{fig1}
\end{figure}
\begin{figure}[t] \centering
\includegraphics[width=46mm]{figure2}
\caption{Illustration of the Moore neighbourhood and the range of the antipredator response. An individual positioned at the yellow grid site can interact with one of the eight immediate neighbours (Moore neighbourhood), shown with a yellow background. A predator located at the yellow site faces opposition from the prey group within the range of the antipredator response: dark purple dots for $R=1$; dark purple and light purple dots for $R=2$; dark purple, light purple, and pink dots for $R=3$. }
\label{fig1b}
\end{figure}

\section{The Stochastic Model}
We study a cyclic nonhierarchical system composed of $3$ species, whose predator-prey interactions are described by the rock-paper-scissors game rules. In our model, organisms of one of the species can react to local predation threats, joining efforts with conspecifics to oppose predators' attacks. Each predator faces a particular antipredator resistance: the larger the prey group surrounding it, the lower the chances of successful predation. Our numerical implementation follows a standard algorithm widely employed in studies of spatial biological systems \cite{anti2,Reichenbach-N-448-1046,PhysRevE.89.042710,Bazeia_2017}. The dynamics of the individuals' spatial organisation were simulated in square lattices with periodic boundary conditions, following predation and mobility rules. We assumed the Lotka-Volterra numerical implementation, with a conservation law for the total number of individuals \cite{Volterra}: the total number of individuals is conserved and equal to $\mathcal{N}$, the total number of grid points. Figure~\ref{fig1} illustrates the main rules of our simulations, with red, purple, and light blue representing species $1$, $2$, and $3$, respectively.
The arrows show the cyclic dominance of the predator-prey interactions: organisms of species $i$ consume individuals of species $i+1$, with $i=1,2,3$, and with the cyclic identification $i=i\pm3\,\beta$, where $\beta$ is an integer. The dashed light blue arrow indicates that the predation probability of organisms of species $3$ is locally reduced because of the antipredator behaviour of individuals of species $1$. Orange bars illustrate the mobility interactions among organisms of every species.
\begin{figure*} \centering
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3a} \caption{}\label{fig2a} \end{subfigure}
%
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3b} \caption{}\label{fig2b} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3c} \caption{}\label{fig2c} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3d} \caption{}\label{fig2d} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3e} \caption{}\label{fig2e} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3f} \caption{}\label{fig2f} \end{subfigure}
\\
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3g} \caption{}\label{fig2g} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3h} \caption{}\label{fig2h} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3i} \caption{}\label{fig2i} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3j} \caption{}\label{fig2j} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3k} \caption{}\label{fig2k} \end{subfigure}
\begin{subfigure}{.16\textwidth} \centering \includegraphics[width=29mm]{figure3l} \caption{}\label{fig2l} \end{subfigure}
\caption{Snapshots captured from a simulation in a lattice with $300^2$ grid points. The realisation ran for $3000$ generations, with $R=3$, $\kappa=7.5$, $\alpha=1.0$, and $p=m=0.5$. Figures a, b, c, d, e, and f show the organisms' spatial distribution after $36$, $60$, $84$, $120$, $144$, and $252$ generations, respectively. The colours follow the scheme in Fig. 1. Figures g, h, i, j, k, and l show how the predation capacity is spatially distributed in the snapshots of Figs. a, b, c, d, e, and f, respectively. Pink dots represent $\varepsilon_1$ and $\varepsilon_2$, while the shades of grey show the variation between the minimum (black) and maximum (white) values of $\varepsilon_3$. }
\label{fig2}
\end{figure*}
The initial conditions were prepared so that the number of individuals is the same for every species, i.e., $I_i\,=\,\mathcal{N}/3$, with $i=1,2,3$. We allocate each individual at a random grid point. Every time step, one spatial interaction is completed:
\begin{itemize}
\item Predation: $ i\ j \to i\ i\,$, with $ j = i+1$. When one predation interaction occurs, an individual of species $i+1$ (the prey) is replaced at its grid point by an organism of species $i$ (the predator).
\item Mobility: $ i\ \odot \to \odot\ i\,$, where $\odot$ means an individual of any species. When moving, an individual of species $i$ switches positions with another organism of any species.
\end{itemize}
We work with the Moore neighbourhood, i.e., individuals interact with one of their eight nearest neighbours, as illustrated by the yellow dot (active individual) and yellow background sites (eight possible passive individuals) in Fig.~\ref{fig1b}. The simulation algorithm follows three steps: i) randomly selecting an active individual; ii) raffling one interaction to be executed; iii) drawing one of the eight nearest neighbours to suffer the chosen interaction. Mobility interactions are always implemented because two organisms can switch positions irrespective of their species; however, predation only occurs if the randomly chosen neighbour is the active individual's prey. If the raffled interaction is realised, one timestep is counted; otherwise, the three steps are redone. Predation and mobility interactions are chosen with probabilities $p$ and $m$, with $p+m=1$, for every species. The time necessary for $\mathcal{N}$ timesteps to occur is one generation, our time unit. The density of species $i$, $\rho_i$, with $i=1,2,3$, is defined as the fraction of the grid occupied by individuals of species $i$ at time $t$: $\rho_i\,=\,I_i/\mathcal{N}$.

To explore the local aspects of antipredator behaviour, we define the maximum Euclidean distance at which prey can interfere with the predator action: the radius of the antipredator response $R$, measured in units of the lattice spacing. Consequently, the maximum number of individuals participating in a collective reaction against a predator is the number of organisms that fit within a circular area of radius $R$ centred at the predator position, which we define as $\mathcal{G}$. Therefore, we define the predation capacity $\varepsilon_i (x,y)$ as the probability of a predator of species $i$, located at the spatial position $(x,y)$ in the lattice, consuming a prey present in its immediate neighbourhood. As no antipredator resistance is performed by individuals of species $2$ and $3$, we assume $\varepsilon_1 =\varepsilon_2=1$ independent of the spatial position. On the other hand, each individual of species $3$ has its predation capacity reduced according to the prey group size in the neighbourhood. For a given predator of species $3$, the predation capacity is calculated by means of the Holling type II functional response \cite{holling_1965}:
\begin{equation}
\varepsilon_3\,=\frac{1}{1\,+\,\kappa\,\frac{g}{\mathcal{G}}}
\label{ht2}
\end{equation}
where $g$ is the actual group size. This means that the effective predation probability for an organism of species $3$ is given by $p_{eff} =\varepsilon_3\,p$. The real parameter $\kappa$ is the antipredator strength factor, with $\kappa\geq0$; $\kappa=0$ represents the standard model (the absence of local antipredator response), that is, $\varepsilon_3=1$. In our model, a lone prey ($g=1$) manages to reduce the effective predation probability to $\varepsilon_3=1/(1\,+\,\kappa/\mathcal{G})$, while $\varepsilon_3$ is minimal when $g=\mathcal{G}$, i.e., $\varepsilon_3=1/(1\,+\,\kappa)$. In addition, we introduce the conditioning factor $\alpha$, a real parameter, $0\,\leq\,\alpha\,\leq\,1$, representing the percentage of organisms of species $1$ with the physical and cognitive ability to perform the collective behavioural antipredator strategy.

\section{Pattern formation}
To study the pattern formation process, we first performed a single simulation in a square lattice with $300^2$ grid points for a timespan of $3000$ generations.
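As a concrete summary of the dynamics just defined, the listing below gives a simplified Python sketch of a single interaction step. It is an illustration written for this description, not the implementation used in our simulations; in particular, it attaches the conditioning flags of species $1$ to lattice sites and simply redraws failed predation attempts.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, R, kappa, alpha, p, m = 300, 3, 7.5, 1.0, 0.5, 0.5

lattice = rng.integers(1, 4, size=(N, N))    # species 1, 2, 3 everywhere
conditioned = rng.random((N, N)) < alpha     # ability flags (species 1)

# sites within Euclidean distance R of a predator (centre excluded)
disc = [(dx, dy) for dx in range(-R, R + 1) for dy in range(-R, R + 1)
        if 0 < dx * dx + dy * dy <= R * R]
G = len(disc)                                # maximal reacting group size

def epsilon3(x, y):
    # Holling type II predation capacity of a species-3 predator
    g = sum(1 for dx, dy in disc
            if lattice[(x + dx) % N, (y + dy) % N] == 1
            and conditioned[(x + dx) % N, (y + dy) % N])
    return 1.0 / (1.0 + kappa * g / G)

moore = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)]

def timestep():
    # repeat the three algorithm steps until an interaction is realised
    while True:
        x, y = rng.integers(N), rng.integers(N)   # i) active individual
        dx, dy = moore[rng.integers(8)]           # iii) passive neighbour
        nx, ny = (x + dx) % N, (y + dy) % N
        if rng.random() < m:                      # ii) mobility ...
            lattice[x, y], lattice[nx, ny] = lattice[nx, ny], lattice[x, y]
            return
        i = lattice[x, y]                         # ... or predation
        if lattice[nx, ny] == i % 3 + 1:          # neighbour is i's prey
            eps = epsilon3(x, y) if i == 3 else 1.0
            if rng.random() < eps:                # effective probability
                lattice[nx, ny] = i
                return

for _ in range(N * N):   # one generation: N*N realised timesteps
    timestep()
\end{verbatim}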
All individuals of species $1$ were assumed to be conditioned to participate in the antipredator strategy, $\alpha=1.0$. The perception radius was set to $R=3$, while $\kappa\,=\,7.5$ and $p\,=\,m\,=\,0.5$. We captured $250$ snapshots of the lattice in the first stage of the simulation; then, we used the snapshots to produce the video in https://youtu.be/lF4p7MTwR44. Following the colour scheme in Fig.~\ref{fig1}, organisms of species $1$, $2$, and $3$ are depicted by red, purple, and light blue dots, respectively.

Figures~\ref{fig2a} to~\ref{fig2f} show snapshots of the spatial configuration after $36$, $60$, $84$, $120$, $144$, and $252$ generations (we have chosen these snapshots to highlight the pattern formation process). Because of the local antipredator response of species $1$, the population of species $3$ declines immediately after the simulation begins. Individuals of species $1$ proliferate, consuming almost all organisms of species $2$. Then, the high density of species $1$ allows the population of species $3$ to grow. The cyclic predator-prey interactions give species $2$ the chance to increase; however, Fig.~\ref{fig2a} reveals that, while advancing over areas dominated by species $3$, individuals of species $2$ are quickly invaded by individuals of species $1$. We observed that the local unevenness introduced by the resistance against predation allows species $1$ to grow faster than the others. Our outcomes show that the transient regime of expanding spatial domains that alternately dominate the lattice is interrupted by the formation of departed spatial domains, a consequence of the local decrease in the predation capacity of organisms of species $3$. This happens because in patches with a low concentration of species $1$, the antipredator response is limited, thus facilitating the multiplication of organisms of species $3$. On the other hand, in areas with many individuals of species $1$, the antipredator response limits the appearance of offspring of species $3$.

To observe the spatial distribution of organisms with different predation capacities, we calculate $\varepsilon_i$ for each individual during the entire simulation. Figures~\ref{fig2g} to~\ref{fig2l} show the results for the snapshots in Figs.~\ref{fig2a} to~\ref{fig2f}. Pink dots show the presence of individuals of species $1$ and $2$, whose predation capacity is always maximum, irrespective of the spatial position. We applied a greyscale to distinguish individuals of species $3$ according to their predation capacity: the most affected individuals are depicted in black, while organisms not facing antipredator resistance appear in white; intermediary values of $\varepsilon_3$ are shown in shades of grey. The outcomes show that the individuals with the most reduced predation capacity are scattered within regions dominated by species $1$; in contrast, organisms with $\varepsilon_3=1$ are concentrated, forming dense white regions. Therefore, the proportion of individuals not affected by the local antipredator response far surpasses those coping with maximum resistance.

We also calculated the temporal variation of the species densities, which is depicted in Fig.~\ref{fig3}. After a short pattern formation period, the average species densities remain constant until the end of the simulation, with $\rho_1>\rho_2>\rho_3$. Additionally, we quantified how the average spatial predation capacity of species $i$, denoted by $\overline{\varepsilon_i}$, changes during the simulation.
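In terms of the sketch given in the previous section, $\overline{\varepsilon_3}$ is simply the mean of the predation capacity over the sites currently occupied by species $3$; a possible helper, under the same assumptions, is:
\begin{verbatim}
def mean_epsilon3():
    # average predation capacity over all current species-3 individuals
    # (assumes species 3 has not gone extinct)
    xs, ys = np.where(lattice == 3)
    return np.mean([epsilon3(x, y) for x, y in zip(xs, ys)])
\end{verbatim}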
The orange line in Fig.~\ref{fig3b} depicts the time dependence of $\overline{\varepsilon_3}$, while the green dashed line indicates that $\overline{\varepsilon_1}=\overline{\varepsilon_2}=1$. After a rapid variation in the initial stage, $\overline{\varepsilon_3}$ fluctuates around a constant value. Figure~\ref{fig3b} confirms that species $3$ is weaker than the others (in the sense that its average predation capacity is lower). Nevertheless, according to Fig.~\ref{fig3}, the weaker species here does not preponderate over species $1$ and $2$, as happens when one species is weakened by an intrinsic nonlocal condition \cite{PedroWeak,weakest,uneven}.
\begin{figure} \centering
\includegraphics[width=77mm]{figure4}
\caption{Temporal changes of the species densities in the simulation presented in Fig. 2. The colours follow the scheme in Fig. 1, with red, purple, and light blue lines depicting $\rho_i$ for $i=1$, $i=2$, and $i=3$, respectively.}
\label{fig3}
\end{figure}
\begin{figure} \centering
\includegraphics[width=77mm]{figure5}
\caption{Average predation capacity as a function of time in the simulation presented in Fig. \ref{fig2}. The dashed green line indicates that $\overline{\varepsilon_1}=\overline{\varepsilon_2}=1$ during the whole simulation, whereas the orange line shows the dynamics of $\overline{\varepsilon_3}$.}
\label{fig3b}
\end{figure}
\begin{figure*}[t] \centering
\begin{subfigure}{.18\textwidth} \centering \includegraphics[width=33mm]{figure6a} \caption{}\label{fig4a} \end{subfigure}
%
\begin{subfigure}{.18\textwidth} \centering \includegraphics[width=33mm]{figure6b} \caption{}\label{fig4b} \end{subfigure}
%
\begin{subfigure}{.18\textwidth} \centering \includegraphics[width=33mm]{figure6c} \caption{}\label{fig4c} \end{subfigure}
\begin{subfigure}{.18\textwidth} \centering \includegraphics[width=33mm]{figure6d} \caption{}\label{fig4d} \end{subfigure}
\begin{subfigure}{.18\textwidth} \centering \includegraphics[width=33mm]{figure6e} \caption{}\label{fig4e} \end{subfigure}
\caption{Snapshots obtained from a simulation running in a lattice with $600^2$ sites, starting from the initial conditions in Fig. a, for $R=3$, $\kappa=7.5$, $\alpha=1.0$, and $p=m=0.5$. Figs. b, c, d, and e show the spatial configuration after $60$, $192$, $225$, and $435$ generations, respectively.}
\label{fig4}
\end{figure*}
\begin{figure} \centering
\includegraphics[width=77mm]{figure7}
\caption{Frequency of weakened individuals of species $3$ as a function of the predation capacity for various ranges of the antipredator response $R$. The inset shows the mean value of $\varepsilon_3$ for various $R$.}
\label{fig3c}
\end{figure}
\begin{figure*} \centering
\begin{subfigure}{.33\textwidth} \centering \includegraphics[width=57mm]{figure8a} \caption{}\label{fig5a} \end{subfigure}
%
\begin{subfigure}{.3\textwidth} \centering \includegraphics[width=57mm]{figure8b} \caption{}\label{fig5b} \end{subfigure}
%
\begin{subfigure}{.33\textwidth} \centering \includegraphics[width=57mm]{figure8c} \caption{}\label{fig5c} \end{subfigure}
\caption{Characteristic length of spatial domains in the locally unbalanced rock-paper-scissors model. The results were obtained by analysing the spatial configuration at the end of $100$ simulations in grids with $600^2$ sites running until $5000$ generations. Figures a, b, and c show the dependence of $l_i$ on the radius of the antipredator response, the conditioning factor, and the antipredator strength factor, respectively.
The colours follow the scheme in Fig.~\ref{fig1}.}
\label{fig5}
\end{figure*}
We now aim to investigate the pattern formation mechanism in more detail. For this purpose, we prepared a single simulation starting from the particular initial condition shown in Fig.~\ref{fig4a}, where each species occupies a third of the grid. The realisation ran in a lattice with $600^2$ sites, for $R=3$, $\kappa=7.5$, $\alpha=1.0$, and $m=p=0.5$. The outcomes are depicted in Fig.~\ref{fig4} and in the video https://youtu.be/Lsz9E2eENOw; the colours represent the species according to the scheme in Fig.~1. Figures~\ref{fig4b} to~\ref{fig4e} depict the spatial patterns after $60$, $192$, $225$, and $435$ generations, respectively. As soon as the simulation starts, the rings start moving due to the predator-prey interactions. However, the antipredator response of prey groups of species $1$ hampers the advance of species $3$. According to Fig.~\ref{fig4b}, the consequence is that:
\begin{enumerate}
\item the red ring enlarges because individuals of species $1$ consume organisms of species $2$ without opposition and defend themselves against predation;
\item the light blue ring shortens because organisms of species $3$ do not perform the antipredator tactic but suffer the resistance of individuals of species $1$;
\item the purple ring width is approximately constant because its organisms neither resist predation nor suffer any resistance.
\end{enumerate}
To compute the temporal change in each ring width, we consider that the area occupied by species $i$ is defined by the total number of organisms of species $i$. Therefore,
\begin{equation}
\dot{\delta_i}\,=\,\frac{\dot{I_{i}}}{\sqrt{\mathcal{N}}},
\end{equation}
where $\delta_i$ is the width of the ring occupied by species $i$, with $i=1,2,3$; the dot stands for the time derivative and $\sqrt{\mathcal{N}}$ is the torus cross-section perimeter. We calculated the time variation of each ring width for the simulation presented in Fig.~\ref{fig4} using the results for $t\,\leq\,200$ generations, the period that precedes the pattern formation. In this period, the ring widths vary linearly in time: $\dot{\delta_1}\,\approx\,0.95$, $\dot{\delta_2}\,\approx\,0$, and $\dot{\delta_3}\,\approx\,-\,0.95$ grid points per generation. The narrowing of the ring of species $3$ continues until it is so thin that stochastic fluctuations allow organisms of species $1$ to pass through without being caught, as shown in Fig.~\ref{fig4c}. Once individuals of species $1$ reach the purple ring, they multiply because of the abundance of prey. The outcomes show that, from the moment organisms of all species meet in the same spatial regions (as in Fig.~\ref{fig2j}), local interactions provoke the emergence of waves that spread over the entire territory, as one sees in Figs.~\ref{fig4d} and \ref{fig4e}. The single-species spatial domains are not symmetric due to the antipredator strategy executed by organisms of species $1$, which forces species $2$ to propagate in shorter wavefronts than the other species.

\section{The influence of the radius of the antipredator response}
In the previous section, we found that the local antipredator response influences the predation efficiency of species $3$. Now, we aim to find how $R$ impacts the average reduction in predation capacity. For this reason, we first calculated the frequency of organisms affected differently by the prey opposition for the complete set of possible prey group sizes.
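The maximal group size $\mathcal{G}$ entering Eq.~\ref{ht2} is the number of lattice sites within Euclidean distance $R$ of the predator. A quick check, in the same illustrative spirit as the earlier sketches, reproduces the values quoted below:
\begin{verbatim}
def group_capacity(R):
    # lattice sites (dx, dy) with 0 < dx^2 + dy^2 <= R^2
    return sum(1 for dx in range(-R, R + 1) for dy in range(-R, R + 1)
               if 0 < dx * dx + dy * dy <= R * R)

print([group_capacity(R) for R in (1, 2, 3, 4)])   # [4, 12, 28, 48]
\end{verbatim}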
The magenta, yellow, green, and purple dots in Fig.~\ref{fig3c} show the frequency of individuals with predation capacity $\varepsilon_3$, for $R=1$, $R=2$, $R=3$, and $R=4$, respectively. We obtained the outcomes by running simulations in lattices with $300^2$ grid sites for a timespan of $3000$ generations. The inset figure depicts $\langle\,\varepsilon_3\,\rangle$, the mean value of $\varepsilon_3$ during the entire simulation. According to Eq.~\ref{ht2}, there may be five levels of antipredator response in the case of $R=1$; they are classified according to the group size resisting predation, $g=0,1,2,3,4$. As $R$ increases, the maximum number of prey reacting to a predator's attack grows: $\mathcal{G}=12$, $\mathcal{G}=28$, and $\mathcal{G}=48$, for $R=2$, $R=3$, and $R=4$, respectively.

Although species $3$ is, on average, weaker than the others in terms of predation capacity, the ``weakness'' is not homogeneously distributed among the organisms. For example, even though for $R=3$ we find $\langle\,\varepsilon_3\,\rangle = 0.323\,\langle\,\varepsilon_1\,\rangle =0.323\,\langle\,\varepsilon_2\,\rangle$, this is not a constraint for the majority of the organisms. Figure~\ref{fig3c} unveils that there is a low frequency of individuals whose predation capacity is severely reduced by the antipredator response. Moreover, Fig.~\ref{fig3c} reveals that the effects of the antipredator behaviour increase if the resistance is less localised. This agrees with a recent publication claiming that the antipredator response is more efficient at reducing the predation risk for larger $R$ \cite{anti2}.

\section{Characteristic Length Scales}
\begin{figure}[h] \centering
\begin{subfigure}{.4\textwidth} \centering \includegraphics[width=65mm]{figure9a} \caption{}\label{fig8a} \end{subfigure}\\
\begin{subfigure}{.4\textwidth} \centering \includegraphics[width=65mm]{figure9b} \caption{}\label{fig8b} \end{subfigure}
\caption{Mean species densities in terms of the conditioning and antipredator strength factors of species $1$. Figures a and b show the averaged results obtained from the same implementations as the outcomes presented in Figs.~\ref{fig5b} and \ref{fig5c}, respectively.}
\label{fig8}
\end{figure}
The local antipredator reaction of organisms of species $1$ unbalances the spatial rock-paper-scissors game, causing the emergence of departed single-species domains. Now, we aim to calculate the characteristic length which defines the scale of the spatial domains occupied by each species. For this reason, we first calculate the spatial autocorrelation function $C_i(r)$, with $i=1,2,3$, in terms of the radial coordinate $r$, where $r=|\vec{r}|=x+y$ is the Manhattan distance between $(x,y)$ and $(0,0)$. Let us first define the function $\phi_i(\vec{r})$ that represents the presence of an organism of species $i$ in the position $\vec{r}$ in the lattice. Calculating the mean value $\langle\phi_i\rangle$, we find the Fourier transform
\begin{equation}
\varphi_i(\vec{k}) = \mathcal{F}\,\{\phi_i(\vec{r})-\langle\phi_i\rangle\},
\end{equation}
that is used to compute the spectral densities
\begin{equation}
S_i(\vec{k}) = \sum_{k_x, k_y}\,\varphi_i(\vec{k}).
\end{equation}
The autocorrelation function is given by the normalised inverse Fourier transform
\begin{equation}
C_i(\vec{r}') = \frac{\mathcal{F}^{-1}\{S_i(\vec{k})\}}{C_i(0)}.
\end{equation}
Finally, we compute the spatial autocorrelation function for species $i$ as a function of the radial coordinate $r$:
\begin{equation}
C_i(r') = \sum_{|\vec{r}'|=x+y} \frac{C_i(\vec{r}')}{\min\left[2N-(x+y+1), (x+y+1)\right]}.
\end{equation}
The typical size of the spatial agglomerations of organisms of species $i$ is found by assuming the threshold $C_i(l_i)=0.15$, where $l_i$ is the characteristic length scale for spatial domains of species $i$.

We ran a series of $100$ simulations using lattices with $600^2$ grid points starting from different random initial conditions for $p\,=\,m\,=\,0.5$. We computed the mean autocorrelation function $C_i(r)$ employing the spatial configuration at $t=5000$ generations. First, we investigated how the scale of single-species domains changes with the range of the antipredator response $R$. In this set of simulations, we fixed $\kappa=5.0$ and $\alpha=1.0$. According to the results depicted in Fig.~\ref{fig5a}, even if the antipredator reaction is limited to $R=1$, the system undergoes pattern formation. Compared with the standard model, where organisms cannot resist predation ($R=0$), there is an increase in the typical size of the single-species domains, with species $1$ occupying larger territories. The outcomes also show that the less localised the antipredator response (larger $R$), the more extensive are the areas inhabited by individuals of a single species \cite{anti2}.

Second, considering $R=3$ and $\kappa=5.0$, we studied the dependence of the characteristic length $l_i$ on the percentage of individuals of species $1$ conditioned to perform the antipredator response. Figure~\ref{fig5b} shows that if no more than $10\%$ of the organisms can participate in the antipredator response, the increase in $l_i$ is approximately the same for every species. For $0.2\,\leq\,\alpha\,\leq\,0.6$, the outcomes reveal that species $1$ occupies the largest areas of the lattice, followed by species $2$. However, the scenario changes if more than $60\%$ of the organisms of species $1$ are conditioned: agglomerations of species $3$ grow more than clumps of species $2$.

Third, we observed how the pattern formation depends on the antipredator strength factor in the case where all organisms of species $1$ are conditioned, with $R=3$. The outcomes presented in Fig.~\ref{fig5c} show that the typical agglomeration size depends on the intensity of the antipredator response, with species $1$ filling the largest single-species domains irrespective of $\kappa$. The second-largest agglomerations are formed by organisms of species $2$ in the case of $0\,<\,\kappa\,\leq\,2.25$. In contrast, for $\kappa\,>\,2.25$, $l_2$ is the shortest.

\section{Species Predominance}
We now address the question of whether the local decrease in predation capacity gives the weakened species predominance over the others. Figures \ref{fig8a} and \ref{fig8b} depict the mean species densities averaged from the same sets of simulations presented in Fig.~\ref{fig5}. Overall, the outcomes reveal the predominance of species $1$. First, the outcomes show that species $1$ does not predominate only if less than $50\%$ of its organisms are conditioned to participate in the collective strategy; in this scenario, the spatial densities of species $1$ and $3$ are the same, as shown in Fig.~\ref{fig8a}. Second, according to Fig.~\ref{fig8b}, for $\alpha=1$, species $1$ is the most abundant irrespective of the antipredator strength factor.
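As an aside, the extraction of the characteristic lengths $l_i$ described in the previous section can be sketched compactly with numpy FFTs. The listing below is a loose illustration (shell averaging over the Manhattan radius and the threshold $C_i(l_i)=0.15$, as above), not the authors' implementation:
\begin{verbatim}
import numpy as np

def characteristic_length(lattice, species, threshold=0.15):
    # indicator of species presence, with its mean removed
    phi = (lattice == species).astype(float)
    phi -= phi.mean()
    n = lattice.shape[0]
    # power spectrum and normalised autocorrelation (Wiener-Khinchin)
    S = np.abs(np.fft.fft2(phi)) ** 2
    C = np.real(np.fft.ifft2(S))
    C /= C[0, 0]
    # average C over shells of constant Manhattan distance r = x + y
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    r = xs + ys
    Cr = np.array([C[r == k].mean() for k in range(n)])
    # l_i: first radius at which the autocorrelation drops below threshold
    below = np.where(Cr < threshold)[0]
    return below[0] if below.size else n

# example: characteristic lengths on a final lattice configuration
# print([characteristic_length(lattice, s) for s in (1, 2, 3)])
\end{verbatim}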
Although species $3$ is not preponderant, our findings also show that its weakening brings positive results in terms of population growth. The outcomes reveal that the stronger the local opposition faced by its individuals, the higher the density of species $3$. However, under no circumstances does the weaker species predominate in a locally unbalanced cyclic model; the preponderance always belongs to the weaker species' prey.
\begin{figure} \centering
\includegraphics[width=51mm]{figure10}
\caption{Ternary diagram of the species densities for various $\kappa$. Each orbit shows $\rho_i$ from a single realisation running in grids with $300^2$ sites, for $R=3$, $\alpha=1.0$, and $p=m=0.5$.}
\label{fig9}
\end{figure}
\begin{figure} \centering
\includegraphics[width=68mm]{figure11}
\caption{Coexistence probability as a function of the mobility probability $m$ for various $\kappa$. The results were obtained by running sets of $1000$ simulations in lattices with $102^2$ grid points, running until $102^2$ generations, assuming $p=1-m$, $R=3$, and $\alpha=1.0$.}
\label{fig10}
\end{figure}

\section{Coexistence Probability}
Because the antipredator behaviour of organisms of species $1$ unbalances the cyclic spatial rock-paper-scissors model, species coexistence may be jeopardised. To investigate this issue, we first observed how the spatial species densities oscillate for various $\kappa$. The ternary diagram depicted in Figure \ref{fig9} shows the orbits of $\rho_i$ for $\kappa=1.5$ (yellow line), $\kappa=3.0$ (purple line), $\kappa=4.5$ (blue line), $\kappa=6.0$ (green line), and $\kappa=7.5$ (red line). The simulations ran in lattices with $300^2$ sites until $3000$ generations, for $R=3$, $\alpha=1.0$, and $p=m=0.5$. The outcomes show that the species density oscillations in the first stage of the simulations increase with $\kappa$, indicating that the stronger the antipredator reaction of organisms of species $1$, the more the biodiversity may be threatened.

We then investigated the species coexistence as a function of the mobility probability for the cases of Fig.~\ref{fig9}. To this purpose, we implemented different random initial conditions for sets of $1000$ simulations in lattices with $102^2$ grid points for $ 0.05\,<\,m\,<\,0.95$, assuming $R=3$ and $\alpha=1.0$; the predation probability was set to be $p\,=\,1-m$; the simulations ran for a timespan of $102^2$ generations. Coexistence occurs if at least one individual of every species is present at the end of the simulation, i.e., $I_i (t=102^2) \neq 0$ for $i=1,2,3$. This means that if at least one species is absent, the simulation results in extinction. The coexistence probability is defined as the fraction of implementations resulting in coexistence.

Figure \ref{fig10} depicts the coexistence probability as a function of $m$ for $\kappa\,=\,1.5$ (yellow line), $\kappa\,=\,3.0$ (purple line), $\kappa\,=\,4.5$ (blue line), $\kappa\,=\,6.0$ (green line), and $\kappa\,=\,7.5$ (red line). Overall, biodiversity is threatened because the local antipredator response unbalances the spatial cyclic model. The results show that the effects are accentuated if organisms move with high probability. Moreover, the larger $\kappa$, the more jeopardised the biodiversity is.

\section{Discussion and Conclusions}
\label{sec6}
We investigated the effects of a local antipredator response performed by one of the species in the spatial version of the rock-paper-scissors model.
The antipredator reaction is triggered whenever a predator tries to consume one of the individuals belonging to the prey group surrounding the predator. The decrease in predation capacity depends on the prey group size and on the antipredator strength of each prey. Moreover, participation in the collective reaction depends on the organism's physical and cognitive abilities to properly perform the defence tactic. Our findings show that even if only a small number of organisms can perform the local behavioural tactic, there are clear benefits for the species that performs the antipredator response. However, the species whose organisms perform the antipredator strategy is not the only one to profit: due to the cyclic predator-prey interactions, the species directly weakened by the local antipredator response also benefits. If less than half of the individuals are conditioned, both species share territorial dominance. Otherwise, the prevalence belongs to the species whose organisms behave defensively. Our results reveal that the local antipredator response unbalances the spatial rock-paper-scissors game differently than if the species were weakened for reasons other than local ones. Suppose all organisms of one of the species are intrinsically weak in terms of effective predation probability, with the frailty being due to evolutionary characteristics inherent to the species or to external interference provoked by disease or seasonal circumstances. In that case, the reduced predation capacity allows its prey to multiply everywhere, which in turn protects the weak species against its own predator. In those scenarios, the result is a predominance of the weak species \cite{weakest,PedroWeak,uneven}. Here, the complexity of the local interactions leads to regions with different prey concentrations. In patches with more prey, the local antipredator response is more intense, resulting in a sharp drop in the organism's predation capacity; thus, the probability of predators invading the prey territory is low. On the contrary, in low prey density regions, a predator is less affected; therefore, the chances of consuming prey are higher, increasing the local predator population. Besides organisms with low and high predation capacity being concentrated in different patches, our results show that the number of organisms with low predation capacity is much smaller. If the fraction of organisms conditioned to perform the antipredator strategy is small, the total areas occupied by prey and predators are approximately the same. However, if the fraction of conditioned organisms is greater than $50\%$, the fraction of territory occupied by predators grows faster than the prey-dominated areas. Therefore, the weaker species does not dominate under any circumstances if the weakening is caused locally by the antipredator response. Despite the indisputable benefits of the collective antipredator behavioural strategy, biodiversity may be jeopardised if the antipredator resistance is too strong. This happens because of the oscillations of the species densities in the transient pattern-formation stage. As expected, the chances of the local defensive strategy affecting coexistence are accentuated if individuals move with higher mobility probabilities. Our outcomes may also be helpful to ecologists modelling biological systems where the consequences of behavioural strategies in local interactions play a vital role in biodiversity conservation. \section*{Acknowledgments} We thank CNPq, ECT, Fapern, and IBED for financial and technical support. \bibliographystyle{elsarticle-num}
\section{Introduction} Quantum key distribution (QKD) allows for the establishment of a shared secret key between two parties, Alice and Bob, secure against computationally unbounded adversaries (whom we refer to as Eve). These protocols have advanced rapidly, leading to both a rich theory and practical commercial systems \cite{qkd-survey-scarani,qkd-survey-pirandola,amer2021introduction}. Quantum conference key agreement (QCKA) protocols are designed to allow multiple parties to establish a common, shared, secret key secure against computationally unbounded adversaries. Starting from early work in this field \cite{group-key-first,group-key-newer}, QCKA protocols have advanced substantially with new protocols and security proofs \cite{grasselli2019conference,wu2016continuous,ottaviani2019modular}; QCKA is also experimentally feasible \cite{proietti2021experimental}. Interestingly, it has been shown that there are some scenarios where such multiparty protocols hold an advantage over the naive use of multiple two-party protocols run in parallel \cite{group-key-newer}. For a recent survey on quantum conference key agreement protocols and the state of the art in security proofs, the reader is referred to \cite{murta2020quantum}. High-dimensional quantum cryptography has been shown to exhibit numerous advantages over qubit-based protocols, especially in two-party QKD \cite{bechmann2000quantum,chau2005unconditionally,sheridan2010security,sasaki2014practical,chau2015quantum,vlachou2018quantum,cerf2002security,nikolopoulos2005security,iqbal2020high,nikolopoulos2006error,yin2018improved,doda2021quantum}. Encouraged by this, it is worth investigating whether high-dimensional states can benefit QCKA. To our knowledge, only one high-dimensional QCKA protocol exists, introduced in \cite{pivoluska2018layered}; however, no rigorous finite-key security analysis exists for it. (The work in \cite{pivoluska2018layered} developed layered QKD protocols and was not concerned with an explicit finite-key analysis of this particular QCKA protocol; in fact, the analysis done in this paper may be useful in proving security of those other protocols introduced in \cite{pivoluska2018layered}, though we leave that as interesting future work.) In this work, we consider a high-dimensional QCKA protocol and prove its security against arbitrary, general attacks in the finite-key setting. The protocol we analyze is an extension of the qubit-based protocol from \cite{finite-ghz-bb84} to higher dimensions and also a specific instance of a protocol introduced in \cite{pivoluska2018layered}. For the security proof, we utilize the quantum sampling framework introduced by Bouman and Fehr in \cite{sample}, along with proof techniques we developed in \cite{krawec2019quantum} to derive sampling-based entropic uncertainty relations. Our proof, though using these two frameworks as a foundation, introduces several new methods which may also be useful when analyzing other quantum cryptographic protocols, both those involving two users and those for multiple users, especially in higher dimensions. Finally, we evaluate the performance of this protocol in a variety of scenarios, showing some very interesting behavior and shedding new light on the benefits of high-dimensional quantum states. In particular, we show that, as the dimension of the quantum signal increases, the noise tolerance also increases.
Interestingly, the key rate also increases beyond what would be possible by simply running multiple, lower-dimensional, protocols in parallel. This shows that high-dimensional states can greatly benefit QCKA protocols. Our contributions in this work are not only in developing a security proof for a high-dimensional QCKA protocol, but also in showing even more benefits of high-dimensional quantum states when applied to quantum cryptography. Our methods may also spur future research in this area, as our proof techniques may be highly adaptable to other scenarios. \subsection{Notation and Definitions} We begin with some notation and definitions that we will use in this work. Let $d \in \mathbb{N}$; then we write $\mathcal{A}_d$ for a $d$-character alphabet with a distinguished $0$ element. Given a word $q \in \mathcal{A}_d^n$ and a subset $t \subset \{1, \cdots, n\}$, we write $q_t$ to mean the substring of $q$ indexed by $t$; we use $q_{-t}$ to mean the substring of $q$ indexed by the complement of $t$. We write $w(q)$ for the relative Hamming weight of $q$, namely $w(q) = \frac{|\{i \text{ } : \text{ } q_i \ne 0\}|}{n}$, that is, the number of characters in $q$ that are not zero, divided by the length of $q$. Given two words $x, y$ in this alphabet, we write $xy$ to mean the concatenation of $x$ and $y$. Finally, given $a,b$, numbers between $0$ and $d-1$, we write $a +_d b$ to mean the addition of $a$ and $b$ modulo $d$. We use $\mathcal{H}_d$ to mean a Hilbert space of dimension $d$. The standard computational basis will be denoted $Z = \{\ket{0}, \ket{1}, \cdots, \ket{d-1}\}$. If we are referring to an alternative basis we will write the basis label as a superscript. One important basis we will use is the Fourier basis, consisting of elements $\mathcal{F} = \{\ket{0}^\mathcal{F}, \cdots, \ket{d-1}^{\mathcal{F}}\}$, where: \[ \ket{j}^\mathcal{F} = \frac{1}{\sqrt{d}}\sum_k \exp(2\pi i j k /d) \ket{k}. \] If given a word $q \in \mathcal{A}_d^n$, we write $\ket{q}$ to mean $\ket{q_1}\otimes\cdots\otimes \ket{q_n}$. Similarly, we write $\ket{q}^\mathcal{F}$ to mean $\ket{q_1}^\mathcal{F}\otimes\cdots\otimes\ket{q_n}^{\mathcal{F}}$. Note that if there is no superscript, then $\ket{q}$ is assumed to be in the computational $Z$ basis. Finally, given a pure state $\ket{\psi}$, we write $\kb{\psi}$ to mean $\ket{\psi}\bra{\psi}$. A density operator is a positive semi-definite Hermitian operator of unit trace acting on some Hilbert space. If $\rho_{AE}$ acts on Hilbert space $\mathcal{H}_A\otimes\mathcal{H}_E$, then we write $\rho_A$ to mean the operator resulting from tracing out the $E$ system, namely $\rho_A = tr_E\rho_{AE}$; similarly for other, or multiple, systems. The Shannon entropy of a random variable $X$ is denoted $H(X)$. The $d$-ary entropy function is denoted $H_d(x)$, for $x \in [0,1]$, and is defined to be: \[ H_d(x) = x\log_d(d-1) - x\log_d x - (1-x)\log_d (1-x). \] Note that when $d=2$ this is simply the binary Shannon entropy. Given a density operator $\rho_{AE}$, the conditional \emph{quantum min entropy} is defined to be \cite{renner2008security}: \begin{equation} H_\infty(A|E)_\rho = \sup_{\sigma_E}\max\{\lambda\in\mathbb{R} \text{ } : \text{ } 2^{-\lambda}I_A\otimes\sigma_E - \rho_{AE} \ge 0\}, \end{equation} where the supremum is over all density operators acting on the $E$ system. If $\rho = \kb{\psi}$ is a pure state, then we often write $H_\infty(A|E)_\psi$.
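These definitions are easy to render executable. The following short Python sketch (our illustration, not part of the protocol) implements the $d$-ary entropy, the min entropy of a state with trivial side information, and the Fourier basis just defined.
\begin{verbatim}
# Illustrative helpers matching the definitions above (not from the paper).
import numpy as np

def H_d(x, d):
    """d-ary entropy: x log_d(d-1) - x log_d x - (1-x) log_d(1-x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0 if x <= 0.0 else np.log(d - 1) / np.log(d)
    ln_d = np.log(d)
    return (x * np.log(d - 1) - x * np.log(x) - (1 - x) * np.log(1 - x)) / ln_d

def h_min(rho):
    """Min entropy of rho_A when E is trivial: -log2 of the largest eigenvalue."""
    lam = np.linalg.eigvalsh(rho)        # rho: Hermitian, unit trace
    return -np.log2(lam.max())

def fourier_basis(d):
    """Columns are the Fourier-basis kets |j>^F of H_d."""
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return (np.exp(2j * np.pi * j * k / d) / np.sqrt(d)).T

print(H_d(0.1, 2))            # binary Shannon entropy h(0.1) ~ 0.469
print(h_min(np.eye(4) / 4))   # maximally mixed 4-dim state: 2 bits
\end{verbatim}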
Given $\rho_{AE}$, we write $H_\infty(A_Z|E)_\rho$ to mean the min entropy of the resulting state following a measurement of the $A$ register in the $Z$ basis. There are many important properties of quantum min entropy we will use. In particular, if the $E$ system is trivial or independent of the $A$ system, then $H_\infty(A)_\rho = -\log_2\max\lambda$, where the maximum is over all eigenvalues $\lambda$ of $\rho_A$. Given a state $\rho_{AEC} = \sum_{c=0}^Mp_c\rho_{AE}^{(c)}\otimes \kb{c}$ (i.e., the $C$ register is classical), then: \begin{equation}\label{eq:qc-state} H_\infty(A|EC)_\rho \ge \min_cH_\infty(A|E)_{\rho^{(c)}}. \end{equation} An important result proven in \cite{sample}, based on a lemma in \cite{renner2008security}, is the following, which allows one to compute the min entropy of a superposition state based on the min entropy of a suitable mixture state: \begin{lemma}\label{lemma:superposition} (From \cite{sample}): Let $Z$ and $X$ be two orthonormal bases of $\mathcal{H}_d$. Then for any pure state $\ket{\psi}_{AE} = \sum_{i\in J}\alpha_i\ket{i}^X\otimes\ket{E_i}$, with $J \subset \mathcal{A}_d^N$, it holds that: \[ H_\infty(A_Z|E)_\psi \ge H_\infty(A_Z|E)_\rho - \log_2|J|, \] where $\rho_{AE} = \sum_{i\in J}|\alpha_i|^2\kb{i}^X\otimes\kb{E_i}$, and where the entropies above are computed on the state following a $Z$ basis measurement. \end{lemma} Quantum min entropy is a vital resource in QKD security. Indeed, given a classical-quantum state $\rho_{AE}$, the amount of uniform, independent randomness that may be extracted from the $A$ register after a privacy amplification process is a function of the conditional min entropy. In particular, let $\sigma_{KE}$ be the resulting state after privacy amplification (a process of hashing the $A$ register to a size of $\ell$ bits using a randomly chosen two-universal hash function); then it was shown in \cite{renner2008security} that: \begin{equation}\label{eq:PA} \trd{\sigma_{KE} - I/2^\ell \otimes \sigma_E} \le 2^{-\frac{1}{2}(H_\infty(A|E)_\rho - \ell)}. \end{equation} In our security proof, we will utilize a quantum sampling framework originally introduced in 2010 by Bouman and Fehr \cite{sample} and used by us recently to prove novel sampling-based entropic uncertainty relations \cite{krawec2019quantum,krawec2020new} and proofs of security for high-dimensional BB84 \cite{yao2022quantum}. We review some of the terminology and results from \cite{sample} here; for more information on these results, the reader is referred to that original reference. Fix $d \ge 2$ and $N \ge 1$. A \emph{classical sampling strategy} is a tuple $(P_T, f, g)$ where $P_T$ is a distribution over all subsets of $\{1, \cdots, N\}$ and $f, g: \mathcal{A}_d^* \rightarrow \mathbb{R}$. Given $q \in \mathcal{A}_d^N$, the strategy will first choose $t$ according to $P_T$; it will then observe $q_t$ and evaluate $f(q_t)$. This evaluation should be a ``guess'' as to the value of some target function, $g$, evaluated on the \emph{unobserved} portion. Namely, for a good sampling strategy, with high probability over the choice of subset $t$, it should hold that $f(q_t)$ is $\delta$-close to $g(q_{-t})$ for a given $\delta > 0$. More formally, fix a subset $t$ with $P_T(t) > 0$.
We define the set of ``good'' words $\mathcal{G}_t$ to be: \begin{equation}\label{eq:good-words} \mathcal{G}_t = \{q \in \mathcal{A}_d^N \text{ } : \text{ } |f(q_t) - g(q_{-t})| \le \delta\}. \end{equation} Note that, given $q \in \mathcal{G}_t$, if subset $t$ were to be chosen by the sampling strategy, it is guaranteed that the strategy will succeed (the guess will be $\delta$-close to the target value). The \emph{error probability} of the sampling strategy, then, is: \[ \epsilon^{cl} = \max_{q\in\mathcal{A}_d^N}Pr\left(q \not \in \mathcal{G}_t\right), \] where the probability is over all subsets chosen according to $P_T$. One sampling strategy we will need later is summarized in the following lemma: \begin{lemma}\label{lemma:sample} (From \cite{sample}): Let $\delta > 0$ and $m \le N/2$. Define $P_T$ to be the uniform distribution over all subsets of $\{1, \cdots, N\}$ of size $m$. Define $f(x) = g(x) = w(x)$. Then: \[ \epsilon^{cl} \le 2\exp\left(\frac{-\delta^2m N}{N+2}\right). \] \end{lemma} These definitions may be promoted to the quantum case. Fixing a sampling strategy and a $d$-dimensional basis $\mathcal{B}$, we define $\text{span}(\mathcal{G}_t) = \text{span}(\ket{q}^\mathcal{B} \text{ } : \text{ } q \in \mathcal{G}_t)$. Note that, for any $\ket{\psi} \in \text{span}(\mathcal{G}_t)\otimes\mathcal{H}_E$, if a measurement in the $\mathcal{B}$ basis were made on those qudit systems indexed by $t$, resulting in outcome $q \in \mathcal{A}_d^{|t|}$, the collapsed post-measured state must be of the form: \[ \ket{\psi_t^q} = \sum_{x\in J_q}\alpha_x\ket{x}^\mathcal{B}\otimes\ket{E_x}, \] where $J_q = \{x \in \mathcal{A}_d^{N-|t|} \text{ } : \text{ } |f(q) - g(x)| \le \delta\}$. The main result from \cite{sample} may then be stated as follows: \begin{theorem}\label{thm:sample} (From \cite{sample}, though reworded for our application in this work): Let $(P_T, f, g)$ be a classical sampling strategy with error probability $\epsilon^{cl}$ for a given $\delta > 0$ and let $\ket{\psi}_{AE}$ be a quantum state where the $A$ register lives in a Hilbert space of dimension $d^N$. Then, there exist ideal states $\ket{\phi^t} \in \text{span}(\mathcal{G}_t)\otimes\mathcal{H}_E$ (with respect to some given, fixed, $d$-dimensional basis $\mathcal{B}$) such that: \begin{equation}\label{eq:ideal-real} \frac{1}{2}\trd{\sum_tP_T(t)\kb{t}\otimes\left(\kb{\psi} - \kb{\phi^t}\right)} \le \sqrt{\epsilon^{cl}}, \end{equation} where the above summation is over all subsets $t\subset\{1, \cdots, N\}$. \end{theorem} Note that the above is a slight rewording of the main result from \cite{sample}. For a proof that Theorem \ref{thm:sample} follows from the main result in \cite{sample}, the reader is referred to \cite{yao2022quantum}. \section{Protocol} The protocol we consider is a high-dimensional variant of the QCKA protocol originally introduced and analyzed in \cite{finite-ghz-bb84}. It is also a specific instance of a protocol introduced for a layered QKD system in \cite{pivoluska2018layered} (though without a complete proof of security). We assume there are $p$ Bobs and one Alice, all of whom wish to agree on a shared secret group key. The protocol begins by having Alice prepare the following high-dimensional GHZ state: \[ \ket{\psi_0} = \frac{1}{\sqrt{d}}\sum_{a=0}^{d-1} \ket{a, \cdots, a}_{AB_1\cdots B_p}. \] Above, $d$ is the dimension of a single system ($d=2$ in the protocol analyzed in \cite{finite-ghz-bb84}).
The $B_i$ system is sent to the $i$'th Bob while Alice retains the $A$ register. At random, Alice and the $p$ Bobs will measure their registers in the Fourier basis $\mathcal{F}$, resulting in outcome $q_{AB_1\cdots B_p} \in \mathcal{A}_d^{p+1}$. Otherwise, if Alice and the $p$ Bobs choose not to measure in the Fourier basis, they will measure in the computational basis, the result of which will be used to add $\log_2 d$ bits to their raw key. Note that the choice of whether to measure in the Fourier basis or the computational $Z$ basis may be made randomly by all parties (discarding events when choices are not consistent) or by using a pre-shared secret key (as was done in \cite{finite-ghz-bb84}). The above process is repeated for a freshly prepared and sent $\ket{\psi_0}$ until a raw key of sufficient length has been established. Note that, in the original qubit-based version introduced in \cite{finite-ghz-bb84}, the Hadamard $X$ basis was used instead of the Fourier basis explicitly; however, the two are equivalent in dimension two, and in higher dimensions we use the Fourier basis for this test measurement. The protocol here generalizes the one from \cite{finite-ghz-bb84} to higher dimensions, and when $d=2$ they are equivalent protocols. Interestingly, unlike standard BB84 \cite{QKD-BB84} (or, rather, the entanglement-based version E91 \cite{QKD-E91}), measuring in an alternative, non-computational basis cannot lead to a correlated secret key digit, as the results will not be identical for all parties. However, the Fourier basis measurement can be used to test for errors, leaving the $Z$ basis measurement alone for key distillation. Note that, if there is no noise in the channel, it should hold that whenever parties measure in the $\mathcal{F}$ basis, the results sum to $0$ modulo $d$, namely: $q_A +_d q_{B_1} +_d\cdots +_d q_{B_p} = 0$; any non-zero sum will be considered noise and factored into our key-rate analysis. That this is true is easy to see. Indeed, converting $\ket{\psi_0}$ to the Fourier basis yields: \[ \ket{\psi_0} = \frac{1}{\sqrt{d}}\sum_{a=0}^{d-1}\left(\sum_{j_0,\cdots, j_p\in\mathcal{A}_d}\frac{1}{\sqrt{d^{p+1}}}\exp(-2\pi i (j_0+\cdots+j_p)a/d)\ket{j_0,\cdots,j_p}^\mathcal{F}\right). \] Now, if $j_0+\cdots + j_p = \lambda \cdot d$ for some $\lambda \in \mathbb{Z}$, then the probability of observing that particular $\ket{j_0,\cdots,j_p}^\mathcal{F}$ is simply: \[ \frac{1}{d^{p+2}}\left|\sum_{a=0}^{d-1}\exp(-2\pi i \lambda a)\right|^2 = \frac{1}{d^{p+2}}\times d^2 = \frac{1}{d^p}. \] Since there are exactly $d^p$ tuples $j_0,\cdots, j_p \in \mathcal{A}_d$ whose sum modulo $d$ is zero, it follows that the only observable values in the Fourier basis must sum to a number divisible by the dimension $d$. This proves the protocol is correct: namely, if the source is ideal, parties will distill a correlated key and not abort, since their test measurement will result in the prescribed all-zero string. Following the establishment of the raw key, Alice and the $p$ Bobs will run a pairwise error correction protocol followed by a standard privacy amplification protocol. Following error correction, but before privacy amplification, Alice will choose a random two-universal hash function $f$, the output size of which we take to be $\log_2\frac{1}{\epsilon_{EC}}$ bits (for user-specified $\epsilon_{EC}$), and broadcast $f(A)$, where $A$ is her raw key.
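The correctness argument above can also be verified numerically. The following self-contained sketch (ours; the parameter choices $d=3$, $p=2$ are arbitrary assumptions) expands $\ket{\psi_0}$ in the Fourier basis and checks that only outcomes summing to $0 \bmod d$ occur, each with probability $1/d^p$.
\begin{verbatim}
# Numerical sanity check (illustrative) of the sum-zero property of |psi_0>.
import numpy as np
from itertools import product

d, p = 3, 2                          # dimension and number of Bobs (assumed)
n_parties = p + 1

psi = np.zeros(d ** n_parties, dtype=complex)
for a in range(d):                   # |psi_0> = (1/sqrt(d)) sum_a |a,...,a>
    psi[sum(a * d ** k for k in range(n_parties))] = 1 / np.sqrt(d)

F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
U = np.array([[1.0 + 0j]])
for _ in range(n_parties):           # apply F^dagger to every register
    U = np.kron(U, F.conj().T)
amps = U @ psi

for outcome in product(range(d), repeat=n_parties):
    idx = sum(j * d ** (n_parties - 1 - k) for k, j in enumerate(outcome))
    prob = abs(amps[idx]) ** 2
    if prob > 1e-12:
        assert sum(outcome) % d == 0          # only sum-zero outcomes occur
        assert np.isclose(prob, d ** (-p))    # each with probability 1/d^p
print("only sum-zero Fourier outcomes occur, each with probability 1/d^p")
\end{verbatim}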
Each Bob will locally compare the result of running his version of the raw key through this hash function, and if the digest does not match, all parties abort. This ensures that, except with probability at most $\epsilon_{EC}$, parties can be assured that error correction has succeeded. This, of course, leaks an additional $\log_2\frac{1}{\epsilon_{EC}}$ bits, which must be deducted from the final secret key size. We will comment more on error correction later when evaluating our key-rate bound. \section{Security Proof} To prove security of this protocol, we analyze the security of an equivalent entanglement-based version. Here, instead of having Alice prepare and send a quantum state, we allow Eve the ability to create any arbitrary initial state, sending part to Alice and the other parts to the $p$ Bobs, while also potentially maintaining a private entangled ancilla. Clearly, security in this case will imply security of the prepare-and-measure version discussed in the previous section. We also use as a foundation a proof methodology we introduced in \cite{krawec2019quantum}, though making several modifications for the multi-party protocol being analyzed here. Our proof of security, at a high level, proceeds in three steps: first, we define and analyze an appropriate classical sampling strategy allowing us to use Theorem \ref{thm:sample}; second, we analyze the ideal states produced by that theorem; and third, we promote that ideal-case analysis to the real state. \textbf{Entanglement Based Protocol -} Let $\ket{\psi}\in \mathcal{H}_A\otimes\mathcal{H}_{B_1}\otimes\cdots\otimes\mathcal{H}_{B_p}\otimes\mathcal{H}_E$ be the state Eve prepares, where $\mathcal{H}_A \cong \mathcal{H}_{B_i} \cong \mathcal{H}_d^{\otimes N}$. Here $N$ is the user-specified number of rounds used by the protocol and is a parameter users may optimize. Ideally, $\ket{\psi} = \ket{\psi_0}^{\otimes N}$. At this point, the users choose a random subset $t \subset \{1, 2, \cdots, N\}$ of size $m < N/2$ for sampling. This can be done by having Alice choose the subset and send it to the Bobs (the option we assume here) or by using a small pre-shared key (the option used in \cite{finite-ghz-bb84}). Each party will measure their respective $d$-dimensional signals, indexed by $t$, in the $d$-dimensional Fourier basis, $\mathcal{F}$, resulting in outcome ${q} = q_Aq_{B_1}\cdots q_{B_p} \in \mathcal{A}_d^{m(p+1)}$. Here, each $q_A, q_{B_1}, \cdots, q_{B_p}$ is an $m$-character string which we may enumerate as $q_A = q_A^1\cdots q_A^m$ and $q_{B_i} = q_{B_i}^1\cdots q_{B_i}^m$. Let $s_i({q}) = q_A^i +_d q_{B_1}^i +_d \cdots +_d q_{B_p}^i$. That is, $s_i$ is the sum, modulo the dimension $d$, of all user measurement outcomes for signal $i$. Also, define $s(q) = s_1(q)\cdots s_m(q) \in \mathcal{A}_{d}^m$. If the source $E$ were honest, it should be that $w(s(q)) = 0$, since this will be the case in the event Eve prepared copies of $\frac{1}{\sqrt{d}}\sum_{a=0}^{d-1}\ket{a, a, \cdots, a}_{AB_1\cdots B_p}$ as discussed earlier. \textbf{Step 1: Classical Sampling Strategy Analysis -} We now wish to use Theorem \ref{thm:sample} to analyze the security of this protocol. To do so, we require a suitable classical sampling strategy which corresponds to the sampling done by the actual protocol, and a bound on its error probability.
Consider the following classical sampling strategy: given a word $q = q^0q^1q^2\cdots q^p \in \mathcal{A}_d^{(p+1)\cdot N}$ (i.e., each $q^j \in \mathcal{A}_d^N$), first choose a subset $t \subset \{1, \cdots, N\}$ of size $m \le N/2$ and observe $q_t = q_t^0q^1_tq^2_t\cdots q^p_t$ (namely, one observes the $t$ portion of each of the $p+1$ strings). From this, compute $f(q_t) = w(s(q_t))$ to estimate the value of $g(q_{-t}) = w(s(q_{-t}))$. Putting this into the notation introduced earlier, we have the set of ``good'' words (see Equation \ref{eq:good-words}): \[ \mathcal{G}_t = \{q \in \mathcal{A}_d^{(p+1)\cdot N} \text{ } : \text{ } |w(s(q_t)) - w(s(q_{-t}))| \le \delta\}. \] This is exactly the sampling strategy we wish to use in our QCKA protocol. Users will observe a value based on their measurement in the Fourier basis; in particular, they observe the number of outcomes that do not sum to $0$ modulo $d$. We wish to argue that the remaining, unmeasured portion satisfies a similar restriction in the $\mathcal{F}$ basis, thus placing a constraint on the form of the state Eve prepared, needed to compute the min entropy later. In order to use Theorem \ref{thm:sample}, needed to construct suitable ideal quantum states, we require a bound on the error probability of this classical sampling strategy. In particular, we require: \[ \epsilon^{cl} = \max_{q\in\mathcal{A}_d^{(p+1)N}}Pr\left(q \not\in \mathcal{G}_t\right). \] We claim: \begin{equation} \epsilon^{cl} \le 2\exp\left(\frac{-\delta^2m N}{N+2}\right). \end{equation} Let $\widetilde{\mathcal{G}}_t = \{q \in \mathcal{A}_d^N \text{ } : \text{ } |w(q_t) - w(q_{-t})| \le \delta\}$. Note that, by Lemma \ref{lemma:sample}, it holds that: \[ \tilde{\epsilon}^{cl} = \max_{\tilde{q}\in\mathcal{A}_d^N}Pr(\tilde{q}\not\in\widetilde{\mathcal{G}}_t) \le 2\exp\left(\frac{-\delta^2m N}{N+2}\right). \] Pick $q \in \mathcal{A}_d^{(p+1)N}$ and let $\tilde{q} = s(q)$. Then, it is clear that if $q \not \in \mathcal{G}_t$ then $\tilde{q} \not \in \widetilde{\mathcal{G}}_t$ for any subset $t$. Thus, for every $q\in\mathcal{A}_d^{(p+1)N}$, it holds that $Pr(q\not\in \mathcal{G}_t) \le Pr(\tilde{q}\not\in\widetilde{\mathcal{G}}_t)$, from which the claim follows. \textbf{Step 2: Ideal State Analysis -} We now return to the security analysis of the protocol. Let $\epsilon > 0$ be given (it will, as we discuss later, determine the security level of the secret key). From Theorem \ref{thm:sample}, using the above sampling strategy with respect to the Fourier basis, there exists an ideal state of the form $\frac{1}{T}\sum_t\kb{t} \otimes \kb{\phi^t}$, where $T = {N \choose m}$ and: \begin{equation} \ket{\phi^t} \in \text{span}\{\ket{q}^\mathcal{F} \text{ } : \text{ } q \in \mathcal{A}_d^{(p+1)N} \text{ and } |w(s(q_t)) - w(s(q_{-t}))|\le\delta\}. \end{equation} If we set \begin{equation} \delta = \sqrt{\frac{(m+n+2)\ln(2/\epsilon^2)}{m(m+n)}}, \end{equation} where $n = N - m$ denotes the number of unmeasured rounds, then the real and ideal states are $\epsilon$-close in trace distance (on average over the subset choice, as shown in Equation \ref{eq:ideal-real}), the real state being $\frac{1}{T}\sum_{t}\kb{t}\otimes\kb{\psi}$. We first analyze the ideal case and then use this analysis to argue about the security of the actual input state given by Eve.
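As a quick numerical illustration of this choice of $\delta$ (ours; the values of $\epsilon$ and $N$ are arbitrary assumptions), note that it makes $\epsilon^{cl} = \epsilon^2$ exactly, so that $\sqrt{\epsilon^{cl}} = \epsilon$:
\begin{verbatim}
# Illustrative computation of delta and the classical sampling error bound.
import math

eps = 1e-36
N = 10 ** 7                 # total rounds (assumed value)
m = int(0.07 * N)           # sampled rounds
n = N - m                   # unmeasured rounds, so N = m + n

delta = math.sqrt((m + n + 2) * math.log(2 / eps ** 2) / (m * (m + n)))
eps_cl = 2 * math.exp(-delta ** 2 * m * N / (N + 2))

print(delta)                                                # ~1.5e-2 here
print(math.isclose(math.sqrt(eps_cl), eps, rel_tol=1e-9))   # True
\end{verbatim}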
In the ideal case, the event of choosing subset $t$, measuring those systems in the Fourier basis, and observing outcome ${q}\in\mathcal{A}_d^{(p+1)m}$ causes the ideal state to collapse to: \begin{equation}\label{eq:ideal-post-measure} \ket{\phi_q^t} = \sum_{x\in J_q}\alpha_x\ket{x}^\mathcal{F}\otimes\ket{E_x}, \end{equation} where: \begin{align} J_q &= \{x_Ax_{B_1}\cdots x_{B_p} \in \mathcal{A}_d^{(p+1)n} \text{ } : \text{ } |w(s(x)) - w(s(q))|\le\delta\}\notag\\\notag\\ &= \left\{x_A^1\cdots x_A^n x_{B_1}^1\cdots x_{B_1}^n\cdots x_{B_p}^1\cdots x_{B_p}^n \text{ such that }\right.\\ & \left.|w( [x_A^1 +_d \cdots +_d x_{B_p}^1]\cdots [x_A^n +_d \cdots +_d x_{B_p}^n]) - w(s(q))|\le\delta\right\}. \notag \end{align} By manipulating the above state, we may write it in the following form, which will be more useful for us in our analysis: \begin{align} \ket{\phi_q^t} \cong \sum_{\substack{ x_{B_1}^1\cdots x_{B_1}^n = x_{B_1}\in\mathcal{A}_d^n\\ x_{B_2}^1\cdots x_{B_2}^n = x_{B_2}\in\mathcal{A}_d^n\\ \vdots\\ x_{B_p}^1\cdots x_{B_p}^n = x_{B_p}\in\mathcal{A}_d^n }} \beta_{x}\ket{x}^\mathcal{F}_{B_1\cdots B_p} \otimes \sum_{y \in J(q \text{ } : \text{ } x)} \beta_{y|x}\ket{y}^\mathcal{F}_A\ket{F_{x,y}}_E \end{align} where, above, we define $x = x_{B_1}\cdots x_{B_p} \in \mathcal{A}_d^{p\cdot n}$ and: \begin{equation} J(q\text{ } : \text{ } x) = \{y\in\mathcal{A}_d^n \text{ } : \text{ } |w(s(yx)) - w(s(q))|\le\delta\}. \end{equation} Note that some of the $\beta$'s in the above expression may be zero; also note that we permuted the subspaces above to place the $A$ register to the right of the $B$ registers; this was done only to make the algebra in the remainder of the proof easier to follow. Our goal now is to compute a lower bound on the conditional quantum min entropy following a $Z$ basis measurement on the collapsed ideal state (that is, the entropy in the above state $\ket{\phi_q^t}$, but following Alice's $Z$ basis measurement on her $A$ register). Tracing out the $B$ systems yields: \begin{equation} \sigma_{AE} = \sum_{x\in\mathcal{A}_d^{p\cdot n}}|\beta_{x}|^2\underbrace{P\left(\sum_{y\in J(q \text{ } : \text{ } x)} \beta_{y|x}\ket{y}^\mathcal{F}_A\ket{F_{x,y}}_E\right)}_{\sigma_{AE}^{(x)}}, \end{equation} where $P(\ket{z}) = \kb{z}$. From Equation \ref{eq:qc-state}, we have $H_\infty(A_Z|E)_\sigma \ge \min_xH_\infty(A_Z|E)_{\sigma^{(x)}}$. Fix a particular $x$ and consider the mixed state: \begin{equation} \chi^{(x)}_{AE} = \sum_{y\in J(q\text{ } : \text{ } x)}|\beta_{y|x}|^2\kb{y}_A^\mathcal{F}\otimes\kb{F_{x,y}}_E. \end{equation} From Lemma \ref{lemma:superposition}, we have: \[ H_\infty(A_Z|E)_{\sigma^{(x)}} \ge H_\infty(A_Z|E)_{\chi^{(x)}} - \log_2|J(q\text{ } : \text{ } x)|. \] We first compute a bound on the size of $J(q\text{ } : \text{ } x)$. Let $\mathcal{I} = \{y\in\mathcal{A}_d^n\text{ } : \text{ } |w(y) - w(s(q))|\le\delta\}$. We claim $|J(q\text{ } : \text{ } x)| \le |\mathcal{I}|$. Indeed, pick $y \in J(q\text{ } : \text{ } x)$ and let $z = s(yx)$. Then $z \in \mathcal{I}$. Furthermore, for any $y,y' \in J(q\text{ } : \text{ } x)$ with $y \ne y'$, it holds that $s(yx) \ne s(y'x)$. Thus the claim follows. Now, since $|\mathcal{I}| \le d^{nH_d(w(s(q)) + \delta)}$ by the well-known bound on the volume of a Hamming ball, we have an upper bound on the size of the set $J(q\text{ } : \text{ } x)$ as a function of the observed value $q$.
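The Hamming-ball volume bound can be sanity-checked by brute force for small parameters; the sketch below is purely illustrative (the values of $d$, $n$, and $w(s(q))$ are arbitrary assumptions).
\begin{verbatim}
# Brute-force check that |I| <= d^(n * H_d(w(s(q)) + delta)) for small inputs.
import math
from itertools import product

def H_d(x, d):
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return math.log(d - 1, d)
    return (x * math.log(d - 1, d) - x * math.log(x, d)
            - (1 - x) * math.log(1 - x, d))

d, n, w0, delta = 3, 8, 0.25, 0.1    # w0 stands in for w(s(q))
size_I = sum(1 for y in product(range(d), repeat=n)
             if abs(sum(c != 0 for c in y) / n - w0) <= delta)
bound = d ** (n * H_d(w0 + delta, d))
print(size_I, "<=", bound)           # 112 <= ~1.2e3 for these values
\end{verbatim}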
Note that, ideally, $w(s(q)) = 0$, with non-zero values representing error in the channel, and so the size of this set should be ``small'' for low noise levels. As the noise increases, our entropy bound will decrease (thus ultimately decreasing the overall key-rate, as expected). What remains is to compute $H_\infty(A_Z|E)_{\chi}$. Following a $Z$ basis measurement on the $A$ register in $\chi$, we are left with the post-measured state: \begin{equation} \chi_{A_ZE} = \sum_y |\beta_{y|x}|^2 \sum_{z\in\mathcal{A}_d^n}p(z|y)\kb{z}_A\kb{F_{x,y}}_E, \end{equation} where $p(z|y)$ is the conditional probability of observing outcome $\ket{z}$ given input state $\ket{y}^\mathcal{F}$. Now, consider the following state, where we add an additional classical ancilla: \[ \chi_{A_ZEY} = \sum_y |\beta_{y|x}|^2 \kb{y}_Y\otimes \underbrace{\sum_{z\in\mathcal{A}_d^n}p(z|y)\kb{z}_A\kb{F_{x,y}}_E}_{\chi^{(y)}}. \] Then we have $H_\infty(A_Z|E)_{\chi} \ge H_\infty(A_Z|EY)_{\chi} \ge \min_yH_\infty(A_Z|E)_{\chi^{(y)}}$, where we used Equation \ref{eq:qc-state} for the last inequality. Since the $E$ and $A_Z$ registers are independent in $\chi^{(y)}$, we have $H_\infty(A_Z|E)_{\chi^{(y)}} = H_\infty(A_Z)_{\chi^{(y)}} = -\log_2\max_zp(z|y)$. It is not difficult to see that $p(z|y) = d^{-n}$ for all $y,z \in \mathcal{A}_d^n$. Thus $H_\infty(A_Z|E)_{\chi} \ge n\log_2 d$. Note that our bound here, and also that on $|J(q\text{ } : \text{ } x)|$, is independent of $x$. Thus, concluding, we have the following bound on the entropy in the ideal state: \begin{equation}\label{eq:ideal-entropy} H_\infty(A_Z|E)_{\sigma} \ge \min_xH_\infty(A_Z|E)_{\sigma^{(x)}} \ge n\left(\log_2 d - \frac{H_d(w(s(q)) + \delta)}{\log_d 2}\right). \end{equation} Of course, this was only the ideal state analysis; however, Equation \ref{eq:ideal-entropy} holds for any choice of subset $t$ and observation $q$. We now use this result to derive the final security of the real state produced by Eve and show that, with high probability over the choice of subset $t$ and measurement outcome $q$, the final secret key produced by the protocol will be secure. \textbf{Step 3: Real State Security -} The QCKA protocol (and, indeed, most if not all QKD protocols) may be broken into three distinct modules, or CPTP maps. First is a sampling module $\mathcal{S}$, which takes as input a quantum state $\rho_{TABE}$, where the $T$ register represents the sampling subset $t$ used and $B$ represents all $p$ Bobs. This module measures the $T$ register, which chooses a subset $t$; from this, all qudits indexed by $t$ are measured in the Fourier basis, producing outcome $q \in \mathcal{A}_d^{m\cdot(p+1)}$. The output of this process is the subset chosen $t$, the observed $q$, and also the post-measured state $\rho_{ABE}(t,q)$. Following this, the raw-key generation module is run, denoted $\mathcal{R}$, which takes as input the previous post-measured state and measures the remaining systems in the $Z$ basis, resulting in raw keys for all parties. The output of this module is the raw key produced along with a post-measured state for Eve. Finally, a post-processing module is run, denoted $\mathcal{P}$, which will run an error correction protocol and privacy amplification, yielding the final secret key. The output of this last CPTP map is the actual secret key produced along with Eve's final quantum ancilla. This module requires as input the raw keys along with $q$ (needed to determine the final secret key size).
We want to show, with high probability over the choice of sampling subset and test measurement outcome, that the final secret key is $\epsilon_{PA}$-close to the ideal secret key as defined by Equation \ref{eq:PA}. Recall, $\ket{\psi}_{AB_1\cdots B_pE}$ is the actual state produced by the adversary and sent to each of the parties. We may assume this is a pure state, as a mixed state would lead to greater uncertainty for Eve. Of course, in the real case, the choice of subset is independent of the state produced by Eve, and so we write the complete real state as $\rho_{TABE} = \sum_t\frac{1}{T}\kb{t}\otimes\kb{\psi}$, where $T = {N \choose m}$. From this, an ideal state of the form $\sum_t\frac{1}{T}\kb{t}\otimes\kb{\phi^t}_{ABE}$ may be defined, as was analyzed previously in the second step of the proof. We may write the action of the composition $\mathcal{P}\circ\mathcal{R}\circ\mathcal{S} = \mathcal{PRS}$ as follows: \begin{align} \mathcal{PRS}\left(\sum_t\frac{1}{T}\kb{t}_T\otimes\kb{\psi}\right) &= \sum_{q,t}p(q, t) \kb{q,t} \otimes\mathcal{P}_q\mathcal{R}\left(\kb{\psi_q^t}_{ABE}\right)\\ \mathcal{PRS}\left(\sum_t\frac{1}{T}\kb{t}_T\otimes\kb{\phi^t}\right) &= \sum_{q,t}\tilde{p}(q, t) \kb{q,t} \otimes\mathcal{P}_q\mathcal{R}\left(\kb{\phi_q^t}_{ABE}\right). \end{align} Above, $p(q,t)$ is the probability of choosing subset $t$ and observing outcome $q$ in the real state, and $\tilde{p}(q,t)$ is similar but for the ideal state. The post-measured states after sampling are denoted $\ket{\psi_q^t}$ in the real case and $\ket{\phi_q^t}$ in the ideal case (see Equation \ref{eq:ideal-post-measure} for what this state looks like in the ideal case). Note that, conditioning on a particular $q$ and $t$, these states are pure. Let $\ell(q,\texttt{leak}_{EC}) = n(\log_2 d - \frac{1}{\log_d 2}H_d(w(s(q)) + \delta)) - \texttt{leak}_{EC} - 2\log_2\left(\frac{1}{\epsilon}\right)$, where $\texttt{leak}_{EC}$ denotes the information leaked due to error correction. Then, from Equation \ref{eq:PA} and our analysis of the min entropy of the post-measured ideal state in Equation \ref{eq:ideal-entropy}, we know that for any $t$ and observed $q$, if privacy amplification shrinks the raw key to a size of $\ell$, it holds that: \begin{equation} \trd{\mathcal{P}_q\mathcal{R}\left(\kb{\phi_q^t}\right) - \mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A \mathcal{P}_q\mathcal{R}\left(\kb{\phi_q^t}\right)} \le \epsilon, \end{equation} where $\mathcal{U}_k = \frac{1}{2^k}\sum_{i=0}^{2^k-1}\kb{i}$ is an operator acting on $\mathcal{H}_{2^{n\log_2 d}}$ (note that $n\log_2 d$ is the largest number of bits the final secret key can possibly be; privacy amplification will hash this into something potentially smaller, and so $\mathcal{U}$ represents a uniform distribution on this smaller subspace of potential secret keys). Note that, above and in the text below, we are tracing out the $B$ systems, though we do not explicitly write $tr_B$ in all equations, as it would add unnecessary bulk. Hence, from here on, the reader may assume all Bob systems are traced out of the equations unless otherwise stated.
Finally, note that the above of course implies that: \begin{equation}\label{eq:ideal-epPA} \trd{\sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\mathcal{P}_q\mathcal{R}\left(\kb{\phi_q^t}\right) - \sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A \mathcal{P}_q\mathcal{R}\left(\kb{\phi_q^t}\right)} \le \epsilon. \end{equation} We now claim that, with high probability over $t$ and measurement outcome $q$, it holds that: \begin{equation}\label{eq:pa-claim} \trd{\mathcal{P}_q\mathcal{R}\left(\kb{\psi_q^t}\right) - \mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}\left(\kb{\psi_q^t}\right)} \le 5\epsilon + \left(20\epsilon\right)^{1/3} = \epsilon_{PA}, \end{equation} thus ensuring, again with high probability over the subset choice and test measurement outcome, that the resulting secret key in the real case, using the state produced by the adversary, is $\epsilon_{PA}$-close to an ideal secret key. Let $\rho_{TABE}$ and $\sigma_{TABE}$ be the real and ideal states, respectively. Now, since the ideal and real states are $\epsilon$-close in trace distance by Theorem \ref{thm:sample}, along with our choice of $\delta$ and our sampling strategy, and since quantum operations cannot increase trace distance, we have: \begin{align} 2\epsilon &\ge \trd{\rho - \sigma} \ge \trd{\mathcal{P}\mathcal{R}\mathcal{S}(\rho) - \mathcal{P}\mathcal{R}\mathcal{S}(\sigma)}\notag\\ &=\trd{\sum_{q,t}p(q,t)\kb{q,t}\otimes\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t}) - \sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\mathcal{P}_q\mathcal{R}(\kb{\phi_q^t})}\label{eq:diff-ideal-real} \end{align} From the above, we have: \begin{equation}\label{eq:diff-U} 2\epsilon \ge \trd{\sum_{q,t}p(q,t)\kb{q,t}\otimes\mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\psi_{q}^t}) - \sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\phi_{q}^t})} \end{equation} This follows from basic properties of trace distance along with the fact that the partial trace is a quantum operation. Adding Equations \ref{eq:diff-ideal-real} and \ref{eq:diff-U} above yields: \begin{align*} 4\epsilon &\ge \trd{\sum_{q,t}p(q,t)\kb{q,t}\otimes\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t}) - \sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\mathcal{P}_q\mathcal{R}(\kb{\phi_q^t})}\\ &+ \trd{\sum_{q,t}p(q,t)\kb{q,t}\otimes\mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\psi_{q}^t}) - \sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\phi_{q}^t})}\\ &\ge \trd{\sum_{q,t}p(q,t)\kb{q,t}\otimes\left(\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t}) - \mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t})\right)}\\ & - \trd{\sum_{q,t}\tilde{p}(q,t)\kb{q,t}\otimes\left(\mathcal{P}_q\mathcal{R}(\kb{\phi_q^t}) - \mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\phi_q^t})\right)}\\&\ge \trd{\sum_{q,t}p(q,t)\kb{q,t}\otimes\left(\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t}) - \mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t})\right)} - \epsilon, \end{align*} where, above, we used the triangle inequality followed by the reverse triangle inequality and finally Equation \ref{eq:ideal-epPA}. Let $\Delta_{t,q} = \frac{1}{2}\trd{\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t}) - \mathcal{U}_{\ell(q,\texttt{leak}_{EC})}\otimes tr_A\mathcal{P}_q\mathcal{R}(\kb{\psi_q^t})}$.
Then the above, along with basic properties of trace distance, implies: \[ \frac{5\epsilon}{2} \ge \sum_{q,t}p(q,t)\Delta_{q,t}. \] We now consider $\Delta_{q,t}$ as a random variable over $q$ and $t$. From the above, its expected value is upper-bounded by $5\epsilon/2$. Furthermore, since $\Delta_{t,q} \le 1$ for all $t,q$ (by properties of trace distance), the variance may also be upper-bounded by $5\epsilon/2$. Using Chebyshev's inequality, then, we have: \begin{equation} Pr\left[\left|\Delta_{t,q} - \frac{5\epsilon}{2}\right| \le \left(\frac{5\epsilon}{2}\right)^{1/3}\right] \ge 1 - \left(\frac{5\epsilon}{2}\right)^{1/3}. \end{equation} From this, and simple algebra, it follows that, except with probability at most $\epsilon_{\text{fail}} = (5\epsilon/2)^{1/3}$, Equation \ref{eq:pa-claim} holds. This implies that, with high probability over the choice of subset $t$ and the test measurement outcome $q$ in the Fourier basis, Alice and the $p$ Bobs are left with an $\epsilon_{PA} = 5\epsilon + (20\epsilon)^{1/3}$ secure key of size: \begin{equation}\label{eq:final-keyrate} \ell = n\left(\log_2d - \frac{H_d(w(s(q)) + \delta)}{\log_d2}\right) - \texttt{leak}_{EC} - 2\log_2\frac{1}{\epsilon}, \end{equation} concluding the security proof. \subsection{Evaluation} We now evaluate our key-rate bound for this protocol. We will first consider the two-dimensional case, allowing us to compare with current state-of-the-art results from \cite{finite-ghz-bb84}. We will then evaluate our bound in higher dimensions; in that case, we have no other QCKA results to compare to (the results in \cite{finite-ghz-bb84} applied only to the qubit case), but we will show some interesting behavior of the higher-dimensional case when compared to the qubit case. To evaluate, we will assume a depolarization channel connecting all parties. This assumption is not required for our security proof, which works for any channel: one must simply observe the value $q$ and also the error correction leakage used by the EC protocol, and then evaluate the secret key rate (Equation \ref{eq:final-keyrate}) using our analysis in the prior section. However, we will consider depolarization channels in this subsection in order to evaluate our bound without actual hardware, and also to compare with prior work (which also assumes depolarization channels when evaluating key rates). Under a depolarization channel, we may assume the quantum state shared by Alice and the $p$ Bobs is of the form: \begin{equation}\label{eq:dep-result} \rho_{AB}^{\otimes N} = \left( (1-Q)\kb{\psi_0} + \frac{Q}{d^{p+1} }I\right)^{\otimes N}, \end{equation} where $I$ is the identity operator of dimension $d^{p+1}$. Note that, under this assumption, the expected value of $w(s(q))$ is $Q/d$. This matches the value evaluated in \cite{finite-ghz-bb84} for the qubit case, as expected (there, the $X$ basis was used and a parity check performed). We next need a bound on $\texttt{leak}_{EC}$. In practice, this can be determined from the actual public transcript after executing the protocol; however, for our evaluation, we will simulate an expected leakage. For error correction (EC), we assume one-way error correction and take the same approach as in \cite{finite-ghz-bb84}, whereby Alice sends the same error correction information to each of the $p$ Bobs.
In particular, it was proven there that there exists a one-way EC protocol for such a scenario that aborts with probability no greater than $2p\epsilon'$, where the leakage is upper-bounded by: \[ \texttt{leak}_{EC} \le \max_iH_0^{\epsilon'}(A|B_i) + \log_2\frac{2(N-1)}{\epsilon_{EC}}, \] where: \[ (1-2p\epsilon')Pr(\exists i \text{ } : \text{ } B_i \ne A \text{ after EC}) \le \epsilon_{EC}, \] and where $H_0^{\epsilon'}(A|B_i)$ is the smooth R\'enyi zero-entropy of Alice's raw key conditioned on the $i$'th Bob's, namely: \[ H_0^{\epsilon'}(X|Y) = \min_{P_{XY}}\max_y\log_2\left|\text{supp}\left(P(X|Y=y)\right)\right|, \] where the minimum is over all probability distributions $P$ that are $\epsilon'$-close to the original input distribution. Importantly, one need only consider the ``worst-case'' noise between Alice and one Bob, as opposed to taking the sum of all error correction leakages for all $p$ Bobs: a single error correction message from Alice is sufficient to correct all $p$ Bobs' raw keys. To ensure error correction succeeded, Alice will choose a random two-universal hash function $f$, the output size of which we take to be $\log_2\frac{1}{\epsilon_{EC}}$ bits, and broadcast $f(A)$, where $A$ is her raw key, as discussed earlier when introducing the protocol. This leaks an additional $\log_2\frac{1}{\epsilon_{EC}}$ bits, which must be deducted from the final secret key size, and ensures that users can be confident that error correction has succeeded. Using results from \cite{HD-BB84} to bound the R\'enyi zero-entropy in this high-dimensional scenario, along with the depolarization assumption, we may bound the error correction leakage by: \[ \texttt{leak}_{EC} \le nh(Q_Z + \nu) + n(Q_Z+\nu)\log_2(d-1) + \log_2\frac{1}{\epsilon_{EC}}, \] where: \[ \nu = \sqrt{\frac{N(m+1)\ln\frac{4p}{\epsilon_{EC}}}{m^2(N-m)}}, \] and where $Q_Z = \max_i Q_i$, with $Q_i$ the probability of an error in Alice's and the $i$'th Bob's raw key digits. Note that we are using the same sample size $m$ used for the Fourier basis measurement test, and this must be deducted from the total raw key size. Since we are evaluating assuming a depolarization channel, we have $Q_Z = Q(1-1/d)$ (which is easily seen from Equation \ref{eq:dep-result}). Note that we use this only for evaluation purposes, as it allows us to directly compare, in the qubit case, to state-of-the-art results in \cite{finite-ghz-bb84}. Combining everything, we find the length of the key produced by the protocol to be: \begin{equation} \ell = n\left(\log_2 d - \frac{H_d\left(\frac{Q}{d} + \delta\right)}{\log_d 2} - h\left(Q_Z + \nu\right) - (Q_Z+\nu)\log_2(d-1)\right) - \log_2\frac{1}{\epsilon_{EC}} - 2\log_2\frac{1}{\epsilon}. \end{equation} The actual key rate, then, is simply $\ell / (n+2m)$ (we divide by an additional $m$ signals to account for the sampling of the raw key needed to estimate $Q_Z$ above). In our evaluations, we set $\epsilon_{EC} = 10^{-12}$ and $\epsilon = 10^{-36}$, giving a failure probability (both for the entropy bound and error correction) on the order of $10^{-12}$; this also sets $\epsilon_{PA}$ to be on the order of $10^{-12}$. When comparing with other protocols, we use a failure probability of $10^{-12}$. Finally, we use a sample size of $7\%$ for both bases (i.e., $m = 0.07N$, where $N$ is the total number of signals sent).
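The full evaluation is straightforward to script. The following sketch (our own illustration in the spirit of the evaluation above; the parameter values, and the identification $m+n \approx N$ inside $\delta$, are assumptions) computes the key length $\ell$ and the rate $\ell/(n+2m)$ under the depolarization model.
\begin{verbatim}
# Illustrative finite-key evaluation under the depolarization assumption.
import math

def H_d(x, d):
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return math.log(d - 1, d)
    return (x * math.log(d - 1, d) - x * math.log(x, d)
            - (1 - x) * math.log(1 - x, d))

def key_rate(N, d, Q, p, eps=1e-36, eps_EC=1e-12, sample=0.07):
    m = int(sample * N)                 # test sample size per basis
    n = N - 2 * m                       # signals left for the raw key
    # delta from the sampling analysis (taking m + n ~ N, an assumption):
    delta = math.sqrt((N + 2) * math.log(2 / eps ** 2) / (m * N))
    nu = math.sqrt(N * (m + 1) * math.log(4 * p / eps_EC)
                   / (m ** 2 * (N - m)))
    QZ = Q * (1 - 1 / d)
    h = lambda x: H_d(x, 2)             # binary entropy
    ell = n * (math.log2(d) - H_d(Q / d + delta, d) / math.log(2, d)
               - h(QZ + nu) - (QZ + nu) * math.log2(d - 1))
    ell -= math.log2(1 / eps_EC) + 2 * math.log2(1 / eps)
    return max(ell, 0.0) / N            # N = n + 2m

print(key_rate(N=10**8, d=4, Q=0.10, p=2))   # positive rate (~1) for d = 4
\end{verbatim}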
A comparison of our key-rate bound with that derived in \cite{finite-ghz-bb84} through alternative means is shown in Figure \ref{fig:comp1} for the two-dimensional case (in which case both protocols are identical). We note that, except for a slight deviation, the two results agree (with prior results from \cite{finite-ghz-bb84} surpassing ours by a small amount). Of course, the proof and results in \cite{finite-ghz-bb84} apply only to $d=2$; to our knowledge, we are the first to derive a rigorous finite-key proof of security for a high-dimensional QCKA protocol. We also evaluate our key-rate bound in higher dimensions in Figure \ref{fig:hd-keyrate}. In higher dimensions, we cannot compare to any other QCKA protocols, as we are not aware of any other finite-key security results for such protocols in high (greater than $2$) dimensions. However, we note several interesting properties here. First, as the dimension increases, the number of signals needed before a positive key-rate is achieved decreases, and the overall key-rate increases, making the protocol potentially more efficient. Note that one explanation for the increased key-rate is that one receives, for each signal, a larger number of raw-key bits as the dimension increases. However, this alone does not explain the large increase in key-rate as the signal dimension increases. For instance, if we compare $d=2$ and $d'=4$, a single iteration of the protocol in the first case produces at most one raw-key bit, while the second case would produce at most two raw-key bits. If this were the only reason for the increase in secret key rates, one would expect that running twice the number of iterations for the $d=2$ case would produce the same secret key length as the $d'=4$ case. However, this is clearly not the case, as shown in Figure \ref{fig:comp3}. Thus, the increase in key-rate for higher dimensions cannot be recovered simply by running multiple copies of the qubit-based protocol in parallel; instead, higher-dimensional states per round are required. We also note that the number of Bobs, $p$, does not noticeably affect the key-rate; interestingly, this was also discovered in \cite{finite-ghz-bb84} for the qubit, $d=2$, case. \begin{figure} \centering \includegraphics[width=.7\textwidth]{qubit-case.png} \caption{Comparing our new bound with that from \cite{finite-ghz-bb84} for the qubit case ($d=2$) when $Q = 10\%$. We note that our result is slightly lower than that in \cite{finite-ghz-bb84} for this dimension. However, the advantage of our approach is that it can readily handle higher dimensions. Inset: a close-up view of the difference between our bound and that from \cite{finite-ghz-bb84}; we note that as the number of signals increases, our results converge. See text for discussion.} \label{fig:comp1} \end{figure} \begin{figure} \centering \includegraphics[width=.7\textwidth]{hd-case-q10.png} \includegraphics[width=.7\textwidth]{hd-case-q30.png} \caption{Evaluating our key-rate bound for higher dimensions assuming $Q = 10\%$ (Top) and $Q = 30\%$ (Bottom). We note that, as dimension increases, not only does the key-rate increase, but so does the noise tolerance.
Furthermore, the number of signals required before a positive key-rate is attained also decreases with dimension for a fixed noise level.} \label{fig:hd-keyrate} \end{figure} \begin{figure} \centering \includegraphics[width=.7\textwidth]{hd-case-double.png} \caption{Showing that the advantage in key-rate for higher dimensions cannot be recovered simply by using lower-dimensional systems and increasing the number of rounds. High-dimensional states exhibit an advantage beyond simple parallel executions of a qubit-based protocol for this QCKA protocol.} \label{fig:comp3} \end{figure} \section{Closing Remarks} In this paper, we proved the security of a high-dimensional QCKA protocol, allowing multiple parties to establish a shared secret key. We proved security using a combination of the quantum sampling framework of \cite{sample} and the sampling-based entropic uncertainty relation techniques from \cite{krawec2019quantum}. Our proof introduced several new methods needed to use those two frameworks in this multi-user scenario, and our methods may be applicable to other multi-user quantum cryptographic protocols, especially in higher dimensions. Finally, we evaluated the protocol in a variety of scenarios and showed some interesting properties in higher dimensions. Our work provides further evidence, beyond that already known (as discussed in the Introduction), of the potential benefits, at least in theory, of high-dimensional quantum states. Note that we did not consider practical device imperfections, leaving that as interesting future work. $ $\newline\newline \footnotesize{\textbf{Acknowledgments: } WOK would like to acknowledge support from the National Science Foundation under grant number 2006126.} $ $\newline\newline \footnotesize{\textbf{Disclaimer: }This paper was prepared for information purposes by the teams of researchers from the various institutions identified above, including the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan Chase Bank, N.A. This paper is not a product of the Research Department of JPMorgan Chase \& Co. or its affiliates. Neither JPMorgan Chase \& Co. nor any of its affiliates make any explicit or implied representation or warranty, and none of them accept any liability, in connection with this paper, including, but not limited to, the completeness, accuracy, or reliability of the information contained herein and the potential legal, compliance, tax, or accounting effects thereof. This document is not intended as investment research or investment advice, or as a recommendation, offer, or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction.}
\section{Introduction} \label{sec:1} We are currently in the midst of rapid progress in observing the vicinity of black holes. In particular, in observations of the center of our galaxy, the orbital evolution of the so-called S-stars orbiting a supermassive black hole candidate, Sagittarius A$^\ast$ (Sgr A$^\ast$), has been actively investigated~\cite{Ghez:2000ay,Schodel:2002py}. Because S-stars can be regarded as test particles in the gravitational field of Sgr A$^\ast$, precise measurements of their orbital evolution provide information on the central object and its surrounding matter, as well as on the spacetime geometry~\cite{Do:2019txf,Saida:2019mcz,Abuter:2020dou}. A theoretical understanding of such orbital evolution is essential for identifying observables and interpreting observational results. A typical example is that the elliptical orbit, well known in Newtonian gravity, rotates in the same direction as the orbital motion because of general-relativistic effects~(see, e.g., Ref.~\cite{Weinberg:1972}). This is the so-called periapsis shift of bound orbits. Thus, by observing the displacement of the orbit due to this precessional motion, we can estimate the general-relativistic correction to the gravitational field. Furthermore, it has been discussed that supermassive and intermediate-mass black holes may be associated with large dark matter overdensities, called dark matter density spikes~\cite{Bertone:2009kj}. Obviously, the contribution from an extended mass distribution must also be considered as a correction to the black hole gravitational field. However, although it has been discussed in post-Newtonian gravity~\cite{Rubilar:2001} and for the general-relativistic Plummer model~\cite{igata:2021}, the effect of the matter distribution on the periapsis shift remains controversial in frameworks that include a general-relativistic black hole. Therefore, the most important next issue is to clarify the competition between the general-relativistic and local-density effects on the periapsis shift in a spacetime where a black hole and a matter distribution coexist. Such knowledge will broaden our understanding of the effects of matter fields on particle dynamics in the strong-gravity regime. Moreover, it will be useful for comparison with the case where the central object is not a black hole but an exotic object~\cite{Bini:2005dy,Bambhaniya:2021ybs,Ota:2021mub}. To clarify the above issues, we construct a background spacetime with a static and spherically symmetric distribution of massive particles around the Schwarzschild black hole by exactly solving the Einstein equations. Then, utilizing this black hole spacetime, we aim to formulate the competition between the general-relativistic and local-density effects that determine the direction of the periapsis shift of the bound geodesic orbits of stars in the matter distribution. Using this formulation, we consider the case in general relativity where the retrograde shift due to the extended matter distribution can compensate for the prograde shift due to the general-relativistic effect (see Refs.~\cite{Rubilar:2001,Nucita:2007qp, Iwata:2016ivt} for the post-Newtonian regime). However, it is not obvious whether an extended matter distribution contributes to the retrograde shift in every case. Therefore, it is quite important to discuss the periapsis shift by considering the energy conditions and other physically reasonable conditions for the matter field. This paper is organized as follows.
In Sec.~\ref{sec:2}, we review a static and spherically symmetric cloud solution to the Einstein equations. In particular, we construct a black hole surrounded by a static Einstein cluster and clarify physically reasonable constraints based on energy conditions. In Sec.~\ref{sec:3}, we formulate the dynamics of a freely falling particle in the Einstein cluster around a black hole and show the conditions for the existence of nearly circular bound orbits. Then we derive a formula that shows how the conflicting general-relativistic and local-density effects determine the precession rate that characterizes the periapsis shift. In Sec.~\ref{sec:4}, we evaluate the precession rate for nearly circular orbits in several spacetime models obtained by giving concrete forms to the metric functions. Furthermore, we demonstrate periapsis shifts of bound orbits with large eccentricity. In Sec.~\ref{sec:5}, we discuss how to determine the matter distribution around a black hole using the formula, given observational data on the orbiting stars. Section~\ref{sec:6} is devoted to a summary and discussion. We use units in which $G=1$ and $c=1$. \section{Static clouds around the Schwarzschild black hole} \label{sec:2} We review a static and spherically symmetric cloud solution to the Einstein equations. Let $t$ be the static time coordinate, and let $r$ be the areal radius. The angles $(\theta, \varphi)$ are the standard spherical coordinates. Using these coordinates $x^\mu=(t, r, \theta, \varphi)$, we consider the general metric ansatz of static and spherically symmetric spacetimes, \begin{align} \label{eq:met2} g_{\mu\nu}\:\!\mathrm{d}x^\mu\:\!\mathrm{d}x^\nu =-\left(1-\frac{2\alpha(r)}{r}\right)\mathrm{d}t^2+\left(1-\frac{2m(r)}{r}\right)^{-1}\mathrm{d}r^2+r^2 (\mathrm{d}\theta^2+\sin^2\theta\:\!\mathrm{d}\varphi^2), \end{align} where $\alpha(r)$ and $m(r)$ are continuous functions of $r$; in particular, $m(r)$ is the Misner-Sharp mass~\cite{Misner:1964je,Hayward:1994bu}. Now we assume that \begin{align} \label{eq:maspt} &0\le m<\frac{r}{2}, \\ \label{eq:alppt} &\alpha <\frac{r}{2}. \end{align} We consider the following form of the stress-energy tensor: \begin{align} \label{eq:Tmunu} T^{\mu}{}_{\nu}=\mathrm{diag} (-\epsilon, 0, \Pi, \Pi), \end{align} where $\epsilon$ and $\Pi$ denote energy density and tangential pressure, respectively, and we assume that the radial pressure vanishes. Through the Einstein equations, $\epsilon$ and $\Pi$ are related to $m$ as \begin{align} \label{eq:ep1} \epsilon&=\frac{m'}{4\pi r^2}, \\ \label{eq:PI1} \Pi&=\frac{m}{2(r-2m)} \epsilon, \end{align} and the vanishing radial pressure leads to the remaining nontrivial equation \begin{align} \label{eq:alp1/2} \alpha'=\frac{1}{2}\left( 1-\frac{r-2\alpha}{r-2m} \right) =\frac{\alpha-m}{r-2m}, \end{align} or equivalently, \begin{align} \label{eq:malpha} m=\frac{\alpha-\alpha' r}{1-2\:\!\alpha'}, \end{align} where the prime denotes differentiation with respect to $r$. As $m$ and $\alpha$ are continuous, the function $\alpha'$ is also continuous. Note that Eq.~\eqref{eq:alp1/2} together with the inequalities~\eqref{eq:maspt} and \eqref{eq:alppt} implies that \begin{align} \label{eq:haltalp} \alpha'<\frac{1}{2}. \end{align} Consequently, for given $m(r)$ and $\alpha(r)$ compatible with Eq.~\eqref{eq:alp1/2} and the constraints~\eqref{eq:maspt} and \eqref{eq:alppt}, we can specify the matter distribution $\epsilon(r)$ and $\Pi(r)$ through Eqs.~\eqref{eq:ep1} and \eqref{eq:PI1}, respectively.
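As an illustrative aside (not part of the original derivation), the relations \eqref{eq:ep1}, \eqref{eq:PI1}, and \eqref{eq:malpha} are straightforward to evaluate numerically. The following Python sketch is ours; the smooth quadratic sample profile and all function names are hypothetical choices made purely for illustration.
\begin{verbatim}
# Evaluate eps and Pi from a given mass profile m(r) via Eqs. (ep1), (PI1),
# and recover m from alpha via Eq. (malpha); the sample profile is made up.
import numpy as np

def m_from_alpha(alpha, dalpha, r):
    # Eq. (malpha): m = (alpha - alpha' r)/(1 - 2 alpha'), valid for alpha' < 1/2
    return (alpha - dalpha * r) / (1.0 - 2.0 * dalpha)

def eps_and_Pi(m, dm, r):
    eps = dm / (4.0 * np.pi * r**2)        # Eq. (ep1)
    Pi = m / (2.0 * (r - 2.0 * m)) * eps   # Eq. (PI1)
    return eps, Pi

r = np.linspace(6.0, 30.0, 200)            # radial grid (units of M0)
m = 1.0 + 1.0e-3 * (r - 6.0)**2            # sample smooth mass function
dm = 2.0e-3 * (r - 6.0)                    # its derivative m'
eps, Pi = eps_and_Pi(m, dm, r)
print(eps[:3], Pi[:3])                     # both nonnegative, as required
\end{verbatim}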
This static configuration is made possible by a balance between the gravitational force and the tangential pressure. Imposing energy conditions restricts $m$ and $\alpha$ further than Eqs.~\eqref{eq:maspt} and \eqref{eq:alppt} do. Some of the energy conditions for $T^\mu{}_\nu$ are written as follows: (\hspace{.18em}i\hspace{.18em}) weak energy condition, $\epsilon\ge 0$ and $\epsilon+\Pi \geq 0$; (\hspace{.08em}ii\hspace{.08em}) strong energy condition, $\epsilon+2\Pi\geq 0$ and $\epsilon+\Pi \geq 0$; (i\hspace{-.08em}i\hspace{-.08em}i) null energy condition, $\epsilon\ge 0$ and $\epsilon+\Pi \geq 0$; and (i\hspace{-.08em}v\hspace{-.06em}) dominant energy condition, $\epsilon\geq |\Pi|$. For the vacuum region (i.e., $\epsilon=0$ and $\Pi=0$), all these energy conditions are trivially satisfied, and thus $m'=0$ holds. For the nonvacuum region, Conditions~(\hspace{.18em}i\hspace{.18em})--(i\hspace{-.08em}i\hspace{-.08em}i) under the inequality~\eqref{eq:maspt} provide a common inequality, $m'>0$. On the other hand, Condition~(i\hspace{-.08em}v\hspace{-.06em}) can be reduced to $m'> 0$ and $0< m\le 2r/5$. If any one of the energy conditions is imposed together with the assumption~\eqref{eq:maspt}, the quantities $\epsilon$ and $\Pi$ are nonnegative. The Einstein cluster~\cite{Einstein:1939,Geralico:2012jt} is a physical model compatible with the above $T^\mu{}_\nu$. This cluster is static and spherically symmetric and consists of an averaged distribution of collisionless particles. Each particle in the cluster moves on a circular geodesic. The counterrotating particles cancel each other's angular momenta, so that spherical symmetry is recovered. Let $n(r)$ and $L_{\mathrm{p}}(r)$ be the proper number density of counterrotating particles with rest mass $m_{\mathrm{p}}$ and the angular momentum of each particle moving on a circular geodesic of radius $r$, respectively. Then, as summarized in Appendix~\ref{sec:A}, the stress-energy tensor $T^\mu{}_\nu$ is free from radial pressure and coincides with Eq.~\eqref{eq:Tmunu}, where \begin{align} \label{eq:ep2} \epsilon&=m_{\mathrm{p}} n \:\!\left( 1+\frac{l_{\mathrm{p}}^2}{r^2} \right)=m_{\mathrm{p}} n \:\!\frac{r-2m}{r-3m}, \\ \label{eq:PI2} \Pi& =m_{\mathrm{p}} n \:\! \frac{l_{\mathrm{p}}^2}{2r^2} =\frac{1}{2} \frac{l_{\mathrm{p}}^2}{r^2+l_{\mathrm{p}}^2} \epsilon=m_{\mathrm{p}} n \:\!\frac{m}{2(r-3m)}, \end{align} where $l_{\mathrm{p}}=L_{\mathrm{p}}/m_{\mathrm{p}}$. These expressions imply that $\epsilon\geq 0$ and $\Pi\geq 0$, and therefore $m$ is further restricted as \begin{align} \label{eq:ECcond} 0\le m<\frac{r}{3} \quad \mathrm{and} \quad m'\geq0. \end{align} Furthermore, these restrictions together with Eq.~\eqref{eq:malpha} lead to \begin{align} \label{eq:alphaineq} \alpha' r \leq \alpha < (1+\alpha') \frac{r}{3} \quad \mathrm{and} \quad \alpha''\le 0, \end{align} where we have used the relation \begin{align} m'=-\frac{(r-2\alpha) \alpha''}{(1-2\alpha')^2}. \end{align} Therefore, we see that the Einstein cluster automatically satisfies all of the above energy conditions. To obtain an Einstein cluster, we need only give either $m$ or $\alpha$ such that the inequalities~\eqref{eq:ECcond} and \eqref{eq:alphaineq} hold.
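For a candidate mass function, the Einstein-cluster conditions~\eqref{eq:ECcond} can be checked directly on a radial grid. The following minimal sketch is our own; the helper name and the sample profile (the same quadratic profile as above) are illustrative.
\begin{verbatim}
# Check 0 <= m < r/3 and m' >= 0 (Eq. (ECcond)) on a radial grid.
import numpy as np

def is_einstein_cluster(m, dm, r):
    return bool(np.all(m >= 0.0) and np.all(m < r / 3.0)
                and np.all(dm >= 0.0))

r = np.linspace(6.0, 30.0, 500)
m = 1.0 + 1.0e-3 * (r - 6.0)**2   # same sample profile as above
dm = 2.0e-3 * (r - 6.0)
print(is_einstein_cluster(m, dm, r))   # True for this profile
\end{verbatim}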
Hereafter, we particularly focus on a black hole surrounded by the Einstein cluster.% \footnote{Recently, a similar background has been used in Refs.~\cite{Boehmer:2007az,Cardoso:2021wlq}.} We assume that the mass function $m$ is of the form \begin{empheq}[left={m=\empheqlbrace}]{alignat=4} & \label{eq:mnear} M_0 \quad &\mathrm{for} \quad &2 M_0<r\le r_{\mathrm{min}}, \\ & \label{eq:mstar} m_*(r) \quad &\mathrm{for} \quad &r_{\mathrm{min}}\leq r\leq r_{\mathrm{max}}, \\ & \label{eq:mfar} M \quad &\mathrm{for} \quad &r\geq r_{\mathrm{max}}, \end{empheq} where $M_0$, $M$, $r_{\mathrm{min}}$, and $r_{\mathrm{max}}$ are positive constants, and $m_*(r)$ is a continuous mass function of $r$ that must satisfy \begin{align} m_*(r_{\mathrm{min}})&=M_0, \\ m_*(r_{\mathrm{max}})&=M. \end{align} This model is known as the thick Einstein shell~\cite{Comer:1993rx}. Figure~\ref{fig:TS} shows a schematic picture of a black hole with mass $M_0$ surrounded by an Einstein cluster distributed in the region $r_{\mathrm{min}}\le r\le r_{\mathrm{max}}$. \begin{figure}[t] \centering \includegraphics[width=7.0cm,clip]{TS.pdf} \caption{ Schematic picture of a black hole with mass $M_0$ surrounded by an Einstein cluster distributed in the region $r_{\mathrm{min}}\le r\le r_{\mathrm{max}}$. } \label{fig:TS} \end{figure} Then the corresponding $\alpha$ can be obtained by solving Eq.~\eqref{eq:alp1/2} as% \footnote{ If we first assume the form of $m$, then $g_{tt}$ is restricted by the continuity of the metric at $r=r_{\mathrm{min}}$ to the following form: \begin{align} g_{tt} =-\frac{r_{\mathrm{min}}-2M_0}{r}\exp \left[\:\! \int_{r_{\mathrm{min}}}^r \frac{\mathrm{d}\tilde{r}}{\tilde{r}-2m(\tilde{r})} \:\!\right]. \end{align}} \begin{empheq}[left={\alpha(r)=\empheqlbrace}]{alignat=4} & \frac{r}{2}-\frac{C_0^2}{2}(r-2M_0) \quad &\mathrm{for} \quad &2 M_0<r\le r_{\mathrm{min}}, \label{eq:near} \\ &\alpha_*(r) \quad &\mathrm{for} \quad &r_{\mathrm{min}}\leq r\leq r_{\mathrm{max}}, \label{eq:matre} \\ & \frac{r}{2}-\frac{C^2}{2}(r-2M) \quad &\mathrm{for} \quad &r\geq r_{\mathrm{max}}, \label{eq:far} \end{empheq} where $C_0$ and $C$ are integration constants, and $\alpha_*(r)$ is a continuous function of $r$ that must satisfy \begin{align} \label{eq:alpbc1} \alpha_*(r_{\mathrm{min}})&=\frac{r_{\mathrm{min}}}{2}-\frac{C_0^2}{2}(r_{\mathrm{min}}-2M_0), \\ \label{eq:alpbc2} \alpha_*(r_{\mathrm{max}})&=\frac{r_{\mathrm{max}}}{2}-\frac{C^2}{2} (r_{\mathrm{max}}-2M). \end{align} In the region $2M_0<r\le r_{\mathrm{min}}$, the metric~\eqref{eq:met2} reduces to the Schwarzschild spacetime with mass $M_0$, \begin{align} \mathrm{d}s^2=-\left(1-2M_0/r\right)\mathrm{d}\tilde{t}^{\,2}+\left(1-2M_0/r\right)^{-1}\mathrm{d}r^2+r^2 (\mathrm{d}\theta^2+\sin^2\theta \:\!\mathrm{d}\varphi^2), \end{align} where $\tilde{t}=C_0 t$ is the Schwarzschild time. Thus, $M_0$ is the mass of the central black hole. In the region $r\ge r_{\mathrm{max}}$, the metric~\eqref{eq:met2} reduces to the Schwarzschild spacetime with mass $M$, \begin{align} \mathrm{d}s^2=-\left(1-2M/r\right)\mathrm{d}T^2+\left(1-2M/r\right)^{-1}\mathrm{d}r^2+r^2 (\mathrm{d}\theta^2+\sin^2\theta \:\!\mathrm{d}\varphi^2), \end{align} where $T=C t$ is the Schwarzschild time. If we set $C=1$, the time $t$ is the proper time for asymptotic static observers. Thus, $M$ is the sum of the masses of the black hole and the matter (i.e., the Arnowitt-Deser-Misner mass).
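When no closed form for $\alpha_*$ is available, Eq.~\eqref{eq:alp1/2} can be integrated numerically inward from $r=r_{\mathrm{max}}$, where Eq.~\eqref{eq:far} with $C=1$ fixes the boundary value $\alpha(r_{\mathrm{max}})=M$. The sketch below is ours and uses, as a concrete example, the constant density profile introduced later in Sec.~\ref{sec:4A}.
\begin{verbatim}
# Integrate alpha' = (alpha - m)/(r - 2m) (Eq. (alp1/2)) through the shell.
import numpy as np
from scipy.integrate import solve_ivp

M0, M, rmin, rmax = 1.0, 2.0, 6.0, 30.0
eps_star = 3.0 * (M - M0) / (4.0 * np.pi * (rmax**3 - rmin**3))

def m_star(r):   # constant density mass function, Sec. 4A
    return M0 + 4.0 * np.pi * eps_star / 3.0 * (r**3 - rmin**3)

def rhs(r, alpha):
    return (alpha - m_star(r)) / (r - 2.0 * m_star(r))

sol = solve_ivp(rhs, (rmax, rmin), [M], rtol=1e-10, atol=1e-12)
alpha_rmin = sol.y[0, -1]
# C0 then follows from Eq. (near): C0^2 = (rmin - 2 alpha)/(rmin - 2 M0)
print(alpha_rmin, (rmin - 2.0 * alpha_rmin) / (rmin - 2.0 * M0))
\end{verbatim}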
To obtain the distribution of an Einstein cluster, we must impose the inequalities~\eqref{eq:ECcond}, or equivalently, \begin{align} 0<m_*<\frac{r}{3}, \quad m'_*>0. \end{align} The second inequality, $m_*<r/3$, evaluated at $r=r_{\mathrm{min}}$ and $r=r_{\mathrm{max}}$ provides lower bounds on $r_{\mathrm{min}}$ and $r_{\mathrm{max}}$, respectively, \begin{align} r_{\mathrm{min}}&>3M_0, \\ r_{\mathrm{max}}&>3M. \end{align} We also obtain $M>M_0$ from the third inequality, $m'_*>0$. \section{Stellar dynamics in an Einstein cluster around a black hole} \label{sec:3} We consider stellar dynamics in an Einstein cluster around a black hole. We assume that the matter field contributes to the particle motion only through the gravitational field and that local interactions between the star and the density distribution (e.g., pressure or friction) are negligible. The Lagrangian of a massive particle of unit mass is given by \begin{align} \label{eq:Lagra} \mathscr{L}=\frac{1}{2} \left[\:\! -\left(1-\frac{2\alpha}{r}\right) \dot{t}^2+\left(1-\frac{2m}{r}\right)^{-1} \dot{r}^2+r^2\dot{\theta}^2+r^2\sin^2\theta \:\!\dot{\varphi}^2 \:\!\right], \end{align} where the dot denotes differentiation with respect to proper time. Without loss of generality, by spherical symmetry we may assume that a freely falling particle moves on the equatorial plane $\theta=\pi/2$. Since $t$ and $\varphi$ are cyclic variables in this mechanical system, the conjugate momenta are conserved, \begin{align} \label{eq:En} \frac{\partial \mathscr{L}}{\partial \dot{t}}&=-\left(1-\frac{2\alpha}{r}\right) \dot{t}=-E, \\ \label{eq:L} \frac{\partial \mathscr{L}}{\partial \dot{\varphi}}&=r^2 \dot{\varphi}=L, \end{align} where $E$ and $L$ are the conserved energy and angular momentum per unit mass of the particle, respectively. The remaining Euler-Lagrange equation for $r$ is written as \begin{align} \label{eq:ELeq} &\ddot{r}+V'=0, \\ &V(r)=\frac{1}{2}\left(1-\frac{2m}{r}\right)\left(\frac{L^2}{r^2}+1\right)-\frac{r-2m}{r-2\alpha}\frac{E^2}{2}, \end{align} where the prime denotes differentiation with respect to $r$, and we have used Eqs.~\eqref{eq:En}, \eqref{eq:L}, and the normalization of the four-velocity, $\mathscr{L}=-1/2$. Integrating Eq.~\eqref{eq:ELeq}, we obtain \begin{align} \label{eq:constr} \frac{1}{2}\dot{r}^2+V=0, \end{align} which corresponds to the normalization condition in terms of $E$ and $L$. We focus on circular orbits, where particles must satisfy the stationarity conditions \begin{align} \dot{r}=0 \quad \mathrm{and} \quad \ddot{r}=0. \end{align} Through Eqs.~\eqref{eq:ELeq} and \eqref{eq:constr}, these conditions are rewritten as \begin{align} V=0 \quad \mathrm{and} \quad V'=0. \end{align} Solving these algebraic equations for $L$ and $E$, we obtain the angular momentum and energy of circular orbits as functions of the orbital radius, \begin{align} \label{eq:Lr} L^2(r)&=\frac{mr^2}{r-3m}, \\ \label{eq:Er} E^2(r)&=\left(1-\frac{2\alpha}{r}\right) \frac{r-2m}{r-3m}, \end{align} respectively. These expressions imply that circular orbits exist only in the range \begin{align} r-3m>0. \end{align} The circular orbits are stable if $V''>0$, marginally stable if $V''=0$, and unstable if $V''<0$, where $V''$ is the second derivative of $V$ evaluated on circular orbits, \begin{align} V'' =\frac{(r-6m)m+m' r^2}{r^3(r-3m)}, \end{align} where Eqs.~\eqref{eq:Lr} and \eqref{eq:Er} were used to eliminate $L^2$ and $E^2$, respectively.
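Equations~\eqref{eq:Lr}, \eqref{eq:Er}, and the expression for $V''$ are easily packaged for numerical use; the following sketch (with our own naming) returns the circular-orbit data at a given radius.
\begin{verbatim}
# Circular-orbit angular momentum, energy, and stability indicator V''.
def circular_orbit(r, m, dm, alpha):
    L2 = m * r**2 / (r - 3.0 * m)                                  # Eq. (Lr)
    E2 = (1.0 - 2.0 * alpha / r) * (r - 2.0 * m) / (r - 3.0 * m)   # Eq. (Er)
    Vpp = ((r - 6.0 * m) * m + dm * r**2) / (r**3 * (r - 3.0 * m))
    return L2, E2, Vpp   # orbit exists for r > 3m; stable if Vpp > 0

# Schwarzschild check (m = M0 = 1, m' = 0, alpha = m) at r = 10:
print(circular_orbit(10.0, 1.0, 0.0, 1.0))   # (100/7, 32/35, 4/7000)
\end{verbatim}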
We consider bound orbits that are nearly circular, i.e., the motion of a particle displaced slightly from the equilibrium radius of a stable circular orbit. For a sufficiently small displacement, we can introduce two frequencies \begin{align} \label{eq:omegavarphi} \omega_\varphi&= \dot{\varphi}=\sqrt{\frac{m}{r^2(r-3m)}}, \\ \label{eq:omegar} \omega_r&=\sqrt{V''}, \end{align} where $\omega_\varphi$ is the angular frequency, and $\omega_r$ is the frequency of the radial harmonic oscillation. The periapsis shift of the nearly circular orbits is measured by the precession rate \begin{align} \nu&=\frac{\omega_\varphi-\omega_r}{\omega_\varphi} \\ &=1-\sqrt{1-\frac{6m}{r}+\frac{m' r}{m} } \\ &=1-\sqrt{1+3\Big(\zeta-\frac{2m}{r}\Big)}, \end{align} where $\zeta$ is the ratio of $\epsilon$ to the average mass density $\bar{\epsilon}(r)$ inside the radius $r$, \begin{align} \zeta(r)&=\frac{\epsilon}{\bar{\epsilon}}=\frac{m' r}{3m}, \\ \bar{\epsilon}(r)&=\frac{3m}{4\pi r^3}. \end{align} The ratio $\zeta$ quantifies the local-density effect of the matter, which contributes negatively to $\nu$. In contrast, the ratio $2m/r$ indicates how close $r$ is to the gravitational radius $2m$ of the mass inside $r$ and can be regarded as the general-relativistic effect, which contributes positively to $\nu$. We should note that $\nu$ contains $m$ and $m'$ but not $\alpha$. The form of $\nu$ implies that \begin{align} 0<\nu<1 \quad &\mathrm{for} \quad \zeta <\frac{2m}{r}<\zeta+\frac{1}{3}, \\ \nu=0 \quad &\mathrm{for} \quad \zeta=\frac{2m}{r}, \\ \nu<0 \quad &\mathrm{for} \quad \zeta>\frac{2m}{r}. \end{align} Hence, if the general-relativistic effect is larger than the local-density effect, the periapsis shift is prograde, whereas if it is smaller, the shift is retrograde. \section{Periapsis shifts in specific models} \label{sec:4} \subsection{Constant density model} \label{sec:4A} We consider the continuous mass function~\eqref{eq:mnear}--\eqref{eq:mfar} with \begin{align} m_*=M_0+\frac{4\pi \epsilon_*}{3} (r^3-r_{\mathrm{min}}^3), \end{align} where $\epsilon_*$ is a constant given by \begin{align} \epsilon_*=\frac{3}{4\pi} \frac{M-M_0}{r_{\mathrm{max}}^3-r_{\mathrm{min}}^3}. \end{align} This mass distribution is produced by the rectangular-shaped (top-hat) energy density profile \begin{align} \epsilon=\epsilon_* \Theta(r-r_{\mathrm{min}})\Theta(r_{\mathrm{max}}-r), \end{align} where $\Theta(\cdot)$ is the step function. Therefore, we call this model the constant density model. There is no analytical expression for $\alpha$ in this model. It is worthwhile to consider the innermost stable circular orbit (ISCO), which satisfies $V=0$, $V'=0$, and $V''=0$. Provided that the mass fraction of the cluster is sufficiently small, i.e., $\eta=(M-M_0)/M_0\ll1$, if the ISCO lies within the matter distribution, then its radius is given by \begin{align} r=6M_0 \left[\:\! 1-\frac{r_{\mathrm{min}}^3+432 M_0^3}{r_{\mathrm{max}}^3-r_{\mathrm{min}}^3} \eta +O(\eta^2) \:\!\right]. \end{align} This means that the ISCO radius is smaller than the Schwarzschild value $6M_0$, a reduction caused by the matter distribution. We focus on nearly circular bound orbits within the matter distribution. In this model, the two ratios $2m/r$ and $\zeta=\epsilon/\bar{\epsilon}$ reduce to \begin{align} \frac{2m}{r} =\frac{2M_0}{r}+\frac{2(M-M_0) (r^3-r_{\mathrm{min}}^3)}{r(r_{\mathrm{max}}^3-r_{\mathrm{min}}^3)}, \quad \zeta =\frac{(M-M_0) r^3}{(M-M_0)r^3+M_0 r_{\mathrm{max}}^3-M r_{\mathrm{min}}^3}. \end{align}
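The precession-rate formula above depends only on $m$ and $m'$ and is trivial to evaluate; the following sketch (ours) reproduces, for instance, the familiar prograde Schwarzschild result when $m'=0$.
\begin{verbatim}
# nu = 1 - sqrt(1 + 3 (zeta - 2m/r)), zeta = m' r/(3 m); nu > 0: prograde.
import numpy as np

def precession_rate(r, m, dm):
    zeta = dm * r / (3.0 * m)
    return 1.0 - np.sqrt(1.0 + 3.0 * (zeta - 2.0 * m / r))

print(precession_rate(10.0, 1.0, 0.0))  # Schwarzschild: 1 - sqrt(0.4) ~ 0.368
\end{verbatim}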
Figure~\ref{fig:cdens} shows the contour plots of $\nu$ within the matter distribution, where the red curves denote $\nu=0$, and the contour interval is $0.25$. Figure~\ref{fig:cdens}(a) shows the result for the case $(M_0, M, r_{\mathrm{min}})=(1,2,6)$. If $r_{\mathrm{max}}<9.524\ldots\,$, then $\nu<0$. When $r_{\mathrm{max}}$ is relatively small, $\epsilon_*$ is relatively large. In this situation, the local-density effect becomes dominant over the general-relativistic effect, and as a result, retrograde shifts are more likely to occur. On the other hand, if $r_{\mathrm{max}}> 9.524\ldots\,$, then a region of $\nu> 0$ appears near $r=r_{\mathrm{min}}$. When $r_{\mathrm{max}}$ is relatively large, $\epsilon_*$ is relatively small. Therefore, the general-relativistic effect becomes dominant over the local-density effect, and as a result, prograde shifts are more likely to occur. Figure~\ref{fig:cdens}(b) shows the result for the case $(M_0, M, r_{\mathrm{min}})=(1,1.2,6)$. If $r_{\mathrm{max}}<7.017\ldots\,$, then $\nu<0$. If $7.770\ldots< r_{\mathrm{max}}< 12.976\ldots\,$, then $\nu> 0$. If $7.017\ldots< r_{\mathrm{max}}<7.770\ldots$ or $r_{\mathrm{max}}> 12.976\ldots\,$, then $\nu> 0$ near $r=r_{\mathrm{min}}$ and $\nu<0$ near $r=r_{\mathrm{max}}$. Compared to the case of Fig.~\ref{fig:cdens}(a), the region of $\nu>0$ appears from a smaller $r_{\mathrm{max}}$. This implies that the local-density effect is weaker, and the general-relativistic effect tends to dominate in a wider parameter range. \begin{figure}[t] \centering \includegraphics[width=12cm,clip]{CDM.pdf} \caption{ Precession rate $\nu$ within the matter distribution for the constant density model. The units in which $M_0=1$ are adopted. The upper and lower panels correspond to the cases $(M, r_{\mathrm{min}})=(2,6)$ and $(1.2,6)$, respectively. The left panels show $\nu$ as a function of $r$ for several fixed values of $r_{\mathrm{max}}$. The right panels show the contours of $\nu$ in the range $r_{\mathrm{min}}\le r\le r_{\mathrm{max}}$. The red solid curves denote $\nu=0$. The contours in (a) and (b) are drawn at intervals of $0.25$. } \label{fig:cdens} \end{figure} We now focus on a situation where the total mass of the matter distribution is much smaller than the black hole mass, $\eta=(M-M_0)/M_0\ll 1$. We also assume that the matter is distributed widely enough and far enough away from the black hole, $r_{\mathrm{max}}/M_0\gg 1$ and $(r_{\mathrm{max}}-r_{\mathrm{min}})/M_0 \gg1$. Then we can estimate the radius at which $\nu=0$ as \begin{align} r\simeq \left(\frac{2M_0 r_{\mathrm{max}}^3}{\eta}\right)^{1/4} \approx 480 \left(\frac{M_0}{4.0\times 10^{6} M_{\odot}}\right)^{1/4} \left(\frac{r_{\mathrm{max}}}{1.9\times 10^3 \,\mathrm{au}}\right)^{3/4} \left(\frac{0.01}{\eta}\right)^{1/4} \,\mathrm{au}, \end{align} where $M_\odot$ is the solar mass, and the typical values of $M_0$ and $r_{\mathrm{max}}$ are chosen as the mass of Sgr~A$^\ast$ and the apoapsis distance of S2/S0-2, respectively. The typical value $480 \,\mathrm{au}$ lies between the periapsis and apoapsis distances of S2/S0-2. Thus, we see that the general-relativistic effect can be canceled out by the local-density effect here, and the retrograde periapsis shift occurs outside this radius. This suggests that the local-density effect may compensate for the general-relativistic effect even if the matter distribution has a mass fraction of only $1\%$ relative to the black hole.
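The fiducial numbers in this estimate are easy to verify; the following one-off arithmetic check is ours, with $M_0$ converted to geometric units ($G=c=1$) in astronomical units.
\begin{verbatim}
# Check r(nu=0) ~ (2 M0 rmax^3 / eta)^(1/4) for Sgr A*-like parameters.
msun_au = 1.477 / 1.496e8                 # G Msun/c^2 (~1.477 km) in au
M0_au, rmax_au, eta = 4.0e6 * msun_au, 1.9e3, 0.01
r_zero = (2.0 * M0_au * rmax_au**3 / eta) ** 0.25
print(f"r(nu=0) ~ {r_zero:.0f} au")       # ~ 480 au, as quoted
\end{verbatim}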
Figure~\ref{fig:cdns} shows several bound orbits of stars moving in the matter distribution, where $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,6,30)$ for Figs.~\ref{fig:cdns}(a)--\ref{fig:cdns}(c) and $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,1.2,6,30)$ for Figs.~\ref{fig:cdns}(d)--\ref{fig:cdns}(f). The initial conditions are $\varphi(0)=0$, $r(0)=r_{\mathrm{c}}=18$, $E=E(r_{\mathrm{c}})+\Delta E$, and $L=L(r_{\mathrm{c}})$, where we have used $C=1$ and have chosen $\Delta E$ as (a) $6.818\times 10^{-4}$, (b) $2.727\times 10^{-3}$, (c) $1.091\times 10^{-2}$, (d) $5.051\times 10^{-4}$, (e) $1.515\times 10^{-3}$, and (f) $4.545\times 10^{-3}$. The corresponding energy values measured by the asymptotic observers are (a) $0.934\ldots\,$, (b) $0.936\ldots\,$, (c) $0.944\ldots\,$, (d) $0.927\ldots\,$, (e) $0.928\ldots\,$, and (f) $0.931\ldots\,$. The initial radial velocity $\dot{r}(0)$ is determined from these initial conditions through the normalization condition~\eqref{eq:constr}. The solid curves denote the orbits of stars revolving counterclockwise in the $(x,y)$ plane, where $(x,y)=(r\cos \varphi, r \sin \varphi)$. When a small amount of energy $\Delta E$ is injected into a star on a circular orbit, the orbit becomes a bound orbit of nearly circular shape [see Figs.~\ref{fig:cdns}(a) and \ref{fig:cdns}(d)]. The amplitude of radial oscillation increases with $\Delta E$ [see Figs.~\ref{fig:cdns}(b) and \ref{fig:cdns}(e)]. In Figs.~\ref{fig:cdns}(c) and \ref{fig:cdns}(f), each orbit extends across the entire matter distribution. The red and blue dots indicate the periapses and apoapses of the bound orbits, respectively. Figures~\ref{fig:cdns}(a)--\ref{fig:cdns}(c) show retrograde shifts, where the blue and red dots revolve clockwise. On the other hand, Figs.~\ref{fig:cdns}(d)--\ref{fig:cdns}(f) show prograde shifts, where the blue and red dots revolve counterclockwise. We can see that even if the amplitude of the radial oscillation becomes large by injecting energy, the shift direction is the same as in the case of nearly circular bound orbits. \begin{figure}[t] \centering \includegraphics[width=15cm,clip]{CDM_orbit.pdf} \caption{Bound orbits and their periapsis and apoapsis shifts in the constant density model, where $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,6,30)$ for (a)--(c) and $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,1.2,6,30)$ for (d)--(f). The orbits are shown by the solid curves in the $(x, y)$ plane, where $(x, y)=(r \cos \varphi, r\sin \varphi)$. The initial conditions are $\varphi(0)=0$, $r(0)=r_{\mathrm{c}}=18$, $E=E(r_{\mathrm{c}})+\Delta E$, and $L=L(r_{\mathrm{c}})$, where $C=1$. The value $\dot{r}(0)$ is determined from Eq.~\eqref{eq:constr}. The red and blue dots are the periapses and apoapses of the bound orbits, respectively. } \label{fig:cdns} \end{figure} \subsection{Isothermal sphere model} We consider the continuous mass function~\eqref{eq:mnear}--\eqref{eq:mfar} with \begin{align} m_* &=\sigma r+ \delta, \end{align} where \begin{align} \sigma&=\frac{M-M_0}{r_{\mathrm{max}}-r_{\mathrm{min}}},\\ \delta&=\frac{M_0 r_{\mathrm{max}}-M r_{\mathrm{min}}}{r_{\mathrm{max}}-r_{\mathrm{min}}}. \end{align} The mass distribution $m_*$ is produced by the density profile of the truncated singular isothermal sphere, \begin{align} \epsilon=\frac{\sigma}{4\pi r^2}\Theta(r-r_{\mathrm{min}})\Theta(r_{\mathrm{max}}-r). \end{align} Therefore, we call this model the isothermal sphere model.
If $\sigma\neq 1/2$, the corresponding $\alpha$ is given by Eqs.~\eqref{eq:near}--\eqref{eq:far} with \begin{align} \alpha_*&=\frac{r}{2}-\frac{C_*^2}{2}(r-2m_*)^{1/(1-2\sigma)}, \end{align} where $C_*$ is an integration constant reflecting the freedom to rescale the time coordinate. The boundary conditions~\eqref{eq:alpbc1} and \eqref{eq:alpbc2} give the relations \begin{align} C_0&=C_* (r_{\mathrm{min}}-2M_0)^{\sigma/(1-2\sigma)}, \\ C&=C_*(r_{\mathrm{max}}-2M)^{\sigma/(1-2\sigma)}. \end{align} If $\sigma=1/2$, the corresponding $\alpha_*$ takes the form \begin{align} \alpha_*=\frac{r}{2}-\frac{C_*^2}{2} e^{r/(r_{\mathrm{min}}-2M_0)}, \end{align} where $C_*$ is again an integration constant of the same type. The boundary conditions~\eqref{eq:alpbc1} and \eqref{eq:alpbc2} lead to \begin{align} C_0&=C_*\:\! \frac{e^{r_{\mathrm{min}}/[2(r_{\mathrm{min}}-2M_0)]}}{\sqrt{r_{\mathrm{min}}-2M_0}}, \\ C &=C_*\:\!\frac{e^{r_{\mathrm{max}}/[2(r_{\mathrm{max}}-2M)]}}{\sqrt{r_{\mathrm{max}}-2M}}. \end{align} For $\eta=(M-M_0)/M_0\ll1$, if the ISCO lies within the matter distribution, its radius is given by \begin{align} r=6 M_0 \left[\:\! 1 -\frac{r_{\mathrm{min}}}{r_{\mathrm{max}}-r_{\mathrm{min}}} \eta +O\left(\eta^2\right) \:\!\right]. \end{align} This means that the ISCO radius is smaller than the Schwarzschild value $6M_0$, again a reduction caused by the matter distribution. We focus on nearly circular bound orbits within the matter distribution. The two ratios $2m/r$ and $\zeta$ of this model reduce to \begin{align} \frac{2m}{r}&=2\:\!\sigma+\frac{2\delta}{r}, \quad \zeta=\frac{\sigma r}{3(\sigma r+\delta)}. \end{align} \begin{figure}[t] \centering \includegraphics[width=11.5cm,clip]{IST.pdf} \caption{ Contour plots of the precession rate $\nu$ for the isothermal sphere model in the range $r_{\mathrm{min}}\le r\le r_{\mathrm{max}}$, where the matter distribution exists. The units in which $M_0=1$ are adopted. The red solid curves denote $\nu=0$. The contour interval is $0.25$. } \label{fig:iso} \end{figure} Figure~\ref{fig:iso} shows the contour plots of $\nu$ within the matter distribution. The red curves denote $\nu=0$, and the contour interval is 0.25. Figure~\ref{fig:iso}(a) shows the result for the case $(M_0, M, r_{\mathrm{min}})=(1,2.2,5.8)$. If $r_{\mathrm{max}}<9.640\ldots\,$, then $\nu<0$; if $12.528\ldots <r_{\mathrm{max}}<14.559\ldots\,$, then $\nu>0$; if $r_{\mathrm{max}}>14.559\ldots\,$, then $\nu>0$ near $r=r_{\mathrm{min}}$ and $\nu<0$ near $r=r_{\mathrm{max}}$. These behaviors can be interpreted in the same way as in the constant density model. However, there is a novel situation, with $\nu<0$ near $r=r_{\mathrm{min}}$ and $\nu>0$ near $r=r_{\mathrm{max}}$ for $9.640\ldots<r_{\mathrm{max}}<12.528\ldots\,$, which is not found in the constant density model (see Fig.~\ref{fig:cdens}). Since the local density is proportional to $r^{-2}$, the local-density effect in this range decreases, as $r$ increases, to such an extent that the general-relativistic effect dominates. Figure~\ref{fig:iso}(b) shows the result for the case $(M_0, M, r_{\mathrm{min}})=(1,2.2,6)$. The contour of $\nu=0$ forms a vertical segment at $r_{\mathrm{max}}=13.2$, where $(\sigma, \delta)=(1/6, 0)$ and $\nu=0$ in the whole range of $r$. This is a special case unique to the isothermal sphere model. Figure~\ref{fig:iso}(c) shows the result for the case $(M_0, M, r_{\mathrm{min}})=(1,2.2,6.2)$. This case shows qualitatively the same behavior as the case in Fig.~\ref{fig:cdens}(a).
Figures~\ref{fig:iso}(a)--\ref{fig:iso}(c) show the change of the contours as the value of $r_{\mathrm{min}}$ gradually increases. Figure~\ref{fig:iso}(d) shows the result for the case $(M_0, M, r_{\mathrm{min}})=(1,1.2,10)$. This case shows qualitatively the same behavior as the case in Fig.~\ref{fig:cdens}(b). We now focus on a situation where $\eta\ll 1$. We also assume that $r_{\mathrm{max}}/M_0\gg 1$ and $(r_{\mathrm{max}}-r_{\mathrm{min}})/M_0 \gg1$. Then we can estimate the radius at which $\nu=0$ as \begin{align} r\simeq \left(\frac{6 r_{\mathrm{max}} M_0}{\eta}\right)^{1/2} \approx 210 \left(\frac{r_{\mathrm{max}}}{1.9\times 10^3 \,\mathrm{au}}\right)^{1/2} \left(\frac{M_0}{4.0\times 10^{6} M_{\odot}}\right)^{1/2}\left(\frac{0.01}{\eta}\right)^{1/2} \,\mathrm{au}, \end{align} where the typical values are chosen to be the same as in the previous subsection. The typical value $210 \,\mathrm{au}$ is comparable to the periapsis distance $120 \,\mathrm{au}$ of S2/S0-2. As in the previous model, this suggests that the local-density effect can cancel out the general-relativistic effect even if the matter distribution has only a $1\%$ mass fraction relative to the black hole. We also consider several bound orbits within the matter distribution and their periapsis shifts. Figures~\ref{fig:nonliniso}(a)--\ref{fig:nonliniso}(c) show the results for the case $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,6,10)$, and thus $(\sigma, \delta)=(1/4, -1/2)$. The initial conditions are $\varphi(0)=0$, $r(0)=r_{\mathrm{c}}=7.7$, $E=E(r_{\mathrm{c}})+\Delta E$, and $L=L(r_{\mathrm{c}})$, where we have used $C=1$ and have chosen $\Delta E$ as (a) $9.021\times 10^{-4}$, (b) $3.608\times 10^{-3}$, and (c) $1.443\times 10^{-2}$. The corresponding energy values measured by the asymptotic observers are (a) $0.850\ldots\,$, (b) $0.852\ldots\,$, and (c) $0.863\ldots\,$. The star moves counterclockwise in time, while the red and blue dots shift clockwise. Therefore, we see that the retrograde periapsis shift occurs. \begin{figure}[t] \centering \includegraphics[width=15cm,clip]{IST_orbit.pdf} \caption{ Bound orbits and their periapsis and apoapsis shifts in the isothermal sphere model, where $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,6,10)$ and thus $(\sigma, \delta)=(1/4, -1/2)$ for (a)--(c), and $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,5,30)$ and thus $(\sigma, \delta)=(1/25, 4/5)$ for (d)--(f). The curves and dots are defined in the same way as in Fig.~\ref{fig:cdns}. The initial conditions are of the same forms as in Fig.~\ref{fig:cdns}. The values $r_{\mathrm{c}}=7.7$ for (a)--(c) and $r_{\mathrm{c}}=12$ for (d)--(f) are chosen. } \label{fig:nonliniso} \end{figure} Figures~\ref{fig:nonliniso}(d)--\ref{fig:nonliniso}(f) show the results for the case $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,5,30)$, where $(\sigma, \delta)=(1/25, 4/5)$. The initial conditions are $\varphi(0)=0$, $r(0)=r_{\mathrm{c}}=12$, $E=E(r_{\mathrm{c}})+\Delta E$, and $L=L(r_{\mathrm{c}})$, where we have used $C=1$ and have chosen $\Delta E$ as (d) $1.173\times 10^{-3}$, (e) $5.868\times 10^{-3}$, and (f) $2.934\times 10^{-2}$. The corresponding energy values measured by the asymptotic observers are (d) $0.914\ldots\,$, (e) $0.919\ldots\,$, and (f) $0.942\ldots\,$. When the energy slightly increases from that of the circular orbit, the orbital shape becomes quasi-circular [see Fig.~\ref{fig:nonliniso}(d)].
As the energy increases, the radial amplitude increases [see Figs.~\ref{fig:nonliniso}(e) and \ref{fig:nonliniso}(f)]. These cases show prograde periapsis shifts. We can see that even if the amplitude of the radial oscillation becomes large by injecting energy, the shift direction is the same as in the quasi-circular case. \subsection{NFW model} We consider the continuous mass function~\eqref{eq:mnear}--\eqref{eq:mfar} with \begin{align} m_*=M_0+4\pi \epsilon_* d^3 \left[\:\! \ln \left( \frac{1+r/d}{1+r_{\mathrm{min}}/d} \right) +\frac{1}{1+r/d}-\frac{1}{1+r_{\mathrm{min}}/d}\:\!\right], \end{align} where $d$ is a constant called the scale radius, and $\epsilon_*$ is a constant given by \begin{align} \epsilon_* =\frac{M-M_0}{4\pi d^3}\left[\:\! \ln \left( \frac{1+r_{\mathrm{max}}/d}{1+r_{\mathrm{min}}/d} \right) +\frac{1}{1+r_{\mathrm{max}}/d} -\frac{1}{1+r_{\mathrm{min}}/d} \:\!\right]^{-1}. \end{align} The mass distribution $m_*$ is produced by the Navarro-Frenk-White~(NFW) profile~\cite{Navarro:1995iw} \begin{align} \epsilon =\frac{\epsilon_*}{(r/d)(1+r/d)^2} \Theta(r-r_{\mathrm{min}})\Theta(r_{\mathrm{max}}-r). \end{align} Therefore, we call this model the NFW model. For $\eta\ll 1$, if the ISCO lies within the matter distribution, its radius is given by \begin{align} r&=6M_0 \left[\:\! 1-\frac{1-\ln (6M_0/r_{\mathrm{min}})}{\ln (r_{\mathrm{max}}/r_{\mathrm{min}})} \eta +O(\eta^2) \:\!\right] \end{align} in the limit $d\to 0$, and \begin{align} r&=6M_0 \left[\:\! 1-\frac{r_{\mathrm{min}}^2+36 M_0^2}{r_{\mathrm{max}}^2-r_{\mathrm{min}}^2} \eta+O(\eta^2) \:\!\right] \end{align} in the limit $d/M_0\to \infty$. These values are smaller than the Schwarzschild value $6M_0$, again due to the matter distribution. We focus on nearly circular bound orbits within the matter distribution. Figures~\ref{fig:NFWcontour}(a)--\ref{fig:NFWcontour}(c) show the contour plots of $\nu$ within the matter distribution, where $(M_0,M,r_{\mathrm{min}})=(1,2,6)$ are common parameter values, and (a) $d=20$, (b) $d=9$, and (c) $d=6$. Figures~\ref{fig:NFWcontour}(d)--\ref{fig:NFWcontour}(f) also show the contour plots of $\nu$ within the matter distribution, where $(M_0,M,r_{\mathrm{min}})=(1,1.2,10)$ are common parameter values, and (d) $d=100$, (e) $d=25$, and (f) $d=5$. Figures~\ref{fig:NFWcontour}(a) and \ref{fig:NFWcontour}(d) show qualitatively the same behaviors as in Figs.~\ref{fig:cdens}(a) and \ref{fig:iso}(c). Figure~\ref{fig:NFWcontour}(c) shows qualitatively the same behavior as in Fig.~\ref{fig:iso}(a). Figures~\ref{fig:NFWcontour}(e) and \ref{fig:NFWcontour}(f) show qualitatively the same behaviors as in Figs.~\ref{fig:cdens}(b) and \ref{fig:iso}(d). Consider Fig.~\ref{fig:NFWcontour}(b), which shows a novel $\nu=0$ behavior that has not appeared in the previous cases. If $r_{\mathrm{max}}<11.290\ldots\,$, then $\nu<0$; if $12.217\ldots<r_{\mathrm{max}}<15.324\ldots\,$, then $\nu>0$; if $r_{\mathrm{max}}>15.324\ldots\,$, then $\nu<0$ near $r=r_{\mathrm{max}}$ and $\nu>0$ near $r=r_{\mathrm{min}}$. These behaviors can be interpreted in the same way as in the constant density model and the isothermal sphere model. If $11.290\ldots<r_{\mathrm{max}}<11.814\ldots\,$, then $\nu<0$ near $r=r_{\mathrm{min}}$ and $\nu>0$ near $r=r_{\mathrm{max}}$. The same behavior is seen in the case of Fig.~\ref{fig:iso}(a) of the isothermal sphere model.
If $11.814\ldots<r_{\mathrm{max}}<12.217\ldots\,$, then $\nu>0$ near $r=r_{\mathrm{min}}$ and $r=r_{\mathrm{max}}$, whereas $\nu<0$ in the intermediate region, where $d$ is comparable to the distribution scale. This behavior is not seen in the previous two models. Assume that $M_0=4.0\times 10^{6} M_{\odot}$, $\eta=0.01$, $r_{\mathrm{max}}=1.9\times 10^{3} \,\mathrm{au}$, and $r_{\mathrm{min}}=1.2\times 10^2 \,\mathrm{au}$. Then we can estimate the radius at which $\nu=0$ as $r\approx 130 \,\mathrm{au}$ for $d\approx 0$, $r\approx 200 \,\mathrm{au}$ for $d=100 \,\mathrm{au}$, and $r\approx 420 \,\mathrm{au}$ for $d=1\times 10^4 \,\mathrm{au}$. As in the previous two cases, the local-density effect can compensate for the general-relativistic effect in a realistic observational range, even if the mass fraction is $1\%$. \begin{figure}[t] \centering \includegraphics[width=15cm,clip]{NFW.pdf} \caption{Contour plots of the precession rate $\nu$ for the NFW model in the range $r_{\mathrm{min}}\le r\le r_{\mathrm{max}}$, where the matter distribution exists. The parameters are $(M_0, M, r_{\mathrm{min}})=(1, 2, 6)$ for (a)--(c) and $(M_0, M, r_{\mathrm{min}})=(1, 1.2, 10)$ for (d)--(f). The red curves denote $\nu=0$. The contour interval is $0.1$.} \label{fig:NFWcontour} \end{figure} Figure~\ref{fig:NFWorbit} shows several bound orbits within the matter distribution, where $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,6,30)$ are common parameter values, $d=20$ for Figs.~\ref{fig:NFWorbit}(a)--\ref{fig:NFWorbit}(c), and $d=6$ for Figs.~\ref{fig:NFWorbit}(d)--\ref{fig:NFWorbit}(f). The initial conditions are $\varphi(0)=0$, $r(0)=r_{\mathrm{c}}=18$, $E=E(r_{\mathrm{c}})+\Delta E$, and $L=L(r_{\mathrm{c}})$, where we have used $C=1$ and have chosen $\Delta E=1.182 \times 10^{-3}$ for Figs.~\ref{fig:NFWorbit}(a) and \ref{fig:NFWorbit}(d), $\Delta E=3.548\times 10^{-3}$ for Figs.~\ref{fig:NFWorbit}(b) and \ref{fig:NFWorbit}(e), and $\Delta E=1.064\times 10^{-2}$ for Figs.~\ref{fig:NFWorbit}(c) and \ref{fig:NFWorbit}(f). The corresponding energy values measured by the asymptotic observers are (a) $0.939\ldots\,$, (b) $0.941\ldots\,$, (c) $0.948\ldots\,$, (d) $0.943\ldots\,$, (e) $0.945\ldots\,$, and (f) $0.952\ldots\,$. As $\Delta E$ increases from (a) to (c) or from (d) to (f), the amplitude of the radial oscillation increases. The top row [(a)--(c)] shows retrograde periapsis shifts, while the bottom row [(d)--(f)] shows prograde periapsis shifts. \begin{figure}[t] \centering \includegraphics[width=15cm,clip]{NFW_orbit.pdf} \caption{ Bound orbits and their periapsis and apoapsis shifts in the NFW model, where $(M_0, M, r_{\mathrm{min}}, r_{\mathrm{max}})=(1,2,6,30)$, with $d=20$ for (a)--(c) and $d=6$ for (d)--(f). The curves and dots are defined in the same way as in Fig.~\ref{fig:cdns}. The initial conditions are of the same forms as in Fig.~\ref{fig:cdns}, where $r_{\mathrm{c}}=18$. } \label{fig:NFWorbit} \end{figure} \section{Model functions and observables} \label{sec:5} We consider whether the model functions $m(r)$, $\alpha(r)$, and $\epsilon(r)$ can be determined from observations of photons coming from orbiting stars. Assume that a distant observer observes a nearly circular bound orbit from its orbital axis (i.e., face-on to the orbital plane).
Then we focus on the following four observables: the orbital period $T_\varphi$; the periapsis shift angle $\Delta \varphi_{\mathrm{p}}$ of the nearly circular bound orbit; the redshift factor $z$ of photons coming from the star (the so-called spectroscopic observable); and the angular radius $\beta$ of the orbital radius of the star on the celestial sphere. The two observables $T_\varphi$ and $\Delta \varphi_{\mathrm{p}}$ can be measured by continuous observation over at least one orbital period. In the present spacetime, using Eqs.~\eqref{eq:En}, \eqref{eq:Er}, \eqref{eq:omegavarphi}, and \eqref{eq:omegar}, these observables are represented as \begin{align} T_\varphi &=\frac{2\pi}{\mathrm{d}\varphi/\mathrm{d}T} =2\pi \sqrt{\frac{r^3}{m}\frac{r-2m}{r-2\alpha}}\bigg|_{\mathrm{s}}, \\ \Delta \varphi_{\mathrm{p}} &=\frac{2\pi}{\omega_r}(\omega_\varphi-\omega_r) =2\pi \frac{\nu}{1-\nu}=2\pi \left[\:\!\left(1+\frac{4\pi r^3\epsilon}{m}-\frac{6m}{r}\right)^{-1/2}-1 \:\!\right]\!\!\Bigg|_{\mathrm{s}}, \end{align} where $\mathrm{d}\varphi/\mathrm{d}T=(\mathrm{d}\varphi/\mathrm{d}t)(\mathrm{d} t/\mathrm{d}T)=C^{-1} \dot{\varphi}/\dot{t}$, we have chosen $C=1$, and the subscript ``$\mathrm{s}$" denotes quantities evaluated at the source. The other two, $z$ and $\beta$, are determined by the momentum $k_\mu$ of photons coming from the star, \begin{align} 1+z&=\frac{(k_\mu u_{\mathrm{s}}^\mu)\big|_{\mathrm{s}}}{(k_\mu u_{\mathrm{o}}^\mu)\big|_{\mathrm{o}}}=\dot{t}-b \:\!\dot{\varphi}, \\ \sin \beta&= \sqrt{(k^{(2)}/k^{(0)})^2+(k^{(3)}/k^{(0)})^2}\Big|_{\mathrm{o}} =\frac{q}{r_{\mathrm{o}}} \sqrt{1-\frac{2\alpha(r_{\mathrm{o}})}{r_{\mathrm{o}}}}, \end{align} where the subscript ``$\mathrm{o}$" denotes quantities evaluated at the observer, and $b$ and $q$ are constants of photon motion known as the impact parameters, \begin{align} \label{eq:bqdef1} b&=\frac{k_\varphi}{(-k_t)}, \\ \label{eq:bqdef2} q&=\frac{1}{(-k_t)}\sqrt{k_\theta^2+\frac{k_\varphi^2}{\sin^2\theta}}, \end{align} and $u^{\mu}_{\mathrm{s}}=(\dot{t},0,0,\dot{\varphi})$ and $u_{\mathrm{o}}^\mu=(1,0,0,0)$ are the four-velocities of the star and of the distant observer in the coordinates $(t,r,\theta,\varphi)$, respectively, and $k^{(\mu)}$ are the tetrad components of $k_\mu$ (see Appendix~\ref{sec:B} for details). Using the face-on assumption, we can set $b=0$ by a coordinate transformation, and then \begin{align} \label{eq:zobs} 1+z & =\sqrt{\frac{r}{r-3m} \frac{r-2m}{r-2\alpha}}\bigg|_{\mathrm{s}}, \end{align} where we have used Eqs.~\eqref{eq:En} and \eqref{eq:Er}. Furthermore, in the face-on case, because only photons with zero radial velocity at the emission point reach the distant observer on the axis, the parameter $q$ takes the value \begin{align} \label{eq:qsour} q^2 =\frac{r^3}{r-2\alpha}\bigg|_{\mathrm{s}}, \end{align} where we have used the radial equation for photon motion, \begin{align} \left(\frac{\mathrm{d}r}{\mathrm{d}\lambda}\right)^2+\left(1-\frac{2m}{r}\right)\left(\frac{q^2}{r^2}-\frac{r}{r-2\alpha}\right)=0, \end{align} where $\lambda$ is an affine parameter. Using $r_{\mathrm{o}}\gg M$ and $\beta\ll 1$, we obtain \begin{align} \beta&= \frac{r_{\mathrm{s}}}{r_{\mathrm{o}}\sqrt{ 1-2\alpha(r_{\mathrm{s}})/r_{\mathrm{s}}}}. \end{align}
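For numerical work it is convenient to collect the four face-on observables in one routine; the sketch below is ours, with $m$, $m'$ ($=$ \texttt{dm}), and $\alpha$ the local model values at the source radius and all names illustrative.
\begin{verbatim}
# The four observables (T_phi, Delta phi_p, z, beta) for a face-on,
# nearly circular orbit, following the expressions above (C = 1).
import numpy as np

def observables(r, m, dm, alpha, r_o):
    T_phi = 2.0 * np.pi * np.sqrt(r**3 / m * (r - 2.0 * m) / (r - 2.0 * alpha))
    nu = 1.0 - np.sqrt(1.0 + 3.0 * (dm * r / (3.0 * m) - 2.0 * m / r))
    dphi_p = 2.0 * np.pi * nu / (1.0 - nu)
    z = np.sqrt(r / (r - 3.0 * m) * (r - 2.0 * m) / (r - 2.0 * alpha)) - 1.0
    beta = (r / r_o) / np.sqrt(1.0 - 2.0 * alpha / r)
    return T_phi, dphi_p, z, beta
\end{verbatim}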
If we know the value of $r_{\mathrm{o}}$, by measuring the four observables $\Delta \varphi_{\mathrm{p}}$, $T_\varphi$, $z$, and $\beta$ for a nearly circular bound orbit in the face-on case, we obtain the values of $m(r_{\mathrm{s}})$, $\alpha(r_{\mathrm{s}})$, $\epsilon(r_{\mathrm{s}})$, and the equilibrium radius $r_{\mathrm{s}}$. Therefore, by using observational data for several stars orbiting at different radii, we can obtain the model functions by appropriate interpolation. In contrast, if we do not know the value of $r_{\mathrm{o}}$, we obtain a relationship between the two sets of dimensionless quantities ($\Delta \varphi_{\mathrm{p}}$, $T_{\varphi}/r_{\mathrm{s}}$, $z$, $\beta$) and ($m(r_{\mathrm{s}})/r_{\mathrm{s}}$, $\alpha(r_{\mathrm{s}})/r_{\mathrm{s}}$, $\epsilon(r_{\mathrm{s}}) r_{\mathrm{s}}^2$, $r_{\mathrm{o}}/r_{\mathrm{s}}$). Note, however, that for an observer that is not face-on to the nearly circular orbital plane of the star, the situation becomes much more involved due to the uncertainties in measuring the orbital shift angle $\Delta\varphi_{\mathrm{p}}$ and the position on the celestial sphere $(X,Y)$ (see Appendix~\ref{sec:B}).% \footnote{These uncertainties arise from the gravitational lens effect on the orbits of photons coming from the source to the observer. For $\Delta \varphi_{\mathrm{p}}$, the gravitational lens effect deforms the visible shape of the star's orbit, and the exact value of $\Delta \varphi_{\mathrm{p}}$ cannot be measured from the shape without prior knowledge of the metric functions $m(r)$ and $\alpha(r)$. For $(X,Y)$, although they are related to $(q, b/\sin\theta_{\mathrm{o}})$ through Eqs.~\eqref{eq:Xapp} and \eqref{eq:Yapp}, neither the relation $b=0$ nor Eq.~\eqref{eq:qsour} is valid in general because of the lens effect. However, if the deviation from the face-on case is small, or equivalently, if the inclination angle is small (i.e., $i=\theta_{\mathrm{o}}\ll 1$), the discussion for the face-on case holds approximately. The correction is then typically given by the projection of the line-of-sight direction onto the face-on direction, whose order is expected to be $O(1-\cos i)=O(i^2)$.} To make the discussion clear, we focus for the moment on a nearly circular orbit in the quasi-Newtonian regime, with equilibrium radius $r_{\mathrm{s}}$, where $r_{\mathrm{s}}\gg m(r_{\mathrm{s}})$, $r_{\mathrm{s}}\gg \alpha(r_{\mathrm{s}})$, and $|\nu|\ll 1$. In this case, the periapsis shift $\Delta \varphi_{\mathrm{p}}$ is given by the following simple formula: \begin{align} \label{eq:shiftNe} \Delta \varphi_{\mathrm{p}}=3\pi \left( \frac{2m}{r}-\frac{\epsilon}{\bar{\epsilon}} \right)\!\bigg|_{\mathrm{s}}. \end{align} We now derive concrete expressions for $m(r_{\mathrm{s}})$, $\alpha(r_{\mathrm{s}})$, and $\epsilon(r_{\mathrm{s}})$, and therefore for the deviation from the Schwarzschild solution. For such an orbit, we can determine the mass $m(r_{\mathrm{s}})$, the radius $r_{\mathrm{s}}$, and the inclination angle $i=\theta_{\mathrm{o}}$ using the maximum and minimum separations from the central object, $r_{\mathrm{s}}=r_{\mathrm{o}} \max(\beta)$ and $r_{\mathrm{s}} \cos i=r_{\mathrm{o}} \min(\beta)$, respectively; the orbital period, $T_\varphi=2\pi \sqrt{r^3/m}|_{\mathrm{s}}$; and the difference between the maximum and minimum redshifts, \begin{align} \max(z)-\min(z)=2 \sqrt{m/r}|_{\mathrm{s}}\sin i. \end{align}
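The determination just described can be phrased as a small inversion routine; the sketch below is ours, with illustrative argument names, and returns the predicted redshift amplitude as a consistency check against the data.
\begin{verbatim}
# Quasi-Newtonian inversion: recover m(r_s), r_s, and i from max/min beta
# and the orbital period; dz_pred should match max(z) - min(z).
import numpy as np

def invert_quasi_newtonian(beta_max, beta_min, T_phi, r_o):
    r_s = r_o * beta_max                        # r_s = r_o max(beta)
    inc = np.arccos(beta_min / beta_max)        # r_s cos i = r_o min(beta)
    m_s = 4.0 * np.pi**2 * r_s**3 / T_phi**2    # T_phi = 2 pi sqrt(r_s^3/m_s)
    dz_pred = 2.0 * np.sqrt(m_s / r_s) * np.sin(inc)
    return m_s, r_s, inc, dz_pred
\end{verbatim}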
Then, if the periapsis shift angle $\Delta \varphi_{\mathrm{p}}$ is observed, the values of the functions $\epsilon/\bar{\epsilon}$ and $\alpha$ at the source are obtained by \begin{align} \frac{\epsilon}{\bar{\epsilon}}\Big|_{\mathrm{s}} &=-\frac{\Delta \varphi_{\mathrm{p}}}{3\pi}+\frac{2m}{r}\Big|_{\mathrm{s}}, \\ \label{eq:alpNe} \alpha(r_{\mathrm{s}}) &=r_{\mathrm{s}} \langle z\rangle -\frac{m(r_{\mathrm{s}})}{2}, \end{align} where $\langle z\rangle$ is the redshift of the star averaged over one orbital period, and we have used Eq.~\eqref{eq:zobs} to obtain Eq.~\eqref{eq:alpNe}. We can recast the above equations into the following form: \begin{align} \label{eq:epNewt} \epsilon(r_{\mathrm{s}})&= \frac{3m^2}{2\pi r^4}\bigg|_{\mathrm{s}} \frac{ \Delta \varphi_{\mathrm{p},\mathrm{conv}} -\Delta \varphi_{\mathrm{p}}}{ \Delta \varphi_{\mathrm{p}, \mathrm{conv}} }, \\ \frac{\alpha-m}{m}\Big|_{\mathrm{s}} &=\frac{3}{2} \frac{ \langle z\rangle -\langle z\rangle_{\mathrm{conv}} }{ \langle z\rangle_{\mathrm{conv}}}, \end{align} where the subscript ``conv" stands for conventional, and $\Delta \varphi_{\mathrm{p}, \mathrm{conv}}$ and $\langle z\rangle_{\mathrm{conv}}$ are the conventional (vacuum Schwarzschild) expressions for $\Delta \varphi_{\mathrm{p}}$ and $\langle z\rangle$, respectively, \begin{align} \Delta \varphi_{\mathrm{p}, \mathrm{conv}} &=\frac{6\pi m}{r}\Big|_{\mathrm{s}}, \\ \langle z\rangle_{\mathrm{conv}} &=\frac{3m}{2r}\Big|_{\mathrm{s}}. \end{align} Thus, if we know the value of $r_{\mathrm{o}}$, we can \textit{locally} determine not only the deviation of $\alpha(r_{\mathrm{s}})$ from the gravitational mass $m(r_{\mathrm{s}})$ but also the energy density $\epsilon(r_{\mathrm{s}})$ at the orbital radius of the star. The accuracy in observing $\langle z\rangle$ determines the sensitivity to the deviation of $\alpha(r_{\mathrm{s}})$ from $m(r_{\mathrm{s}})$, while the accuracy in observing $\Delta \varphi_{\mathrm{p}}$ determines the sensitivity to $\epsilon$. If we normalize the parameters by typical values for S2/S0-2 near Sgr~A$^\ast$, we obtain for the latter \begin{align} \epsilon(r_{\mathrm{s}}) \approx 3.7 \times 10^{-5} M_{\odot} \mathrm{\, au}^{-3} \left( \frac{ \Delta \varphi_{\mathrm{p}, \mathrm{conv}} -\Delta \varphi_{\mathrm{p}}}{ \Delta \varphi_{\mathrm{p}, \mathrm{conv}} }\bigg/0.1 \right)\left(\frac{m}{4.0\times 10^6 M_{\odot}}\right)^2 \left( \frac{r_{\mathrm{s}}}{120 \mathrm{\, au}} \right)^{-4}, \end{align} where $\Delta \varphi_{\mathrm{p, conv}} \approx 0.36^{\circ}\:\! [\:\!m/(4.0\times10^6 M_\odot)\:\!](r_{\mathrm{s}}/120\mathrm{\, au})^{-1}$. If we can implement the above analysis for many stars with different $r_{\mathrm{s}}$, we can check whether the functions $\alpha(r)$, $m(r)$, and $\epsilon(r)$ so obtained satisfy Eqs.~\eqref{eq:ep1} and \eqref{eq:alp1/2}, and thus test the Einstein cluster solution as a model of the gravitational field sourced by dark matter particles surrounding the central black hole.
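As a numerical cross-check of the normalization quoted above (our own arithmetic, using the S2/S0-2 fiducial values $m=4.0\times10^6 M_\odot$, $r_{\mathrm{s}}=120\,\mathrm{au}$, and a $10\%$ fractional deficit in the shift):
\begin{verbatim}
# Reproduce eps(r_s) ~ 3.7e-5 Msun/au^3 (up to rounding of G Msun/c^2)
# and Delta phi_conv ~ 0.36 deg from Eq. (epNewt).
import numpy as np

msun_au = 1.477 / 1.496e8                  # G Msun/c^2 in au
m, r_s, deficit = 4.0e6 * msun_au, 120.0, 0.1
eps = 3.0 * m**2 / (2.0 * np.pi * r_s**4) * deficit    # Eq. (epNewt)
print(eps / msun_au)                       # ~ 3.6e-5 (Msun/au^3), cf. 3.7e-5
print(np.degrees(6.0 * np.pi * m / r_s))   # ~ 0.36 deg
\end{verbatim}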
\section{Summary and discussion} \label{sec:6} We have considered the periapsis shift of geodesic bound orbits in physically reasonable static clouds in an asymptotically flat black hole spacetime. The background spacetime, constructed as the Schwarzschild black hole surrounded by a static Einstein cluster, satisfies the four energy conditions (i.e., the weak, strong, null, and dominant energy conditions) in the entire region. In the framework of general relativity, we have explored how the matter distribution affects the prograde shift of an orbiting star (the periapsis shift in the same direction as the revolution) observed in vacuum black hole spacetimes. Consequently, we have shown that the precession rate $\nu$ of nearly circular bound orbits is determined by the relative magnitude of the following two terms of opposite sign: the ratio of the gravitational radius of the mass contained within the equilibrium orbital radius to that radius, $2m/r$, with a positive sign, and the ratio of the local energy density to the averaged energy density within the equilibrium radius, $\zeta=\epsilon/\bar{\epsilon}$, with a negative sign. If the general-relativistic effect dominates over the local-density effect (i.e., $2m/r>\zeta$), then the prograde shift occurs (i.e., $0<\nu<1$), while if the local-density effect dominates over the general-relativistic effect (i.e., $\zeta>2m/r$), then the retrograde shift occurs (i.e., $\nu<0$). This result means that a negative contribution to the periapsis shift naturally appears as a consequence of an extended distribution of physically reasonable matter even in general relativity. Note that even if a retrograde shift occurs, it does not imply an exotic spacetime (e.g., a naked-singular or wormhole spacetime) but simply the existence of significant local energy density on the orbit of the star. Furthermore, if the prograde shift exceeds the value expected from the general-relativistic effect, this implies that the local energy density is negative, thus violating the energy conditions. We have revealed that, if the distance from the observer to the star is given, the four quantities for a nearly circular bound orbit (i.e., the orbital shift angle, the orbital period, the redshift, and the source position mapped onto the observer's sky) determine the local values of the background model functions. Therefore, the model functions can be reconstructed by measuring such observables at different radii. A notable advantage of focusing on nearly circular bound orbits is that the shape of the model functions can be estimated without assuming a concrete functional form. The discussion is much clearer for quasi-circular orbits in the post-Newtonian regime. Furthermore, we have estimated the precession rate of nearly circular bound orbits in the constant density model, the isothermal sphere model, and the NFW model. A common property of these models is that if the matter distribution is sufficiently broadened while the total mass is fixed, the prograde periapsis shift occurs, due to the dominant general-relativistic effect, near the inner boundary of the distribution, while the retrograde shift occurs, due to the dominant local-density effect, near the outer boundary of the distribution. In the situation at the center of our galaxy, we find that even if the mass fraction of the matter relative to the black hole is only $1\%$, the local-density effect can compensate for the prograde shift due to the general-relativistic effect, which is consistent with the result in Ref.~\cite{Rubilar:2001}. Furthermore, if the matter distribution is localized in a narrow region while the total mass is fixed, the local-density effect tends to dominate over the general-relativistic effect, so that the retrograde shift occurs. In the intermediate situation, a variety of behaviors appears, depending on the density distribution.
Thus, we can extract information about the matter distribution from the distribution of the periapsis shifts. We have also numerically simulated several bound orbits with large eccentricity within the matter distribution. In the parameter range explored in this study, the shift direction remains unchanged even when a nearly circular bound orbit is deformed into a bound orbit with large eccentricity by injecting energy. This suggests that the criterion for prograde versus retrograde periapsis shifts derived from nearly circular orbits remains approximately valid even for bound orbits with large eccentricity. This study has used a static and spherically symmetric black hole spacetime surrounded by a self-gravitating cluster of massive particles to consider the competing general-relativistic and local-density effects on the periapsis shift. However, several future projects (e.g., the Thirty Meter Telescope) are expected to achieve the accuracy needed to measure even the frame-dragging effect of Sgr A$^\ast$ from observations of S-stars. Therefore, it is an interesting future issue to clarify the competition between the spin effects and the local-density effects for phenomena such as periapsis shifts near a rotating black hole. While this study has focused on the effect of the local matter density distribution on the dynamics of stars, the effect of the matter distribution on light rays and photon orbits is another interesting issue. We will report in a separate paper on the effect of matter distribution on phenomena such as gravitational lensing, photon spheres, and gravitational redshifts in a black hole spacetime with static clouds.
{ "timestamp": "2022-02-02T02:10:00", "yymm": "2202", "arxiv_id": "2202.00202", "language": "en", "url": "https://arxiv.org/abs/2202.00202" }
\section{Introduction} For some complex manifolds it is known that the ``size'' of their fundamental group influences their geometric properties. In complex dimension one this is clear from the uniformization theorem: a closed Riemann surface is hyperbolic if and only if it has an infinite and non-abelian fundamental group. Things become more complicated in higher dimensions. Hypersurfaces of high degree in projective spaces provide examples of Kobayashi hyperbolic complex manifolds with trivial $\pi_1$. The product of a hyperbolic compact Riemann surface with $\mathbb{P}^n$ is not hyperbolic in any sense, albeit with an infinite and non-abelian $\pi_1$. In his study of the Shafarevich conjecture, Koll\'{a}r introduced the notion of {\it large fundamental group} (cf. \cite{Kol93}), which turns out to be a suitable substitute for the infinitude of $\pi_1$ in higher dimensions. A complex projective variety $X$ is said to have large fundamental group if for every positive dimensional subvariety $Y \subset X$, the image $\mathrm{Im}[\pi_1(Y) \to \pi_1(X)]$ is infinite. There is also considerable interest in varieties with linear $\pi_1$ or with linear representations of $\pi_1$, since in this case tools from non-abelian Hodge theory can be applied. In a similar manner, we have \begin{defi} Let $G$ be a linear algebraic group defined over $\bar{\mathbb{Q}}$. A Zariski dense representation $\rho:\, \pi_1(X) \to G(\mathbb{C})$ is called {\rm big} if for a sufficiently general point $x \in X$ and every positive dimensional subvariety $Y \subset X$ containing $x$, the image $\rho(\mathrm{Im}[\pi_1(Y) \to \pi_1(X)])$ is infinite. \end{defi} Note that if we take $G$ to be {\it semi-simple}, then the bigness of $\rho$ implies that $\pi_1(X)$ is infinite and non-abelian. Then, similarly to the complex dimension one case, one can expect {\it hyperbolicity} results for such varieties. In \cite{Zuo96} Zuo proved \begin{thm} Suppose that $X$ is a smooth projective variety over $\mathbb{C}$ and $\rho:\,\pi_1(X) \to G(\mathbb{C})$ is a Zariski-dense representation into a semi-simple algebraic group. If $\rho$ is a big representation, then $X$ is Chern hyperbolic, i.e. there exists a proper subvariety $Z \subset X$ such that for any algebraic curve $C$ of genus $g(C) \leq 1$, the image of every non-constant morphism $f:\,C \to X$ is contained in $Z$. \end{thm} Zuo stated the above theorem for an almost simple algebraic group $G$ (cf. {\cite[Theorem~2]{Zuo96}}). His argument generalizes easily to the semi-simple case, since one can replace $X$ by a finite \'{e}tale cover so that $G$ can be assumed to be a direct product of almost simple algebraic groups. Yamanoi revisited varieties with a Zariski-dense linear representation in \cite{Yam10}, with interest in the {\it distribution of entire curves} (i.e., non-constant holomorphic maps from $\mathbb{C}$ to these varieties). Motivated by Campana's abelianity conjecture (cf. {\cite[Conjecture~9.8]{Cam04}}), in {\cite[Proposition~2.1]{Yam10}} Yamanoi proved that for $X$ admitting a Zariski-dense representation of $\pi_1(X)$ into an almost simple algebraic group, every entire curve $\gamma:\,\mathbb{C} \to X$ is degenerate (i.e., its image is not Zariski-dense in $X$). A key ingredient of Yamanoi's proof is the combination of value distribution theory with constructions from non-abelian Hodge theory (i.e., harmonic maps into Bruhat-Tits buildings and spectral coverings; cf. \S\ref{sec_4.1} and {\cite[\S3]{Yam10}}).
Inspired by Yamanoi's work, we obtain the following algebraicity theorem in this paper: \begin{namedthm*}{Main Theorem}\label{main-thm} Suppose that $X$ is a smooth projective variety over $\mathbb{C}$ and $\rho:\,\pi_1(X) \to G(\mathbb{C})$ is a Zariski-dense representation into a semi-simple algebraic group. If $\rho$ is big, then $X$ is {\rm pseudo Borel hyperbolic}. That is, there exists a proper subvariety $Z \subsetneqq X$ such that for any algebraic curve $C$, any holomorphic map $\gamma:\, C \to X$ with $\gamma(C) \not\subset Z$ is induced from an algebraic morphism. \end{namedthm*} \begin{rmk} Note that if $X$ admits such a big representation $\rho$, then for any birational model $\hat{X} \to X$, the pull-back of $\rho$ to $\pi_1(\hat{X})$ is again a big representation. Since a blow-up introduces rational curves, which must be contained in the exceptional set, the above result cannot be strengthened to $Z = \varnothing$. \end{rmk} The reader is referred to \cite{JK20} for generalities on the notion of Borel hyperbolicity. The above algebraicity result implies pseudo Brody hyperbolicity: \begin{cor}\label{Cor-1.4} Let $X$ and $\rho:\,\pi_1(X) \to G(\mathbb{C})$ be the same as in {\rm {\bf Main Theorem}}. If $\rho$ is big, then $X$ is {pseudo Brody hyperbolic}. That is, there exists a proper subvariety $Z \subsetneqq X$ such that the image of every entire curve $\gamma:\,\mathbb{C} \to X$ is contained in $Z$. \end{cor} \begin{proof} Suppose that there exists an entire curve $\gamma:\,\mathbb{C} \to X$ with $\gamma(\mathbb{C}) \not\subset Z$. Note that one can always replace $\gamma$ by the following transcendental holomorphic map \[ \gamma_{\mathrm{new}}:\, \mathbb{C} \xrightarrow{\mathrm{exp}} \mathbb{C} \xrightarrow{\gamma} X, \] and we still have $\gamma_{\mathrm{new}}(\mathbb{C}) \not\subset Z$. On the other hand, by the Main Theorem we know that $\gamma_{\mathrm{new}}$ is induced from a non-constant morphism $\mathbb{P}^1 \to X$. This gives a contradiction, since $\gamma_{\mathrm{new}}$ is $2\pi i$-periodic, while a non-constant morphism takes each value only finitely often. \end{proof} We are now interested in studying Lang's conjecture for varieties with linear fundamental groups. Recall that for a projective variety $X$ the {\it special set} of $X$, $\mathrm{Sp}(X)$, is defined as the Zariski closure of the union of the images of all non-constant rational maps from abelian varieties to $X$ (cf. \cite{Lang} or {\cite[\S2.4]{Yam_book}}). Lang's conjecture then predicts that $X$ is of general type if and only if $X \neq \mathrm{Sp}(X)$. In {\cite[Theorem~2.17]{Yam_book}}, Yamanoi considered a normal projective variety $X = S \slash \Gamma$, where $S$ is a Stein space and $\Gamma \subset \mathrm{GL}_n(\mathbb{C})$ is a {\it discrete} linear group acting freely on $S$. Motivated by the hyperbolic version of Lang's conjecture, he proved that such an $X$ is Kobayashi hyperbolic if and only if $\mathrm{Sp}(X) = \varnothing$. Inspired by Lang's conjecture and Yamanoi's theorem, we show the following \begin{thm}\label{Yam-question} Let $X$ be a complex projective variety with a big reductive representation $\rho:\, \pi_1(X) \to \mathrm{GL}_n(\mathbb{C})$. If the special set of $X$ is a proper subset of $X$, then $X$ is pseudo Brody hyperbolic. \end{thm} The strategy of the proof is to treat the abelian part and the semi-simple part of this reductive representation separately. We use Yamanoi's theorem \cite{Yam15} to deal with the abelian part, while the semi-simple part follows from the Main Theorem of this paper.
Details will be given in Section~\ref{proof-Yam-question}.\\ \noindent{\bf Acknowledgment.} The author is very grateful to Katsutoshi Yamanoi for his generosity in sharing ideas: Theorem~\ref{Yam-question} was previously a question raised by him, and the proof given in Section~\ref{proof-Yam-question} relies on his important observation. The author would also like to thank Steven Lu and Kang Zuo for many useful and inspiring discussions. \section{Dichotomy of representations and the strategy of the proof of Main Theorem} We first investigate the moduli space of representations. Recall that $\mathrm{Hom}(\pi_1(X),G)$ is an affine variety. Then the {\it Betti moduli space} is defined as the categorical quotient \[ M_{\mathrm{B}}(X,G) := \mathrm{Hom}(\pi_1(X),G) \mathbin{/\mkern-6mu/} G \] which is a quasi-projective scheme over $\bar{\mathbb{Q}}$ (cf. \cite{Sim92}). A representation $\rho:\, \pi_1(X) \to G(\mathbb{C})$ is said to be {\it rigid} if $[\rho] \in M_{\mathrm{B}}(X,G)(\mathbb{C})$ is an isolated point. The rigidity of $\rho$ then gives, after conjugation, a factorization $\pi_1(X) \to G(K) \subset G(\mathbb{C})$, where $K$ is some number field (cf. {\cite[p.~90, Proposition~6.6]{Rag72}}). Let $\mathfrak{p}$ be a prime ideal of the ring of integers of $K$ and $K_{\mathfrak{p}}$ the $\mathfrak{p}$-adic completion of $K$. We say $\rho:\,\pi_1(X) \to G(K)$ is {\it $\mathfrak{p}$-bounded} if $\mathrm{Im}[\pi_1(X) \xrightarrow{\rho} G(K) \hookrightarrow G(K_{\mathfrak{p}})]$ is contained in some maximal compact subgroup (see for instance {\cite[\S6]{Zimmer}}). Following the strategy in \cite{Zuo96}, we consider the dichotomy of representations: \begin{description} \item[\myuline{Type A}] \begin{tabular}{l} $\rho$ is rigid and the factorization $\pi_1(X) \to G(K)$ is $\mathfrak{p}$-bounded\\ for each $\mathfrak{p} \in \mathrm{Spec}\, \mathcal{O}_K$;\\[.2cm] \end{tabular} \item[\myuline{Type B}] \begin{tabular}{l} the remaining cases, that is, either $\rho:\,\pi_1(X) \to G(\mathbb{C})$ is non-rigid,\\ or $\rho:\, \pi_1(X) \to G(K)$ is $\mathfrak{p}$-unbounded for some $\mathfrak{p} \in \mathrm{Spec}\, \mathcal{O}_K$. \end{tabular} \end{description} We will see in Section~\ref{Type_A} that a big representation $\rho$ of Type A induces a {variation of Hodge structures} over $X$ with {\it generically injective} Higgs map. In Section~\ref{Type_B} a pluriharmonic map from $\tilde{X}$, the universal covering of $X$, into some Bruhat-Tits building will be constructed from a Type B representation $\rho$. One can consider the {spectral covering} $X^s$ of $X$ associated to this pluriharmonic map, and the bigness of $\rho$ guarantees that $X^s$ has {\it maximal Albanese dimension}. We will prove the Main Theorem in each of these two cases. \section{Type A representations and variations of Hodge structures}\label{Type_A} In this section we prove the Main Theorem in the case that $\rho$ is a Type A representation. \subsection{Generalities about Type A representations} We first recall a lemma about algebraic groups ({\cite[p.~120-121]{Zimmer}}). \begin{lem} Suppose $H \subset G(K)$ is a subgroup which is $\mathfrak{p}$-bounded for every $\mathfrak{p} \in \mathrm{Spec}\, \mathcal{O}_K$. Then we have \[ [H : H \cap G(\mathcal{O}_K)] < +\infty. \] \end{lem} In our case, this means that $\rho(\pi_1(X)) \cap G(\mathcal{O}_K)$ is a finite-index subgroup of $\rho(\pi_1(X))$ for a Type A representation $\rho$. Therefore, after replacing $X$ by some finite \'{e}tale covering, we can further assume that \[ \rho:\, \pi_1(X) \to G(\mathcal{O}_K).
\] Next we shall construct a {\it real semi-simple discrete} representation from $\rho$. We consider the restriction of scalars. Let $\sigma_i:\, K \hookrightarrow \mathbb{C}$, $i=1,2,\dots,d$, be the distinct embeddings of $K$. Define \[ \mathrm{R}_{K/\mathbb{Q}}(G) := \prod^d_{i=1} G^{\sigma_i}, \] which is an algebraic $\mathbb{Q}$-group with the diagonal embedding \[ \alpha:\, G(K) \to \mathrm{R}_{K/\mathbb{Q}}(G)(\mathbb{Q}). \] Now we consider the composition map \[ \pi_1(X) \xrightarrow{\rho} G(\mathcal{O}_K) \xrightarrow{\alpha} \mathrm{R}_{K/\mathbb{Q}}(G)(\mathbb{Z}). \] Following Zuo's argument one can find a noncompact factor $G_0$ of the Zariski closure of the image of $\alpha \circ \rho$ in $\mathrm{R}_{K/\mathbb{Q}}(G)$, such that the induced map \[ \rho_0:\, \pi_1(X) \to G_0(\mathbb{Z}) \] is a discrete big representation of $\pi_1(X)$ into $G_0(\mathbb{R})$, a semi-simple real Lie group of noncompact type. From the Simpson correspondence \cite{Sim92}, we know that the rigid representation $\rho$, as well as the induced representation $\rho_0$, carries additional structure. First note that the original representation \[ \rho:\, \pi_1(X) \to G \subset \mathrm{GL}_n(\mathbb{C}) \] gives us a Higgs bundle $(E,\theta)$ together with a harmonic metric $u$. The rigidity of $\rho$ implies that $(E,\theta)$ is a fixed point of the $\mathbb{C}^*$-action on the moduli space of Higgs bundles. Then by Simpson's ubiquity theorem, we know that $(E,\theta)$ is a Hodge bundle associated to a $\mathbb{C}$-VHS, i.e. there exists a bigrading $E = \bigoplus_{p+q =k} E^{p,q}$ with \[ \theta|_{E^{p,q}}:\, E^{p,q} \to E^{p-1,q+1} \otimes \Omega^1_X. \] For each embedding $\sigma_i:\, K \hookrightarrow \mathbb{C}$, we know that \[ \rho^{\sigma_i}:\, \pi_1(X) \to G(K) \xrightarrow{\sigma_i} G(\mathbb{C}) \] is still rigid, and thus carries a structure of $\mathbb{C}$-VHS by Simpson's theorem. This means that the induced representation \[ \rho_0:\, \pi_1(X) \to G_0(\mathbb{Z}) \subset \mathrm{R}_{K/\mathbb{Q}}(G)(\mathbb{Z}) \] is equipped with a structure of $\mathbb{Z}$-VHS. Denote by $(E_0,\theta_0,u_0)$ the Hodge bundle associated to $\rho_0$ together with its harmonic metric. By using the theory of harmonic maps and Mok's factorization theorem {\cite[Main Theorem]{Mok92}}, Zuo proved the following \begin{thm}[{\cite[\S3]{Zuo96}}]\label{VHS} Suppose $\rho$ is a big representation of Type A. Then the induced representation $\rho_0$ is also big and the Higgs map \[ \theta_0:\, T_X \to \mathrm{End}(E_0) \] is generically injective. \end{thm} We shall use this construction to prove the pseudo-hyperbolicity of $X$. \subsection{Griffiths line bundle and the big Picard theorem} Recall that we have a Hodge bundle \[ \left( E_0 = \bigoplus_{p+q=k}E^{p,q}_0, \theta_0 = \bigoplus_{p+q=k}\theta^{p,q}_0 \right) \] coming from a $\mathbb{Z}$-VHS together with a harmonic metric $u_0$ (the Hodge metric). We consider the {\it Griffiths line bundle} on $X$ \[ \mathrm{K}(E_0) := \left(\mathrm{det}\,E^{k,0}_0\right)^{\otimes k} \otimes \left(\mathrm{det}\,E^{k-1,1}_0\right)^{\otimes (k-1)} \otimes \dots \otimes \left(\mathrm{det}\,E^{1,k-1}_0\right). \] Then the curvature form of $\mathrm{K}(E_0)$ with respect to the induced Hodge metric can be written as \[ \Theta(\mathrm{K}(E_0)) = k\,\mathrm{Tr}\, \Theta(E^{k,0}_0) + (k-1)\,\mathrm{Tr}\, \Theta(E^{k-1,1}_0) + \dots + \mathrm{Tr}\, \Theta(E^{1,k-1}_0).
\] \begin{prop}[Griffiths, {\cite[Proposition~(7.15)]{Gri70}}] For a tangent vector $\eta \in T_X$, one has \[ \Theta(\mathrm{K}(E_0)) (\eta \wedge \bar{\eta}) \geq 0 \] with equality if and only if $\theta_{0,\eta} =0$. \end{prop} In our situation, it is easy to show \begin{lem}\label{lem_Gbig} The Griffiths line bundle $\mathrm{K}(E_0)$ is big and nef. \end{lem} \begin{proof} Note that $\mathrm{K}(E_0)$ is nef since $\Theta(\mathrm{K}(E_0))$ is semi-positive. To prove the bigness, one only needs to check that \[ \int_X \Theta(\mathrm{K}(E_0))^{\mathrm{dim}\,X} >0. \] This follows from the fact that $\theta_0$ is generically injective and thus $\theta_{0,\eta} \neq 0$ for any nonzero tangent vector $\eta$ at a general point of $X$. \end{proof} Now we can use the {\it Second Main Theorem} of Brotbek-Brunebarbe (cf. {\cite[Theorem~1.1]{BB20}}) to obtain the following \begin{thm} Let $(E_0,\theta_0)$ and $\mathrm{K}(E_0)$ be the same as above. Let $\gamma$ be a holomorphic map from an algebraic curve $C$ to $X$. For any ample line bundle $A$ on $X$, there exists a constant $\epsilon>0$ such that \begin{align} \label{SMT-BB} T(r,\gamma, \mathrm{K}(E_0)) \leq \epsilon \cdot \left(\mathrm{log}\, r + \mathrm{log}\, T(r,\gamma,A) \right),\,\,||. \end{align} \end{thm} \begin{rmk} Readers are referred to {\cite[\S2.4]{BB20}} for the notations of value distribution theory. Note that in the general form of Brotbek-Brunebarbe's Second Main Theorem, the source space of $\gamma$ is assumed to be a parabolic Riemann surface and its weighted Euler characteristic (cf. {\cite[Definition~1.2]{PS14}}) appears on the right hand side of \eqref{SMT-BB}. In our setting, since $C$ is an algebraic curve, we know that the weighted Euler characteristic of $C$ has logarithmic growth (see p.4, (2) of \cite{PS14}). \end{rmk} \begin{proof}[Proof of Main Theorem] From Lemma~\ref{lem_Gbig} we know that the Griffiths line bundle $\mathrm{K}(E_0)$ is big. Then by Kodaira's lemma one can find a positive integer $m$ such that there exists a nonzero section $s$ of $\mathrm{K}(E_0)^{\otimes m} \otimes A^{-1}$. Now take $Z$ to be the zero locus of $s$. For any holomorphic map $\gamma:\,C \to X$ with $\gamma(C) \not\subset Z$, we apply \eqref{SMT-BB} and obtain \[ \frac{1}{m} \cdot T(r,\gamma,A) \leq T(r,\gamma, \mathrm{K}(E_0)) \leq \epsilon \cdot \left(\mathrm{log}\, r + \mathrm{log}\, T(r,\gamma,A) \right),\,\,||. \] This means that $T(r,\gamma,A) = O(\mathrm{log}\, r)$, which implies the algebraicity of $\gamma$ (see {\it e.g.} \cite[2.11.~cas local]{Dem97b} or \cite[Remark~4.7.4.(ii)]{NW}). \end{proof} \section{Type B representations and harmonic maps into Bruhat-Tits buildings}\label{Type_B} In this section we deal with Type B representations. \subsection{Harmonic maps into buildings and spectral coverings}\label{sec_4.1} As we mentioned in the introduction, after replacing $X$ by some finite \'{e}tale cover, we can assume that $G \cong G_1 \times \cdots \times G_k$, where the $G_i$ are almost simple algebraic groups. We will consider the induced representations $\pi_1(X) \to G \twoheadrightarrow G_i$. We first consider the $\mathfrak{p}$-unbounded representations. By the theory of harmonic maps into Bruhat-Tits buildings due to Gromov and Schoen \cite{GS92}, there exists a non-constant $\rho$-equivariant pluriharmonic map \[ u_i:\, \tilde{X} \to \Delta(G_i(K_{\mathfrak{p}})) \] from the universal covering of $X$ to the Bruhat-Tits building of $G_i(K_{\mathfrak{p}})$ for $i=1,2,\dots,k$.
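To fix ideas about the unboundedness hypothesis, here is a toy illustration (with the semi-simple $G$ replaced by the torus $\mathrm{GL}_1$, so it does not satisfy our standing assumptions; it serves only to illustrate the notion of $\mathfrak{p}$-boundedness from Section 2): take $K = \mathbb{Q}$ and $\mathfrak{p} = (p)$, so that $K_{\mathfrak{p}} = \mathbb{Q}_p$. The representation
\[
\mathbb{Z} \longrightarrow \mathrm{GL}_1(\mathbb{Q}_p), \qquad n \longmapsto p^{-n},
\]
is $\mathfrak{p}$-unbounded, since $|p^{-n}|_p = p^{n} \to \infty$ and hence the image is contained in no maximal compact subgroup of $\mathbb{Q}_p^{\times}$ (the maximal compact subgroup being $\mathbb{Z}_p^{\times}$); by contrast, $n \mapsto u^{n}$ with $u \in \mathbb{Z}_p^{\times}$ has bounded image.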
Next we consider the case that $\rho:\, \pi_1(X) \to G(\mathbb{C})$ is {\it non-rigid}. In this case we know that $M_{\mathrm{B}}(X,G)$ has a positive dimensional component and thus one can find an affine curve contained in $M_{\mathrm{B}}(X,G)(\bar{\mathbb{Q}})$. Denote by $T$ the modulo-$p$ reduction of this affine curve, defined over a finite field $k$. Choose a compactification $\bar{T}$ and a smooth point $\infty \in \bar{T}\setminus T$. Then we can define the $\infty$-adic valuation $\nu_{\infty}(\bullet)$ on the function field $k(T)$ of $T$, where for any function $f \in k(T)$ the valuation $\nu_{\infty}(f)$ is the vanishing order of $f$ at $\infty$.\\ Now the deformation of representations along $T$ induces the following representation \[ \rho_{T,i}:\, \pi_1(X) \to G \left( k(T)_{\infty}\right) \twoheadrightarrow G_i \left( k(T)_{\infty}\right) \] where $k(T)_{\infty}$ is the completion of $k(T)$ under the $\infty$-valuation $\nu_{\infty}(\bullet)$. Note that $\rho_{T,i}$ is an unbounded representation with respect to the non-archimedean norm induced by the valuation $\nu_{\infty}(\bullet)$. Then again by the theorem of Gromov-Schoen, we can construct a $\rho_{T,i}$-equivariant non-constant pluriharmonic map \[ u_i:\, \tilde{X} \to \Delta(G_i(k(T)_{\infty})) \] from the universal covering to the Bruhat-Tits building of $G_i(k(T)_{\infty})$ for $i=1,2,\dots,k$.\\ Thus in both aforementioned cases of Type B representations, we obtain an equivariant pluriharmonic map $u$ from $\tilde{X}$ to a product of Bruhat-Tits buildings ($\prod^k_{i=1}\Delta(G_i(K_{\mathfrak{p}}))$ or $\prod^k_{i=1}\Delta(G_i(k(T)_{\infty}))$). From the complexified differential $\partial u$ one can extract a multi-valued holomorphic one-form $\omega$ on $X$. Following {\cite[\S1, p.146-147]{Zuo96}} we consider a finite ramified Galois covering $\pi:\, X^s \to X$, the {\it spectral covering}, such that $\pi^*\omega$ splits into $l$ single-valued holomorphic one-forms $\omega_1, \dots, \omega_l \in H^0(X^s,\pi^*\Omega^1_X)$. Here $l$ is the number of roots of the root system of the algebraic group $G$. Note that the spectral covering $\pi$ is unramified outside the union of zero loci $\bigcup_{i \neq j}(\omega_i - \omega_j)_0$.\\[.1cm] Now we consider the Albanese map $a:\, X^s \to \mathrm{Alb}(X^s)$. Note that all the $\omega_i$'s are pulled back from the Albanese variety. Thus one can find holomorphic one-forms $\tilde{\omega}_i$ on $\mathrm{Alb}(X^s)$ such that $\omega_i = a^*\tilde{\omega}_i$ for $i =1,\dots,l$. Let $B \subset \mathrm{Alb}(X^s)$ be the maximal abelian subvariety such that all $\tilde{\omega}_i$ vanish on it. We set $A:= \mathrm{Alb}(X^s)/B$ and consider the induced morphism \[ \Phi:\, X^s \xrightarrow{a} \mathrm{Alb}(X^s) \twoheadrightarrow A. \] \begin{prop} If $\rho:\, \pi_1(X) \to G$ is a {\rm big} representation of Type B, then \begin{itemize} \item[1)] $\Phi:\, X^s \to A$ is generically finite onto its image. \item[2)] $X^s$ is of general type. \end{itemize} \end{prop} \begin{proof} See \S1 of \cite{Zuo96}. \end{proof} \subsection{Yamanoi's Second Main Theorem and algebraicity} Now we prove the Main Theorem in the case that $\rho$ is a Type B representation. First notice that the general case can be easily reduced to the following situation: $C$ is a smooth quasi-projective curve which is a finite ramified covering of $\mathbb{C}$. Denote by $p_C:\, C \to \mathbb{C}$ the covering map.
We suppose that $\gamma:\, C \to X$ is a holomorphic map such that the image curve is not contained in the branched locus of $\pi:\, X^s \to X$. Now we take $Y$ to be the normalization of the fiber product $C \times_X X^s$. Note that a priori $Y$ is only a Riemann surface. Denote by $\tilde{\gamma}:\, Y \to X^s$ the induced holomorphic map. Then we have the following diagram \[ \xymatrix{ Y \ar[r]^{\tilde{\gamma}} \ar[d]^{\pi_Y} \ar@/_1pc/[dd]_{p_Y} & X^s \ar[r]^{\Phi} \ar[d]^{\pi} & A \\ C \ar[r]^{\gamma} \ar[d]^{p_C} & X & \\ \mathbb{C} & & } \] where the composed map $p_Y:\,Y \to \mathbb{C}$ is a finite surjective holomorphic map (thus $Y$ is a parabolic Riemann surface). Next we employ tools from Nevanlinna theory (cf. \S3 of \cite{Yam10} for a brief introduction to the notations). For $r>0$, we set $Y(r):= p^{-1}_Y(\mathbb{D}_r)$. Let $\mathrm{Ram}(p_Y)$ be the (analytic) ramification divisor of $p_Y$. We define \[ N_{\mathrm{ram}\,p_Y}(r) := \frac{1}{\mathrm{deg}\,p_Y} \int^r_0 \#\left( \mathrm{Ram}(p_Y) \cap Y(t)\right) \frac{dt}{t}. \] Denote by $R$ the ramification divisor of the spectral covering $\pi$, and by $R_C$ the ramification divisor of $p_C:\, C \to \mathbb{C}$. Then according to the pull-back divisors $\tilde{\gamma}^*R$ and $\pi^*_Y R_C$ we have the following partition of $\mathrm{Ram}(p_Y)$: \[ \mathrm{Ram}(p_Y) = R_1 + R_2 \] where $R_1 = \tilde{\gamma}^*R$ and $R_2 = \mathrm{Ram}(p_Y) - R_1 $. Since $\pi_Y$ is \'{e}tale outside $R_1$, we know that the ramification over $R_2$ comes from the ramification divisor of $p_C$ and therefore $R_2 \leq \pi^*_Y R_C$. Thus we have \[ N_{\mathrm{ram}\,p_Y}(r) = N(r,R_1) + N(r,R_2) \] where $N(r,R_i) := \frac{1}{\mathrm{deg}(p_Y)} \int^r_0 \#\left( R_i \cap Y(t)\right) \frac{dt}{t}$. Since $p_C:\, C \to \mathbb{C}$ is algebraic, we have \[ N(r,R_2) \leq N(r,\pi^*_Y R_C) := \frac{1}{\mathrm{deg}(p_Y)} \int^r_0 \#\left( \pi^*_YR_C \cap Y(t)\right) \frac{dt}{t} = \frac{\mathrm{deg}(\pi_Y)}{\mathrm{deg}(p_Y)} \int^r_0 \#\left( R_C \cap C(t)\right) \frac{dt}{t} = O(\mathrm{log}\, r). \] Next we want to determine the growth rate of $N(r,\tilde{\gamma},R)$. Let $L$ be an ample line bundle on $X^s$. Yamanoi proved the following \begin{lem} For any $\varepsilon >0$, we have \[ N(r,\tilde{\gamma},R) \leq \varepsilon \cdot T(r,\tilde{\gamma},L),\quad ||. \] \end{lem} \begin{proof} It is known from \cite{Zuo96} that $\mathrm{Supp}\,R \subset \bigcup_{i \neq j}(\omega_i - \omega_j)_0$. In {\cite[p.557, CLAIM]{Yam10}}, Yamanoi proved that for $i \neq j$, \[ N(r,\tilde{\gamma},(\omega_i - \omega_j)_0) \leq \varepsilon \cdot T(r,\tilde{\gamma},L), \quad ||. \] Since $\mathrm{Supp}\,R$ is contained in this union of zero loci, the claim follows. \end{proof} Thus we have proved \begin{align}\label{N_ram} N_{\mathrm{ram}\,p_Y}(r) \leq \varepsilon \cdot T(r,\tilde{\gamma},L) + O(\mathrm{log}\, r),\quad ||. \end{align} Now we can apply Yamanoi's Second Main Theorem {\cite[Theorem~1]{Yam15}} for varieties with maximal Albanese dimension and obtain the following \begin{thm}\label{SMT} Let $L$ be an ample line bundle on $X^s$ and let $\varepsilon$ be a positive constant. Then there exist a proper Zariski closed subset $\Sigma \subsetneqq X^s$ and a positive constant $\alpha$ satisfying the following property: for any holomorphic map $\tilde{\gamma}:\,Y \to X^s$ from any parabolic Riemann surface $p_Y:\, Y \to \mathbb{C}$ such that the image of $\tilde{\gamma}$ is not contained in $\Sigma$, we have \[ T(r,\tilde{\gamma},K_{X^s}) \leq \alpha \cdot N_{\mathrm{ram}\,p_Y}(r) + \varepsilon \cdot T(r,\tilde{\gamma},L), \quad ||.
\] \end{thm} \begin{proof}[Proof of Main Theorem] Let $Z$ be the union of $\pi(R)$ and $\pi(\Sigma)$, which is a proper subvariety of $X$. Let $\gamma:\, C \to X$ be a holomorphic map with $\gamma(C) \not\subset Z$. Denote by $\tilde{\gamma}:\, Y \to X^s$ the induced holomorphic map as above. Recall that $X^s$ is of general type. This means that we can find some positive integer $m$ such that $L \hookrightarrow K^{\otimes m}_{X^s}$. By Theorem~\ref{SMT}, we can find some constant $c>0$ such that \[ T(r,\tilde{\gamma},L) \leq c \cdot N_{\mathrm{ram}\,p_Y}(r), \quad ||. \] Combining this with the previous estimate \eqref{N_ram} of $N_{\mathrm{ram}\,p_Y}(r)$, we obtain \[ T(r,\tilde{\gamma},L) = O(\mathrm{log}\, r). \] Note that we can choose the line bundle $L$ on $X^s$ to be sufficiently positive such that there is a nonzero map $\pi^*H \to L$, where $H$ is an ample line bundle on $X$. Then we have \begin{align}\label{ineq_H} T(r,\gamma \circ \pi_Y, H) = T(r, \tilde{\gamma}, \pi^*H) \leq T(r,\tilde{\gamma},L) = O(\mathrm{log}\, r), \end{align} where the first equality comes from the commutative diagram \[ \xymatrix{ Y \ar[d]^{\pi_Y} \ar[r]^{\tilde{\gamma}} & X^s \ar[d]^{\pi} \\ C \ar[r]^{\gamma} & X } \] By the definition of the Nevanlinna order function, we have \begin{displaymath} \begin{array}{ll} T(r,\gamma \circ \pi_Y, H) & = \int^r_0\frac{dt}{t} \int_{\pi^{-1}_Y p^{-1}_C\mathbb{D}(t)} \pi^*_Y\gamma^*c_1(H) \\ & = \int^r_0\frac{dt}{t}\, \mathrm{deg}(\pi_Y) \int_{p^{-1}_C \mathbb{D}(t)} \gamma^*c_1(H) \\ & = \mathrm{deg}(\pi_Y) \cdot T(r,\gamma,H). \end{array} \end{displaymath} Combining this with \eqref{ineq_H}, we know that $T(r,\gamma,H) = O(\mathrm{log}\, r)$. Therefore $\gamma:\,C \to X$ is algebraic. \end{proof} \section{Proof of Theorem~\ref{Yam-question}}\label{proof-Yam-question} Denote by $H$ the Zariski closure of the image of the reductive representation $\rho:\,\pi_1(X) \to \mathrm{GL}_n(\mathbb{C})$. After replacing $X$ by some finite \'{e}tale covering of it, we can assume that $H \cong T \times G_1 \times \cdots \times G_k$, where $T$ is an algebraic torus and the $G_i$ are almost simple groups. We first show Theorem~\ref{Yam-question} in two extreme cases: the abelian case $H = T$ and the semi-simple case $H = G:= G_1 \times \cdots \times G_k$. \subsection*{Abelian case $H=T$} We can consider the Stein factorization $X \to Y \to A$ of the Albanese map $X \to A$ induced by the {\it abelian representation} $\rho:\,\pi_1(X) \to T$. Then the bigness of $\rho$ implies that $X \to Y$ is birational. So the assumption on the special set of $X$ also holds for the special set of $Y$. Then we know that $Y$ is a finite covering of an abelian variety with $\mathrm{Sp}(Y) \subsetneqq Y$. By the Kawamata-Ueno fibration theorem, we know that $Y$ is of general type. Then by {\cite[Corollary 1, (1)]{Yam15}} we know that $Y$ (and thus $X$) is pseudo-Brody hyperbolic. \subsection*{Semi-simple case $H=G$} This case follows from Corollary~\ref{Cor-1.4}.\\ Now we start to prove Theorem~\ref{Yam-question} in the general case (i.e. both the abelian part $T$ and the semi-simple part $G$ are non-trivial). We will consider the induced representations \[ \rho_T: \pi_1(X) \to H \cong T \times G \twoheadrightarrow T \] and \[ \rho_G: \pi_1(X) \to H \cong T \times G \twoheadrightarrow G. \] By the result of Koll\'{a}r (cf.
{\cite[\S3]{Kol93}} or {\cite[\S5.1]{Zuo_book}}), we know that for the representation $\rho$ we have the Shafarevich map \[ sh_{\rho}:\, X \dashrightarrow \mathrm{Sh}_{\rho}(X) \] which is a rational map with connected fibers from $X$ to a normal algebraic variety $\mathrm{Sh}_{\rho}(X)$ (see also \cite{Cam94} for the K\"{a}hler case). Note that the bigness of $\rho$ implies that $sh_{\rho}$ is {\it birational}. For $\rho_T$ and $\rho_G$, we have the corresponding Shafarevich maps: \[ sh_{\rho_T}:\, X \dashrightarrow \mathrm{Sh}_{\rho_T}(X), \quad sh_{\rho_G}:\, X \dashrightarrow \mathrm{Sh}_{\rho_G}(X). \] \begin{lem} The product Shafarevich map \[ g:=(sh_{\rho_T},sh_{\rho_G}):\, X \dashrightarrow \mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X) \] is birational onto its image in $\mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X)$. \end{lem} \begin{proof} Denote by $W$ the Zariski closure of $g(X)$ in $\mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X)$. Note that Shafarevich maps have connected fibers. Thus we only need to show that the general fiber of $g:\, X \dashrightarrow W$ is of zero dimension. Let $F:= g^{-1}(w)$ be a general fiber of $g:\, X \dashrightarrow W$. Since $sh_{\rho_T}(F) = \mathrm{point}$ (resp. $sh_{\rho_G}(F) = \mathrm{point}$), by the property of Shafarevich maps we know that $\mathrm{Im}[\pi_1(F) \to \pi_1(X) \xrightarrow{\rho_T} T]$ (resp. $\mathrm{Im}[\pi_1(F) \to \pi_1(X) \xrightarrow{\rho_G} G]$) is finite. That means $\mathrm{Im}[\pi_1(F) \to \pi_1(X) \xrightarrow{\rho} H]$ is finite. Since $\rho:\, \pi_1(X) \to H \subset \mathrm{GL}_n(\mathbb{C})$ is a big representation, we know that $F$ is of zero dimension. \end{proof} Moreover, since $\rho_T$ is an abelian representation, we know that $\mathrm{Sh}_{\rho_T}(X)$ is an {\it abelian variety} (in fact a quotient of the Albanese variety of $X$). Now we apply the {\it Factorization theorem} to the semi-simple representation $\rho_G$ (cf. \cite{Mok92} for Type A representations and {\cite[\S4]{Zuo_book}} for Type B representations). Then after replacing $X$ by some finite \'{e}tale covering and blowing it up if necessary, there exists a projective variety $Y$ and a surjective morphism $f:\, X \to Y$ such that $\rho_G:\, \pi_1(X) \to G$ factors through a {\it big} representation $\rho_Y:\,\pi_1(Y) \to G$. By the semi-simple case ($H=G$), we know that $Y$ is pseudo-Brody hyperbolic. Now we have the following commutative diagram \[ \xymatrix{ X \ar[d]^f \ar@{-->}[r]^-{g} & \mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X) \ar[d] \\ Y \ar@{-->}[r] & \mathrm{Sh}_{\rho_G}(X) } \] where the general fiber of $Y \dashrightarrow \mathrm{Sh}_{\rho_G}(X)$ is of zero dimension. Let $Y_0$ be the Zariski open subset of $Y$ where the rational map $Y \dashrightarrow \mathrm{Sh}_{\rho_G}(X)$ is a well-defined morphism. Let $X_0:= f^{-1}(Y_0)$. Denote by $\mathscr{A}:= Y_0 \times_{\mathrm{Sh}_{\rho_G}(X)} (\mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X))$ the fiber product. Then $g:\,X_0 \dashrightarrow \mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X)$ factors as $X_0 \stackrel{h}{\dashrightarrow} \mathscr{A} \to \mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X)$. Note that $h:\, X_0 \dashrightarrow \mathscr{A}$ is generically finite onto its image in $\mathscr{A}$ since $\mathscr{A} \to \mathrm{Sh}_{\rho_T}(X) \times \mathrm{Sh}_{\rho_G}(X)$ is generically finite and $g$ is birational onto its image.
We have the commutative diagram \[ \xymatrix{ X_0 \ar[d]^f \ar@{-->}[r]^-{h} & \mathscr{A} \ar[d] \\ Y_0 \ar@{=}[r] & Y_0 } \] such that for a general $y \in Y_0$, the restriction of the generically finite map $h:\, X_0 \dashrightarrow \mathscr{A}$ to the fiber $F:= f^{-1}(y)$ coincides with the restriction of the Shafarevich map of $\rho_T$ to $F$: \[ F \hookrightarrow X \stackrel{sh_{\rho_T}}{\dashrightarrow} \mathrm{Sh}_{\rho_T}(X). \] This restriction of the Shafarevich map is induced from the following {\it abelian} representation \[ \pi_1(F) \to \pi_1(X) \xrightarrow{\rho} H \cong T \times G \twoheadrightarrow T. \] Therefore, the restriction $h|_F:\, F \to \mathrm{Sh}_{\rho_T}(X)$ is a well-defined morphism. We can assume that $F$ is not contained in $\mathrm{Sp}(X)$ by our assumption on the special set. Then by the abelian case ($H=T$), we know that $F$ is also pseudo-Brody hyperbolic.\\ Now we consider an entire curve $\gamma:\, \mathbb{C} \to X$ and the composed holomorphic map $\gamma_Y:\, \mathbb{C} \to X \xrightarrow{f} Y$. Since $Y$ is pseudo-Brody hyperbolic, every map $\gamma_Y$ whose image is not contained in a certain proper subvariety of $Y$ is constant. Thus we only need to consider those entire curves $\gamma$ which are contained in a general fiber $F$. Thus we have \[ \mathbb{C} \xrightarrow{\gamma} F \xrightarrow{h|_F} \mathrm{Sh}_{\rho_T}(X) \] where $h|_F$ is generically finite onto its image in $\mathrm{Sh}_{\rho_T}(X)$. Now by {\cite[Corollary 1, (2)]{Yam15}}, we know that every entire curve in $F$ is contained in $E\subset F$, where $E$ is the union of $\mathrm{Sp}(F)$ and the exceptional locus of $h|_F$. Note that the special set of $F$ is contained in the special set of $X$ (which is a proper subset by our assumption), and the exceptional locus of $h|_F$ is contained in the exceptional locus of $h$ (which is a proper subset since $h:\,X_0 \dashrightarrow \mathscr{A}$ is generically finite onto its image). Therefore, there exists a proper subvariety of $X$ such that every entire curve is contained in it, i.e. $X$ is pseudo-Brody hyperbolic.
{ "timestamp": "2022-02-08T02:15:22", "yymm": "2202", "arxiv_id": "2202.00196", "language": "en", "url": "https://arxiv.org/abs/2202.00196" }
\section{Introduction} Federated learning (FL) \cite{mcmahan2017communication} enables collaborative training from datasets residing on distributed clients with the help of a parameter server. In numerous previous works, including \cite{smith2017federated,li2020federated,sattler2019robust,li2019convergence}, the superiority of FL has been validated through numerical results and convergence analysis on independent and identically distributed (IID) and non-IID datasets. While the recent literature related to FL primarily addresses communication efficiency, fairness, robustness, privacy, and personalization, almost all previous works have assumed that the training datasets at the clients are perfectly ready to be used for training. However, the annotation step should not be overlooked or ignored in practical implementations of FL, as in other machine learning (ML) settings, since the cost of labeling is generally high and may even dominate the cost of FL itself. Considering this problem, we study annotation strategies in the FL framework, where the clients participating in FL must label their datasets prior to the FL execution. For the annotations, we apply active learning (AL) \cite{settles2009active} at each client participating in FL. Because labeling all the instances is rarely practical or cost-effective, AL aims to maximize the model's performance based on the fewest samples by selectively sampling and labeling the most informative instances. To validate the proposed method, we establish an FL framework with an annotation step, in which various active learning strategies are compared: 1) conventional FL with random sampling, 2) client-level separated active learning (S-AL), and 3) the proposed federated active learning (F-AL). In F-AL, the clients collaboratively execute the AL to select the instances that are considered informative to FL in a distributed optimization manner. For S-AL and F-AL, state-of-the-art AL algorithms are incorporated. AL certainly outperforms random sampling in centralized learning. However, to the best of the authors' knowledge, there has been no work considering AL in the FL framework and investigating the effect of AL on the performance of FL. This work demonstrates that AL can substantially reduce the cost of labeling for FL, and that this cost saving is particularly attractive in FL environments. Furthermore, we show that the proposed F-AL considerably improves the performance of AL in the FL environment. We summarize our contributions below: \begin{itemize} \item We establish a general FL framework combined with an annotation step. We evaluate three types of methods: conventional FL with random sampling, S-AL, and F-AL. With S-AL, the clients independently apply AL to their datasets. F-AL encourages the clients' collaboration in AL. \item We empirically demonstrate that AL is effective in the FL environment through various experiments with AL algorithms and datasets. The numerical results indicate that the AL methods outperform random sampling in terms of the test accuracy of the global FL models. \item We demonstrate that F-AL outperforms the other methods. We highlight that F-AL magnifies the benefit of AL in the FL environment. \end{itemize} \section{Related Work \& Background} \subsection{Federated Learning} FL can be categorized into cross-device FL and cross-silo FL \cite{kairouz2019advances}.
In both types of FL, data is generated and stored locally, whereas in the setting of datacenter distributed learning the data is centrally managed and distributed to the clients. Cross-device FL supposes that the clients are an enormous number of mobile or IoT devices connected via Wi-Fi or slow connections. Therefore, uplink communication is the main performance bottleneck. Furthermore, it generally encounters fresh training samples never seen before, since most clients participate only once in an entire FL process. On the other hand, cross-silo FL typically supposes a distribution scale of $2$-$100$ clients, which are generally different organizations or geo-distributed data centers such as hospitals or banks. Therefore, it supposes that all clients are available during the whole FL process, and the clients' datasets are repeatedly used for training from round to round. The performance degradation due to the communication bottleneck is not as severe as in the case of cross-device FL. Instead, the performance heavily depends on the size and quality of the training datasets \cite{fenza2021data}. FL can be executed in various ways in terms of how the clients' knowledge is aggregated. The most classic algorithms in FL are federated stochastic gradient descent (FedSGD) and federated averaging (FedAvg) \cite{mcmahan2017communication}, which are based on averaging the clients' parameters. Beyond the vanilla algorithms, FedProx \cite{li2020federated} and FedDF \cite{lin2020ensemble} tackle systems and statistical heterogeneity, FedMA \cite{wang2020federated} and FetchSGD \cite{rothchild2020fetchsgd} alleviate the communication bottleneck, and TERM \cite{li2020tilted} and Ditto \cite{li2021ditto} address fairness and robustness in personalized FL. \subsection{Active Learning} AL selects the most informative instances to be labeled before the other instances and aims to maximize the model's performance based on the fewest samples. It has been demonstrated that AL can considerably reduce the number of labeled samples required and alleviate the heavy cost burden of annotation \cite{settles2009active,ren2021survey}. In fact, it has been proved that an effective AL strategy can theoretically achieve an exponential improvement in labeling efficiency \cite{balcan2009agnostic}. When applied in the area of deep learning (DL), the annotation cost saving is even more attractive, since DL is explicitly limited by the high cost of labeling numerous instances, which is especially severe in professional fields that require expert knowledge \cite{bengio2007greedy,krizhevsky2012imagenet}. The sampling strategies of AL can be categorized into uncertainty-based sampling, representation-based sampling, strategies that leverage characteristics of deep learning such as learning loss (LL) \cite{yoo2019learning}, Monte-Carlo dropout (MC-dropout) \cite{gal2016dropout}, and adversarial active learning \cite{sinha2019variational,kim2021task}, and hybrid sampling that uses these strategies jointly. Uncertainty-based sampling \cite{lewis1994sequential,beluch2018power} queries the instances that are the most uncertain to the model trained on the current training samples. Representation-based sampling \cite{geifman2017deep,sener2017active} measures the representativeness of unlabeled samples and encourages the sampling strategy to select instances from different areas of the distribution.
Since a sampling strategy concerned only with uncertainty may skew the model, owing to the similarity of the sampled instances within a particular region of the distribution, the balance between uncertainty and representativeness is one of the main issues in the performance of AL strategies \cite{sener2017active}. Furthermore, most of the recent work related to AL focuses on AL strategies for DL, leveraging aspects of the ML model such as the estimated training loss \cite{yoo2019learning}, the length of the gradient \cite{freytag2014selecting}, and MC-dropout \cite{gal2016dropout} for uncertainty estimation. Adversarial active learning \cite{sinha2019variational,wang2020dual,zhang2020state,kim2021task} trains a GAN-structured auxiliary network that learns a low-dimensional latent space and discriminates between the labeled and unlabeled samples, in order to select the unlabeled instances that are most different from the labeled instances. Furthermore, \cite{cho2021mcdal} recently proposed Maximum Classifier Discrepancy for Active Learning (MCDAL), which is the first work that leverages classifier discrepancy for sampling in active learning. \section{Problem Definition}\label{sec:problem} This section presents the FL framework in which an annotation step is included before the execution of FL. We first introduce the FL environment comprising a parameter server and clients. The annotation step in the FL framework is then described and formulated in more detail. Furthermore, we present the AL procedure that the clients execute in the annotation step. \subsection{Federated Learning Environment} We consider a cross-silo FL setting comprising a parameter server and $M$ clients. The clients store their own local unlabeled datasets $\mathcal{U}_{m}$, $m=1,\dots,M$. Before the start of FL, each $m$-th client selects instances from $\mathcal{U}_{m}$ and labels them to obtain $\mathcal{D}_{m}=\left\{\bm{x}_{i},\bm{y}_{i}\right\}$, where $\mathcal{L}_{m}=\left\{\bm{x}_{i}\right\}$ is the set of instances selected from $\mathcal{U}_{m}$ and $\bm{y}_{i}$ is the label of $\bm{x}_{i}$. We denote the sampling function as $\mathcal{A}\left(\cdot\right)$ and the selected instances, $\mathcal{L}_{m}$, as \begin{equation}\label{sampling} \mathcal{L}_{m}=\mathcal{A}\left(\mathcal{U}_{m}\right), \end{equation} for $m=1,\dots,M$. Let $\bm{\theta} \in \mathbb{R}^{D}$ denote the global model to be optimized in FL. The local loss $F_{m} \left( \bm{\theta}\right)$ at the $m$-th client is $F_{m} \left( \bm{\theta}\right)= \frac{1}{\left|\mathcal{D}_{m} \right|} \sum_{\bm{u} \in \mathcal{D}_{m}} f \left( \bm{\theta}, \bm{u} \right)$, where $\mathcal{D}_{m}$ is the labeled dataset at the $m$-th client and $f\left(\cdot\right)$ is the loss function determined by the network model. Accordingly, the global loss $F\left(\bm{\theta}\right)$ is defined as $ F\left(\bm{\theta}\right)=\frac{1}{ \left|\bigcup_{m=1}^{M} \mathcal{D}_{m} \right|} \sum_{\bm{u} \in \bigcup_{m=1}^{M} \mathcal{D}_{m}} f \left( \bm{\theta}, \bm{u} \right)$. The goal of FL is to train the optimized parameter $\bm{\theta}^{*}$ minimizing the global loss, namely $\bm{\theta}^{*}=\underset{\bm{\theta}}{\mathrm{argmin}} ~ F\left(\bm{\theta}\right)$. FedSGD \cite{mcmahan2017communication} is applied for the FL updates, where $\bm{\theta}^{*}$ is obtained through iterative stochastic gradient descent (SGD), allowing the parallel computation of gradients at the clients.
The parameter vector $\bm{\theta}_{t}$ at the $t$-th iteration is updated according to $\bm{\theta}_{t+1} = \bm{\theta}_{t} - \eta_{t} \sum _{m=1}^{M} \frac{n_{m}}{n}\bm{g}_{m }\left(\bm{\theta}_{t}\right)$, where $\eta_{t}$ is the learning rate at the $t$-th iteration, $n=\sum_{m=1}^{M} n_{m}$, $n_{m}=\left|\mathcal{D}_{m} \right|$ and $\bm{g}_{m }\left(\bm{\theta}_{t}\right) \in \mathbb{R}^{D}$ is the stochastic gradient of $\bm{\theta}_{t}$ computed at the $m$-th client as $\bm{g}_{m}\left(\bm{\theta}_{t}\right)=\frac{1}{\left|\mathcal{D}_{m}\right|} \sum_{\bm{u} \in \mathcal{D}_{m}} \nabla f\left(\bm{\theta}_{t},\bm{u}\right)$. The update is equivalently given by \begin{equation}\label{average} \bm{\theta}_{t+1} = \sum_{m=1}^{M}\frac{n_{m}}{n} \bm{\theta}^{m}_{t+1}, \end{equation} where \begin{equation}\label{update} \bm{\theta}^{m}_{t+1} = \bm{\theta}_{t} - \eta_{t} \bm{g}_{m }\left(\bm{\theta}_{t}\right). \end{equation} In this work, FedAvg \cite{mcmahan2017communication} is adopted, where each client repeats the local update \eqref{update} multiple times before the averaging step \eqref{average}. The overall FL framework is summarized in Algorithm \ref{alg:FedAvg}. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.5\columnwidth]{overview.png}} \caption{Annotation strategies for federated learning.} \label{overview} \end{center} \vskip -0.2in \end{figure} \begin{algorithm}[tb] \caption{Federated Learning with annotation step} \label{alg:FedAvg} \begin{algorithmic} \STATE {\bfseries Input:} unlabeled datasets, $\left\{\mathcal{U}_{m}\right\}_{m=1}^{M}$ \STATE \phantom{{\bfseries Input:}} initialized model, $\bm{\theta}_{1}$ \STATE \phantom{{\bfseries Input:}} learning rate, $\left\{\eta_{t}\right\}_{t}$ \begin{ALC@g} \STATE {\bfseries Annotation step:} \begin{ALC@g} \FOR{$m=1$ {\bfseries to} $M$} \STATE annotate $\mathcal{L}_{m}=\mathcal{A}\left(\mathcal{U}_{m}\right)$ to obtain $\mathcal{D}_{m}$ \ENDFOR \end{ALC@g} \STATE {\bfseries FL step:} \begin{ALC@g} \FOR{each round $t=1,2,\dots$ } \STATE {\bfseries Client executes:} \begin{ALC@g} \STATE do multiple iterations of \eqref{update} \end{ALC@g} \STATE {\bfseries Server executes:} \begin{ALC@g} \STATE average model parameters as in \eqref{average} \end{ALC@g} \ENDFOR \end{ALC@g} \STATE {\bfseries return} $\texttt{FedAvg}\left(\left\{ \mathcal{D}_{m} \right\}_{m=1}^{M} \;\middle|\; \bm{\theta}_{1} ,\left\{\eta_{t}\right\}_{t} \right)$ \end{ALC@g} \end{algorithmic} \end{algorithm} \subsection{Active Learning}\label{sec:problem:AL} In the proposed FL framework, we introduce the sampling function, $\mathcal{A}\left(\cdot\right)$, which selects the instances to be labeled from the unlabeled dataset prior to the process of FL. For example, with random sampling, the instances acquired from the unlabeled dataset $\mathcal{U}$ are $\mathcal{A}\left(\mathcal{U}\right)=\texttt{random}(\mathcal{U}, b)$, where $\texttt{random}\left(\mathcal{U}, b\right)$ randomly chooses $b$ instances from $\mathcal{U}$. In terms of the sampling function, the goal of AL is to find the sampling function that selects the instances most informative and effective for the performance of the main task.
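To make the interface of the sampling function concrete, the following is a minimal Python sketch (with hypothetical helper names, not taken from any implementation of this paper) of the random baseline $\texttt{random}(\mathcal{U},b)$, together with a generic score-based acquisition that replaces the uniform choice by an argmax over a score $S(x)$, as formalized next:
\begin{verbatim}
import numpy as np

def random_sampling(pool_indices, b, seed=0):
    # Baseline acquisition A(U) = random(U, b): draw b instances uniformly.
    rng = np.random.default_rng(seed)
    return rng.choice(pool_indices, size=b, replace=False)

def score_based_sampling(pool_indices, b, score_fn):
    # Generic AL acquisition: label the b instances with the highest score
    # S(x); score_fn maps an instance index to a scalar informativeness score.
    scores = np.array([score_fn(i) for i in pool_indices])
    top_b = np.argsort(scores)[-b:]
    return np.asarray(pool_indices)[top_b]
\end{verbatim}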
Most AL algorithms search for the instances with the highest scores in the unlabeled data pool \cite{mccallumzy1998employing} as \begin{equation} \mathcal{A}\left(\mathcal{U}\right)= \underset{\mathcal{L} \subseteq \mathcal{U},~\left| \mathcal{L} \right|=b, ~ x \in \mathcal{L}}{\mathrm{argmax}} ~ S\left(x\right), \end{equation} where $b$ is the sampling budget, and $S\left(x\right)$ is the score function of $x$. The score function of an effective AL algorithm should faithfully reflect the potential informativeness of the instances in the unlabeled dataset. Hence, AL algorithms can be characterized by how $S\left(\cdot\right)$ is designed. The score may capture uncertainty, representativeness (such as diversity or density), training loss, or dissimilarity to the labeled dataset. Since the informativeness depends on the current labeled dataset, the score function is also conditioned on the current state of the labeled dataset. Hence, the score is generally calculated based on a model trained with the current labeled dataset, namely \begin{equation}\label{scorefunction} S\left(x\right)=S\left(x\; \middle| \; \mathcal{D}\right)=S\left(x\; \middle| \; \bm{\phi} \left(\mathcal{D}\right) \right), \end{equation} where $\bm{\phi} \left(\mathcal{D}\right)$ is the auxiliary model that is trained with the labeled dataset $\mathcal{D}$, starting from a randomly initialized model. Furthermore, AL adopts multiple rounds for sampling and gradually samples from the unlabeled dataset. When it is desired to add $b$ labeled instances after $K$ rounds, it samples $b/K$ instances at each round. We summarize the AL algorithm, $\mathcal{A}\left(\cdot\right)$, in Algorithm~\ref{alg:AL}. \begin{algorithm}[tb] \caption{Active Learning, $\mathcal{A}\left(\cdot\right)$} \label{alg:AL} \begin{algorithmic} \STATE {\bfseries Input:} unlabeled dataset, $\mathcal{U}^{1}$ \STATE \phantom{{\bfseries Input:}} initially labeled dataset, $\mathcal{D}^{1}$ \STATE \phantom{{\bfseries Input:}} number of AL rounds, $K$ \STATE \phantom{{\bfseries Input:}} initialized models for $\bm{\phi}$, $\{\bm{\phi}^{k}\}_{k=1}^{K}$ \STATE \phantom{{\bfseries Input:}} annotation budget, $b$ \begin{ALC@g} \FOR{$k=1$ {\bfseries to} $K$} \STATE train $\bm{\phi}\left(\mathcal{D}^{k}\right)$, starting from $\bm{\phi}^{k}$ \STATE sample $\mathcal{L}^{k}$, \begin{equation}\label{samplinginAL} \mathcal{L}^{k}=\underset{\mathcal{L} \subseteq \mathcal{U}^{k},~\left| \mathcal{L} \right|=\frac{b}{K}, ~ x \in \mathcal{L}}{\mathrm{argmax}} ~ S\left(x\; \middle| \; \bm{\phi}\left(\mathcal{D}^{k}\right) \right) \end{equation} \STATE $\hat{\mathcal{D}}^{k}=\texttt{annotate}\left(\mathcal{L}^{k}\right)$ \STATE $\mathcal{D}^{k+1 } = \mathcal{D}^{k} \cup \hat{\mathcal{D}}^{k}$ \STATE $\mathcal{U}^{k+1 } = \mathcal{U}^{k} - \mathcal{L}^{k}$ \ENDFOR \STATE {\bfseries return} $\mathcal{D}^{K+1}$ with size of $\left|\mathcal{D}^{1}\right|+b$ \end{ALC@g} \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{Federated Active Learning, $\mathcal{A}\left(\cdot\right)$} \label{alg:FAL} \begin{algorithmic} \STATE {\bfseries Input:} unlabeled dataset, $\left\{\mathcal{U}^{1}_{m}\right\}_{m=1}^{M}$ \STATE \phantom{{\bfseries Input:}} initially labeled dataset, $\left\{\mathcal{D}^{1}_{m}\right\}_{m=1}^{M}$ \STATE \phantom{{\bfseries Input:}} number of AL rounds, $K$ \STATE \phantom{{\bfseries Input:}} initialized models for $\bm{\phi}$, $\{\bm{\phi}^{k}\}_{k=1}^{K}$ \STATE \phantom{{\bfseries Input:}} annotation budget,
$\left\{b_{m}\right\}_{m=1}^{M}$ \begin{ALC@g} \FOR{$k=1$ {\bfseries to} $K$} \STATE {\bfseries FL step:} \begin{ALC@g} \STATE train $\bm{\phi}^{k}_{FL}$, starting from $\bm{\phi}^{k}$ \begin{equation} \bm{\phi}_{FL}^{k}= \texttt{FedAvg}\left(\left\{ \mathcal{D}_{m}^{k} \right\}_{m=1}^{M} \;\middle|\; \bm{\phi}^{k} ,\left\{\eta^{k}_{t}\right\}_{t} \right) \end{equation} \end{ALC@g} \STATE {\bfseries Sampling step:} \begin{ALC@g} \FOR{$m=1$ {\bfseries to} $M$} \STATE sample $\mathcal{L}^{k}_{m}$, \begin{equation}\label{samplinginFAL} \mathcal{L}^{k}_{m}=\underset{\mathcal{L} \subseteq \mathcal{U}^{k}_{m},~\left| \mathcal{L} \right|=\frac{b_{m}}{K}, ~ x \in \mathcal{L}}{\mathrm{argmax}} ~ S\left(x\; \middle| \; \bm{\phi}^{k}_{FL}\right) \end{equation} \STATE $\hat{\mathcal{D}}^{k}_{m}=\texttt{annotate}\left(\mathcal{L}_{m}^{k}\right)$ \STATE $\mathcal{D}_{m}^{k+1 } = \mathcal{D}_{m}^{k} \cup \hat{\mathcal{D}}_{m}^{k}$ \STATE $\mathcal{U}_{m}^{k+1 } = \mathcal{U}_{m}^{k} - \mathcal{L}_{m}^{k}$ \ENDFOR \end{ALC@g} \ENDFOR \STATE {\bfseries return} $\mathcal{D}_{m}^{K+1}$ with size of $\left|\mathcal{D}_{m}^{1}\right|+b_{m}$, $m=1,\dots,M$ \end{ALC@g} \end{algorithmic} \end{algorithm} \section{Federated Active Learning (F-AL)} This section introduces the AL methods in the FL framework: S-AL and F-AL. In the benchmark scheme of conventional FL adopting random sampling, we set $\mathcal{A}\left(\mathcal{U}\right)=\texttt{random}\left(\mathcal{U},b\right)$ in Algorithm~\ref{alg:FedAvg}, as introduced in Section~\ref{sec:problem:AL}. \subsection{Separate Active Learning (S-AL)} In S-AL, the clients separately perform AL before the FL execution. With S-AL, the $m$-th client applies $\mathcal{A}\left(\cdot\right)$ of Algorithm \ref{alg:AL} to its unlabeled dataset in the annotation step of the FL framework. S-AL directly leverages AL in the FL framework, including the annotation step, and might seem straightforward. However, no related work has established AL in FL or investigated the effect of AL on FL. \subsection{Federated Active Learning (F-AL)} In S-AL, the clients independently accomplish AL and obtain the instances that are informative with respect to their local datasets as in \eqref{samplinginAL}. At the $k$-th round, the $m$-th client selects $x$ with the highest score, $S\left(x\; \middle| \; \bm{\phi}\left(\mathcal{D}_{m}^{k}\right) \right)$, where $\mathcal{D}_{m}^{k}$ denotes $\mathcal{D}^{k}$ in Algorithm \ref{alg:AL} from the perspective of the $m$-th client. Since the clients execute FL after the annotation step, however, the main objective should be to obtain instances that are informative with respect to the aggregate labeled dataset, $\mathcal{D}^{k}_{total}=\bigcup_{m=1}^{M} \mathcal{D}_{m}^{k}$, as in \eqref{scorefunction}. Therefore, the score function in F-AL is conditioned on $\mathcal{D}^{k}_{total}$ and defined as $S\left(x\; \middle| \; \bm{\phi}\left(\mathcal{D}_{total}^{k}\right) \right)$. However, $\bm{\phi}\left(\mathcal{D}_{total}^{k}\right)$ cannot be built because the datasets $\mathcal{D}_{m}^{k}$, $m=1,\dots,M$, must not be compiled in one place under the constraint of FL. Thus, we replace $\bm{\phi}\left(\mathcal{D}_{total}^{k}\right)$ with the model trained by FL, $\bm{\phi}_{FL}^{k}$, which is \begin{equation}\label{FALmodel} \bm{\phi}_{FL}^{k}= \texttt{FedAvg}\left(\left\{ \mathcal{D}_{m}^{k} \right\}_{m=1}^{M} \;\middle|\; \bm{\phi}^{k} ,\left\{\eta^{k}_{t}\right\}_{t} \right).
\end{equation} Accordingly, with F-AL, the clients carry out FL to obtain the score function that represents the informativeness with respect to the aggregate labeled dataset. The overall F-AL procedure is summarized in Algorithm~\ref{alg:FAL}. To give a clearer perspective, uncertainty \cite{lewis1994sequential,beluch2018power} illustrates why $\bm{\phi}_{FL}^{k}$ should be leveraged for the calculation of the score function in order to improve the FL performance. If the AL applies uncertainty-based sampling or sampling related to uncertainty, referred to as task-aware AL in \cite{kim2021task}, it utilizes the uncertainty score, which is measured by the main task model trained with the current labeled dataset. Therefore, the auxiliary model is the main task model, namely, $\bm{\phi}\left(\mathcal{D}^{k}\right)=\bm{\theta}\left(\mathcal{D}^{k}\right)$ in Algorithm \ref{alg:AL}, or the set of auxiliary models includes the main task model in the case of several auxiliary models. Hence, we remark that the auxiliary model should also be obtained through FL, since the main task model is trained by FL. After obtaining $\bm{\phi}_{FL}^{k}$ in F-AL, the instances can ideally be sampled as \begin{equation} \label{samplingideal} \mathcal{L}^{k}=\underset{\mathcal{L} \subseteq \mathcal{U}^{k},~\left| \mathcal{L} \right|=\frac{b}{K}, ~ x \in \mathcal{L}}{\mathrm{argmax}} ~ S\left(x\; \middle| \; \bm{\phi}_{FL}^{k} \right), \end{equation} where $\mathcal{U}^{k}= \bigcup_{m=1}^{M} \mathcal{U}^{k}_{m}$, $b=\sum_{m=1}^{M} b_{m}$. Under the annotation workload condition that the $m$-th client annotates $b_{m}/K$ instances at each round, we have $\mathcal{L}^{k}= \bigcup_{m=1}^{M} \mathcal{L}^{k}_{m}$, where \begin{equation}\label{samplinginFAL2} \mathcal{L}^{k}_{m}=\underset{\mathcal{L} \subseteq \mathcal{U}^{k}_{m},~\left| \mathcal{L} \right|=\frac{b_{m}}{K}, ~ x \in \mathcal{L}}{\mathrm{argmax}} ~ S\left(x\; \middle| \; \bm{\phi}^{k}_{FL}\right). \end{equation} Therefore, each $m$-th client samples $\mathcal{L}_{m}^{k}$ as in \eqref{samplinginFAL2} and follows the remaining steps in Algorithm \ref{alg:AL}. In fact, the sampling step in \eqref{samplingideal} could be executed at the server by exchanging the scores and indices of the instances. However, we do not pursue this further since it might break the fairness of the annotation workload among the clients. \section{Experiments} This section provides the implementation details and the numerical results with related discussion. We compare the performance of FL using random sampling, S-AL, and the proposed F-AL in image classification tasks. The annotation strategies are applied in the annotation step of Algorithm \ref{alg:FedAvg}, and the test accuracy of the obtained model is measured as the performance metric. For the image classification tasks, we evaluate the performance of the annotation strategies on the classical public datasets Fashion-MNIST \cite{xiao2017fashion}, CIFAR-10 \cite{krizhevsky2009learning}, and CIFAR-100 \cite{krizhevsky2009learning}. The Fashion-MNIST dataset is a more challenging alternative to the MNIST dataset. It consists of a training dataset of 60,000 images for 10 types of clothing and a test dataset of 10,000 images. CIFAR-10 and CIFAR-100 contain 50,000 training images and 10,000 test images. CIFAR-10 has 10 classes, while CIFAR-100 has 100 classes.
\subsection{Active learning algorithms} First, we evaluate the performance of the annotation strategies when the AL algorithm is the recently proposed Maximum Classifier Discrepancy for Active Learning (MCDAL) \cite{cho2021mcdal}, which is one of the state-of-the-art AL algorithms. It utilizes the prediction discrepancies between two auxiliary classifiers after training the auxiliary classifiers to maximize the discrepancies, thereby replacing the classic uncertainty with the discrepancies in the predictions of the auxiliary classifiers. It has been empirically demonstrated that this approach outperforms the state-of-the-art AL algorithms on several image classification tasks, including CIFAR-10 and CIFAR-100. For further discussion, we evaluate the performance of the annotation strategies for various kinds of AL algorithms to verify the consistency of the performance comparison. The first category is uncertainty-related AL algorithms. This category includes the classic uncertainty-based sampling with maximum entropy \cite{lewis1994sequential}, MC-dropout with maximum entropy \cite{gal2016dropout}, Learning Loss (LL) \cite{yoo2019learning}, and MCDAL \cite{cho2021mcdal}. The other AL algorithms are the core-set approach \cite{sener2017active} and variational adversarial active learning (VAAL) \cite{sinha2019variational}. The core-set approach is the most widely used representation-based AL algorithm in the literature, and VAAL represents the recent adversarial AL algorithms \cite{wang2020dual,zhang2020state,kim2021task}. All of the algorithms consider the main task model as the auxiliary model for AL. In the LL, MCDAL, and VAAL algorithms, additional auxiliary models are assumed and trained for AL. Therefore, for F-AL these models can be trained by FL in addition to the main task models. For the evaluation of LL, however, we train the auxiliary model locally because no improvement is observed with the FL of the auxiliary models in our experiments. \subsection{Implementation details} In the experiments, we assume that $M=5$ clients hold disjoint sets of $10000$ images each, of which $10 \%$ is initially labeled. In our active learning setup, another $10 \%$ of the dataset is added to the labeled dataset at the sampling step of each round. We repeat these AL rounds until the total dataset is labeled. Hence, we set $b=10000$, $K=10$, and measure the test accuracy of the FL model at each $k$-th round of AL, $\texttt{FedAvg}\left(\left\{ \mathcal{D}^{k}_{m} \right\}_{m=1}^{M} \;\middle|\; \bm{\theta}^{k}_{1} ,\left\{\eta_{t}^{k}\right\}_{t} \right)$. We apply ResNet-18 \cite{he2016deep} as the base architecture of the main task model for all the exemplary tasks. In the FL implementation, the main task models are optimized by SGD with a learning rate of $5 \times 10^{-2}$ and a learning rate decay of $0.997$ per global iteration. The number of local epochs is $1$, and the global iterations end when the training losses at the clients decrease below the thresholds $1 \times 10 ^{-3}$, $5 \times 10 ^{-4}$, and $1.5 \times 10 ^{-3}$ for Fashion-MNIST, CIFAR-10, and CIFAR-100, respectively. In the independent learning for S-AL, we use SGD with a learning rate of $1 \times 10^{-2}$ and a step decay of $0.997$ at every epoch. Independent learning follows the same stopping criteria as FL. We use random horizontal flips for data augmentation. For the results of the experiments, we report the average accuracy over three runs.
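As a concrete illustration of the per-client sampling step \eqref{samplinginFAL2} used in these experiments, the following is a minimal PyTorch-style sketch of one F-AL sampling round with the maximum-entropy uncertainty score (hypothetical names throughout; \texttt{global\_model} stands for the FL-trained model $\bm{\phi}^{k}_{FL}$ of \eqref{FALmodel}, and batching, data loading, annotation, and the FL training itself are omitted):
\begin{verbatim}
import torch
import torch.nn.functional as F

def entropy_score(model, x):
    # S(x | phi_FL^k): predictive entropy of the FL-trained model.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)

def fal_sampling_round(global_model, unlabeled_pools, budgets, K):
    # One F-AL sampling round: every client scores its own pool with the
    # shared FL-trained model and selects its top b_m / K instances.
    selected = []
    for U_m, b_m in zip(unlabeled_pools, budgets):
        scores = entropy_score(global_model, U_m)     # U_m: (n_m, ...) tensor
        top = torch.topk(scores, k=b_m // K).indices  # indices into U_m
        selected.append(top)
    return selected
\end{verbatim}
Because the score is computed locally from the shared model, this sketch exchanges only model parameters, consistent with the FL constraint discussed in Section 4.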
\begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{repre_mnist.png}} \caption{Test accuracies of the global model trained by FL per round on Fashion-MNIST.} \label{repre_mnist} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{repre_cifar10.png}} \caption{Test accuracies of the global model trained by FL per round on CIFAR-10.} \label{repre_cifar10} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{repre_cifar100.png}} \caption{Test accuracies of the global model trained by FL per round on CIFAR-100.} \label{repre_cifar100} \end{center} \vskip -0.2in \end{figure} \subsection{Performance comparison} Fig. \ref{repre_mnist}-\ref{repre_cifar100} illustrate the performance of random sampling (conventional FL), S-AL (benchmark), and F-AL (ours), the annotation strategies for FL. The AL algorithm is MCDAL, and the datasets are Fashion-MNIST, CIFAR-10, and CIFAR-100. Full budget in the figures denotes the performance of FL when all the clients have $100 \%$ labeled datasets. On Fashion-MNIST, F-AL and S-AL considerably outperform random sampling, and the proposed F-AL shows the best performance among the strategies. In particular, the average improvement compared to random sampling is $1.1 \%$ and $1.6 \%$ for S-AL and F-AL, respectively, at the 2nd and 3rd rounds, before converging to the performance of the full budget. On CIFAR-10, F-AL and S-AL outperform random sampling, and the proposed F-AL shows the best performance among the strategies, as in the case of Fashion-MNIST. The average improvement compared to random sampling is $1.6 \%$ and $2.4 \%$ for S-AL and F-AL, respectively, before the 10th round. At the halfway round, the improvements are $2.3 \%$ and $3.9 \%$ for S-AL and F-AL, respectively, where the performance of random sampling is $79.5 \%$. On CIFAR-100, it is also observed that F-AL and S-AL show better performance than random sampling. The average improvement compared to random sampling is $1.1 \%$ and $2.0 \%$ for S-AL and F-AL, respectively, before the 10th round, and the improvements are $2.0 \%$ and $3.2 \%$ at the $7$th round, where the test accuracy of random sampling is $42.3 \%$. Fig. \ref{repre_mnist}-\ref{repre_cifar100} demonstrate that the proposed F-AL outperforms the baseline methods in the image classification of Fashion-MNIST, CIFAR-10, and CIFAR-100. \subsection{Extended results for various AL algorithms} In order to demonstrate that our proposed F-AL outperforms the baseline methods for general AL algorithms, we extend the experiments beyond MCDAL in Fig. \ref{several_mnist}-\ref{several_cifar100}. We first consider the uncertainty-related AL algorithms: uncertainty-based sampling, MC-dropout with maximum entropy, LL, and MCDAL. Fig. \ref{several_mnist}-\ref{several_cifar100} illustrate that F-AL outperforms S-AL and random sampling for the considered AL algorithms. The only conflicting case is when LL is applied on CIFAR-100, as observed in Fig. \ref{several_cifar100}. Through Fig. \ref{several_mnist}-\ref{several_cifar100}, we can compare the performance of the AL algorithms when they are applied in the FL environment. In Fig. \ref{several_mnist}, uncertainty-based sampling and MC-dropout show comparable performance, better than MCDAL and LL in both cases of S-AL and F-AL.
In Fig. \ref{several_cifar10}, uncertainty-based sampling and MC-dropout show the best performance, and LL performs poorly compared to the other algorithms, similar to the result on Fashion-MNIST. Fig. \ref{several_cifar100} illustrates that uncertainty-based sampling and MC-dropout outperform random sampling, while MCDAL and LL perform comparably to random sampling. As observed in Fig. \ref{repre_cifar100}, F-MCDAL outperforms random sampling, but F-AL does not show considerable improvement for LL. In Table \ref{others_cifar10}, we consider the other categories of AL algorithms, namely representativeness-based AL and adversarial AL. We evaluate the performance of random sampling, S-AL, and F-AL on CIFAR-10 when the AL algorithms are the core-set approach and VAAL. Table \ref{others_cifar10} shows that both the core-set approach and VAAL perform poorly, even worse than random sampling, while F-VAAL performs comparably to random sampling. F-AL shows no improvement when the core-set approach is used as the AL algorithm. \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{several_mnist.png}} \caption{Test accuracies of the global model trained by FL per round on Fashion-MNIST (uncertainty-related AL algorithms).} \label{several_mnist} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{several_cifar10.png}} \caption{Test accuracies of the global model trained by FL per round on CIFAR-10 (uncertainty-related AL algorithms).} \label{several_cifar10} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{several_cifar100.png}} \caption{Test accuracies of the global model trained by FL per round on CIFAR-100 (uncertainty-related AL algorithms).} \label{several_cifar100} \end{center} \vskip -0.2in \end{figure} \begin{table*}[t] \caption{Test accuracies of the global model trained by FL per round on CIFAR-10 (core-set approach and VAAL).} \label{others_cifar10} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccccccccc} \toprule \phantom{s} &\multicolumn{10}{c}{Labeled Data ($\%$)} \\ methods & $10\%$ & $20\%$ & $30\%$ & $40\%$ & $50\%$ & $60\%$ & $70\%$ & $80\%$ & $90\%$ & $100\%$ \\ \midrule F-VAAL &$0.571$ &$0.660$ &$0.720$ &$0.757$ &$0.778$ &$0.794$ &$0.808$ &$0.827$ &$0.839$ &$0.850$\\ VAAL &$0.571$ &$0.634$ &$0.689$ &$0.737$ &$0.763$ &$0.789$ &$0.808$ &$0.819$ &$0.840$ &$0.850$\\ F-coreset &$0.571$ &$0.652$ &$0.699$ &$0.742$ &$0.768$ &$0.791$ &$0.807$ &$0.820$ &$0.838$ &$0.850$\\ Coreset &$0.571$ &$0.655$ &$0.700$ &$0.745$ &$0.770$ &$0.792$ &$0.806$ &$0.819$ &$0.838$ &$0.850$\\ Random &$0.571$ &$0.662$ &$0.716$ &$0.755$ &$0.781$ &$0.795$ &$0.808$ &$0.827$ &$0.837$ &$0.850$\\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} \subsection{Discussion} In Figs. \ref{several_mnist}--\ref{several_cifar100} and Table \ref{others_cifar10}, we first observe that uncertainty-based sampling and MC-dropout, which directly utilize the uncertainty, show the best performance across most rounds of AL and gain the largest performance increase from F-AL. In the previous literature \cite{yoo2019learning,cho2021mcdal}, LL and MCDAL were validated to outperform classic uncertainty-based sampling and MC-dropout, contrary to the results in our experiments.
In fact, LL and MCDAL learn the loss prediction module and the classifiers for discrepancy, respectively, in addition to the main task model, using the \textit{unlabeled dataset}. Compared to the large-scale dataset stored at a single site in the literature \cite{yoo2019learning,cho2021mcdal}, each client in the distributed setting has far fewer instances in its unlabeled dataset; e.g., each of the $5$ clients holds $20\%$ of the total dataset in our experiments. This insufficiency of the unlabeled dataset in the FL environment degrades their performance relative to the classical uncertainty-based AL algorithms. With VAAL, the sampling step is implemented solely by the variational autoencoder (VAE) and the discriminator, which are trained on both the labeled and the unlabeled datasets. Its performance is therefore also degraded by the insufficient unlabeled dataset, compared to the performance reported on large-scale datasets \cite{sinha2019variational}. It is remarkable that F-VAAL outperforms VAAL, as observed in Table \ref{others_cifar10}; the improvement comes from training the VAE and discriminator with F-AL. The core-set approach generally shows excellent performance on large-scale datasets \cite{sener2017active}, since this representativeness-based AL algorithm alleviates the tendency of uncertainty-based sampling to select similar instances near the decision boundary. However, this problem is suppressed in the distributed setting, and the core-set approach performs poorly in our FL environment. With the core-set approach, only the feature extractor of the main task model is used for sampling, so the core-set approach benefits little from the main task model improved by F-AL. Furthermore, the core-set approach requires the whole dataset across clients to be stored at one site for the collaborative sampling in \eqref{samplingideal}. That is, the collaboratively sampled instances with the core-set approach should be \begin{equation} \mathcal{L}^{k}=\underset{\mathcal{L} \subseteq \mathcal{U}^{k},~\left| \mathcal{L} \right|=\frac{b}{K}}{\mathrm{argmax}} ~ S\left(\mathcal{L}\; \middle| \; \bm{\phi}_{FL}^{k} \right), \end{equation} but this cannot be obtained by local computation as in \eqref{samplinginFAL2}, since it requires the whole $\mathcal{U}^{k}$, which conflicts with the FL constraint. Hence, F-AL cannot improve the performance of the core-set approach. \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{solo_cifar10.png}} \caption{Test accuracies of the model trained by independent learning per round on CIFAR-10.} \label{IL_cifar10} \end{center} \vskip -0.2in \end{figure} \textbf{Performance of independent learning} Figs. \ref{repre_mnist}--\ref{several_cifar100} and Table \ref{others_cifar10} have demonstrated that AL is effective in the FL environment and that the proposed F-AL outperforms conventional random sampling and S-AL. For further discussion, we investigate the effect of F-AL from the perspective of the local datasets. To this end, each client trains the main task model alone on its local dataset after obtaining the labeled dataset via the AL strategies. Fig. \ref{IL_cifar10} illustrates the average test accuracy of the models trained at the clients on CIFAR-10 when the AL algorithms are the uncertainty-related AL algorithms, which show a relatively significant performance increase from F-AL compared to the core-set approach and VAAL.
It is observed that F-AL considerably decreases the performance of independent learning. In contrast, S-AL clearly outperforms random sampling, since S-AL samples the instances that are informative with respect to the current local dataset. With F-AL, the clients collaborate to sample the instances that are informative for the aggregate dataset, not for the local datasets. From the perspective of the local datasets, this acts as a hard constraint on each client's sampling: a client running F-AL does not select instances that are informative only for its own dataset but not for the aggregate dataset. As illustrated in Fig. \ref{example}, the aggregate dataset sampled by F-AL works very well for FL even though the sampled instances can be biased with respect to the distributions of the local datasets. \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{example5.png}} \caption{Distributions of the sampled instances.} \label{example} \end{center} \vskip -0.2in \end{figure} \section{Conclusion} In this paper, we incorporated active learning (AL) and sampling strategies into the FL framework to reduce the annotation workload. In the proposed federated active learning (F-AL) method, the clients collaboratively perform AL to obtain the instances that can maximally improve the global model of FL. We empirically demonstrated that F-AL outperforms the conventional random sampling strategy and client-level separate AL (S-AL) for various AL algorithms on image classification tasks such as Fashion-MNIST, CIFAR-10, and CIFAR-100. \section*{Acknowledgements}
\section{Introduction} A $3$-manifold in this paper means a connected, closed, oriented, smooth $3$-manifold. Given a framed link $L$ in $S^3$, integral surgery along $L$ produces a $3$-manifold, and $L$ is called a surgery presentation of the resulting manifold. Kirby calculus \cite{MR467753} says that any $3$-manifold can be obtained in this way; moreover, surgery presentations of the same $3$-manifold are related to each other by Kirby moves. A linear combination of quantum invariants of framed links therefore defines a topological invariant of $3$-manifolds if it is invariant under Kirby moves. Reshetikhin and Turaev \cite{MR1091619} gave the first rigorous construction of a $3$-manifold invariant along this line. Their invariant was defined for a modular category, which is semisimple and in which all simple objects are required to have non-zero quantum dimensions. Costantino, Geer and Patureau-Mirand \cite{MR3286896} extended Reshetikhin and Turaev's construction to categories which may not be semisimple or may contain objects with zero quantum dimensions. They introduced the notion of a relative $G$-modular category and proved that the quantum invariant of framed links constructed from a relative $G$-modular category can be used to define a $3$-manifold invariant. Let $\mathscr{C}$ be a relative $G$-modular category. For a $3$-manifold $M$, a $\mathscr{C}$-ribbon graph $T$ and a cohomology class $\omega: H_1(M\backslash T, \mathbb{Z})\to G$ satisfying certain compatibility conditions, let $L$ be a surgery presentation of $M$ with colors induced from $\omega$. Then \cite{MR3286896} showed that the quantum invariant of $L\cup T$, after normalization, is a topological invariant of $(M, T, \omega)$. In this paper, we follow the method of \cite{MR3286896} to construct a $3$-manifold invariant. The quantum invariant we use is Viro's $\mathfrak{gl}(1\vert 1)$-Alexander polynomial defined in \cite{MR2255851}. Consider a $1$-palette defined by $(B, G)$, where $B$ is a field of characteristic $0$ and $G\subset B$ is an abelian group. There is a category $\mathcal{M}_B$ of finite dimensional modules over a subalgebra $U^1$ of $U_{q}(\mathfrak{gl}(1 \vert 1))$, the quantum group of the Lie superalgebra $\mathfrak{gl}(1 \vert 1)$. The category $\mathcal{M}_B$ is not semisimple, and its objects have zero quantum dimensions. Viro defined a functor from the category of trivalent graphs to $\mathcal{M}_B$; for a colored graph $\Gamma$, the Alexander polynomial $\Delta (\Gamma)$ is defined using this functor. Now consider a triple $(M, \Gamma, \omega)$, where $M$ is a $3$-manifold, $\Gamma$ is a trivalent graph colored by objects of $\mathcal{M}_B$, and $\omega: H_1(M\backslash \Gamma, \mathbb{Z})\to G$ is a cohomology class. We assume that $(M, \Gamma, \omega)$ satisfies certain compatibility conditions. Here is our main result; the definitions of Kirby color and computable surgery presentation will be given in Section 3.3 and Section 4. \begin{theo} \label{mainresult1} For a $1$-palette $(B, G)$ where $G$ contains $\mathbb{Z}$ but no $\mathbb{Z}/2\mathbb{Z}$ as a subgroup, let $(M, \Gamma, \omega)$ be a compatible triple and let $L$ be a computable surgery presentation of $(M, \Gamma, \omega)$. Then $$\Delta (M, \Gamma, \omega):=\frac{\Delta (L\cup \Gamma)}{2^{r(L)}(-1)^{\sigma_{+}(L)}}$$ is a topological invariant of $(M, \Gamma, \omega)$, where $r(L)$ is the number of components of $L$ and $\sigma_{+}(L)$ is the number of positive eigenvalues of the linking matrix of $L$.
Here each component $K$ of $L$ has Kirby color $\Omega(\omega([m_K]), 1)$, where $m_K$ is the meridian of $K$. \end{theo} Our strategy is as follows. Instead of proving that $\mathcal{M}_B$ carries a relative $G$-modular category structure, we show directly that the value $\Delta (M, \Gamma, \omega)$ is invariant under Kirby moves. The flavor of this paper is therefore quite combinatorial, without involving much algebra. However, we believe that $\mathcal{M}_B$ does admit a relative $G$-modular category structure whose corresponding invariant is the one given in Theorem \ref{mainresult1}, and we hope to discuss this topic in future work. In the definitions of compatible triple and Kirby color and in the proof of Theorem \ref{mainresult1}, we adapt many ideas from \cite{MR3286896}. The authors of \cite{MR3286896} discussed in detail how to define the $3$-manifold invariant in the context of quantum $\mathfrak{sl}(2)$. For any finite-dimensional simple complex Lie algebra $\mathfrak{g}$, they also showed the existence of a relative $G$-modular category associated with a certain version of quantum $\mathfrak{g}$. The representation theory of Lie superalgebras is much more complicated than that of Lie algebras. Based on the concept of a relative $G$-modular category, N.~P.~Ha \cite{MR3785790} constructed a $3$-manifold invariant from the quantum group associated with the Lie superalgebra $\mathfrak{sl}(2\vert 1)$. It is not yet clear to us whether $\Delta (M, \Gamma, \omega)$ coincides with any known invariant. The structure of the paper is as follows. In Section 2 we review the definition of Viro's $\mathfrak{gl}(1\vert 1)$-Alexander polynomial for trivalent graphs; we calculate examples and recall the junction relations, both of which will be used in subsequent sections. In Section 3, we give the definitions of compatible triple and Kirby color, and discuss how the $\mathfrak{gl}(1\vert 1)$-Alexander polynomial changes under Kirby moves. In Section 4, we state the main result and give the proof. In Section 5 we discuss examples and calculations of this invariant; in particular, we take the lens spaces $L(7, 1)$ and $L(7, 2)$ as examples to show that the invariant $\Delta (M, \Gamma, \omega)$ can distinguish homotopy equivalent manifolds. \medskip \noindent{\bf Acknowledgements} The authors would like to thank Prof.~Jun Murakami and Dr.~Atsuhiko Mizusawa for helpful discussions. They would also like to thank Prof.~Tetsuya Ito for suggesting that we compute our invariant for $L(7, 1)$ and $L(7, 2)$. The first author was partially supported by JSPS KAKENHI Grant Number JP20K14304. The second author was partially supported by JSPS KAKENHI Grant Number 20K03604 and the Toyohashi Tech Project of Collaboration with KOSEN. \medskip \section{Viro's $\mathfrak{gl}(1\vert 1)$-Alexander polynomial for trivalent graphs} Viro \cite{MR2255851} defined a functor from the category of colored framed oriented trivalent graphs to the category of finite dimensional modules over a subalgebra $U^1$ of the $q$-deformed universal enveloping superalgebra $U_{q}(\mathfrak{gl}(1 \vert 1))$. Using this functor, in \cite[Sect. 6]{MR2255851}, he defined the $\mathfrak{gl}(1\vert 1)$-Alexander polynomial of a trivalent graph. We recall how this polynomial is calculated. For the algebraic structures of $U^1$ and $U_{q}(\mathfrak{gl}(1 \vert 1))$, see \cite[$\S$11: Appendix]{MR2255851}.
\subsection{Colored framed graphs} A {\it $1$-palette} (see \cite[2.8]{MR2255851}) is a quadruple $$(B, G, W, G\times W \to G),$$ where $B$ is a commutative ring with unit, $G$ is a subgroup of the multiplicative group of $B$, $W$ is a subgroup of the additive group of $B$ which contains the unit $1$ of $B$, and $G\times W \to G: (t, N)\mapsto t^{N}$ is a bilinear map satisfying $t^1=t$ for each $t\in G$. In this paper, we consider the case where $B$ is a field of characteristic $0$. Let $G$ be a subgroup of the multiplicative group of $B$ (which is abelian), let $W=\mathbb{Z}$, and let $G\times \mathbb{Z} \to G: (t, N)\mapsto t^{N}$. Then $(B, G, W, G\times W \to G)$ is clearly a $1$-palette. Since $W$ and the map $G\times \mathbb{Z} \to G$ are fixed in this way, we suppress them and write $(B, G)$ for the $1$-palette. Throughout this paper, a $1$-palette always means a $1$-palette defined in this way. Let $T$ be an oriented trivalent graph, and let $E$ be the set of edges of $T$. Consider a map, which we call a coloring, \begin{eqnarray*} c=(\mathrm{mul}, \mathrm{wt}): E &\to& G\backslash \{g\in G \mid g^4=1\}\times \mathbb{Z}\\ e &\mapsto& (t, N). \end{eqnarray*} The first entry $t=\mathrm{mul}(e)$ is called the {\it multiplicity} and the second entry $N=\mathrm{wt}(e)$ is called the {\it weight}. Around a vertex, suppose the three adjacent edges are colored by $(t_1, N_1)$, $(t_2, N_2)$ and $(t_3, N_3)$. Let $\epsilon_i=-1$ if the $i$-th edge points toward the vertex and $\epsilon_i=1$ otherwise. The coloring $c$ must satisfy the following conditions, called the {\it admissibility conditions} in \cite{MR2255851}: \begin{align} \prod_{i=1}^{3} t_{i}^{\epsilon_i}&=1,\hspace{7cm} \label{multi}\\ \sum_{i=1}^{3} \epsilon_i N_i&=-\prod_{i=1}^{3} \epsilon_i. \end{align} A vertex is called a {\it source} (resp. {\it sink}) if all the adjacent edges have $\epsilon=1$ (resp. $\epsilon=-1$). Now consider a proper embedding of $T$ into a $3$-manifold $M$; we still write $T$ for the embedded graph. A {\it framing} of $T$ is an orientable compact surface $F$ embedded in $M$ in which $T$ sits as a deformation retract. More precisely, in $F$ each vertex of $T$ is replaced by a disk centered at the vertex, and each edge of $T$ is replaced by a strip $[0, 1]\times [0, 1]$, where $[0, 1]\times \{0, 1\}$ is attached to the boundaries of the adjacent vertex disks and $\{\frac{1}{2}\}\times [0, 1]$ is the given edge of $T$. A {\it framed graph} is a graph with a framing. By an {\it isotopy} of a framed graph we mean an isotopy of the graph in $M$ which extends to an isotopy of the framing. For a framed graph, at each source or sink we can assign an orientation to the boundary of the associated disk, which is regarded as part of the coloring of $T$. Now we are ready to give the following definition. \begin{defn} \rm A {\it colored framed oriented trivalent graph} $\Gamma$ in a $3$-manifold $M$ is an oriented trivalent graph $T$ embedded in $M$ with the following three structures: \begin{itemize} \item a framing; \item a coloring on the set of edges which satisfies the admissibility conditions; \item an orientation of the boundary of the associated disk at each source or sink vertex. \end{itemize} \end{defn} In the following sections, a framed graph means a framed oriented trivalent graph, while a colored framed graph means a colored framed oriented trivalent graph.
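As a quick sanity check of the admissibility conditions, the following sketch (ours, purely illustrative, with multiplicities handled symbolically via sympy) verifies them at a single vertex:
\begin{verbatim}
import sympy as sp

def admissible(edges):
    # Check the admissibility conditions at a trivalent vertex.
    # edges: three triples (t, N, eps), where t is the (symbolic)
    # multiplicity, N the integer weight, and eps = +1 if the edge
    # points away from the vertex, -1 if it points toward it.
    mults_ok = sp.simplify(sp.prod([t**eps for t, _, eps in edges]) - 1) == 0
    sign = sp.prod([eps for _, _, eps in edges])
    weights_ok = sum(eps * N for _, N, eps in edges) == -sign
    return mults_ok and weights_ok

t = sp.symbols('t', positive=True)
# A sink (all eps = -1): the multiplicities must multiply to 1
# and the weights must sum to -1.
print(admissible([(t, 0, -1), (2/t, 0, -1), (sp.Rational(1, 2), -1, -1)]))
# True
\end{verbatim}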
When $\Gamma$ is a graph in $S^3$, we can use a graph diagram to represent $\Gamma$, the blackboard framing of which coincides with the framing of $\Gamma$. Around a source or sink vertex, the counter-clockwise orientation is chosen unless otherwise stated. \subsection{$\mathfrak{gl}(1\vert 1)$-Alexander polynomial} Let $(B, G)$ be a $1$-palette. Suppose $\Gamma$ is a colored framed graph embedded in $S^3$ whose coloring is given by a map $c$ as in Sect. 2.1. We review the definition of the $\mathfrak{gl}(1\vert 1)$-Alexander polynomial of $\Gamma$, which is denoted by $\Delta(\Gamma)$ or $\Delta(\Gamma; c)$. It is known that the pair $(t, N)\in G\backslash \{g\in G \mid g^4=1\}\times \mathbb{Z}$ corresponds to two irreducible $U^1$-modules of dimension $(1 \vert 1)$, denoted by $U(t, N)_{+}$ and $U(t, N)_{-}$. These two modules are dual to each other. Each of $U(t, N)_{+}$ and $U(t, N)_{-}$ is generated by two elements $e_0$ (boson) and $e_1$ (fermion); for details of their algebraic structures, see Appendix 1 of \cite{MR2255851}. Choose a graph diagram of $\Gamma$ in $\mathbb{R}^2$. The diagram divides $\mathbb{R}^2$ into several regions, one of which is unbounded. Choose an edge of $\Gamma$ on the boundary of the unbounded region and cut the edge at a generic point. Suppose the color of the edge is $(t, N)$. Deform the graph diagram by isotopies of $\mathbb{R}^2$ to put it in a Morse position with respect to a given orthogonal coordinate system of $\mathbb{R}^{2}$, so that the two endpoints created by the cut have heights zero and one, and the critical points, the crossings, and the vertices of the diagram have distinct heights between zero and one. Namely, after the deformation the diagram can be divided into several slices by horizontal lines so that each slice is a disjoint union of trivial vertical segments and one of the six elements in Fig.~\ref{fig2}. Each slice connects a sequence of endpoints on its bottom to a sequence of endpoints on its top. In Example~\ref{example}, we show how the Hopf link is divided into such slices. \begin{figure} \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.6] \draw (-1, 0) arc (0:180:1); \draw (0, 1) arc (180:360:1); \draw (3, 0) -- (5, 1); \draw (5, 0) -- (4.2, 0.4); \draw (3, 1) -- (3.8, 0.6); \draw (8, 0) -- (6, 1); \draw (8, 1) -- (7.2, 0.6); \draw (6, 0) -- (6.8, 0.4); \end{tikzpicture}\hspace{6mm} \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.7] \draw (0, 0) to (0,0.5); \draw (0,0.5) to (0.66,1); \draw (0,0.5) to (-0.66,1); \end{tikzpicture}\hspace{6mm} \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.7] \draw (0, 0.5) to (0,1); \draw (0,0.5) to (0.66,0); \draw (0,0.5) to (-0.66,0); \end{tikzpicture} \caption{Critical points, crossings, and vertices.} \label{fig2} \end{figure} Under Viro's functor, each sequence of endpoints corresponds to a tensor product of irreducible $U^1$-modules of dimension $(1 \vert 1)$, as described below. Suppose the sequence of endpoints is $(p_1, \cdots, p_k)$ for $k\geq 1$, where the subindices are ordered by the $x$-coordinates of the endpoints. Then $(p_1, \cdots, p_k)$ corresponds to the tensor product $$U(t_1, N_1)_{\epsilon_1}\otimes \cdots \otimes U(t_k, N_k)_{\epsilon_k},$$ where $(t_i, N_i)$ is the color of the edge containing $p_i$, and $\epsilon_i=+$ when the edge points upward and $\epsilon_i=-$ otherwise, for $1\leq i\leq k$. See Fig.~\ref{fig3}.
\begin{figure} \centering \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.6] \draw (0, -1) [->] to (0, 0.5); \draw (3, -1) [<-] to (3, 0.5); \draw (0,-2) node {$U(t, N)_{+}$}; \draw (3,-2) node {$U(t, N)_{-}$}; \draw [dotted] (10, -1) -- (10, 0.5); \draw (15, -1) -- (15, 0.5); \draw (10,-2) node {$e_0$ (boson)}; \draw (15,-2) node {$e_1$ (fermion)}; \end{tikzpicture} \caption{Under the coloring $c$, each edge corresponds to an irreducible $U^1$-module. In a state, if an edge is assigned $e_0$ (resp. $e_1$), we represent it by a dotted (resp. solid) arc.} \label{fig3} \end{figure} Each slice connects two sequences of endpoints. Under Viro's functor, each slice, read from the bottom to the top, is mapped to a morphism between the corresponding tensor products of irreducible $U^1$-modules. The morphism is defined in \cite{MR2255851} in the language of Boltzmann weights. Simply speaking, each module $U(t, N)_{+}$ or $U(t, N)_{-}$ has two generators $e_0$ (boson) and $e_1$ (fermion), and therefore $U(t_1, N_1)_{\epsilon_1}\otimes \cdots \otimes U(t_k, N_k)_{\epsilon_k}$ is generated by $e_{\delta_1}\otimes e_{\delta_2}\otimes \cdots \otimes e_{\delta_k}$ for $\delta_i=0$ or $1$. The morphism is represented by a matrix with respect to this choice of generators, and the Boltzmann weights are the entries of the matrix. In Table~\ref{viro1}, we list the Boltzmann weights that we need; for the full table, see Tables 3 and 4 of \cite{MR2255851}. The composition of two slices (attaching them by identifying the top of the first slice with the bottom of the second slice) corresponds to the composition of the corresponding morphisms of $U^1$-modules. As a consequence, the graph diagram in a Morse position with two endpoints of heights zero and one is mapped to a morphism from $U(t, N)_+$ to $U(t, N)_+$ (or from $U(t, N)_-$ to $U(t, N)_-$, depending on the orientation of $\Gamma$ at the endpoints), which is a scalar multiple of the identity (\cite[6.2.A]{MR2255851}). Recall that $(t, N)$ is the color of the edge which was cut. Multiplying this scalar by the inverse of $t^{2}-t^{-2}$, we get $\Delta(\Gamma)$. In the following paragraphs, we use $(\Gamma)$ to represent the Alexander polynomial of $\Gamma$ when $\Gamma$ is a colored framed graph diagram.
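Because every slice acts linearly, evaluating a diagram is ultimately bookkeeping with products of Boltzmann weights. As an independent check of this bookkeeping, the following sympy sketch (ours, purely illustrative) re-sums the three weighted states that survive the final cap in the proof of Example~\ref{example} below and confirms the claimed scalar:
\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v', positive=True)   # multiplicities
U, V = sp.symbols('U V', integer=True)    # weights

# Products of Boltzmann weights along the three surviving states
# in the proof of the Hopf link example (one summand per state).
states = [
    u**(-2) * v**(2*U - 2) * u**(2*V - 2),
    -u**(-2) * v**(2*U + 2) * u**(2*V - 2),
    -u**(-2) * (1 - v**4) * (1 - u**4) * v**(2*U - 2) * u**(2*V - 2),
]
scalar = sum(states)
claimed = -u**(2*V) * v**(2*U) * (v**2 - v**(-2))
print(sp.expand(scalar - claimed))   # prints 0
\end{verbatim}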
\begin{ex} \label{example} For $u, v\in G\backslash \{g\in G\vert g^4=1\}$, we have \begin{align*} \left ( \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,1) arc (90:-90:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v, V)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u, U)$}; \end{tikzpicture} \right ) &=\frac{1}{v^2-v^{-2}}\left< \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.3,1) to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (0:180:0.3); \draw (-0.9,1) [->-]to (-0.9, -0.2); \draw (-0.3,-0.2) arc (0:-180:0.3); \draw (0.3,-0.2) to (0.3,-1); \draw (0.3,1) [->]to (0.3,1.5); \draw (0.8,-0.7) node {$\scriptstyle (v, V)$}; \draw (-1.4,-0.5) node {$\scriptstyle (u, U)$}; \draw [thin] (-1.4, 1.5) to (0.9, 1.5); \draw [thin] (-1.4, 1) to (0.9, 1); \draw [thin] (-1.4, -1) to (0.9, -1); \draw [thin] (-1.4, -0.2) to (0.9, -0.2); \draw [thin] (-1.4, 0.4) to (0.9, 0.4); \end{tikzpicture}\right >=\frac{-u^{2V}v^{2U}(v^{2}-v^{-2})}{v^2-v^{-2}}=-u^{2V}v^{2U}.\\\\ \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.3,1) [<-]to (-0.1,0.8); \draw (0.3,0.4) to (0.1,0.6); \draw (-0.3,0.4) [->] to (0.3,1); \draw (-0.3,0.4) to (-0.1,0.2); \draw (0.3,-0.2) to (0.1,0); \draw (-0.3,-0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,1) arc (90:-90:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v, V)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u, U)$}; \end{tikzpicture}\right)&=\frac{1}{v^2-v^{-2}}\left<\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.3,1) to (-0.1,0.8); \draw (0.3,0.4) to (0.1,0.6); \draw (-0.3,0.4) to (0.3,1); \draw (-0.3,0.4) to (-0.1,0.2); \draw (0.3,-0.2) to (0.1,0); \draw (-0.3,-0.2) to (0.3,0.4); \draw (-0.3,1) arc (0:180:0.3); \draw (-0.9,1) [->-]to (-0.9, -0.2); \draw (-0.3,-0.2) arc (0:-180:0.3); \draw (0.3,-0.2) to (0.3,-1); \draw (0.3,1) [->]to (0.3,1.5); \draw (0.8,-0.7) node {$\scriptstyle (v, V)$}; \draw (-1.4,-0.5) node {$\scriptstyle (u, U)$}; \draw [thin] (-1.4, 1.5) to (0.9, 1.5); \draw [thin] (-1.4, 1) to (0.9, 1); \draw [thin] (-1.4, -1) to (0.9, -1); \draw [thin] (-1.4, -0.2) to (0.9, -0.2); \draw [thin] (-1.4, 0.4) to (0.9, 0.4); \end{tikzpicture}\right>=\frac{u^{-2V}v^{-2U}(v^{2}-v^{-2})}{v^2-v^{-2}}=u^{-2V}v^{-2U}. \end{align*} Here $\langle D\rangle$ denotes the scalar defined by the tangle $D$. \end{ex} \begin{proof} We view the left-hand diagram in the first row as a morphism from $U(v, V)_{+}$ to itself, which is a scalar multiple of the identity. To determine the scalar, it is enough to calculate the image of the generator $e_0$. The diagram is divided into 4 slices, each of which contains exactly one critical point or crossing. By the Boltzmann weights in Table~\ref{viro1}, we have the following calculation.
\begin{eqnarray*} e_0&\mapsto& u^{-2}e_0\otimes e_0\otimes e_0+e_1\otimes e_1\otimes e_0\\ &\mapsto& u^{-2}v^{-1+U}u^{-1+V}e_0\otimes e_0\otimes e_0+v^{1+U}u^{-1+V}e_1\otimes e_0\otimes e_1+\frac{1-v^4}{v^{1-U}u^{1-V}}e_1\otimes e_1\otimes e_0\\ &\mapsto& u^{-2}v^{-2+2U}u^{-2+2V}e_0\otimes e_0\otimes e_0+v^{2+2U}u^{-2+2V}e_1\otimes e_1\otimes e_0\\ &&+\frac{(1-v^4)(1-u^4)}{v^{2-2U}u^{2-2V}} e_1\otimes e_1\otimes e_0+\frac{1-v^4}{v^{1-U}u^{1-V}}v^{-1+U}u^{1+V}e_1\otimes e_0\otimes e_1\\ &\mapsto& [u^{-2}v^{-2+2U}u^{-2+2V}-u^{-2}v^{2+2U}u^{-2+2V}-u^{-2}\frac{(1-v^4)(1-u^4)}{v^{2-2U}u^{2-2V}}]e_0\\ &=& u^{2V}v^{2U-2}(1-v^{4})e_0=-u^{2V}v^{2U}(v^2-v^{-2})e_0. \end{eqnarray*} After dividing by $v^2-v^{-2}$, we get the desired result. The calculation of the other case can be conducted in the same way. \end{proof} \begin{table} \begin{center} \begin{tabular} {|c|c|c|c|c|} \hline &\begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) [<-] to [out=270,in=180] (0.5, -0.5) to [out=0,in=270] (1,0); \draw (1.7, -0.3) node {$(u, U)$}; \draw (0.7, -1) node {$1$}; \end{tikzpicture}& \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) [<-] to [out=270,in=180] (0.5, -0.5) to [out=0,in=270] (1,0); \draw (1.7, -0.3) node {$(u, U)$}; \draw (0.7, -1) node {$-u^{2}$}; \end{tikzpicture}& \begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) to [out=270,in=180] (0.5, -0.5) [->] to [out=0,in=270] (1,0); \draw (1.7, -0.3) node {$(u, U)$}; \draw (0.7, -1) node {$u^{-2}$}; \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) to [out=270,in=180] (0.5, -0.5) [->] to [out=0,in=270] (1,0); \draw (1.7, -0.3) node {$(u, U)$}; \draw (0.7, -1) node {$1$}; \end{tikzpicture}\\ \hline &\begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) [<-] to [out=90,in=180] (0.5, 0.5) to [out=0,in=90] (1,0); \draw (1.7, 0.3) node {$(u, U)$}; \draw (0.7, -0.5) node {$1$}; \end{tikzpicture}& \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) [<-] to [out=90,in=180] (0.5, 0.5) to [out=0,in=90] (1,0); \draw (1.7, 0.3) node {$(u, U)$}; \draw (0.7, -0.5) node {$-u^{-2}$}; \end{tikzpicture}& \begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) to [out=90,in=180] (0.5, 0.5) [->] to [out=0,in=90] (1,0); \draw (1.7, 0.3) node {$(u, U)$}; \draw (0.7, -0.5) node {$u^{2}$}; \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) to [out=90,in=180] (0.5, 0.5) [->] to [out=0,in=90] (1,0); \draw (1.7, 0.3) node {$(u, U)$}; \draw (0.7, -0.5) node {$1$}; \end{tikzpicture}\\ \hline \hline \vspace{-2mm}&&&&\\ & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) to (1,1); \draw [dotted] (1,0) to (0,1); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) to (1,1); \draw (1,0) to (0,1); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) to (1,1); \draw [dotted] (1,0) to (0,1); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) to (1,1); \draw (1,0) to (0,1); \end{tikzpicture} \\ \hline \vspace{-2mm} &&&&\\ \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) [->] to (1,1); \draw (1,0) to (0.6,0.4); \draw (0.4,0.6) [->] to (0,1); \draw (1.6, 0) node {$(u, U)$}; \draw (-0.6, 0) node {$(v, V)$}; \end{tikzpicture} & $v^{1-U}u^{1-V}$ & $v^{-1-U}u^{1-V}$ & $v^{1-U}u^{-1-V}$ &$-v^{-1-U}u^{-1-V}$ \\ \hline \vspace{-2mm} &&&&\\ \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0.6, 0.6) [->] to (1,1); \draw (1,0) [->] to (0,1); \draw (0,0) to (0.4,0.4);
\draw (1.6, 0) node {$(u, U)$}; \draw (-0.6, 0) node {$(v, V)$}; \end{tikzpicture} & $v^{U-1}u^{V-1}$ & $v^{U+1}u^{V-1}$ & $v^{U-1}u^{V+1}$ &$-v^{U+1}u^{V+1}$ \\ \hline \hline \vspace{-2mm}&&&&\\ & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) to (0.5,0.5); \draw [dotted] (0.5, 0.5) to (1,1); \draw (0.5, 0.5) to (0, 1); \draw [dotted] (1,0) to (0.5,0.5); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) to (0.5,0.5); \draw (0.5, 0.5) to (1,1); \draw [dotted] (0.5, 0.5) to (0, 1); \draw (1,0) to (0.5,0.5); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) to (0.5,0.5); \draw [dotted] (0.5, 0.5) to (1,1); \draw [dotted] (0.5, 0.5) to (0, 1); \draw (1,0) to (0.5,0.5); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.65ex, thick] \draw [dotted] (0, 0) to (0.5,0.5); \draw (0.5, 0.5) to (1,1); \draw (0.5, 0.5) to (0, 1); \draw [dotted] (1,0) to (0.5,0.5); \end{tikzpicture} \\ \hline \vspace{-2mm} &&&&\\ \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0, 0) [->] to (1,1); \draw (1,0) to (0.6,0.4); \draw (0.4,0.6) [->] to (0,1); \draw (1.6, 0) node {$(u, U)$}; \draw (-0.6, 0) node {$(v, V)$}; \end{tikzpicture} & $0$ & $\displaystyle \frac{v^{4}-1}{v^{1+U}u^{1+V}}$ & $0$ &$0$ \\ \vspace{-2mm} &&&&\\ \hline \vspace{-2mm} &&&&\\ \begin{tikzpicture}[baseline=-0.65ex, thick] \draw (0.6, 0.6) [->] to (1,1); \draw (1,0) [->] to (0,1); \draw (0,0) to (0.4,0.4); \draw (1.6, 0) node {$(u, U)$}; \draw (-0.6, 0) node {$(v, V)$}; \end{tikzpicture} & $\displaystyle \frac{1-u^{4}}{v^{1-U}u^{1-V}}$ & $0$ & $0$ &$0$ \\ &&&&\\ \hline \end{tabular} \vspace{3mm} \caption{Boltzmann weights for critical points, half-twist symbols and two types of crossings from Viro's Table 3.} \label{viro1} \end{center} \end{table} Viro's functor satisfies the following relations, which will be used in Section 3. See \cite[5.2B]{MR2255851}. \begin{lemma}[Junction relations] \label{junction} For $t\in G$ satisfying $t^4\neq 1$, let $\displaystyle d(t)=\frac{1}{t^2-t^{-2}}$. When $u^4v^4\neq 1$ we have \begin{align*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0,1); \draw (1,-1) [<-] to (1,1); \draw (-0,-1.25) node {$\scriptstyle(u, U)$}; \draw (1, -1.25) node {$\scriptstyle(v, V)$}; \end{tikzpicture}\right)=d(uv) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [-<-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle(u, U)$}; \draw (1, -1.25) node {$\scriptstyle(v, V)$}; \draw (1.5, 0) node {${ \scriptstyle (uv, U+V-1)}$}; \draw (0, 1.25) node {$\scriptstyle(u, U)$}; \draw (1, 1.25) node {$\scriptstyle(v, V)$}; \end{tikzpicture}\right) -d(uv) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle(u, U)$}; \draw (1, -1.25) node {$\scriptstyle(v, V)$}; \draw (2, 0) node {${\scriptstyle (u^{-1}v^{-1}, -U-V-1)}$}; \draw (0, 1.25) node {$\scriptstyle(u, U)$}; \draw (1, 1.25) node {$\scriptstyle(v, V)$}; \end{tikzpicture}\right). 
\end{align*} When $u^4v^{-4}\neq 1$ we have \begin{align*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0,1); \draw (1,-1) [->] to (1,1); \draw (-0,-1.25) node {$\scriptstyle(u, U)$}; \draw (1, -1.25) node {$\scriptstyle(v, V)$}; \end{tikzpicture}\right)=d(uv^{-1}) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [-<-] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [->] to (1,1); \draw (0.5,-0.5) [-<-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle(u, U)$}; \draw (1, -1.25) node {$\scriptstyle(v, V)$}; \draw (1.7, 0) node {${\scriptstyle (uv^{-1}, U-V+1)}$}; \draw (0, 1.25) node {$\scriptstyle(u, U)$}; \draw (1, 1.25) node {$\scriptstyle(v, V)$}; \end{tikzpicture}\right) -d(uv^{-1}) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [-<-] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [->] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle(u, U)$}; \draw (1, -1.25) node {$\scriptstyle(v, V)$}; \draw (1.8, 0) node {${\scriptstyle (u^{-1}v, -U+V+1)}$}; \draw (0, 1.25) node {$\scriptstyle(u, U)$}; \draw (1, 1.25) node {$\scriptstyle(v, V)$}; \end{tikzpicture}\right). \end{align*} \end{lemma} \medskip \section{Properties of the Alexander polynomial under Kirby moves} \subsection{Cohomology classes} We review a characterization of cohomology classes given in \cite[Sect. 2.3]{MR3286896}. Let $M$ be a $3$-manifold and let $T$ be a framed graph in $M$. Suppose $L$ is an oriented framed link in $S^3$ which is a surgery presentation for $M$. Since $T$ is disjoint from $L$, we also view $T$ as a graph in $S^3$ before the surgery. We now consider diagrams of $L$ and $T$, still denoted by $L$ and $T$. Let $e_1, e_2, \cdots, e_r$ be the components of $L$, and $e_{r+1}, e_{r+2}, \cdots, e_{r+s}$ be the oriented edges of $T$. For two different components $e_i$ and $e_j$ of $L$ ($1 \leq i, j\leq r$), let $lk_{ij}=lk(e_i, e_j)$ denote the linking number of $e_i$ and $e_j$; namely, it is half of the sum of the signs of all the crossings between $e_i$ and $e_j$. Let $lk_{ii}=lk(e_i, e_i)$ ($1 \leq i \leq r$) be the framing of $e_i$; namely, it is the sum of the signs of the self-crossings of $e_i$ (since we use the blackboard framing). It is well-known that $lk_{ij}$ does not depend on the diagram we choose. The matrix $(lk_{ij})_{1\leq i, j \leq r}$ is called the {\it linking matrix} of $L$. For a component $e_i$ of $L$ and an edge $e_j$ of $T$, we define the linking number $lk_{ij}=lk(e_i, e_j)$ to be the number of all the crossings of type \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.5] \draw (0, 0) [->] to (1,1); \draw (1,0) to (0.6,0.4); \draw (0.4,0.6) [->] to (0,1); \draw (1.5, 0) node {$e_i$}; \draw (-0.5, 0) node {$e_j$}; \end{tikzpicture} minus the number of crossings of type \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.5] \draw (0.6, 0.6) [->] to (1,1); \draw (1,0) [->] to (0,1); \draw (0,0) to (0.4,0.4); \draw (1.5, 0) node {$e_j$}; \draw (-0.5, 0) node {$e_i$}; \end{tikzpicture} between $e_i$ and $e_j$. Note that this number depends on the diagrams of $L$ and $T$.
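Since several arguments below reduce to bookkeeping with these linking numbers, the following sketch (ours, purely illustrative) computes a linking matrix from a list of signed crossings of a link diagram:
\begin{verbatim}
from collections import defaultdict

def linking_matrix(crossings, framings):
    # crossings: list of triples (i, j, sign), one per crossing
    # between distinct components i and j of a link diagram.
    # framings: dict sending each component i to its blackboard
    # framing lk_ii, i.e. the signed count of its self-crossings.
    lk = defaultdict(int)
    for i, j, sign in crossings:
        a, b = min(i, j), max(i, j)
        lk[(a, b)] += sign
    # lk_ij is half of the signed crossing count between e_i and e_j
    result = {key: val // 2 for key, val in lk.items()}
    result.update({(i, i): f for i, f in framings.items()})
    return result

# Hopf link with two positive crossings: lk_12 = (+1 + 1)/2 = 1.
print(linking_matrix([(1, 2, +1), (1, 2, +1)], {1: 0, 2: 0}))
# {(1, 2): 1, (1, 1): 0, (2, 2): 0}
\end{verbatim}
The diagram-dependent linking number between a component of $L$ and an edge of $T$ defined above would be tracked similarly, except that it counts one crossing type minus the other without the division by two.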
Let $M\backslash T$ be the complement of $T$ in $M$. The first homology group $H_{1}(M\backslash T, \mathbb{Z})$ has a presentation $$H_1(M\backslash T, \mathbb{Z})=\left <\begin{array}{l|l} & \forall 1\leq i \leq r,\sum_{j=1}^{r+s} lk_{ij}[m_j]=0;\\ \{ [m_i]\}_{1\leq i\leq r+s} & \forall v:\text{vertex of $T$}, r_v=0;\\ &\forall 1\leq i, j \leq r+s, [m_i]+[m_j]=[m_j]+[m_i] \end{array} \right >, $$ where $m_i$ is the oriented meridian of $e_i$, and for a vertex $v$ of $T$, $r_v$ is the sum of the meridians of the edges entering $v$ minus the sum of the meridians of the edges outgoing from $v$. Note that for each $1\leq i \leq r$, $\sum_{j=r+1}^{r+s}lk_{ij}[m_j]$ does not depend on the choice of diagrams of $L$ and $T$ and is a well-defined value. Let $G$ be an abelian group. Then a cohomology class $$\omega\in H^1(M\backslash T, G)\cong \mathrm{Hom} (H_1(M\backslash T, \mathbb{Z}), G)$$ is uniquely determined by the images of the classes $[m_i]$ under $\omega$. \subsection{Kirby calculus} We review basic facts about Kirby calculus, which can be found, for instance, in \cite[5.1]{MR3286896}. Kirby \cite{MR467753} showed that any $3$-manifold can be obtained by doing surgeries along a framed link in $S^3$; such a link is called a surgery presentation of the given $3$-manifold. There are two types of moves connecting surgery presentations: blow-up/down moves and the handle-slide move. See Fig.~\ref{f3} and Fig.~\ref{f4}. \begin{figure} \begin{align*} \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.9); \draw (0,1.1) to (0,1.4); \draw (-0.3,1) [<-]to (0.3,1); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.8,0.5); \draw (0.3,0.5) to (0.5,0.7); \draw (0.8,1) to (0.6,0.8); \draw (0,-0.5) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.2,-0.8); \draw (0.3,-0.7) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \end{tikzpicture} \quad \leftrightarrow \quad \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \end{tikzpicture}\quad\quad\quad \text{or}\quad\quad\quad \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.4); \draw (0,0.6) to (0,1.4); \draw (-0.3,0.5) [->]to (0.3,0.5); \draw (-0.3,1) to (-0.1,1); \draw (0.1,1) to (0.3,1); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.5,0.8); \draw (0.6,0.7) to (0.8,0.5); \draw (0.3,0.5) to (0.8,1); \draw (0,-0.5) to (0.2,-0.7); \draw (0.3,-0.8) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \end{tikzpicture}\quad \leftrightarrow \quad \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \end{tikzpicture} \end{align*} \caption{Blow-up/down moves.} \label{f3} \end{figure} \begin{figure} \begin{eqnarray*} \begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.8] \draw (0.5,-1.2) to (0.5,1.4); \draw (0.5,-1.5) node {$e_i$}; \draw [dotted](2.2,-0.8) arc (-90:90:1); \draw (2.2,-0.8) arc (-90:-270:1); \draw (2.2,-1.2) node {$e_j$}; \end{tikzpicture}\quad\quad \leftrightarrow \quad\quad \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw (0.3, -0.8) arc (-90:-270:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0.3, -1.1) to [out=180,in=270] (-0.5, -0.5) to [out=180,in=0] (-1, -0.5) to [out=270,in=90] (-1, -1.5); \draw (-1, 1) to (-1, 0) to (-0.5, 0) to [out=90,in=180] (0.3, 0.7); \draw (-1.2, -1.25) node {$e_i$}; \draw
(0.2, 0) node {$e_j$}; \end{tikzpicture} \end{eqnarray*} \caption{Handle-slide move of $e_i$ along $e_j$.} \label{f4} \end{figure} \begin{theo}[Theorem 5.2 in \cite{MR3286896}] \label{kirby} Let $M_1$ and $M_2$ be two $3$-manifolds and $T_1\subset M_1$ and $T_2\subset M_2$ be embedded framed graphs. Let $f:M_1\to M_2$ be an orientation preserving diffeomorphism such that $f(T_1)=T_2$. Let $L_i\subset S^3$ be a surgery presentation of $M_i$ which is disjoint from $T_i$ for $i=1, 2$. Then $f$ is isotopic to the diffeomorphism induced by a finite sequence of moves: $$L_1=L^0\xrightarrow{k_1} L^1 \xrightarrow{k_2}\cdots \xrightarrow{k_r}L^{r}=L_2,$$ where each $k_i$ ($1\leq i \leq r$) is one of the following moves. \begin{enumerate} \item handle-slide move of a component/edge of $L^{i-1}\cup T_1$ along a component of $L^{i-1}$; \item blow-up/down move along a component/edge of $L^{i-1}\cup T_1$, where the circle component which appears or disappears during this move must be a component of the surgery presentation. \end{enumerate} \end{theo} In general, the composition of a handle-slide move and a blow-up move can be realized by the composition of a blow-up move and several handle-slide moves (possibly one), and the composition of a blow-down move and a handle-slide move can be realized by the composition of several handle-slide moves (possibly one) and a blow-down move. A blow-up move after a blow-down move can be done before the blow-down move without changing the result. Therefore, for the sequence of moves connecting $L_1\cup T_1$ and $L_2 \cup T_2$, we can assume that all the blow-up moves are at the beginning and all the blow-down moves are at the end. \subsection{Lemmas} In \cite[7.5]{MR2255851}, Viro discussed how the Alexander polynomial changes when the weights change. As a special case, we have the following lemma. For a colored framed graph $\Lambda \subset S^3$ and a knot component $K\subset \Lambda$, we define the {\it colored linking number} of $K$ with $\Lambda$ as $$clk(K, \Lambda):=\prod_{\text{$e$: edge of $\Lambda$}}t_e^{lk(K, e)},$$ where $lk(K, e)$ is the linking number defined in Section 3.1 and $t_e$ is the multiplicity of $e$. Due to the admissibility condition \eqref{multi} for multiplicities, $clk(K, \Lambda)$ is well-defined. \begin{lemma} \label{zero} Let $\Lambda$ be a colored framed graph in $S^3$, and let $K$ be a knot component of $\Lambda$ with $clk(K, \Lambda)=1$. Then $\Delta (\Lambda)$ does not depend on the choice of the weight on $K$. \end{lemma} \begin{proof} We consider a graph diagram of $\Lambda$, still denoted by $\Lambda$. Suppose $c$ is the coloring of $\Lambda$ and $c(K)=(t, N)$. Let $c'$ be a coloring which is the same as $c$ except that on the component $K$ we have $c'(K)=(t, N+J)$. A straightforward comparison of Boltzmann weights shows the following. At a positive crossing, if the weight on one of the two strands changes from $N$ to $N+J$, the Boltzmann weight changes by a factor of $s^{-J}$, where $s$ is the multiplicity of the other strand at that crossing; at a negative crossing, the factor is $s^{J}$. Therefore \begin{eqnarray*} \Delta (\Lambda; c')&=&\Delta (\Lambda; c)\prod_{\text{$e$: edge of $\Lambda$}} t_e^{-2lk(K, e)J}=\Delta (\Lambda; c)\left (\prod_{\text{$e$: edge of $\Lambda$}} t_e^{2lk(K, e)}\right )^{-J}\\&=&\Delta (\Lambda; c)1^{-J}=\Delta (\Lambda; c), \end{eqnarray*} where $t_e$ is the multiplicity of the edge $e$.
\end{proof} \begin{defn} \rm For $(t, N)\in G\backslash \{g\in G \mid g^4=1\}\times \mathbb{Z}$, if one component of a link has {\it Kirby color} $\Omega (t, N)$, the Alexander polynomial is calculated as follows: \begin{align*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-0.5) [<-] to (0.5,1.2); \draw (0.5,-0.8) node {$\Omega(t, N)$}; \end{tikzpicture}\right):=d(t)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-0.5) [<-] to (0.5,1.2); \draw (0.5,-0.8) node {$(t, N)$}; \end{tikzpicture}\right)-d(t)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-0.5) [->] to (0.5,1.2); \draw (0.5,-0.8) node {$(t^{-1}, 2-N)$}; \end{tikzpicture}\right), \end{align*} where $d(t)=\frac{1}{t^2-t^{-2}}$. For a strand with Kirby color $\Omega (t, N)$, its {\it multiplicity} is defined to be $t$. \end{defn} It is easy to see that if a knot $K\subset S^3$ has Kirby color $\Omega (t, N)$, we have $$\Delta (K; \Omega (t, N))=\Delta (-K; \Omega (t^{-1}, 2-N)),$$ where $-K$ is the same knot $K$ with opposite orientation. Now, we discuss how the Alexander polynomial changes under blow-up/down moves when the circle component has a Kirby color. \begin{lemma} \label{up-down} \begin{align*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.9); \draw (0,1.1) to (0,1.4); \draw (-0.3,1) [<-]to (0.3,1); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.8,0.5); \draw (0.3,0.5) to (0.5,0.7); \draw (0.8,1) to (0.6,0.8); \draw (0,-0.5) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.2,-0.8); \draw (0.3,-0.7) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-0.8,0.2) node {$\Omega(t, J)$}; \end{tikzpicture}\right)=2\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \draw (0.5,-1.25) node {$(t, N)$}; \end{tikzpicture}\right),\quad \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.4); \draw (0,0.6) to (0,1.4); \draw (-0.3,0.5) [->]to (0.3,0.5); \draw (-0.3,1) to (-0.1,1); \draw (0.1,1) to (0.3,1); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.5,0.8); \draw (0.6,0.7) to (0.8,0.5); \draw (0.3,0.5) to (0.8,1); \draw (0,-0.5) to (0.2,-0.7); \draw (0.3,-0.8) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-0.8,0.2) node {$\Omega(t, J)$}; \end{tikzpicture}\right)=-2\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \draw (0.6,-1.25) node {$(t, N)$}; \end{tikzpicture}\right). \end{align*} \end{lemma} \begin{proof} The equalities follow from the definition of Kirby color and the following facts. 
\begin{align*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.9); \draw (0,1.1) to (0,1.4); \draw (-0.3,1) [<-]to (0.3,1); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.8,0.5); \draw (0.3,0.5) to (0.5,0.7); \draw (0.8,1) to (0.6,0.8); \draw (0,-0.5) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.2,-0.8); \draw (0.3,-0.7) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-0.5,0.2) node {$t$}; \end{tikzpicture}\right)=-\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.9); \draw (0,1.1) to (0,1.4); \draw (-0.3,1) [->]to (0.3,1); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.8,0.5); \draw (0.3,0.5) to (0.5,0.7); \draw (0.8,1) to (0.6,0.8); \draw (0,-0.5) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.2,-0.8); \draw (0.3,-0.7) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-0.5,0.2) node {$t^{-1}$}; \end{tikzpicture}\right)=(t^{2}-t^{-2}) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \draw (0.6,-1.25) node {$(t, N)$}; \end{tikzpicture}\right),\\ -\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.4); \draw (0,0.6) to (0,1.4); \draw (-0.3,0.5) [->]to (0.3,0.5); \draw (-0.3,1) to (-0.1,1); \draw (0.1,1) to (0.3,1); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.5,0.8); \draw (0.6,0.7) to (0.8,0.5); \draw (0.3,0.5) to (0.8,1); \draw (0,-0.5) to (0.2,-0.7); \draw (0.3,-0.8) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-0.5,0.2) node {$t$}; \end{tikzpicture}\right)=\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.4); \draw (0,0.6) to (0,1.4); \draw (-0.3,0.5) [<-]to (0.3,0.5); \draw (-0.3,1) to (-0.1,1); \draw (0.1,1) to (0.3,1); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.5,0.8); \draw (0.6,0.7) to (0.8,0.5); \draw (0.3,0.5) to (0.8,1); \draw (0,-0.5) to (0.2,-0.7); \draw (0.3,-0.8) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-0.5,0.2) node {$t^{-1}$}; \end{tikzpicture}\right)=(t^{2}-t^{-2}) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \draw (0.5,-1.25) node {$(t, N)$}; \end{tikzpicture}\right). \end{align*} For the component where the weight is hidden, we can choose any integer as its weight. We prove one relation, and the other three can be proved in the same vein. From Table 1, we see that the contribution of a negative full-twist is $t^{2N}$ if the corresponding edge has color $(t, N)$. 
We have \begin{align*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.5) to (0,0.9); \draw (0,1.1) to (0,1.4); \draw (-0.3,1) [->]to (0.3,1); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) to (0.8,0.5); \draw (0.3,0.5) to (0.5,0.7); \draw (0.8,1) to (0.6,0.8); \draw (0,-0.5) to (0.5,-1); \draw (0.8,0.5) arc (-90:90:0.25); \draw (0,-1) to (0.2,-0.8); \draw (0.3,-0.7) to (0.5,-0.5); \draw (0.5,-1) arc (-90:90:0.25); \draw (0,-1) [->] to (0,-1.5); \draw (0.6,-1.25) node {$(t, N)$}; \draw (-1,0.2) node {$(t^{-1}, N')$}; \end{tikzpicture}\right)=t^{2(N-N')}\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,1.1) to (0,1.4); \draw (-0.3,1) [->]to (0.3,1); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-0.3,1) arc (90:270:0.25); \draw (0.3,1) arc (90:-90:0.25); \draw (0,0.9) [->] to (0,-1); \draw (0.6,-0.8) node {$(t, N)$}; \draw (-0.8,0.2) node {$(t^{-1}, N')$}; \end{tikzpicture}\right)=t^{2(N-N')}\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.3,1) [->-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,-0.2) [->] to (0.3,-1); \draw (0.3,1) to (0.3,1.2); \draw (0.9,-0.8) node {$(t, N)$}; \draw (-0.6,-0.5) node {$(t^{-1}, N')$}; \end{tikzpicture}\right)\\ =-t^{2(N-N')}t^{2(N'-N)}(t^2-t^{-2})\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \draw (0.6,-1.25) node {$(t, N)$}; \end{tikzpicture}\right)=-(t^2-t^{-2})\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0.5,-1) [<-] to (0.5,1.2); \draw (0.6,-1.25) node {$(t, N)$}; \end{tikzpicture}\right). \end{align*} The third equality follows from a calculation like the one in Example~\ref{example} (with all edge orientations reversed). \end{proof} Next, we study how the Alexander polynomial changes under a handle-slide move. We have the following lemma. \begin{lemma} \label{handle-slide} Suppose $\Lambda$ is a colored framed graph, and $K$ is a knot component of $\Lambda$ with Kirby color $\Omega (s, S)$. Let $e$ be an oriented edge of $\Lambda\backslash K$ with color $(t, N)$. Let $\Lambda'$ be the graph obtained from $\Lambda$ by a handle-slide move of $e$ along $K$, where the new Kirby color of $K$ is $\Omega (ts, N+S-1)$. If $clk(K, \Lambda)=1$ and $(ts)^4\neq 1$, we have $\Delta (\Lambda)=\Delta (\Lambda')$.
\end{lemma} \begin{proof} We have \begin{eqnarray*} &&\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.8] \draw (0.5,-1.2) [<-] to (0.5,1.4); \draw (0.5,-1.5) node {$\scriptstyle (t, N)$}; \draw (0.2,1.2) node {$\scriptstyle e$}; \draw [dotted](2.2,-0.8) arc (-90:90:1); \draw (2.2,-0.8) [<-]arc (-90:-270:1); \draw (2.2,-1.2) node {$\scriptstyle \Omega(s, S)$}; \draw (1.5,1.2) node {$\scriptstyle K$}; \end{tikzpicture}\right)=d(s)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.8] \draw (0.5,-1.2) [<-] to (0.5,1.4); \draw (0.5,-1.5) node {$\scriptstyle (t, N)$}; \draw [dotted](2.2,-0.8) arc (-90:90:1); \draw (2.2,-0.8) [<-]arc (-90:-270:1); \draw (2.2,-1.2) node {$\scriptstyle (s, S)$}; \end{tikzpicture}\right)-d(s)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=0.8] \draw (0.5,-1.2) [<-] to (0.5,1.4); \draw (0.5,-1.5) node {$\scriptstyle (t, N)$}; \draw [dotted](2.2,-0.8) arc (-90:90:1); \draw (2.2,-0.8) [->]arc (-90:-270:1); \draw (2.2,-1.2) node {$\scriptstyle (s^{-1}, 2-S)$}; \end{tikzpicture}\right)\\ &=&d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [-<-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.5, 0) node {${ \scriptstyle (ts, N+S-1)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1, 1.25) node {$\scriptstyle (s, S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)-d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.8, 0) node {${ \scriptstyle ((ts)^{-1}, -N-S-1)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1, 1.25) node {$\scriptstyle (s, S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)\\ &&-d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [-<-] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [->] to (1,1); \draw (0.5,-0.5) [-<-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.5, 0) node {${ \scriptstyle (ts, N+S-1)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.3, 1.25) node {$\scriptstyle (s^{-1}, 2-S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)+d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [-<-] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [->] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.7, 0) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.3, 1.25) node {$\scriptstyle (s^{-1}, 2-S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)\\ &=&d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [-<-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, 
N)$}; \draw (1.5, 0) node {${ \scriptstyle (ts, N+S-1)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1, 1.25) node {$\scriptstyle (s, S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)-d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.7, 0) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.2, 1.2) node {$\scriptstyle (s, S-4)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)\\ &&-d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [-<-] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [->] to (1,1); \draw (0.5,-0.5) [-<-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.5, 0) node {${ \scriptstyle (ts, N+S-1)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.3, 1.25) node {$\scriptstyle (s^{-1}, 2-S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)+d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [-<-] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [->] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.7, 0) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.2, 1.25) node {$\scriptstyle (s^{-1}, 2-S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right)\\ &=&d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [->-]to (0,-0.5); \draw (0,0) [-<-]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.35, -0.2) node {${\scriptstyle (s, S)}$}; \draw (1, -1.3) node {${ \scriptstyle (ts, N+S-1)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)-d(s)d(ts) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [->-]to (0,-0.5); \draw (0,0) [->]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.55, -0.2) node {${\scriptstyle (s, S-4)}$}; \draw (1, -1.3) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)\\ &&-d(s)d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [-<-]to 
(0,-0.5); \draw (0,0) [-<-]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.8, -0.2) node {${\scriptstyle (s^{-1}, 2-S)}$}; \draw (1, -1.3) node {${ \scriptstyle (ts, N+S-1)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)+d(s)d(ts) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [-<-]to (0,-0.5); \draw (0,0) [->]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.8, -0.2) node {${\scriptstyle (s^{-1}, 2-S)}$}; \draw (1, -1.3) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)\\ &=&d(ts)\left [d(s) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [->-]to (0,-0.5); \draw (0,0) [-<-]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.35, -0.2) node {${\scriptstyle (s, S)}$}; \draw (1, -1.3) node {${ \scriptstyle (ts, N+S-1)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)-d(s)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [-<-]to (0,-0.5); \draw (0,0) [-<-]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.8, -0.2) node {${\scriptstyle (s^{-1}, 2-S)}$}; \draw (1, -1.3) node {${ \scriptstyle (ts, N+S-1)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right) \right]\\ &&-d(ts) \left[d(s) \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [->-]to (0,-0.5); \draw (0,0) [->]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, 
N)}$}; \draw (0.55, -0.2) node {${\scriptstyle (s, S-4)}$}; \draw (1, -1.3) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)-d(s)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,0) [-<-]to (0,-0.5); \draw (0,0) [->]to (0.3, 0.3); \draw (0,0) arc (0:180:0.3); \draw (0,-0.5) to (0.3, -0.8); \draw (0,-0.5) arc (90:270:0.3); \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw (0, -1.1) to (0.3, -1.1); \draw (-0.6,0) to (-0.6, -0.5); \draw (-0.6, -0.5)[->] to (-1, -1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (0.75, -0.2) node {${\scriptstyle (s^{-1}, 2-S)}$}; \draw (1, -1.3) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right) \right]\\ &=&d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw [->] (0, -1.1) to [out=180,in=270] (-0.3,-0.2) to [out=90,in=0] (-0.7,0.3) to [out=180,in=90] (-1.1, -0.2) to [out=270,in=45] (-1.4, -1.1); \draw [-<-](0.3, -0.8) to [out=180, in=270] (0, -0.25) to [out=90, in=180] (0.3, 0.3); \draw (0, -1.1) to (0.3, -1.1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (1, -1.3) node {${ \scriptstyle (ts, N+S-1)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)-d(ts)\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw [->] (0, -1.1) to [out=180,in=270] (-0.3,-0.2) to [out=90,in=0] (-0.7,0.3) to [out=180,in=90] (-1.1, -0.2) to [out=270,in=45] (-1.4, -1.1); \draw [->-] (0.3, -0.8) to [out=180, in=270] (0, -0.25) to [out=90, in=180] (0.3, 0.3); \draw (0, -1.1) to (0.3, -1.1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-1, -1.25) node {${\scriptstyle (t, N)}$}; \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (1, -1.3) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw [thin](1.2, -1.1) to (0.8, -0.6); \end{tikzpicture}\right)\\ &=&\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw [dotted] (0.3, -0.8) arc (-90:90:0.55); \draw [dotted] (0.3, -1.1) arc (-90:90:0.9); \draw [->] (0, -1.1) to (-1.4, -1.1); \draw [-<-](0.3, -0.8) to [out=180, in=270] (0, -0.25) to [out=90, in=180] (0.3, 0.3); \draw (0, -1.1) to (0.3, -1.1); \draw (0.3,0.7) [<-]to (-0.6, 0.7); \draw (-0.6, 0.7) to (-0.9, 1); \draw (-0.3, 0.9) node {${\scriptstyle (t, N)}$}; \draw (-1, -0.1) node {${ \scriptstyle \Omega(ts, N+S-1)}$}; \end{tikzpicture}\right). \end{eqnarray*} The second equality follows from Lemma \ref{junction}, the junction relations. 
To obtain the third equality, we need the following fact: \begin{eqnarray*} \left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.8, 0) node {${ \scriptstyle ((ts)^{-1}, -N-S-1)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.1, 1.25) node {$\scriptstyle (s, S)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right) =\left(\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-1) [<-] to (0.5,-0.5); \draw (0.5, -0.5) [->] to (1,-1); \draw (0.5, 0.5) [-<-] to (0,1); \draw (0.5,0.5) [-<-] to (1,1); \draw (0.5,-0.5) [->-] to (0.5,0.5); \draw (0, -1.25) node {$\scriptstyle (t, N)$}; \draw (1.7, 0) node {${ \scriptstyle ((ts)^{-1}, 3-N-S)}$}; \draw (0, 1.25) node {$\scriptstyle (t, N)$}; \draw (1.1, 1.25) node {$\scriptstyle (s, S-4)$}; \draw (1,1) to (1.5,1); \draw (1,-1) to (1.5,-1); \draw [dotted](1.5,-1) arc (-90:90:1); \end{tikzpicture}\right), \end{eqnarray*} which holds because of the assumption that $clk(K, \Lambda)=1$. The edge colored by $(s, S)$ was the main part of $K$ before the junction, so it has the same crossings with $\Lambda$ as $K$. Following the proof of Lemma \ref{zero}, we see that the Alexander polynomial is invariant if we change the color $(s, S)$ to $(s, S-4)$. By the admissibility condition, the color $((ts)^{-1}, -N-S-1)$ should then be changed to $((ts)^{-1}, 3-N-S)$. Note that the Boltzmann weights around a vertex do not depend on the weights (the second entries of the colors). The fourth equality holds because each diagram after it is obtained from the previous diagram by sliding the edge with color $(t, N)$ along the edge coming from $K$, which is an isotopy of the graph. The sixth equality follows again from the junction relations. \end{proof} \medskip \section{Invariant for 3-manifolds} In this section, we consider a $1$-palette for which $G$ is a finitely generated abelian group containing at least one $\mathbb{Z}$ summand and satisfying $t^4=1 \iff t=1$. Namely, $G$ contains $\mathbb{Z}$ but no $\mathbb{Z}/2\mathbb{Z}$ as a subgroup. It is not hard to find such $1$-palettes, as we can see in the following examples. \begin{ex} \label{palette} The $1$-palettes defined by the following data meet our requirements. \begin{enumerate} \item Let $B=\mathbb{Q}(t)$, the field of rational functions of $t$, and let $G=\mathbb{Z}\langle t \rangle$, the cyclic group generated by $t$. \item Let $\xi_l$ be a primitive $l$-th root of unity for a prime number $l\geq 3$. Let $B=\mathbb{Q}(\pi, \xi_l)$, the extension field of $\mathbb{Q}$ generated by $\pi$ and $\xi_l$. Let $G=\mathbb{Z}\langle \pi, \xi_l \rangle$, the abelian group generated by $\pi$ and $\xi_l$. \end{enumerate} \end{ex} Let $M$ be a 3-manifold, and let $\Gamma$ be a colored framed graph in $M$ colored by a $1$-palette $(B, G)$ where $G$ contains $\mathbb{Z}$ but no $\mathbb{Z}/2\mathbb{Z}$ as a subgroup. Consider a cohomology class $\omega: H_{1}(M\backslash \Gamma, \mathbb{Z})\to G$. We say that $(M, \Gamma, \omega)$ is a {\it compatible triple} if for each edge $e$ of $\Gamma$, the multiplicity of $e$ is equal to $\omega([m_e])$, where $m_e$ is the meridian of $e$. Let $L$ be a surgery presentation of $M$.
We say that $L$ is {\it computable} for a compatible triple $(M, \Gamma, \omega)$ if $L\cup \Gamma\neq\emptyset$ and $\omega ([m])\neq 1\in G$ for any meridian $m$ of $L$. We first show the existence of computable surgery presentations. \begin{lemma} For a compatible triple $(M, \Gamma, \omega)$ over the $1$-palette $(B, G)$ where $G$ contains $\mathbb{Z}$ but no $\mathbb{Z}/2\mathbb{Z}$ as a subgroup, if $\omega$ is non-trivial, then there exists a computable surgery presentation of $(M, \Gamma, \omega)$. \end{lemma} \begin{proof} Choose a surgery presentation of $M$, which we call $L$. Recall that $H_{1}(M\backslash \Gamma, \mathbb{Z})$ is generated by the meridians of $L\cup \Gamma$. Since $\omega$ is non-trivial, there is an edge or component $e$ of $L\cup \Gamma$ for which $\omega ([m_e])\neq 1$, where $m_e$ is the meridian of $e$. We do a blow-up move on the edge $e$ to create a component $K$. Then $\omega([m_K])=\omega ([m_e])\neq 1$, and $K\cup L$ is a new surgery presentation of $M$, where $m_K$ is the meridian of $K$. If $K\cup L$ is not computable, we can slide $K$ along those components of $L$ whose meridians are mapped to $1$ under $\omega$ to get a computable surgery presentation. Precisely, if $L_0$ is a component of $L$ with $\omega ([m_{L_0}])=1$, we slide $K$ along $L_0$ to get a new surgery presentation. For this new presentation, $\omega ([m_{L_0}])=\omega([m_K])\neq 1$, where $m_{L_0}$ is the meridian of $L_0$. \end{proof} Now we are ready to prove our main result. The proof is essentially the same as the proof of Theorem 4.7 in \cite{MR3286896}. Since our situation is concrete, we state the proof for the completeness of the paper. \begin{proof}[Proof of Theorem \ref{mainresult1}] Let $L$ and $L'$ be two computable surgery presentations of $(M, \Gamma, \omega)$. By Theorem \ref{kirby}, there is a sequence of handle-slide and blow-up/blow-down moves connecting $L\cup \Gamma$ and $L'\cup \Gamma$ such that the induced diffeomorphism $f: M\to M$ satisfies $f(\Gamma)=\Gamma$ and $f^{*}(\omega)=\omega$. We want to show that $\frac{\Delta (L\cup \Gamma)}{2^{r(L)}(-1)^{\sigma_+(L)}}=\frac{\Delta (L'\cup \Gamma)}{2^{r(L')}(-1)^{\sigma_+(L')}}$. We can assume that all the blow-up moves are at the beginning and all the blow-down moves are at the end. Namely, it is a sequence as follows: $$L=L_0\to L_1\to \cdots \to L_k \to L_{k+1}\to \cdots \to L_{k+l}\to L_{k+l+1}\to \cdots \to L_{k+l+m}=L',$$ where $L_0$ and $L_{k}$ are connected by blow-up moves, $L_{k+l}$ and $L_{k+l+m}$ are connected by blow-down moves, while $L_{k}$ and $L_{k+l}$ are connected by handle-slide moves. If $k\geq 1$, $L_1\cup \Gamma$ is obtained from $L_0\cup \Gamma$ by a blow-up move on a component/edge of $L_0\cup \Gamma$, and it is still computable since the circle component created by the blow-up move has the same multiplicity as that of the edge where the move is done. By Lemma~\ref{up-down}, we have $\frac{\Delta (L_0\cup \Gamma)}{2^{r(L_0)}(-1)^{\sigma_+(L_0)}}=\frac{\Delta (L_1\cup \Gamma)}{2^{r(L_1)}(-1)^{\sigma_+(L_1)}}$, since the change in the numerator is cancelled by the change in the denominator. Applying this repeatedly, one shows that $\frac{\Delta (L_0\cup \Gamma)}{2^{r(L_0)}(-1)^{\sigma_+(L_0)}}=\frac{\Delta (L_k\cup \Gamma)}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}$. Since $L'\cup \Gamma$ is obtained from $L_{k+l}\cup \Gamma$ by blow-down moves, conversely $L_{k+l}\cup \Gamma$ is obtained from $L'\cup \Gamma$ by blow-up moves.
As discussed above, in this case we have $\frac{\Delta (L'\cup \Gamma)}{2^{r(L')}(-1)^{\sigma_+(L')}}=\frac{\Delta (L_{k+l}\cup \Gamma)}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$. Now it suffices to show that $\frac{\Delta (L_k\cup \Gamma)}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+l}\cup \Gamma)}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$, where $L_k\cup \Gamma$ and $L_{k+l}\cup \Gamma$ are computable and are connected by handle-slide moves. More precisely, let \begin{equation*} \label{sequence} L_k \xrightarrow{s_1} L_{k+1}\xrightarrow{s_2} \cdots \xrightarrow{s_l} L_{k+l} \tag{H1} \end{equation*} be such a sequence of handle-slide moves. Suppose $L_{k+1}$, which is obtained from $L_k$ by a handle-slide move of $e\subset L_k\cup \Gamma$ along a component $K\subset L_k$, is computable. Note that \begin{eqnarray*} \displaystyle clk(K, L_k\cup \Gamma)&=&\prod_{\text{$e$: edge of $L_k\cup \Gamma$}} t_e^{lk(K, e)}=\prod_{\text{$e$: edge of $L_k\cup \Gamma$}} \omega ([m_e])^{lk(K, e)}\\&=&\omega\left (\sum_{\text{$e$: edge of $L_k\cup \Gamma$}} lk(K, e) [m_e]\right )=\omega (0)=1, \end{eqnarray*} where the fourth equality follows from the presentation of $H_1(M\backslash \Gamma, \mathbb{Z})$. By Lemma~\ref{handle-slide}, $\Delta (L_k\cup \Gamma)=\Delta (L_{k+1}\cup \Gamma)$, and thus $\frac{\Delta (L_k\cup \Gamma)}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+1}\cup \Gamma)}{2^{r(L_{k+1})}(-1)^{\sigma_+(L_{k+1})}}$, since the number of components and the eigenvalues of the linking matrix do not change under a handle-slide move. If the intermediate presentations between $L_k$ and $L_{k+l}$ are all computable, we have $\frac{\Delta (L_k\cup \Gamma)}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+l}\cup \Gamma)}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$. Now we consider the case that some surgery presentations between $L_k$ and $L_{k+l}$ are not computable, namely, some knot components have multiplicity $1$. We separate the discussion into the cases $\Gamma\neq \emptyset$ and $\Gamma= \emptyset$ (and thus $L_k\neq \emptyset$ since $L_k\cup \Gamma\neq \emptyset$). Suppose $\Gamma\neq \emptyset$. We choose an edge $e\subset \Gamma$ with color $(\beta, N)$ and let $\tilde{\Gamma}=\Gamma\cup \{m_e\}$, where $m_e$ is the meridian of $e$ with color $(\alpha, 0)$. The cohomology class $\omega: H_1(M\backslash \Gamma; \mathbb{Z})\to G$ uniquely determines an element $\tilde{\omega}: H_1(M\backslash \tilde{\Gamma}; \mathbb{Z})\to G$ by requiring that $\tilde{\omega}$ sends the meridian of $m_e$ to $\alpha$. We want to show that, by choosing $\alpha$ sufficiently generic, $L_k\cup \tilde{\Gamma}$ and $L_{k+l}\cup \tilde{\Gamma}$ can be connected by computable surgery presentations. Suppose $L_{k+1}$ is obtained from $L_k$ by a handle-slide move of a component/edge $e_1\subset L_k\cup \Gamma$ along a component $K_1$ of $L_k$. If $L_{k+1}$ is not computable, which means that the new $K_1$ in $L_{k+1}$ has multiplicity $\mathrm{mul}(e_1)\mathrm{mul}(K_1)=1\in G$, we do the following moves. We first slide $m_e$ along $K_1$, which changes the multiplicity of $K_1$ to $\alpha \mathrm{mul}(K_1)$, and then perform the handle-slide $s_1$, which further changes the multiplicity of $K_1$ to $\alpha\mathrm{mul}(e_1)\mathrm{mul}(K_1)$. We want to choose $\alpha$ so that both $\alpha \mathrm{mul}(K_1)$ and $\alpha\mathrm{mul}(e_1)\mathrm{mul}(K_1)$ are not $1$.
For the rest of the moves $s_2, \cdots, s_l$, each time a component with multiplicity $1$ is created, we either consider a slide of $m_e$ as above along the component or reselect $\alpha$. We need to add more conditions on $\alpha$ so that all the handle-slide moves create computable surgery presentations. The conditions can be summarized as follows. {\bf Condition 1}: There are a finite set $\{x_i\}_{i\in I}\subset G$ and a finite set $J\subset {\mathbb{Z}}$ which depend only on the sequence \eqref{sequence}. We want to find an $\alpha$ so that $\alpha^{n} x_i\neq 1$ for all $i\in I$ and $n\in J$. After the last handle-slide move, we get a computable presentation $L_{k+l}$ for $(M, \tilde{\Gamma}, \tilde{\omega})$. However, $m_e$ could be linked with $L_{k+l}$, and thus the multiplicities of $L_{k+l}$ might be different from the original multiplicities of $L_{k+l}$ in the sequence \eqref{sequence}. Since $m_e$ is isotopic to the meridian of the edge $e\subset \Gamma$, there is an isotopy of $m_e$ in $M$ which brings it back to the small meridian around $e$. This isotopy can be realized by a sequence of handle-slide moves \begin{equation*} \label{sequence2} L_{k+l}\cup \tilde{\Gamma} \xrightarrow{h_1} L_{k+l}\cup \tilde{\Gamma}\xrightarrow{h_2} \cdots \xrightarrow{h_p} L_{k+l}\cup \tilde{\Gamma}. \tag{H2} \end{equation*} The link $L_{k+l}$ after $h_p$ has the same multiplicities as the one in (\ref{sequence}), and we suppose the component $K_i$ of $L_{k+l}$ has multiplicity $\mathrm{mul}(K_i)$ after $h_p$. During each $h_j$, we slide $m_e$ along a component of $L_{k+l}$. So we see that the multiplicity of $K_i$ before the step $h_j$ has the form $\alpha^{m_{ij}}\mathrm{mul}(K_i)$, where $m_{ij}\in \mathbb{Z}$. In order to make all the surgery presentations in \eqref{sequence2} computable, we add the condition that $\alpha^{m_{ij}}\mathrm{mul}(K_i)\neq 1$ for all $i, j$. Note that $m_{ij}$ only depends on the sequence \eqref{sequence2}. This condition can be summarized as follows. {\bf Condition 2}: There are a finite set $\{y_i\}_{i\in I'}\subset G$ and a finite set $J'\subset {\mathbb{Z}}$ which depend only on the sequence \eqref{sequence2}. We want to find an $\alpha$ so that $\alpha^{m}y_i\neq 1$ for all $m\in J'$ and $i\in I'$. Since $G$ contains a $\mathbb{Z}$ summand, it is easy to see the existence of an $\alpha\in G$ satisfying Conditions 1 and 2. By Lemma \ref{handle-slide}, we have $\frac{\Delta (L_k\cup \tilde{\Gamma})}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+l}\cup \tilde{\Gamma})}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$. Let $\langle H\rangle=\alpha^{-2N}(\beta^2-\beta^{-2})$. By Example \ref{example}, we have $\Delta (L_k\cup \tilde{\Gamma})=\langle H\rangle \Delta (L_k\cup \Gamma)$ and $\Delta (L_{k+l}\cup \tilde{\Gamma})=\langle H\rangle \Delta (L_{k+l}\cup \Gamma)$. Therefore we finally have $\frac{\Delta (L_k\cup \Gamma)}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+l}\cup \Gamma)}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$. Now we consider the case $\Gamma=\emptyset$, which implies $L_k\neq \emptyset$ since $L_k\cup \Gamma\neq \emptyset$. Choose a component $K$ of $L_k$ with Kirby color $\Omega (\alpha, 1)$. We apply a positive and a negative blow-up of $K$ to create two new components $m_+$ and $m_{-}$, the Kirby colors of which are also $\Omega (\alpha, 1)$. The framing of $K$ is unchanged. Let $\tilde{\Gamma}=m_+\cup m_-$ and regard it as a graph in $M$.
Then $\omega: H_1(M, \mathbb{Z}) \to G$ determines a cohomology class $\tilde{\omega}: H_1(M\backslash \tilde{\Gamma}, \mathbb{Z}) \to G$ which sends the meridians of $m_+$ and $m_-$ to $\alpha$. Then $L_k$ is a computable presentation for $(M, \tilde{\Gamma}, \tilde{\omega})$. After performing the handle-slide moves in \eqref{sequence}, we get $L_{k+l}$, which is still a computable presentation for $(M, \tilde{\Gamma}, \tilde{\omega})$ since the handle-slide moves do not involve $m_+$ and $m_-$. Since $\tilde{\Gamma} \neq \emptyset$, as proved above we have $\frac{\Delta (L_k\cup \tilde{\Gamma})}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+l}\cup \tilde{\Gamma})}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$. On the other hand, by Lemma \ref{up-down}, we have $\Delta (L_k\cup \tilde{\Gamma})=-4\Delta (L_k)$. For $L_{k+l}$, $m_+$ and $m_-$ can be linked with several strands of $L_{k+l}$, as shown in the following figure. \begin{align*} \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (0,-0.4) to (0,0.9); \draw (0,-0.6) to (0,-1); \draw (0,1.1) to (0,1.4); \draw (-0.4,-0.4) to (-0.4,0.9); \draw (-0.4,-0.6) to (-0.4,-1); \draw (-0.4,1.1) to (-0.4,1.4); \draw (-1.3,-0.4) to (-1.3,0.9); \draw (-1.3,-0.6) to (-1.3,-1); \draw (-1.3,1.1) to (-1.3,1.4); \draw (-0.8, 0.8) node {$\cdots$}; \draw (-0.8, -0.2) node {$\cdots$}; \draw (-1.6,1) [<-]to (0.3,1); \draw (-1.6,0.5) to (-1.4,0.5); \draw (-1.2,0.5) to (-0.5,0.5); \draw (-0.3,0.5) to (-0.1,0.5); \draw (0.1,0.5) to (0.3,0.5); \draw (-1.6,1) arc (90:270:0.25); \draw (0.3,1) to (0.8,0.5); \draw (0.3,0.5) to (0.5,0.7); \draw (0.8,1) to (0.6,0.8); \draw (-1.6,-0.5) [->]to (0.3,-0.5); \draw (-1.6,0) to (-1.4,0); \draw (-1.2,0) to (-0.5,0); \draw (-0.3,0) to (-0.1,0); \draw (0.1,0) to (0.3,0); \draw (0.1,0) to (0.3,0); \draw (-1.6,0) arc (90:270:0.25); \draw (0.3,0) to (0.5,-0.2); \draw (0.6,-0.3) to (0.8,-0.5); \draw (0.3,-0.5) to (0.8,0); \draw (0.8,-0.5) arc (-90:90:0.25); \draw (0.8,0.5) arc (-90:90:0.25); \draw (1.5, -0.3) node {$m_{+}$}; \draw (1.5, 0.7) node {$m_{-}$}; \end{tikzpicture} \end{align*} To remove $m_+$ and $m_-$, we can first do several handle-slide moves along $m_+$ and $m_-$ to decrease the number of strands linked with $m_+$ and $m_-$, and then do one negative blow-down move and one positive blow-down move to remove $m_+$ and $m_-$. During this procedure, we can choose the strands properly so that at each step we get a computable presentation. By Lemmas \ref{up-down} and \ref{handle-slide}, we see that $\Delta (L_{k+l}\cup \tilde{\Gamma})=-4\Delta (L_{k+l})$ as well. As a result, we have $\frac{\Delta (L_k)}{2^{r(L_k)}(-1)^{\sigma_+(L_k)}}=\frac{\Delta (L_{k+l})}{2^{r(L_{k+l})}(-1)^{\sigma_+(L_{k+l})}}$. \end{proof} \section{Examples and calculations} \subsection{A general formula for a class of lens spaces} In this section, we compute $\Delta (L(mn-1, n), \omega):=\Delta (L(mn-1, n), \emptyset, \omega)$ for the lens space $L(mn-1, n)$ and $\Gamma=\emptyset$.
We use the surgery presentation $L=\begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.45,1) node {$\scriptstyle m$}; \draw (-0.3,1.1) to (-0.6,1.1); \draw (-0.3,0.89) to (-0.3,1.11); \draw (-0.3,0.90) to (-0.6,0.90); \draw (-0.6,0.89) to (-0.6,1.11); \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.25,-0.09); \draw (0.45,1) node {$\scriptstyle n$}; \draw (0.6,1.1) to (0.3,1.1); \draw (0.6,0.89) to (0.6,1.11); \draw (0.6,0.90) to (0.3,0.90); \draw (0.3,0.89) to (0.3,1.11); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.6,1) arc (90:315:0.6); \draw (0.6,1) arc (90:-140:0.6); \end{tikzpicture}$ for $L(mn-1, n)$, where an integer $m>0$ or $n>0$ inside a box represents the number of positive full twists. Consider a cohomology class \begin{align}\label{eq:Lens} \omega : H_1(M, \mathbb{Z})=\left < [m_1], [m_2] \middle| \left(\begin{matrix} m & -1 \\ -1 & n \end{matrix} \right) \left(\begin{matrix} [m_1] \\ [m_2] \end{matrix} \right) =0 \right > \to G, \end{align} where $m_1$ (resp. $m_2$) is the meridian of the left-hand (resp. right-hand) side component of $L$. Let $u=\omega([m_1])$ and $v=\omega([m_2])$. In the following calculations, a diagram inside round brackets represents the Alexander polynomial of that diagram. We have \begin{align*} &\Delta (L(mn-1, n), \omega)= \frac{\Delta (L)}{2^{r}(-1)^{\sigma_+ (L)}} \\ &= \frac{1}{2^{2}(-1)^{\sigma_+(L)}} \left( \begin{tikzpicture}[baseline=-0.65ex, thick, scale=1] \draw (-0.45,1) node {$\scriptstyle m$}; \draw (-0.3,1.1) to (-0.6,1.1); \draw (-0.3,0.89) to (-0.3,1.11); \draw (-0.3,0.90) to (-0.6,0.90); \draw (-0.6,0.89) to (-0.6,1.11); \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.25,-0.09); \draw (0.45,1) node {$\scriptstyle n$}; \draw (0.6,1.1) to (0.3,1.1); \draw (0.6,0.89) to (0.6,1.11); \draw (0.6,0.90) to (0.3,0.90); \draw (0.3,0.89) to (0.3,1.11); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.6,1) arc (90:315:0.6); \draw (0.6,1) arc (90:-140:0.6); \draw (0.5,-0.5) node {$\scriptstyle \Omega(v, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle \Omega(u, 1)$}; \end{tikzpicture} \right)\\ &= \frac{d(u)d(v)}{4(-1)^{\sigma_+(L)}} \left( \begin{tikzpicture}[baseline=-0.01ex, thick, scale=1] \draw (-0.45,1) node {$\scriptstyle m$}; \draw (-0.3,1.1) to (-0.6,1.1); \draw (-0.3,0.89) to (-0.3,1.11); \draw (-0.3,0.90) to (-0.6,0.90); \draw (-0.6,0.89) to (-0.6,1.11); \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.25,-0.09); \draw (0.45,1) node {$\scriptstyle n$}; \draw (0.6,1.1) to (0.3,1.1); \draw (0.6,0.89) to (0.6,1.11); \draw (0.6,0.90) to (0.3,0.90); \draw (0.3,0.89) to (0.3,1.11); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.6,1) arc (90:315:0.6); \draw (0.6,1) arc (90:-140:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u, 1)$}; \end{tikzpicture} - \begin{tikzpicture}[baseline=-0.01ex, thick, scale=1] \draw (-0.45,1) node {$\scriptstyle m$}; \draw (-0.3,1.1) to (-0.6,1.1); \draw (-0.3,0.89) to (-0.3,1.11); \draw (-0.3,0.90) to (-0.6,0.90); \draw (-0.6,0.89) to (-0.6,1.11); \draw (-0.3,1) [->]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.25,-0.09); \draw (0.45,1) node {$\scriptstyle n$}; \draw (0.6,1.1) to (0.3,1.1); \draw (0.6,0.89) to (0.6,1.11); \draw (0.6,0.90) to (0.3,0.90); \draw (0.3,0.89) to (0.3,1.11); \draw (0.1,0.2) to (0.3,0.4); \draw
(-0.6,1) arc (90:315:0.6); \draw (0.6,1) arc (90:-140:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u^{-1}, 1)$}; \end{tikzpicture} - \begin{tikzpicture}[baseline=-0.01ex, thick, scale=1] \draw (-0.45,1) node {$\scriptstyle m$}; \draw (-0.3,1.1) to (-0.6,1.1); \draw (-0.3,0.89) to (-0.3,1.11); \draw (-0.3,0.90) to (-0.6,0.90); \draw (-0.6,0.89) to (-0.6,1.11); \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [<-] to (0.3,1); \draw (-0.3,0.4) to (0.25,-0.09); \draw (0.45,1) node {$\scriptstyle n$}; \draw (0.6,1.1) to (0.3,1.1); \draw (0.6,0.89) to (0.6,1.11); \draw (0.6,0.90) to (0.3,0.90); \draw (0.3,0.89) to (0.3,1.11); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.6,1) arc (90:315:0.6); \draw (0.6,1) arc (90:-140:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v^{-1}, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u, 1)$}; \end{tikzpicture} + \begin{tikzpicture}[baseline=-0.01ex, thick, scale=1] \draw (-0.45,1) node {$\scriptstyle m$}; \draw (-0.3,1.1) to (-0.6,1.1); \draw (-0.3,0.89) to (-0.3,1.11); \draw (-0.3,0.90) to (-0.6,0.90); \draw (-0.6,0.89) to (-0.6,1.11); \draw (-0.3,1) [->]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [<-] to (0.3,1); \draw (-0.3,0.4) to (0.25,-0.09); \draw (0.45,1) node {$\scriptstyle n$}; \draw (0.6,1.1) to (0.3,1.1); \draw (0.6,0.89) to (0.6,1.11); \draw (0.6,0.90) to (0.3,0.90); \draw (0.3,0.89) to (0.3,1.11); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.6,1) arc (90:315:0.6); \draw (0.6,1) arc (90:-140:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v^{-1}, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u^{-1}, 1)$}; \end{tikzpicture} \right)\\ &= \frac{d(u)d(v)}{4(-1)^{\sigma_+(L)}} {\Big[} u^{-2m} v^{-2n} \left(\begin{tikzpicture}[baseline=+1ex, thick, scale=1] \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,1) arc (90:-90:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u, 1)$}; \end{tikzpicture}\right) - u^{2m} v^{-2n} \left(\begin{tikzpicture}[baseline=1ex, thick, scale=1] \draw (-0.3,1) [->]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [->] to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,1) arc (90:-90:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u^{-1}, 1)$}; \end{tikzpicture}\right) \\ &\qquad\qquad\qquad\quad -u^{-2m} v^{2n} \left(\begin{tikzpicture}[baseline=1ex, thick, scale=1] \draw (-0.3,1) [<-]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [<-] to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,1) arc (90:-90:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v^{-1}, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u, 1)$}; \end{tikzpicture}\right) + u^{2m} v^{2n} \left(\begin{tikzpicture}[baseline=1ex, thick, scale=1] \draw (-0.3,1) [->]to (0.3,0.4); \draw (-0.3,0.4) to (-0.1,0.6); \draw (0.1,0.8) [<-] to (0.3,1); \draw (-0.3,0.4) to (0.3,-0.2); \draw (-0.3,-0.2) to (-0.1,0); \draw (0.1,0.2) to (0.3,0.4); \draw (-0.3,1) arc (90:270:0.6); \draw (0.3,1) arc (90:-90:0.6); \draw (0.5,-0.5) node {$\scriptstyle (v^{-1}, 1)$}; \draw (-0.5,-0.5) node {$\scriptstyle (u^{-1}, 1)$}; 
\end{tikzpicture}\right){\Big]}\\ &= -\frac{d(u)d(v)}{4(-1)^{\sigma_+(L)}} \left( u^{2-2m} v^{2-2n} + u^{2+2m} v^{-2-2n} +u^{-2-2m} v^{2+2n} + u^{-2+2m} v^{-2+2n}\right)\\ &= -\frac{d(u)d(v)}{4(-1)^{\sigma_+(L)}} (u^2 v^{-2n} + u^{-2} v^{2n})(u^{2m} v^{-2} + u^{-2m} v^2), \end{align*} where the third equality follows from the definition of the Kirby color, the fourth one holds because a positive full twist contributes $t^{-2N}$ if the strand has color $(t, N)$, and the fifth one follows from Example~\ref{example}. Here note that (\ref{eq:Lens}) implies $u^m v^{-1} =1$ and $u^{-1} v^n =1$. Thus \begin{align*} \Delta(L(mn-1, n), \omega)&= -\frac{d(u)d(v)}{4(-1)^{\sigma_+(L)}} (u^2 v^{-2n} + u^{-2} v^{2n})(u^{2m} v^{-2} + u^{-2m} v^2) \\ &= -\frac{d(u)d(v)}{4(-1)^{\sigma_+(L)}} ((u v^{-n})^2 + (u^{-1} v^{n})^2)((u^{m} v^{-1})^{2} + (u^{-m} v)^2)\\ &= (-1)^{\sigma_+(L)+1} d(u)d(v). \end{align*} We have thus shown: \begin{prop} \label{prop51} \[\Delta(L(mn-1, n), \omega) = (-1)^{\sigma_+(L)+1} d(u)d(v).\] \end{prop} \subsection{$L(7, 1)$ and $L(7, 2)$} It is known that the lens spaces $L(7, 1)$ and $L(7, 2)$ are homotopy equivalent but not homeomorphic. We show that our invariant can distinguish them. Let $\xi=\mathrm{exp}(\frac{2\pi i}{7})$, let $B=\mathbb{Q}(\pi, \xi)$ be the extension field of $\mathbb{Q}$ generated by $\pi$ and $\xi$, and let $G=\mathbb{Z}\langle \pi, \xi \rangle$ be the abelian group generated by $\pi$ and $\xi$. We consider $\Delta (L(7, 1), \omega)$ and $\Delta (L(7, 2), \omega)$ for this $1$-palette $(B, G)$. \begin{prop}\label{Prop5152} The invariant $\Delta (M, \omega)$ corresponding to the $1$-palette $(B, G)$ where $B=\mathbb{Q}(\pi, \xi)$ and $G=\mathbb{Z}\langle \pi, \xi \rangle$ distinguishes $L(7, 1)$ and $L(7, 2)$. More concretely, there exists a cohomology class $\omega_0$ for $L(7, 1)$ such that for any cohomology class $\omega$ for $L(7, 2)$, we have \[ \Delta (L(7, 1), \omega_0) \neq \Delta (L(7, 2), \omega). \] \end{prop} \begin{proof} Note that $L(7, 1)=L(mn-1, n)$ for $m=8, n=1$, and $L(7, 2)=L(mn-1, n)$ for $m=4, n=2$, so we can apply the discussion of Section 5.1. A cohomology class $$\omega: H_1(L(7, 2), \mathbb{Z})\cong \mathbb{Z}/7\mathbb{Z}\to \mathbb{Z}\langle \pi, \xi \rangle$$ is determined by $\omega([m_1])$ and $\omega([m_2])$, which satisfy (in multiplicative notation) \[ \left(\begin{matrix} 4 & -1 \\ -1 & 2 \end{matrix}\right) \left(\begin{matrix} \omega[m_1] \\ \omega[m_2] \end{matrix}\right) = \left(\begin{matrix} 1 \\ 1 \end{matrix}\right). \] So there are in total six non-trivial cohomology classes, given by \[ \omega_1 : \left(\begin{matrix} \xi^2 \\ \xi \end{matrix}\right), \,\, \omega_2 :\left(\begin{matrix} \xi^4 \\ \xi^2 \end{matrix}\right), \,\,\omega_3 : \left(\begin{matrix} \xi^6 \\ \xi^3 \end{matrix}\right), \,\, \omega_4 : \left(\begin{matrix} \xi \\ \xi^4 \end{matrix}\right), \,\, \omega_5 : \left(\begin{matrix} \xi^3 \\ \xi^5 \end{matrix}\right),\,\, \omega_6 : \left(\begin{matrix} \xi^5 \\ \xi^6 \end{matrix}\right). \] Let $u_i=\omega_i([m_1])$ and $v_i=\omega_i([m_2])$. By Prop. \ref{prop51} (with $\sigma_+(L)=2$, since the linking matrix is positive definite) we have $$\Delta(L(7, 2), \omega_i)=-d(u_i)d(v_i).$$ Similarly we can consider the non-trivial cohomology classes for $L(7, 1)$. We see that $\omega_0:\begin{pmatrix} \xi \\ \xi \end{pmatrix}$ is one of them. The corresponding invariant is $$\Delta(L(7, 1), \omega_0)=-d(\xi)d(\xi).$$ We claim that $\Delta(L(7, 2), \omega_i)\neq \Delta(L(7, 1), \omega_0)$ for $1\leq i \leq 6$, which can be confirmed by direct calculation.
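The claimed inequalities can also be machine-checked. The following short Python sketch (an illustration added here, not part of the original computation) verifies all six cases numerically; it assumes the normalization $d(t)=t^{2}-t^{-2}$ used throughout, and uses the fact that both invariants carry the same sign $(-1)^{\sigma_+(L)+1}$, so it suffices to compare $d(u_i)d(v_i)$ with $d(\xi)^2$.
\begin{verbatim}
import cmath

xi = cmath.exp(2j * cmath.pi / 7)   # primitive 7th root of unity
d = lambda t: t ** 2 - t ** (-2)

# exponent pairs (a_i, b_i) with u_i = xi^{a_i} and v_i = xi^{b_i}
pairs = [(2, 1), (4, 2), (6, 3), (1, 4), (3, 5), (5, 6)]
target = d(xi) * d(xi)              # corresponds to Delta(L(7,1), omega_0)

for a, b in pairs:
    assert abs(d(xi ** a) * d(xi ** b) - target) > 1e-9
\end{verbatim}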
For instance, $\Delta(L(7, 2), \omega_1)=\Delta(L(7, 1), \omega_0)\iff d(\xi^2)=d(\xi)\iff \xi^4-\xi^{-4}=\xi^2-\xi^{-2}\iff \xi^2+\xi^{-2}=1$, where the last equivalence uses $\xi^2-\xi^{-2}\neq 0$. The final equality is impossible: multiplying it by $\xi^2$ gives $\xi^4-\xi^2+1=0$, while the minimal polynomial of $\xi$ is $\sum_{k=0}^6 x^k$, of degree $6$. \end{proof} \bibliographystyle{siam}
{ "timestamp": "2022-02-02T02:12:01", "yymm": "2202", "arxiv_id": "2202.00238", "language": "en", "url": "https://arxiv.org/abs/2202.00238" }
\section{Introduction} \label{sec:intro} In crowdsensing, large groups of individuals collaborate in a crowdsourcing fashion, typically leveraging their devices as sensors~\cite{liu2018survey}. Employing resource-constrained spectrum sensors (Raspberry Pis equipped with software-defined radio kits), the ElectroSense initiative is an exemplary crowdsensing network~\cite{rajendran2017electrosense}. However, the rapid growth of spectrum sensors has also accelerated the emergence of new and specialized cyberattacks, called spectrum sensing data falsification (SSDF) attacks~\cite{yadav:2020:ssdf}. In such a context, the privacy and integrity of sensor measurements are at risk. When it comes to detecting SSDF attacks affecting resource-constrained sensors, signature-based approaches have the limitation of not being effective against new attacks that were not observed during the signature creation stage (zero-day attacks). To overcome this limitation, dynamic anomaly detectors based on fingerprinting are gaining relevance. This approach monitors device activities such as the usage of CPU, memory, network interfaces, or the file system when there is no infection and, in a second stage, detects the deviations produced by SSDF attacks~\cite{sanchez2021survey}. The detection phase can be implemented using different techniques. One of the most lightweight in terms of resource consumption is based on rules, but the creation of precise rules requires expert knowledge and a significant amount of time in complex crowdsensing scenarios \cite{hamza2018combining}. Knowledge-based solutions have also been proposed in the literature, but they do not scale well with many sensors, requiring a lot of time to model and detect attacks \cite{khraisat2019survey}. Finally, machine and deep learning (ML/DL) techniques are gaining enormous relevance due to their detection performance, detection time, and relative simplicity \cite{aldweesh2020deep}. Despite the benefits of anomaly detectors combining device fingerprinting and ML/DL, they present some characteristics limiting their applicability in crowdsensing scenarios where data belongs to different sensors and cannot be shared due to privacy restrictions. Thus, federated learning (FL) becomes increasingly relevant~\cite{yang2019federated}. Compared to centralized approaches, FL trains a federated model collaboratively but in a decentralized and privacy-preserving fashion. Each participant of the federation trains a model with its own data and shares only the model parameters to create the federated model. However, FL also presents security concerns, with adversarial attacks launched by malicious participants being one of the most important ones. In this context, the literature has proposed several data and model falsification attacks that poison data, labels, or weights during training in order to exchange fake model parameters with the entity (or entities) creating the federated model \cite{rodriguezbarroso2022survey}. To overcome this problem, different countermeasures have been proposed, such as the usage of secure aggregation functions \cite{pillutla2019robust}. However, due to the novelty of the field, the combination of FL and behavioral fingerprinting for detecting SSDF attacks on spectrum sensor devices poses several open challenges. First, there is an evident lack of FL-oriented datasets modeling fingerprints of resource-constrained devices belonging to real platforms~\cite{rey2022federated}.
Second, there is no work measuring the performance of FL models that use device fingerprinting to detect SSDF attacks affecting spectrum sensors and comparing their detection performance with existing traditional ML/DL-based solutions. Last but not least, there is no work studying the robustness of FL-based solutions oriented to detect SSDF attacks in spectrum sensors and equipped with different anti-adversarial mechanisms to mitigate the impact of heterogeneous data and model poisoning attacks. To address these challenges, this paper presents the following contributions: \begin{enumerate} \item The creation of a novel device behavioral fingerprinting dataset suitable for FL scenarios (publicly available in~\cite{dataset}). This dataset contains the normal and under-attack behavior of four ElectroSense spectrum sensors, which are implemented with three different families of Raspberry Pis connected to software-defined radio kits. About 75 internal events belonging to the usage of CPU, memory, network interface, file systems, and other relevant dimensions are monitored in each sensor for two different versions of normal behavior as well as eight SSDF attacks. \item The usage of the dataset to conduct a pool of experiments evaluating and comparing the performance of \textit{(i)} DL models under a horizontal FL scheme, and \textit{(ii)} traditional DL approaches where data is aggregated centrally, neglecting privacy. This evaluation comprises the definition of four federated scenarios dealing with anomaly detection (using Autoencoders), binary classification (with multilayer perceptrons), and different numbers of participants. \item The study of the robustness of the federated models in two of the previous federated scenarios under different conditions. The conditions vary in terms of \textit{(i)} anti-adversarial aggregation mechanisms (two variants of \textit{trimmed mean}, and \textit{coordinate-wise median}), \textit{(ii)} an increasing number of malicious participants (from 8 to 33\%), and \textit{(iii)} heterogeneous data and model poisoning attacks affecting both supervised and unsupervised FL models. \end{enumerate} The remainder of this article is organized as follows. Section~\ref{sec:related} reviews solutions combining fingerprinting and ML/DL/FL approaches to detect cyberattacks affecting IoT devices. While Section~\ref{sec:dataset} provides the details of the FL-oriented dataset created in this work, Section~\ref{sec:federated} evaluates and compares the performance of different FL and traditional DL models trained and evaluated under heterogeneous conditions and scenarios. Section~\ref{sec:attacks} analyzes the robustness of FL models affected by different adversarial attacks. Finally, Section~\ref{sec:conclusions} draws conclusions and outlines next steps. \section{Related Work} \label{sec:related} This section reviews related work considering behavioral fingerprinting and ML/DL approaches, both centralized and federated, to detect cyberattacks affecting IoT devices. In~\cite{sanchez2021survey}, a broad survey of device fingerprinting reviews a good number of works detecting cybersecurity issues in IoT devices. One of its main conclusions is that there is a current trend towards combining device fingerprinting and ML/DL/FL techniques to detect cybersecurity attacks. In this context, the work most related to the paper at hand in terms of attacks, devices, and behavioral events is proposed in~\cite{huertas2022cyberspec}.
The authors combine unsupervised ML/DL techniques and the usage of device resources (such as CPU, memory, file system, or the network interface, among others) to detect anomalies produced by seven SSDF attacks affecting different Raspberry Pis acting as ElectroSense sensors. A pool of experiments reports 80-100\% TPR when detecting five of the seven SSDF attacks. In~\cite{blaise2020botfp}, the authors look at frequency distributions of protocol attributes and run clustering algorithms to capture particularities of botnet behaviors. They report 97-100\% accuracy, and as in most works, networking features are leveraged as the behavioral source. The authors of~\cite{kumar2019edima} use ML techniques combined with network packets to detect heterogeneous malware in IoT devices. They achieve 95\% accuracy on their test sets. Comparing the previous three works with the paper at hand, the main difference is that the proposed ML/DL models are created in a centralized manner, meaning that the privacy of the data used to train the models is not guaranteed; preserving this privacy is one of the main contributions of this work. Regarding solutions that use FL to detect malware affecting IoT devices in privacy-preserving scenarios, the authors of~\cite{taheri2020fed} propose a solution for industrial IoT that analyzes Android application samples and behavioral data in an industrial context. They report 97-100\% accuracy when detecting different malware samples. \cite{preuveneers2018chained} presents a different use case for FL in the field of intrusion detection with up to 97\% accuracy. This work studies adversarial implications in FL and employs blockchain technology as an alternative to mitigate them. Therefore, it focuses more on the accountability of participants than on reducing the impact of the attacks. The main difference between the previous approaches and the one proposed in this work is that they do not consider behavioral fingerprinting, do not consider the problem of malicious participants, and do not evaluate the robustness of their models against adversarial attacks. Most related to this work, several works combine FL and device fingerprinting to detect cyberattacks. \cite{hsu2020privacy} presents an FL system to detect malware in Android. The authors train Support Vector Machine classifiers in a federated scenario using device features such as application programming interface (API) calls and permission configurations to obtain a 94-96\% F1-score. While API calls correspond to device behavioral fingerprints, the work at hand analyzes behavioral data sources at a much lower level. In~\cite{nguyen2019diot}, a federated anomaly detection system is proposed for the IoT. It leverages device-type profiles of the communications to detect malware with 96\% accuracy. In contrast to the work at hand, the previous solution is not effective against malware affecting data availability, integrity, or confidentiality. The authors of~\cite{rey2022federated} leverage the N-BaIoT dataset to train and evaluate FL models. They achieve good accuracy in federated scenarios dealing with network traffic events. In addition, adversarial impacts are measured for selected attacks, and different mechanisms of robust aggregation are evaluated. While the paper at hand considers adversarial attacks against federations, it applies and analyzes these concepts in a scenario with very different characteristics from the one used in~\cite{rey2022federated}.
In this sense, N-BaIoT contains network traffic from nine different IoT devices such as webcams and smart doorbells. However, N-BaIoT does not consider spectrum sensors such as Raspberry Pis equipped with software-defined radio kits (as this work does), and does not model device behavioral fingerprinting events. \begin{table*}[bp] \caption{Comparison of Related Work} \centering \resizebox{\textwidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c} Source & Device Types & Attack Type & Data/Fingerprints & ML Approach & Prediction & Privacy & Robustness \\ \hline\hline \cite{huertas2022cyberspec} & Raspberry Pis & SSDF & Usage of Resources & ML/DL & Anomaly Detection & no & no \\ \cite{blaise2020botfp} & Multiple & Botnets & Communication-based & ML & Classification (Distances to clusters) & no & no \\ \cite{kumar2019edima} & IoT devices & Botnets & Communication-based & ML & Classification & no & no \\ \cite{taheri2020fed} & Industrial IoT devices & Android Malware & App Information & FL, DL & Classification & yes & yes, GAN-based \\ \cite{preuveneers2018chained} & Computers/Machines & multiple & Communication-based & FL, DL & Anomaly Detection (Autoencoder) & yes & yes, blockchain-based \\ \cite{hsu2020privacy} & Mobile (Android) & Android Malware & App Information & FL, ML & Classification (SVM) & yes & no \\ \cite{nguyen2019diot} & IoT devices & IoT Malware & Communication-based & FL, ML & Anomaly Detection & yes & no \\ \cite{rey2022federated} & IoT devices & Botnets & Communication-based & FL, DL & Anomaly Detection and Classification & yes & yes, aggregation \\ ours & Raspberry Pis & SSDF & Usage of Resources & FL, DL & Anomaly Detection and Classification & yes & yes, aggregation \\\hline \end{tabular}} \label{tab:related} \end{table*} As can be seen in \tablename~\ref{tab:related}, none of the related work studies the detection performance and robustness of federated models detecting SSDF attacks. Only~\cite{huertas2022cyberspec} covers the same attacks and devices considered in this work, but from a traditional ML/DL perspective and without considering privacy-preserving scenarios. Moreover, \tablename~\ref{tab:related} shows that device behavioral fingerprints have not been used for the use case of federated malware detection. In conclusion, this literature review demonstrates the lack of works and datasets combining behavioral fingerprints and FL to detect cyberattacks in IoT devices similar to those used in crowdsensing platforms. Furthermore, to the best of our knowledge, there is no work studying the impact of adversaries on the robustness of the previous federated models. \section{Dataset Creation} \label{sec:dataset} This section describes the novel device fingerprinting dataset created for federated scenarios. In particular, it presents \textit{(i)} the crowdsensing platform and the spectrum sensors used to create the dataset, \textit{(ii)} the SSDF attacks affecting the deployed ElectroSense sensors, \textit{(iii)} the events selected to create the fingerprints, and \textit{(iv)} the exploration of the dataset content. \subsection{ElectroSense Sensors \& SSDF Attacks} ElectroSense is a real and collaborative crowdsensing platform that pursues the goal of monitoring the electromagnetic space \cite{electrosense}. ElectroSense is composed of a multitude of spectrum sensors built from cheap commodity hardware like Raspberry Pis equipped with software-defined radio scanners and antennas. 
Each sensor monitors the different bands and segments of the radio frequency spectrum within its location. Sensed spectrum data is periodically sent to a backend platform in charge of storing, pre-processing, and analyzing the data to provide services. These services range from spectrum occupancy monitoring to transmission decoding. In this scenario, four physical spectrum sensors have been deployed in two locations. \tablename~\ref{tab:setup-devices} summarizes the device identifiers, hardware characteristics, and locations. \begin{table}[] \caption{Details of the Devices Making up the Scenario} \centering \begin{tabular}{c|c|c|c} Device ID & Type/Model & RAM & Location \\ \hline \hline RPi3 & 3 Model B+ & 1GB & Zurich \\ RPi4\_1 & 4 Model B & 2GB & Zug \\ RPi4\_2 & 4 Model B & 2GB & Zug \\ RPi4\_3 & 4 Model B & 4GB & Zurich \\ \hline \end{tabular} \label{tab:setup-devices} \end{table} For each ElectroSense sensor, two versions of the official and publicly available software are used. The first version is the current sensing application, installed by default in the sensor. The second version of the ElectroSense sensor software is an old one, available on the official ElectroSense GitHub \cite{essensor}. Additionally, eight different SSDF attacks are considered to infect the four sensors. These SSDF attacks are executed after modifying the ElectroSense sensor source code and compiling a new version of the executable. The main goal of these SSDF attacks is to manipulate the data of particular spectrum segments monitored by the sensors (in different ways) and send poisoned spectrum data to the ElectroSense backend platform. Despite the differences in terms of attack impacts, all attacks affect the same amount of spectrum (20 MHz). \tablename~\ref{tab:attacks} summarizes the main aspects of the behaviors considered during the creation of the dataset. More details about the implementation and functionality of the SSDF attacks can be found in \cite{huertas2022cyberspec}. \begin{table}[ht] \scriptsize \caption{Behaviors Monitored During the Dataset Creation} \centering \begin{tabular}{c|>{\raggedright\arraybackslash}m{5.8cm}|c} Behavior & \makecell[c]{Description} & Time \\ \hline \hline Normalv1 & Current ElectroSense application sensing the spectrum & 5 days \\ \hline Normalv2 & Old ElectroSense application sensing the spectrum & 5 days \\ \hline Delay & Sense different outdated spectrum data of affected segments & 4 hours \\ \hline Confusion & Swap the spectrum data between affected segments & 4 hours \\ \hline Freeze & Sense the same outdated spectrum data in affected segments & 4 hours \\ \hline Hop & Add random noise to random parts of affected segments & 4 hours \\ \hline Mimic & Copy the spectrum data of one segment into another segment & 4 hours \\ \hline Noise & Add random noise to the spectrum data of affected segments & 4 hours \\ \hline Repeat & Replicate the same spectrum data in all affected segments & 4 hours \\ \hline Spoof & Copy the spectrum data of one segment into another segment and add random noise & 4 hours \\ \hline \end{tabular} \label{tab:attacks} \end{table} The previous behaviors are sequentially executed in the devices of \tablename~\ref{tab:setup-devices} for five days (the normal behaviors) and four hours (the attacks), as indicated in \tablename~\ref{tab:attacks}. To create the fingerprinting dataset, 75 internal events of each device have been monitored in time windows of 50 s using the \textit{perf} Linux command.
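To illustrate this monitoring step, the following Python sketch samples a few of these kernel tracepoint counters with \textit{perf stat} in 50-second windows. It is an illustrative sketch rather than the exact collection script used for the dataset (the event list here is a three-event subset inferred from the figures below, and reading kernel tracepoints with \textit{perf} typically requires root privileges).
\begin{verbatim}
import subprocess

# three of the 75 kernel tracepoints used as fingerprint events
EVENTS = ["kmem:mm_page_pcpu_drain", "random:urandom_read",
          "writeback:writeback_mark_inode_dirty"]
WINDOW_S = 50  # monitoring window of 50 s, as in the dataset

def sample_window():
    """Count tracepoint firings system-wide during one window."""
    # with -x, perf stat prints one CSV record per event to stderr:
    # value,unit,event,...
    result = subprocess.run(
        ["perf", "stat", "-a", "-x", ",", "-e", ",".join(EVENTS),
         "sleep", str(WINDOW_S)],
        capture_output=True, text=True)
    counts = {}
    for line in result.stderr.strip().splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].isdigit():
            counts[fields[2]] = int(fields[0])
    return counts
\end{verbatim}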
These events belong to the following device data sources: CPU, virtual memory, network, file system, scheduler, device drivers, and random numbers. The number of events, per event type and family, contained in the datasets can be seen in \figurename~\ref{fig:events}. As a summary, the dataset includes four ElectroSense sensors, ten behaviors (two normal and eight SSDF attacks), 75 events belonging to eight event families, and a total of 73396 samples (approximately 60000 samples of normal behavior and 13396 of malicious behavior). The dataset is publicly available in~\cite{dataset}. \begin{figure}[htpb!] \centering \includegraphics[width=0.9\columnwidth]{Images/events.pdf} \caption{Device Fingerprinting Events of the FL-oriented Dataset} \label{fig:events} \end{figure} \subsection{Data Exploration} \label{sub:data_exploration} This section explores the created dataset to find data patterns and determine the suitability of ML/DL/FL techniques to detect SSDF attacks. This analysis also aims to determine if the data contained in the dataset is independent and identically distributed (IID) or non-IID. For that, three types of studies are performed. The first analyzes the evolution of data over time. The second focuses on the distributions of data belonging to different devices. Finally, the third explores data distributions according to various SSDF attacks. The variation of behavioral data over time is essential to determine the stability of fingerprints and the suitability of ML/DL/FL approaches to detect normal behavior and SSDF attacks. In this context and as an example, \figurename~\ref{fig:normal-hist} shows the values of the \textit{kmem:mm\_page\_pcpu\_drain} event belonging to the \textit{Virtual Memory} family over time for each device. As can be seen, the values are periodic, with some repetitive peaks. Exploring \figurename~\ref{fig:normal-hist} in more detail, it is also interesting to see the different distributions for the RPi3 (in red) and the RPi4s (in blue, orange, and green), indicating that behavioral data of similar devices is IID, while data of different devices is non-IID. In particular, the range of values of the \textit{kmem:mm\_page\_pcpu\_drain} event for the RPi3 is different from the range for the RPi4 devices. These characteristics are also visible in the majority of events, but they are not shown due to space constraints. \begin{figure}[htpb!] \centering \includegraphics[width=\linewidth]{Images/Figure2.pdf} \caption{\textit{kmem:mm\_page\_pcpu\_drain} event for Normal Behavior in all Devices} \label{fig:normal-hist} \end{figure} To analyze the differences between normal and under-attack behaviors per device, the distributions of each event have been studied. As a representative example, \figurename~\ref{fig:attacks-randomness} shows, for RPi4\_1 and the \textit{urandom\_read} event, how some attacks (hop, noise, and spoof) produce a higher number of random reads due to the generation of random noise. Another example can be seen in \figurename~\ref{fig:attacks-writeback}, where the \textit{writeback\_mark\_inode\_dirty} event of RPi4\_1 is affected differently by the copy and swap operations of some SSDF attacks (disorder being the attack with the lowest impact on the event values). \begin{figure}[htpb!] \centering \includegraphics[width=\linewidth]{Images/Figure3.pdf} \caption{\textit{urandom\_read} Event of all Behaviors on RPi4\_1} \label{fig:attacks-randomness} \end{figure} \begin{figure}[htpb!]
From the previous data exploration, it can be concluded that attacks generally do not impact the same features equally across different device types. Therefore, generalization across attacks and device types is challenging, and the usage of ML/DL/FL seems adequate for finding the events and values separating normal behavior from SSDF attacks. In terms of data distribution, the exploration shows that data is IID within devices of the same type and across the RPi4 variants. However, devices from the RPi3 and RPi4 families present non-IID data. Furthermore, the independence of the data samples makes it possible to use the data of a single device to simulate additional participants of the same device type, which is critical for federated scenarios. Finally, external factors like network outages could potentially affect the data distributions. However, no significant systematic influence of external factors has been identified during the data exploration. \section{Federated Attack Detection} \label{sec:federated} This section evaluates the performance of different federated models when detecting SSDF attacks and compares it with centralized ML/DL approaches where data privacy is not preserved. For that, two approaches have been considered. The first one detects anomalies using an unsupervised Autoencoder, while the second utilizes a supervised multilayer perceptron (MLP) to classify normal and under-attack behaviors. The pipeline and methodology followed to train and evaluate the federated models are also detailed in this section. Finally, four scenarios with different federation compositions (in terms of number and type of participants, behaviors, and detection tasks) are created to evaluate the performance of the previous FL models and compare it with centralized ML/DL approaches. \subsection{Federated ML Pipeline} \label{sec:federated-ml-pipeline} The federated setting needs adaptations of the typical ML pipeline to handle distributed data and models. In particular, the scaling phase and the threshold selection have to be adapted to allow a global model to aggregate the knowledge of the involved participants. Furthermore, a central coordinator needs to run the federated learning pipeline iteratively. The following subsections describe the necessary steps. \subsubsection{Dataset Splitting and Feature Preprocessing} Each participant of the federation creates the following datasets: one for training, one for validation and optimization of hyperparameters, and another for testing the model performance. These sets are sampled from the participant's data without overlap. Next, outlier filtering is performed on the training and validation sets using the \textit{z-score}. The \textit{z-score} is computed from the mean $\mu$ and the standard deviation $\sigma$ according to the formula $\frac{x - \mu}{\sigma}$. Data points that have an absolute \textit{z-score} $\geq3$ in any feature are excluded, as they could impair the model performance. Besides, features with a pairwise correlation of 1 in the datasets are filtered out.
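A minimal sketch of this local preprocessing step (assuming the samples of one participant are held in a \texttt{pandas} DataFrame; thresholds as described above):
\begin{verbatim}
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Drop rows whose absolute z-score is >= 3 in any feature.
    z = (df - df.mean()) / df.std()
    df = df[(z.abs() < 3).all(axis=1)]
    # Drop one feature of every perfectly correlated pair.
    corr = df.corr().abs()
    cols = list(corr.columns)
    drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if cols[j] not in drop and corr.iloc[i, j] == 1.0:
                drop.add(cols[j])
    return df.drop(columns=sorted(drop))
\end{verbatim}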
\subsubsection{Federated Feature Scaling} Feature scaling in a federated setup requires an additional coordination effort, as a global scaling common to all participants must be put in place. Min-max scaling is employed using the formula $\frac{x - \min}{\max-\min}$, but the minimum and maximum values are determined over the data of all the participants. Therefore, action from a central entity is required to coordinate the scaling process, requesting the minimum and maximum of each feature from each participant and then returning the global minimum and maximum for scaling. A drawback of this approach is a certain loss of privacy, since every participant has to disclose the minimum and maximum value of each feature. This issue could be addressed using solutions such as homomorphic encryption, but this is out of the scope of this work. \subsubsection{Model Setup, Training and Evaluation} Throughout this work, both supervised and unsupervised models are evaluated. They require different data and methods to train the models and make predictions. However, both models are trained on a 68-dimensional input, which corresponds to the number of relevant features after the preprocessing. \textit{Stochastic gradient descent (SGD)} is used as the optimization algorithm, with a learning rate of 1e-3 and a momentum term of 0.9. In the \textit{anomaly detection scenarios}, an Autoencoder with a single hidden layer of size 32 is used. After the first linear layer, \textit{batch normalization} is applied and \textit{GELU} is used as activation function on the hidden state. A second linear layer transforms the hidden state back to its original size, followed by a \textit{GELU} activation function that returns the reconstructed input. After the training phase, the anomaly threshold is determined based on the mean ($\mu$) and standard deviation ($\sigma$) of the reconstruction mean square error (MSE). The formula used to select the threshold is shown in Equation~\ref{eq:thr}. \begin{equation} \label{eq:thr} threshold = \mu + 3 \cdot \sigma \end{equation} The prediction then corresponds to determining the MSE when reconstructing a given behavioral vector. If the MSE of the recreated input is greater than the threshold, the vector is considered an anomaly and, therefore, behavior under attack. Otherwise, it is considered normal behavior. In the \textit{binary classification scenarios}, an MLP is used. A linear layer produces a hidden state of size 256. Subsequently, batch normalization and the \textit{GELU} activation function are applied to this hidden state. A second linear layer then returns a single output neuron. A \textit{Binary Cross Entropy} loss function with logits (\textit{BCEwithLogitsLoss}) is used during training, which applies the \textit{sigmoid} activation function and minimizes the logarithmic difference between the output and the encoded label (0 for normal behavior and 1 for attack behavior). Early stopping is applied when there is no loss decrease greater than \textit{1e-4} on the validation set.
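The following PyTorch sketch mirrors the autoencoder and threshold rule just described (layer sizes and optimizer settings as stated above; the training loop and the MLP variant are omitted for brevity):
\begin{verbatim}
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    # 68 -> 32 -> 68 with batch normalization and GELU activations.
    def __init__(self, n_features=68, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.BatchNorm1d(hidden),
            nn.GELU(),
            nn.Linear(hidden, n_features),
            nn.GELU())

    def forward(self, x):
        return self.net(x)

def local_threshold(model, x_val):
    # Equation (1): mu + 3*sigma of the per-sample reconstruction MSE
    # on the participant's (normal) validation data.
    model.eval()
    with torch.no_grad():
        mse = ((model(x_val) - x_val) ** 2).mean(dim=1)
    return (mse.mean() + 3 * mse.std()).item()

model = Autoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
\end{verbatim}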
For the federated training, the \textit{FederatedAveraging (FedAvg)} algorithm is used. Algorithm~\ref{alg:fedavg} describes the training loop in the clients and the server. Generally, the federation is trained for 15 aggregation rounds with five local epochs per participant if not stated otherwise. It is important to note that the models are relatively small and can thus also be trained on resource-constrained hardware. Further, early stopping is implemented per participant. \begin{algorithm}[ht!] \begin{algorithmic} \STATE \textbf{Server executes:} \item \hskip1em initialize $w_0$ \item \hskip1em \textbf{for} each round $t = 1,2,\dots$ \textbf{do} \item \hskip2em $m\leftarrow \max(C \cdot K,1)$ \item \hskip2em $S_t \leftarrow$ (random set of $m$ clients) \item \hskip2em \textbf{for} each client $k \in S_t$ \textbf{in parallel do} \item \hskip3em $w^k_{t+1} \leftarrow $ ClientUpdate($k,w_t$) \item \hskip2em $w_{t+1} \leftarrow \sum_{k=1}^{K} \frac{n_k}{n} w_{t+1}^k$ //Aggregation\\ \item \textbf{ClientUpdate(}$k,w$\textbf{):} //\textit{Run on client} $k$ \item \hskip1em $\mathcal{B} \leftarrow$ (split $P_k$ into batches of size $B$) \item \hskip1em \textbf{for} each local epoch $i$ from 1 to $E$ \textbf{do} \item \hskip2em \textbf{for} batch $b \in \mathcal{B}$ \textbf{do} \item \hskip3em $w \leftarrow w - \eta \bigtriangledown l(w;b)$ //Local update \item \hskip1em return $w$ to server \end{algorithmic} \caption{\texttt{FederatedAveraging}. The $K$ clients are indexed by $k$; $C$ is the fraction of clients selected per round; $B$ is the local minibatch size; $E$ is the number of local epochs; $\eta$ is the learning rate; $w$ are the model weights; $P_k$ is the local dataset of client $k$, with $n_k=|P_k|$ and $n=\sum_k n_k$. \cite{mcmahan2017communication}} \label{alg:fedavg} \end{algorithm} \subsubsection{Federated Threshold Selection} \label{subsec:fed-threshold-selection} For anomaly detection, each participant sends its locally computed threshold to the central coordinator, which determines a global threshold. Depending on the federation composition, the thresholds per participant can vary heavily due to the non-IID data across different device types, which has to be taken into account when choosing the federated threshold. While a simple mean may perform reasonably in a setting where all participants have the same device type, it may perform poorly in a federation with different device types. Taking the maximum of the thresholds, on the other hand, allows a single overstating participant to impair the performance of the global model. Hence, a reasonable compromise has to be found. This compromise is built on the mean $\mu$ and standard deviation $\sigma$ of the list of thresholds that the participants send to the coordinator: only thresholds with an absolute \textit{z-score} $\leq 1.5$ are considered, and the maximum of those filtered values is chosen as the global threshold.
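A sketch of this coordinator-side rule (Python/numpy; names are illustrative):
\begin{verbatim}
import numpy as np

def global_threshold(local_thresholds, z_max=1.5):
    # Filter over/under-stated local thresholds by z-score, then
    # take the maximum of the remaining values. Assumes the local
    # thresholds are not all identical (std > 0).
    t = np.asarray(local_thresholds, dtype=float)
    z = (t - t.mean()) / t.std()
    return t[np.abs(z) <= z_max].max()
\end{verbatim}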
\subsection{Federated Scenarios and Detection Performance} This section creates four federated scenarios where heterogeneous FL models are trained and evaluated following the previous pipeline. In addition, the detection performance of these models is compared to the one obtained by centralized approaches where data privacy is not preserved. For the sake of fair comparisons, both the federated and the central models use the same algorithms, training and testing data, and hyperparameters. Finally, to show the model performance, and since the test sets for each behavior are separated, the accuracy of the model within each behavior test set is used. The scenarios consider the devices and behaviors modeled by the dataset explained in Section~\ref{sec:dataset} to create the federations. To decide the number of sensors participating in each scenario, each participant must have enough data to achieve meaningful convergence in its local training loop. Therefore, the scenarios explained in this section restrict the number of participants per device type to a maximum of four. Below, each scenario details the exact number and type of sensors used in its federation, as well as the behaviors considered for training and testing. \subsubsection{\textbf{Scenario 1: Federated Anomaly Detection with Balanced Device Types}} This scenario focuses on federated anomaly detection to detect zero-day attacks when there is a balanced federation of different sensor types (RPi3, RPi4 2GB, and RPi4 4GB). In particular, four participants of each sensor type are generated, for a total of 12. Among the 12 participants of the federation, a privacy-preserving Autoencoder is trained following the pipeline previously explained. Each participant uses 1500 samples of its normal behavior for training and 150 different normal samples for the threshold selection task. Once the federated Autoencoder is trained, 75 samples per behavior (normal, normal\_v2, and eight SSDF attacks) of each participant are evaluated. \tablename~\ref{tab:scenario1} reports the accuracy achieved by the federated Autoencoder model per device type and behavior. The parentheses denote the difference with respect to the accuracy of a central model (not protecting data privacy) that concatenates all training sets and uses the same algorithm and hyperparameters. A positive difference means that the federation outperforms the simple central approach, whereas a negative difference means the opposite. Finally, it is important to note that RPi4\_2 is excluded from training and only used for testing. \begin{table}[ht] \caption{Accuracy of Scenario 1 Autoencoder Model and Difference with a Centralized Approach (in Parentheses)} \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}{l|l|l|l|l} Behavior & RPi3 (diff.) & RPi4\_1 (diff.) & RPi4\_2 (diff.) & RPi4\_3 (diff.) \\ \hline \hline normal & 96.0\% (2.7\%) & 100\% (1.3\%) & 100\% (2.7\%) & 99.3\% (0.7\%) \\ normal\_v2 & 96.0\% (3.3\%) & 96.7\% (2.7\%) & 99.3\% (0.7\%) & 98.7\% (6.7\%) \\ \hline delay & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ disorder & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ freeze & 0.7\% (-8.7\%) & 4.0\% (-0.7\%) & 2.0\% (0\%) & 0\% (-1.3\%) \\ hop & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ mimic & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ noise & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ repeat & 6.0\% (-4.0\%) & 3.3\% (-0.7\%) & 2.7\% (-2.0\%) & 2.7\% (-2.0\%) \\ spoof & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ \hline \end{tabular} \end{adjustbox} \label{tab:scenario1} \end{table} As can be seen in \tablename~\ref{tab:scenario1}, both models (federated and centralized) perform almost identically, a good signal for the federated Autoencoder. Looking in more detail at the detection of the anomalies produced by SSDF attacks, both models cannot detect freeze and repeat, but the rest of the attacks are classified correctly ($\geq$96\%). An important aspect is that the accuracy on the second normal behavior (normal\_v2) is also high (96.0--99.3\%) despite not being used during training. \subsubsection{\textbf{Scenario 2: Federated Anomaly Detection with New Device Type}} This scenario evaluates whether a federated model can also be useful for a new device type joining the federation and detecting zero-day SSDF attacks. Thus, in this scenario, the federated anomaly detection model is trained with a total of eight participants belonging to two device types, and the model is subsequently evaluated on behavior samples (normal and under-attack) of the third, new device type.
As indicated in \tablename~\ref{tab:models_sc2}, this is done for the three possible combinations of device types, generating three federated Autoencoders. Each Autoencoder is trained following the previously defined pipeline. For each combination, participants provide 1500 samples of normal behavior for training, and 150 for the threshold selection. After that, 75 samples per behavior (two normal and eight attacks) of the third device type are used for testing. \begin{table}[ht] \caption{Federated Models Used in Scenario 2 and 4} \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}{l|l|l} Model ID & Training Devices & Testing Devices \\ \hline \hline Autoencoder/MLP 1 & RPi3 \& RPi4\_1 & RPi4\_3 \\ Autoencoder/MLP 2 & RPi3 \& RPi4\_3 & RPi4\_1 \& RPi4\_2 \\ Autoencoder/MLP 3 & RPi4\_1 \& RPi4\_3 & RPi3 \\ \hline \end{tabular} \end{adjustbox} \label{tab:models_sc2} \end{table} \tablename~\ref{tab:scenario2} shows the accuracy of the three federated Autoencoders and the difference with respect to the centralized ones. As an example, the first column displays the accuracy of Autoencoder 3 (see \tablename~\ref{tab:models_sc2}), trained with four participants of RPi4\_1 and four of RPi4\_3, and evaluated with the RPi3 samples. As in the previous scenario, RPi4\_2 is excluded from training and only used during testing. \begin{table}[ht] \caption{Accuracy of Scenario 2 Autoencoder Models and Difference with a Centralized Approach (in Parentheses)} \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}{l|l|l|l|l} Behavior & RPi3 (diff.) & RPi4\_1 (diff.) & RPi4\_2 (diff.) & RPi4\_3 (diff.) \\ \hline \hline normal & 0\% (0\%) & 98.0\% (4.0\%) & 97.3\% (4.0\%) & 99.3\% (9.3\%) \\ normal\_v2 & 0\% (0\%) & 98.0\% (2.7\%) & 99.3\% (4.0\%) & 96.0\% (2.7\%) \\ \hline delay & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ disorder & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ freeze & 100\% (0\%) & 4.7\% (-2.7\%) & 0.7\% (-6.0\%) & 0.7\% (-4.7\%) \\ hop & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ mimic & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ noise & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ repeat & 100\% (0\%) & 2.0\% (-5.3\%) & 4.0\% (-6.0\%) & 1.3\% (-2.7\%) \\ spoof & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ \hline \end{tabular} \end{adjustbox} \label{tab:scenario2} \end{table} As can be seen in \tablename~\ref{tab:scenario2}, knowledge transfer to unseen device types is possible if there are similarities in the hardware configuration. Since the behaviors of the RPi3 and the RPi4s are quite different (non-IID data), the knowledge transfer to the RPi3 is not possible and all samples are classified as abnormal. In contrast, the performance on unseen RPi4s with different RAM is generally high, again with the exception of the freeze and repeat behaviors, which are not detected even when the respective attack is included in the federation. Comparing the federated model to the centralized approach, there are no major differences, with the federated model performing slightly better when detecting normal behavior. \subsubsection{\textbf{Scenario 3: Federated Binary Classification with Balanced Device Types}} This scenario analyzes the capabilities of a federated binary classifier to transfer knowledge of known SSDF attacks across the federation. In particular, this scenario creates a federation of four participants per device type (12 in total) with the same behavioral data (normal and under-attack) per device type.
In more detail, one participant per device type holds only normal data, while the other three hold normal and delay, normal and freeze, and normal and noise data, respectively. Each participant holds 250 samples of each selected behavior in its training set, 25 of each selected behavior in its validation set, and 75 of each existing behavior (two normal and eight attacks) in the test set. With this configuration, and following the previous pipeline, a federated MLP is trained and evaluated. \tablename~\ref{tab:scenario3} shows the detection accuracy of the federated MLP model and the difference with respect to the centralized approach. As usual, RPi4\_2 is only used during testing. \begin{table}[ht] \caption{Accuracy of Scenario 3 MLP Model and Difference with a Centralized Approach (in Parentheses)} \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}{l|l|l|l|l} Behavior & RPi3 (diff.) & RPi4\_1 (diff.) & RPi4\_2 (diff.) & RPi4\_3 (diff.) \\ \hline \hline normal & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ normal\_v2 & 100\% (4.0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ \hline delay & 100\% (0\%) & 100\% (2.7\%) & 100\% (2.7\%) & 100\% (0\%) \\ disorder & 93.3\% (49.3\%) & 96.0\% (17.3\%) & 98.7\% (13.3\%) & 97.3\% (22.7\%) \\ freeze & 0\% (-6.7\%) & 6.7\% (-4.7\%) & 5.3\% (-3.3\%) & 5.3\% (-4.7\%) \\ hop & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ mimic & 100\% (0\%) & 100\% (1.3\%) & 100\% (5.3\%) & 100\% (0\%) \\ noise & 100\% (1.3\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ repeat & 2.7\% (-2.7\%) & 4.0\% (-2.6\%) & 5.3\% (-2.3\%) & 13.3\% (-0.5\%) \\ spoof & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ \hline \end{tabular} \end{adjustbox} \label{tab:scenario3} \end{table} As can be seen in \tablename~\ref{tab:scenario3}, the federated MLP transfers the attack knowledge quite well. It even improves the accuracy of a centrally trained model for the disorder attack. For behaviors other than disorder, no difference larger than $7\%$ can be observed between the federated and centralized approaches. \subsubsection{\textbf{Scenario 4: Federated Binary Classification with New Device Type}} The last scenario is a combination of Scenarios 2 and 3. It evaluates the capabilities of a federated binary classifier to transfer attack knowledge from a federation to a new device type affected by the attacks modeled in the federation. In particular, the scenario considers the same three federations of eight participants as Scenario 2 (see \tablename~\ref{tab:models_sc2}). In addition, as in Scenario 3, each participant holds 250 and 25 samples of selected behaviors for training and validation, respectively. Finally, the participant of the third device type holds 75 samples of each behavior (two normal and eight attacks) for testing. Following the pipeline previously explained, one federated MLP model per federation (three in total) is trained. \tablename~\ref{tab:scenario4} shows the accuracy of the three federated MLPs and their differences with respect to the centralized versions (using the same algorithms, data, and hyperparameters).
\begin{table}[ht] \caption{Accuracy of Scenario 4 MLP Models and Difference with a Centralized Approach (in Parentheses)} \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}{l|l|l|l|l} Behavior & RPi3 (diff.) & RPi4\_1 (diff.) & RPi4\_2 (diff.) & RPi4\_3 (diff.) \\ \hline \hline normal & 100\% (100\%) & 100\% (1.3\%) & 100\% (0\%) & 100\% (4.0\%) \\ normal\_v2 & 100\% (100\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ \hline delay & 0\% (-100\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ disorder & 0\% (-100\%) & 97.3\% (0\%) & 100\% (1.3\%) & 88.0\% (-12.0\%) \\ freeze & 0\% (-2.3\%) & 8.0\% (-3.0\%) & 6.7\% (-2.1\%) & 2.7\% (-6.4\%) \\ hop & 1.3\% (-98.7\%) & 100\% (0\%) & 100\% (0\%) & 98.7\% (1.3\%) \\ mimic & 0\% (-100\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ noise & 0\% (-100\%) & 100\% (0\%) & 100\% (0\%) & 100\% (0\%) \\ repeat & 1.3\% (-4.7\%) & 4.0\% (-2.3\%) & 9.3\% (-2.0\%) & 5.3\% (-4.7\%) \\ spoof & 0\% (-100\%) & 100\% (0\%) & 98.7\% (-1.3\%) & 100\% (0\%) \\ \hline \end{tabular} \end{adjustbox} \label{tab:scenario4} \end{table} The results of \tablename~\ref{tab:scenario4} are very similar to those of Scenario 2. The transfer between RPi4s works well for most behaviors, while the transfer from RPi4 to RPi3 does not work at all: the federated model classifies virtually all RPi3 samples as normal, whereas the centralized model classifies them as attacks. Hence, there is no real advantage of either approach for the RPi3. Further, the model does not detect the freeze and repeat behaviors as attacks on any RPi4. For these behaviors, the centralized model slightly outperforms the federated model. The centralized model also performs notably better (+12\%) for the disorder behavior on the RPi4 with 4GB of RAM. \section{Robustness Against Adversarial Attacks} \label{sec:attacks} This section evaluates the robustness of the FL models created in Scenarios 1 and 3 (defined in Section~\ref{sec:federated}) when they are affected by malicious participants executing adversarial attacks. In particular, an increasing number of adversaries execute data and model poisoning attacks against a federated anomaly detector (Scenario 1) and a binary classifier (Scenario 3) equipped with different aggregation functions. In terms of adversarial attacks, the following are evaluated: \textit{(i)} behavior injection, as a variant of data poisoning, \textit{(ii)} model canceling, as a model poisoning attack, and \textit{(iii)} random weight upload, another model poisoning attack. In more detail, behavior injection uses malicious data to train the models. In the case of anomaly detection, attack data is used as if it were normal, while for classification, the labels of normal and attack data are flipped during training. Model canceling tries to bring all global model parameters to zero. For that, each adversary uploads the parameters of the last known global model multiplied by a factor $\alpha$ determined from the number of participants $K$ and the number of adversaries $f$ via $K - f + \alpha \cdot f = 0$, i.e., $\alpha = -(K-f)/f$. Finally, random weight upload sends random weights to the aggregation server; this work generates these weights from a normal distribution with a mean of zero and a standard deviation of three. In addition to \textit{FedAvg}, the models consider the following secure aggregation functions: \textit{(i) trimmed mean}, which excludes the highest and the lowest entry of every coordinate before averaging the local model weights, \textit{(ii) trimmed mean\_2}, which excludes the two highest and the two lowest entries before averaging, and \textit{(iii) coordinate-wise median}, which uses the median of every weight instead of the average.
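For concreteness, a numpy sketch of these aggregation rules and of the model-canceling factor (assuming each participant's update is flattened into one row of \texttt{updates}; names are illustrative):
\begin{verbatim}
import numpy as np

def fedavg(updates):
    # Plain coordinate-wise mean of the local updates.
    return updates.mean(axis=0)

def trimmed_mean(updates, k=1):
    # Drop the k largest and k smallest values per coordinate
    # (k=1 is "trimmed mean", k=2 is "trimmed mean_2").
    s = np.sort(updates, axis=0)
    return s[k:-k].mean(axis=0)

def coordinate_median(updates):
    return np.median(updates, axis=0)

def canceling_factor(K, f):
    # Adversarial scaling: solve K - f + alpha * f = 0 for alpha.
    return -(K - f) / f
\end{verbatim}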
Finally, to measure the impact of the attacks when evaluating normal and under-attack behaviors, all behavioral data of each participant are concatenated, and the F1-score metric is calculated as $\frac{TP}{TP+\frac{1}{2}(FP+FN)}$ (TP: True Positives, FP: False Positives, FN: False Negatives). \subsection{Robustness of Scenario 1} The previous adversarial attacks and secure aggregation functions are considered to measure the robustness of the federated Autoencoder detecting anomalies in Scenario 1. \subsubsection{Attack Behavior Injection} In the federation of 12 participants (four per device type), from zero to four participants per device type (0\% to 33\% of the federation) are turned into data poisoning adversaries. This adversary setup is repeated three times, once per device type. Adversaries use attack samples (instead of normal samples) to train the federated Autoencoder. Each adversary injects a different attack behavior into the training process. In particular, the first adversary trains with the spoof behavior, the second with mimic, the third with delay, and the fourth with disorder. These attacks are selected according to their median MSE in the corresponding federation without adversaries, choosing the ones most dissimilar from the normal behavior. The freeze and repeat behaviors are not injected due to their similarity to normal behavior. The first row of \figurename~\ref{fig:ad_attacks} shows, for each device type, the F1-score of the federated Autoencoder according to the implemented aggregation function and the number of adversaries belonging to the RPi3 type. The second row shows the same but when the adversaries belong to the RPi4 2GB family. The case where the attackers belong to the RPi4 4GB type is not shown due to space constraints and its similarity to the RPi4 2GB case. \begin{figure*}[t] \centering \includegraphics[width=1.7\columnwidth]{Images/AD_Attacks.pdf} \caption{Impact of Different Adversarial Configurations on the Anomaly Detection Approach} \label{fig:ad_attacks} \end{figure*} As can be seen in \figurename~\ref{fig:ad_attacks}, for federated averaging, even one adversary (8\% of the federation) decreases the F1-score of each device below 70\%. Four adversaries (33\%) destroy the model performance. Furthermore, the injecting device type matters, and attacks performed by RPi4\_1 have a more significant impact on all hardware configurations. Comparing the aggregation functions, \textit{coordinate-wise median} performs best in general, achieving an F1-score above 60\% for all test sets with up to two adversaries. Especially in the case of adversarial RPi3s, this aggregation function achieves excellent robustness, with F1-scores above 80\% for up to four adversaries. \subsubsection{Model Canceling and Threshold Attack} In contrast to the previous attack, the impact of model canceling does not depend on the device type executing it. For this attack, the federation remains as in Scenario 1, but with up to six adversaries affecting the model robustness. This means that the number of participants varies from 12 (no adversaries) to 18 (with six malicious actors, 33\%). It is important to note that the model canceling attack is combined with an overstatement of the threshold.
In other words, besides uploading model canceling weights, adversaries choose a threshold randomly from the uniform distribution on the range $[10^6, 10^9]$. The third row of \figurename~\ref{fig:ad_attacks} reports the F1-score of each attack configuration per aggregation function. The \textit{FedAvg} aggregation is only capable of defending against one adversary. Most importantly, the threshold overstatement can destroy the model performance as soon as one manipulated threshold is not filtered. In this scenario, the \textit{coordinate-wise median} provides a very robust defense, since the federation maintains very good performance even with six adversaries. In conclusion, while the mean is shifted heavily towards the attackers for the \textit{FedAvg} and \textit{trimmed mean} aggregations, the median is more stable against largely different adversarial model weights. However, with an adversarial percentage $\geq50\%$, the median would also lose its effectiveness. \subsubsection{Random Uploads} The fourth row of \figurename~\ref{fig:ad_attacks} shows the impact of the adversaries per aggregation function. Here, results similar to the model canceling attack can be observed. \textit{Coordinate-wise median} performs best, followed by \textit{trimmed mean\_2} and the basic \textit{trimmed mean}. Nonetheless, in this attack, where adversaries produce random weights, it is not as obvious as for model canceling how the aggregation function can filter the adversarial weight values. Random weights can be in a completely honest range for some layers or hidden units, but they can also be extreme values for others. It depends on the distribution the random values are sampled from and on whether they are extreme compared to the honest weights. However, random weights have no significant impact on the median. In conclusion, this scenario has shown how robust aggregation methods improve the model resilience against adversaries. In all attacks, \textit{coordinate-wise median} is the aggregation method offering the best robustness. It maintains the model performance almost unaltered in three of the four adversarial configurations, only decreasing (while still outperforming the other aggregation methods) when RPi4\_1 performs a data poisoning attack. \subsection{Robustness of Scenario 3} This section measures the robustness of the federated MLP classifying normal and under-attack behaviors in Scenario 3. \subsubsection{Label Flipping} For this attack, the federation of 12 participants (four per device type, three device types) contains from zero to four adversaries of a given device type. This adversary setup is repeated three times, once per device type. In each configuration, adversaries flip the labels of their local data. In particular, the first adversary flips the labels of the normal behavior, the second flips normal and delay, the third normal and freeze, and the fourth normal and noise. The first two rows of \figurename~\ref{fig:binary_attacks} follow the same structure as \figurename~\ref{fig:ad_attacks}, but this time for label flipping attacks. As can be seen, for \textit{FedAvg} and regardless of the device type acting maliciously (but especially when the RPi3s act as adversaries), the attack does not have a big impact on the average model performance. This is because certain attack characteristics are already available in the federation. For example, if there are two adversarial RPi4\_1 participants, the normal and freeze behavior labels are flipped for this specific device type only.
However, the knowledge about these behaviors is fully present for the other two device types, which explains the much higher F1-score than in the other attack scenarios. Apart from that, the \textit{trimmed mean} function is the one providing the most robustness for all devices. In contrast, \textit{coordinate-wise median} shows a different pattern, where the model performs poorly, especially for the RPi3. Interestingly, performance even improves with the presence of some adversaries. However, once there are too many label flipping adversaries, it seems to become a lottery which participants' weights are chosen for the update. \begin{figure*}[t] \centering \includegraphics[width=1.7\columnwidth]{Images/BC_Attacks.pdf} \caption{Impact of Different Adversarial Configurations on the Binary Classification Approach} \label{fig:binary_attacks} \end{figure*} \subsubsection{Model Canceling} This attack considers the same 12 participants of Scenario 3 and adds from zero to six adversaries (0--33\% of the federation). In contrast to the anomaly detection experiment, the threshold cannot be attacked in this case. The third row of \figurename~\ref{fig:binary_attacks} reports the results for zero to six model canceling adversaries, regardless of the device type acting maliciously. As can be seen, while the performance for \textit{trimmed mean}, excluding one extreme value, is very similar to the \textit{FedAvg} aggregation, the exclusion of two extreme values (\textit{trimmed mean\_2}) helps to protect against one more adversary. Still, the performance drops below 20\% for three or more adversaries. \textit{Coordinate-wise median} performs better than the other aggregation functions with four or more adversaries but does not present a viable solution either. This might be explained by the median filtering out the good weights of the RPi3 participants, as this device type represents a minority in the federation. \subsubsection{Random Uploads} This attack also considers Scenario 3, where different numbers of adversaries (from zero to six) execute random weight uploads. The fourth row of \figurename~\ref{fig:binary_attacks} reports the F1-score of the federated MLP for the different aggregation functions and numbers of adversaries. As can be seen, the \textit{trimmed mean\_2} function provides the most robust results in general (and especially for the RPi4s). Adversaries generating random weights do not necessarily always produce harmful weights for the overall model. Moreover, it looks like random adversarial weights partially cancel each other out, which explains the instability. Indeed, the global model becomes more random as more adversaries are introduced; therefore, federated averaging is highly unstable. The second \textit{trimmed mean} variant provides a better defense, as more adversaries can be filtered, which especially favors the RPi4s. Finally, \textit{coordinate-wise median} does not perform well for low numbers of adversaries but provides an effective countermeasure for four or more malicious participants. In conclusion, this scenario has shown that robust aggregation methods are not as effective as in the first scenario. Here, no aggregation method is clearly better than the others. \textit{Trimmed mean} offers the best results under label flipping attacks, while \textit{trimmed mean\_2} and \textit{coordinate-wise median} achieve the best results for model canceling and random upload attacks, but still with a significant performance loss compared to no-attack situations.
\section{Summary, Conclusions, and Future Work} \label{sec:conclusions} This work evaluates the robustness of fingerprinting FL models equipped with anti-adversarial mechanisms and able to detect cyberattacks affecting resource-constrained devices. To achieve that goal, this work first creates and makes public an FL-oriented dataset based on fingerprints of Raspberry Pis utilized as spectrum sensors of ElectroSense. The dataset contains samples from eight different SSDF attacks and from two versions of normal behavior, for a total of four physical sensors. After that, four federated scenarios based on anomaly detection and binary classification are created to evaluate and compare the detection performance of privacy-preserving FL models and DL models where data privacy is neglected. The main results of each scenario demonstrate that FL achieves a detection performance that competes with centralized DL approaches without significant limitations. Finally, this work analyzes the impact of different numbers of malicious participants executing data and model poisoning attacks against FL models equipped with different aggregation mechanisms (\textit{FederatedAveraging}, \textit{trimmed mean}, and \textit{coordinate-wise median}). The experiments conducted show that both data poisoning and model poisoning severely affect the federated performance for binary classification as well as anomaly detection. \textit{Trimmed mean} and \textit{coordinate-wise median} can help as long as the percentage of adversaries stays below a certain level, but they cannot guarantee robustness, and their applicability depends on the specific scenario considered. As the main conclusion of this study, FL proves its worth by achieving competitive performance in scenarios where privacy plays an important role, meaning that a centralized setup is not possible. Due to its simplicity in training, its capability to detect zero-day attacks, and possible robustness improvements, anomaly detection appears to be the best solution for the federated detection of data integrity attacks in a crowdsensing platform. Regarding the robustness of FL models and the impact of anti-adversarial mechanisms, the evaluated aggregation functions provide different results depending on the scenario and attack. Generally, \textit{FedAvg} is particularly vulnerable to extreme values, as they distort the global weight average entirely. The \textit{trimmed mean} aggregation may be able to filter extreme values up to some extent, but if there are sufficiently many adversaries, not all malicious weight uploads can be excluded. Further, there is the risk of excluding the weights of honest participants instead of only those of adversaries. Thus, the best number of updates to exclude from the averaging is very difficult to determine for \textit{trimmed mean}. The \textit{coordinate-wise median} aggregation is the most robust against extreme values but can lead to unstable results in heterogeneous federations. As future work, there is still room for further research on robust aggregation mechanisms. In the case of anomaly detection, domain-specific aggregation functions could filter adversaries more effectively by leveraging further knowledge about common distributions and fingerprint patterns of normal behavior. For instance, one could exploit the fact that the threshold of an honest federation participant should lie in a certain range for a given device type. Lastly, a larger dataset could greatly enhance the exploration of the FL use case.
This would allow testing more extensively how heterogeneity and non-IID data influence federated model performance. \section*{Acknowledgment} This work has been partially supported by \textit{(a)} the Swiss Federal Office for Defense Procurement (armasuisse) with the CyberTracer and RESERVE projects (CYD-C-2020003) and \textit{(b)} the University of Zürich UZH. \bibliographystyle{apalike}
{ "timestamp": "2022-02-02T02:06:47", "yymm": "2202", "arxiv_id": "2202.00137", "language": "en", "url": "https://arxiv.org/abs/2202.00137" }
\section{Introduction} Inspired by an article by Grimmett and Marstrand on supercritical percolation in dimension $d\ge 3$, Bezuidenhout and Grimmett have shown in a famous article that the contact process dies out at the critical point. Their proof technique has often been reused to study various growth models. The implementation of their technique is usually quite technical, as it relies on a renormalization procedure with quite complicated events as a basic brick. The purpose of this article is therefore to introduce this technique with growth models for which the implementation is much simpler. Among growth models, the most famous is the Galton--Watson process. The basic theorem concerns the probability of survival as a function of the fertility: except in degenerate cases, survival is possible only if the fertility rate exceeds $1$. The proof that is usually taught -- see for example Benaïm--El Karoui~\cite{BEK} or Durrett~\cite{Durrett} -- is essentially analytic. It relies on generating functions and convexity arguments, which may seem rather frustrating, or at least quite miraculous. We propose here, inspired by the work of Bezuidenhout and Grimmett, to give a proof that is more in line with probabilistic intuition. This gives an introduction to the ideas of Bezuidenhout and Grimmett, with a model that is probably the simplest one can consider. We then continue with the study of the survival problem for an original model, mixing sexual and asexual reproduction. In order to keep our text self-contained (maybe even suitable for a presentation to graduate students), the first section is devoted to the introduction of the Galton--Watson process with all the necessary results. The new proof of the classical result comes in Section~2. Section~3 is devoted to the introduction and the study of a new cooperative model, mixing sexual and asexual reproduction. \section{Galton--Watson processes: definition and first properties} Let $\nu,\mu$ be two distributions on $\ensuremath{\mathbb{N}}$. The distribution $\nu$ is called the offspring distribution, whereas $\mu$ is the distribution of the size of the initial population. We call Galton--Watson process with initial distribution $\mu$ and offspring distribution $\nu$ the Markov chain that starts with $\mu$ as initial distribution and whose transition matrix is given by \begin{equation*} p_{i,j}= \begin{cases} \nu^{*i}(j)&\text{ if }i\ne 0,\\ \delta_0(j)&\text{ if }i=0. \end{cases} \end{equation*} One can build such a chain as follows: let $(X_i^n)_{i,n\ge 1}$ and $Y_0$ be independent random variables with $Y_0\sim\mu$ and $X_i^n\sim\nu$ for every $i,n$. Then, the sequence $(Y_n)_{n\ge 0}$ is recursively defined by $$\forall n\ge 0\quad Y_{n+1}=\sum_{1\le i\le Y_n}X_i^n.$$ Then, $(Y_n)_{n\ge 0}$ is a Galton--Watson process with initial distribution $\mu$ and offspring distribution $\nu$. The mean number of offspring $m=\int_{\ensuremath{\mathbb{N}}} x\ d\nu(x)$ is called the fertility. If we define $\mathcal{F}_n=\sigma(Y_0, X_i^k;\ i\ge 1,\ k<n)$, we have \begin{equation} \label{puissance} \ensuremath{\mathbb{E}}[Y_{n+1}|\mathcal{F}_n]=mY_n,\quad \ensuremath{\mathbb{E}}[Y_{n+1}]=m\ensuremath{\mathbb{E}}[Y_n]\text{ and }\ensuremath{\mathbb{E}}[Y_n]=m^n\ensuremath{\mathbb{E}}[Y_0]. \end{equation} We define the extinction time $\tau$ as follows: $\tau=\inf\{n\ge 0;\ Y_n=0\}$. \begin{theo} \label{mort} If $m<1$, then $\P(\tau>n)=O(m^n)$. In particular, $\P(\tau<+\infty)=1$.
\end{theo} \begin{proof} With~\eqref{puissance}, we have $\P(\tau>n)\le\P(Y_n\ge 1)\le \ensuremath{\mathbb{E}}[Y_n]=m^n\ensuremath{\mathbb{E}}[Y_0]$. \end{proof} \begin{theo} \label{galtonindep} Let $(X_n)_{n\ge 0}$ and $(Y_n)_{n\ge 0}$ be independent Galton--Watson processes with the same offspring distribution $\nu$. Then, $(X_n+Y_n)_{n\ge 0}$ is also a Galton--Watson process with $\nu$ as offspring distribution. \end{theo} \begin{proof} Since $(X_n)_{n\ge 0}$ and $(Y_n)_{n\ge 0}$ are independent Markov chains, $((X_n,Y_n))_{n\ge 0}$ is a Markov chain, with transition matrix $$p_{(x,a),(y,b)}=\nu^{* x}(a)\nu^{* y}(b).$$ Let us denote by $\P^{(x,y)}$ the distributions of the canonically associated Markov chains. We must prove that if the function $f$ is defined by $f(x,y)=x+y$, then $(f(X_n,Y_n))_{n\ge 0}$ is still a Markov chain. To this aim, we apply the Dynkin criterion: it is sufficient to prove that whenever $x+y=r$, $\P^{(x,y)}(f(X_1,Y_1)=\ell)$ only depends on $r$ and $\ell$. Now, under $\P^{(x,y)}$, $X_1$ and $Y_1$ are independent random variables with $\nu^{* x}$ and $\nu^{* y}$ as their respective distributions, so the distribution of $f(X_1,Y_1)$ is $\nu^{* x}* \nu^{* y}=\nu^{* (x+y)}=\nu^{* r}$. Finally, $\P^{(x,y)}(f(X_1,Y_1)=\ell)=\nu^{* r}(\{\ell\})$ and $(X_n+Y_n)_{n\ge 0}$ is a Galton--Watson process with $\nu$ as offspring distribution. Since the initial distribution is $\P_{X_0+Y_0}=\P_{X_0}*\P_{Y_0}=\mu_1*\mu_2$, we get the desired result. \end{proof} In the sequel, $\P^i$ denotes a probability measure under which $(Y_n)_{n\ge 0}$ is a Galton--Watson process with initial distribution $\delta_{i}$ and offspring distribution $\nu$. \begin{coro} We have \begin{itemize} \item For each $n\ge 0$, $\P^n(\tau<+\infty)=\P^{1}(\tau<+\infty)^n$. \item For $n,\ell\ge 0$, $\P^n(\tau<+\infty|\mathcal{F}_{\ell})=\P^{1}(\tau<+\infty)^{Y_{\ell}}$. \item For $n,\ell\ge 1$, we have $\P^n(\tau=+\infty)>0 \iff \P^{\ell}(\tau=+\infty)>0$. \end{itemize} \end{coro} \begin{proof} Thanks to Theorem~\ref{galtonindep}, we have $$\P^{n+1}(\tau<+\infty)=\P^{n}(\tau<+\infty)\P^{1}(\tau<+\infty),$$ and $\P^{n}(\tau<+\infty)=\P^{1}(\tau<+\infty)^n$ follows by induction. This gives the first item. The second item then follows from the Markov property, and the last point is an immediate consequence of the first. \end{proof} \begin{coro} \label{souschaine} Let $T\ge 1$. Then $(Y_{Tn})_{n\ge 0}$ is a Galton--Watson process with offspring distribution $\P^1_{Y_T}$. \end{coro} \begin{proof} Since $(Y_n)$ is a Markov chain, it is well known that so is $(Y_{Tn})_{n\ge 0}$. Let us compute the transition probabilities. Let $k\ge 1$. Applying Theorem~\ref{galtonindep} ($k-1$ times), we see that if the processes $(Y^1_t)_{t\ge 0}, (Y^2_t)_{t\ge 0},\dots, (Y^k_t)_{t\ge 0}$ are independent Galton--Watson processes with $\delta_1$ as their common initial distribution and $\nu$ as offspring distribution, then $(Y^1_t+\dots+Y^k_t)_{t\ge 0}$ is a Galton--Watson process with $\delta_{k}$ as initial distribution and $\nu$ as offspring distribution. Then, $$\P^k(Y_T=\ell)=\P(Y^1_T+\dots+Y^k_T=\ell)=\P_{Y^1_T}^{*k}(\ell).$$ Also, $\P^0(Y_T=\ell)=\delta_0(\ell)$: this gives the desired result. \end{proof}
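Before turning to the proofs, the reader may wish to observe these facts empirically, in the spirit of the Julia code given in the appendix. The following small simulation sketch (here in Python; the Poisson offspring distribution and all numerical parameters are arbitrary illustrative choices) estimates the probability that the process is still alive after many generations, for several values of the fertility $m$:
\begin{verbatim}
# Estimate P(Y_n > 0 at a large horizon) for a Galton--Watson
# process with Poisson(m) offspring, started from one individual.
import numpy as np

rng = np.random.default_rng(0)

def alive(m, horizon=200, cap=10**6):
    y = 1
    for _ in range(horizon):
        if y == 0:
            return False
        if y > cap:            # population has exploded: survival
            return True
        y = rng.poisson(m, size=y).sum()
    return y > 0

for m in (0.9, 1.0, 1.1):
    print(m, np.mean([alive(m) for _ in range(5000)]))
# Expected: ~0 for m < 1, small for m = 1, clearly positive for m > 1.
\end{verbatim}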
\section{A probabilistic proof} In the first step of the proof, we show that a certain growth process may survive, with the idea that the process that we finally want to study will be compared to this surviving reference process. In the present paper, the reference process is a Galton--Watson process too. In general, however, the reference process may belong to a related family: for example, Bezuidenhout and Grimmett compared the contact process to a supercritical oriented percolation process. \subsection{Survival in the supercritical phase} \begin{theo} \label{surcritique} If $m>1$, then $\P^{1}(\tau=+\infty)>0$. \end{theo} \begin{proof} Let $a$ be such that $1<a<m$. We have $$\miniop{}{\lim}{M\to +\infty} \int x\wedge M \ d\nu= \int x \ d\nu=m,$$ so there exists $M$ with $\int x\wedge M \ d\nu>a$. For $k\ge n$, we have \begin{align*} \P^k(Y_1<na)&=\P(X_1+\dots+X_k<na)\\&\le \P(X_1\wedge M+\dots+X_n\wedge M<na)\\& =\P\big(n\ensuremath{\mathbb{E}}[X_1\wedge M]-(X_1\wedge M+\dots+X_n\wedge M)> (\ensuremath{\mathbb{E}}[X_1\wedge M]-a)n\big)\\& \le\frac{\text{Var}(X_1\wedge M)}{(\ensuremath{\mathbb{E}}[X_1\wedge M]-a)^2n}, \end{align*} by the Chebyshev inequality. Let us define $\phi(k,x)=\P^k(Y_1<x)$ and consider $n>c=\frac{\text{Var}(X_1\wedge M)}{(\ensuremath{\mathbb{E}}[X_1\wedge M]-a)^2}$.\\ Let $t\ge 0$. By the Markov property, for each $A\in \mathcal{F}_t$ with $A\subset\{Y_t\ge n\}$, we can write \begin{align*} \P(A\cap \{Y_{t+1}<an\})&= \ensuremath{\mathbb{E}}[\mathbbo{1}_A \mathbbo{1}_{\{Y_{t+1}<an\}}]=\ensuremath{\mathbb{E}}[\mathbbo{1}_A\ensuremath{\mathbb{E}}[\mathbbo{1}_{\{Y_{t+1}<an\}}|\mathcal{F}_t]]\\ &=\ensuremath{\mathbb{E}}[\mathbbo{1}_A \phi(Y_t,an)]\le \ensuremath{\mathbb{E}} [\mathbbo{1}_A c/n]=\frac{c}n\P(A), \end{align*} so $\P(Y_{t+1}\ge an| A)\ge 1-\frac{c}n$.\\ By induction, it follows that for $A_t=\miniop{t}{\cap}{i=1}\{Y_{i}\ge na^i\}$, we have $$\P^n(A_t)\ge\miniop{t-1}{\prod}{i=0}\left(1-\frac{c}{na^i}\right),$$ hence $\P^n(\tau=+\infty)\ge\P^n(\forall t\ge 0\quad Y_{t}\ge na^t)\ge\prod_{i=0}^{+\infty} \left(1-\frac{c}{na^i}\right)>0.$ \end{proof} Some remarks: \begin{itemize} \item Obviously, the bound $1-\frac{c}n$ is rather crude, coming from the Chebyshev inequality. We could do better with the Hoeffding inequality, but this is sufficient for our purpose. \item The same pattern can be applied to show that survival is possible for a multitype Galton--Watson process whose fertility matrix has a spectral radius strictly greater than~1 (see for example~\cite{Garet-livre}). \end{itemize} \subsection{Survival is a local property} \begin{theo} \label{equivalence} Let $(Y_n)_{n\ge 0}$ be a Galton--Watson process with offspring distribution~$\nu$. Suppose that $\nu(0)>0$. Then the following are equivalent: \begin{itemize} \item $\exists N,T\ge 1\quad \P^N(Y_T\ge 2N)>\frac12$; \item $\P^1(\tau=+\infty)>0$. \end{itemize} \end{theo} The event $\{Y_T\ge 2N\}$ only depends on what happens in a finite time box. Thus, it can be considered a local event, which will be useful to get some continuity with respect to the parameters of the model. \\ Before starting the proof, let us give the main ideas: \begin{itemize} \item For the direct implication, the idea is to compare the chain with a supercritical Galton--Watson process, then conclude with the help of Theorem~\ref{surcritique}. \item The reverse implication is quite simple, because one essentially has to prove that the number of particles explodes as soon as the process survives. However, it must be kept in mind that if the local event is more complicated, this part will actually be the most difficult one. \end{itemize} \begin{lemme} If there exist $a>0$ and $N\ge 1$ such that $a\P^N(Y_1\ge aN)>1$, then $\P^1(\tau=+\infty)>0$. \end{lemme} \begin{proof} Let $X_i^n$ be i.i.d. with $\nu$ as common distribution.
Let $M_0=1$, $Y_0=N$, and then $$\forall n\ge 0\quad Y_{n+1}=\sum_{1\le i\le Y_n}X_i^n\text{ and }M_{n+1}=\sum_{i=1}^{M_n} aB_i^n,$$ with $B_i^n=\mathbbo{1}_{\{X^n_{(i-1)N+1}+\dots+X^n_{iN}\ge aN\}}$.\\ We prove by induction that $Y_n\ge NM_n$ for each $n\ge 0$. Indeed, if $Y_n\ge NM_n$, then \begin{align*} Y_{n+1}=\sum_{1\le i\le Y_n}X_i^n\ge \sum_{1\le i\le NM_n}X_i^n&=\sum_{i=1}^{M_n}(X^n_{(i-1)N+1}+\dots+X^n_{iN})\\&\ge \sum_{i=1}^{M_n} aNB_i^n=NM_{n+1}. \end{align*} We note that $(M_n)$ is a Galton--Watson process whose fertility is given by\\ $m=\ensuremath{\mathbb{E}}[aB_i^n]=a\P^N(Y_1\ge aN)>1$, so it may survive by Theorem~\ref{surcritique}. Since $Y_n\ge NM_n$, the process $(Y_n)$ may survive too. \end{proof} Note that the proof of the lemma relies on a coupling argument: we construct on the same probability space the process $(Y_n)_{n\ge 0}$ and a Galton--Watson process with offspring distribution $(1-q)\delta_0+q\delta_{a}$, where $q=\P^N(Y_1\ge aN)$.\\ This step can be seen as a static renormalization: with the help of the local events $\{X^n_{(i-1)N+1}+\dots+X^n_{iN}\ge aN\}$, we build a growth process involving Bernoulli variables, in such a way that \begin{itemize} \item the process using Bernoulli variables is known to be able to survive; \item the process using Bernoulli variables is dominated by the process that we study. \end{itemize} \begin{proof}[Proof of Theorem~\ref{equivalence}] By Corollary~\ref{souschaine}, $(Y_{nT})_{n\ge 0}$ is a Galton--Watson process, so we can apply the Lemma with $a=2$: $(Y_{nT})_{n\ge 0}$ may survive, thus $(Y_{n})_{n\ge 0}$ may survive as well. Conversely, let us suppose that $\nu(0)>0$ and $\P^1(\tau<+\infty)<1$.\\ Since $\P^N(\tau<+\infty)=\P^1(\tau<+\infty)^N$, there exists $N$ with $\P^N(\tau<+\infty)<1/2$. We have noted that $\P^N(\tau<+\infty|\mathcal{F}_t)=\P^1(\tau<+\infty)^{Y_t}$.\\ Since $\P^1(\tau<+\infty)\ge \P^1(Y_1=0)=\nu(0)>0$, we can write $$Y_t=\frac{\log \P^N(\tau<+\infty|\mathcal{F}_t)}{\log \P^1(\tau<+\infty)}.$$ Now, the martingale convergence theorem ensures that $$\ensuremath{\mathbb{E}}^N[\mathbbo{1}_{\{\tau<+\infty\}}|\mathcal{F}_t]=\P^N(\tau<+\infty|\mathcal{F}_t)\to \mathbbo{1}_{\{\tau<+\infty\}}\quad\P^N\text{ a.s.}$$ when $t$ tends to infinity. \\ In particular, on the event $\{\tau=+\infty\}$, $\P^N(\tau<+\infty|\mathcal{F}_t)$ almost surely tends to $0$, and $Y_t$ almost surely tends to infinity. Therefore, the following inequality holds $\P^N$-almost surely: $$\mathbbo{1}_{\{\tau=+\infty\}}\le \miniop{}{\liminf}{t\to +\infty}\mathbbo{1}_{\{Y_t\ge 2N\}}.$$ With Fatou's lemma, it follows that $$\P^N(\tau=+\infty)=\ensuremath{\mathbb{E}}^N(\mathbbo{1}_{\{\tau=+\infty\}})\le \miniop{}{\liminf}{t\to +\infty}\ensuremath{\mathbb{E}}^N[\mathbbo{1}_{\{Y_t\ge 2N\}}]= \miniop{}{\liminf}{t\to +\infty}\P^N(Y_t\ge 2N).$$ Since $\P^N(\tau=+\infty)>1/2$, there exists $T$ such that $\P^N(Y_T\ge 2N)>1/2$. \end{proof} \subsection{The critical case} \begin{theo} If $\nu(0)>0$ and $m=1$, then $\P^1(\tau=+\infty)=0$. \end{theo} \begin{proof}[First proof] It is sufficient to note that for every $N,T\ge 1$, we have $$\P^N(Y_T\ge 2N)\le\frac{\ensuremath{\mathbb{E}}^N[Y_T]}{2N}=\frac{N}{2N}=\frac12,$$ and then to apply the converse part of Theorem~\ref{equivalence}. \end{proof} We now present another line of proof, somewhat longer, but also more robust. It was used in Garet--Marchand~\cite{GM-BRW} and Gantert--Junk~\cite{Gantert} for the study of some branching random walks.
The first proof is not robust because it exploits the fact that we know exactly how to characterize the critical parameter for survival. However, in many growth models, the critical parameter cannot be given explicitly. The idea is then the following: having shown that survival is characterized by the fact that a local event has a sufficiently high probability, we reason by contradiction and suppose that there is survival at the critical point for a certain parameter. Then, with a slight modification of the local event, we can, by continuity, exhibit a model of the same family that is a little weaker, for which the local event still has a probability that is large enough to ensure survival, but which must nevertheless die out because its parameter has become subcritical. \begin{proof}[Second proof] By contradiction, let us assume that we have $\nu(0)>0$, $m=1$ and also $\P^1(\tau=+\infty)>0$. By Theorem~\ref{equivalence} (converse implication), one can choose $N$ and $T$ such that $\P^N(Y_T\ge 2N)>\frac12$. The idea is to provide a coupling with a subcritical process. Let $(X_i^n)_{i,n\ge 1}$ and $(B_i^n)_{i,n\ge 1}$ be independent variables, where $X_i^n\sim \nu$ and the $(B_i^n)_{i,n\ge 1}$'s are Bernoulli with parameter $p$. Define $Y_0=N$, $Y^p_0=N$, and then $$\forall n\ge 0\quad Y_{n+1}=\sum_{1\le i\le Y_n}X_i^n\text{ and }Y^p_{n+1}=\sum_{1\le i\le Y^p_n}B_i^n X_i^n.$$ By monotonicity, $$\miniop{}{\lim}{M\to +\infty}\P^N( \max(Y_i,0\le i\le T)\le M,Y_T\ge 2N)=\P^N( Y_T\ge 2N)>1/2,$$ so there exists $M$ such that $\P( \max(Y_i,0\le i\le T)\le M,Y_T\ge 2N)>1/2$. We then have \begin{align*} \P(Y^p_T\ge 2N)&\ge \P(Y_T\ge 2N,\forall i\le T\quad Y^p_i=Y_i)\\ & \ge \P\left( \begin{array}{l}\max(Y_i,0\le i\le T)\le M,Y_T\ge 2N,\\\forall (t,i)\in\{0,\dots, T-1\}\times\{1,\dots,M\} \quad B_i^t=1\end{array}\right)\\&=\P( \max(Y_i,0\le i\le T)\le M,Y_T\ge 2N)\,p^{TM}. \end{align*} Taking $p<1$ close enough to $1$, we have $$\P( \max(Y_i,0\le i\le T)\le M,Y_T\ge 2N)p^{TM}>1/2,$$ so $\P(Y^p_T\ge 2N)>1/2$. But $(Y^p_t)$ is a Galton--Watson process with the distribution of $B_1^1X_1^1$ as offspring distribution and $\delta_N$ as initial distribution, so by Theorem~\ref{equivalence} (direct implication), this Galton--Watson process may survive. However, $$\ensuremath{\mathbb{E}}[B_1^1X_1^1]=\ensuremath{\mathbb{E}}[B_1^1]\ensuremath{\mathbb{E}}[X_1^1]=pm=p<1,$$ so by Theorem~\ref{mort}, the process cannot survive. This is a contradiction.
\end{proof} \input{modele-eng} \section*{Appendix: source code in Julia} \begin{Julia}
using AbstractAlgebra

# Exact computation of h(p,q): the expected value of min(s,t),
# where s is the sum of 2 Bernoulli(p) and t of 4 Bernoulli(q).
function compute_proba(p,q)
    ex=0
    for a=0:1,b=0:1,c=0:1,d=0:1,e=0:1,f=0:1
        s=a+b
        t=c+d+e+f
        z=p^s*(1-p)^(2-s)*q^t*(1-q)^(4-t)
        m=min(s,t)
        ex+=m*z
    end
    return(ex)
end

A,(p,q)=PolynomialRing(ZZ,["p"; "q"])
chaine="h(p,q)="*repr(compute_proba(p,q))
println(chaine)
eval(Meta.parse(chaine))
# from now on
# h(p,q)=4*p^2*q^4-12*p^2*q^3+12*p^2*q^2-
# 4*p^2*q-2*p*q^4+8*p*q^3-12*p*q^2+8*p*q

using Plots
using Distributed
using Distributions
using DistributedArrays
@everywhere using Distributions
println(workers())

# Monte Carlo estimate of the probability that both populations
# are still alive; survival is declared once a population size
# reaches the cutoff 'survie'.
@everywhere function montecarlo(N,survie,p,q=p/2)
    s=0
    for i=1:N
        a=Integer(1)
        b=Integer(1)
        while (a>0) && (b>0) && (b<survie) && (a<survie)
            distA=Binomial(2*(a+b),q)
            distB=Binomial(2*min(a,b),p)
            a=rand(distA,1)[1]
            b=rand(distB,1)[1]
        end
        s+=(a>0) && (b>0)
    end
    return(s/N)
end

pas=0.00125
NMC=1000
interv=0:pas:1
survie=@DArray [montecarlo(NMC,10^8,i,j) for i=interv, j=interv]
survie_simul=convert(Array,survie)
heatmap(interv,interv,survie_simul,ratio=1,xlabel="q",
        ylabel="p",c=reverse(cgrad(:default)),size=(1200,800))
savefig("survival_both_species_without_dot.png")

# Bisection search for the critical q solving h(p,q)=1 at fixed p.
function q_critique(pp)
    qmin=0
    qmax=1
    milieu=0.5
    while ((qmax-qmin)>10^(-12))
        milieu=(qmin+qmax)/2
        if (h(pp,milieu)<1)
            qmin=milieu
        else
            qmax=milieu
        end
    end
    return(milieu)
end

y=0.5:0.01:1
plot!(q_critique.(y),y,linewidth=2,linestyle=:dash,
      color=:green,label="h(p,q)=1")
savefig("survival_2_species_with_dot_and_legend.png")
\end{Julia} \def\refname{References} \bibliographystyle{plain}
{ "timestamp": "2022-02-02T02:12:32", "yymm": "2202", "arxiv_id": "2202.00256", "language": "en", "url": "https://arxiv.org/abs/2202.00256" }
\section{Introduction} Web pages or documents are the most common and powerful source for humans to acquire knowledge. There are billions of websites that contain rich information about various objects. For example, Figure \ref{fig:example} shows a web page describing an event, which contains structured event information including the event title, description, date, time and location. Large-scale web data becomes increasingly essential to facilitate new experiences in applications like web search and retrieval, enabling smart assistants to do complex tasks such as ``locating kid-friendly events in San Francisco this weekend'' and ``exploring Nike running shoes less than $\$$50''. Therefore, extracting structured information from web pages is an important research problem. Structure information extraction from the web ~\cite{CrescenziM04,CarlsonS08,HaoCPZ11,ManabeT15,ZhanZ20} is a challenging task due to the unstructured nature of textual data and the diverse layout patterns of web documents ~\cite{SleimanC13,MajumderPTWZN20}. There has been a lot of interest in this topic, and a plethora of research ~\cite{Yang00BN20,ChengQSH020,ZhangYXHLLLS21,TangXJWCXWWL21} in this area, both in academia and industry. Among the early works, template/wrapper induction ~\cite{DalviKS11,ProskurniaCPKWK17,LockardDSE18} has proven to be successful for extracting information from web documents. However, these techniques do not scale to the whole web, as obtaining accurate ground truth for all domains is expensive. Moreover, the wrappers go out-of-date quickly because page structures change frequently, and thus require periodic updating. One also needs to generate new templates for new domains. Recently, learning-based models ~\cite{GogarHS16,WangKGS19} have been proposed for automatic information extraction. These methods use schema.org markup \cite{TempelmeierDD18} as supervision to build machine-learned extractors for different fields. Most recently, with the advance of natural language processing ~\cite{VaswaniSPUJGKP17,DevlinCLT19,abs-2004-05150}, language models with sequence modeling ~\cite{abs-2107-06955,WangWTJMDH21} have been applied to web document information extraction. These approaches first serialize the web document into a sequence of words, and then use RNN/LSTM \cite{ZhengMD018,Lin0VT20,abs-2101-02415} or attention networks ~\cite{XuXL0WWLFZCZZ20,HwangYPYS21} to extract the text spans corresponding to the structured fields from the sequence. Although existing natural language models achieve promising results on web information extraction, there are several major limitations. First, the structural HTML layout, which contains important information about the relations between different text fields, has not been fully exploited. For example, in an event page, the event date and location are naturally correlated, forming sibling nodes in the HTML (see Figure \ref{fig:example}). In a shopping page, the product price is often mentioned right after the product title on the page. Therefore, encoding the structural HTML beyond sequential modeling is essential in web document extraction. Second, most existing models do not scale up to a large number of fields across domains. They build one separate model for each text field, which is not suitable for large-scale extraction and cannot generalize to new domains. Third, large web documents with long sequences are not modeled effectively.
Attention networks, such as Transformer-based models, usually limit their input to 512 tokens due to the quadratic computational cost in the sequence length. In this paper, we propose WebFormer, a novel Web-page transFormer model that incorporates the HTML layout into the representation of the web document for structure information extraction. WebFormer encodes the field, the HTML and the text sequence in a unified Transformer model. Specifically, we first introduce an HTML token for each DOM node in the HTML. We then design rich attention patterns for computing embedding representations among all the tokens. WebFormer leverages the web layout structure for more effective attention weight computation, and therefore explicitly recovers both local syntactic and global layout information of the web document. We evaluate WebFormer on the SWDE and Common Crawl benchmarks, where it shows superior performance over several state-of-the-art methods. The experimental results also demonstrate the effectiveness of WebFormer in modeling long sequences for large web documents. Moreover, we show that WebFormer is able to extract information on new domains. We summarize the main contributions as follows: \begin{itemize} \item We propose a novel WebFormer model for structure information extraction from web documents, which effectively integrates the web HTML layout via graph attention. \item We introduce a rich attention mechanism for computing embedding representations among different types of tokens, which enables the model to encode long sequences efficiently. It also empowers the model for zero-shot extraction on new domains. \item We conduct extensive experiments and demonstrate the effectiveness of the proposed approach over several state-of-the-art baselines. \end{itemize} \section{Related Work} \subsection{Information Extraction} Early studies of extracting information from web pages mainly focus on building templates over the HTML DOM tree, known as wrapper induction ~\cite{CohenHJ02,KimS11}. Template extraction techniques have been applied to improve the performance of search engines, and the clustering and classification of web pages. They learn desired patterns from unstructured web documents and construct templates for information extraction. Region extraction methods ~\cite{ChangKGS06,SleimanC13} classify portions of a web page according to their specific purposes, e.g., whether a text node is the title field. Foley et al. ~\cite{FoleyBJ15} use a simple naive-Bayes classifier for the web page and SVM methods to score each field. Wang et al. ~\cite{WangKGS19} extend this work by designing deep neural network models with well-designed visual features like font sizes, element sizes, and positions. \begin{figure*} \begin{center} \includegraphics[width=0.84\linewidth]{overview_webformer_2.png} \end{center} \caption{The WebFormer model architecture.} \label{fig:overview} \end{figure*} Recently, an increasing number of works develop natural language models with sequence modeling ~\cite{HuangXY15,MaH16,abs-2101-09465,Lin0VT20,abs-2102-09550,abs-2101-02415} for web information extraction. Zheng et al. \cite{ZhengMD018} develop an end-to-end tagging model utilizing BiLSTM, CRF, and an attention mechanism without any dictionary. Aggarwal et al. \cite{AggarwalGSK20} propose a sequence-to-sequence RNN model that leverages the relative spatial arrangement of structures. Aghajanyan et al.
\cite{abs-2107-06955} train a hyper-text language model based on BART \cite{BART} on a large-scale web crawl for various downstream tasks. More recently, several attribute extraction approaches ~\cite{XuWMJL19,WangYKSSSYE20,Amazon2} have been proposed, which treat each field as an attribute of interest and extract its corresponding value from a clean object context such as the web page title. Chen et al. \cite{abs-2101-09465} formulate web information extraction as structural reading comprehension and build a BERT \cite{DevlinCLT19} based model to extract structured fields from web documents. It is worth mentioning that there are also methods for multimodal information extraction ~\cite{YangYAKKG17,XuWLZM21,WangSLHDJ21,WangWTJMDH21}, which focus on extracting field information from the visual layout or the rendered HTML of the web documents. \subsection{Relation Learning} Research on relation extraction/learning ~\cite{ZhengLWYZ16,HeCLZZZ18,LiLSZYHJ20,LiuCWZLX20,LockardSDH20,XuCZ21} is also related to our work. Relation extraction refers to the task of extracting relational tuples and storing them in a knowledge base. Web information extraction can be viewed as the special case where the subject (the web document) is known, and the task is to extract the text corresponding to a given field (the relation). However, relation extraction has traditionally focused on extracting relations from sentences, relying on entity linking systems to identify the subject/object and building models to learn the predicates in a sentence ~\cite{LevySCZ17,0005FC0S19}. In structure information extraction, by contrast, the predicates (the fields) rarely occur verbatim in the web documents, and entity linking is very hard because the domain of all entities is unknown. \section{WebFormer} \subsection{Problem Definition} We formally define the problem of structure information extraction from web documents. The web document is first processed into a sequence of text nodes and the HTML DOM tree. We denote the text sequence of the web document as $T = (t_1, t_2, \dots, t_k)$, where $t_i$ represents the $i$-th text node, $k$ is the total number of text nodes, and $t_i=(w_{i_1},w_{i_2},\dots,w_{i_{n_i}})$ consists of $n_i$ words/tokens. Note that the ordering of the text nodes does not matter in our model; one can traverse the DOM tree in any order to obtain all the text nodes. Denote the DOM tree of the HTML as $G = (V, E)$, where $V$ is the set of DOM nodes in the tree and $E$ is the set of edges (see top left in Figure \ref{fig:overview}). Note that the $k$ text nodes are connected through this DOM representation of the HTML, which captures the layout of the web document. The goal of structure information extraction is, given a set of target fields $F=(f_1,\dots,f_m)$, to extract their corresponding text information from the web document. For example, for the field ``date'', we aim to extract the text span ``Dec 13'' from the web document. Formally, the problem is to find the best text span $\bar{s}_j$ for each field $f_j$, given the web document $T$ and $G$: \[\bar{s}_j = \argmax_{b_j,e_j} \ \Pr(w_{b_j}, w_{e_j} \mid f_j, T, G)\] where $b_j$ and $e_j$ are the begin and end offsets of the extracted text span in the web document for field $f_j$.
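To make the preprocessing step concrete, the following is a minimal sketch of constructing the text-node sequence $T$ and the DOM tree $G$ from raw HTML. It uses the LXML library (the same library as in our data pre-processing; see the Implementation Detail section), but the snippet is illustrative rather than our production pipeline; in particular, it ignores tail text and attributes.
\begin{verbatim}
# Illustrative sketch (not our production pipeline): build the text-node
# sequence T and the DOM tree G from raw HTML with lxml. Tail text and
# attributes are ignored for brevity.
from lxml import etree

html = ("<html><body><div><h1>Fun Family Fest</h1>"
        "<p>Dec 13</p></div></body></html>")
root = etree.fromstring(html, parser=etree.HTMLParser())

text_nodes = []   # T = (t_1, ..., t_k), each t_i a list of word tokens
edges = []        # E: (parent, child) tag pairs of the DOM tree G
for node in root.iter():
    for child in node:
        edges.append((node.tag, child.tag))
    if node.text and node.text.strip():
        text_nodes.append(node.text.split())
# text_nodes -> [['Fun', 'Family', 'Fest'], ['Dec', '13']]
\end{verbatim}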
\subsection{Approach Overview} Existing sequence modeling methods either directly model the text sequence of the web document ~\cite{Lin0VT20,WangYKSSSYE20} or serialize the HTML together with the text in a certain order ~\cite{abs-2101-09465,abs-2101-02415} to perform span-based text extraction. In this work, we propose to simultaneously encode the text sequence with a Transformer model and incorporate the HTML layout structure via graph attention. The overall model architecture is shown in Figure \ref{fig:overview}. Our WebFormer model consists of three main components: the input layer, the WebFormer encoder and the output layer. The input layer constructs the input tokens of WebFormer as well as their embeddings, including the field token, the HTML tokens from the DOM tree $G$ and the text tokens from the text sequence $T$. The WebFormer encoder is the main block; it encodes the input sequence with rich attention patterns, including HTML-to-HTML (H2H), HTML-to-Text (H2T), Text-to-HTML (T2H) and Text-to-Text (T2T) attention. In the output layer, the text span corresponding to the field is computed from the encoded field-dependent text embeddings. We present the details of each component in the following subsections. \subsection{Input Layer} Most previous sequence modeling approaches ~\cite{AggarwalGSK20,Amazon2} only encode the text sequence of the web document without utilizing the HTML layout structure. In this work, we jointly model the text sequence and the HTML layout in a unified Transformer model. In particular, we introduce three types of tokens in the input layer of WebFormer. \noindent\textbf{Field token} A set of field tokens is used to represent the text field to be extracted, such as ``title'', ``company'' and ``base salary'' for a job page. By jointly encoding the text field, we are able to use a single model across all text fields. \noindent\textbf{HTML token} Each node in the DOM tree $G$, including both internal nodes (non-text nodes) and text nodes, corresponds to an HTML token in WebFormer. The embedding of an HTML token can be viewed as a summarization of the sub-tree rooted at this node. For example, in Figure \ref{fig:overview}, the embedding of the ``$<$$html$$>$'' token essentially represents the full web document and can be used for page-level classification. Similarly, the embedding of the text node ``$<$$p_2$$>$'' summarizes the text sequence $t_4$. \noindent\textbf{Text token} This is the commonly used word representation in natural language models. For example, $t_1$ contains three words, ``Fun'', ``Family'' and ``Fest'', which correspond to three text tokens. In the input layer, every token is converted into a $d$-dimensional embedding vector. For field and text tokens, the final embedding is obtained by concatenating a word embedding and a segment embedding; for HTML tokens, it is obtained by concatenating a tag embedding and a segment embedding. The word embedding is widely adopted in the literature \cite{MikolovSCCD13}. The segment embedding indicates which type a token belongs to, i.e., field, HTML or text. The tag embedding represents the different HTML tags of the DOM nodes, e.g., ``$div$'', ``$head$'', ``$h1$'', ``$p$'', etc. Note that all the embeddings in our approach are trainable: the word embeddings are initialized from a pretrained language model, while the segment and tag embeddings are randomly initialized.
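As an illustration of the input layer, the sketch below builds token embeddings by concatenation. All dimensions, the random initialization and the segment-id convention are placeholders; in the model, word embeddings come from pretrained BERT-base and all tables are trained.
\begin{verbatim}
import numpy as np

# Illustrative input-layer sketch: final embeddings are concatenations of a
# word (or tag) embedding and a segment embedding. Sizes are placeholders.
VOCAB, N_TAGS, N_SEGMENTS = 30522, 128, 3   # segments: 0=field, 1=HTML, 2=text
D_WORD, D_SEG = 512, 256                    # concatenated dimension d = 768

rng = np.random.default_rng(0)
word_emb = rng.standard_normal((VOCAB, D_WORD))
tag_emb = rng.standard_normal((N_TAGS, D_WORD))
seg_emb = rng.standard_normal((N_SEGMENTS, D_SEG))

def embed_text_token(word_id, segment=2):   # segment=0 for field tokens
    return np.concatenate([word_emb[word_id], seg_emb[segment]])

def embed_html_token(tag_id):
    return np.concatenate([tag_emb[tag_id], seg_emb[1]])
\end{verbatim}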
\subsection{WebFormer Encoder} The WebFormer encoder is a stack of $L$ identical contextual layers, each of which connects the field, HTML and text tokens through rich attention patterns followed by a feed-forward network. The encoder produces effective contextual representations of web documents. To capture the complex HTML layout together with the text sequence, we design four attention patterns: 1) HTML-to-HTML (H2H) attention, which models the relations among HTML tokens via graph attention; 2) HTML-to-Text (H2T) attention, which bridges an HTML token with its corresponding text tokens; 3) Text-to-HTML (T2H) attention, which propagates information from the HTML tokens to the text tokens; and 4) Text-to-Text (T2T) attention with relative position representations. Moreover, WebFormer incorporates the field into the encoding layers to extract the text span for the field. \subsubsection{HTML-to-HTML Attention} The HTML tokens are naturally connected via the DOM tree graph. The H2H attention computes attention weights among the HTML tokens and transfers knowledge from one node to another with graph attention \cite{VelickovicCCRLB18}. We use the original graph $G$ representing the DOM tree structure of the HTML in the H2H attention calculation. In addition, we add edges connecting sibling nodes, which is equivalent to including certain neighbors at edge distance 2 in the graph. For example, the HTML token ``$<$$div1$$>$'' is connected to itself, its parent token ``$<$$body$$>$'', its child tokens ``$<$$div2$$>$'' and ``$<$$h3$$>$'', and its sibling token ``$<$$img$$>$''. Formally, given the HTML token embedding $x_i^H$, the H2H graph attention is defined as: \[\alpha^{H2H}_{ij} = \frac{\exp(e^{H2H}_{ij})}{\sum_{\ell\in \mathcal{N}(x_i^H)} \exp(e^{H2H}_{i\ell})}, \quad \text{for } j \in \mathcal{N}(x_i^H)\] \[e^{H2H}_{ij} = \frac{x^H_i W_Q^{H2H} (x^H_j W_K^{H2H} + a_{ij}^{H2H})^T}{\sqrt{d}}\] \noindent where $\mathcal{N}(x_i^H)$ denotes the neighbors of the HTML token $x_i^H$ in the graph, $W_Q^{H2H}$ and $W_K^{H2H}$ are learnable weight matrices, $a_{ij}^{H2H}$ are learnable vectors representing the edge type between the two nodes (parent, child or sibling), and $d$ is the embedding dimension. \subsubsection{HTML-to-Text Attention} The H2T attention is only computed for the text nodes in the HTML to update their contextual embeddings. We adopt a full attention pattern where the HTML token $x_i^H$ attends to each of its text tokens $x_j^T$ in $t_i$. For example, in Figure \ref{fig:overview}, the HTML token ``$<$$p_2$$>$'' attends to all three text tokens in $t_4$, i.e., ``Spark'', ``Social'' and ``SF''. The H2T full attention is defined as: \[\alpha^{H2T}_{ij} = \frac{\exp(e^{H2T}_{ij})}{\sum_{\ell\in t_i} \exp(e^{H2T}_{i\ell})}, \quad \text{for } j \in t_i\] \[e^{H2T}_{ij} = \frac{x^H_i W_Q^{H2T} (x^T_j W_K^{H2T})^T}{\sqrt{d}}\] \noindent where $W_Q^{H2T}$ and $W_K^{H2T}$ are the weight matrices of the H2T attention. \subsubsection{Text-to-HTML Attention} In T2H attention, each text token attends to every HTML token. Intuitively, this allows a text token to absorb the high-level representations from these summarization tokens of the web document. The formulation of the T2H attention is analogous to the H2T attention above, except that each text token attends to all HTML tokens.
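The sketch below illustrates the neighborhood-restricted H2H attention for a single head, assuming a dense edge-type matrix; it is a didactic NumPy rendering of the equations above, not an efficient implementation.
\begin{verbatim}
import numpy as np

# Didactic single-head H2H graph attention. adj[i, j] holds an edge-type id
# (0: no edge, 1: self, 2: parent, 3: child, 4: sibling).
def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def h2h_attention(X, adj, W_Q, W_K, W_V, a):
    # X: (n, d) HTML token embeddings; a: (n_edge_types, d) edge-type vectors
    n, d = X.shape
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    Z = np.zeros_like(V)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]        # N(x_i): attend to neighbors only
        e = np.array([Q[i] @ (K[j] + a[adj[i, j]])
                      for j in nbrs]) / np.sqrt(d)
        Z[i] = softmax(e) @ V[nbrs]         # weighted sum of neighbor values
    return Z
\end{verbatim}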
\subsubsection{Text-to-Text Attention} The T2T attention is the regular attention mechanism used in various previous models ~\cite{VaswaniSPUJGKP17,DevlinCLT19}, which learns contextual token embeddings for the text sequence. However, the computational cost of traditional full attention grows quadratically with the sequence length, which limits the number of text tokens. Inspired by ~\cite{ShawUV18,ShawMCPA19}, our T2T attention adopts a relative attention pattern with relative position encodings, where each text token attends only to text tokens within the same text sequence and within a local radius $r$. In Figure \ref{fig:overview}, the local radius $r$ is set to 1, meaning each token attends only to its left and right neighbors and to itself. For instance, the text token ``is'' in $t_2$ attends to the tokens ``This'', ``is'' and ``a'' within $t_2$. The T2T relative attention is formally defined as: \[\alpha^{T2T}_{ij} = \frac{\exp(e^{T2T}_{ij})}{\sum_{i-r\le\ell\le {i+r}} \exp(e^{T2T}_{i\ell})}, \quad \text{for } i-r \le j \le i+r\] \[e^{T2T}_{ij} = \frac{x^T_i W_Q^{T2T} (x^T_j W_K^{T2T} + b_{i-j}^{T2T})^T}{\sqrt{d}}\] \noindent where $W_Q^{T2T}$ and $W_K^{T2T}$ are the weight matrices of the T2T attention, and $b_{i-j}^{T2T}$ are learnable relative position encodings representing the relative position between the two text tokens. Note that there are in total $2r+1$ possible relative position encodings, i.e., ${(i-j)} \in\{-r,\dots,-1,0,1,\dots,r\}$. \subsubsection{Field Token Attention} Our WebFormer model jointly encodes the field information such that all structured fields share a single encoder. Following ~\cite{XuWMJL19,WangYKSSSYE20}, we introduce field tokens into WebFormer and enable full cross-attention between field and HTML tokens. Note that one could also add cross-attention between field and text tokens; we found empirically that this does not improve extraction quality. Although there is no direct interaction between field and text tokens, they are bridged through the text-to-HTML and field-to-HTML attentions. \subsubsection{Overall Attention} We compute the final token representations based on the above rich attention patterns among field, text and HTML tokens. The output embeddings for field, text and HTML tokens, $z_i^F, z_i^T, z_i^H$, are calculated as follows: \[z_i^F = \sum_{j} \alpha^{F2H}_{ij} x_j^H W_V^F\] \[z_i^T = \sum_{i-r\le j\le {i+r}} \alpha^{T2T}_{ij} x_j^T W_V^T + \sum_{k} \alpha^{T2H}_{ik} x_k^H W_V^H\] \[z_i^H = \sum_{j \in \mathcal{N}(x_i^H)} \alpha^{H2H}_{ij} x_j^H W_V^H + \sum_{k\in t_i} \alpha^{H2T}_{ik} x_k^T W_V^T\] where the attention weights $\alpha$ are as described above, and $W_V^F$, $W_V^T$ and $W_V^H$ are the learnable value matrices for field, text and HTML tokens, respectively. \subsection{Output Layer} The output layer of WebFormer extracts the final text span for the field from the text tokens. We apply a softmax function to the output embeddings of the encoder to generate the probabilities for the begin and end indices: \[P_b = \mathrm{softmax}(W_b Z^T), \quad P_e = \mathrm{softmax}(W_e Z^T)\] where $Z^T$ denotes the contextual embeddings of the input text tokens, and $W_b$ and $W_e$ are two parameter matrices that project the embeddings to the begin and end logits, respectively. Inspired by \cite{XLNet}, we further predict the end index conditioned on the start index by concatenating the begin token embedding with every token embedding after it.
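For concreteness, the following sketch shows the simple, unconditioned variant of the output layer: independent begin/end distributions followed by a best-span search enforcing $b \le e$. The XLNet-style conditioned end prediction described above is omitted, and $W_b$, $W_e$ are treated as projection vectors here.
\begin{verbatim}
import numpy as np

# Sketch of the output layer: begin/end probabilities over text tokens and a
# simple decoding rule enforcing b <= e (conditioned end prediction omitted).
def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def extract_span(Z_text, w_b, w_e, max_len=30):
    # Z_text: (n, d) field-dependent text embeddings; w_b, w_e: (d,) vectors
    p_b, p_e = softmax(Z_text @ w_b), softmax(Z_text @ w_e)
    best, span = -1.0, (0, 0)
    for b in range(len(p_b)):
        for e in range(b, min(b + max_len, len(p_e))):
            if p_b[b] * p_e[e] > best:
                best, span = p_b[b] * p_e[e], (b, e)
    return span          # (b_j, e_j): begin/end offsets for field f_j
\end{verbatim}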
\subsection{Discussion} This section connects WebFormer to previous methods and discusses the limitations of our model. If we treat HTML tags as additional text tokens and combine them with the text into a single sequence without the H2H, H2T and T2H attentions, our model architecture degenerates to the sequence modeling approaches ~\cite{XuL0HW020,abs-2101-09465} that serialize the HTML layout. If we further remove the HTML from the sequence, our model reduces to the sequence model \cite{WangYKSSSYE20} that uses only the text information. Moreover, if we also remove the text field from the input, our model degenerates to the sequence tagging methods ~\cite{ZhengMD018,Lin0VT20}, which do not scale to a large set of target fields. There are two scenarios where our model is not directly applicable. First, our model focuses on structure information extraction from single-object pages, where each target field has exactly one text value. On a multi-object page, e.g., a page listing multiple events, there are different titles and dates corresponding to the different events; these could be extracted with methods based on repeated patterns \cite{AdelfioS13,WangKGS19}. Second, some applications require extracting information from rendered pages, where OCR and CNN \cite{XuL0HW020} techniques are used. \section{Experiments} \subsection{Datasets} \noindent\textbf{SWDE} ~\cite{HaoCPZ11,abs-2101-02415}: The Structured Web Data Extraction (SWDE) dataset is designed for structural reading comprehension and information extraction on the web. It consists of more than 124,000 web pages from 80 websites across 8 verticals: ``auto'', ``book'', ``camera'', ``job'', ``movie'', ``nbaplayer'', ``restaurant'' and ``university''. Each vertical contains 10 websites and 3 to 5 target fields of interest. We split the data into train, dev and test sets with 99,248, 12,425 and 12,425 pages, respectively. \begin{table} \begin{adjustbox}{width=1\columnwidth,center} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{ Data Splits} & \multirow{2}{*}{SWDE} & \multicolumn{3}{c}{Common Crawl} \\ \cline{3-5} & &Events & Products & Movies \\ \hline Train &99,248 &72,367 &105,642 &57,238 \\ Dev/Test &12,425 & 9,046 & 13,205 &7,154\\ \hline Training Time (10 epochs) &4h 15m & 3h 46m & 4h 22m &3h 21m\\ \hline \end{tabular} \end{adjustbox} \caption{Statistics of the datasets with the training time.} \label{table:data} \end{table} \noindent\textbf{Common Crawl}\footnote{\url{http://commoncrawl.org/connect/blog/}}: The Common Crawl corpus is widely used in web search, information extraction and related tasks. It contains more than 250 TiB of content from more than 3 billion web pages. In our experiments, we select web pages that have schema.org annotations\footnote{\url{https://schema.org/}} within three domains: \textbf{Events}, \textbf{Products} and \textbf{Movies}. The schema.org annotations contain the website-provided markup information about the object, which we use as ground-truth labels. The fields are \{``Name'', ``Description'', ``Date'', ``Location''\}, \{``Name'', ``Description'', ``Brand'', ``Price'', ``Color''\} and \{``Name'', ``Description'', ``Genre'', ``Duration'', ``Director'', ``Actor'', ``Published Date''\} for event, product and movie pages, respectively. We further filter these pages by restricting to English and single-object pages.
We downsample the web pages by allowing at most 2,000 pages per website to balance the data, as some websites, e.g., amazon.com, might otherwise dominate. All datasets are then randomly split into train, dev and test sets with ratio 8:1:1. The details are given in Table \ref{table:data}. \begin{table*} \begin{adjustbox}{width=1.9\columnwidth,center} \begin{tabular}{c|cc|cc|cc|cc} \hline \multirow{3}{*}{Models} & \multicolumn{2}{c|}{\multirow{2}{*}{\bf SWDE}} & \multicolumn{6}{c}{\bf Common Crawl} \\ \cline{4-9} & & &\multicolumn{2}{c|}{\bf Events} &\multicolumn{2}{c|}{\bf Products}&\multicolumn{2}{c}{\bf Movies}\\ \cline{2-9} & EM & F1 &EM & F1 & EM & F1 & EM & F1 \\ \hline OpenTag & 81.33 $\pm$ 0.22 & 86.54 $\pm$ 0.27 & 77.14 $\pm$ 0.26 & 83.71 $\pm$ 0.12& 72.57 $\pm$ 0.20 & 77.75 $\pm$ 0.19 & 80.36 $\pm$ 0.15 & 85.06 $\pm$ 0.18 \\ DNN & 80.53 $\pm$ 0.15 & 85.64 $\pm$ 0.26 & 78.43 $\pm$ 0.18 & 85.06 $\pm$ 0.21& 74.64 $\pm$ 0.27 & 78.56 $\pm$ 0.15 & 82.44 $\pm$ 0.23 & 86.65 $\pm$ 0.16 \\ AVEQA & 83.27 $\pm$ 0.32 & 88.75 $\pm$ 0.16 & 80.82 $\pm$ 0.21 & 86.47 $\pm$ 0.14& 74.85 $\pm$ 0.32 & 79.49 $\pm$ 0.28 & 83.87 $\pm$ 0.30 & 88.51 $\pm$ 0.19 \\ SimpDOM & 84.67 $\pm$ 0.23 & 90.35 $\pm$ 0.21 & 81.96 $\pm$ 0.24 & 86.33 $\pm$ 0.17& 75.12 $\pm$ 0.27 & 78.22 $\pm$ 0.21 & 82.59 $\pm$ 0.25 & 87.72 $\pm$ 0.18 \\ H-PLM & 83.42 $\pm$ 0.20 & 89.04 $\pm$ 0.18 & 82.65 $\pm$ 0.15 & 87.52 $\pm$ 0.17& 76.24 $\pm$ 0.17 & 81.13 $\pm$ 0.26 & 83.72 $\pm$ 0.26 & 89.34 $\pm$ 0.17 \\ \hline WebFormer & {\bf 86.58 $\pm$ 0.16} & {\bf 92.46 $\pm$ 0.24} & {\bf 84.79 $\pm$ 0.24} & {\bf 89.33 $\pm$ 0.18}& {\bf 80.67 $\pm$ 0.20} & {\bf 83.37 $\pm$ 0.23} & {\bf 85.30 $\pm$ 0.19} & {\bf 90.41 $\pm$ 0.24} \\ \hline \end{tabular} \end{adjustbox} \caption{Performance comparison on all datasets. Results are statistically significant with p-value $<$ 0.001.}\label{table:performance} \end{table*} \begin{table} \begin{adjustbox}{width=0.88\columnwidth,center} \begin{tabular}{c|cc|cc|cc} \hline \multirow{2}{*}{Fields} & \multicolumn{2}{c|}{\bf Events} & \multicolumn{2}{c|}{\bf Products} & \multicolumn{2}{c}{\bf Movies} \\ \cline{2-7} & EM & F1 &EM & F1 & EM & F1 \\ \hline Name & 88.27 & 93.46 & 85.11 & 90.53 & 89.32 & 93.57 \\ Description & 81.62 & 85.50 & 77.94 & 81.46 & 82.71 & 88.19 \\ Date & 86.86 & 91.48 & - & - & - & - \\ Location & 82.41 & 86.88 & - & - & - & - \\ Brand & - & - & 84.23 & 85.63 & - & - \\ Price & - & - & 75.65 & 76.86 & - & - \\ Color & - & - & 80.42 & 82.35 & - & - \\ Genre & - & - & - & - & 89.49 & 92.67 \\ Duration & - & - & - & - & 83.74 & 88.35 \\ Director & - & - & - & - & 86.28 & 91.38 \\ Actor & - & - & - & - & 80.16 & 87.44 \\ Publish Date & - & - & - & - & 85.40 & 91.27 \\ \hline \end{tabular} \end{adjustbox} \caption{Field-level metrics of WebFormer.}\label{table:field_performance} \end{table} \subsection{Implementation Detail} For data pre-processing, we use the open-source LXML library\footnote{\url{https://lxml.de/}} to obtain the DOM tree structure of each page. We then obtain the text node sequence via an in-order traversal of the DOM tree. We implement our models in TensorFlow and Keras. Each model is trained on a 32-core TPU v3 configuration. The word embedding is initialized from pretrained BERT-base. WebFormer uses 12 layers, a hidden size of 768, 3072 hidden units (for the FFN) and a local radius of 64. The maximum text sequence length is set to 2048, and the maximum number of HTML tokens is set to 256.
During training, we use the Adam optimizer with an initial learning rate of $3\times10^{-5}$. The batch size for each update is set to 64, and the model is trained for up to 10 epochs. The dropout probability for the attention layers is set to 0.1. \subsection{Evaluation Metric} We evaluate the performance of the WebFormer model with two standard evaluation metrics, \textbf{Exact Match} (EM) and \textbf{F1}, from the evaluation package released with \cite{squad}. Exact Match checks whether a predicted span is exactly the same as the ground truth; it is strict for predictions that cover only part of the answer. F1 measures the overlap between the extracted answer and the ground truth by splitting the answer span into tokens and computing a token-level F1 score (simplified versions of both metrics are sketched below). We repeat each experiment 10 times and report the metrics on the test sets averaged over these runs. \subsection{Baselines} \noindent\textbf{OpenTag} \cite{ZhengMD018} uses a BiLSTM-Attention-CRF architecture with sequence tagging strategies. OpenTag does not encode the field and thus builds one model per field. \noindent\textbf{DNN} \cite{WangKGS19} applies deep neural networks for information extraction. Text nodes in the HTML are treated as candidates and are classified with DNN classifiers. \noindent\textbf{AVEQA} \cite{WangYKSSSYE20} formulates the problem as attribute value extraction, where each field is treated as an attribute. The model jointly encodes both the attribute and the document with a BERT \cite{DevlinCLT19} encoder. \noindent\textbf{SimpDOM} \cite{abs-2101-02415} treats the problem as DOM tree node tagging, extracting features for each text node including its XPath, and uses an LSTM to jointly encode them with the text features. \noindent\textbf{H-PLM} \cite{abs-2101-09465} serializes the HTML together with the text and builds a sequence model on top of the pretrained ELECTRA \cite{ClarkLLM20} backbone. The code for OpenTag\footnote{\url{https://github.com/hackerxiaobai/OpenTag_2019}} and H-PLM\footnote{\url{https://github.com/X-LANCE/WebSRC-Baseline}} is publicly available. For our previous works DNN and AVEQA, we use the original code from the papers. For SimpDOM, we re-implement the model using the parameters from the paper. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{att_em_new.png} \includegraphics[width=0.9\linewidth]{att_f1_new.png} \caption{Results of WebFormer with different attention patterns. Top: EM scores. Bottom: F1 scores.} \label{fig:att} \end{figure} \subsection{Results and Discussion} \subsubsection{Performance Comparison} The evaluation results of WebFormer and all baselines are reported in Table \ref{table:performance}. WebFormer achieves the best performance among all compared methods on all datasets. For example, its EM metric increases by over 7.8\% and 5.8\% compared with AVEQA and H-PLM on Products, respectively. There are three main reasons. First, our model integrates the HTML layout into a unified HTML-text encoder with rich attention, enabling it to effectively understand the web layout structure. Second, WebFormer adopts relative position encodings in the T2T attention, allowing it to represent large documents efficiently. Third, the field information is jointly encoded and attended with both HTML and text tokens; different fields share one encoder and are thus able to benefit from each other.
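For reference, the sketch below gives simplified versions of the two metrics used throughout this section. The identifier names are ours, and the official SQuAD evaluation script additionally strips punctuation and articles before comparing.
\begin{verbatim}
from collections import Counter

# Simplified EM and F1 (the official SQuAD script also normalizes
# punctuation and articles before comparing strings).
def exact_match(pred, gold):
    return float(pred.strip().lower() == gold.strip().lower())

def f1_score(pred, gold):
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
\end{verbatim}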
We further report the field-level results of WebFormer on the Common Crawl dataset in Table \ref{table:field_performance}. Some fields, such as ``Name'' and ``Genre'', obtain relatively higher scores than ``Price'' and ``Location''. We also observe that the difference between EM and F1 scores is very small for fields like ``Brand'' and ``Color''; the reason is that their text spans are usually very short, containing just one or two tokens. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{len_distribution_new.png} \includegraphics[width=0.95\linewidth]{len_swde.png} \includegraphics[width=0.95\linewidth]{len_cc_new.png} \caption{EM scores of different methods within each bucket of sequence length.} \label{fig:len} \end{figure} \subsubsection{Impact of Rich Attentions} To understand the impact of the rich attention patterns, we conduct a set of experiments in which we remove each attention from our model. Specifically, we train four separate models without the T2T, H2T, T2H and H2H attention, respectively. The results of these four models and the full WebFormer (``All'') on all datasets are shown in Figure \ref{fig:att}. Unsurprisingly, performance drops significantly without the T2T local attention: T2T models the contextual token embeddings of the text sequence and is the fundamental component of the Transformer. We also observe that the model without the H2H graph attention performs much worse than the models without T2H or H2T attention. This validates that the HTML layout information encoded by the H2H attention is crucial for extracting structured fields from web documents. Moreover, the T2H and H2T attentions further improve model performance on all datasets. \begin{figure} \centering \includegraphics[width=0.92\linewidth]{mistake_new.png} \caption{Mistake analysis: distribution of different types of mistakes.} \label{fig:mistake} \end{figure} \subsubsection{Impact on Large Documents} To evaluate the impact of long documents on different models, we group the test examples into four buckets w.r.t. the sequence length of the example (i.e., 0-512, 512-1024, 1024-2048 and 2048-inf) and compute the metrics in each bucket for all methods (a sketch of this bucketing is given below). The length distribution of the test documents, together with the EM scores on both datasets (for Common Crawl, we merge the test sets of Events, Products and Movies), is shown in Figure~\ref{fig:len}. WebFormer achieves consistent results across sequence lengths. In contrast, the performance of OpenTag, AVEQA, SimpDOM and H-PLM degrades as the sequence length increases. Our hypothesis is that WebFormer's T2T relative attention, together with the T2H and H2T attentions, enables the model to encode web documents with long sequences effectively and efficiently. Note that the DNN model does not depend on the sequence length and thus does not suffer on long sequences.
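The length-bucketed evaluation behind Figure~\ref{fig:len} can be sketched as follows; the bucket boundaries match the ones above, and the helper names are illustrative.
\begin{verbatim}
# Illustrative sketch of the length-bucketed evaluation: group test
# examples by text sequence length and average EM within each bucket.
BUCKETS = [(0, 512), (512, 1024), (1024, 2048), (2048, float("inf"))]

def bucketed_em(examples):
    # examples: iterable of (sequence_length, em_score) pairs
    out = {}
    for lo, hi in BUCKETS:
        ems = [em for n, em in examples if lo <= n < hi]
        out[f"{lo}-{hi}"] = sum(ems) / len(ems) if ems else None
    return out
\end{verbatim}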
\begin{table} \begin{adjustbox}{width=0.95\columnwidth,center} \begin{tabular}{c|c|c|c} \hline & parameters & SWDE & Common Crawl \\ \hline AVEQA & 110M & 83.27 & 78.45 \\ H-PLM & 110M & 83.42 & 80.78 \\ \hline WebFormer-2L & 45M & 82.05 & 76.73 \\ WebFormer-6L & 82M & 83.86 & 79.35 \\ WebFormer-12L-share & 109M & 85.29 & 81.49 \\ WebFormer-12L & 151M & 86.58 & 83.22 \\ WebFormer-24L & 285M & {\bf 87.84} & {\bf 86.51} \\ \hline \end{tabular} \end{adjustbox} \caption{EM results over different model configurations.}\label{ablation} \end{table} \begin{table*} \begin{center} \begin{tabular}{c|ccc|ccc|ccc} \hline batch size & \multicolumn{3}{c|}{64} &\multicolumn{3}{c|}{128} &\multicolumn{3}{c}{512} \\ \hline learning rate & $3\times10^{-5}$ & $5\times10^{-5}$ & $1\times10^{-4}$ & $3\times10^{-5}$ & $5\times10^{-5}$ & $1\times10^{-4}$ & $3\times10^{-5}$ & $5\times10^{-5}$& $1\times10^{-4}$ \\ \hline SWDE & {\bf 86.58} & 86.36 & 86.20 & 86.37& 86.42 & 86.35 & 86.18& 86.11 & 86.28\\ Events & {\bf 84.79} & 84.62 & NaN & 84.54 & 84.46 & 84.65 & 84.11 & 84.27 & 84.13\\ Products & 80.67 & {\bf 80.71} & NaN & 80.32 & 80.38 & 80.40 & 79.96 & 80.23 & 80.37 \\ Movies & {\bf 85.30} & 85.21 & 85.14 & 84.58 & 84.75& 84.83 & 84.39 & 84.56 & 84.77 \\ \hline \end{tabular} \end{center} \caption{EM results of WebFormer with different batch sizes and learning rates on all datasets.}\label{bs_lr} \end{table*} \subsubsection{Error Analysis} We conduct an error analysis of WebFormer over 160 and 60 randomly selected Exact Match mistakes on the SWDE and Common Crawl datasets, respectively (5 per field). We identify several major mistake patterns and summarize them here. 1) Partial text extraction: The largest group of mistakes is that our model extracts a substring of the ground-truth text. For example, our model extracts ``Fun Festival'' as the event name instead of ``Fun Festival at Square Park''. 2) Multiple occurrences: There are cases where the target field is mentioned multiple times on the web page. For example, our model extracts ``SEP 11'' as the date, but the ground-truth text is ``Sat, September 11, 2011''. 3) Multiple values: The field has multiple values and we only extract one of them. For example, a product has both ``blue'' and ``white'' as its color, and we only extract ``blue''. 4) Range issues: A certain number of mistakes fall into this group. For instance, our model extracts the ``price'' as ``19.90'' from the ground truth ``19.90 - 26.35'', which is a range of prices. 5) Model mistakes: There are a few other extraction errors made by the model, which are hard cases even for human raters. The summary of the mistake analysis is reported in Figure \ref{fig:mistake}. Looking closely at these mistake patterns, we observe that our model actually extracts correct or partially correct answers in most cases of groups 1), 2), 3) and 4). These mistakes could be reduced by marking all answer occurrences and values as positives during training and adopting a BIO-based span extraction scheme. However, there remain difficult cases that require further investigation into the training data and the model. \subsubsection{Ablation Study} We further conduct a series of ablation studies of WebFormer. The WebFormer base model contains 12 layers. We first evaluate our model with different numbers of encoder layers, i.e., 2L, 6L and 24L. We also evaluate another ablation of WebFormer that shares model parameters.
Specifically, the query and key matrices of the text and HTML tokens are shared, i.e., $W_Q^{T2T}$=$W_Q^{T2H}$=$W_Q^T$, $W_Q^{H2H}$=$W_Q^{H2T}$=$W_Q^H$, $W_K^{T2T}$=$W_K^{H2T}$=$W_K^T$ and $W_K^{H2H}$=$W_K^{T2H}$=$W_K^H$. This model is referred to as WebFormer-12L-share. The EM results, together with the number of model parameters, are shown in Table \ref{ablation}. WebFormer-24L achieves the best performance, which is consistent with our expectations; similar behavior is observed in ~\cite{DevlinCLT19,AinslieOACFPRSW20}. However, a larger model usually requires longer training and inference time. The training times of the base models are reported in Table \ref{table:data}. \subsubsection{Impact of Training Batch Size and Learning Rate} To evaluate the model performance under different training batch sizes and learning rates, we train a set of WebFormer models with a hyper-parameter sweep over learning rates in \{$3\times10^{-5}$, $5\times10^{-5}$, $1\times10^{-4}$\} and batch sizes in \{64, 128, 512\} on the training set. The EM results with different learning rates and batch sizes on all datasets are reported in Table \ref{bs_lr}. WebFormer achieves the best result with batch size 64 and learning rate $3\times10^{-5}$ on all datasets except Products. This observation is consistent with the findings of \cite{WangYKSSSYE20}, where a smaller batch size usually leads to better performance. This is also why we set the batch size to 64 and the learning rate to $3\times10^{-5}$ in all our previous experiments. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{events.png} \caption{EM results on zero-shot and few-shot learning.} \label{fig:generalization} \end{figure} \subsubsection{Zero-shot/Few-shot Extraction} We conduct zero-shot and few-shot extraction experiments to evaluate the generalization ability of WebFormer on unseen domains/fields. In this experiment, we first pre-train a WebFormer model on the Products and Movies data only. We then fine-tune on the Events data for 10K steps, varying the number of training examples over \{0, 1, 2, 5, 10, 50, 100\}. The EM scores of WebFormer on all four event fields are shown in Figure~\ref{fig:generalization}. There are several interesting observations from this figure. First, with 0 training examples (zero-shot extraction), the EM scores on ``Name'' and ``Description'' are reasonable, at around 75\%, while the score on ``Location'' is close to 0. The reason is that ``Name'' and ``Description'' are general fields that appear across domains, e.g., they are both present in the Products and Movies data; the knowledge learned by WebFormer can therefore be directly transferred to the new Events domain. In contrast, the pretrained model lacks knowledge about ``Location'' and thus performs poorly on this field. Second, unsurprisingly, the EM scores increase with more training examples and reach reasonably high values with 100 training examples. We also observe that the EM score for ``Location'' improves dramatically even with only one or two training examples. \section{Conclusion} In this paper, we introduce a novel Web-page transFormer model, namely WebFormer, for structure information extraction from web documents. The structured HTML layout information is jointly encoded with the text information through the rich attention patterns.
WebFormer effectively recovers both local syntactic and global layout information of the web document. An extensive set of experiments on the SWDE and Common Crawl benchmarks demonstrates the superior performance of the proposed approach over several state-of-the-art methods. In the future, we plan to extend this work to multimodal learning that incorporates visual features. \begin{acks} This work is supported by the National Natural Science Foundation of China (No. 62176270). \end{acks} \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2022-02-02T02:10:47", "yymm": "2202", "arxiv_id": "2202.00217", "language": "en", "url": "https://arxiv.org/abs/2202.00217" }
\section{Preprocessing ImageNet} An initial challenge is the lack of high-resolution data; the mean resolution of ImageNet is $469\times387$. Similar to the procedure used for generating CelebA-HQ~\cite{Karras2018ICLR}, we preprocess the whole dataset with SwinIR-Large~\cite{Liang2021ICCV}, a recent model for real-world image super-resolution. Of course, a trivial way of achieving good performance on this dataset would be to draw samples from a $256^2$ generative model and pass them through SwinIR. However, SwinIR adds significant computational overhead, as it is $60$ times slower than our upsampling stack. Furthermore, this way, StyleGAN-XL's weights can be used for initialization when finetuning on other high-resolution datasets. Lastly, combining StyleGAN-XL and SwinIR would impair translation equivariance. \section{Classes of Unaligned Humans} As mentioned in Section \ref{sec:limitations}, we observe that ADM~\cite{Dhariwal2021NEURIPS} generates more convincing human faces than StyleGAN-XL and BigGAN. Both GANs can synthesize realistic faces; however, the main challenge in this setting is that the dataset is unstructured and the humans are not aligned. Brock et al.~\cite{Brock2019ICLR} remarked on the particular challenge of classes containing details to which human observers are especially sensitive. We show examples in~\figref{fig:aquamen}. \aquamen \section{Implementation details} \boldparagraph{Inversion.} Following \cite{Karras2020CVPR}, we use basic latent optimization in $\mathcal{W}$ for inversion. Given a target image, we first compute its average style code $\bar{\mathbf{w}}$ by running $10000$ random latent codes $\mathbf{z}$ and target-specific class samples $\mathbf{c}$ through the mapping network. As the class label of the target image is unknown, we pass the image to a pretrained classifier and use the classifier logits as a multinomial distribution to sample $\mathbf{c}$. In our experiments, we use DeiT-M~\cite{Touvron2021ICML} as the classifier, but other choices are possible. At the beginning of optimization, we initialize $\mathbf{w} = \bar{\mathbf{w}}$. The components of $\mathbf{w}$ are the only trainable parameters. The optimization runs for 1000 iterations using the Adam optimizer~\cite{Kingma2015ICLR} with default parameters. We optimize the LPIPS~\cite{Zhang2018CVPR} distance between the target image and the generated image. For StyleGAN-XL, the maximum learning rate is $\lambda_{max} = 0.05$. It is ramped up from zero linearly during the first 50 iterations and ramped down to zero using a cosine schedule during the last 250 iterations. For BigGAN, we empirically found $\lambda_{max} = 0.001$ and a ramp-down over the last 750 iterations to yield the best results. All inversion experiments are performed at resolution $512^2$ and computed on $5k$ images ($10$\% of the validation set). We report the results in \tabref{tab:invresults}. \invresults \boldparagraph{Training StyleGAN3 on ImageNet.} For training StyleGAN3, we use the official PyTorch implementation\footnote{\url{https://github.com/NVlabs/stylegan3.git}}. The results in \figref{fig:teaser} are computed with the StyleGAN3-R configuration at resolution $256^2$, trained until the discriminator has seen $10$ million images. We find that StyleGAN3-R and StyleGAN3-T converge to similar FID without any changes to their training paradigm.
The run with the best FID score was selected from three runs with different random seeds. We use a channel base of $16384$ and train on $8$ GPUs with a total batch size of $256$ and $\gamma=0.256$. The remaining settings follow the default configuration of the code release. For the ablation study in \tabref{tab:ablation}, we use the StyleGAN3-T configuration as the baseline, since StyleGAN-XL builds upon the translation-equivariant layers of StyleGAN3. We train on $4$ GPUs with a total batch size of $256$ and a batch size of $32$ per GPU, $\gamma=0.25$, and disable augmentation. \boldparagraph{Training \& Evaluation.} For all our training runs, we do not amplify the dataset via \textit{x-flips}, following~\cite{Karras2019CVPR}. Furthermore, we evaluate all metrics using the official StyleGAN3 codebase. For the baseline values in ~\tabref{tab:sotapr}, we report the numbers of~\cite{Dhariwal2021NEURIPS}. The official codebase of ADM\footnote{\url{https://github.com/openai/guided-diffusion}} provides files containing $50$k samples each for ADM and BigGAN, which we utilize to compute rFID. In addition, we recompute precision and recall for all baselines, as ~\cite{Dhariwal2021NEURIPS} compute these metrics between $10$k real and $50$k generated samples. We follow the original formulation of~\cite{Kynknniemi2019NEURIPS} and use $50$k real and $50$k generated samples. \boldparagraph{Layer configurations.} We start progressive growing at resolution $16^2$ using $11$ layers. The layer specifications are computed according to \cite{Karras2021NEURIPS} and remain fixed for the remainder of training. For the next stage, at resolution $32^2$, we discard the last $2$ layers and add $7$ new ones. The specifications for the new layers are computed according to \cite{Karras2021NEURIPS} for a model with resolution $32^2$ and $16$ layers. Continuing this strategy up to resolution $1024^2$ yields the flexible layer specification of StyleGAN-XL in \figref{fig:layerspecs}. \layerspecs \section{Introduction} \label{sec:intro} Computer graphics has long been concerned with generating photorealistic images at high resolution while allowing direct control over semantic attributes. Until recently, the primary paradigm was to create carefully designed 3D models, which are then rendered using realistic camera and illumination models. A parallel line of research approaches the problem from a data-centric perspective. In particular, probabilistic generative models ~\cite{Goodfellow2014NEURIPS,Oord2017NEURIPS,Kingma2018NEURIPS,Song2021ICLR} have shifted the paradigm from designing assets to designing training procedures and datasets. Style-based GANs (StyleGANs) are a specific instance of these models, and they exhibit many desirable properties: high image fidelity~\cite{Karras2019CVPR, Karras2020CVPR}, fine-grained semantic control~\cite{Haerkoenen2020NEURIPS, WU2021CVPRa,Ling2021ARXIV}, and, recently, alias-free generation enabling realistic animation~\cite{Karras2021NEURIPS}. Moreover, they reach impressive photorealism on carefully curated datasets, especially of human faces. However, when trained on large and unstructured datasets like ImageNet~\cite{Deng2009CVPR}, StyleGANs do not yet achieve satisfactory results. Another problem plaguing data-centric methods in general is that scaling to higher resolutions becomes prohibitively expensive, as bigger models are required.
Initially, StyleGAN~\cite{Karras2019CVPR} was proposed to explicitly disentangle factors of variation, allowing for better control and interpolation quality~\cite{Karras2019CVPR}. However, its architecture is more restrictive than a standard generator network~\cite{Radford2016ICLR, Karras2018ICLR, Liu2021ICLR}, which appears to come at a price when training on complex and diverse datasets such as ImageNet. Previous attempts at scaling StyleGAN and StyleGAN2 to ImageNet led to sub-par results~\cite{Gwern2020MISC, Grigoryev2022ICLR}, giving reason to believe that it might be fundamentally limited for highly diverse datasets~\cite{Gwern2020MISC}. BigGAN~\cite{Brock2019ICLR} is the state-of-the-art GAN for image synthesis on ImageNet. The main factors behind BigGAN's success are larger batch and model sizes. However, BigGAN has not reached a standing similar to StyleGAN's, as its performance varies significantly between training runs~\cite{Karras2020NeurIPS} and it does not employ an intermediate latent space, which is essential for GAN-based image editing~\cite{Abdal2021TOG, Patashnik2021ICCV, Collins2020CVPR, WU2021CVPRa}. Recently, BigGAN has been superseded in performance by diffusion models~\cite{Dhariwal2021NEURIPS}. Diffusion models achieve more diverse image synthesis than GANs but are significantly slower during inference, and prior work on GAN-based editing is not directly applicable to them. Given these arguments, successfully training StyleGAN on ImageNet would have several advantages over existing methods. The previously failed scaling attempts raise the question of whether architectural constraints fundamentally limit style-based generators, or whether the missing piece is the right training strategy. Recent work by Sauer et al.~\cite{Sauer2021NEURIPS} introduced \textit{Projected GANs}, which project generated and real samples into a fixed, pretrained feature space. Rephrasing the GAN setup this way leads to significant improvements in training stability, training time, and data efficiency. Leveraging the benefits of Projected GAN training might therefore enable scaling StyleGAN to ImageNet. However, as observed by~\cite{Sauer2021NEURIPS}, the advantages of Projected GANs only partially extend to StyleGAN on the unimodal datasets they investigated. We study this issue and propose architectural changes to address it. We then design a progressive growing strategy tailored to the latest StyleGAN3. These changes, in conjunction with Projected GAN training, already allow us to surpass prior attempts at training StyleGAN on ImageNet. To further improve results, we analyze the pretrained feature network used for Projected GANs and find that the two standard neural architectures for computer vision, CNNs and ViTs~\cite{Dosovitskiy2021ICLR}, significantly improve performance when used jointly. Lastly, we leverage \textit{classifier guidance}, a technique originally introduced for diffusion models to inject additional class information~\cite{Dhariwal2021NEURIPS}. Our contributions culminate in a new state-of-the-art for large-scale image synthesis, pushing performance beyond existing GAN and diffusion models. We showcase inversion and editing for ImageNet classes and find that Pivotal Tuning Inversion (PTI)~\cite{Roich2021ARXIV}, a powerful new inversion paradigm, combines well with our model and even embeds out-of-domain images smoothly into our learned latent space.
Our efficient training strategy allows us to triple the parameter count of the standard StyleGAN3 while matching the prior state-of-the-art performance of diffusion models~\cite{Dhariwal2021NEURIPS} in a fraction of their training time. It further enables us to be the first to demonstrate image synthesis at ImageNet scale at a resolution of $1024^2$ pixels. We will open-source our code and models upon publication. The supplementary video can be found at \url{https://sites.google.com/view/stylegan-xl/}. \section{Limitations and Future Work} \label{sec:limitations} Our contributions allow StyleGAN to accomplish state-of-the-art high-resolution image synthesis on ImageNet. Exploring new editing methods and dataset generation~\cite{Chai2021CVPR,Li2022ARXIV} with StyleGAN-XL are exciting future avenues. Furthermore, future work may tackle an even larger megapixel dataset; however, no larger yet comparably diverse dataset is available so far, as current large-scale, high-resolution datasets cover single object classes or contain many similar images~\cite{Zhang2020ECCV,Fregin2018ICRA,Perot2020NEURIPS}. In the following, we discuss some limitations of the current model that should be addressed in the future. \boldparagraph{Architectural Limitations.} First, StyleGAN-XL is three times larger than StyleGAN3, incurring a higher computational overhead when used as a starting point for finetuning. It will therefore be worth exploring GAN distillation methods~\cite{Chang2020ACCV} that trade off performance for model size. Second, StyleGAN-XL uses the translation-equivariant layers of StyleGAN3-T. As described above, StyleGAN3-R tends to produce overly symmetric images and adds significant computational overhead; finding a more efficient rotation-equivariant architecture is an important future direction. Lastly, we find StyleGAN3, and consequently StyleGAN-XL, harder to edit: high-quality edits via $\mathcal{W}$ are noticeably easier to achieve with StyleGAN2. As already observed in~\cite{Karras2021NEURIPS}, StyleGAN3's semantic controllability is reduced for the sake of equivariance. However, techniques using the \textit{StyleSpace}~\cite{WU2021CVPRa}, e.g., StyleMC~\cite{Kocasari2021WACV}, tend to yield better results in our experiments, confirming the findings of concurrent work by Alaluf et al.~\cite{Alaluf2022ARXIV}. Furthermore, we remark that for applications where equivariance is not essential, our framework can easily be used with StyleGAN2 layers instead. \boldparagraph{Drawbacks Compared to Diffusion Models.} Low data coverage is a known problem of GANs, and StyleGAN-XL makes notable headway on this issue. However, StyleGAN-XL is still outperformed by diffusion models in terms of data coverage. Furthermore, similar to~\cite{Brock2019ICLR}, classes of unaligned humans appear harder to model for a GAN. For such classes, ADM~\cite{Dhariwal2021NEURIPS} generates more convincing human faces than BigGAN~\cite{Brock2019ICLR} or StyleGAN-XL; see the supplementary. Whether these points are general limitations of GANs remains an interesting open question for future research. \section{Scaling StyleGAN to ImageNet} \label{sec:method} As mentioned before, StyleGAN has several advantages over existing approaches that work well on ImageNet, but a na\"{i}ve training strategy does not yield state-of-the-art performance~\cite{Gwern2020MISC, Grigoryev2022ICLR}. Our experiments confirm that even the latest StyleGAN3 does not scale well, see \figref{fig:teaser}.
Particularly at high resolutions, training becomes unstable, resulting in a high FID. Our goal is therefore to train a StyleGAN3 generator on ImageNet successfully. Success is defined in terms of sample quality, primarily measured by the Inception Score (IS)~\cite{Salimans2016NEURIPS}, and diversity, measured by the Fr\'echet Inception Distance (FID)~\cite{Heusel2017NEURIPS}. Throughout this section, we gradually introduce changes to the StyleGAN3 baseline (\textbf{Config-A}) and track the improvements in \tabref{tab:ablation}. First, we modify the generator and its regularization losses, adapting the latent space to work well with Projected GAN (\textbf{Config-B}) and with the class-conditional setting (\textbf{Config-C}). We then revisit progressive growing to improve training speed and performance (\textbf{Config-D}). Next, we investigate the feature networks used for Projected GAN training to find a well-suited configuration (\textbf{Config-E}). Lastly, we propose classifier guidance for GANs to provide class information via a pretrained classifier (\textbf{Config-F}). Our contributions enable us to train a significantly larger model than previously possible while requiring less computation than prior art. Our model is three times larger in terms of depth and parameter count than a standard StyleGAN3. However, to match the prior state-of-the-art performance of ADM~\cite{Dhariwal2021NEURIPS} at a resolution of $512^2$ pixels, training our model takes $400$ V100-days (i.e., $400$ days on a single NVIDIA Tesla V100) compared to the previously required $1914$ V100-days. We refer to our model as \textbf{StyleGAN-XL} (\figref{fig:system}). \system \ablation \subsection{Adapting Regularization and Architectures} Training on a diverse, class-conditional dataset necessitates several adjustments to the standard StyleGAN configuration. We construct our generator architecture using layers of StyleGAN3-T, the translation-equivariant configuration of StyleGAN3. In initial experiments, we found the rotation-equivariant StyleGAN3-R to generate overly symmetric images on more complex datasets, resulting in kaleidoscope-like patterns. \boldparagraph{Regularization.} In GAN training, it is common to regularize both the generator and the discriminator. Regularization improves results on unimodal datasets like FFHQ~\cite{Karras2019CVPR} or LSUN~\cite{Yu2015ARXIV}, whereas it can be detrimental on multimodal datasets~\cite{Brock2019ICLR, Gwern2020MISC}. We therefore aim to avoid regularization where possible. Karras et al.~\cite{Karras2021NEURIPS} find style mixing to be unnecessary for the latest StyleGAN3; hence, we also disable it. Path length regularization can lead to poor results on complex datasets~\cite{Gwern2020MISC} and is disabled per default for StyleGAN3~\cite{Karras2021NEURIPS}. However, path length regularization is attractive, as it enables high-quality inversion~\cite{Karras2020CVPR}. In practice, we also observe unstable behavior and divergence when using it. We found that this problem can be circumvented by applying the regularization only after the model has been sufficiently trained, i.e., after 200k images. For the discriminator, following~\cite{Sauer2021NEURIPS}, we use spectral normalization without gradient penalties. In addition, we blur all images with a Gaussian filter with $\sigma=2$ pixels for the first $200k$ images. Discriminator blurring was introduced in~\cite{Karras2021NEURIPS} for StyleGAN3-R.
It prevents the discriminator from focusing on high frequencies early on, which we found beneficial across all settings we investigated. \boldparagraph{Low-Dimensional Latent Space.} As observed in~\cite{Sauer2021NEURIPS}, Projected GANs work better with FastGAN~\cite{Liu2021ICLR} than with StyleGAN. One main difference between these generators is their latent space: StyleGAN's latent space is comparatively high-dimensional (FastGAN: $\mathbb{R}^{100}$, BigGAN: $\mathbb{R}^{128}$, StyleGAN: $\mathbb{R}^{512}$). Recent findings indicate that the \textit{intrinsic dimension} of natural image datasets is relatively low~\cite{Pope2021ICLR}; ImageNet's dimension is estimated at around $40$. Accordingly, a latent code of size $512$ is highly redundant, which makes the mapping network's task harder at the beginning of training. As a consequence, the generator is slow to adapt and cannot benefit from Projected GAN's speed-up. We therefore reduce the dimension of StyleGAN's latent code $\mathbf{z}$ to $64$ and now observe stable training in combination with Projected GAN, resulting in a lower FID than the baseline (\textbf{Config-B}). We keep the original dimension of the \textit{style code} $\mathbf{w} \in \mathbb{R}^{512}$ so as not to restrict the capacity of the mapping network $\bG_m$. \boldparagraph{Pretrained Class Embeddings.} Conditioning the model on class information is essential for controlling the sample class and improving overall performance. A class-conditional variant of StyleGAN was first proposed in~\cite{Karras2020NeurIPS} for CIFAR10~\cite{Krizhevsky2009CITESEER}, where a one-hot encoded label is embedded into a 512-dimensional vector and concatenated with $\mathbf{z}$. For the discriminator, class information is projected onto the last discriminator layer~\cite{Miyato2018ICLR}. We observe that \textbf{Config-B} tends to generate similar samples per class. To quantify mode coverage, we use the recall metric~\cite{Kynknniemi2019NEURIPS} and find that \textbf{Config-B} achieves a low recall of~$0.004$. We hypothesize that the class embeddings collapse when training with Projected GAN. To prevent this collapse, we ease the optimization of the embeddings via pretraining: we extract and spatially pool the lowest-resolution features of an EfficientNet-lite0~\cite{Tan2019ICML} and calculate the mean per ImageNet class. This network has a low channel count, which keeps the embedding dimension small, following the arguments of the previous section. The embedding passes through a linear projection to match the size of $\mathbf{z}$ to avoid an imbalance. Both $\bG_m$ and $\bD_i$ are conditioned on the embedding. During GAN training, the embedding and the linear projection are optimized to allow specialization. With this configuration, the model generates diverse samples per class, and recall increases to $0.15$ (\textbf{Config-C}). Note that for all configurations in this ablation, we restrict the training time to $15$ V100-days; hence, the absolute recall is markedly lower than for the fully trained models. Conditioning a GAN on pretrained features was also recently investigated by~\cite{Casanova2021NEURIPS}. In contrast to our approach, Casanova et al.~\cite{Casanova2021NEURIPS} condition on specific \textit{instances} instead of learning a general class embedding.
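A minimal sketch of the class-embedding construction is given below. The helper is an assumption for illustration, not the released code: \texttt{feature\_net} stands in for an EfficientNet-lite0 truncated at its lowest-resolution feature map.
\begin{verbatim}
import torch

# Sketch of the pretrained class-embedding construction. `feature_net` is a
# stand-in for an EfficientNet-lite0 truncated at its lowest-resolution
# feature map; this is an illustrative helper, not the released code.
@torch.no_grad()
def class_embeddings(feature_net, loader, n_classes, feat_dim):
    sums = torch.zeros(n_classes, feat_dim)
    counts = torch.zeros(n_classes)
    for images, labels in loader:            # ImageNet training set
        feats = feature_net(images)          # (B, C, H, W) feature map
        feats = feats.mean(dim=(2, 3))       # spatial pooling -> (B, C)
        sums.index_add_(0, labels, feats)
        counts.index_add_(0, labels, torch.ones(len(labels)))
    return sums / counts.unsqueeze(1)        # mean feature per class
\end{verbatim}
During GAN training, these embeddings, after the learnable linear projection to the size of $\mathbf{z}$, are optimized further, as described above.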
\subsection{Reintroducing Progressive Growing}

Progressively growing the output resolution of a GAN was introduced by~\cite{Karras2018ICLR} for fast and more stable training. The original formulation adds layers to both $\bG$ and $\bD$ during training and gradually fades in their contribution. However, it was discarded in later work~\cite{Karras2020CVPR}, as it can contribute to texture sticking artifacts. Recent work by Karras et al.~\cite{Karras2021NEURIPS} finds that the primary cause of these artifacts is aliasing, so they redesign each layer of StyleGAN to prevent it. This motivates us to reconsider progressive growing with a carefully crafted strategy that aims to suppress aliasing as well as possible. Training first on very low resolutions, as small as $16^2$ pixels, enables us to break down the daunting task of training on high-resolution ImageNet into smaller subtasks. This idea is in line with the latest work on diffusion models~\cite{Nichol2021ICML, Saharia2021ARXIV, Dhariwal2021NEURIPS, Ho2021ARXIV}, which observes considerable improvements in FID on ImageNet when using a two-stage model, i.e., stacking an independent low-resolution model and an upsampling model to generate the final image.

Commonly, GANs follow a rigid sampling-rate progression, i.e., at each resolution there is a fixed number of layers followed by an upsampling operation with fixed filter parameters. StyleGAN3 does not follow such a progression. Instead, the layer count is set to $14$, independent of the output resolution, and the filter parameters of the up- and downsampling operations are carefully designed for antialiasing under the given configuration. The last two layers are critically sampled to generate high-frequency details. When adding layers for the next-highest resolution, discarding the previously critically sampled layers is crucial, as they would introduce aliasing when used as intermediate layers~\cite{Karras2020CVPR, Karras2021NEURIPS}. Furthermore, we adjust the filter parameters of the added layers to adhere to the flexible layer specification of~\cite{Karras2021NEURIPS}; we refer to the supplementary for details. In contrast to~\cite{Karras2018ICLR}, we do not add layers to the discriminator. Instead, to fully utilize the pretrained feature network $\bF$, we upsample both real and synthesized images to $\bF$'s training resolution ($224^2$ pixels) when training on smaller images.

We start progressive growing at a resolution of $16^2$ using $11$ layers. Every time the resolution increases, we cut off $2$ layers and add $7$ new ones. Empirically, fewer layers result in worse performance, while adding more leads to increased overhead and diminishing returns. For the final stage at $1024^2$, we add only $5$ layers, as the last two are not discarded. This amounts to $39$ layers at the maximum resolution of $1024^2$. Instead of a fixed growing schedule, each stage is trained until FID stops decreasing. We find it beneficial to use a large batch size of $2048$ at lower resolutions ($16^2$ to $64^2$), similar to~\cite{Brock2019ICLR}. At higher resolutions, smaller batch sizes suffice ($128^2$ to $256^2$: $256$; $512^2$ to $1024^2$: $128$). Once new layers are added, the lower-resolution layers remain fixed to prevent mode collapse. In our ablation study, FID improves only slightly over \textbf{Config-C} (\textbf{Config-D}). However, the main advantage can be seen at high resolutions, where progressive growing drastically reduces training time.
At resolution $512^2$, we reach the prior state-of-the-art (FID$\;=3.85$) after $2$ V100-days. This stands in contrast to other methods such as ADM, where doubling the resolution from $256^2$ to $512^2$ pixels increases the training time needed to find the best-performing model from $393$ to $1914$ V100-days\footnote{Note that these settings are not directly comparable, as the stem of our model is pretrained, but the values should give a general sense of the order of magnitude.}. Since our aim is not to reintroduce texture sticking artifacts, we measure $EQ\text{-}T$, a metric for translation equivariance~\cite{Karras2021NEURIPS}, where higher is better. \textbf{Config-C} yields $EQ\text{-}T=55$, while \textbf{Config-D} attains $EQ\text{-}T=48$. This slight reduction in equivariance shows that \textbf{Config-D} restricts aliasing almost as well as a configuration without growing. For context, architectures with aliasing yield $EQ\text{-}T\sim 15$.

\subsection{Exploiting Multiple Feature Networks}

An ablation study conducted in~\cite{Sauer2021NEURIPS} finds that most pretrained feature networks $\bF$ perform similarly in terms of FID when used for Projected GAN training, regardless of training data, pretraining objective, or network architecture. However, the study does not answer whether combining several $\bF$ is advantageous. Starting from the standard configuration, an EfficientNet-lite0, we add a second $\bF$ to inspect the influence of its pretraining objective (classification or self-supervision) and architecture (CNN or Vision Transformer (ViT)~\cite{Dosovitskiy2021ICLR}). The results in \tabref{tab:ablation} show that an additional CNN leads to slightly lower FID. Combining networks with different pretraining objectives does not offer benefits over using two classifier networks. However, combining an EfficientNet with a ViT improves performance significantly. This result corroborates recent findings in the neural architecture literature that supervised and self-supervised representations are similar~\cite{Grigg2021ARXIV}, whereas ViTs and CNNs learn different representations~\cite{Raghu2021NEURIPS}. Combining both architectures appears to have complementary effects for Projected GANs. We do not see significant improvements when adding more networks; hence, \textbf{Config-E} uses the combination of an EfficientNet~\cite{Tan2019ICML} and a DeiT-M~\cite{Touvron2021ICML}.

\subsection{Classifier Guidance for GANs}

Dhariwal and Nichol~\cite{Dhariwal2021NEURIPS} introduced classifier guidance to inject class information into diffusion models. Classifier guidance modifies each diffusion step at time step $t$ by adding the gradients of a pretrained classifier, $\nabla_{\mathbf{x}_t}\log p_{\phi}(\mathbf{c}|\mathbf{x}_t, t)$. The best results are obtained when applying guidance to class-conditional models and scaling the classifier gradients by a constant $\lambda>1$. This successful combination indicates that our model may also profit from classifier guidance, even though we already supply class information via embeddings. We first pass the generated image $\mathbf{x}$ through a pretrained classifier $\mathrm{CLF}$ to predict the class label. We then add a cross-entropy loss $\mathcal{L}_{CE} = -\sum_{i=1}^{C} c_i \log \mathrm{CLF}(\mathbf{x})_i$ as an additional term to the generator loss and scale this term by a constant $\lambda$. For the classifier, we use DeiT-S~\cite{Touvron2021ICML}, which exhibits strong classification performance while not adding much overhead to training. Similar to~\cite{Dhariwal2021NEURIPS}, we observe a significant improvement in IS, indicating an increase in sample quality (\textbf{Config-F}). We find $\lambda=8$ to work well empirically. Classifier guidance only works well at higher resolutions ($>32^2$); otherwise, it leads to mode collapse. This is in contrast to~\cite{Dhariwal2021NEURIPS}, who exclusively guide their low-resolution model. The difference stems from how guidance is applied: we use it during model training, whereas~\cite{Dhariwal2021NEURIPS} guide the sampling process.
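As a minimal sketch of this guidance term (ours; the toy linear classifier below stands in for the frozen DeiT-S, and all names are illustrative):
\begin{verbatim}
import torch
import torch.nn.functional as F

lam = 8.0                       # guidance strength used in Config-F

def guidance_loss(classifier, fake_images, class_labels):
    # Cross-entropy of a frozen pretrained classifier on generated images.
    return F.cross_entropy(classifier(fake_images), class_labels)

# Toy stand-in for the frozen DeiT-S classifier:
clf = torch.nn.Sequential(torch.nn.Flatten(),
                          torch.nn.Linear(3 * 32 * 32, 1000))
for p in clf.parameters():
    p.requires_grad_(False)

fake = torch.randn(8, 3, 32, 32, requires_grad=True)  # stands in for G(z)
labels = torch.randint(0, 1000, (8,))
adv_loss = torch.tensor(0.0)    # placeholder for the adversarial G loss
total = adv_loss + lam * guidance_loss(clf, fake, labels)
total.backward()                # gradients reach G through the fake images
\end{verbatim}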
\section{Background}
\label{sec:background}

We first introduce the main building blocks of our system: the StyleGAN3 generator~\cite{Karras2021NEURIPS} and Projected GAN's~\cite{Sauer2021NEURIPS} feature projectors and multi-scale discriminators.

\boldparagraph{StyleGAN.} This section describes style-based generators in general, with a focus on the latest StyleGAN3~\cite{Karras2021NEURIPS}. A StyleGAN generator consists of a mapping network $\bG_m$ and a synthesis network $\bG_s$. First, $\bG_m$ maps a normally distributed latent code $\mathbf{z}$ to a style code $\mathbf{w}$. This style code is then used to modulate the convolution kernels of $\bG_s$, controlling the synthesis process. The synthesis network $\bG_s$ of StyleGAN3 starts from a spatial map defined by Fourier features~\cite{Tancik2020NEURIPS, Xu2021CVPR}. This input then passes through $N$ layers of convolutions, non-linearities, and upsampling to generate an image. Each non-linearity is wrapped between upsampling and downsampling operations to prevent aliasing. The low-pass filters used for these operations are carefully designed to balance image quality, antialiasing, and training speed. Concretely, their cutoff and stopband frequencies grow geometrically with network depth, the transition band half-widths are as wide as possible within the limits of the layer sampling rate, and only the last two layers are critically sampled, i.e., the filter cutoff equals the bandlimit. The number of layers $N$ is $14$, independent of the final output resolution.

Style mixing and path length regularization are methods for regularizing style-based generators. In style mixing, an image is generated by independently feeding sampled style codes $\mathbf{w}$ into different layers of $\bG_s$. Path length regularization encourages a step of fixed size in latent space to result in a fixed-magnitude change in the pixel intensities of the generated image~\cite{Karras2020CVPR}. This inductive bias leads to a smoother generator mapping and has several advantages, including fewer artifacts, more predictable training behavior, and better inversion. Progressive growing was introduced by~\cite{Karras2018ICLR} for stable training at high resolutions, but~\cite{Karras2020CVPR} found that it can impair shift-equivariance. Karras et al.~\cite{Karras2021NEURIPS} observe that texture sticking artifacts are caused by a lack of equivariance and carefully design StyleGAN3 to prevent texture sticking. Hence, in this paper, as we build on StyleGAN3, we can revisit the idea of progressive growing to improve convergence speed and synthesis quality.
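To fix ideas, the following toy sketch (ours, with illustrative layer sizes) traces the style-based pipeline just described; per-channel scaling of the input feature map stands in for StyleGAN's actual modulation of the convolution kernels:
\begin{verbatim}
import torch
import torch.nn as nn

class ToyStyleGenerator(nn.Module):
    # Schematic pipeline: z -> G_m -> w -> style-modulated synthesis G_s.
    def __init__(self, z_dim=64, w_dim=512, ch=128):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(z_dim, w_dim), nn.ReLU(),
                                     nn.Linear(w_dim, w_dim))     # G_m
        self.affine = nn.Linear(w_dim, ch)  # per-layer style -> scales
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.to_rgb = nn.Conv2d(ch, 3, 1)
        # Stands in for StyleGAN3's Fourier-feature input map:
        self.const = nn.Parameter(torch.randn(1, ch, 16, 16))

    def forward(self, z):
        w = self.mapping(z)                   # style code
        s = self.affine(w)                    # modulation scales
        x = self.const * s[:, :, None, None]  # per-channel modulation
        x = torch.relu(self.conv(x))          # one of the N = 14 layers
        return self.to_rgb(x)

img = ToyStyleGenerator()(torch.randn(4, 64))  # (4, 3, 16, 16)
\end{verbatim}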
\boldparagraph{Projected GAN.} The original adversarial game between a generator $\bG$ and a discriminator $\bD$ can be extended by a set of feature projectors $\{\bP_l\}$~\cite{Sauer2021NEURIPS}. The projectors map real images $\mathbf{x}$ and images generated by $\bG$ to the discriminator's input space. The Projected GAN objective is formulated as
\begin{equation}
\begin{aligned}
\min_\bG \max_{\{\bD_l\}} &\sum_{l \in \mathcal{L}} \Big( \mathbb{E}_{\mathbf{x}} [\log \bD_l(\bP_l(\mathbf{x}))]\\
&\quad \; + \mathbb{E}_{\mathbf{z}}[ \log( 1- \bD_l(\bP_l(\bG(\mathbf{z}))))] \Big)
\end{aligned}
\label{eq:GANobjective2}
\end{equation}
where $\{\bD_l\}$ is a set of independent discriminators operating on different feature projections. The projectors consist of a pretrained feature network $\bF$ and cross-channel mixing (CCM) and cross-scale mixing (CSM) layers. The purpose of CCM and CSM is to keep the discriminators from focusing on only a subset of their input feature space, which would result in mode collapse. Both modules employ differentiable random projections that are not optimized during GAN training: CCM mixes features across channels via random $1{\times}1$ convolutions; CSM mixes features across scales via residual random $3{\times}3$ convolution blocks and bilinear upsampling. The output of CSM is a feature pyramid consisting of four feature maps at different resolutions. Four discriminators operate independently on these feature maps. Each discriminator uses a simple convolutional architecture and spectral normalization~\cite{Miyato2018ICLR}. The depth of a discriminator varies with its input resolution, i.e., a spatially larger feature map corresponds to a deeper discriminator. Other than spectral normalization, Projected GANs do not use additional regularization such as gradient penalties~\cite{Mescheder2018ICML}. Lastly, Sauer et al.~\cite{Sauer2021NEURIPS} apply differentiable data augmentation~\cite{Zhao2020NeurIPS} before $\bF$, which improves Projected GAN's performance independent of the dataset size.

Sauer et al.~\cite{Sauer2021NEURIPS} evaluate several combinations of $\bF$ and $\bG$ and find an EfficientNet-lite0~\cite{Tan2019ICML} and a FastGAN generator~\cite{Liu2021ICLR} to work especially well. When using a StyleGAN generator, they observe that the discriminators can quickly overpower the generator for suboptimal learning rates. The authors suspect that the generator might adapt too slowly due to its design, which modulates feature maps with styles learned by a mapping network.
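For concreteness, a minimal sketch (ours) of the discriminator side of this objective, written in its numerically stable softplus form; the identity projector and linear critic below are illustrative stand-ins for the frozen $\bP_l$ and the convolutional $\bD_l$:
\begin{verbatim}
import torch
import torch.nn.functional as F

def projected_d_loss(discriminators, projectors, real, fake):
    # Sum of logistic losses over the fixed feature projections P_l.
    loss = 0.0
    for D, P in zip(discriminators, projectors):
        with torch.no_grad():               # P_l is never optimized
            f_real, f_fake = P(real), P(fake.detach())
        loss = loss + F.softplus(-D(f_real)).mean()  # -log D_l(P_l(x))
        loss = loss + F.softplus(D(f_fake)).mean()   # -log(1 - D_l(...))
    return loss

# Toy usage: identity projector, linear critic on 8x8 RGB "features".
P = [torch.nn.Identity()]
D = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 1))]
loss = projected_d_loss(D, P, torch.randn(2, 3, 8, 8),
                        torch.randn(2, 3, 8, 8))
\end{verbatim}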
\section{Results}
\label{sec:results}

In this section, we first compare StyleGAN-XL to state-of-the-art approaches for image synthesis on ImageNet. We then evaluate the inversion and editing capabilities of StyleGAN-XL. As described above, we scale our model to a resolution of $1024^2$ pixels, which no prior work has attempted on ImageNet so far. The resolution of most images in ImageNet is lower; we therefore preprocess the data with a super-resolution network~\cite{Liang2021ICCV}, see supplementary.

\subsection{Image Synthesis}

Both our work and~\cite{Dhariwal2021NEURIPS} use classifier networks to guide the generator. To ensure that the models are not inadvertently optimizing for FID and IS, which also utilize a classifier network, we propose random-FID (rFID). For rFID, we calculate the Fr\'echet distance in the \texttt{pool\_3} layer of a randomly initialized inception network~\cite{Szegedy2015CVPR}. The efficacy of random features for evaluating generative models has been demonstrated in~\cite{Naeem2020ICML}. Furthermore, we report sFID~\cite{Nash2021ICML} to assess the spatial structure of generated samples. Lastly, sample fidelity and diversity are evaluated via precision and recall~\cite{Kynknniemi2019NEURIPS}, complementing FID and IS.

In \tabref{tab:sotapr}, we compare StyleGAN-XL to the currently strongest GAN model (BigGAN-deep~\cite{Brock2019ICLR}) and diffusion models (CDM~\cite{Ho2021ARXIV}, ADM~\cite{Dhariwal2021NEURIPS}) on ImageNet. The values for ADM are calculated with and without additional methods (Upsampling \textbf{U} and Classifier Guidance \textbf{G}). Both BigGAN and StyleGAN-XL allow for the truncation trick, i.e., drawing a latent code from a truncated sampling space. A lower truncation $\psi$ increases sample quality while lowering sample diversity, resulting in an FID vs.~IS tradeoff~\cite{Brock2019ICLR}. We find that StyleGAN-XL substantially outperforms all baselines across all resolutions in FID, sFID, rFID, and IS. Exceptions are precision and recall. According to recall, StyleGAN-XL's sample diversity lies between BigGAN's and ADM's, making progress in closing the gap between these model types. StyleGAN-XL's increase in diversity comes at the price of individual sample quality as measured by precision, where BigGAN is the best among all compared approaches. Interestingly, StyleGAN-XL attains high diversity across all resolutions, which can be attributed to our progressive growing strategy. Furthermore, this strategy enables successful scaling to megapixel resolution. Training at $1024^2$ for a single V100-day yields a noteworthy FID~of~$4.3$. At this resolution, we do not compare to baselines because of resource constraints, as they are prohibitively expensive to train. \figref{fig:highres} visualizes generated samples at increasing resolutions.

\sotapr
\highres

\subsection{Inversion and Manipulation}

GAN-editing methods first \textit{invert} a given image into latent space, i.e., find a style code $\mathbf{w}$ that reconstructs the image as faithfully as possible when passed through $\bG_s$. Then, $\mathbf{w}$ can be manipulated to achieve semantically meaningful edits.

\boldparagraph{Inversion.} Standard approaches for inverting $\bG_s$ use either latent optimization~\cite{Abdal2019ICCV,Creswell2019NEURAL,Karras2020CVPR} or an encoder~\cite{Perarnau2016ARXIV,Alaluf2021ICCV,Tov2021TOG}. A common way to achieve low reconstruction error is to use an extended definition of the latent space, $\mathcal{W}+$, in which a separate $\mathbf{w}$ is chosen for each layer of $\bG_s$. However, as highlighted by~\cite{Tov2021TOG}, this extended definition achieves higher reconstruction quality in exchange for lower editability. Therefore,~\cite{Tov2021TOG} carefully design an encoder that maintains editability by mapping to regions of $\mathcal{W}+$ that are close to the original distribution of $\mathcal{W}$. We follow~\cite{Karras2020CVPR} and use the original latent space $\mathcal{W}$. We find that StyleGAN-XL already achieves satisfactory inversion results using basic latent optimization. For inversion on the ImageNet validation set at $512^2$, StyleGAN-XL yields $\text{PSNR}=13.5$ on average, improving over BigGAN at $\text{PSNR}=10.8$.
Besides better pixel-wise reconstruction, StyleGAN-XL's inversions are semantically closer to the target images: measuring the FID between reconstructions and targets, StyleGAN-XL attains $\text{FID}=21.7$, while BigGAN reaches $\text{FID}=47.5$. \figref{fig:inversion} shows qualitative results. For implementation details and additional metrics, we refer to the supplementary. Given the results above, it is also possible to refine the obtained reconstructions further. Roich et al.~\cite{Roich2021ARXIV} recently introduced pivotal tuning inversion (PTI). PTI uses an initial inverted style code as a pivot point around which the generator is finetuned; additional regularization prevents altering the generator output far from the pivot. Combining PTI with StyleGAN-XL allows us to invert both in-domain (ImageNet validation set) and out-of-domain images almost precisely. At the same time, the generator output remains perceptually smooth, see~\figref{fig:interpolations}.

\inversion
\interpolations

\boldparagraph{Image Manipulation.} Given the inverted images, we can leverage GAN-based editing methods to manipulate the style code $\mathbf{w}$. In \figref{fig:editing}~(Left), we first invert a given source image via latent-space optimization (see the sketch below) and then apply different manipulation directions obtained by GANspace~\cite{Haerkoenen2020NEURIPS} and StyleMC~\cite{Kocasari2021WACV}. Prior work~\cite{Jahanian2020ICLR} also investigates in-plane translation. This operation can be defined directly in the input grid of StyleGAN-XL; the input grid also allows extrapolation by increasing the grid size. An inherent property of StyleGAN is the ability to perform style mixing: supplying the style codes of two samples to different layers of $\bG_s$ generates a hybrid image that takes on different semantic properties of both inputs. Style mixing is commonly employed for instances of a single domain, i.e., combining two human portraits. StyleGAN-XL inherits this ability and, to a certain extent, even generates out-of-domain combinations between different classes, akin to the counterfactual images presented in~\cite{Sauer2021ICLR}. This technique works best for aligned samples, similar to StyleGAN's originally favored setting, FFHQ. Curated examples are shown in \figref{fig:editing}~(Right).

\editing
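For reference, a minimal sketch of the latent optimization used for inversion above (ours; plain MSE is shown, whereas in practice a perceptual term is typically added, and the linear map below is a toy stand-in for $\bG_s$):
\begin{verbatim}
import torch

def invert(G_s, w_init, target, steps=200, lr=0.05):
    # Latent optimization in W: find a style code w reconstructing target.
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G_s(w), target)
        loss.backward()
        opt.step()
    return w.detach()

# Toy stand-in for the synthesis network: a linear map from w to an image.
W_out = torch.randn(3 * 16 * 16, 512)
G_s = lambda w: torch.tanh(w @ W_out.t()).view(-1, 3, 16, 16)
w_inv = invert(G_s, torch.zeros(1, 512), torch.randn(1, 3, 16, 16))
\end{verbatim}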
{ "timestamp": "2022-02-02T02:13:28", "yymm": "2202", "arxiv_id": "2202.00273", "language": "en", "url": "https://arxiv.org/abs/2202.00273" }
{ "timestamp": "2022-02-02T02:12:11", "yymm": "2202", "arxiv_id": "2202.00244", "language": "en", "url": "https://arxiv.org/abs/2202.00244" }
\section{Introduction}

The phenomenon of stretching a polymer chain by pulling on its ends is a standard subject in polymer physics, with important applications in cell biology; as a recent example see, e.g., \cite{stretching} about chromatin stretching by optical tweezers. By analogy with macroscopic examples (e.g., a mooring line around a bollard), here we examine another efficient way to stretch a polymer: pulling it tight around a curved obstacle. As a model, it is prototypical for several biological contexts. As just one example, we mention the recently documented (via imaging \cite{wong19,oakley70} and Hi-C experiments \cite{nand21}) chromosome morphology in certain algae (dinoflagellates), which form cylindrical rods with helically twisted bundles of wrapped DNA. In addition, the model turns out to have surprisingly rich connections with several other fields of theoretical physics, first and foremost with KPZ statistics.

Winding of a Gaussian polymer around a topological point-like obstacle in 2D is also a classical problem in polymer physics, pioneered by S.F.~Edwards \cite{Edwards:1967}, Prager and Frisch \cite{Prager_Frisch}, and Saito and Chen \cite{Chen}. For a finite-size obstacle in 2D (or a cylinder in 3D), the Green's function of a Gaussian polymer was considered by Comtet et al.~in \cite{Comtet} and later formally expressed in terms of an infinite series of linear combinations of Bessel functions \cite{Grosberg_Frish:2003}. Although the latter result is exact, addressing the limit of a strongly stretched chain based on this expression remains a steep challenge. Significant progress in this direction was recently achieved by B.~Meerson and N.~Smith \cite{Baruch1,Baruch2}; some related problems were also examined by some of us \cite{1,2,valov_fixman}.

The model that we address in this paper is depicted in Fig. \ref{fig:pol-f01}, with panels (\textit{a,b}) and (\textit{c}) showing a polymer chain stretched along a flat and a curved boundary, respectively. We will be interested in the span of fluctuations of the polymer away from the boundary, characterized by the length scale $\Delta$. Specifically, following a note by one of us \cite{Grosberg_Comment_2021}, we argue that $\Delta$ in the curved-boundary case is determined by the non-local competition between the entropy loss of tightly confining the polymer in a narrow strip of width $\Delta$ along the surface and the entropy loss of stretching it beyond the imposed necessity by making a wider detour around the obstacle. Our analysis reveals the universal scaling $\Delta \sim R^{\beta}$ as a function of $R$, with the KPZ growth exponent $\beta=1/3$, while the correlation length $S^*$ at which the chain experiences the curvature of the disk scales as $S^* \sim R^{1/z}$, with the KPZ dynamic exponent $z=3/2$. Simulations of this system reveal that the one-point distribution of typical fluctuations is well described by the squared Airy law, connecting our polymer problem in 2D with the (1+1)D Ferrari-Spohn universality class \cite{spohn_ferrari,Baruch1}. Strikingly, we find that this KPZ-like behavior is valid not only for a Gaussian polymer (which is like a regular random walk), but for a polymer with an arbitrary fractal dimension $D_f=1/\nu$, where $\nu$ is the usual metric exponent relating the mean-square end-to-end distance to the number of monomers, $\left< R^2 \right> \sim b^2 N^{2\nu}$, with $b$ the (Kuhn) monomer length scale.
The examples, in addition to the usual Gaussian case $\nu = 1/2$, include self-avoiding polymers in 2D, $\nu = 3/4$ \cite{nienhuis}, and in 3D (if the ring is wound around a cylinder), $\nu \approx 0.588$ \cite{li}; annealed branched polymers, $\nu = 1/4$ \cite{zimm} without or $\nu \approx 7/13$ with excluded volume \cite{BranchedUnivClass}; one ring in a 3D melt of other unconcatenated rings, $\nu = 1/3$ \cite{khokhlov-nech,halverson}; and polymers with a quadratic non-local Hamiltonian producing subdiffusive fractal paths with arbitrary $\nu \le 1/2$ \cite{nech-tamm-pol,polovnikov19}.

\begin{figure}
\centering
\includegraphics[width=250pt]{polymnew-f01.pdf}
\caption{Stretching of a polymer chain in a flat (left) or curved (right) geometry. In each case, the chain is represented as a train of Pincus blobs. (a): the polymer is stretched above a planar boundary and fluctuates over a distance $D$ in the perpendicular direction; (b): the polymer is additionally confined within a distance $\Delta \ll D$ above the surface, and Pincus blobs are combined into ``super-blobs'' (grey ball); (c): the polymer is stretched around a circular boundary of radius $R$. The end-to-end distance along the surface is fixed in all cases, $S \gg b N^{\nu}$.}
\label{fig:pol-f01}
\end{figure}

\section{Path stretching in empty space or along a flat boundary}

As a warm-up and for future reference, consider a chain of $M \gg 1$ monomers with fractal dimension $D_f=1/\nu$ and let it be stretched along a flat boundary, with the end-to-end distance fixed at $S \gg b M^{\nu}$. As shown in Fig. \ref{fig:pol-f01}(a), the chain looks like a train of blobs of $g$ monomers and size $\xi$ each. Chain statistics are unaffected inside a blob, i.e., $\xi = b g^{\nu}$, and the train of blobs is stretched, meaning that $S = \frac{M}{g} \xi$. Simple algebra then gives $g = \left( M b/S \right)^{1/(1-\nu)}$ and $\xi = b \left( M b/S \right)^{\nu/(1-\nu)}$. These blobs generalize the classical ``Pincus blobs'' \cite{PincusBlobs_ma60051a002} to arbitrary $\nu$, except that our problem is formulated not in terms of a stretching force but in terms of a fixed end-to-end distance, $S \gg b M^{\nu}$. The free energy of chain stretching, $F_{\rm str}$, is about $k_B T$ per blob:
\begin{equation}
\frac{F_{\rm str}}{k_B T} \sim \frac{M}{g} = \left( \frac{S}{b M^{\nu}} \right)^{\frac{1}{1-\nu}} \ .
\label{eq:FreeEnergy_Stretch}
\end{equation}
The statistics of the chain of blobs in the direction perpendicular to the surface is Gaussian (compare, e.g., with the similar conclusion for self-avoiding polymers in \cite[Equation 1.50]{deGennes}); its spread is, therefore,
\begin{equation}
D = \left(\frac{M}{g} \right)^{1/2} \xi = b M^{\nu} \left( \frac{b M^{\nu}}{S} \right)^{\frac{2 \nu - 1}{2 (1-\nu)}} \ .
\label{eq:D_Flat}
\end{equation}
If the chain is additionally confined within a layer of width $\Delta \ll D$, as depicted in Fig. \ref{fig:pol-f01}(b), then, considering ``super-blobs'' of $G$ Pincus blobs each, such that $\xi G^{1/2} = \Delta$, we can find the confinement free energy as $k_BT$ per super-blob:
\begin{equation}
\frac{F_{\rm conf}}{k_B T} = \frac{M}{gG} = \frac{b^2 M^{2\nu}}{\Delta^2} \left( \frac{b M^{\nu}}{S} \right)^{\frac{2 \nu -1}{1-\nu}}
\label{eq:FreeEnergy_Conf}
\end{equation}
An interesting observation is that the size $D$ perpendicular to the stretching (see \eq{eq:D_Flat}) is a \textit{decreasing} function of the elongation $S$ only for $\nu > 1/2$, while for $\nu < 1/2$ it is an \textit{increasing} function.
In other words, ``subdiffusive'' polymers with $\nu < 1/2$ behave as substances with a negative Poisson ratio: upon stretching, they swell in the perpendicular direction. Clearly, this is because fractal polymers with $\nu < 1/2$ have some negative correlations between monomers; these correlations are destroyed by stretching, which leads to the chain's ``release''.

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{polymnew-f02.pdf}
\caption{(a): Polymer chain spread, $\Delta$, away from a cylindrical surface as a function of the curvilinear distance between the chain ends, $S$, for a variety of $\nu$ values (or fractal dimensions $D_f=1/\nu$). All distances are measured relative to the unperturbed coil size $R_0 = b N^{\nu}$. (b): Four regimes of fluctuations for various values of the disc radius $R$ and end-to-end distance $S$. Regime II corresponds to an effectively flat barrier, while regime IV is for an obstacle thinner than one Pincus blob. The most interesting is regime III, which corresponds to a stretched polymer on an essentially curved barrier. The dashed line highlights winding around the cylinder. (c): Exponents $z$ (upper) and $\alpha$ (bottom) as functions of $\nu < \gamma$ for different values of $\gamma$. The increase of $\gamma$ from $\gamma=0.1$ (yellow) to $\gamma=0.9$ (dark blue) is shown by arrows on both diagrams. Limiting KPZ values corresponding to $\gamma=1$ are marked by black dashed lines.}
\label{fig:pol-f02}
\end{figure*}

\section{Path stretching along a curved surface: free energy and elliptic blobs}

Let us keep the ends of a chain of $N$ monomers a distance $S$ apart along the surface of a cylinder of radius $R$ -- see Fig. \ref{fig:pol-f01}(c). We assume the chain is stretched, i.e., $S \gg b N^{\nu}$; however, we do not impose any relation between $S$ and $R$. Clearly, $S>2\pi R$ means wrapping around the cylinder. The free energy of such a chain consists of two contributions. The first one describes stretching of the chain to the distance $\frac{S}{R} \left(R+\Delta \right)$; the corresponding free energy is given by \eq{eq:FreeEnergy_Stretch}, with the replacements $M \to N$ and $S \to S \left(1 + \Delta/R \right)$. The second term corresponds to confining the polymer in a strip of width $\Delta$ and is given by \eq{eq:FreeEnergy_Conf}, with the similar replacement $M \to N$. Overall, the variational free energy becomes
\begin{equation}
\frac{F_{\mathrm{circ}}}{k_B T} \propto \left(\frac{S}{b N^{\nu}}\; \frac{R+\Delta}{R} \right)^{\frac{1}{1-\nu}} + \frac{b^2 N^{2\nu}}{\Delta^2} \left( \frac{b N^{\nu}}{S} \right)^{\frac{2 \nu -1}{1-\nu}}
\label{eq:free_energy_circle}
\end{equation}
This free energy is an extension of Eq. (2) in the Comment \cite{Grosberg_Comment_2021}. Assuming $\Delta \ll R$, we can linearize the first term and then minimize the free energy to get
\begin{equation}
\frac{\Delta}{bN^{\nu}} = \left( \frac{R}{bN^{\nu}} \right)^{\frac{1}{3}} \left( \frac{b N^{\nu}}{S} \right)^{\frac{2 \nu}{3(1-\nu)}}
\label{eq:Delta_Curved}
\end{equation}
It is instructive to re-derive (\ref{eq:Delta_Curved}) in a different way. Given that the chain is localized in a strip of width $\Delta$, the curvature of the underlying surface becomes relevant only at scales exceeding $S^{\ast} = (R \Delta )^{1/2}$ (see Fig. \ref{fig:pol-f01}(c)). We call the chain section covering the distance $S^{\ast}$ an ``elliptic blob''. To find the number of monomers in an elliptic blob, $N^{\ast}$, we can use the result \eq{eq:D_Flat}, with the replacements $D \to \Delta$ and $S \to S^{\ast}$.
The train of elliptic blobs is fully stretched around the curved boundary; hence their number is $N/N^{\ast} \sim S/S^{\ast}$. A few lines of algebra based on this relation yield the previously obtained answer (\ref{eq:Delta_Curved}). Simultaneously, we get the expression for the crossover scale $S^{\ast}$ at given $R$:
\begin{equation}
S^{\ast}/b N^{\nu} = \left( R / b N^{\nu} \right)^{\frac{2(1-\nu)}{3-2\nu}}\ .
\label{eq:crossover}
\end{equation}
Our results (\ref{eq:D_Flat}) for the effectively flat surface at $S < S^{\ast}$ and (\ref{eq:Delta_Curved}) for the curved one at $S > S^{\ast}$ are collected in Fig. \ref{fig:pol-f02}(\textit{a}). There, we plot $\Delta /bN^{\nu}$ as a function of the stretching $S/bN^{\nu} > 1$ for a variety of $\nu$ values, $\nu\in (0,1)$. In particular, at $\nu < 1/2$ the behavior of $\Delta(S)$ is non-monotonic: $\Delta$ increases at modest $S$, because stretching destroys negative correlations, while at larger $S$ the curvature of the underlying disc takes over and forces $\Delta$ to decrease. Thus, for ``subdiffusive'' paths the negative Poisson ratio flips its sign to positive at the stretching $S^*$ at which the boundary becomes substantially curved. A curious fact is that at the specific value $\nu = 1/3$ (which corresponds to a sort of crumpled statistics) the non-monotonic dependence $\Delta(S)$ returns to the starting value $\Delta / b N^{\nu} \sim 1$ exactly when $S$ becomes of the order of the disc radius $R$, i.e., when the chain is forced to make about one full turn around the disc.

Another way to summarize our results is given in Fig. \ref{fig:pol-f02}(\textit{b}) in terms of four distinct regimes in the space of two dimensionless control parameters, namely the amount of stretching $S$ and the radius of the disc $R$, both scaled by the unperturbed coil size, $S/bN^{\nu}$ and $R/bN^{\nu}$:
\begin{equation}
\frac{\Delta}{b N^{\nu}} = \left\{\begin{array}{ll} 1 , & \mathrm{Regime \ I} \\ \left( S / b N^{\nu} \right)^{-\frac{2 \nu - 1}{2 (1-\nu)}}, & \mathrm{Regime \ II} \\ \left(R / b N^{\nu} \right)^{\frac{1}{3}} \left( S / b N^{\nu} \right)^{-\frac{2 \nu}{3 (1-\nu)}} , & \mathrm{Regime \ III} \\ \left( S / b N^{\nu} \right)^{-\frac{\nu}{1-\nu} }, & \mathrm{Regime \ IV} \end{array} \right.
\label{eq:sumDelta}
\end{equation}
The first regime (I) deals with free, unrestricted polymers with $S< b N^{\nu}$; for them, fixing the ends only marginally affects the statistics, $\Delta \sim b N^{\nu}$. The second (II) and third (III) regimes correspond to stretched polymers above effectively flat \eq{eq:D_Flat} and curved \eq{eq:Delta_Curved} boundaries, respectively. The most interesting regime, where the span of fluctuations is $R$-dependent, is regime III; remarkably, in this regime the dependence $\Delta(R) \propto R^{1/3}$ turns out to be independent of $\nu$. When the cylinder radius becomes as small as the Pincus blob size, $R \sim \xi$, so does the elliptic blob, $S^{\ast} \sim \xi$, and the crossover to regime IV occurs. Clearly, in this regime every Pincus blob ``hugs'' the entire cylinder and, thus, $\Delta = \xi \ge R$\footnote{Note that \eq{eq:Delta_Curved} is not valid in this regime, since the Pincus blob $\xi$ exceeds the size of the elliptic blob $S^{\ast}$.}. In terms of the winding number $n$, regime IV corresponds to $n > \left(S/bN^\nu\right)^{1/(1-\nu)} \gg 1$.
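The four regimes of \eq{eq:sumDelta} are compact enough to transcribe directly. The following sketch (ours, for illustration) evaluates $\Delta/bN^{\nu}$ for given $S$, $R$, $N$, and $\nu$, selecting the flat or curved branch through their crossover at $S^{\ast}$:
\begin{verbatim}
import numpy as np

def span(S, R, N, b=1.0, nu=0.5):
    # Delta/(b N^nu) in the four regimes of Eq. (7).
    R0 = b * N**nu                     # unperturbed coil size
    s, r = S / R0, R / R0              # dimensionless control parameters
    if s <= 1.0:                       # regime I: unstretched coil
        return 1.0
    xi = s ** (-nu / (1.0 - nu))       # Pincus blob size, in units of R0
    if r <= xi:                        # regime IV: blobs hug the cylinder
        return xi
    flat = s ** (-(2*nu - 1) / (2*(1 - nu)))       # regime II
    curved = r**(1/3) * s**(-2*nu / (3*(1 - nu)))  # regime III
    return min(flat, curved)           # branches cross at S = S*

# Ideal chain (nu = 1/2), N = 1000, stretched half-way around R = 10:
print(span(S=np.pi * 10, R=10, N=1000))
\end{verbatim}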
In this regard, it is tempting to compare regime IV of winding around a thin cylinder with winding around a zero-width line topological obstacle, studied earlier \cite{Edwards:1967, Prager_Frisch, Chen, Comtet, Grosberg_Frish:2003}. In that case, $\Delta \sim b N^{\nu}$, with only a weak dependence on the winding number $n$ (see formula (48) in \cite{Grosberg_Frish:2003}, which is exact for $\nu = 1/2$). Since a physical obstacle is never purely topological, but also geometrical with some finite radius $R$, our present work allows us to clarify the applicability condition of the result of \cite{Grosberg_Frish:2003}: $2 \pi R n \ll b N^{\nu}$.

\begin{figure}
\centering
\includegraphics[width=250pt]{polymnew-f03.pdf}
\caption{Molecular dynamics simulations of a stretched polymer of length $N$ over a disc of radius $R$, in units of the monomer size $b$. (a): Probability density distribution $P(h, N/2)$ of the scaled variable $h = (r - \Delta)/\sigma(r)$, where $\sigma(r)$ is the standard deviation of the radial excursions $r$ of the median monomer $N/2$ above the disk of radius $R=10$. The collapse is shown for different polymer lengths $N=80,100,200,300,400,500,600,700,800,1000$, each coded by an individual color. The thick black line is the best fit by the squared Airy function with parameters $a \approx 0.7$, $b \approx -0.8$, properly normalized. The inset shows the tail of $-\log P(h, N/2)$. (b): Typical excursions as a function of $N$ at fixed disk size, $R=10$. (c): Typical excursions as a function of $R$ at fixed polymer length, $N=1000$.}
\label{fig:pol-f03}
\end{figure}

Our results can be generalized to a chain stretched around a smooth barrier of varying curvature, e.g., around an ellipse (see Fig.~A1). Clearly, the varying curvature determines the local size of the elliptic blob, $S^*$, while the Pincus blob size $\xi$ and the number of monomers therein, $g$, remain invariant. As a result, the spread in the radial direction from the barrier at each point $\tau$ adopts the following universal form (see Appendix \ref{app_curved} for derivation details):
\begin{equation}
\Delta(\tau) = \left(R(\tau) \xi^2 \right)^{1/3}
\label{eq:curv}
\end{equation}
where $R(\tau)$ is the local radius of curvature at point $\tau$.

In order to validate the scaling results, we have performed molecular dynamics simulations of chains stretched around a disk (see Appendix \ref{app_sims} for details). Namely, we have considered a situation in which the ends of an ideal chain are fixed at $S=\pi R$. According to \eq{eq:Delta_Curved}, the theoretical predictions for that case are:
\begin{equation}
\Delta = \begin{cases} N^{2/3}R^{-1/3} b^{4/3} & \mbox{``stretched'', $\Delta < R$} \medskip \\ N^{1/2} b & \mbox{``unperturbed'', $\Delta > R$ } \end{cases}
\label{eq:spir}
\end{equation}
As we see from Fig. \ref{fig:pol-f03}(\textit{b}) and (\textit{c}), the span of fluctuations in both sets of simulations agrees very well with the predictions \eq{eq:spir}. We further compute the distribution of the scaled excursions of the stretched chain at various $N$, see Fig. \ref{fig:pol-f03}(\textit{a}). As one can infer from this plot, the distributions remarkably collapse onto the squared Airy function, which also describes the one-point distribution of fluctuations in the Ferrari-Spohn process, i.e., constrained (1+1)D random paths above a semicircle \cite{spohn_ferrari,valov_fixman,Baruch1}.
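For reference, the normalized squared-Airy density used for the fit in Fig. \ref{fig:pol-f03}(\textit{a}) can be evaluated as follows (a sketch assuming the fitted form $\mathrm{Ai}^2(a h + b)$, with the support taken to start where this form first vanishes):
\begin{verbatim}
import numpy as np
from scipy.special import airy, ai_zeros
from scipy.integrate import quad

a_fit, b_fit = 0.7, -0.8            # fit parameters of Fig. 3(a)
Ai = lambda x: airy(x)[0]
# The density vanishes where Ai^2(a h + b) first hits zero:
h_wall = (ai_zeros(1)[0][0] - b_fit) / a_fit
norm, _ = quad(lambda h: Ai(a_fit * h + b_fit)**2, h_wall, np.inf)

def pdf(h):
    # Normalized squared-Airy density of the scaled excursions h.
    h = np.asarray(h, dtype=float)
    return np.where(h > h_wall, Ai(a_fit * h + b_fit)**2 / norm, 0.0)

h = np.linspace(-2, 4, 300)
p = pdf(h)   # overlay on the histogram of h = (r - Delta)/sigma(r)
\end{verbatim}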
\section{Discussion}

One of our most striking results is that, in the curved regime, the dependence of the typical fluctuations on the disc radius $R$ can be written as $\Delta = R^{\beta}\, f(S,N,b,\nu)$ with $\beta=1/3$, the 1D KPZ \textit{growth} exponent. Thus, it is tempting to look for a mapping between our problem and KPZ and to interpret $R$ as time, $t$, in the associated stochastic growth problem. To see how this works, let us set $S = N^{\gamma} b \gg N^{\nu} b$, i.e., consider $\nu < \gamma < 1$. As \eq{eq:Delta_Curved} suggests, in the limit $\gamma \to 1$ the typical fluctuations $\Delta$ are controlled by $R$ only,
\begin{equation}
\Delta = b(R/b)^{\beta},
\end{equation}
for $R < R^*$, where $R^* = b(S/b)^z$ is the crossover radius below which a polymer with $N=(S/b)^{1/\gamma}$ monomers experiences the curvature of the disk, and $z$ reads
\begin{equation}
z(\gamma, \nu)= \frac{3\gamma-2\nu \gamma-\nu}{2\gamma(1-\nu)} \to 3/2, \quad \gamma \to 1.
\label{rstar}
\end{equation}
The crossover \eq{rstar}, described by the 1D KPZ \textit{dynamic} exponent $z=3/2$, corresponds to the boundary between the flat (II) and curved (III) regimes in \fig{fig:pol-f02}(\textit{b}). In the flat regime (II), $R>R^{\ast}$, the typical fluctuations do not depend on $R$ and are described by the stretching $S$ only, $\Delta = b(S/b)^{\alpha}$, where $\alpha$ reads
\begin{equation}
\alpha(\gamma, \nu) = 1 - \frac{\gamma-\nu}{2\gamma(1-\nu)} \to \frac{1}{2}, \quad \gamma \to 1,
\end{equation}
yielding the 1D KPZ \textit{roughness} exponent $\alpha=1/2$. In Fig. \ref{fig:pol-f02}(c) we demonstrate the dependence of the exponents $z$ and $\alpha$ on $\nu$ at different values of $\gamma$. As an intrinsic property of the ``full stretching'' limit, the curves $z(\nu)$ and $\alpha(\nu)$ become flat as $\gamma$ increases, approaching their respective 1D KPZ values.

Importantly, the implications of the $\gamma \to 1$ limit above can be realized for any other $\gamma$, provided the chain is renormalized to Pincus blobs. Indeed, under the change $N \to N/g$, $b \to \xi$, a two-dimensional walk becomes effectively (1+1)D and, therefore, naturally inherits all the scalings of the ``full stretching'' limit:
\begin{equation}
\begin{cases} \Delta_{\mathrm{curved}} = \xi (R/\xi)^{\beta}, \quad S>S^* \medskip \\ S^* = \xi (R/\xi)^{1/z} \medskip \\ \Delta_{\mathrm{flat}} = \xi (S/\xi)^{\alpha}, \quad S<S^* \end{cases}
\label{eq:renorm}
\end{equation}
where $\xi = \xi(N,S)$ plays the role of a new, coarse-grained monomer. Note that \eq{eq:renorm} holds for any fractal dimension of the polymer: upon stretching, the correlations induced by fractality are destroyed at scales larger than the Pincus blob size $\xi$. This effect is already observed in classical stretching in the flat regime (II). However, at scales larger than $S^*$, universal curvature-induced correlations develop, featuring the KPZ exponent $\beta=1/3$. From the representation \eq{eq:renorm} it is evident that, after proper renormalization, \textit{any} fractal walk in two dimensions above the disk can be self-consistently described by the set of KPZ exponents. The curvature-induced correlation length of the two-dimensional path, $S^* \sim R^{2/3}$, has the physical meaning of the elliptic blob, i.e., the scale at which the walk stays effectively flat. The flat regime of the chain is characterized by insufficiently strong curvature of the disk and, in turn, corresponds to large time scales of KPZ growth in a finite system.
However, as our simulations suggest, the distribution of typical fluctuations in the polymer problem is given by the squared Airy function, which is different from the Tracy-Widom distribution of the KPZ process (though the tails, $\sim \exp(-ch^{3/2})$, are equivalent; see the inset in Fig. \ref{fig:pol-f03}(\textit{a})). In fact, this is a well-known consequence of the impermeability of the boundary, which plays the role of a ``mean field'' for a more complex system of many non-intersecting (``vicious'') (1+1)D Brownian walks, the top line of which is known to belong to the KPZ universality class (see the flowchart in Fig.~A3 and further discussion in Appendix \ref{app_blabla}). Replacing all such walks (except the top one) with the circular boundary, we arrive at the Ferrari-Spohn model, for which the squared Airy behaviour of the one-point distribution has been established. Therefore, we conjecture that the (1+1)D representation \eq{eq:renorm} of two-dimensional stretched polymers belongs to the Ferrari-Spohn universality class.

Another interesting connection of our problem is revealed by looking at the free energy (\ref{eq:free_energy_circle}) in the specific case when $\nu = 1/2$ and the radius has the specific value $R = S^2/b N$ (indicated by a dashed blue line in Fig.~A2). Along this line, the free energy reads $\frac{F_{\mathrm{circ}}}{k_BT} \sim \frac{\Delta}{b} + \frac{b^2 N}{\Delta^2}$, which can be interpreted by noticing that $W(N) = \max_{\Delta} \exp \left( -\frac{\Delta}{b} - \frac{b^2 N}{\Delta^2} \right)$ is the probability for a random walker with diffusivity $b^2/\pi^2$ to survive during time $N$ on a line with randomly (Poisson) positioned traps of density $1/b$. This is the classical Balagurov-Vaks problem \cite{balagurov}, and its solution, $W(N) \sim e^{ - \mathrm{const} \, N^{1/3}}$, is controlled by the optimal interval between the traps, $\Delta$. In Appendix \ref{app_bv} we develop this connection in greater detail, along with a review of several relations to other known problems and models in statistical physics.

\acknowledgements
The authors thank L. Mirny, M. Tamm, A. Gorsky and A. Valov for illuminating discussions. The work of KP is supported by the Russian Science Foundation (Grant No. 21-73-00176). AYG acknowledges the Aspen Center for Physics, where part of this work was written with the support of the National Science Foundation grant number PHY-1607611. The authors thank MirnyLab for kindly sharing the resources for computer simulations.

\renewcommand{\theequation}{A-\arabic{equation}}
\renewcommand{\thefigure}{A\arabic{figure}}
\renewcommand{\thesection}{A\arabic{section}}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{section}{0}

\section*{APPENDIX}

In this Appendix we first generalize our results on stretching a polymer around a circular disc and consider stretching it around an ellipse or another convex barrier. We then re-formulate our problem and establish its connection to the problem of random walks in a space with Poisson-distributed traps. Further, we speculate about other connections of our problem across fields. Finally, we describe the technical details of the polymer simulations used in our work.

\section{Stretching of a polymer around an elliptic (or another convex) barrier}\label{app_curved}

Let us return to Eq. (4) in the main text; we offer here a slightly different view of it. Let us start from a scaling derivation of Pincus blobs. Clearly, the quantity $\xi$ is a correlation length.
Given that there is only one macroscopic length scale for an unrestricted coil, $R_F = b N^{\nu}$, the correlation length in the case when the two ends are kept a distance $S$ apart must obey the scaling $\xi = R_F \phi (S/R_F)$, where the behavior of the scaling function $\phi(x)$ is as follows: $\phi(x) \sim 1$ when $x \ll 1$, while $\phi(x)\sim x^{\mu}$ when $x \gg 1$, with some critical exponent $\mu$. The latter must be chosen such that for $S \to b N$ the blob size is reduced to the (Kuhn) monomer size $b$. The following expression, corresponding to $\mu = -\nu / (1-\nu)$, provides the requested behavior:
\begin{equation}
\xi = R_F \left( \frac{R_F}{S} \right)^{\frac{\nu}{1-\nu}}
\label{eq:xi}
\end{equation}
If the chain is stretched by a force rather than by fixing the end-to-end distance, then $\xi = k_B T/f$, so $\xi$ can be viewed as a proxy for the stretching force. Now, we return to equation (4) of the main text and re-write it as follows:
\begin{equation}
\begin{split}
\frac{F_{\mathrm{circ}}}{k_B T} & \sim \left(\frac{S}{b N^{\nu}}\; \frac{R+\Delta}{R} \right)^{\frac{1}{1-\nu}} + \frac{b^2 N^{2\nu}}{\Delta^2} \left(\frac{b N^{\nu}}{S} \right)^{\frac{2 \nu -1}{1-\nu}} \\ & \sim \left( \frac{S}{b N^{\nu}} \right)^{\frac{1}{1-\nu}} \frac{\Delta}{R} + \frac{b^2 N^{2\nu}}{\Delta^2} \left(\frac{b N^{\nu}}{S} \right)^{\frac{2\nu-1}{1-\nu}} \\ & \sim \frac{S}{\xi} \left(\frac{\Delta}{R} + \frac{\xi^2}{\Delta^2}\right) \ .
\label{eq:free_energy_circle_1}
\end{split}
\end{equation}
In the last transformation, we expressed the free energy in terms of the blob size $\xi$. Minimization of this free energy with respect to $\Delta$ yields the result
\begin{equation}
\Delta \sim \left(R \xi^2 \right)^{1/3}
\label{eq:xi2}
\end{equation}
Of course, this result is entirely equivalent to our previous formula (5) in the main text for the curved surface. A quite remarkable fact is that it does not involve $\nu$ at all: in terms of $\xi$ (or the stretching force), there is no dependence on $\nu$.

The result \eq{eq:xi2} allows us to consider stretching a polymer around a bumpy surface whose curvature changes from place to place, for instance around an ellipse or another convex curve with slowly changing curvature, as shown in \fig{fig:Bumpy_Surface}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{polymnew-f01-suppl.pdf}
\caption{Polymer chain stretched around an ellipse.}
\label{fig:Bumpy_Surface}
\end{figure}
Since the tension force is the same everywhere along the polymer, so is the blob size $\xi$. Therefore, if the curvature radius is different in different places (while changing slowly, over length scales much larger than $\xi$), the factor $S/\xi$ in \eq{eq:free_energy_circle_1}, which signals additivity of the free energy on the blob level, can be replaced by an integral along $S$ with ``density'' $1/\xi$:
\begin{equation}
\frac{F}{k_B T} = \int_{0}^{S} \frac{dx}{\xi} \left[ \frac{\Delta(x)}{R(x)} + \frac{\xi^2 }{\Delta^2(x)}\right] \ ,
\label{eq:free_energy_integral}
\end{equation}
providing the result
\begin{equation}
\Delta(x) \sim \left(R(x) \xi^2 \right)^{1/3} \ .
\label{eq:curv_app}
\end{equation}
These results apply, of course, only to the case of an everywhere convex impermeable boundary, because if some parts are concave, the stretched polymer will take a straight shortcut.
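As an illustration of \eq{eq:curv_app}, the sketch below (ours) evaluates $\Delta(\tau)$ along an ellipse with semi-axes $A \ge B$, using the standard expression for its local curvature radius:
\begin{verbatim}
import numpy as np

def local_span(t, A, B, xi):
    # Delta = (R(t) xi^2)^(1/3) along an ellipse with semi-axes A, B;
    # R(t) is the local curvature radius, xi the uniform Pincus blob size.
    R = (A**2 * np.sin(t)**2 + B**2 * np.cos(t)**2) ** 1.5 / (A * B)
    return (R * xi**2) ** (1.0 / 3.0)

t = np.linspace(0.0, 2.0 * np.pi, 400)
delta = local_span(t, A=20.0, B=10.0, xi=1.0)
# The spread is largest at the flat sides (R = A^2/B) and smallest at
# the sharp ends (R = B^2/A); xi << R is assumed everywhere.
\end{verbatim}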
\section{Stretching free energy minimization from an ``optimal fluctuation'' perspective}\label{app_bv}

The purpose of this section is to establish the connection between our polymer stretching problem and the classical Balagurov-Vaks (BV) problem of random walks on a line with randomly distributed traps \cite{balagurov} (see also the later, more detailed treatment by Donsker and Varadhan \cite{donsker}). To see the connection with stretched polymers, let us return again to formula (4) in the main text and re-write it by assuming, as in the main text, $S = b N^{\gamma}$ with $\gamma < 1$. The power $\gamma$ can be viewed as a proxy for the distance $S$, characterizing the degree of stretching. Furthermore, we can also write $S = R \theta$, where $\theta$ is the corresponding angle; $\theta<2\pi$ (or $\theta > 2\pi$) corresponds to less than one (or more than one) full turn around the cylinder, and in the latter case $\theta / 2 \pi$ is the winding number. In terms of $\gamma$ and $\theta$, the two terms of the free energy read
\begin{equation}
\begin{split}
\frac{F_{\mathrm{circ}}}{k_B T} & \sim \frac{\Delta}{R} N^{\frac{\gamma - \nu}{1-\nu}} + \frac{b^2 }{\Delta^2} N^{1+\frac{(1-\gamma )(2\nu-1)}{1-\nu}} \\ & = N^{\frac{(1-\gamma )(2\nu-1)}{1-\nu}} \left[ \theta N^{-\frac{(1-\gamma)(3 \nu -1 )}{1-\nu}} \frac{\Delta}{b} + \frac{b^2 }{\Delta^2} N \right] \ .
\label{eq:free_energy_circle_2}
\end{split}
\end{equation}
This result has a transparent connection with the BV problem in two cases. The first is when the chain is strongly stretched, $\gamma \to 1^{-}$, such that $1 - \gamma \ll 1/\ln N$. In that case,
\begin{equation}
\frac{F_{\mathrm{circ}}}{k_B T} \sim \theta \frac{\Delta}{b} + \frac{b^2 }{\Delta^2} N
\label{eq:free_energy_circle_3}
\end{equation}
for arbitrary $\nu$. The second is when $\nu = 1/2$ and $\theta = N^{1-\gamma}$; in that case,
\begin{equation}
\frac{F_{\mathrm{circ}}}{k_B T} \sim \frac{\Delta}{b} + \frac{b^2 }{\Delta^2} N \ .
\label{eq:free_energy_circle_4}
\end{equation}
In this second case, the mapping to Balagurov-Vaks is realized along the line $R = b N^{2 \gamma -1}$, which can also be presented as
\begin{equation}
R/R_0=\frac{1}{\sqrt{N}} \left(S/R_0\right)^2.
\label{eq:bvline}
\end{equation}
The behavior \eq{eq:bvline} is illustrated by the dashed blue line in the $R$--$S$ diagram, \fig{fig:diagram_bv}. Interestingly, the slope of this line coincides with the slope of the boundary between the flat (II) and curved (III) regimes in the particular case $\nu=1/2$. However, note that the coefficient in \eq{eq:bvline} is $N$-dependent. Therefore, the mapping to BV in the second case \eq{eq:free_energy_circle_4} can be realized only for a particular value of $N$, given a pair of values $(R/R_0, S/R_0)$ on this diagram. While in the first case \eq{eq:free_energy_circle_3} the BV mapping is realized in the whole area of the curved regime III, in the second case \eq{eq:free_energy_circle_4} it is not. Along the BV lines of constant $N$ in \fig{fig:diagram_bv}, the stretching parameter $\gamma$ changes, and the extremities of these lines provide the respective bounds on $\gamma$. As \fig{fig:diagram_bv} suggests, the stretching must be strong enough, $\gamma > 2/3$; otherwise, one enters regime IV of weak fluctuations. On the other hand, it is evident that as soon as a BV polymer, instead of wrapping around the cylinder many times, is forced to make only a single turn, $R/R_0 \to S/R_0$, the stretching attains its asymptotic limit, $\gamma \to 1$.
This rhymes well with the behavior of the winding number in the second case, $\theta = N^{1-\gamma} \gg 1$ for $\gamma<1$. Thus, the region of less than one turn, i.e., between the dashed black line and the II--III boundary, is forbidden for BV polymers, unless they are fully stretched (the first case).
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{polymnew-f02-suppl.pdf}
\caption{The same diagram as Fig.~2(b) in the main text, demonstrating the place of the Balagurov-Vaks problem (dashed blue line) in the context of 2D stretched polymer chains. The slopes between the regimes are computed for the particular value $\nu=1/2$. The arrows correspond to the two values of the stretching parameter, $\gamma=2/3$ and $\gamma=1$, between which the mapping to Balagurov-Vaks can be realized for any $N$.}
\label{fig:diagram_bv}
\end{figure}

Let us recall the Balagurov-Vaks setting. Consider an auxiliary 1D problem of random walks on a line with Poisson-distributed absorbing traps. Let $n_{\mathrm{tr}}$ be the average density of traps on the line. Following Balagurov and Vaks \cite{balagurov}, we are interested in the probability $W(N)$ for the walker to survive during the ``time'' $N$ (assuming the ``diffusivity'' is equal to $b^2/\pi^2$), i.e., the probability that until time $N$ the walker does not encounter any trap. The probability to have an interval $\Delta$ between nearest neighboring Poisson-distributed traps is equal to $\exp(-n_{\mathrm{tr}}\Delta)$. On the other hand, the probability to survive for a ``long time'' $N \gg \Delta^2/b^2$ between absorbing (Dirichlet) boundary conditions at both ends of the interval $\Delta$ is estimated as $\exp(-b^2 N/\Delta^2)$. The total survival probability is controlled by Lifshitz's ``optimal fluctuation'' \cite{Lifshitz_Tails}, i.e., by finding the interval $\Delta$ that maximizes the product of the two above-mentioned factors:
\begin{equation}
W(N) \sim \max_{\Delta} \left[e^{-n_{\mathrm{tr}} \Delta - b^2 N/\Delta^2}\right] \ .
\label{eq:min}
\end{equation}
The connection with \eq{eq:free_energy_circle_3} is now obvious, with $\theta /b$ playing the role of the trap density, $n_{\mathrm{tr}}=\theta/b$. Clearly, \eq{eq:free_energy_circle_4} (which is restricted to $\nu = 1/2$ and the special value of $R$) corresponds to the trap density $n_{\mathrm{tr}}=1/b$. Note that the derivation of the BV survival probability relies on the assumption $Nb^2 \gg \Delta^2$, i.e., that the walk between neighboring traps is constrained. For $\nu=1/2$ this is equivalent to $R/R_0 \ll (S/R_0)^2$ in the polymer problem, which forbids the flat geometry. As can be seen from \fig{fig:diagram_bv}, this condition is well satisfied. Maximization of the expression \eq{eq:min} yields $W(N) \sim \exp \left(-\mathrm{const} \, b^{2/3} n_{\mathrm{tr}}^{2/3} N^{1/3} \right)$, which is exactly the Balagurov-Vaks answer \cite{balagurov} for the 1D survival probability of an unbiased random walk of duration $N$ in a Poissonian array of traps. Due to this analogy, we can call the negative logarithm of the survival probability the ``trap free energy''\footnote{To be specific, we stick to the first case of strong stretching, $\gamma \to 1$, \eq{eq:free_energy_circle_3}.} (dropping from now on the $k_BT$ factor): $-\ln W(N) = F_{\mathrm{trap}} \sim \theta^{2/3} N^{1/3}$. The minimal value of the polymer free energy is given by the same formula, $F_{\mathrm{circ}} \sim \theta^{2/3}N^{1/3}$.
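The optimal-fluctuation maximization in \eq{eq:min} is easily checked symbolically; a short sketch using sympy:
\begin{verbatim}
import sympy as sp

Delta, n_tr, b, N = sp.symbols('Delta n_tr b N', positive=True)
F = n_tr*Delta + b**2*N/Delta**2   # minus the exponent in Eq. (A-7)
Dstar = (2*b**2*N/n_tr) ** sp.Rational(1, 3)   # optimal interval
assert sp.simplify(sp.diff(F, Delta).subs(Delta, Dstar)) == 0
print(sp.simplify(F.subs(Delta, Dstar)))
# proportional to b**(2/3) * n_tr**(2/3) * N**(1/3), as stated above
\end{verbatim}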
Interestingly, a weight equivalent to \eq{eq:min} was maximized in \cite{Muthukumar:2018} for the computation of the correlation function of a polymer chain confined in a gel matrix. In that case, the role of the linear term was played by the confinement free energy inside a mesh (generating the exponential distribution of the chain segment lengths), while the quadratic term corresponded to the Rouse relaxation time of each chain segment within the mesh.

In both the polymer and the BV problem there is, in general, also a leading extensive term, proportional to $N$. In the BV problem it is due to a constant bias, $c$, superimposed on the symmetric random walk; in the polymer problem it is a constant energy per monomer (e.g., a bond energy). In both cases, therefore,
\begin{equation}
\begin{split}
F_{\mathrm{trap}} & \sim c N + \left( b n_{\mathrm{tr}} \right)^{2/3} N^{1/3} \ \ \mathrm{and} \\ F_{\mathrm{circ}} & \sim c N + \theta^{2/3} N^{1/3} \ ,
\end{split}
\label{eq:identical_free_energies}
\end{equation}
i.e., the free energies are given by identical expressions, albeit with different physical interpretations of the parameters. The Legendre transform from $N$ to a conjugate variable, $\lambda$, realized via the inverse Laplace transform of the survival probability $W(N) = \exp \left( - F_{\mathrm{trap}} \right)$, or of the partition sum $\exp \left( - F_{\mathrm{circ}} \right)$ for a polymer, gives the spectral density $\rho(\lambda)$ (see \cite{nieuwenhuizen} for more detail):
\begin{equation}
\begin{split}
\rho(\lambda) & \propto \frac{1}{2\pi i}\int\limits_{\varepsilon-i\infty}^{\varepsilon+i\infty} e^{-c N - \theta^{2/3} N^{1/3}} \, e^{N\lambda} dN \\ & \propto \exp \left[ -\theta /\sqrt{c - \lambda} \right] \ .
\label{eq:35}
\end{split}
\end{equation}

\section{Polymer stretching above a disc in a broader context}\label{app_blabla}

Having established the connection of our polymer problem with that of random walks between traps, we now turn to further connections with a range of other problems and models. To facilitate the discussion, we outline it in the flowchart of mutually related ideas presented in Fig. \ref{fig:flowchart}.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{polymnew-f03-suppl.pdf}
\caption{Flowchart of logical connections: the place of our ``2D polymer stretching above a curved boundary'' problem in the context of other models and systems in statistical physics. \textbf{Central column} -- the 2D polymer (\textit{top}) and the (1+1)D polymer (\textit{bottom}) are equivalent in the strong stretching regime, with free energy $F_{\mathrm{circ}} \simeq c N + \theta^{2/3} N^{1/3}$. \textbf{Right column} -- the polymer problem in the proper limit maps onto biased Brownian motion in an array of Poisson-distributed traps (\textit{top}), or, equivalently, is related to the spectrum of an off-diagonal random Bernoulli matrix (\textit{bottom}). \textbf{Left column} -- the curved polymer stretching problem is a mean-field approximation for the top line in the system of (1+1)D vicious (mutually non-intersecting) walks (\textit{center}), which is in turn related both to a directed polymer in Gaussian disorder (\textit{top}) and to the maximal eigenvalue statistics in the spectrum of random matrices (\textit{bottom}). The common motif is the $N^{1/3}$ scaling of the subleading correction term that controls the relevant physics in all cases.}
\label{fig:flowchart}
\end{figure*}
The central rectangle in Fig.
\ref{fig:flowchart} shows our problem and its limiting regime $\gamma\to 1^{-}$ of strong stretching (as a reminder, the stretching of a polymer is characterized by the curvilinear end-to-end distance, which we write in terms of $\gamma$ as $S = b N^{\gamma}$). The minimal value of the polymer free energy, as discussed before, is given by $F_{\mathrm{circ}} \sim c N + \theta^{2/3} N^{1/3}$, where $\theta$ is the winding number, related to the radius of the void, $R = S/ \theta$. The sublinear-in-$N$ term of the free energy represents the curvature-induced finite-size correction. The right rectangle in the same Fig. \ref{fig:flowchart} depicts the group of problems related to the BV model of a 1D random walk in an array of Poisson-distributed traps, as reviewed in the previous section. In particular, the bottom panel of the right rectangle schematically depicts the (biased) BV model \cite{balagurov}. There, we show pictorially a set of randomly positioned traps (thick lines parallel to the time axis). Within each interval between traps, the walker moves randomly under some constant bias $c$ until it hits one of the boundaries for the first time. The connection to our polymer problem is highlighted by the ``free energy'' expression (\ref{eq:identical_free_energies}), in which the trap density is related to the winding angle $\theta$ of the polymer. In the upper panel on the right-hand side we have drawn a typical tridiagonal random matrix with Bernoulli disorder. Its connection with the BV model and, therefore, its relation to our polymer-around-a-cylinder problem can be understood by the following simple calculation. Let $\rho(\lambda)$ be the spectral density of an ensemble of large tridiagonal symmetric matrices, $A_N$, with a bimodal (Bernoulli) distribution of sub-diagonal matrix elements $a_{j,j\pm 1}\in\{0,1\}$, as shown below: \begin{equation} A_N = \left(\begin{array}{ccccc} 0 & \varepsilon_1 & 0 & \cdots & 0 \\ \varepsilon_1 & 0 & \varepsilon_2 & & \vdots \\ 0 & \varepsilon_2 & 0 & \ddots & \\ \vdots & & \ddots & \ddots & \varepsilon_{N-1} \\ 0 & \cdots & & \varepsilon_{N-1} & 0 \end{array} \right) \label{e:04c} \end{equation} where \begin{equation} \varepsilon_x=\left\{\begin{array}{ll} 1 & \mbox{with probability $p$} \medskip \\ 0 & \mbox{with probability $1-p$} \end{array} \right. \label{e:04d} \end{equation} At each $\varepsilon_x=0$ the matrix $A_N$ splits into regular (gapless) tridiagonal ``cages'' of some random size $D$; each cage can be viewed as the transition matrix of a discrete random walk on an interval of size $D$. The probability to find such a cage is $Q(D) = p^D$. The spectral density $\rho(\lambda)$ of the ensemble of matrices $A_N$ has been exhaustively analyzed in \cite{krapiv,polov}, and the tail of $\rho(\lambda)$ near the spectral edge $\lambda\to\lambda_{\max}=2$ reads: \begin{equation} \rho(\lambda)\propto \exp \left[- \frac{\pi |\ln p|}{\sqrt{|2 - \lambda|}} \right] \label{e:05} \end{equation} Obviously, $\rho(\lambda)$ in \eq{e:05} is the same spectral density as in \eq{eq:35} for a properly adjusted drift $c$ and trap density $n_{\mathrm{tr}}$. Thus, the close similarity between the central and right rectangles of the flowchart in Fig. \ref{fig:flowchart} justifies our claim that the nontrivial stretched exponent $1/3$, appearing for a random walk or a stretched polymer near a curved boundary, points to an intimate connection with the stretched exponent for the survival probability of the (1+1)D trapping problem in Poissonian disorder.
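The Lifshitz tail in \eq{e:05} is easy to probe numerically. The following minimal Python sketch (our own illustration, not code from \cite{krapiv,polov}; the matrix size and sample count are arbitrary choices) samples the ensemble \eq{e:04c}--\eq{e:04d} and collects the eigenvalues, whose histogram near the spectral edge $\lambda \to 2$ should decay as $\exp[-\pi |\ln p|/\sqrt{2-\lambda}]$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_spectrum(n=500, p=0.7, n_samples=100):
    """Eigenvalues of symmetric tridiagonal matrices with zero diagonal
    and Bernoulli(p) off-diagonal entries eps_x in {0, 1}."""
    eigs = []
    for _ in range(n_samples):
        eps = (rng.random(n - 1) < p).astype(float)
        a = np.diag(eps, 1)              # superdiagonal
        eigs.append(np.linalg.eigvalsh(a + a.T))
    return np.concatenate(eigs)

lam = sample_spectrum()
hist, edges = np.histogram(lam, bins=300, range=(-2.05, 2.05), density=True)
print("spectral edge reached:", lam.max())  # approaches 2 for large cages
\end{verbatim}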
The left rectangle highlights the known relation between the ground-state free energy, $F_{\mathrm{disord}}$, of a (1+1)D directed polymer in quenched Gaussian disorder \cite{dotsenko} (upper panel) and the statistics of the top line in an ensemble of (1+1)D ``vicious'' random walks \cite{schehr08} (central panel). Let us note that the latter problem also has an interpretation (after proper rescaling by $\sqrt{N}$) in terms of the largest eigenvalue $\lambda_{\max}$ of the Gaussian ensemble of random matrices. Since the same scaling (up to numerical factors) is valid for both the Gaussian Orthogonal (GOE) and Gaussian Unitary (GUE) ensembles, we do not specify here which particular ensemble is considered. At the spectral edge, $\lambda_{\max}$ has finite-size corrections in $N$ ($N\gg 1$): $\lambda_{\max} \sim 2\sqrt{N} +\chi N^{-1/6}$, where $\chi$ is $N$-independent and is distributed according to the Tracy-Widom law, which takes slightly different forms for GOE and GUE. The arrow ``Mean field'' designates the mean-field approximation of the many-body system of vicious walks, in which the influence of all trajectories lying below the topmost one is replaced by an impermeable circular boundary \cite{spohn_ferrari}. Note that the finite-size corrections to the free energies $F_{\mathrm{disord}}$ and $F_{\mathrm{upper}}$ have the same scaling as the one for $F_{\mathrm{circ}}$: in all cases the corresponding finite-size sublinear-in-$N$ terms are of order $N^{1/3}$. We should emphasize that the above-mentioned similarity, however attractive, is not complete. Although valid on average, it cannot be extended to distributions: the partition function of a polymer in quenched Gaussian disorder and the fluctuations of the topmost vicious walk follow the Tracy-Widom distribution, while the constrained random walk above the boundary is governed by the squared Airy function \cite{spohn_ferrari,valov_fixman}. Here we report the equivalent squared Airy PDF of fluctuations in the stretched polymer problem at various degrees of stretching (Fig. 3(a) in the main text). Apparently, this difference in distributions is a consequence of the fact that we have replaced the true many-body system (such as vicious walks) by its one-body mean-field analog. To summarize, the scaling analysis of a polymer strongly stretched around a cylinder reveals an unusual behavior of the free energy, $F_{\mathrm{circ}} \sim cN + \theta^{2/3}N^{1/3}$, that points to an array of deep connections with a variety of problems in equilibrium and non-equilibrium statistical physics and random matrix theory, ranging from KPZ to the Balagurov-Vaks problem, Lifshitz tails, Anderson localization, vicious random walks, etc. Although some of the arguments in the last section have an intentionally tentative, hypothetical, and sometimes even speculative character, it seems to us that together they paint an exciting picture.
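As a practical aside, the squared Airy density referenced above is straightforward to evaluate numerically. The minimal Python sketch below is our own illustration and assumes the Ferrari-Spohn normalization $p(x)=\mathrm{Ai}^2(x+a_1)/\mathrm{Ai}'(a_1)^2$ for $x\ge 0$, where $a_1 \approx -2.3381$ is the first zero of $\mathrm{Ai}$; the normalization follows from the identity $\int_{a_1}^{\infty}\mathrm{Ai}^2(t)\,dt=\mathrm{Ai}'(a_1)^2$, valid because $\mathrm{Ai}(a_1)=0$.
\begin{verbatim}
import numpy as np
from scipy.special import airy, ai_zeros

# ai_zeros(1) returns (zeros of Ai, zeros of Ai', Ai at the latter,
# Ai' at the former); we need a_1 and Ai'(a_1).
a, ap, ai_at_ap, aip_at_a = ai_zeros(1)
a1 = a[0]                      # first zero of Ai, ~ -2.3381
norm = aip_at_a[0] ** 2        # Ai'(a_1)^2

x = np.linspace(0.0, 8.0, 801)
pdf = airy(x + a1)[0] ** 2 / norm    # squared-Airy density
print("normalization check:", np.trapz(pdf, x))  # ~ 1
\end{verbatim}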
\section{Details of polymer simulations}\label{app_sims} Simulations of stretched trajectories are done using the polychrom module (available at https://github.com/open2c/polychrom), a wrapper around the open-source GPU-assisted molecular dynamics package OpenMM \cite{eastman10}. The simulated chain with phantom beads models an ideal Gaussian chain with fractal dimension $D_f=2$. The chain is equipped with harmonic bonds with the following energy \begin{equation} U_{\mathrm{bond}} = \frac{3}{2a^2} \sum_{i=1}^{N-1} \left(r_{i,i+1} - l_b\right)^2 \end{equation} where $a=0.06$ is the standard deviation of the monomer-to-monomer distance, $r_{i,i+1}=|\mathbf{r}_{i+1}-\mathbf{r}_{i}|$, and the equilibrium bond length is $l_b=1$. The cylindrical barrier for the chain is aligned along the $z$-axis, having infinite length and radius $R$ in the $x$-$y$ plane. To prevent the chain from entering the region bounded by the cylinder, the following soft repulsive potential of strength $k_{\mathrm{cyl}}=5$ is introduced when the chain crosses the disk boundary \begin{equation} U_{\mathrm{cyl}} = k_{\mathrm{cyl}} \sum_{i=1}^{N} {\cal{H}}\left[R - \sqrt{x_i^2+y_i^2}\right] \left(R - \sqrt{x_i^2+y_i^2}\right)^2 \end{equation} with ${\cal{H}}[.]$ being the Heaviside step function. In simulations, this potential has been further smoothed in the vicinity of the boundary by means of a small parameter inserted under the root. Also, to keep the chain ends a distance $S=\pi R$ apart, we additionally tether the end beads $\mathbf{r}_1, \mathbf{r}_{N}$ at two points on the diameter by springs of strength $k_{th}=100$, at a small distance $\delta=0.1 < \Delta$ from the disk surface. The chain of length $N$ is initialized in a random walk configuration and equilibrated for a Rouse time $\tau_R$ in the potentials above. The Rouse time is computed from a dynamics-based estimate of the microscopic Rouse time $\tau_0$, with $\tau_R=\tau_0 N^2$. This is done in separate short-time runs, in which the transition time $\tau_0$ from the ballistic to the Rouse behaviour of the mean-squared displacement of one monomer, $r_0^2(t)$, is computed.
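For reference, the two potentials above are straightforward to express in code. The following Python/numpy sketch (illustrative only; the production runs use polychrom/OpenMM force objects, and the smoothing parameter mentioned above is omitted) evaluates $U_{\mathrm{bond}}$ and $U_{\mathrm{cyl}}$ for a given chain conformation:
\begin{verbatim}
import numpy as np

def energies(r, R, a=0.06, l_b=1.0, k_cyl=5.0):
    """r: (N, 3) array of bead coordinates; R: cylinder radius.
    Returns (U_bond, U_cyl) as defined in the text."""
    bonds = np.linalg.norm(np.diff(r, axis=0), axis=1)   # r_{i,i+1}
    u_bond = 3.0 / (2.0 * a ** 2) * np.sum((bonds - l_b) ** 2)
    rho = np.hypot(r[:, 0], r[:, 1])                     # distance from z-axis
    inside = rho < R                                     # Heaviside factor
    u_cyl = k_cyl * np.sum((R - rho[inside]) ** 2)
    return u_bond, u_cyl
\end{verbatim}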
{ "timestamp": "2022-02-02T02:12:02", "yymm": "2202", "arxiv_id": "2202.00239", "language": "en", "url": "https://arxiv.org/abs/2202.00239" }
\section{Introduction} Semiconductor Hall sensors are the mainstream technology for linear magnetic-field sensing \cite{review_hirohata_2020}. Magnetoresistive (MR) sensing technologies, such as anisotropic MR, giant MR, and tunneling MR (TMR), are finding increased use due to the evolving requirements of automotive, internet-of-things (IoT), and biomedical applications \cite{review_zheng_2019}. High sensitivity, wide dynamic range, low nonlinearity, low power consumption, and low noise are all required to cover the needs of these emerging applications. As an example of power sensing, an IoT device needs to monitor its battery current from $<$100 nA in sleep mode to $1.0$ A in communication mode \cite{hertlein_2018,TI_CC2650MODA}. Some devices may need granular monitoring of critical subsystems such as the power harvester, the RF transceiver, \emph{etc}. Such applications require low-power miniature sensors that can measure over 7--8 orders of magnitude. Automotive applications also have multiple sensitivity requirements for angle, position, and power sensing. Therefore, there is a need for a linear magnetic sensor that can be designed for various dynamic ranges. The sensitivity of single MR elements is usually set by the material choice of the ferromagnetic sensing layer, which determines the magnetic anisotropy field \cite{fujiwara_2012_13,nakano_2017}. High-sensitivity sensors are made from low-anisotropy soft ferromagnetic layers, which form domains and domain walls. The creep and depinning avalanches of domain walls are a significant source of auto-correlated noise \cite{ledoussal_2009,ferrero_2017} and of increased coercivity. For obtaining a geometrically designed sensitivity, a suitable magnetic texture is the magnetic vortex \cite{behncke_2018}. Magnetic vortices are found in various magnetic microstructures, such as cross-tie domain walls \cite{metlov_2001,wiese_2007,mccord_2009}, confined domain walls \cite{meier_2007a}, and most importantly in micro-sized circular disks \cite{shinjo_2000} and rectangles \cite{vogel_2011,miyake_2013}. They are characterized as flux-closure curling magnetization patterns with an out-of-plane magnetization at the core position, and they form due to a minimization of magnetization surface charges against the exchange energy. Vortex-type magnetic sensors have attracted interest recently \cite{novosad_2010, raberg_2018}. Additional merits of using a vortex configuration are the possibility of lower noise due to topological protection \cite{suess_2018, he_2020} and vanishingly small hysteresis \cite{ostman_2014,he_2020}. The vortex sensitivity is determined by the saturation field $H_\mathrm{an}$, the field at which the vortex core reaches the edge of the disk. The ratio between free layer thickness and radius, $\beta = L_b/R_b$, is the geometric parameter for setting $H_\mathrm{an}$ \cite{guslienko_2001a}. For the devices we will consider, $\beta \ll 1$ and $H_\mathrm{an}$ is linearly proportional to $\beta$ \cite{guslienko_2002}. By increasing the sensor free layer diameter, the sensitivity can be increased. However, above a critical diameter, usually near 10 $\mu$m for soft magnetic layers \cite{he_2020}, the vortex state breaks into a multi-domain state. $H_\mathrm{an}$ is therefore limited to a lower bound of 80--100 Oe. The TMR effect in magnetic tunnel junction (MTJ) sensors results in a large resistance readout sensitivity.
The combination of the TMR effect with a soft ferromagnetic free layer is the best candidate for fabricating vortex-type sensors. However, for a typical TMR ratio of 160\%, the maximum sensitivity (TMR$/2H_\mathrm{an}$) of vortex MTJ sensors is capped at 1 \%/Oe. Therefore, another control parameter for vortex-type magnetic sensors is needed. In this work, we show a method for increasing the sensitivity by constraining the pinned layer diameter. Hence, the sensitivity can be designed separately from the design of the vortex free layer, while keeping the linearity and low hysteresis. \section{Fabrication and Methods} \begin{figure} \includegraphics[width=0.5\textwidth]{fig1_schem.pdf} \caption{(a) A schematic of the sensor geometry and coordinate definitions. The core position sweeps linearly through the area covered by the pinned layer, marked by a dashed line. (b) A microscope photograph of a representative element during microfabrication.} \label{fig:schem} \end{figure} We fabricated vortex MTJ sensor devices as depicted in Fig.~\ref{fig:schem}(a). The vortex is formed in a circular free layer of diameter $2R_b = 10$ $\mu$m. The pinned layer used for detection is on top, with a smaller diameter of $2 r_t = 2.0$--$9.5$ $\mu$m. As the vortex core moves linearly in response to a magnetic field, the average inplane magnetization in the area enclosed by $2 r_t$ is also linearly dependent on the applied field. Therefore, we can decrease the dynamic range and increase the sensitivity by reducing the area covered by $2 r_t$. The free layer requires soft magnetic properties, with a small magnetic anisotropy and small exchange stiffness, and we choose permalloy (NiFe) for this purpose. The magnetization in the pinned layer should be parallel to the sensing axis, but with a minimal stray field that does not affect the vortex response to the measured field. We used radio-frequency magnetron sputtering to deposit MTJ films similar in design to previous reports \cite{fujiwara_2012_13}: thermally-oxidized silicon substrate/Ta 5/Ru 10/Ta 5/Ni$_{80}$Fe$_{20}$ 100/Ru 0.85/Co$_{40}$Fe$_{40}$B$_{20}$ 3/MgO $2.0$/Co$_{40}$Fe$_{40}$B$_{20}$ 3/Ru 0.85/Co$_{75}$Fe$_{25}$ 3.4/IrMn 10/Ta 3/Ru 5, where the numbers are the nominal thicknesses in nanometers. We optimized the top synthetic-antiferromagnet (SAF) pinned layer for a vanishing stray magnetic field \cite{NoteSupp}. The TMR ratio is 140 \%, and the blanket-film magnetization curves are typical for such a stack \cite{NoteSupp,fujiwara_2012_13}. During deposition, a static magnetic field was applied in the inplane direction (defined as the $Y$ direction) using a permanent magnet on the holder stage, to set the induced magnetic anisotropy axis of the NiFe layer \cite{NoteSupp, katada_2000,chikazumi_1955}. We micro-fabricated MTJ vortex sensor elements using electron-beam (EB) lithography and Ar ion etching. During EB lithography exposures, we used in-field gold alignment markers for patterning alignment, achieving a misalignment error of less than 30 nm. First, we fabricated and etched the bottom disk with a fixed diameter $2R_b = 10$ $\mu\mathrm{m}$. Then, we used a second EB lithography step to define the top disk with a varying diameter $2 r_t = 2.0, 4.0, 5.0, 6.0, 8.0, 9.5$ $\mu\mathrm{m}$, and etched the top disk down to the MgO layer. We tracked the etching rate with \emph{in situ} secondary-ion mass spectrometry. The etching was at 30$^\circ$ off-normal incidence, for a sharp side profile.
After that, we insulated the pillars with SiO$_2$ and deposited contact electrodes. A microscope image of a representative device during a different fabrication run is shown in Fig.~\ref{fig:schem}(b). After microfabrication, we applied a two-step magnetic annealing process in a 10-kOe magnetic field for 1 hour, with directions shown in Fig.~\ref{fig:schem}(a) \cite{fujiwara_2012_13}. The first annealing was at $350^\circ\mathrm{C}$ to crystallize the CoFeB and obtain a high TMR ratio. The pinning field during the first annealing was in the same direction as the field applied during deposition ($Y$ direction) \cite{NoteSupp}. The second pin annealing was at a lower temperature of $300^\circ\mathrm{C}$ to rotate the pinned layer direction by $90^\circ$ and define the sensing axis ($X$ direction). This two-step process improves the response linearity while keeping the NiFe induced anisotropy \cite{fujiwara_2012_13}, which stabilizes vortex nucleation along the domain walls of reversal domains \cite{NoteSupp}. We simulated the static magnetization curves using the OOMMF micromagnetic simulator \cite{oommf}. We simulated a circular disk with a diameter of $2R_b = 800$ nm and a thickness of 80 nm. We calculated the average magnetization within a smaller diameter $2r_t$, to be equivalent to the TMR measurements using the smaller pinned top layer. We used the following parameters, which represent soft magnetic properties: a saturation magnetization of $M_s = 800$ emu/cm$^3$, an exchange stiffness constant of $A_\mathrm{ex} = 13$ pJ/m, zero crystalline magnetic anisotropy, and a discretization cell of $2\times 2 \times 40$ nm$^3$. The magnetostatic field of the vortex state arises from the side-surface magnetic charges and is the same as that of a simple magnetic dipole \cite{guslienko_2001a,vogel_2010}. An inplane applied field $H_x$ will result in a linear orthogonal movement of the vortex core and a linear change of the pillar average magnetization. The saturation field $H_\mathrm{an}$ is inversely proportional to the zero-field susceptibility $\chi_0$. The static magnetic properties depend on the normalized $H_x/H_\mathrm{an}$, regardless of the pillar diameter. Therefore, a quantitative comparison between our simulations and experiments can be made after normalization. \section{Results and Discussions} \begin{figure} \includegraphics[width=0.5\textwidth]{fig2_vsm_mr.pdf} \caption{(a) Magnetization-field loops of 10-$\mu$m circular disks arranged in an array. (b) The conductance-field loop of a single-disk MTJ. (c) Longitudinal and polar MOKE images at zero field, after etching down to the NiFe free layer, of the same sample as in (a). The white and black arrows indicate the sense of color.} \label{fig:vsm_mr} \end{figure} In Fig.~\ref{fig:vsm_mr}, we show the verification of the vortex formation. First, we microfabricated an array of $400 \times 400$ disks spanning an $8 \times 8$ mm$^2$ area, where each disk is 10 $\mu$m in diameter and the pitch is 40 $\mu$m. We measured the $M$-$H$ loops with a vibrating-sample magnetometer (VSM). The center-to-center disk separation is four times the diameter, and the inter-dot coupling can be neglected \cite{mejia-lopez_2006,vogel_2010}. Therefore, the measured magnetization represents an average over independent disks. In Fig.~\ref{fig:vsm_mr}(a), the $M$-$H$ curves show the typical hysteresis loops of vortex nucleation and linear displacement \cite{guslienko_2002}.
The $M$-$H$ loops measured along the directions of the first annealing ($Y$) and second annealing ($X$) have the same loop shape and slope near zero field. The circular symmetry makes the magnetization hysteresis loops independent of the $H$ direction. The smooth transition at the vortex-nucleation shoulder indicates that the nucleation field has a distribution between 40--60 Oe, due to variation in edge roughness. The saturation field is tightly distributed at $H_\mathrm{an}$ = 79 Oe, since it is determined by the magnetostatic energy, which is unaffected by edge roughness. In Fig.~\ref{fig:vsm_mr}(b), we show the dependence of the tunneling conductance ratio $\left(G(H)/G_\mathrm{AP} - 1 \right)$ on the applied field, using a single MTJ device prepared by the EB lithography process. The tunneling conductance is directly related to the average relative angle between the magnetizations \cite{slonczewski_1989,nakano_2017,nakano_2018,ogasawara_2019a}. The $G$-$H$ and $M$-$H$ loops match in shape, and the saturation fields are nearly equal: 83 Oe for the $G$-$H$ loop and 79 Oe for the $M$-$H$ loop. The reorientation fields of the SAF pinned and free layers occur at much higher fields [Fig.~\ref{fig:vsm_mr}(a,b) insets] and do not affect the vortex magnetization process. Due to the large thickness of the NiFe compared to the CoFeB layer, the Zeeman, exchange, and magnetostatic energies are dominated by the NiFe layer. Thus, the CoFeB free layer has a vortex configuration anti-parallel to that of the NiFe layer. For the same array as in Fig.~\ref{fig:vsm_mr}(a), we etched the sample to expose the NiFe surface. We imaged the vortex magnetic configuration with a magneto-optical Kerr effect (MOKE) microscope [Fig.~\ref{fig:vsm_mr}(c)]. Before the MOKE measurement, we applied a 10-kOe field along the $X$ and out-of-plane directions in an external electromagnet. The domain images were taken as the difference from a reference image taken at 50 Oe. The longitudinal MOKE image shows the vortex curling inplane magnetization configuration. The polar MOKE image shows an out-of-plane magnetization at the center of the disks, indicating a perpendicularly-polarized vortex core. The core polarization flips together with the vortex chirality. The fixed polarity-chirality product indicates the presence of a bulk-type Dzyaloshinskii-Moriya interaction \cite{im_2012}, which requires further investigation. The presence of a small induced anisotropy affects the initial curling at vortex nucleation, and we find that vortex formation is stable up to $2R_b = 30$ $\mu$m \cite{NoteSupp}. \begin{figure} \includegraphics[width=0.5\textwidth]{fig3_mx_rt_2.pdf} \caption{The effect of top disk size on sensitivity. (a) The experimental normalized magnetization $\Delta m_x$ major loops. The curves are vertically shifted for clarity. (b) The corresponding minor loops. (c) The dependence of the effective dynamic range $2 H_k$ and the sensitivity on $r_t$. (d) The simulated $\Delta m_x$ loops. (e) The simulated dynamic range and empirically-estimated sensitivity. } \label{fig:rt} \end{figure}
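Before turning to the readout results, we illustrate the simulation post-processing step described in the Methods: the TMR readout area is mimicked by averaging the simulated inplane magnetization within the smaller diameter $2r_t$. A minimal Python sketch of this step follows (our own outline; the magnetization maps themselves come from OOMMF, and the grid spacing and radius below are example values):
\begin{verbatim}
import numpy as np

def average_mx(mx, cell=2e-9, r_t=80e-9):
    """mx: 2D array of the x-magnetization over the disk plane,
    e.g., exported from OOMMF. Returns <m_x> within radius r_t
    around the disk center, mimicking the pinned-layer readout."""
    ny, nx = mx.shape
    y, x = np.mgrid[0:ny, 0:nx]
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    rr = np.hypot((x - cx) * cell, (y - cy) * cell)
    return mx[rr <= r_t].mean()
\end{verbatim}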
Figure \ref{fig:rt} shows the control of the sensitivity by changing the top disk radius $r_t$, which is the main result of this work. From the normalized conductance, we calculate the average normalized magnetization projected onto the pinned-layer direction ($\Delta m_x$) in the area covered by the pinned layer disk, as follows: \begin{equation} \Delta m_x = 2 \, \frac{G - G_\mathrm{AP}}{G_\mathrm{P} - G_\mathrm{AP}} - 1, \end{equation} where $G_\mathrm{AP}$ and $G_\mathrm{P}$ are the conductances in the anti-parallel and parallel saturated configurations, respectively. In Fig.~\ref{fig:rt}(a), we show the experimental major loops. As we reduce $r_t$, the $\Delta m_x$-$H_x$ slope increases. The linearity of $\Delta m_x$-$H_x$ is preserved in the minor loops [Fig.~\ref{fig:rt}(b)]. In Fig.~\ref{fig:rt}(c), we show the dependence of the effective dynamic range $2H_k$ on $r_t$, estimated from linear fittings to the major and minor loops. The smallest $2H_k$ is $31$ Oe at $r_t = 1.0$ $\mu$m, which is in agreement with a linear scaling from $2H_{k, \mathrm{max}} = 156$ Oe obtained at $r_t = R_b = 5.0$ $\mu$m. Correspondingly, the sensitivity, defined as $\mathrm{TMR}/2H_k$, increases 5 times, from 0.85 $\%/\mathrm{Oe}$ to 4.43 $\%/\mathrm{Oe}$. In the micromagnetic simulations, the dynamic range is $2 H_{k, \mathrm{max}} = 1800$ Oe, due to the smaller $R_b = 400$ nm. However, after normalizing $H_x$ by $H_{k, \mathrm{max}}$, the simulated $\Delta m_x$-$H$ curves show the same properties and the same increase of slope with decreasing $r_t$ as the experimental curves [Fig.~\ref{fig:rt}(d)]. For a quantitative comparison, we deduce the simulated sensitivity using the empirical values of the TMR ratio of 140 \% and $2H_{k, \mathrm{max}} = 156$ Oe. We find the simulated sensitivity to be 0.90 $\%/\mathrm{Oe}$ at $r_t/R_b = 1.0$, increasing to 5.0 $\%/\mathrm{Oe}$ at $r_t/R_b = 0.2$ [Fig.~\ref{fig:rt}(e)], in agreement with the experimental results. The decrease in $2 H_k$ follows a simple linear relation with $r_t$, namely $H_k / H_{k, \mathrm{max}} = r_t / R_b$, both in simulations and experiments. This linear scaling relation is due to the linear displacement of the vortex core by the applied field, until the core is located at $r_t$ when $H_x = H_k$. More improvement in sensitivity should be achievable by further reductions in $r_t$. However, as $r_t$ is reduced, the stray field from the pinned SAF at the center of the disk increases, even when the SAF magnetization is fully compensated at the blanket-film level \cite{devolder_2019}. This can be seen as a field offset in the $\Delta m_x$-$H_x$ curves, which starts to appear at $r_t = 2.0$ $\mu$m and becomes more pronounced at $r_t = 1.0$ $\mu$m. Further optimizations of the pinned SAF stack and the etching process are required to minimize the effect of the stray field. At $r_t/R_b \leq 0.6$, there are small dips near $H_x = \pm50$ Oe in Fig.~\ref{fig:rt}(a), and at $H_x/H_{k,\mathrm{max}} = 0.6$ in Fig.~\ref{fig:rt}(d). These are related to the curling in the pre-nucleation state \cite{NoteSupp}. However, they should not affect the sensor performance, as they are outside the dynamic range of the sensor.
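The linear scaling can be made concrete in a few lines of Python (a sketch using the numbers reported above, not part of our analysis code):
\begin{verbatim}
TMR = 140.0        # TMR ratio in percent
H2k_max = 156.0    # 2*H_k at r_t = R_b, in Oe
R_b = 5.0          # free-layer radius, in um

def sensitivity(r_t):
    """Sensitivity TMR/(2*H_k) in %/Oe, with 2*H_k scaling as r_t/R_b."""
    return TMR / (H2k_max * r_t / R_b)

print(sensitivity(5.0))   # ~0.90 %/Oe at r_t = R_b
print(sensitivity(1.0))   # ~4.5  %/Oe at r_t = 1.0 um
\end{verbatim}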
\begin{figure} \includegraphics[width=0.5\textwidth]{fig4_mx_shift.pdf} \caption{The effect of the shift of the pinned layer disk, showing the comparison between (a, b) the experimental results and (c, d) the simulation results, where the shifts are along (a, c) the $X$ direction and (b, d) the $Y$ direction.} \label{fig:shift} \end{figure} The shift of the top pinned layer disk is also important for controlling the sensor response. We measured and simulated the effect of shifts either parallel or transverse to the $H_x$ direction [Fig.~\ref{fig:shift}]. A shift in the $X$ direction does not have a significant effect on the sensor response [Figs.~\ref{fig:shift}(a,c)]. On the other hand, a shift in the $Y$ direction causes a drastic shift of the transfer curve center [Figs.~\ref{fig:shift}(b,d)]. The vortex core moves perpendicular to an applied field. Therefore, a shift of the pinned layer center ($C_Y$) in the $Y$ direction results in a shift of the sensor transfer curve center by $H_\mathrm{shift}/H_k = C_Y/R_b$ [the insets in Figs.~\ref{fig:shift}(a,b)]. We propose that a combination of four off-centered top pinned disks on a single vortex free layer would work as a multi-axis sensor. An independent readout of the TMR from each top disk can be used to find the location of the vortex core, and hence the inplane direction of the magnetic field. Such a magnetic vector sensor would be more practical to implement than the common method of using two separate sensor channels, which requires mechanical alignment of orthogonal sensing axes \cite{yamazaki_2011}. \section{Conclusions} We demonstrated the control and design of the sensitivity in vortex-type magnetic MTJ sensors. We used the pinned layer size as an effective method to increase the sensitivity. This is another degree of freedom in design, in addition to the free layer dimensions. We validated this approach experimentally in MTJ sensors, and with micromagnetic simulations of a single NiFe disk. In the current demonstration, by decreasing the pinned layer diameter from $2r_t = 9.5$ to $2$ $\mu$m, we could tune the effective dynamic range $2H_k$ from 156 to 31 Oe, while keeping the vortex free layer diameter fixed at $2R_b = 10$ $\mu$m. The simple linear displacement of the vortex core by the applied field makes the design of vortex sensors straightforward for applications; the sensor sensitivity becomes $\propto R_b^2 / r_t$. The combination of varying the vortex layer and pinned layer diameters covers 2--3 orders of magnitude in sensitivity design, which makes vortex MTJ sensors a candidate for a wide range of magnetic sensing applications. \section*{Acknowledgment} This work was supported by the Center for Science and Innovation in Spintronics (CSIS), Center for Spintronics Research Network (CSRN), Tohoku University, the S-Innovation program, Japan Science and Technology Agency (JST), and by JSPS KAKENHI Grant Number JP19K15429. \bibliographystyle{iopart-num} \section{Magnetic properties and TMR characteristics of the sensor films} We show the magnetic properties of the blanket film in Fig.~\ref{fig:film_vsm}(a). We optimized the thickness of the CoFe in the pinned SAF for zero remanent magnetization, to reduce the effect of stray fields on the vortex core in the free layer. The magnetization reversal process of the full stack is indicated by the colored arrows. Although the NiFe is rather thick, the coupling in the free layer between NiFe and CoFeB is antiferromagnetic through the Ru spacer, indicating the uniformity of the ultra-thin Ru layer. Fig.~\ref{fig:film_vsm}(b) shows the TMR characteristics of large pillars after the two-step field annealing process. The MTJ pillars have a 2:1 elliptical cross section, with a major axis of 48 $\mu$m in length.
After the first annealing process, we obtained a relatively high TMR ratio of 140--150 $\%$, and the transfer curve shows a switching character. After the second annealing step, the pinned layer is rotated orthogonal to the free layer easy axis, and a linearized transfer curve is obtained. \begin{figure}[hb] \begin{center} \includegraphics[width=0.8\textwidth]{figS1_film_vsm_mr.pdf} \caption{(a) Magnetic characteristics of the top pinned SAF (top), and MTJ films (bottom). (b) TMR characteristics after the first (top) and second (bottom) annealing steps.} \label{fig:film_vsm} \end{center} \end{figure} \newpage \section{Induced magnetic anisotropy in NiFe} NiFe exhibits an induced magnetic anisotropy due to Ni-Fe pair ordering \cite{sup_chikazumi_1955}. The pair ordering occurs during deposition in a magnetic field \cite{sup_katada_2000}. In Fig.~\ref{fig:sup_aniso}, we show the magnetization loops of blanket films of: thermally-oxidized silicon substrate/Ta 5/Ru 10/Ta 5/Ni$_{80}$Fe$_{20}$ 100/MgO $1.5$/Ta 1.0, where the numbers are the nominal thicknesses in nanometers. We measured the magnetization loops along the easy and hard axes in the as-deposited state, and after the first pin annealing with the same conditions as in the main text, \emph{i.e.}~$350^\circ\mathrm{C}$ along the easy axis of the induced anisotropy in NiFe ($Y$ direction). We obtain a saturation magnetization of 800 emu/cm$^3$, an anisotropy field of 4 Oe, and an anisotropy energy of $1.6\times 10^3$ erg/cm$^3$, similar to the literature values \cite{sup_katada_2000}. The anisotropy field is not affected by the pin annealing. However, the coercivity is reduced and the linearity is improved in the hard-axis loop. \begin{figure}[hb] \begin{center} \includegraphics[width=0.6\textwidth]{figS2_MH_asdepo_ann.pdf} \caption{The magnetization loops along the easy ($Y$) and hard ($X$) axes of NiFe films in the as-deposited state, and after the first annealing.} \label{fig:sup_aniso} \end{center} \end{figure} \newpage \section{Critical size for vortex stability} We show the effect of the induced anisotropy on the domain structure in Fig.~\ref{fig:sup_domain}. We fabricated circular disks with varying diameters from the films in Sec.~S2, after the first pin annealing. We used longitudinal MOKE to image the domain structure during magnetization loops with the field applied along the easy or hard axes. In large-diameter disks, a reversal domain forms at zero field [Fig.~\ref{fig:sup_domain}(a)]. The initial nucleation domains form parallel to the easy-axis direction. When the anisotropy axis is transverse to the applied field, multiple vortices are formed [indicated by black arrows in Fig.~\ref{fig:sup_domain}(a)]. At a critical diameter of 30 $\mu$m, the nucleation starts from a reversal domain in the easy-axis loop, whereas the vortex state nucleates in the hard-axis loop [indicated by an arrow in Fig.~\ref{fig:sup_domain}(b)]. Below that diameter, the vortex state is the stable reversal configuration, regardless of the field direction [Fig.~\ref{fig:sup_domain}(c)]. \begin{figure}[hb] \begin{center} \includegraphics[width=0.7\textwidth]{figS3_domain.pdf} \caption{Longitudinal MOKE domain images of NiFe disks during field scans along the easy and hard axes directions.
The disk diameters are: (a) 60 $\mu$m, (b) 30 $\mu$m, and (c) 24 $\mu$m.} \label{fig:sup_domain} \end{center} \end{figure} \newpage \section{Nucleation features in the TMR-$H$ curves} In Figs.~3(a,d) and 4(a--d) of the main text, there are dips that appear near $\pm$50 Oe for small $r_t/R_b$ ratios. They coincide with the nucleation field of the vortex state estimated from the VSM measurements of Fig.~2(a). We show the explanation from simulation results in Fig.~\ref{fig:sup_nuc}. Below the saturation field, and before the nucleation event, a curling domain appears [Fig.~\ref{fig:sup_nuc}(a)]. The magnetization vector lies flat along the edge at the poles of the disk to minimize the magnetostatic energy. This causes a rotation of the magnetization at the center of the disk towards a $\approx 90^\circ$ direction. After vortex nucleation, the magnetization in the center region returns to $0^\circ$ [Fig.~\ref{fig:sup_nuc}(b)]. If the area enclosed by the pinned layer is small [orange curve in Fig.~\ref{fig:sup_nuc}(c)], then before nucleation there is a dip in $\Delta m_x$. After nucleation, $\Delta m_x$ increases to 1, until the vortex position is close to the pinned layer edge. If the area enclosed by the pinned layer is large, then the average $\Delta m_x$ is smaller in the vortex state compared to the pre-nucleation state [blue curve in Fig.~\ref{fig:sup_nuc}(c)]. \begin{figure}[hb] \begin{center} \includegraphics[width=0.8\textwidth]{figS4_nucleation.pdf} \caption{Simulations of the domain state before and after the vortex nucleation. (a) Before vortex nucleation, and (b) after the vortex nucleation. In (a), we show a small area to be enclosed by the pinned layer, equivalent to a projection along the $X$ direction (blue arrow). (c) The $\Delta m_x$--$H_x$ loops for a small enclosed area ($r_t/R_b = 0.2$) and a large one ($r_t/R_b = 1.0$). The states in (a) and (b) are indicated on the curves. } \label{fig:sup_nuc} \end{center} \end{figure}
{ "timestamp": "2022-02-02T02:10:13", "yymm": "2202", "arxiv_id": "2202.00207", "language": "en", "url": "https://arxiv.org/abs/2202.00207" }
\section{Introduction} Network softwarization in 5G has allowed unprecedented flexibility in how cellular services are configured and delivered, moving from the traditional MVNO agreements and overlay networks of 4G to 5G-and-beyond deployments in which every function of the network can be virtualized and made dynamic. As previously seen in cloud computing, this rapid advance of software has encouraged a decoupling of hardware from software, to the extent that slower-moving hardware generations are made general-purpose and can accommodate an increasing heterogeneity of software and services sitting on top \cite{gcomm-b3}. A potential outcome of such decoupling is that, in the long term, network infrastructure can be fully disaggregated, to the extent that it becomes possible to stitch together wholly new formats of service from multiple providers: in effect, an Amazon Web Services for \textit{5G ...6G ...and beyond}. SDN (Software Defined Networking) technologies, which previously allowed the decoupling of data and control planes for backbone network flows, are increasingly being adapted for wireless. These include recent research on adversarial dynamic spectrum access and software radio to enable infrastructure slicing through to the radio access edge \cite{gcomm-b3}. In tandem with these advances, standardization activities including the ETSI GANA (Generic Autonomous Networking Architecture) now provide a reusable model for the separation of higher-level resource orchestration (the cellular control plane) and the dynamic, software-driven heterogeneous infrastructures delivering services underneath \cite{gcomm-b4}\cite{gcomm-b20}. Extending further from a general separation of data and control planes, recent research and commercial offerings are increasingly pursuing the goal of delivering infrastructures and services piecemeal, or of decentralizing and abstracting away service providers entirely. These range from a basic expansion of classic MVNO models such as Google Fi \cite{gcomm-b5} and HMD Connect \cite{gcomm-b6}; to dynamic, API-consumable wireless services from vendors such as Twilio \cite{gcomm-b7} and Telnyx \cite{gcomm-b8}; and finally to a full decentralization of wireless network functions using blockchain technologies \cite{gcomm-b9}\cite{gcomm-b29}\cite{gcomm-b30}. One difficulty in realizing full decentralization, however, is the classic resource management structure, which retains global visibility at the base station and cellular core, paired with a subordinate UE (User Equipment) device at the edge. Taking this as a starting point, one question raised is what becomes of the UE device at the network edge, and how network services are consumed in the absence of classic network control. With significantly less environmental context available at the UE, addressing device control under this general lack of data requires further research. This paper investigates an enhancement of existing mobile-controlled handoff capabilities by doing all learning \textit{on-device}, using the existing mechanic of measuring RSSI (Received Signal Strength Indicator). The remainder of the paper is split into four parts. First, we provide background on the existing mechanics of cellular mobility, recent research into cellular network decentralization, and potential machine learning methods that may be considered as alternatives to the approach detailed in this paper. After this, we present our design of a "transition learning" algorithm in section three, followed by our simulation results in section four.
The paper concludes with a discussion of results and identification of paths for future research in section five. \section{Background and Related Research} The following section provides additional background and related research to highlight the gap addressed and the contribution made by the transition learning algorithm presented in this research. This section covers cellular mobility management, network decentralization, and machine learning applications and limitations. \subsection{Cellular Mobility Management} Across network generations and vendor configurations, cellular mobility can follow a broad range of architectures. At a high level, these can be organized into three categories: network-controlled handoff, mobile-assisted handoff, and mobile-controlled handoff \cite{gcomm-b14}. \subsubsection{Network Controlled Handoff} As the most centralized approach, network-controlled handoff places all knowledge and mobility control with network base stations. This approach is largely a carryover of the earliest network designs, in which UE devices lacked the sensors and compute resources to participate in mobility coordination. In this model, mobility decisions not taken at the network edge can add a non-trivial amount of signaling latency if they involve a remote or regional network core. \subsubsection{Mobile Assisted Handoff} UE devices participating in mobile-assisted handoff are able to report on-device sensor readings, specifically RSSI, which is calculated from the RSRP (reference signal received power) and RSRQ (reference signal received quality) \cite{gcomm-b12}. With this data updating periodically, the core network is able to balance the state of the UE device against its global visibility of the wider capability and status of the network (including total device density, its own backhaul capacity, and the specific commitments and priorities tied to all other services operating from a given base station) ahead of making a mobility decision. \subsubsection{Mobile Controlled Handoff} Allowing the UE to handle handoff decisions reduces handoff times compared to the previously mentioned methods \cite{gcomm-b2}. In this model, the UE monitors the measured RSSI values of pilot channel signals received from surrounding base stations and initiates a handoff when certain conditions are met, such as when the RSSI from a connected base station is no longer the highest and drops below a defined threshold with additional padding to limit hysteresis (Fig. \ref{classic_mobility}) \cite{gcomm-b1}\cite{gcomm-b13}. The research and transition learning algorithm presented in this paper are targeted at extending the capability of this type of handoff operation. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{mobility.pdf} \caption{Example transitions based on RSSI received at the UE.} \label{classic_mobility} \end{figure} \subsection{Blockchain Network Decentralization} \begin{figure} \centering \includegraphics[width=\linewidth]{multi_network_blockchain.pdf} \caption{Example networks operating individually decentralized network functions \cite{gcomm-b9}.} \label{network_blockchain} \end{figure} Blockchain, at its lowest level, is a forward hash-linked data structure. Data stored in "blocks" is hashed, and this hash is carried forward and added to new blocks, which are themselves then hashed. By including the hash from previous blocks, the data in total becomes cryptographically linked, forming a "chain" \cite{gcomm-b36}.
Blockchain technology encompasses an entire category of implementations, supporting combinations of cryptocurrency and contract logic \cite{gcomm-b32}, or isolated for use only as database storage \cite{gcomm-b9}. In network implementations, blockchain has been pursued to allow a broad decentralization of network infrastructures and services. Examples include applications of network access control \cite{gcomm-b29}\cite{gcomm-b35}, spectrum access auctions \cite{gcomm-b34}, and the general use of blockchain technology as an agnostic storage layer used by network functions (Fig. \ref{network_blockchain}) \cite{gcomm-b9}. The latter is significant because it is intended to be generalizable and to allow broad decentralization of any network service built atop 5G VNFs (Virtual Network Functions). While this paper is not an investigation of blockchain technology itself, it is important context to highlight, as the experiment presented assumes a network environment where the infrastructures are not part of a unitary carrier deployment but are instead independent, with the only commonality being the UE, which has access across them. This context is most similar to emergency calling or WPS (Wireless Priority Service), in which a UE, even without an active carrier subscription, must be permitted access to available networks when placing emergency calls. To the authors' knowledge, there is no deployed equivalent to WPS for data access \cite{gcomm-b33}. The presented research extends the current body of knowledge in this direction. \subsection{Learning Applications and Limitations} Machine learning is a very active path of investigation for enabling autonomy in decision making. Machine learning approaches can be classified into three broad categories depending on the type of feedback signal available to the learning system: supervised learning, unsupervised learning, and reinforcement learning. This section provides a summary of these three, as well as a fourth, narrower subcategory chosen for the experiment in this paper. \subsubsection{Supervised Learning} Supervised learning models learn to generalize the input-output mappings presented to them by a “supervisor” signal in the form of labeled data. The use of labeled data to train and predict new data points gives precise control of what the model learns through the curation of the labeled dataset. Training supervised learning systems with high-quality data that is representative of the ground truth can lead to high levels of accuracy on unseen data points. This level of precise control over what the model learns, and the dependence on labeled data points, is also a drawback of supervised learning systems, as they require a large and varied amount of representative data to be able to generalize well. Supervised learning methods are less common in cellular deployments, but have been employed for mobile edge computing (MEC) and QoS policy control operations taking place at the less resource-constrained network core \cite{gcomm-b21}\cite{gcomm-b22}. \subsubsection{Unsupervised Learning} In cases where labeled data is difficult to acquire or simply unavailable, unsupervised learning approaches can be used to uncover the underlying structures in data. These approaches trade a level of control over what the model learns for the ability to learn underlying structures and make predictions without knowing the ground truth in the form of labeled data.
Beyond also requiring a large and varied dataset, a second drawback specific to unsupervised learning is the difficulty of assessing the accuracy of models derived from unlabeled data without human validation. Human effort is back-loaded with unsupervised learning, compared to supervised learning, where most human effort is front-loaded through the labeling of datasets to ensure they represent a ground truth. In 5G and beyond contexts, the unstructured format of unsupervised learning has often been paired with network stream data and monitoring systems for retroactive self-diagnosis, rather than autonomous actuation of cellular resources, due to the mentioned lack of control over \textit{what} is learned \cite{gcomm-b23}\cite{gcomm-b24}. \subsubsection{Reinforcement Learning} Reinforcement learning models learn to map actions to situations in order to maximize a designated reward. A reinforcement learning approach differs from the previous two approaches in multiple ways. First of all, instead of learning from a large dataset, reinforcement learning agents interact with an environment to gather data points and learn how to maximize a reward signal. As the reinforcement learning agent is naive about the environment, it is required to explore the environment as well as exploit any potential source of rewards. This ongoing dilemma between exploration and exploitation means reinforcement learning agents require a high level of interaction. As a result of this ongoing interaction, a reinforcement learning agent is much more adaptable, with minimal need for retraining: it can slowly adapt to changes with each new interaction while expiring old data, making it relatively storage-efficient compared to supervised and unsupervised machine learning models. This structure and behavior make reinforcement learning a better-suited candidate for cellular applications and operations managed by the UE. Reinforcement learning models are commonly used in wireless for decision making under unknown network conditions and in contexts involving resource competition or opportunistic access \cite{gcomm-b25}\cite{gcomm-b26}\cite{gcomm-b27}\cite{gcomm-b28}. \subsubsection{Markov Chains and Transition Learning} \begin{figure} \centering \includegraphics[width=\linewidth]{markov_chain.pdf} \caption{2-D Markov Chain model with North [N], South [S], East [E], and West [W] transitions.} \label{markov_chain} \end{figure} Markov Chains are a method of representing the probabilities of moving from one state to another. This movement is referred to as a \textit{transition}. By design, Markov Chains and Markov processes are intended to model an expected outcome based only on a current state and are considered \textit{memoryless} (Fig. \ref{markov_chain}). Markov chains are often used to model processes that are stochastic and where past history has decaying or no value over time, such as in wireless networks \cite{gcomm-b15}\cite{gcomm-b16}. In cases where additional context can be gained from previous states, these Markov transitions can be saved for further processing in the form of transition learning. Data produced during state transitions in cellular networks has also been used as the training set for the previously mentioned formats of machine learning \cite{gcomm-b17}\cite{gcomm-b18}. This paper applies transition learning in isolation, rather than within a larger learning algorithm.
To the authors' knowledge, transition learning has not previously been investigated in isolation as a solution to extend the capabilities of mobile-controlled handoff. Although most learning algorithms fall into one of these broad categories, the categories should be understood more as general areas and less as strict separations, as there are exceptions that do not map cleanly into a single category, as seen with methods such as meta-learning, which can provide a cross-category aggregated result \cite{gcomm-b19}. \section{System Design} In this section we implement an algorithm that extends the use of the RSSI data already existing at the UE, to determine whether this minimal amount of data can help a given UE make higher-performing base station associations under a scenario of mobile-controlled handoff. To do this, we create an algorithm where a UE can store, and take decisions informed by, a compact history of prior state transitions combined with the performance outcome it received (Fig. \ref{algorithm}). The following section details the transition learning algorithm and the setup of the simulation environment. \subsubsection{Base Station Association} In the experiment, it is assumed that the UE has access to, and a policy giving equal preference to, all base stations in the environment. In order to represent a traditional preferred roaming list, the UE constantly monitors the 3 closest base stations. The UE is configured to always associate with the closest base station of the three, mimicking default RSSI association behavior. \subsubsection{Base Station Allocation} Because real-world cellular performance is a temporal mix of frequency band, resource block allocation, signal interference, backhaul load, and further factors, the experiment abstracts these and defines an "allocation" value to be used as a proxy representing composite performance measured at the UE. Further, the experiment treats the base station allocations as uniform, with an isotropic radiation pattern in free space. Allocation values of 5 and 7 were used to present a scenario of significant $allocation\ \Delta$ (Fig. \ref{grid_world}). \subsubsection{Defining Transitions} To define transitions, the UE begins in some $state$, where it checks for the 3 base stations with the highest RSSI, defined by their physical proximity (\ref{ranking}). After completing a random walk, the UE checks whether the rank order of these strongest 3 signals has changed. If it has not changed, the UE does not have a new state and does not evaluate any mobility action. If the UE detects a change in the rank order \emph{and} the strongest signal is also changed (\ref{new_state}), the UE understands this as a new $state'$. From here the UE takes the default action of connecting to the base station with the strongest signal and calculates the difference in the allocation it received from moving to the new $state'$ as an $allocation\ \Delta$. This beginning $state$, final $state'$, and $allocation\ \Delta$ are stored as $transition_n$ (\ref{transition}). This $transition_n$ is the only value the UE retains in memory.
\begin{equation} \begin{array}{c} state\ =\\ \begin{bmatrix}base\ station\ rank_1, \\ base\ station\ rank_2, \\ base\ station\ rank_3\end{bmatrix} \end{array} \label{ranking} \end{equation} \vspace{6pt} \begin{equation} \begin{array}{c} state(base\ station\ rank_1)\ \not= \\ state'(base\ station\ rank_1) \end{array} \label{new_state} \end{equation} \vspace{6pt} \begin{equation} \begin{array}{c} transition_n =\\ ("state", "state'", "allocation\ \Delta") \end{array} \label{transition} \end{equation} \vspace{6pt} \subsubsection{Transition Learning} \begin{figure} \centering \includegraphics[width=\linewidth]{algorithm_logic.pdf} \caption{Handoff override decision process} \label{algorithm} \end{figure} Until this point, the UE has been configured with a baseline behavior that mirrors a standard association based on RSSI. To extend this, we contribute a new algorithm that learns network allocation outcomes when the rank order of the 3 closest base stations is changed. If the UE has not seen a specific transition before, it continues the default behavior and associates to the base station with the highest RSSI. As the UE performs further handoffs and stores the state transitions, if the UE has seen some $transition_n$ previously, it can choose to perform an "override" and skip the handoff if it has learned that a negative $allocation\ \Delta$ is expected for that transition. The decision logic of this override process is shown in Fig. \ref{algorithm}. The \textit{compute} complexity of the logic is fixed at O(1), since the logic always uses the same two inputs, current allocation and expected allocation, to make a decision. The total \textit{algorithmic} complexity of the transition learning process becomes O(log n) when paired with a binary search algorithm, assuming transitions are stored as a sorted list \cite{gcomm-b37}. \subsubsection{Simulation Environment} For the simulation we create an area that is a 23x23 unit grid containing 5 base stations placed at grid positions [0, 0], [22, 0], [22, 22], [0, 22], and [11, 11] (Fig. \ref{grid_world}). In this structure the simulation environment presents a 2-D Markov chain with matching state space and cardinality (\ref{prob_matrix}). Each grid unit of the simulation represents 1 city block. At the start of the simulation, a UE is placed at position [11, 11] and completes a series of 2000 continuous random walks of 10 unit steps each throughout the environment. With all grid positions equidistant and the transition matrix having a leading eigenvalue of 1, the transition probabilities out of any given position sum to 1 (\ref{sum_probability}), and the probability of the UE visiting any given position in the state space converges to 1 over the 2000-walk trial. Additionally, setting a boundary for the simulation environment makes the grid state space irreducible, and combining this with the aperiodicity of the random walk ensures that the long-run probability of the UE arriving at any single space in the environment is independent of the point at which the random walk started (\ref{position_probability}) \cite{gcomm-b11}\cite{gcomm-b31}. Transitions and allocations experienced during each 2000-walk trial are then averaged to provide an average allocation result for the simulation round. A total of 1000 such simulation rounds were run in order to provide a Monte Carlo sample of the transition learning algorithm's performance. The simulation environment is written in the Python programming language and is available to download from GitHub \cite{gcomm-b10}.
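Before presenting results, we give a minimal Python sketch of the override logic (an illustrative reimplementation of the decision rule described above; the released simulation code \cite{gcomm-b10} is the authoritative version, and a sorted list with binary search can replace the dictionary to obtain the O(log n) bound):
\begin{verbatim}
class TransitionLearner:
    """Stores (state, state') -> learned allocation delta and overrides
    handoffs whose learned delta is negative."""
    def __init__(self):
        self.deltas = {}

    def observe(self, state, new_state, alloc_delta):
        # Remember the allocation change seen for this rank-order transition.
        self.deltas[(state, new_state)] = alloc_delta

    def should_handoff(self, state, new_state):
        # Default RSSI behavior for unseen transitions; override (stay on
        # the current base station) when a negative delta was learned.
        expected = self.deltas.get((state, new_state))
        return expected is None or expected >= 0

# Usage: a state is the tuple of the 3 closest base stations ranked by RSSI.
ue = TransitionLearner()
ue.observe(("bs1", "bs2", "bs3"), ("bs2", "bs1", "bs3"), -2)
print(ue.should_handoff(("bs1", "bs2", "bs3"), ("bs2", "bs1", "bs3")))  # False
\end{verbatim}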
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{coverage_grid_scenario_2.pdf} \caption{The simulation environment using a 10-step random walk and static base station allocations.} \label{grid_world} \end{figure} \vspace{6pt} \begin{equation} P = \begin{bmatrix} P_{0,0} & P_{0,1} & \dots & P_{0,j} & \dots & P_{0,S}\\ P_{1,0} & P_{1,1} & \dots & P_{1,j} & \dots & P_{1,S}\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ P_{i,0} & P_{i,1} & \dots & P_{i,j} & \dots & P_{i,S} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ P_{S,0} & P_{S,1} & \dots & P_{S,j} & \dots & P_{S,S} \end{bmatrix} \label{prob_matrix} \end{equation} \vspace{6pt} \begin{equation} \mathlarger{\mathlarger{\sum}}_{j=1}^{S}{P_{ij}=1} \label{sum_probability} \end{equation} \begin{equation} \lim_{k\to\infty} (P^k)_{ij} = \pi_j \label{position_probability} \end{equation} \vspace{6pt} \section{Simulation Results} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/scen1_histogram.png} \caption{Network allocation distribution over 2000 rounds (Default Environment).} \label{map_4} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/scen1_alloc_default.png} \caption{Allocation map of Scenario 1 (Default Environment).} \label{map_1} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/scen1_alloc_learned_ave.png} \caption{Average allocation performance over 2000 rounds using transition learning (Default Environment).} \label{map_3} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/scen1_alloc_learned.png} \caption{Monte Carlo sample of a single random walk round. Allocations marked "0" are unexplored states. (Default Environment).} \label{map_2} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/load_allocation.png} \caption{Allocation map of Scenario 2 (Sector Load).} \label{map_5} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/load_snapshot.png} \caption{Monte Carlo sample of a single random walk round. Allocations marked "0" are unexplored states. (Sector Load).} \label{map_6} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/load_learned.png} \caption{Average allocation performance over 2000 rounds using transition learning (Sector Load).} \label{map_7} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{simulations/load_histogram.png} \caption{Network allocation distribution over 2000 rounds (Sector Load).} \label{map_8} \end{figure} To characterize the performance of the transition learning algorithm, we analyze it under two scenarios. The results of the two simulation scenarios are presented in Table \ref{simulation_table}. \subsubsection{Scenario 1: Default Environment} The first scenario is the "Default Environment", representing a best-case state where the allocations of all base stations are uniform across the entire state space and the final allocation granted is impacted only by the choice of base station association. Within the Default Environment, on average the transition learning algorithm performed an override during 29.36\% of transitions, delivering a net allocation increase of 5.5\% compared to base station associations relying only on RSSI (Fig. \ref{map_4}).
This scenario provided a predictable result in which the number of overrides performed roughly correlates with the area of the state space occupied by the base station with the higher allocation, given the environment geometry (fig. \ref{map_1}). This result also affirms the underlying probability relationship that, over the 2000 rounds, every space within the environment is visited by the UE (\ref{sum_probability}). Figure \ref{map_3} reveals a pattern of higher average allocation in areas bordering the higher-allocation zone, corresponding to the increased probability that a random walk starting in this area experiences a transition or transition override resulting from a base station rank change (\ref{position_probability}). Figure \ref{map_2} provides a single round snapshot that gives a higher resolution example of the overrides and resulting allocations that occur during individual rounds. \subsubsection{Scenario 2: Sector Load} The second scenario evaluated is "Sector Load" and is representative of a scenario where, within the coverage of a single base station, there is some subset of coverage (in this case 1 base station sector) that is under significant load, even while RSSI across the state space is unchanged. In this loaded sector, allocation is changed from \textit{7} to \textit{1} (fig. \ref{map_5}). In this scenario, knowledge of the additional load is not present in the measures available to the UE and is effectively \textit{hidden}. Within the Sector Load scenario, on average the transition learning algorithm performed an override during 30.89\% of transitions, delivering a net allocation increase of 7.0\% compared to base station associations based only on RSSI (fig. \ref{map_8}). The number of overrides performed is not significantly changed in this scenario, reflecting that the proportion of the state space with an allocation other than \textit{5} remains unchanged. The pattern of increased average allocation near the edges of the higher-allocation zone is repeated here (fig. \ref{map_7}), but is now shifted towards the base station at position [22,22], reflecting some portion of transitions being learned and then subsequently overridden when they involve the base station sector under load. Figure \ref{map_6} again gives a higher resolution example of the overrides and resulting allocations occurring during individual rounds of Scenario 2 with sector load. \begin{table*}[htbp] \centering \begin{tabular}{l|c|c|c|c|c|c|} \cline{2-7} & \multicolumn{3}{c|}{\textbf{Default Environment}} & \multicolumn{3}{c|}{\textbf{Sector Load}}\\ \cline{2-7} & \cellcolor[HTML]{EFEFEF}\textit{\% Override} & \cellcolor[HTML]{EFEFEF}\textit{Allocation Average} & \cellcolor[HTML]{EFEFEF}\textit{Performance Gain} & \cellcolor[HTML]{EFEFEF}\textit{\% Override} & \cellcolor[HTML]{EFEFEF}\textit{Allocation Average} & \cellcolor[HTML]{EFEFEF}\textit{Performance Gain} \\ \hline \multicolumn{1}{|r|}{RSSI Default} & 0 & 6.01 & - & 0 & 5.26 & - \\ \hline \multicolumn{1}{|r|}{Transition Learning} & 41.59 & 6.36 & 5.5\% & 30.89 & 5.65 & 7.0\% \\ \hline \end{tabular} \vspace{6pt} \caption{Summary of simulation results} \label{simulation_table} \end{table*} \section{Discussion and Future Research} Collectively, the authors present this paper as an early result exploring the broader topic of how a network, or more specifically its UE devices, can operate as network decentralization increases.
Placing additional environment logic at the UE allows that logic to become network-agnostic and to move \textit{with} the UE in a setting where network operation occurs peer-to-peer. It is important to note that, of the results achieved, the raw performance gain values can be considered secondary, as they are partly a function of the difference between the allocation values chosen for the simulation. The primary finding is the underlying behavioral relationships and the reliability with which the transition learning algorithm attains a better result at $O(\log n)$ complexity, even with hidden environmental contexts such as base station sector load. A potential area of investigation extending from the presented results is the impact and interaction of having multiple UEs within the environment making mobility decisions based on the transition learning algorithm. In this case it can be assumed that all UEs learn similar outcomes from similar transitions and begin to shift network load; such behavior would bring the problem statement closer to existing reinforcement learning experiments in wireless networking and allow a further comparison of the two approaches.
{ "timestamp": "2022-02-02T02:06:42", "yymm": "2202", "arxiv_id": "2202.00134", "language": "en", "url": "https://arxiv.org/abs/2202.00134" }
\section{Introduction}\vspace{-.1cm} Decentralized algorithms tackle an optimization problem with inter-connected agents/workers possessing local data without relying on a central server. For many scenarios relevant to large-scale machine learning, these algorithms improve computational scalability and preserve data privacy. Owing to these reasons, decentralized algorithms have become the critical enabler for applications such as sensor networks \citep{schizas2007consensus}, federated learning \citep{konevcny2016federated, wang2021field}, etc.\vspace{-.1cm} This paper concentrates on the \emph{communication efficiency} issue with decentralized algorithms, which is a key bottleneck as the latter rely heavily on the bandwidth-limited inter-agent communication links \citep{wang2021field}. An inefficient design may lead to significant overhead and slow down the application. Several approaches have been studied to tame this issue. The first approach is to consider the optimal algorithm design. \citet{scaman2019optimal, uribe2021dual} studied algorithms with an optimal iteration complexity, \citet{sun2019distributed, sun2020improving, pmlr-v139-lu21a} focused on non-convex problems and studied lower bounds on the number of communications needed; also see \citep{gorbunov2019optimal}. We remark that a common algorithm design to achieve optimal rates is to balance computation and communication by performing several computation (i.e., gradient) steps before communication.\vspace{-.1cm} Perhaps a more direct approach to improve communication efficiency is to apply \emph{compression} in every communication step of algorithms. This idea was first studied in the context of \emph{distributed optimization} where workers/agents communicate with a central server. A number of algorithms have been studied for the distributed setting with compression strategies such as sparsification \citep{stich2018sparsified, alistarh2019convergence, wangni2018gradient}, quantization \citep{alistarh2017qsgd, bernstein2018signsgd, reisizadeh2020fedpaq}, low-rank approximation \citep{vogels2019powersgd}, etc., often used in combination with an error compensation technique \citep{mishchenko2019distributed, tang2019doublesqueeze}. See a recent study via a unified framework in \citep{richtarik2021ef21}.\vspace{-.1cm} \begin{table*}[t] \centering \caption{Comparison of decentralized stochastic optimization algorithms for smooth \emph{non-convex} objective with $n$ agents. Iteration complexity is the no.~of iterations, $T$, required to obtain an $\epsilon$-stationary solution ($T^{-1} \sum_{t=0}^{T-1} \mathbb{E}[ \| \nabla f( \bar{\prm}^t ) \|^2] \leq \epsilon^2$). Constants $\delta, \sigma^2, \overline{G}_0, \rho$ are defined in \Cref{ass:mix}, \ref{ass:stoc}, \ref{ass:compress}, \Cref{th:main}. Highlighted in {\color{red} red} are the dominant terms when $\epsilon \to 0$. }\vspace{-.1cm} \label{tab:compare} \begin{tabular}{llll} \toprule \bfseries Algorithms & \bfseries Iteration~Complexity & \bfseries Compress.
& \bfseries Remarks \\ \midrule {\sf DSGD} & $ \mathcal{O}\left(\max\left\{ {\color{red} \frac{\sigma^2}{n} \epsilon^{-4} }, \frac{n( \sigma^2 + \varsigma^2) }{\rho^2 \epsilon^2} \right\} \right)$ & $\xmark$ & {\footnotesize $\varsigma^2 = \sup_{i, \theta} \| \nabla f_i(\theta) - \nabla f(\theta) \|^2$} \\ {\sf GNSD} & $\mathcal{O}\left( {\color{red} \frac{1}{C_0^2 C_1^2} \epsilon^{-4} } \right)$ & $\xmark$ & \begin{tabular}{@{}l@{}} {\footnotesize $C_0, C_1$ are not explicitly defined,} \\ {\footnotesize see \citep{lu2019gnsd}.}\end{tabular} \\ {\sf DeTAG} & $\mathcal{O}\left( \max\left\{ {\color{red} \frac{\sigma^2}{nB} \epsilon^{-4} }, \frac{ \log\left( n + \varsigma_0 n\epsilon^{-1}\right)}{ \rho^{0.5}\epsilon^2}\right\} \right)$ & $\xmark$ & \begin{tabular}{@{}l@{}} {\footnotesize $\varsigma_0$ is variance of stoc.~gradient at init.,} \\ {\footnotesize $B$ is rounds of comm. per iteration.}\end{tabular}\\ {\sf GT-HSGD} & $\mathcal{O}\left( \max\left\{ {\color{red} \frac{\sigma^3}{n} \epsilon^{-3} }, \frac{\overline{G}_0}{\rho^3\epsilon^2}, \frac{n^{0.5}\sigma^{1.5}}{\rho^{2.25}\epsilon^{1.5}} \right\} \right)$ & $\xmark$ \\ {\sf CHOCO-SGD} & $\mathcal{O}\left( \max\left\{ {\color{red} \frac{\sigma^2}{n} \epsilon^{-4} }, \frac{G}{\delta\rho^2\epsilon^3} \right\} \right)$ & $\cmark$ & {\footnotesize $G = \sup_{i, \theta} \mathbb{E}_{\zeta \sim \mu_i} [ \| \nabla f_i (\theta; \zeta) \|^2]$} \\ \cellcolor{gray!15}{\sf DoCoM-SGT} & \cellcolor{gray!15}$\mathcal{O}\left( \max\left\{ {\color{red} \frac{\sigma^3}{n} \epsilon^{-3} }, \frac{n\overline{G}_0}{\delta^2\rho^4\epsilon^2}, \frac{n^{1.25}\sigma^{1.5}}{\delta^{2.25}\rho^{4.5}\epsilon^{1.5}}\right\} \right)$ & \cellcolor{gray!15}$\cmark$ & \cellcolor{gray!15}{\footnotesize See \Cref{th:main}} \\ \bottomrule \end{tabular}\vspace{-.3cm} \end{table*} For decentralized optimization where a central server is not employed, the design of compression-enabled algorithm is more challenging. \citet{tang2018communication} proposed an extrapolation compression method, \citet{koloskova2019decentralized, koloskova2019decentralizeda} proposed the {\sf CHOCO-SGD} algorithm which combines decentralized SGD \citep{lian2017can} with error compensation. Despite the simplicity and reasonable practical performance demonstrated, algorithms such as {\sf CHOCO-SGD} suffer from a sub-optimal iteration complexity, and their analysis show that the performance depends on the data similarity across agents which is not ideal in light of applications such as federated learning \citep{konevcny2016federated}.\vspace{-.1cm} Concentrating on the stochastic optimization setting, this paper addresses the aforementioned issues by developing a communication efficient algorithm with three ingredients: {\sf (A)} compression, {\sf (B)} gradient tracking, and {\sf (C)} momentum-based variance reduction. Our contributions are:\vspace{-.1cm} \begin{itemize}[leftmargin=*, topsep=0mm, noitemsep] \item We derive the \emph{Doubly Compressed Momentum-assisted Stochastic Gradient Tracking} ({\sf DoCoM-SGT})~algorithm which utilizes two levels of error-compensated compressions for tackling stochastic optimization problem in a communication efficient manner. Incorporated with gradient tracking, our algorithm is able to find a stationary solution without relying on additional conditions such as bounded similarity between data distributions. \item We provide a unified convergence analysis for {\sf DoCoM-SGT}. 
Let $f(\bar{\prm})$ be the averaged objective function across the network, to be defined in \eqref{eq:opt}. Our main result shows that {\sf DoCoM-SGT}~finds a solution, $\bar{\prm}^T$, in $T$ iterations and communication rounds with $\mathbb{E} [ \norm{ \nabla f(\bar{\prm}^T) }^2 ] = {\cal O}( 1 / {T}^{2/3} )$ for general smooth (possibly non-convex) objective functions in \Cref{th:main}, and with $\mathbb{E} [ f( \bar{\prm}^T ) - f^\star ] = {\cal O}( \log T / {T} )$ for objective functions satisfying the Polyak-Łojasiewicz condition in \Cref{cor:pl}. For the latter case, we further show that if deterministic gradients are available, then {\sf DoCoM-SGT}~converges \emph{linearly} in terms of the optimality gap. These convergence rates are comparable to state-of-the-art algorithms. \item We empirically evaluate the performance of {\sf DoCoM-SGT}~on training linear models and deep learning models using synthetic and real data, on non-convex losses. \end{itemize}\vspace{-.1cm} Our analysis relies on the construction of a new Lyapunov function that handles the coupled terms between the errors of compression, gradient tracking and average iterates; see \Cref{lem:wholesys}. We emphasize that obtaining the ${\cal O}(1/T^{2/3})$ bound requires a number of subtle modifications to be demonstrated in the proof outline of Sec.~\ref{sec:pf}. Lastly, we compare the iteration complexities of state-of-the-art algorithms in \Cref{tab:compare}. As seen, {\sf DoCoM-SGT}~is the only algorithm with compression and ${\cal O}(\epsilon^{-3})$ complexity.\vspace{-.1cm} \textbf{Notations.} $\| \cdot \|$, $\| \cdot \|_F$ denote the Euclidean norm and Frobenius norm, respectively. The subscript-less operator $\mathbb{E} [\cdot]$ is the total expectation taken over all randomness in the operand. \vspace{-.4cm} \subsection{Related Works}\vspace{-.15cm} \textbf{Decentralized Optimization.} Algorithms for decentralized optimization were first studied in \citep{nedic2009distributed}. The main idea is to mix communication (i.e., consensus) with optimization (i.e., gradient) steps. It has been extended to the stochastic setting (a.k.a.~{\sf DSGD}) in \citep{ram2010distributed}, and to directed graphs \citep{tsianos2012push, assran2019stochastic}. Notably, \citet{qu2017harnessing} proposed a gradient tracking technique where agents communicate local gradients to accelerate convergence.\vspace{-.1cm} In the stochastic non-convex optimization setting, we note that \citet{lian2017can} provided a performance analysis of {\sf DSGD}; \citet{lu2019gnsd} proposed {\sf GNSD} which combines gradient tracking with stochastic gradients (also see \citep{tang2018d}); \citet{pmlr-v139-lu21a} proposed {\sf DeTAG} with optimal computation-communication tradeoff; \citet{xin2021hybrid} proposed {\sf GT-HSGD} which extended {\sf GNSD} with momentum-based variance reduction, and a similar algorithm is in \citep{zhang2021gt}. Note that the latter idea was proposed in \citet{tran2021hybrid, cutkosky2019momentum} to achieve the optimal sample complexity for centralized SGD. See a recent survey in \citep{chang2020distributed}.\vspace{-.1cm} \textbf{Communication Efficient Algorithms.} Methods for reducing the communication burden in decentralized algorithms have been developed. For instance, \citep{aysal2008distributed, kashyap2007quantized, reisizadeh2019exact} studied quantization for the average consensus protocol, which is a main building block for decentralized algorithms.
Notably, recent works \citep{liu2020linear, liao2021compressed, song2021compressed} showed that combining compression with the gradient tracking technique leads to algorithms that converge linearly to an optimal solution. We remark that these algorithms bear a similar structure to {\sf DoCoM-SGT}, yet they focus on strongly convex objectives and consider deterministic gradients; see \citep{kovalev2021linearly} for an extension to the stochastic setting. \vspace{-.25cm} \section{Problem Setup \& Background}\vspace{-.15cm} We consider a weighted and undirected graph $G = ( {\cal N} , {\cal E} , {\bf W} )$ with the node set ${\cal N} = \{1,\ldots,n\}$ representing a set of $n$ agents, the edge set ${\cal E} \subseteq {\cal N} \times {\cal N}$ representing the communication links between agents, and a weighted adjacency matrix ${\bf W} \in \mathbb{R}^{n \times n}$. Note that self-loops are included such that $\{i,i\} \in {\cal E}$ for all $i$. Our goal is to tackle the following stochastic optimization problem in a decentralized manner by the $n$ agents on $G$: \begin{align} \textstyle \min_{ \theta \in \mathbb{R}^d }~f(\theta) := \frac{1}{n} \, \sum_{i=1}^n f_i(\theta), \label{eq:opt} \end{align} where $f_i : \mathbb{R}^d \rightarrow \mathbb{R}$ is a continuously differentiable (possibly non-convex) objective function known to the $i$th agent. In particular, the objective function can be expressed as the expectation $f_i(\theta) = \mathbb{E}_{ \zeta \sim \mu_i } [ f_i( \theta; \zeta ) ]$ such that $\mu_i$ denotes the data distribution available at agent $i$. Throughout this paper, we assume: \begin{assumption} \label{ass:lips} There exists $L \geq 0$ such that for any $i=1,...,n$, the gradient of $f_i( \cdot; \zeta )$ is $L$-Lipschitz, i.e., \begin{equation} \label{eq:lips} \| \nabla f_i( \theta; \zeta ) - \nabla f_i( \theta' ; \zeta ) \| \leq L \| \theta - \theta' \|, \end{equation} for any $\theta, \theta' \in \mathbb{R}^d$ and $\zeta \in {\rm supp} ( \mu_i )$. \end{assumption} \begin{assumption} \label{ass:mix} The adjacency matrix ${\bf W} \in \mathbb{R}_+^{n \times n}$ satisfies: \begin{enumerate}[leftmargin=*, topsep=0mm, itemsep=0mm, partopsep=0mm] \item (graph topology) $W_{ij} = 0$ if $ \{ i,j \} \notin {\cal E}$, \item (doubly stochastic) ${\bf W} {\bf 1}_n = {\bf W}^\top {\bf 1}_n = {\bf 1}_n$, \item (mixing) let ${\bf U} \in \mathbb{R}^{n \times (n-1)}$ be a matrix with orthogonal columns satisfying ${\bf I}_n - (1/n) {\bf 1}{\bf 1}^\top = {\bf U} {\bf U}^\top$, then there exists $\rho \in (0,1]$ such that $\| {\bf U}^\top {\bf W} {\bf U} \| \leq 1 - \rho$. \item (bounded eigenvalue) there exists $\bar{\omega} \in (0,2]$ such that $\norm{ {\bf W} - {\bf I}_n } \leq \bar{\omega}$. \end{enumerate} \end{assumption} The above conditions are standard. \Cref{ass:lips} requires the objective function to be smooth\footnote{\label{foot:lips}Our analysis is extensible to a slightly relaxed condition replacing \eqref{eq:lips} with $\mathbb{E}_\zeta[ \| \nabla f_i( \theta; \zeta) - \nabla f_i(\theta'; \zeta) \|^2 ] \leq L^2 \| \theta - \theta' \|^2$.}, and there exists ${\bf W}$ such that \Cref{ass:mix} is satisfied when $G$ is a connected graph; see \cite{boyd2004fastest}.
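As a concrete example, the following minimal sketch constructs Metropolis weights on a ring graph, one standard recipe (not necessarily the matrix used in our experiments) that yields a symmetric, doubly stochastic ${\bf W}$ with self-loops satisfying \Cref{ass:mix} on a connected graph:

\begin{verbatim}
import numpy as np

def metropolis_ring(n):
    # W_ij = 1 / (1 + max(deg_i, deg_j)) on edges; deg = 2 on a ring.
    W = np.zeros((n, n))
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            W[i, j] = 1.0 / 3.0
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))   # self-loop weights
    return W

W = metropolis_ring(25)
assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)
# For a symmetric W, 1 - rho is the second largest |eigenvalue|.
lam = np.sort(np.abs(np.linalg.eigvalsh(W)))
print("1 - rho =", lam[-2])
\end{verbatim}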
Moreover, the gradient of $f_i$ can be estimated as $\nabla f_i( \theta; \zeta)$ satisfying: \begin{assumption} \label{ass:stoc} There exists $\sigma \geq 0$ such that for any $\theta \in \mathbb{R}^d$, $i=1,...,n$, the gradient estimate $\nabla f_i( \theta; \zeta)$ with $\zeta \sim \mu_i$ is unbiased with bounded variance, i.e., \begin{equation*} \mathbb{E}[ \nabla f_i( \theta; \zeta) ] = \nabla f_i(\theta),~\mathbb{E} [ \norm{ \nabla f_i( \theta; \zeta) - \nabla f_i( \theta) }^2 ] \leq \sigma^2, \end{equation*} where the expectations are taken w.r.t.~$\zeta \sim \mu_i$. \end{assumption} \textbf{DSGD and CHOCO-SGD Algorithms.} Equipped with \Cref{ass:mix}, \ref{ass:stoc}, a common practice for tackling \eqref{eq:opt} in a decentralized manner is to utilize ${\bf W}$ as a mixing matrix. To illustrate the basic idea, we observe the decentralized stochastic gradient ({\sf DSGD}) algorithm \citep{ram2010distributed, lian2017can}: at iteration $t$, \begin{align} \textstyle \theta_i^{t+1} = \sum_{j=1}^n W_{ij} \theta_j^t - \eta \nabla \widehat{f}_i^{t},~\forall~i, \label{eq:dgd} \end{align} where $\eta>0$ is the step size, $\nabla \widehat{f}_i^{t} \equiv \nabla f_i( \theta_i^{t}; \zeta_i^{t} )$ is the unbiased stochastic gradient with the data $\zeta_i^{t} \sim \mu_i$ drawn independently upon fixing $\theta_i^t$ and satisfying \Cref{ass:stoc}. For agent $i$, the \emph{consensus step} $\sum_{j=1}^n W_{ij} \theta_j^t$ can be computed with a local average among the neighbors of $i$. A drawback of \eqref{eq:dgd} is that agents are required to transmit $d$ real numbers on $G$ to their neighbors at every iteration. In practice, the communication links between agents are bandwidth-limited and such an algorithm may be undesirable when $d \gg 1$. To this end, a remedy is to apply \emph{compression} to messages transmitted on $G$. Formally, we consider a stochastic compression operator ${\cal Q}: \mathbb{R}^d \to \mathbb{R}^d$ satisfying the condition: \begin{assumption}\label{ass:compress} For any $x \in \mathbb{R}^d$, the compressor output ${\cal Q}(x)$ is the random vector $\tilde{\cal Q}(x; \xi)$ with $\xi \sim \pi_x$ such that there exists $\delta \in (0,1]$ satisfying\vspace{-.15cm} \begin{equation} \notag \mathbb{E} \left[ \norm{ x - {\cal Q}(x) }^2 \right] = \mathbb{E} \left[ \norm{ x - \tilde{\cal Q}(x; \xi) }^2 \right] \leq (1-\delta) \norm{x}^2.\vspace{-.3cm} \end{equation} \end{assumption} The above is a general condition on compressors as discussed in \citep{koloskova2019decentralized}. It is satisfied by a number of common designs. For instance, with $k \leq d$, the top-$k$ (resp.~random-$k$) \emph{sparsifier} given by\vspace{-.15cm} \begin{align} \label{eq:sparsifier} \left[ {\cal Q}(x) \right]_i = \begin{cases} x_i , & \text{if}~i \in {\cal I}_x, \\ 0 , & \text{otherwise},\vspace{-.05cm} \end{cases}\vspace{-.1cm} \end{align} where ${\cal I}_x \subseteq \{1,\ldots,d\}$ with $|{\cal I}_x| = k$ is the set of the coordinates of $x$ with the largest $k$ magnitudes (resp.~uniformly selected at random), satisfies \Cref{ass:compress} with $\delta = \frac{k}{d}$. Other compressors such as random quantization can also satisfy \Cref{ass:compress}; see \citep{alistarh2017qsgd, stich2018sparsified, alistarh2019convergence}. Note that sending ${\cal Q}(x)$ in \eqref{eq:sparsifier} over a communication channel requires the transmission of only $k$ real numbers. This achieves a $\frac{k}{d}$ compression ratio.
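For illustration, a minimal numpy sketch of the top-$k$ sparsifier \eqref{eq:sparsifier}, together with an empirical check of the contraction property in \Cref{ass:compress} (for top-$k$ the bound holds deterministically, since the discarded $d-k$ smallest-magnitude coordinates carry at most a $1 - k/d$ fraction of the energy):

\begin{verbatim}
import numpy as np

def topk(x, k):
    # Keep the k largest-magnitude coordinates, zero out the rest.
    q = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    q[idx] = x[idx]
    return q

rng = np.random.default_rng(0)
x, k = rng.standard_normal(1000), 100
q = topk(x, k)
# Contraction: ||x - Q(x)||^2 <= (1 - k/d) ||x||^2.
assert np.sum((x - q) ** 2) <= (1 - k / x.size) * np.sum(x ** 2) + 1e-12
\end{verbatim}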
However, applying ${\cal Q}(\cdot)$ to the consensus step in \eqref{eq:dgd} directly does not lead to a convergent algorithm as the compression error will accumulate as $t \to \infty$. The {\sf CHOCO-SGD} algorithm \citep{koloskova2019decentralized} resolves the issue by incorporating an error feedback step: at iteration $t$, \vspace{-.1cm} \begin{subequations} \label{eq:choco} \begin{align} & \widehat{\prm}_i^{t+1} = \widehat{\prm}_i^t + {\cal Q}( \theta_i^t - \eta \nabla \widehat{f}_i^{t} - \widehat{\prm}_i^t ), \\ & \textstyle \theta_i^{t+1} = \theta_i^t - \eta \nabla \widehat{f}_i^{t} + \gamma \sum_{j=1}^n W_{ij} ( \widehat{\prm}_j^{t+1} - \widehat{\prm}_i^{t+1} ),\vspace{-.3cm} \end{align} \end{subequations} for all $i$, where $\gamma > 0$ is the consensus step size, and $\eta, \nabla \widehat{f}_i^{t}$ were defined in \eqref{eq:dgd}. Instead of transmitting a compressed version of $\theta_i^t - \eta \nabla \widehat{f}_i^t$ directly, {\sf CHOCO-SGD} maintains an auxiliary variable $\widehat{\prm}_i^{t}$ that accumulates the compressed \emph{difference} ${\cal Q}(\theta_i^t - \eta \nabla \widehat{f}_i^t - \widehat{\prm}_i^t )$. Subsequently, the main variable $\theta_i^t$ is updated through a consensus step with this auxiliary variable. \citet{koloskova2019decentralizeda} proved that in $T$ iterations, {\sf CHOCO-SGD} finds a near-stationary solution of \eqref{eq:opt}, $\{ \theta_i^\tau \}_{i=1}^n$ with $\tau \in \{0,\ldots,T-1\}$, satisfying $\mathbb{E}[ \norm{\nabla f( n^{-1} \sum_{i=1}^n \theta_i^\tau )}^2 ] = {\cal O}(1/\sqrt{T})$. A drawback of {\sf CHOCO-SGD} is that the convergence of the algorithm requires the stochastic gradient $\mathbb{E}[ \| \nabla \widehat{f}_i^t \|^2 ]$ to be bounded for any $i,t$; see \Cref{tab:compare} and \citep{koloskova2019decentralized, koloskova2019decentralizeda}. The latter condition can be relaxed into requiring that the \emph{data similarity} $\sup_{ \theta \in \mathbb{R}^d} \norm{ \nabla f_i( \theta) - \nabla f( \theta) }$ is bounded, i.e., the local objective functions have to be close to each other. Nevertheless, these quantities are not easy to control for applications such as federated learning \cite{konevcny2016federated} as the local data are non-i.i.d. \section{Proposed {\sf DoCoM-SGT}~Algorithm} Taking a closer look at {\sf CHOCO-SGD} \eqref{eq:choco} reveals that when the data available at the agents are heterogeneous (a.k.a.~non-i.i.d.), i.e., $\sup_{ \theta \in \mathbb{R}^d} \norm{ \nabla f_i( \theta) - \nabla f( \theta) } \neq 0$, the algorithm can only utilize local gradient estimates $\nabla \widehat{f}_i^t \approx \nabla f_i( \theta_i^t )$. This estimate can be large even when the solution $\theta_i^t$ is close to a stationary point of \eqref{eq:opt}. As a result, the algorithm needs to incorporate a small step size $\eta$ (or a vanishing step size as $t \to \infty$) to compensate for the accumulated error. We propose the \emph{Doubly Compressed Momentum-assisted Stochastic Gradient Tracking} ({\sf DoCoM-SGT}) algorithm which offers improved convergence properties over {\sf CHOCO-SGD}.
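For concreteness, a minimal numpy sketch of one {\sf CHOCO-SGD} iteration \eqref{eq:choco}, the error-feedback mechanism that {\sf DoCoM-SGT}~builds on, over all $n$ agents at once; the gradient oracle and the (row-wise) compressor are assumed to be supplied:

\begin{verbatim}
import numpy as np

def choco_sgd_step(theta, theta_hat, W, grad, Q, eta, gamma):
    # theta, theta_hat: (n, d) arrays; W: (n, n) mixing matrix;
    # grad(theta): (n, d) stochastic gradients; Q: row-wise compressor.
    half = theta - eta * grad(theta)             # local SGD step
    theta_hat = theta_hat + Q(half - theta_hat)  # error feedback update
    # consensus step on the publicly shared surrogates theta_hat
    theta = half + gamma * (W @ theta_hat - theta_hat)
    return theta, theta_hat
\end{verbatim}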
Let $\eta > 0$ be the step size and $\gamma, \beta \in (0,1]$; the {\sf DoCoM-SGT}~algorithm at iteration $t \in \mathbb{N}$ reads\vspace{-.1cm} \begin{subequations} \label{eq:algo} \begin{align} \hspace{-.2cm} & \theta_i^{t+1} = \theta_i^t - \eta g_i^t + \gamma \sum_{j=1}^n W_{ij} ( \widehat{\prm}_j^{t+1} - \widehat{\prm}_i^{t+1} ) \label{eq:docom_a} \\ \hspace{-.2cm} & \widehat{\prm}_i^{t+1} = \widehat{\prm}_i^t + {\cal Q} \left( \theta_i^t - \eta g_i^t - \widehat{\prm}_i^t \right) \label{eq:docom_b} \\ \hspace{-.2cm} & v_i^{t+1} = \beta \nabla \widehat{f}_i^{t+1} + (1-\beta) \big[ v_i^t + \nabla \widehat{f}_i^{t+1} - \nabla \widetilde{f}_i^{t} \big] \label{eq:docom_c} \\ \hspace{-.2cm} & g_i^{t+1} = g_i^t + v_i^{t+1} - v_i^t + \gamma \sum_{j=1}^n W_{ij} ( \widehat{\gog}_j^{t+1} \hspace{-.1cm} - \widehat{\gog}_i^{t+1} ) \label{eq:docom_d} \\ \hspace{-.2cm} & \widehat{\gog}_i^{t+1} = \widehat{\gog}_i^t + {\cal Q} \left( g_i^t + v_i^{t+1} - v_i^t - \widehat{\gog}_i^t \right), \label{eq:docom_e} \end{align} \end{subequations} where we draw the sample $\zeta_i^{t+1} \sim \mu_i$ at agent $i$ (or a minibatch of samples) and define $\nabla \widehat{f}_i^{t+1} \equiv \nabla f_i( \theta_i^{t+1}; \zeta_i^{t+1} )$, $\nabla \widetilde{f}_i^t \equiv \nabla f_i( \theta_i^{t}; \zeta_i^{t+1} )$ such that the stochastic gradients in \eqref{eq:docom_c} are formed using the same data batch. In \Cref{alg:docom} we provide the pseudo-code of {\sf DoCoM-SGT}, which details the initialization and implementation. The {\sf DoCoM-SGT}~algorithm features two ingredients: {\sf (A)} a \emph{gradient tracking} step with \emph{compression} where each agent maintains an estimate of the averaged gradient $n^{-1} \sum_{i=1}^n \nabla \widehat{f}_i^t$; {\sf (B)} a momentum-based variance reduction step to improve the convergence rate, where our update form is similar to that of {\sf GT-HSGD} \citep{xin2021hybrid}. We observe that the compressed consensus step on $\{ \theta_i^t \}_{i=1}^n$ \eqref{eq:docom_a}, \eqref{eq:docom_b} resembles that of {\sf CHOCO-SGD} \eqref{eq:choco} except that the local update is computed along the direction $g_i^t$; the latter can be updated according to another compressed consensus step in \eqref{eq:docom_d}, \eqref{eq:docom_e} which aims at \emph{tracking the dynamically updated average gradient estimator} $g_i^t \approx n^{-1}\sum_{j=1}^n v_j^t$. Moreover, \eqref{eq:docom_c} uses a variance reduced estimate of the gradient with a recursive step similar to \cite{cutkosky2019momentum, tran2021hybrid}. \algsetup{indent=.25em} \begin{algorithm}[t] \caption{{\sf DoCoM-SGT}~Algorithm} \label{alg:docom} \begin{algorithmic}[1] \STATE {\bfseries Input:} mixing matrix ${\bf W}$; step sizes $\eta$, $\gamma$; momentum $\beta$; initial batch number $b_0$; initial iterate $\bar{\theta}^0 \in \mathbb{R}^d$. \STATE Initialize $\theta^0_i = \bar{\theta}^0,~\forall i \in [n], \widehat{\prm}^0_{i,j} = \bar{\theta}^0,~\forall \{i, j\} \in {\cal E}$, \\ \STATE Initialize stochastic gradient estimate \\ \centerline{$v^0_i = \frac{1}{b_0} \sum_{r=1}^{b_0} \nabla f_i(\theta_i^{0}; \zeta_i^{0,r}), \left\{\zeta_{i}^{0,r} \right\}_{r=1}^{b_0} \thicksim \mu_i$} \hspace{1.2em} $g_i^0 = v_i^{0},~\forall i \in [n], \widehat{\gog}^0_{i,j} = {\bf 0}_d,~\forall \{i, j\} \in {\cal E}$.
\FOR{$t$ {\bfseries in} $0, \dots, T-1$} \STATE $\forall i:~\theta_i^{t+\frac{1}{2}} = \theta_i^t - \eta g_i^t$ \FOR{$\{i,j\}\in {\cal E}$ (notice $\{i,i\}\in {\cal E}$)} \STATE Agent $j$ receives ${\cal Q}(\theta_i^{t+\frac{1}{2}} - \widehat{\prm}_{i,i}^t )$ from agent $i$ \STATE Set $\widehat{\prm}^{t+1}_{j,i} = \widehat{\prm}^{t}_{j,i} + {\cal Q}(\theta_i^{t+\frac{1}{2}} - \widehat{\prm}_{i,i}^t )$ \ENDFOR \STATE $\forall i:~\theta_i^{t+1} = \theta_i^{t+\frac{1}{2}} + \gamma \sum_{j:\{i,j\}\in {\cal E}} W_{ij} (\widehat{\prm}_{i,j}^{t+1} - \widehat{\prm}_{i,i}^{t+1})$ \STATE $\forall i:~v_i^{t+1} = \beta\nabla \widehat{f}_{i}^{t+1} \hspace{-.1cm}+\hspace{-.1cm} (1-\beta)(v_i^{t} + \nabla \widehat{f}_i^{t+1} \hspace{-.1cm}-\hspace{-.1cm} \nabla \widetilde{f}_i^{t})$ \STATE $\forall i:~g_i^{t+\frac{1}{2}} = g_i^t + v_i^{t+1} - v_i^t$ \FOR{$\{i,j\}\in {\cal E}$ (notice $\{i,i\}\in {\cal E}$)} \STATE Agent $j$ receives ${\cal Q}(g_i^{t+\frac{1}{2}} - \widehat{\gog}_{i,i}^t )$ from agent $i$ \STATE Set $\widehat{\gog}_{j,i}^{t+1} = \widehat{\gog}_{j,i}^t + {\cal Q}(g_i^{t+\frac{1}{2}} - \widehat{\gog}_{i,i}^t )$ \ENDFOR \STATE $\forall i:~g_i^{t+1} = g_i^{t+\frac{1}{2}} + \gamma \sum_{j:\{i,j\}\in {\cal E}} W_{ij} (\widehat{\gog}_{i,j}^{t+1} - \widehat{\gog}_{i,i}^{t+1})$ \ENDFOR \STATE {\bfseries Output:} pick the ${\sf T}$th iterate $\theta_i^{\sf T}$, where ${\sf T}$ is uniformly selected from $\{0,\ldots,T-1\}$; or the last iterate $\theta_i^{T}$. \end{algorithmic}\vspace{-.1cm} \end{algorithm} Lastly, we notice that {\sf DoCoM-SGT}~shares a similar communication and computation cost per iteration with {\sf CHOCO-SGD}, except that an extra communication step (with compression) is needed for the tracking of $n^{-1} \sum_{i=1}^n v_i^t$ and an extra computation step is needed for computing $\nabla \widetilde{f}_i^t$, in \eqref{eq:docom_d}, \eqref{eq:docom_e}. Similar to {\sf CHOCO-SGD}, the {\sf DoCoM-SGT}~algorithm requires each agent to store the auxiliary variables $\{ \widehat{\prm}_j^t , \widehat{\gog}_j^t \}_{j \in {\cal N}_i}$ of its neighbors to apply error compensation. As we will show later, the above shortcomings can be overcome as {\sf DoCoM-SGT}~has a better convergence rate.\vspace{-.2cm} \section{Convergence Analysis}\vspace{-.1cm} This section analyzes the expected convergence rate of the {\sf DoCoM-SGT}~algorithm in seeking a (near-)stationary solution of \eqref{eq:opt}. We demonstrate that it achieves state-of-the-art performance for decentralized optimization. Let $\bar{\prm}^t := n^{-1} \sum_{i=1}^n \theta_i^t$ be the averaged iterate, $\overline{G}_0 := n^{-1} \mathbb{E}[ \sum_{i=1}^n \norm{ g_i^0 }^2 ]$ be the initial expected gradient norm, $f^\star := \min_{ \theta' } f( \theta' )$ be the optimal objective value. We first summarize the convergence results under the mentioned assumptions where \eqref{eq:opt} is possibly non-convex: \begin{theorem} \label{th:main} Under \Cref{ass:lips}, \ref{ass:mix}, \ref{ass:stoc}, \ref{ass:compress}. Suppose that the step sizes satisfy \begin{equation} \label{eq:docom_stepsize} \begin{split} & \eta \leq \min \left\{ \eta_\infty, \sqrt{ \overline{\beta} n / ( 8 \mathbb{C}_{\avgg} ) } \right\},~~\gamma \leq \gamma_\infty, \end{split} \end{equation} where $\gamma_\infty, \eta_\infty$ are defined in \eqref{eq:stepsize_whole}. We set $\beta \in (0,1)$, $\overline{\beta} = \min\{ \frac{\rho \gamma}{8}, \frac{\delta \gamma}{8}, \beta \}$.
Then, for any $T \geq 1$, it holds \begin{align} & \frac{1}{ T} \sum_{t=0}^{T-1} \mathbb{E} \left[ \frac{1}{2} \norm{ \nabla f( \bar{\prm}^t)}^2 + \frac{L^2}{n} \sum_{i=1}^n \norm{ \theta_i^t - \bar{\prm}^t}^2 \right] \leq \label{eq:main_bd} \\ & \frac{f( \bar{\prm}^0 ) - f^\star}{ \eta T / 2 } + \mathbb{C}_\sigma \frac{2 \beta^2 \sigma^2}{\overline{\beta} n} + \frac{4\sigma^2}{b_0 \overline{\beta} T n} + \frac{\eta^2}{\overline{\beta} T} \frac{384 L^2 \overline{G}_0}{ \rho^2 \gamma^2 (1-\gamma) } \nonumber, \end{align} where \begin{align} & \mathbb{C}_\sigma = 4 + \frac{\eta^2}{\gamma^3}\frac{672L^2 n}{\rho^3} + \frac{\eta^2}{\gamma} \frac{6 L^2 n \rho^4 \delta}{25 \bar{\omega}^2} + \frac{\eta^2}{\gamma^2} \frac{4 L^2 n}{\bar{\omega}^2}, \label{eq:constS} \\ & \mathbb{C}_{\avgg} = 8(1-\beta)^2 L^2 (1-\rho\gamma)^2 + \frac{L^2 n}{\rho \gamma} \left( 96 + \frac{141}{400} \frac{\rho^2}{\bar{\omega}^2} \right). \nonumber \end{align}\vspace{-.6cm} \end{theorem} \textbf{Convergence Rate.} We set the step sizes and parameters as $\beta = \Theta( \frac{ n^{1/3} }{ T^{2/3} } ), \eta = \Theta( \frac{ n^{2/3} }{ L T^{1/3} } ), \gamma = \gamma_\infty, b_0 = \Omega( \frac{ T^{1/3} } { n^{2/3} } )$. Further, we select the ${\sf T}$th iterate such that ${\sf T}$ is drawn independently and uniformly from $\{0,\ldots, T-1\}$ [cf.~the output of \Cref{alg:docom}], similar to \citep{ghadimi2013stochastic}. For a sufficiently large $T$, it can be shown that \begin{align} & \textstyle n^{-1} \sum_{i=1}^n \mathbb{E} \left[ \norm{ \nabla f( \theta_i^{\sf T} )}^2 \right] = \label{eq:grdFbound} \\ & {\cal O} \left( \frac{L (f(\bar{\prm}^0) - f^\star)}{(nT)^{2/3}} + \frac{\sigma^2}{ (n T)^{2/3}} + \frac{n \overline{G}_0}{ \delta^2 \rho^4 T} + \frac{\sigma^2 n^{5/3}}{ \delta^3 \rho^6 T^{4/3}} \right) , \nonumber \end{align} where we have used \eqref{eq:main_bd} and the Lipschitz continuity of $\nabla f_i(\cdot)$ [cf.~\Cref{ass:lips}] to derive a bound on the gradient of the individual iterate $\theta_i^{\sf T}$. For any agent $i=1,\ldots, n$, the iterate $\theta_i^{\sf T}$ at the output of \Cref{alg:docom} is guaranteed to be ${\cal O}(1/T^{2/3})$-stationary for \eqref{eq:opt}. Notice that this is a state-of-the-art convergence rate for first-order stochastic optimization even in the centralized setting; see \citep{cutkosky2019momentum, tran2021hybrid}. Our rate is comparable to or faster than that of a number of decentralized algorithms with or without compression; see \Cref{tab:compare}. \textbf{Impacts of Network Topology and Compressor.} Eq.~\eqref{eq:grdFbound} indicates that the impacts of the network topology (due to $\rho$) and the compressor (due to $\delta$) vanish as $T \to \infty$. This can be observed by recognizing that the last two terms in \eqref{eq:grdFbound} are of the order ${\cal O}(1/T), {\cal O}(1/T^{4/3})$. In \Cref{app:step_twist}, we demonstrate that, with a similar set of step sizes, for any $T \geq T_{\sf trans} = \Omega( n^3 \overline{G}_0^3 / ( \sigma^6 \delta^6 \rho^{12} ) )$, {\sf DoCoM-SGT}~enjoys a convergence behavior matching that of a centralized SGD algorithm employing a momentum-based variance reduced gradient estimator with a batch size of $n$, e.g., \citep{tran2021hybrid}. In the latter case, we have $n^{-1} \sum_{i=1}^n \mathbb{E} \left[ \norm{ \nabla f( \theta_i^{\sf T} )}^2 \right] = {\cal O}( \sigma^2 / nT^{2/3} )$. Here, the constant $T_{\sf trans}$ is also known as the transient time of the decentralized algorithm \citep{pu2020asymptotic}.
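As a quick illustration of the parameter choices above (scalings only; the exact prefactors are dictated by \eqref{eq:docom_stepsize} and are omitted here), the schedule can be computed as:

\begin{verbatim}
def docom_schedule(n, T, L=1.0):
    # Scalings from the "Convergence Rate" discussion; prefactors
    # are illustrative, not the tuned values.
    beta = n ** (1 / 3) / T ** (2 / 3)
    eta = n ** (2 / 3) / (L * T ** (1 / 3))
    b0 = max(1, round(T ** (1 / 3) / n ** (2 / 3)))
    return beta, eta, b0

print(docom_schedule(n=25, T=10 ** 4))
\end{verbatim}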
Besides, we remark that our result does not require any assumption on the data heterogeneity level or the boundedness of the gradient as in {\sf CHOCO-SGD} \citep{koloskova2019decentralizeda} or {\sf DSGD} \citep{lian2017can}. As mentioned before, this is a consequence of the gradient tracking procedure applied. In \Cref{app:betaequalone}, we provide a separate analysis for the case of $\beta = 1$, i.e., when no momentum is applied in the algorithm \eqref{eq:docom_c}. Interestingly, in the latter case, the fastest convergence rate achievable in our analysis is only ${\cal O}(1/\sqrt{T})$ [cf.~\eqref{eq:docom_beta}], indicating that the momentum term may be crucial in accelerating {\sf DoCoM-SGT}. \textbf{PL Condition.} Finally, we show that the convergence rate can be improved when the objective function satisfies the Polyak-Łojasiewicz (PL) condition: \begin{assumption} \label{ass:pl} For any $\theta \in \mathbb{R}^d$, it holds that \begin{equation} \textstyle \norm{ \nabla f( \theta ) }^2 \geq 2 \mu \big[ f( \theta ) - f^\star \big].\vspace{-.2cm} \end{equation} \end{assumption} Notice that the PL condition is satisfied by strongly convex functions as well as by a number of non-convex functions; see \cite{karimi2016linear}. We obtain: \begin{corollary} \label{cor:pl} Under \Cref{ass:lips}, \ref{ass:mix}, \ref{ass:stoc}, \ref{ass:compress}, \ref{ass:pl}. Suppose that the step size condition \eqref{eq:docom_stepsize} holds and $\beta \in (0,1)$. Then, for any $t \geq 1$, it holds \begin{align} & \textstyle \Delta^{t} + \frac{2 L^2 \eta}{\overline{\beta} n} \sum_{i=1}^n \mathbb{E} [ \norm{\theta_i^t - \bar{\prm}^t}^2 ] \nonumber \\ & \textstyle \leq \left( 1 - \widetilde{\beta} \right)^t \left( \Delta^0 + \frac{2 \eta}{ \overline{\beta} n } {\tt V}^0 \right) + \frac{ \eta \beta^2 }{ \overline{\beta} \widetilde{\beta} } \frac{2 \mathbb{C}_\sigma \sigma^2}{n} , \label{eq:plcase} \end{align} where $\widetilde{\beta} := \min \left\{ \eta \mu, { \overline{\beta} } / {2} \right\}$, $\Delta^t := \mathbb{E} [ f( \bar{\prm}^t )] - f^\star$ is the expected optimality gap and the constant $\mathbb{C}_\sigma$ is defined in \eqref{eq:constS}. Notice that ${\tt V}^0$ can be upper bounded with \eqref{eq:er0}. \end{corollary} We set the step sizes and parameters as $\beta = \Theta(\log T /T), \eta = \Theta(\log T/T), \gamma = \gamma_\infty, b_0 = \Omega(1)$. For sufficiently large $T$, it can be shown that \vspace{-.15cm} \begin{align} & \mathbb{E} [ f( \bar{\prm}^T )] - f^\star = {\cal O}( \log T /T ), \label{eq:epsreq1} \\ & \textstyle \frac{1}{n} \sum_{i=1}^n \mathbb{E} [ \norm{\theta_i^T - \bar{\prm}^T}^2 ] = {\cal O}( \log T /T ), \label{eq:epsreq2}\vspace{-.15cm} \end{align} see \Cref{app:largeT}. Moreover, in the \emph{deterministic gradient case with $\sigma^2 = 0$}, we can select $\beta = \Theta(1), \eta = \Theta(1)$. Then, \eqref{eq:plcase} shows that {\sf DoCoM-SGT}~converges \emph{linearly} to an optimal solution such that $\mathbb{E} [ f( \bar{\prm}^T )] - f^\star = {\cal O}( (1-\widetilde{\beta})^T )$. The latter matches the performance of recently proposed algorithms with compression \cite{liu2020linear, liao2021compressed, song2021compressed, kovalev2021linearly}. \subsection{Proof of \Cref{th:main}}\label{sec:pf} We preface the proof by defining the following notation for the variables in {\sf DoCoM-SGT}.
For any $t \geq 0$: \begin{align*} \Theta^t = \left(\hspace{-.05cm} \begin{array}{c} (\theta_1^t)^\top \\ \vdots \\ (\theta_n^t)^\top \end{array} \hspace{-.05cm}\right), V^t = \left(\hspace{-.05cm} \begin{array}{c} (v_1^t)^\top \\ \vdots \\ (v_n^t)^\top \end{array} \hspace{-.05cm}\right), G^t = \left(\hspace{-.05cm} \begin{array}{c} (g_1^t)^\top \\ \vdots \\ (g_n^t)^\top \end{array} \hspace{-.05cm}\right) \end{align*} which are $n \times d$ matrices. Similarly, we define the matrices $\widehat{\Theta}^t$, $\widehat{G}^t$ based on $\{ \hat{\theta}_i^t \}_{i=1}^n, \{ \widehat{\gog}_i^t \}_{i=1}^n$, and the matrices $\nabla \widehat{F}^t$, $\nabla \widetilde{F}^t$, $\nabla F^t$ based on $\{ \nabla \widehat{f}_i^t \}_{i=1}^n$, $\{ \nabla \widetilde{f}_i^t \}_{i=1}^n$, $\{ \nabla f_i( \theta_i^t ) \}_{i=1}^n$. The norm of the matrix $\Theta_o^t = {\bf U}^\top \Theta^t$, i.e., $\norm{\Theta_o^t}_F^2$, measures the \emph{consensus error} of the iterate $\Theta^t$ since $\Theta_o^t = {\bf U}^\top ( {\bf I} - (1/n){\bf 1}{\bf 1}^\top) \Theta^t$; similarly, we denote $G_o^t = {\bf U}^\top G^t$ such that $\norm{G_o^t}_F^2$ measures the \emph{consensus error} of $G^t$. Denote the average variables $\bar{\prm}^t = n^{-1} {\bf 1}^\top \Theta^t$, $\bar{v}^t = n^{-1} {\bf 1}^\top V^t$, $\bar{g}^t = n^{-1} {\bf 1}^\top G^t$, $\overline{\nabla F}^t = n^{-1} {\bf 1}^\top \nabla F^t$. We have the following observation regarding the $\bar{\prm}^t$-update: \begin{lemma} \label{lem:f_1stepnew} Under \Cref{ass:lips} and the step size condition $\eta \leq \frac{1}{2L}$. Then, for any $t \geq 0$, it holds \begin{align} f( \bar{\prm}^{t+1} ) & \leq f( \bar{\prm}^t ) - \frac{\eta}{2} \norm{ \nabla f( \bar{\prm}^t ) }^2 + \frac{ L^2 \eta }{n} \norm{ \Theta_o^t }_F^2 \label{eq:f_1stepnew} \\ & \quad + \eta \norm{ \bar{v}^t - \overline{\nabla F}^t}^2 - \frac{\eta}{4} \norm{\bar{g}^t}^2. \nonumber \end{align} \end{lemma} The proof is relegated to \Cref{app:f1step} and is established using the relation $\bar{\prm}^{t+1} = \bar{\prm}^t - \eta \bar{g}^t$. We remark that the above lemma utilizes just \Cref{ass:lips} and results in a deterministic bound on $f( \bar{\prm}^{t+1})$. From \Cref{lem:f_1stepnew}, we observe that controlling $\norm{ \nabla f( \bar{\prm}^t ) }^2$ requires bounding $\norm{ \Theta_o^t }_F^2$ and $\norm{\bar{v}^t - \overline{\nabla F}^t}^2$. We have a set of coupled recursion formulas as follows: \begin{lemma} \label{lem:theta_o_new} Under \Cref{ass:mix}, \ref{ass:compress}. Then, for any $t \geq 0$, it holds \begin{align} \mathbb{E} [\norm{\Theta_o^{t+1}}_F^2] & \leq (1-\frac{\rho\gamma}{2}) \mathbb{E} [\norm{\Theta_o^t}_F^2] + \frac{2}{\rho} \, \frac{\eta^2}{\gamma} \mathbb{E} [ \norm{G_o^t}_F^2 ] \nonumber \\ & \quad + \frac{\bar{\omega}^2}{\rho} \, \gamma \, \mathbb{E} \left[ \norm{\Theta^t - \eta G^t - \widehat{\Theta}^t}^2_F \right]. \end{align} \end{lemma} \begin{lemma} \label{lem:vt_bound} Under \Cref{ass:lips}, \ref{ass:mix}, \ref{ass:stoc}, \ref{ass:compress} and let $\beta \in [0,1)$.
Then, for any $t \geq 0$, it holds \begin{align} & (1-\beta)^{-2} \mathbb{E} \left[ \norm{ \bar{v}^{t+1} - \overline{\nabla F}^{t+1} }^2 \right] \\ & \leq \mathbb{E} \left[ \norm{ \bar{v}^{t} - \overline{\nabla F}^{t} }^2 \right] + \frac{ 2 \beta^2}{(1-\beta)^2} \frac{\sigma^2}{n} \nonumber \\ & + \frac{8 L^2}{n^2} \eta^2 (1-\rho \gamma)^2 \mathbb{E} \left[ \norm{G_o^t}_F^2 + \frac{n}{2} \norm{ \bar{g}^t }^2 \right] \nonumber \\ & + \frac{8 L^2}{n^2} \bar{\omega}^2 \gamma^2 \, \mathbb{E} \left[ \frac{1-\delta}{2} \norm{\Theta^t - \eta G^t - \widehat{\Theta}^t}_F^2 + \norm{\Theta_o^t}_F^2 \right] \nonumber \end{align} \end{lemma} The proofs are relegated to \Cref{app:theta_o_new}, \ref{app:vt_bound}; the latter also provides a bound on the difference matrix $\mathbb{E} \left[ \norm{ V^{t+1} - \nabla F^{t+1} }_F^2 \right]$. Moreover, in \Cref{app:control_GoTheta}, we show that $\mathbb{E}[ \norm{G_o^t}_F^2]$, $\mathbb{E} [ \| \Theta^t - \eta G^t - \widehat{\Theta}^t \|_F^2]$, $\mathbb{E} [ \| G^t - \widehat{G}^t \|_F^2 ]$ can be bounded with a similar set of coupled recursions. Together, let us define the Lyapunov function: \begin{align} & {\tt V}^{t} = \mathbb{E} \left[ L^2 \norm{\Theta_o^{t}}_F^2 + n \norm{ \bar{v}^{t} - \overline{\nabla F}^{t} }^2 + \frac{1}{n} \norm{V^{t} - \nabla F^{t} }_F^2 \right] \nonumber \\ & + \mathbb{E} \left[ a \norm{G_o^{t}}_F^2 + b \norm{ G^{t} - \widehat{G}^{t} }_F^2 + c \norm{ \Theta^{t} - \eta G^{t} - \widehat{\Theta}^{t} }_F^2 \right] \nonumber \end{align} where $a,b,c>0$ are determined below. We observe \begin{lemma} \label{lem:wholesys} Under \Cref{ass:lips}, \ref{ass:mix}, \ref{ass:stoc}, \ref{ass:compress} and let $\beta \in (0,1)$. Suppose that the step sizes satisfy: \begin{equation} \label{eq:stepsize_whole} \begin{split} \gamma & \leq \min \left\{ \frac{1}{4 \rho}, \frac{\rho n}{64 \bar{\omega}^2}, \frac{ \delta}{10 \bar{\omega}}, \frac{\delta \rho \sqrt{1-\gamma} }{259 \bar{\omega}^2} \right\} =: \gamma_\infty , \\ \eta & \leq \frac{\gamma}{L} \min \bigg\{ \sqrt{\frac{1-\beta}{\beta n}}\frac{\sqrt{ \gamma \rho^3} }{45} , \frac{\rho^2}{240 \bar{\omega} } \bigg\} =: \eta_\infty . \end{split} \end{equation} Then, for any $t \geq 0$, it holds \begin{align} \label{eq:erbound} \hspace{-.1cm} {\tt V}^{t+1} \leq ( 1- \overline{\beta} ) {\tt V}^t + \beta^2 \mathbb{C}_\sigma \sigma^2 + \eta^2 \mathbb{C}_{\avgg} \mathbb{E} \left[ \norm{\bar{g}^t}^2 \right], \end{align} where we have set $a = \frac{96L^2}{\rho^2\gamma^2}\eta^2 , b = \frac{\eta^2}{\gamma(1-\gamma)}\frac{3072\bar{\omega}^2 L^2}{\delta\rho^3} , c = \frac{\gamma}{1-\gamma}\frac{48 L^2\bar{\omega}^2}{\delta\rho}$ in the definition of ${\tt V}^t$, and $\mathbb{C}_\sigma, \mathbb{C}_{\avgg}, \overline{\beta}$ were defined in \Cref{th:main}. \end{lemma} The proof is relegated to \Cref{app:wholesys} where we demonstrate how to derive a set of tight parameters for $a,b,c$. \ificmlver \begin{figure*}[t] \vspace{-.5cm} \centering \hspace{-.4cm} \includegraphics[width=0.5\textwidth]{figures/syn_loss_net.pdf} \includegraphics[width=0.5\textwidth]{figures/syn_cons_net.pdf}\vspace{-.4cm}\\ \hspace{-.4cm} \includegraphics[width=0.5\textwidth]{figures/syn_loss_it.pdf} \includegraphics[width=0.5\textwidth]{figures/syn_cons_it.pdf}\vspace{-.4cm} \caption{\textbf{Experiments on Synthetic Data with Linear Model.} Worst-agent's loss value and consensus gap against the 32-bit floats transmitted (top) and iteration no.~(bottom).
}\vspace{-.2cm} \label{fig:syn} \end{figure*} \fi Equipped with \eqref{eq:erbound}, we define $\Delta^t := \mathbb{E} [ f( \bar{\prm}^t )] - f^\star$. From \Cref{lem:f_1stepnew}, we can deduce that \begin{align} & \Delta^{t+1} + \frac{2\eta}{n \overline{\beta}} {\tt V}^{t+1} \leq \Delta^{t} + \frac{2\eta}{n \overline{\beta}} {\tt V}^{t} + \frac{2 \eta}{n \overline{\beta}} \beta^2 \mathbb{C}_\sigma \sigma^2 \label{eq:sumup} \\ & \textstyle \qquad \qquad - \eta \, \mathbb{E} \left[ \frac{1}{2} \norm{\nabla f(\bar{\prm}^t)}^2 + \frac{L^2}{n} \norm{\Theta_o^t}_F^2 \right] \nonumber \\ &\qquad \qquad + \left( ({ n \overline{\beta} })^{-1} {2 \eta^3} \mathbb{C}_{\avgg} - 4^{-1} {\eta} \right) \mathbb{E} \left[ \norm{\bar{g}^t}^2 \right] \nonumber \end{align} Setting $\eta \leq \sqrt{ \frac{\overline{\beta} n}{ 8 \mathbb{C}_{\avgg} } }$ as in \eqref{eq:docom_stepsize} shows that the last term on the r.h.s.~of the above can be upper bounded by zero. Summing up both sides of \eqref{eq:sumup} from $t=0$ to $t=T-1$ yields \begin{align} & \textstyle \eta \sum_{t=0}^{T-1} \mathbb{E} \left[ \frac{1}{2} \norm{\nabla f(\bar{\prm}^t)}^2 + \frac{L^2}{n} \norm{\Theta_o^t}_F^2 \right] \nonumber \\ & \leq \Delta^0 + \frac{2\eta}{n \overline{\beta}} {\tt V}^{0} + \frac{2 \eta T}{n \overline{\beta}} \beta^2 \mathbb{C}_\sigma \sigma^2 \label{eq:fin_main_bd} \end{align} Furthermore, with the initialization, the choice of $a,b,c$, and the step size $\gamma \leq \gamma_\infty$, it can be shown that \begin{align} \label{eq:er0} {\tt V}^0 \leq \frac{2 \sigma^2}{b_0} + \frac{192 L^2 n}{ \rho^2 \gamma^2 (1-\gamma)} \overline{G}_0 \eta^2. \end{align} Dividing both sides of the inequality \eqref{eq:fin_main_bd} by $\eta T$ and observing that $\norm{\Theta_o^t}_F^2 \geq \norm{ ( {\bf I} - (1/n) {\bf 1} {\bf 1}^\top ) \Theta^t }_F^2$ concludes the proof of \Cref{th:main}. \ificmlver \begin{figure*}[t] \vspace{-.5cm} \centering \hspace{-.4cm} \includegraphics[width=0.5\textwidth]{figures/dl_loss_net.pdf} \includegraphics[width=0.5\textwidth]{figures/dl_cons_net.pdf}\vspace{-.2cm} \caption{\textbf{Experiments on FEMNIST Data with LeNet-5.} Worst-agent's loss value and consensus gap against the communication cost, i.e., number of 32-bit floats transmitted. Notice the log-scale in the x-axis.}\vspace{-.2cm} \label{fig:real_main} \end{figure*} \fi \paragraph{Proof of \Cref{cor:pl}.} Applying the PL condition of \Cref{ass:pl} to the inequality \eqref{eq:f_1stepnew} shows that \begin{align} \Delta^{t+1} & \textstyle \leq (1 - \eta \mu) \Delta^t + \eta \mathbb{E} \left[ \frac{L^2}{n} \norm{\Theta_o^t}_F^2 + \norm{ \bar{v}^t - \overline{\nabla F}^t}_F^2 \right] \nonumber\\ & \textstyle \quad - \frac{\eta}{4} \mathbb{E} \left[ \norm{\bar{g}^t}^2 \right]. \end{align} Combining with \Cref{lem:wholesys} shows that \begin{align*} \textstyle \Delta^{t+1} + \frac{2 \eta}{\overline{\beta} n} {\tt V}^{t+1} & \textstyle \leq \left( 1 - \widetilde{\beta} \right) \left[ \Delta^t + \frac{2 \eta}{\overline{\beta} n} {\tt V}^t \right] + \frac{\eta \beta^2}{\overline{\beta} n} 2 \mathbb{C}_\sigma \sigma^2 \\ & \textstyle \quad + \left( \frac{\eta^3}{\overline{\beta} n} 2\mathbb{C}_{\avgg} - \frac{\eta}{4} \right) \mathbb{E} \left[ \norm{\bar{g}^t}^2 \right], \end{align*} where we have used $1 - \overline{\beta} + \frac{\overline{\beta} n}{2 \eta} \frac{\eta}{n} \leq 1 - \widetilde{\beta}$.
Setting $\eta^2 \leq \frac{\overline{\beta} n}{4 \mathbb{C}_{\avgg}}$ and telescoping the above relation concludes the proof.\vspace{-.2cm} \section{Numerical Experiments}\vspace{-.1cm} \textbf{Setup.} In all experiments, we compare {\sf DoCoM-SGT}~to decentralized stochastic gradient algorithms including {\sf GNSD}~\citep{lu2019gnsd}, {\sf DeTAG}~\citep{pmlr-v139-lu21a}, {\sf GT-HSGD}~\citep{xin2021hybrid}, and compressed algorithms including {\sf CHOCO-SGD}~\citep{koloskova2019decentralized}. Our experiments are performed by running the optimization algorithms (i.e., training) on a 40-core Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz server with MPI-enabled PyTorch and evaluating the performance of trained models on a Tesla K80 GPU server. To simulate the scenario of heterogeneous data distribution, each agent has a disjoint set of training samples, while we evaluate each model by its performance on all of the training/testing data in the network. \textbf{Hyperparameter Tuning.} For all algorithms we choose the learning rate $\eta$ from $\{0.1, 0.01, 0.001\}$, and fix the regularization parameter as $\lambda = 10^{-4}$ [cf.~\eqref{eq:syn_obj}]. For compressed algorithms, we adopt the top-$k$ compressor and we select the consensus step size $\gamma$ starting from the compression ratio $k / d$, incrementing in steps of $0.01$ until divergence. For {\sf DeTAG}, we adopt the parameters from \citep{pmlr-v139-lu21a}. For {\sf DoCoM-SGT}~and {\sf GT-HSGD}, we choose the best momentum parameter $\beta$ in $\{0.0001, 0.001, 0.01, 0.1, 0.5, 0.9\}$ and fix the initial batch number as $b_{0,i} = m_i$. We choose the batch sizes such that all algorithms spend the same amount of computation on stochastic gradients per iteration. The tuned parameters and additional results can be found in \Cref{app:more_plots}. \ificmlver \else \begin{figure*}[t] \vspace{-.5cm} \centering \hspace{-.4cm} \includegraphics[width=0.5\textwidth]{figures/syn_loss_net.pdf} \includegraphics[width=0.5\textwidth]{figures/syn_cons_net.pdf}\vspace{-.4cm}\\ \hspace{-.4cm} \includegraphics[width=0.5\textwidth]{figures/syn_loss_it.pdf} \includegraphics[width=0.5\textwidth]{figures/syn_cons_it.pdf}\vspace{-.4cm} \caption{\textbf{Experiments on Synthetic Data with Linear Model.} Worst-agent's loss value and consensus gap against the 32-bit floats transmitted (top) and iteration no.~(bottom). }\vspace{-.2cm} \label{fig:syn} \end{figure*} \fi \textbf{Synthetic Data with Linear Model.} We consider a set of synthetic data generated with the {\tt leaf} benchmarking framework \citep{caldas2019leaf}. The task is to train a linear classifier for a set of $1000$-dimensional features with $m=1443$ samples partitioned into $n=25$ non-i.i.d.~portions, each held by an agent that is connected to the others on a ring graph with uniform edge weights. Each feature vector is labeled with one of 5 classes. Altogether, the local dataset for the $i$th agent is given by $\{ x_j^i , \{ \ell_{j,k}^i \}_{k=1}^5 \}_{j=1}^{m_i}$, where $m = \sum_{i=1}^{25} m_i$, $x_j^i \in \mathbb{R}^{1000}$ denotes the $j$th feature, and $\{ \ell_{j,k}^i \}_{k=1}^5 \in \{0,1\}^5$ is the label such that $\ell_{j,k}^i = 1$ if the $j$th feature has label $k \in \{1,...,5\}$.
To train a linear classifier $\theta = ( \theta_1, \ldots, \theta_5 ) \in \mathbb{R}^{5000}$ in a decentralized manner, we consider \eqref{eq:opt} with the following non-convex objective function that models a modified logistic regression problem with sigmoid loss and $\ell_2$ regularization:\vspace{-.15cm} \begin{equation} \label{eq:syn_obj} f_i(\theta) = \frac{1}{m_i} \sum_{j=1}^{m_i} \sum_{k=1}^5 \phi \big( \ell_{j,k}^i \dotp{x_j^i}{\theta_k} \big) + \frac{\lambda}{2}\norm{\theta}_2^2, \vspace{-.15cm} \end{equation} where $\phi(z) = (1 + e^{-z})^{-1}$ and $\lambda = 10^{-4}$ is the regularization parameter. Notice that $f_i(\theta)$ is a non-convex function in $\theta$, and we estimate its gradient stochastically by sampling a mini-batch of data. Our numerical experiment results are presented in Fig.~\ref{fig:syn} as we compare the worst agent's loss values $\max_i f( \theta_i^t)$ and consensus gap $\max_{i} \norm{\theta_i^t - \bar{\prm}^t}^2$ against the communication cost and iteration number. With the same number of iterations, {\sf DoCoM-SGT}~converges slightly more slowly than other algorithms such as {\sf GT-HSGD}, {\sf DeTAG} in terms of the loss value, yet {\sf DoCoM-SGT}~achieves the fastest convergence in terms of the communication cost (number of floats transmitted). This is a main advantage since {\sf DoCoM-SGT}~incorporates compression for every message exchanged. Furthermore, we observe that {\sf DoCoM-SGT}~outperforms {\sf CHOCO-SGD} significantly in this experiment due to the use of gradient tracking and momentum. Lastly, {\sf DoCoM-SGT}~finds a solution with the lowest consensus gap (35 times lower than that of {\sf CHOCO-SGD}) given the same communication budget. \vspace{-.1cm} \ificmlver \else \begin{figure*}[t] \vspace{-.5cm} \centering \hspace{-.4cm} \includegraphics[width=0.5\textwidth]{figures/dl_loss_net.pdf} \includegraphics[width=0.5\textwidth]{figures/dl_cons_net.pdf}\vspace{-.2cm} \caption{\textbf{Experiments on FEMNIST Data with LeNet-5.} Worst-agent's loss value and consensus gap against the communication cost, i.e., number of 32-bit floats transmitted. Notice the log-scale in the x-axis.}\vspace{-.2cm} \label{fig:real_main} \end{figure*} \fi \textbf{FEMNIST Data with LeNet-5.} We consider training a LeNet-5 (with $d=60850$ parameters) neural network on the FEMNIST dataset. The dataset contains $m=805263$ samples of $28 \times 28$ hand-written character images, each belonging to one of 62 classes. The samples are partitioned into $n=36$ agents according to the groups specified in \citep{caldas2019leaf}. These agents are arranged according to a ring topology with uniform edge weights. We tackle \eqref{eq:opt} with $f_i(\theta)$ taken as the cross entropy loss function of the local dataset, and an $\ell_2$ regularization is applied with parameter $\lambda = 10^{-4}$. We use a decreasing learning rate of $\eta^0, \eta^0/10, \eta^0 / 100$ during the 0--15th, 16--30th, and 30--50th epochs of training; see \Cref{app:hypprm_lenet}.\vspace{-.1cm} Fig.~\ref{fig:real_main} compares the worst-agent's loss function, $\max_i f( \theta_i^t )$, against the communication cost and iteration number. We observe that the communication efficiency gap between the compressed and uncompressed algorithms has widened, in which {\sf DoCoM-SGT}~and {\sf CHOCO-SGD} can achieve the same level of loss values with 10-20x less communication cost.
Moreover, {\sf DoCoM-SGT}~has similar performance as {\sf CHOCO-SGD} in terms of the training loss, and it yields a better consensus gap than {\sf CHOCO-SGD} with the same communication budget. Notice that in this experiment, we selected a compression ratio $k/d$ of $0.05$ and $0.1$ for {\sf DoCoM-SGT}~and {\sf CHOCO-SGD}, respectively.\vspace{-.1cm} \textbf{Conclusions.} We have proposed the {\sf DoCoM-SGT}~algorithm for communication efficient decentralized learning and shown that the algorithm achieves a state-of-the-art ${\cal O}(\epsilon^{-3})$ iteration complexity. Future work includes investigating the effect of reducing the frequency of (compressed) communication, as done in (near-)optimal algorithms such as \cite{sun2020improving, pmlr-v139-lu21a}. \newpage \bibliographystyle{icml2022}
{ "timestamp": "2022-02-02T02:12:32", "yymm": "2202", "arxiv_id": "2202.00255", "language": "en", "url": "https://arxiv.org/abs/2202.00255" }
\section{Introduction} After several decades of continuous debate, the activated dynamics of thin glass-forming films remains poorly understood due to the complexity of confinement effects and the diversity of applications \cite{3,4,8,14,20,21,22,23,24,17,15}. A vapor interface universally accelerates relaxation, which nucleates at the free surface and propagates towards the interior. For solid interfaces, the spatially inhomogeneous dynamics becomes more complex and less universal because surface roughness, film-substrate adsorption/attractions, and the mechanical properties of the substrate can all significantly affect it. The presence of solid surfaces can speed up, slow down, or leave the bulk dynamics unmodified. These variations of the dynamics depend strongly on specific design details. Thus, it is essential to construct a predictive theoretical approach to understand how the mobility gradient that emerges at the interface and propagates into the film depends on temperature and film thickness. For many years, Mirigian, Phan, and Schweizer have formulated different versions of the Elastically Collective Nonlinear Langevin Equation (ECNLE) theory \cite{2,7,10,6,35,42,11,61,62,44,45} to study, quantitatively and qualitatively, the glassy dynamics of bulk amorphous materials including polymers, drugs, thermal liquids, and metallic glasses at ambient and elevated pressures. An activated event in the ECNLE theory is governed by local motions within cages and by the long-range collective molecular dynamics surrounding the cage. Effects of the cage-scale dynamics on the relaxation process dominate those of the collective dynamics at high temperatures, but this trend is reversed at low temperatures. The temperature dependence of the structural relaxation time, ranging from 1 ps to $10^3$ s, and the dynamic fragility are calculated. The predictions enable quantitative and qualitative understanding of physical phenomena in simulations (timescales $\leq 10^6$ ps) and experimental observations (timescales $\sim 1-100$ s). We extended the ECNLE theory to predict the spatial gradient of the alpha relaxation time and the glass transition temperature in polymer film systems having vapor and solid surfaces \cite{3,4,8,24}. Our new treatment describes the spatial propagation of caging constraints from the surface into the film interior. The dynamical caging constraints can be significantly weakened, slightly softened, and hardened at a vapor, smooth solid, and rough solid interface, respectively. From these, we determine how mobility changes and explain why the relaxation time gradient for free-standing and supported polymer films obeys the "double-exponential" form suggested by previous simulations \cite{17,15}. While finite-size effects on the inhomogeneous glassy dynamics in free-standing polymer films have been recently investigated \cite{24}, the problem for capped films has not been addressed yet, particularly when the material is a metallic glass. It is still unclear how two solid interfaces cause nonadditive interference effects that modify mobility gradients and invalidate the "double-exponential" form, how strong the interfacial coupling is in thinner films, and how well a superposition ansatz \cite{17} models the normalized alpha relaxation time gradient of metallic glass films at different thicknesses. In this paper, we develop the ECNLE theory to determine, for the first time, the glassy behavior of metallic glass thin films capped by two solid surfaces.
The hard interfaces interact with the film via repulsive forces and can thus be treated as neutral rough surfaces. We construct a unified physical picture of how hard-interface-modified caging constraints shape the gradients of mobility, glass transition temperature, and dynamic fragility, and of how interfacial effects from the two surfaces become dynamically coupled as the film thickness varies. Analytical expressions are derived to explain the numerical results, and we compare our findings with prior simulations and experiments. \section{Theoretical background} According to the ECNLE theory \cite{2,7,10,6,35,42,11,61,62,44,45}, glass-forming liquids are modeled as a hard-sphere fluid. The dynamic free energy of a tagged particle in a bulk fluid is \begin{eqnarray} \frac{F_{dyn}^{bulk}(r)}{k_BT} &=&-3\ln\frac{r}{d} \nonumber\\ &-&\int_{0}^{\infty} dq\frac{ q^2d^3 \left[S(q)-1\right]^2}{12\pi\Phi\left[1+S(q)\right]}\exp\left[-\frac{q^2r^2(S(q)+1)}{6S(q)}\right]\nonumber\\ &=& F_{ideal}(r) + F_{caging}^{bulk}(r), \label{eq:1} \end{eqnarray} where $\Phi$ is the volume fraction, $k_B$ is the Boltzmann constant, $T$ is the temperature, $d$ is the particle diameter, $r$ is the displacement from the initial position, and $S(q)$ is the static structure factor of the hard-sphere fluid calculated using the Percus-Yevick theory \cite{1}. When investigating mobility gradients of thick films with a single interface \cite{3,4,8}, the distance ($z$) dependence of the dynamic free energy, $F_{dyn}(r,z)$, is constructed using several assumptions. First, interfacial effects on the packing structure (both the pair correlation function and the density profile) are ignored. This assumption is consistent with the definition of neutral confinement adopted in simulation studies \cite{3,17,15,25,26}; it removes structural changes as a complicating factor in understanding dynamical gradients, so that we can focus on a purely dynamic scenario with no perturbation of the thermodynamics or structure of the film. Second, a particle at the interface loses liquid nearest neighbors in its cage, and the interface is taken to be sharp. The missing particles are theoretically replaced by pinned hard spheres of the same size and density as those in the film \cite{12}, as illustrated in Fig. \ref{fig:0}. In physics-oriented simulations \cite{17,15,25,26} this construction is known as a "micro"-roughness model: one simulates a one-component liquid, inserts a dividing plane, and pins the particles below it in their equilibrium configurations to create a pinned-particle rough interface. It is "micro-rough" because the surface is composed of precisely the same particles (with the same pair structure) as the mobile particles of the liquid. The pinned particles solidify the caging constraints in the surface layer, and the effects of the surface caging force are spatially transferred into the film; the resulting dynamical gradient is illustrated by the particle colors in Fig. \ref{fig:0}. Here, we use the ECNLE theory for bulk fluids with pinned particles to calculate the surface caging potential under neutral confinement conditions \cite{12}. Third, within the film, the caging constraints acting on particles in layer $i$ are affected by the constraints in layer $i-1$, where the distance from the interface is $z=(i-1)d$. 
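To make the bulk input of Eq. (\ref{eq:1}) concrete, the following Python sketch (our illustration, not part of the original formulation; the volume fraction $\Phi=0.60$ and all grid sizes are arbitrary choices) builds $S(q)$ by numerically Fourier transforming the analytic Percus-Yevick direct correlation function and then locates the localization length and barrier of $F_{dyn}^{bulk}(r)$.
\begin{verbatim}
import numpy as np

d, phi = 1.0, 0.60                   # hard-sphere diameter and volume fraction
rho = 6.0 * phi / (np.pi * d**3)     # number density

# Percus-Yevick direct correlation function c(r) (Wertheim's analytic solution)
lam1 = (1 + 2*phi)**2 / (1 - phi)**4
lam2 = -(1 + 0.5*phi)**2 / (1 - phi)**4
r_in = np.linspace(1e-6, d, 2000)    # c(r) = 0 for r > d in PY theory
c_r = -(lam1 + 6*phi*lam2*(r_in/d) + 0.5*phi*lam1*(r_in/d)**3)

# Numerical 3D Fourier transform -> C(q), then S(q) = 1/(1 - rho*C(q))
q = np.linspace(0.2, 40.0, 3000) / d
C_q = 4*np.pi/q * np.trapz(c_r * r_in * np.sin(np.outer(q, r_in)), r_in, axis=1)
S_q = 1.0 / (1.0 - rho * C_q)

def F_dyn_bulk(r_disp):
    """Eq. (1): F_ideal + F_caging^bulk at displacement r_disp, in k_B*T."""
    integrand = (q**2 * d**3 * (S_q - 1)**2 / (12*np.pi*phi*(1 + S_q))
                 * np.exp(-q**2 * r_disp**2 * (S_q + 1) / (6*S_q)))
    return -3*np.log(r_disp/d) - np.trapz(integrand, q)

r_grid = np.linspace(0.02, 0.6, 300) * d
F = np.array([F_dyn_bulk(r) for r in r_grid])
iL = F.argmin(); iB = iL + F[iL:].argmax()   # localization well and barrier top
print(f"r_L ~ {r_grid[iL]:.3f} d, barrier F_B ~ {F[iB] - F[iL]:.2f} k_B T")
\end{verbatim}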
This film-extended ECNLE theory has provided good quantitative descriptions of simulations and experiments on thick polymer films with a vapor or rough solid interface \cite{3,4,8}. The dynamic free energy in the $n^{\rm th}$ layer of a thick film is \begin{eqnarray} F_{dyn}^{(n)}(r) &=& F_{ideal}(r) + \left(1-\frac{1}{2^n}\right)F_{caging}^{bulk}(r) + \frac{F_{caging}^{surface}(r)}{2^n}, \nonumber\\ &=& F_{dyn}^{bulk}(r) + \frac{F_{caging}^{surface}(r)-F_{caging}^{bulk}(r)}{2^n},\nonumber\\ &=& F_{dyn}^{bulk}(r) + \frac{1}{2}\frac{\Delta F_{caging}(r)}{2^{z/d}}, \label{eq:2} \end{eqnarray} where $\Delta F_{caging}(r)=F_{caging}^{surface}(r)-F_{caging}^{bulk}(r)$ is the difference between the solid-surface-induced caging constraint and its bulk counterpart at the interface, and \begin{widetext} \begin{eqnarray} F_{caging}^{surface}(r) &=& -2\int\frac{d\mathbf{q}}{(2\pi)^3}\left[ \frac{C(q)S_{12}(q)e^{-q^2r^2/6}}{\rho(1-\alpha)\left[1- \rho(1-\alpha)C(q)\right]}+ \frac{\rho(1-\alpha)C(q)^2e^{-q^2r^2\left[2-\rho(1-\alpha)C(q) \right]/6}}{\left[1- \rho(1-\alpha)C(q)\right]\left[2- \rho(1-\alpha)C(q)\right]} \right] \nonumber\\ &-& F_{caging}^{bulk}(r), \label{eq:3} \end{eqnarray} \end{widetext} where $\alpha$ is the fraction of pinned particles in the cage and $S_{12}(q)$ is the collective static structure factor between pinned and mobile particles. The analytical expression for $S_{12}(q)$ can be found in Refs. \cite{3,4,12}. Since particles at the interface lose one half of their nearest neighbors, $\alpha = 0.5$. By the same analysis, when a film capped by two solid surfaces has a finite thickness $H$, the dynamic free energy becomes \begin{eqnarray} F_{dyn}(r,z,H) &=& \frac{\Delta F_{caging}(r)}{2}\left(\frac{1}{2^{z/d}}+\frac{1}{2^{(H-z)/d}}\right)\nonumber\\ &+& F_{dyn}^{bulk}(r). \label{eq:4} \end{eqnarray} This dynamic free energy yields the local barrier, $F_B(z,H)= F_{dyn}(r_B,z,H)-F_{dyn}(r_L,z,H)$, the localization length $r_L(z,H)$ (the position of the minimum of $F_{dyn}(r,z,H)$), and the barrier position $r_B(z,H)$ (the position of the local maximum of $F_{dyn}(r,z,H)$). For free-standing polymer films, $F_{caging}^{surface}(r)=0$ since there are no particles in the vapor layers, and Eq. (\ref{eq:4}) reduces to the formula of Ref. \cite{24}, which is \begin{eqnarray} F_{dyn}(r,z,H) &=& \frac{ F_{caging}^{bulk}(r)}{2}\left(2-\frac{1}{2^{z/d}}-\frac{1}{2^{(H-z)/d}}\right)\nonumber\\ &+& F_{ideal}(r). \label{eq:4-1} \end{eqnarray} \begin{figure}[htp] \includegraphics[width=8.5cm]{Figure0_illustration.pdf} \caption{(Color online) Schematic illustration of a capped metallic glass film indicating the coordinate $z$ and the film thickness $H$. The particle colors illustrate the mobility gradient of a thin film with neutral solid substrates.} \label{fig:0} \end{figure} In previous works \cite{45,9}, the collective elastic barrier, which characterizes the effects of cooperative motions on the glass transition, was found to be almost zero in bulk metallic glasses. We suppose this conclusion remains unchanged in metallic glass films. 
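Before turning to the relaxation time, the strength of the two-surface coupling in Eq. (\ref{eq:4}) can be read off directly from its $z$- and $H$-dependent prefactor; a minimal sketch (our illustration):
\begin{verbatim}
def caging_amplitude(z, H, d=1.0):
    """Prefactor multiplying Delta F_caging(r)/2 in Eq. (4): caging
    perturbations transferred from the solid surfaces at z = 0 and z = H."""
    return 2.0**(-z/d) + 2.0**(-(H - z)/d)

# Interference at the film center: strong for H = 4d, negligible for H = 50d
for H in (4.0, 10.0, 50.0):
    print(f"H = {H:4.1f} d : midfilm amplitude = {caging_amplitude(H/2, H):.2e}")
\end{verbatim}
For $H=4d$ the two exponential tails overlap appreciably at the film center ($2\times 2^{-2}=0.5$), whereas for $H=50d$ the midfilm amplitude is of order $10^{-8}$; these two limits correspond to the interference-dominated and effectively one-interface regimes analyzed below.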
With the collective elastic barrier neglected, the structural relaxation time is calculated by \begin{eqnarray} \frac{\tau_\alpha}{\tau_s} = 1+ \frac{2\pi}{\sqrt{K_0(z,H)K_B(z,H)}}\frac{k_BT}{d^2}\exp\left(\frac{F_B(z,H)}{k_BT} \right), \label{eq:5} \end{eqnarray} where $K_0(z,H)$ and $K_B(z,H)$ are the absolute curvatures of $F_{dyn}(r,z,H)$ at the localization length and the barrier position, respectively, and $\tau_s$ is the short relaxation time reported analytically elsewhere \cite{2,7,10,6,35,42,11,61,62,44,45}. The above calculations give us $\tau_\alpha(\Phi, z, H)$. To determine the temperature dependence of $\tau_\alpha$, we use a density-to-temperature conversion or thermal mapping \cite{45,9}, $T=T_g+(\Phi_g-\Phi)/(\beta\Phi_0)$, where $T_g$ is the glass transition temperature, $\Phi_g$ is the volume fraction corresponding to the vitrification timescale criterion defining $T_g$ ($\tau_\alpha(\Phi_g)=\tau_\alpha(T_g)$), $\Phi_0=0.5$ is a characteristic volume fraction, and $\beta=12\times10^{-4}$ $K^{-1}$ is the effective thermal expansion coefficient. It is important to note that this is a minimalist approach for predicting the temperature dependence of the structural relaxation time and the gradients of the glass transition temperature in finite-size films: we employ the hard-sphere fluid to describe the dynamics of glass formers, information about the atomic interactions is encoded in the thermal mapping through the thermal expansion, and no adjustable parameters are needed in our calculations. One could account for the effects of intermolecular forces on the structure and dynamics of amorphous materials by using the standard reference interaction site model (RISM) \cite{1} or molecular dynamics simulations to obtain $S(q)$ and $g(r)$ for Eqs. (\ref{eq:1})-(\ref{eq:4-1}) and then calculate $\tau_\alpha(T,z)$. However, this requires knowing the mathematical form and parameters of the intermolecular potential; otherwise, the ECNLE calculations acquire several control parameters and the thermal mapping is also altered, making the problem very complicated. \section{Results and Discussion} Figure \ref{fig:1} shows the logarithm of the theoretical and experimental structural relaxation times of bulk \ce{Pd_{40}Ni_{40}P_{20}} as functions of temperature. The experimental value of $T_g$ is 582 K, defined by $\tau_{\alpha,bulk}=100$ s, while the theory gives $\tau_{\alpha,bulk}=100$ s at $\Phi_g = 0.6585$. This set of parameters is used for our thermal mapping. A good quantitative agreement between theory and experiment is observed. Our ECNLE theory also describes well the $\tau_\alpha(T)$ of simulations: as shown in Figure S1 in the Supporting Information, the theoretical alpha relaxation times of bulk \ce{Cu_{50}Zr_{50}} and \ce{Cu_{46}Zr_{46}Al_{8}} are close to their simulation counterparts. Based on $\tau_{\alpha,bulk}(T)$, the variation of the theoretical bulk $T_g$ with the vitrification criterion is calculated and shown in the inset of Fig. \ref{fig:1}; in simulations, the glass transition occurs when $\tau_\alpha= 10^3-10^6$ ps. \begin{figure}[htp] \includegraphics[width=8.5cm]{Figure1_metallicglass.pdf} \caption{(Color online) The temperature dependence of the structural relaxation time of \ce{Pd_{40}Ni_{40}P_{20}}. Open points and the solid curve correspond to experimental data \cite{19} and ECNLE calculations, respectively. The inset shows the glass transition temperature as a function of the vitrification time scale.} \label{fig:1} \end{figure} 
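For readers wishing to reproduce the mapping from barriers to laboratory temperatures, the chain from Eq. (\ref{eq:5}) to $\tau_\alpha(T)$ can be sketched as follows (our illustration; $F_B$, $K_0$, $K_B$ and $\tau_s$ are assumed to come from the dynamic-free-energy calculation, and the thermal-mapping constants are those quoted above for \ce{Pd_{40}Ni_{40}P_{20}}):
\begin{verbatim}
import numpy as np

def tau_alpha(F_B, K0, KB, tau_s, kT=1.0, d=1.0):
    """Eq. (5): alpha relaxation time (same units as tau_s); K0 and KB are
    the absolute curvatures of F_dyn at the localization and barrier points."""
    return tau_s * (1 + 2*np.pi/np.sqrt(K0*KB) * kT/d**2 * np.exp(F_B/kT))

def phi_to_T(phi, Tg=582.0, phi_g=0.6585, phi0=0.5, beta=12e-4):
    """Thermal mapping T = Tg + (phi_g - phi)/(beta*phi0), with the
    Pd40Ni40P20 parameters quoted in the text (temperatures in kelvin)."""
    return Tg + (phi_g - phi) / (beta * phi0)

print(phi_to_T(0.6585))  # recovers Tg = 582 K at the vitrification criterion
\end{verbatim}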
Figure \ref{fig:2} shows the normalized mobility gradient of \ce{Pd_{40}Ni_{40}P_{20}} at different film thicknesses and temperatures. The dynamics is slowed down near the solid surfaces and speeds up towards the film interior. One can see in Fig. \ref{fig:2}a that $\ln(\ln(\tau_\alpha(z)/\tau_{\alpha,bulk}))$ varies linearly with $z/d$. This form is known as the double-exponential behavior and was found in a prior simulation of metallic glass films \cite{14}. For sufficiently thick films, the dynamic free energy in Eq. (\ref{eq:4}) has a decay length $\xi=d/\ln(2)\approx 1.44d$ that is independent of density and temperature. As a consequence, $\tau_\alpha(z)/\tau_{\alpha,bulk}$ decays in a weakly temperature-dependent manner, which disagrees with Ref. \cite{14}. The disagreement suggests that the effects of atomic cooperativity or collective motion on the glassy dynamics of metallic glasses, while very small, cannot be ignored entirely; they still play a role in metallic glasses. Comparisons between ECNLE calculations and simulations for \ce{Cu_{50}Zr_{50}} films in Ref. \cite{14} are shown in Figure S2 in the Supporting Information. \begin{figure*}[htp] \includegraphics[width=18cm]{Figure2_metallicglass.pdf} \caption{(Color online) (a) Natural logarithm of the normalized local structural relaxation time in a \ce{Pd_{40}Ni_{40}P_{20}} thin film with $H = 50d$ at different temperatures. $T_{g,bulk}$ is defined by $\tau_\alpha(T_{g,bulk}) = 100$ s. Natural logarithm of the normalized mobility gradient at $T = 651$ K and different film thicknesses, (b) with and (c) without normalization by $H$.} \label{fig:2} \end{figure*} The gradient of the normalized local relaxation time in a very thin film ($H = 4d$) is flattened by the strong interfacial interference between the two solid surfaces, as shown in Figs. \ref{fig:2}b and c. The double-exponential behavior clearly survives near the solid surface, but the gradient is roughly flat in the film center; that is, $\ln(\tau_\alpha(z,H)/\tau_{\alpha,bulk})\approx$ constant at a given thickness as $z\rightarrow H/2$. This result agrees completely with the simulations of Ref. \cite{14}. The flattening of dynamical gradients in the midfilm region has also been predicted by both simulation and theory for free-standing polymer films \cite{24}. In thicker films, the interference becomes smaller and the mobility gradient behaves approximately as in a one-interface system: the difference between the dynamics near the solid surface and in the film interior is large enough to remove the flattening. In metallic glasses, the density is very high and the atomic structure is tightly packed. We can therefore use the ultra-local-limit analysis \cite{12}, which assumes that large wavevectors dominate the calculation of $F_{dyn}(r,z,H)$, to obtain an analytical expression for $r_L(z,H)$. In this limit, $S(q)\approx 1$ and $C(q) = -4\pi d^3 g(d)\cfrac{\cos(qd)}{(qd)^2}$, with $g(d)$ the contact value of the pair correlation function. Thus, we have \begin{eqnarray} \frac{r_L(z,H)}{r_{L,bulk}} = \frac{1}{1+\cfrac{\sqrt{2}-1}{2}\left(\cfrac{1}{2^{z/d}}+\cfrac{1}{2^{(H-z)/d}}\right)}. \label{eq:6} \end{eqnarray} Then, the dynamic shear modulus is \cite{12} \begin{eqnarray} G(z,H) &=& \frac{9\Phi k_BT}{5\pi r_L(z,H)^2d}=G_{bulk}\left(\frac{r_{L,bulk}}{r_L(z,H)} \right)^2 \nonumber\\ &=& G_{bulk}\left[1+\frac{\sqrt{2}-1}{2}\left(\cfrac{1}{2^{z/d}}+\cfrac{1}{2^{(H-z)/d}}\right)\right]^2, \label{eq:7} \end{eqnarray} where $G_{bulk}$ is the bulk dynamic shear modulus. 
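The ultra-local expressions (\ref{eq:6}) and (\ref{eq:7}) are simple enough to tabulate directly; a short sketch (our illustration, using $H=30d$ as an example):
\begin{verbatim}
import numpy as np

def rL_ratio(z, H, d=1.0):
    """Eq. (6): localization length relative to bulk, ultra-local limit."""
    s = (np.sqrt(2) - 1)/2 * (2.0**(-z/d) + 2.0**(-(H - z)/d))
    return 1.0 / (1.0 + s)

def G_ratio(z, H, d=1.0):
    """Eq. (7): dynamic shear modulus relative to bulk; G scales as 1/r_L^2."""
    return rL_ratio(z, H, d)**(-2)

z = np.array([0.0, 1.0, 2.0, 5.0, 15.0])  # distances from one surface, H = 30d
print(np.round(G_ratio(z, H=30.0), 3))    # interfacial stiffening decays inward
\end{verbatim}
At the surface of a thick film the stiffening saturates at $[1+(\sqrt{2}-1)/2]^2\approx 1.46$ and decays towards unity over a few particle diameters.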
The shear modulus can be used in the shoving model to provide another analytical interpretation of the double-exponential behavior of the mobility gradient. The alpha relaxation time is now given by \begin{eqnarray} \ln\left(\tau_\alpha(z,H)\right) = \ln\tau_{c} + \frac{G(z,H)V}{k_BT}, \label{eq:8} \end{eqnarray} where $\tau_{c}$ is a characteristic time and $V$ is the activation volume. For simplicity, we suppose that the activation volume is unchanged from its bulk counterpart. From this, \begin{eqnarray} \ln\left(\frac{\tau_\alpha(z,H)}{\tau_{\alpha,bulk}}\right) &=& \frac{G_{bulk}V}{k_BT}\left[\frac{G(z,H)}{G_{bulk}}-1\right] \nonumber\\ &=& \ln\left(\frac{\tau_{\alpha,bulk}}{\tau_{c}}\right)\left[\frac{\sqrt{2}-1}{2}\left(\cfrac{1}{2^{z/d}}+\cfrac{1}{2^{(H-z)/d}}\right)\right]^2. \nonumber\\ \label{eq:9} \end{eqnarray} Clearly, $\ln(\tau_\alpha(z,H)/\tau_{\alpha,bulk}) \sim e^{-2z/\xi}+e^{-2(H-z)/\xi} \sim e^{-2z/\xi}$ when $H$ is large. This analysis provides a rationale for the linear superposition of thick-film dynamical gradients used empirically in various works \cite{24,17}. Figure \ref{fig:3} shows the gradient of the normalized glass transition temperature at several film thicknesses and vitrification criteria. We calculate $T_{g}(z,H)/T_{g,bulk}$ as a function of $z/d$ at $H = 30d$ using $\tau_\alpha(T_g(z,H))=$ $10^{6}$ ps, 1 ms, and 100 s; this range of timescales spans simulation to experiment. As seen in Fig. \ref{fig:3}a, the normalized $T_g$ gradients overlap almost perfectly. This finding reveals that although simulations cannot access experimental timescales or low-temperature regimes, the $T_{g}(z,H)/T_{g,bulk}$ predicted at simulation timescales describes experimental data well. The numerical results in Fig. \ref{fig:3}b show that changing the film thickness only slightly affects the normalized $T_g$ gradient when $H \geq 10d$. A significant deviation is found when the metallic glass film is very thin ($H = 4d$), where the spatial transfer of slowed-down dynamics from the two surfaces is dynamically coupled. Our theoretical data can be fitted using the superposition approximation \cite{17} \begin{eqnarray} \frac{T_{g}(z,H)}{T_{g,bulk}}=1+A\left(e^{-z/\xi_L}+e^{-(H-z)/\xi_L}\right), \label{eq:10} \end{eqnarray} where $A$ and $\xi_L$ are adjustable fit parameters. The first two terms, $1+Ae^{-z/\xi_L}$, fit the normalized $T_g$ gradient of very thick films, and the last term accounts for the effect of the second surface on the glass transition. Fitting our data for a film of $H = 50d$, which can be considered thick, with Eq. (\ref{eq:10}) gives $A =0.145$ and $\xi_L=1.6d$. As seen in Fig. \ref{fig:3}b, the superposition approximation provides a good description for metallic glass films with $H \geq 15d$, while it fails to predict $T_{g}(z,H)/T_{g,bulk}$ at $H = 4d$, because this treatment underestimates the interfacial coupling at the solid surfaces of thin films and its consequences in the film interior. \begin{figure*}[htp] \includegraphics[width=8.5cm]{Figure3a_metallicglass.pdf} \includegraphics[width=8.5cm]{Figure3b_metallicglass.pdf} \caption{(Color online) (a) The local glass transition temperature normalized by its bulk counterpart, calculated using vitrification time scales ranging from $10^6$ ps to 100 s at $H = 30d$. (b) The local glass transition temperature normalized by its bulk counterpart (defined by the 100 s vitrification time scale) at different film thicknesses. 
Solid data points and dashed curves correspond to theoretical calculations and fits, respectively.} \label{fig:3} \end{figure*} \begin{figure*}[htp] \includegraphics[width=8.5cm]{Figure4_metallicglass.pdf} \caption{(Color online) The film-averaged $T_g$ normalized by its bulk value as a function of $H/d$ for the 100 s vitrification criterion. The solid curve is a fit using Eq. (\ref{eq:11}) with $A =0.145$ and $\xi_L=1.6d$. Inset: the same results as in the mainframe, plotted versus $d/H$. The straight line is a guide to the eye.} \label{fig:4} \end{figure*} Averaging Eq. (\ref{eq:10}) over the film, $\left<T_{g}(z,H)\right>=(1/H)\int_{0}^{H}T_{g}(z,H)\,dz$, gives the film-averaged $T_g$ \begin{eqnarray} \frac{\left<T_{g}(z,H)\right>}{T_{g,bulk}}=1+\frac{2A\xi_L}{H}\left(1-e^{-H/\xi_L}\right). \label{eq:11} \end{eqnarray} As can be seen in Fig. \ref{fig:4}, Eq. (\ref{eq:11}) quantitatively describes the thickness dependence of the film-averaged $T_g$ of capped films predicted by the ECNLE theory. When $H \gg \xi_L$, Eq. (\ref{eq:11}) gives $\left<T_{g}(z,H)\right>/T_{g,bulk}-1\sim 1/H$, and the inset of Fig. \ref{fig:4} numerically confirms this linear relationship. The linear form is not exact, since there are deviations for very thin and very thick films, but the description at intermediate thicknesses is relatively good. This behavior is also consistent with the experiments of Ref. \cite{18} and is completely inverse to the case of free-standing films \cite{15,16}. However, when $H \ll \xi_L$, $\left<T_{g}(z,H)\right>/T_{g,bulk}=1+2A$ is independent of the film thickness, suggesting that the mobility gradient is perfectly flattened. \begin{figure}[htp] \includegraphics[width=8.5cm]{Figure5_metallicglass.pdf} \caption{(Color online) The temperature dependence of the structural relaxation time (in seconds) at various indicated distances from a solid surface of a capped \ce{Pd_{40}Ni_{40}P_{20}} film with $H = 30d$ and $T_{g,bulk}=582$ K. The inset shows the same data as the mainframe but as a function of $T_g(z)/T$, where $\tau_\alpha(T_g(z)) = 100$ s.} \label{fig:5} \end{figure} The mainframe of Fig. \ref{fig:5} shows the alpha relaxation time at selected values of $z$ as a function of $T_{g,bulk}/T$. This is another way to see the significant growth of the relaxation time near the solid surface and the decay of interfacial caging effects towards the film middle. When $z \geq 10d$, $\tau_\alpha(z)$ changes only slightly and essentially recovers its bulk behavior. To understand how the dynamic fragility varies spatially, we plot $\tau_\alpha(z)$ versus $T_g(z)/T$ in the inset of Fig. \ref{fig:5}. Remarkably, all data perfectly overlap, implying that there is no change in the local dynamic fragility compared to its bulk counterpart; moreover, the mobility gradient can be calculated from the $T_g$ gradient and vice versa. \newpage \section{Conclusions} In conclusion, we have developed the ECNLE theory to investigate the gradients of the structural relaxation time and the glass transition temperature in finite-size capped films of metallic glasses. The temperature dependence of $\tau_{\alpha,bulk}$ is in good quantitative agreement with experiment. The presence of solid surfaces rigidifies the caging constraints at the interface and thus slows down the nearby dynamics. We find that the mobility and $T_g$ gradients normalized by their bulk values are well described by the superposition approximation. When the film is sufficiently thin, the relaxation gradient flattens due to the strong interference of interfacial effects. 
For thicker films, the dynamic coupling between the two surfaces becomes minor, and the double-exponential behavior of the relaxation gradient over the several layers near the interface, reported in a previous simulation \cite{14}, is explained. However, our exponential decay length remains nearly unchanged with temperature, in disagreement with Ref. \cite{14}; this is a consequence of ignoring the contribution of collective motions to the glass transition. The ratio $T_g(z)/T_{g,bulk}$ is insensitive to the vitrification criteria used to define $T_g$ in simulations and experiments. Although the solid surfaces change both $\tau_\alpha(z)$ and $T_g(z)$, the Angell plot shows perfect overlap among the data of different layers in the film. This finding suggests a correlation between the local relaxation and the local glass transition temperature. \section*{Supporting Information} ECNLE calculations for the alpha relaxation times of bulk and film \ce{Cu_{50}Zr_{50}} and \ce{Cu_{46}Zr_{46}Al_{8}} at different temperatures are given and contrasted with their simulation counterparts. \section*{Acknowledgement} This research was funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.01-2019.318.
{ "timestamp": "2022-02-02T02:10:47", "yymm": "2202", "arxiv_id": "2202.00218", "language": "en", "url": "https://arxiv.org/abs/2202.00218" }
\section{Introduction}\label{Introduction} Extensive categories were introduced in \cite{LX} as categories $\mathsf{C}$ with finite coproducts and pullbacks in which the canonical functor $+ : \mathsf{C}/X \times \mathsf{C}/Y \rightarrow \mathsf{C}/(X + Y )$ is an equivalence for every pair of objects $X$ and $Y$ of $\mathsf{C}$. A category is said to be coextensive if its opposite is extensive. Coextensive varieties (as algebraic categories) are of interest because, according to \cite{L2008} and more recently \cite{M2021}, they provide an appropriate setting in which to develop algebraic geometry. In \cite{Z2021} it was shown that the theory of central elements (\cite{BV2013}, \cite{SV2009}, \cite{V1999}) can be taken as an accurate tool for studying coextensive varieties. This perspective comes from the intuition that varieties with well-behaved products can be described algebraically by analyzing the theory governing those elements which concentrate the information about finite product decompositions of its algebras. Under certain circumstances, it turns out that central elements can be treated functorially. As far as we know, this approach has not yet been exploited, mainly because the theory of central elements was originally confined to the realm of universal algebra. Small extensive categories admit a particular subcanonical topology called the Gaeta topology (seemingly named in honor of the Spanish algebraic geometer Federico Gaeta). This topology has to do with all the possible decompositions of objects into finite coproducts. In concrete examples (\cite{Za2017}, \cite{M2021}) it has been proved that the Gaeta topos is the classifying topos of the theory of connected objects, which can be regarded as those objects that do not admit non-trivial binary coproduct decompositions. Naturally, when considering coextensive categories, the Gaeta topology and the Gaeta topos are related to decompositions into finite products and to indecomposable objects. Nevertheless, in practice it does not seem easy to provide an axiomatization of the theory of indecomposable objects when regarding varieties in a more general setting. Considering the characterization given in \cite{Z2021}, it is natural to wonder whether, with the tools provided by universal algebra and the theory of central elements, it is possible to determine, given a coextensive variety, whether the Gaeta topos classifies indecomposable objects. \\ This paper is organized as follows. Section \ref{Preliminaries} presents most of the definitions and basic results required for reading this work. Section \ref{Representability of the functor Z} is devoted to the functorial treatment of central elements in coextensive varieties. This approach will allow us to prove that the functor of central elements $Z$ is representable by the algebra $\mathbf{0}\times \mathbf{0}$. This result will lead us to characterize, among the center-stable varieties with $\vec{0}$ and $ \vec{1}$ and Boolean factor congruences, those which are coextensive. The section concludes with an application connecting coBoolean varieties with the Beth (definability) property for algebraizable logics. Section \ref{The Gaeta topos and fp-coextensive varieties} deals with the characterization of those fp-coextensive varieties for which the Gaeta topos classifies indecomposable objects. The paper ends with applications of this result to some particular classes of interest in general algebra and non-classical logic. 
\\ The reader is assumed to be familiar with standard topos theory as presented in \cite{MM2012} and \cite{J2002}. For standard notions in universal algebra the reader may consult \cite{MMT1987}. \section{Preliminaries}\label{Preliminaries} \subsection{Notation and basic results}\label{Notation and basic results} Let $A$ be a set and $k$ a natural number. We write $\vec{a}$ for an element $(a_{1},\dots,a_{k})\in A^{k}$. If $f:A\rightarrow B$ is a function and $\vec{a}\in A^{k}$, then we write $f(\vec{a})$ for the element $(f(a_{1}),\dots,f(a_{k}))\in B^{k}$. If $X\subseteq A$ we write $f|_{X}$ for the restriction of $f$ to $X$, $\mathcal{P}(X)$ for the power set of $X$, and $f[X]$ for the image of $X$ under $f$. If $\vec{a} \in A^{k}$ and $\vec{b} \in B^{k}$, we write $[\vec{a}, \vec{b}]$ for the $k$-tuple $((a_{1} , b_{1} ), \dots, (a_{k} , b_{k})) \in (A \times B)^{k}$. If $g:A\times B \rightarrow C$ is a function and $[\vec{a}, \vec{b}]\in (A \times B)^{k}$, then we write $g(\vec{a},\vec{b})$ for the element $(g(a_{1},b_{1}),\dots,g(a_{k},b_{k}))\in C^{k}$. If $\mathbf{A}$ is an algebra of a given type we denote its universe by $A$ and its congruence lattice by $\mathsf{Con}(\mathbf{A})$. If $\theta \in \mathsf{Con}(\mathbf{A})$ and $\vec{a}\in A^{k}$, we write $\vec{a}/\theta$ for the $k$-tuple $(a_{1}/\theta , \dots, a_{k}/\theta)\in (A/\theta)^{k}$. The universal and identity congruences on $\mathbf{A}$ are denoted by $\nabla^{\mathbf{A}}$ and $\Delta^{\mathbf{A}}$, respectively. If $S\subseteq A\times A$, we write $\mathsf{Cg}^{\mathbf{A}}(S)$ for the congruence generated by $S$. We also write $\mathsf{Cg}^{\mathbf{A}}(\vec{a},\vec{b})$ for the congruence generated by all pairs $(a_{1} , b_{1} ), \dots, (a_{k} , b_{k})$, where $\vec{a}, \vec{b}\in A^{k}$. We say that a congruence $\theta$ on $\mathbf{A}$ is \emph{finitely generated} if $\theta=\mathsf{Cg}^{\mathbf{A}}(F)$ for some finite set $F\subseteq A\times A$. We use $\mathsf{FC}(\mathbf{A})$ to denote the set of factor congruences of $\mathbf{A}$, and we write $\theta \diamond \delta$ in $\mathsf{Con}(\mathbf{A})$ to denote that $\theta$ and $\delta$ are complementary factor congruences of $\mathbf{A}$. A variety $\mathcal{V}$ has the Fraser-Horn property \cite{FH1970} if for every $\mathbf{A}_1,\mathbf{A}_2 \in \mathcal{V}$, every congruence $\theta$ on $\mathbf{A}_1\times\mathbf{A}_2 $ is the product congruence $\theta_1 \times \theta_2$ for some congruences $\theta_1$ of $\mathbf{A}_1$ and $\theta_2$ of $\mathbf{A}_2$. If $\theta,\lambda \in \mathsf{Con}(\mathbf{A})$ and $\theta \subseteq \lambda$, we write $\lambda/\theta$ for the set of pairs $(x/\theta,y/\theta)$ of $A/\theta$ such that $(x,y)\in \lambda$. If $g:\mathbf{A}\rightarrow \mathbf{B}$ is a homomorphism, we write $\mathsf{Ker}(g)$ for the \emph{kernel} of $g$, i.e., the congruence on $\mathbf{A}$ given by the set $\{(a,b)\in A^{2}\colon g(a)=g(b)\}$. If $\mathbf{A}$ is an algebra of type $\mathcal{F}=\{f_{1},\ldots ,f_{m}\}$, when required we will write its type as an $m$-tuple $(a_{1},\ldots, a_{m})$, where $a_{j}$ denotes the arity of $f_{j}$, with $1\leq j\leq m$. \\ The following result is probably folklore but, since we have not found it in the literature, we give some details of its proof. It provides a description of the factor congruences of the quotients of an algebra of a given type. 
\begin{lem}\label{factor congruence quotinents} Let $\mathbf{A}$ be an algebra of a given type and let $\theta\in \mathsf{Con}(\mathbf{A})$. Consider the sets \[P_{\theta}=\{(\lambda,\mu)\mid\theta\subseteq\lambda,\mu;\;\lambda\cap\mu=\theta;\;\lambda\circ\mu=\nabla^{\mathbf{A}}\}\] and \[Z_{\theta}=\{(\alpha,\beta) \in \mathsf{FC}(\mathbf{A}/\theta)^{2}\colon \alpha \diamond \beta\}.\] Then, the assignment $(\lambda,\mu)\mapsto (\lambda/\theta,\mu/\theta)$ defines a bijection between $P_{\theta}$ and $Z_{\theta}$. \end{lem} \begin{proof} It is a straightforward consequence of Theorems 7.5, 6.15 and 6.20 of \cite{BS1981}. \end{proof} Given a variety $\mathcal{V}$ and a set $X$ of variables, we write $\mathbf{T}_{\mathcal{V}}(X)$ for the term algebra of $\mathcal{V}$ over $X$ and $\mathbf{F}_{\mathcal{V}}(X)$ for the free algebra of $\mathcal{V}$ freely generated by $X$. In particular, if $X=\{x_{1},\dots,x_{m}\}$ with $m$ a non-negative integer and no clarification is needed, then we write $\mathbf{T}_{\mathcal{V}}(m)$ and $\mathbf{F}_{\mathcal{V}}(m)$ instead of $\mathbf{T}_{\mathcal{V}}(\{x_{1},\dots,x_{m}\})$ and $\mathbf{F}_{\mathcal{V}}(\{x_{1},\dots,x_{m}\})$, respectively. We recall that an algebra $\mathbf{A}$ in $\mathcal{V}$ is a \emph{finitely generated free algebra} if it is isomorphic to $\mathbf{F}_{\mathcal{V}}(m)$ for some finite $m$, and \emph{finitely presented} if it is isomorphic to an algebra of the form $\mathbf{F}_{\mathcal{V}}(k)/\theta$, for some finite $k$ and some finitely generated congruence $\theta$ on $\mathbf{F}_{\mathcal{V}}(k)$. \\ The following lemma is a key result that we will employ repeatedly throughout Section \ref{The Gaeta topos and fp-coextensive varieties}. For its proof, the reader may consult \cite{V1996_1}. \begin{lem}\label{vey useful lemma} Let $\mathcal{V}$ be a variety and let $X$ be a set of variables. Let $r, r_1 , \ldots , r_m ,$ $s, s_1 , \ldots , s_m \in T_{\mathcal{V}}(X)$. Then, the following are equivalent: \begin{itemize} \item[(1)] $(r,s)\in \mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(X)}(\vec{r},\vec{s})$; \item[(2)] $\mathcal{V}\models \vec{r}=\vec{s}\Longrightarrow r=s$. \end{itemize} \end{lem} \begin{lem}\label{technicality lemma} Let $\mathcal{V}$ be a variety and let $p_{i}(\vec{x},\vec{y}), q_{i}(\vec{x},\vec{y})$, $1\leq i\leq n$, be terms in the language of $\mathcal{V}$. Let $X=\{\vec{x},\vec{y},\vec{z}\}$, \[\theta =\bigvee_{i=1}^{n} \mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(X)}(p_{i}(\vec{x},\vec{y}),q_{i}(\vec{x},\vec{y}))\] and $\mathbf{H}=\mathbf{F}_{\mathcal{V}}(X)/\theta$. Let $\mathbf{A}\in \mathcal{V}$ and suppose that $p_{i}^{\mathbf{A}}(\vec{a},\vec{b})=q_{i}^{\mathbf{A}}(\vec{a},\vec{b})$ for $1\leq i\leq n$. Then, for every $\vec{c}\in A^{N}$ there exists a unique $\Omega:\mathbf{H}\rightarrow \mathbf{A}$ such that $\Omega(\vec{x}/\theta)=\vec{a}$, $\Omega(\vec{y}/\theta)=\vec{b}$ and $\Omega(\vec{z}/\theta)=\vec{c}$. \end{lem} \begin{proof} It is straightforward. \end{proof} Let $\mathcal{L}$ be a first-order language. If an $\mathcal{L}$-formula $\varphi (\vec{x})$ has the form \begin{equation*} \bigwedge_{j=1}^{n}p_{j}(\vec{x})=q_{j}(\vec{x}), \end{equation*}% for some positive integer $n$ and terms $p_{j}(\vec{x})$ and $q_{j}(\vec{x})$ in $\mathcal{L}$, then we say that $\varphi (\vec{x})$ is a ($\bigwedge p=q$)-formula. 
If $\mathcal{K}$ is a class of $\mathcal{L}$-structures and $R\in \mathcal{L}$ is an $n$-ary relation symbol, we say that a formula $\varphi (x_{1},\dots,x_{n})$ \emph{defines} $R$ \emph{in} $\mathcal{K}$ if \begin{equation*} \mathcal{K}\vDash \varphi (\vec{x})\Longleftrightarrow R(\vec{x})\text{.} \end{equation*} In particular, if a ($\bigwedge p=q$)-formula defines $R$, we say that $R$ is \emph{equationally definable}. \\ Finally, we stress that all varieties considered in this paper are assumed to have at least one constant symbol. \subsection{Central Elements} By \emph{a variety with $\vec{0}$ and $\vec{1}$} we understand a variety $\mathcal{V}$ in which there are $0$-ary terms $0_{1}$, $\ldots$ , $0_{N}$, $1_{1}$, $\ldots$ , $1_{N}$ such that $\mathcal{V} \models \vec{0}\approx \vec{1}\Longrightarrow x\approx y$, where $\vec{0}=(0_{1}, ..., 0_{N})$ and $\vec{1}=(1_{1}, ..., 1_{N})$. If $\mathbf{A}\in \mathcal{V}$, we say that $\vec{e}=(e_{1}, ..., e_{N})\in A^{N}$ is a \emph{central element} of $\mathbf{A}$ if there exists an isomorphism $\tau: \mathbf{A}\rightarrow \mathbf{A}_{1}\times \mathbf{A}_{2}$ such that $\tau(\vec{e})=[\vec{0}_{\mathbf{A}_{1}}, \vec{1}_{\mathbf{A}_{2}}]$. Also, we say that $\vec{e}$ and $\vec{f}$ are a \emph{pair of complementary central elements} of $\mathbf{A}$ if there exists an isomorphism $\tau: \mathbf{A}\rightarrow \mathbf{A}_{1}\times \mathbf{A}_{2}$ such that $\tau(\vec{e})=[\vec{0}_{\mathbf{A}_{1}}, \vec{1}_{\mathbf{A}_{2}}]$ and $\tau(\vec{f})=[\vec{1}_{\mathbf{A}_{1}}, \vec{0}_{\mathbf{A}_{2}}]$. We write $Z(\mathbf{A})$ for the set of central elements of $\mathbf{A}$ and $\vec{e}\diamond_{\mathbf{A}} \vec{f}$ to denote that $\vec{e}$ and $\vec{f}$ are complementary central elements of $\mathbf{A}$. We say that a variety $\mathcal{V}$ with $\vec{0}$ and $\vec{1}$ has \emph{Boolean Factor Congruences} (BFC) if the set of factor congruences of any algebra of $\mathcal{V}$ is a Boolean sublattice of its congruence lattice. Let $\mathcal{V}$ be a variety with BFC and $\mathbf{A}\in \mathcal{V}$. If $\vec{e}\in Z(\mathbf{A})$, we write $\theta_{\vec{0}, \vec{e}}^{\mathbf{A}}$ and $\theta_{\vec{1}, \vec{e}}^{\mathbf{A}}$ for the unique pair of complementary factor congruences satisfying $[\vec{e}, \vec{0}] \in \theta_{\vec{0}, \vec{e}}^{\mathbf{A}}$ and $[\vec{e}, \vec{1}]\in \theta_{\vec{1}, \vec{e}}^{\mathbf{A}}$. In Theorem 1 of \cite{SV2009} it was proved that the assignment sending $\vec{e}$ to $\theta_{\vec{0}, \vec{e}}^{\mathbf{A}}$ establishes a bijection between $Z(\mathbf{A})$ and $\mathsf{FC}(\mathbf{A})$. Such a bijection allows us to define some operations on $Z(\mathbf{A})$ as follows: given $\vec{e}\in Z(\mathbf{A})$, the \emph{complement} $\vec{e}^{c_{\mathbf{A}}}$ of $\vec{e}$ is the only solution to the equations $[\vec{z}, \vec{1}]\in \theta^{\mathbf{A}}_{\vec{0},\vec{e}}$ and $[\vec{z}, \vec{0}]\in \theta^{\mathbf{A}}_{\vec{1},\vec{e}}$. Given $\vec{e}, \vec{f}\in Z(\mathbf{A})$, the \emph{infimum} $\vec{e}\wedge_{\mathbf{A}}\vec{f}$ is the only solution to the equations $[\vec{z}, \vec{0}]\in \theta^{\mathbf{A}}_{\vec{0},\vec{e}}\cap \theta^{\mathbf{A}}_{\vec{0},\vec{f}}$ and $[\vec{z}, \vec{1}]\in \theta^{\mathbf{A}}_{\vec{1},\vec{e}}\vee \theta^{\mathbf{A}}_{\vec{1},\vec{f}}$. 
Finally, the \emph{supremum} $\vec{e}\vee_{\mathbf{A}}\vec{f}$ is the only solution to the equations $[\vec{z}, \vec{0}]\in \theta^{\mathbf{A}}_{\vec{0},\vec{e}}\vee \theta^{\mathbf{A}}_{\vec{0},\vec{f}}$ and $[\vec{z}, \vec{1}]\in \theta^{\mathbf{A}}_{\vec{1},\vec{e}}\cap \theta^{\mathbf{A}}_{\vec{1},\vec{f}}$. Observe that these operations make $\textbf{Z}(\mathbf{A})=(Z(\mathbf{A}),\wedge_{\mathbf{A}},\vee_{\mathbf{A}}, ^{c_{\mathbf{A}}},\vec{0}^{\mathbf{A}},\vec{1}^{\mathbf{A}})$ a Boolean algebra, which is isomorphic to $(\mathsf{FC}(\mathbf{A}), \vee, \cap, ^{\ast},\Delta^{\mathbf{A}},\nabla^{\mathbf{A}})$. \\ The following result was proved in Lemma 2.1.1 of \cite{B2012} for the case $N=1$. Nevertheless, since the arguments for arbitrary $N$ do not change the essence of the proof, we omit the details. \begin{lem}\label{Useful lema Centrals} Let $\mathcal{V}$ be a variety with BFC and $\mathbf{A}\in \mathcal{V}$. For every $\vec{e},\vec{f}\in Z(\mathbf{A})$, the following holds: \begin{enumerate} \item $\vec{a}=\vec{e}\wedge_{\mathbf{A}}\vec{f}$ if and only if $[\vec{0},\vec{a}]\in \theta_{\vec{0},\vec{e}}^{\mathbf{A}}$ and $[\vec{a},\vec{f}]\in \theta_{\vec{1},\vec{e}}^{\mathbf{A}}$. \item $\vec{a}=\vec{e}\vee_{\mathbf{A}}\vec{f}$ if and only if $[\vec{1},\vec{a}]\in \theta_{\vec{1},\vec{e}}^{\mathbf{A}}$ and $[\vec{a},\vec{f}]\in \theta_{\vec{0},\vec{e}}^{\mathbf{A}}$. \end{enumerate} \end{lem} Let $\mathcal{V}$ be a variety with BFC. If $\mathbf{A},\mathbf{B}\in \mathcal{V}$ and $f:\mathbf{A}\rightarrow \mathbf{B}$ is a homomorphism, we say that $f$ \emph{preserves central elements} if the map $f:Z(\mathbf{A})\rightarrow Z(\mathbf{B})$ is well defined; that is, for every $\vec{e}\in Z(\mathbf{A})$ it follows that $f(\vec{e})\in Z(\mathbf{B})$. We say that $f$ \emph{preserves complementary central elements} if it preserves central elements and for every $\vec{e}_{1}, \vec{e}_{2}\in Z(\mathbf{A})$, \[\vec{e}_{1}\diamond_{\mathbf{A}}\vec{e}_{2} \Rightarrow f(\vec{e}_{1})\diamond_{\mathbf{B}}f(\vec{e}_{2}). \] We say that a variety with BFC is \emph{center stable} if every homomorphism preserves central elements, and that it is \emph{stable by complements} if every homomorphism preserves complementary central elements. In \cite{Z2021} it was shown that these notions are not equivalent. \subsection{Algebraizable Logics} The terminology and definitions of this section are based on those of \cite{BP1989}, \cite{BH2006}, \cite{MRW2020} and the references therein. Let $\mathcal{L}$ be a language of algebras and let $X$ be a countably infinite set. We write $T_{\mathcal{L}}(X)$ for the set of terms in $\mathcal{L}$. A \emph{logic} over $X$ is a pair $\mathbb{L}=\langle \mathcal{L}, \vdash_{\mathbb{L}}\rangle$ where $\vdash_{\mathbb{L}}\subseteq \mathcal{P}(T_{\mathcal{L}}(X))\times T_{\mathcal{L}}(X)$ is a \emph{substitution invariant consequence relation}, i.e., $\vdash_{\mathbb{L}}$ satisfies: \begin{displaymath} \begin{array}{ccc} \varphi \in \Gamma & \Rightarrow & \Gamma \vdash_{\mathbb{L}} \varphi; \\ \Gamma \vdash_{\mathbb{L}} \varphi\; \text{and}\; \Gamma \subseteq \Delta & \Rightarrow & \Delta \vdash_{\mathbb{L}} \varphi; \\ \Gamma \vdash_{\mathbb{L}} \varphi \; \text{and}\; \Delta \vdash_{\mathbb{L}} \psi\; \text{for every}\; \psi\in \Gamma & \Rightarrow & \Delta \vdash_{\mathbb{L}} \varphi; \\ \Gamma \vdash_{\mathbb{L}} \varphi & \Rightarrow & \sigma[\Gamma] \vdash_{\mathbb{L}} \sigma(\varphi), 
\end{array} \end{displaymath} for every endomorphism $\sigma$ of $\mathbf{T}_{\mathcal{L}}(X)$. In this context, $X$ is usually referred to as the set of \emph{variables of $\mathcal{L}$}. The set $T_{\mathcal{L}}(X)\times T_{\mathcal{L}}(X)$ is called the set of \emph{equations} of $\mathbb{L}$ and is denoted by $Eq_{\mathcal{L}}$; its elements $(\varphi,\psi)$ are written $\varphi\approx \psi$. A \emph{transformer from formulas to equations} is a function $\tau\colon T_{\mathcal{L}}(X)\rightarrow\mathcal{P}(Eq_{\mathcal{L}})$. A \emph{transformer from equations to formulas} is a function $\rho\colon Eq_{\mathcal{L}}\rightarrow\mathcal{P}(T_{\mathcal{L}}(X))$. In this setting, $\tau$ is said to be \emph{structural} if for any endomorphism $\sigma$ of $\mathbf{T}_{\mathcal{L}}(X)$ and every $\varphi \in T_{\mathcal{L}}(X)$ we have $\tau(\sigma(\varphi))=\sigma[\tau(\varphi)]$. On the other hand, $\rho$ is said to be \emph{structural} if there is a set of formulas $\Delta(x,y)$ in at most two variables such that for any $\varphi,\psi\in T_{\mathcal{L}}(X)$ the condition $\rho(\varphi\approx\psi)=\Delta(\varphi,\psi)$ holds. A logic $\mathbb{L}$ is \emph{algebraizable (in the sense of Blok and Pigozzi)} with equivalent variety semantics $\mathcal{V}$ if there are structural transformers $\tau\colon T_{\mathcal{L}}(X)\rightarrow\mathcal{P}(Eq_{\mathcal{L}})$ and $\rho\colon Eq_{\mathcal{L}}\rightarrow\mathcal{P}(T_{\mathcal{L}}(X))$ such that for all $\Gamma\cup\{\varphi\}\subseteq T_{\mathcal{L}}(X)$ and $\Theta\cup\{\epsilon\approx\delta\}\subseteq Eq_{\mathcal{L}}$ we have: \begin{enumerate} \item $\Gamma\vdash_{\mathbb{L}}\varphi \Longleftrightarrow \tau\Gamma\models_{\mathcal{V}}\tau\varphi$, and \item $\epsilon\approx\delta \Dashv\vDash_{\mathcal{V}} \tau\rho(\epsilon\approx\delta)$. \end{enumerate} Let $\mathbb{L}$ be a logic, let $V$ and $W$ be disjoint sets of variables such that $T_{\mathcal{L}}(V)\neq \emptyset$, and let $\Gamma\subseteq T_{\mathcal{L}}(V\cup W)$. We say that \emph{$\Gamma$ defines $W$ implicitly in terms of $V$ in $\mathbb{L}$} if for every set of variables $Y$, each $z\in W$ and every homomorphism $h:\mathbf{T}_{\mathcal{L}}(V\cup W)\rightarrow \mathbf{T}_{\mathcal{L}}(Y)$ such that $h(x)=x$ for all $x\in V$, it follows that $\Gamma \cup h[\Gamma]\models_{\mathsf{Mod}^{\ast}(\vdash_{\mathbb{L}})}z\approx h(z)$. We say that \emph{$\Gamma$ defines $W$ explicitly in terms of $V$ in $\mathbb{L}$} if for each $z\in W$ there exists $\varphi_{z}\in T_{\mathcal{L}}(V)$ such that $\Gamma \models_{\mathsf{Mod}^{\ast}(\vdash_{\mathbb{L}})}z\approx \varphi_{z}$. Here $\mathsf{Mod}^{\ast}(\vdash_{\mathbb{L}})$ denotes the class of all reduced matrix models of $\vdash_{\mathbb{L}}$ (i.e., $\Gamma \vdash_{\mathbb{L}} \varphi$ iff $\Gamma \models_{\mathsf{Mod}^{\ast}(\vdash_{\mathbb{L}})} \varphi$). We say that a logic $\mathbb{L}$ has the \emph{Beth (definability) property} if for every pair of disjoint sets of variables $V$ and $W$ and every $\Gamma\subseteq T_{\mathcal{L}}(V\cup W)$, if $\Gamma$ defines $W$ implicitly in terms of $V$ in $\mathbb{L}$, then $\Gamma$ defines $W$ explicitly in terms of $V$ in $\mathbb{L}$. 
\\ If $\mathbb{L}$ is an algebraizable logic with equivalent variety semantics $\mathcal{V}$, the underlying intuition behind the Beth (definability) property is that epimorphisms in the (algebraic) category $\mathcal{V}$ correspond to implicit definitions in $\mathbb{L}$, while surjections in $\mathcal{V}$ correspond to explicit definitions in $\mathbb{L}$. The following result, originally proved in \cite{H2000}, establishes that this intuition is in fact an equivalence. \begin{theo}\label{Beth definability property} Let $\mathbb{L}$ be an algebraizable logic with equivalent variety semantics $\mathcal{V}$. Then $\mathbb{L}$ has the Beth (definability) property if and only if all the epimorphisms of $\mathcal{V}$ are surjective. \end{theo} \section{Representability of the functor Z}\label{Representability of the functor Z} We recall that a category with finite products $\mathsf{C}$ is called coextensive if for each pair of objects $X$ and $Y$ of $\mathsf{C}$ the canonical functor $\times: \mathsf{C}/X \times \mathsf{C}/Y \rightarrow \mathsf{C}/(X \times Y)$ is an equivalence. Classical examples of coextensive categories are the categories ${\mathbf{Ring}}$ and $\mathbf{dLat}$ of commutative rings with unit and bounded distributive lattices, respectively. If $\mathcal{V}$ is a coextensive variety, the associated algebraic category will also be denoted by $\mathcal{V}$. In what follows, we write $\mathbf{0}$ and $\mathbf{1}$ for the initial and terminal algebras of $\mathcal{V}$, respectively. If $\mathbf{A}\in \mathcal{V}$ we write $\text{!`}_{\mathbf{A}}:\mathbf{0}\rightarrow \mathbf{A}$ for the unique morphism from $\mathbf{0}$ to $\mathbf{A}$ in $\mathcal{V}$. If $\vec{e}\in Z(\mathbf{A})$ we write $\text{!`}_{\vec{0},\vec{e}}$ and $\text{!`}_{\vec{1},\vec{e}}$ for the unique morphisms from $\mathbf{0}$ to $\mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e})$ and from $\mathbf{0}$ to $\mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{1},\vec{e})$, respectively. Finally, we recall that, since $\mathcal{V}$ is assumed to have at least one constant symbol, $\mathbf{0}$ is isomorphic to $\mathbf{F}_{\mathcal{V}}(\emptyset)$. \\ Given a center-stable variety $\mathcal{V}$ with $\vec{0}$ and $\vec{1}$ and BFC, the assignment $\mathbf{A} \mapsto Z(\mathbf{A})$ ($\mathbf{A}\in \mathcal{V}$) defines a functor $Z:\mathcal{V} \rightarrow \mathsf{Set}$ in an obvious way. In this section we prove that when $\mathcal{V}$ is coextensive, this functor is in fact representable by the algebra $\mathbf{0}\times \mathbf{0}$. This result leads us to show that the functor $Z$ can be extended to a functor from $\mathcal{V}$ to the category $\mathsf{Boole}$ of Boolean algebras. Moreover, a characterization of coextensivity by means of the functors $Z$ and $\times$ is provided. The section concludes with an application to the Beth (definability) property for logics that have coBoolean varieties as algebraic semantics. \\ We begin by recalling some facts about coextensive varieties which will be essential for proving the results of this section. It is well known that every variety (as an algebraic category) has all limits. Therefore, as a restricted dual of Propositions 2.2 and 4.1 of \cite{CW1993}, we obtain the following result. 
\begin{prop}\label{coextensive with zero} A variety $\mathcal{V}$ is coextensive if and only if it has pushouts along projections and every commutative diagram \begin{displaymath} \xymatrix{ \mathbf{0} \ar[d]_-{\text{!`}_{\mathbf{A}_{0}}} & \ar[l]_-{\pi_{0}} \ar[r]^-{\pi_{1}} \mathbf{0} \times \mathbf{0} \ar[d]_-{f} & \mathbf{0} \ar[d]^-{\text{!`}_{\mathbf{A}_{1}}} \\ \mathbf{A}_{0} & \ar[l]^-{g_{0}} \ar[r]_-{g_{1}} \mathbf{A}_{0}\times \mathbf{A}_{1} & \mathbf{A}_{1} } \end{displaymath} comprises a pair of pushout squares in $\mathcal{V}$ precisely when the bottom row is a product diagram in $\mathcal{V}$. \end{prop} We recall that a variety $\mathcal{V}$ is a \emph{Pierce variety} \cite{V1996} if there exist a positive natural number $N$, $0$-ary terms $0_1$, $\ldots$, $0_N$, $1_1$, $\ldots$, $1_N$ and a term $U (x, y, \vec{z}, \vec{w})$ such that the identities \begin{displaymath} \begin{array}{ccc} U (x, y, \vec{0}, \vec{1})=x & \text{and} & U (x, y, \vec{1}, \vec{0})=y \end{array} \end{displaymath} hold in $\mathcal{V}$. It is worth mentioning that in a Pierce variety $\mathcal{V}$ it is also true that $\theta^{\mathbf{A}}_{\vec{0},\vec{e}}=\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e})$, for every $\mathbf{A}\in \mathcal{V}$ and every $\vec{e}\in Z(\mathbf{A})$ (see \cite{BV2016} for details). \\ In \cite{Z2021} the following characterization of coextensive varieties was provided, by means of Pierce varieties, the equational definability of the relation ``$\vec{e}$ and $\vec{f}$ are complementary central elements'', and stability by complements. It is a key result on which we will rely constantly in pursuing the goals of this section. \begin{theo}\label{charcoextensivity} Let $\mathcal{V}$ be a variety. Then, the following are equivalent: \begin{enumerate} \item[(1)] $\mathcal{V}$ is coextensive. \item[(2)] $\mathcal{V}$ is a Pierce variety in which the relation $\vec{e}\diamond_{\mathbf{A}} \vec{f}$ is equationally definable. \item[(3)] $\mathcal{V}$ is a Pierce variety stable by complements. \end{enumerate} \end{theo} Let $\mathcal{V}$ be a coextensive variety, $\mathbf{A},\mathbf{B}\in \mathcal{V}$ and $f:\mathbf{A}\rightarrow \mathbf{B}$ a homomorphism. Observe that from Theorem \ref{charcoextensivity} (3) the assignments \[ \begin{array}{ccc} \mathbf{A}\mapsto Z(\mathbf{A}) \\ f \mapsto f|_{Z(\mathbf{A})} \end{array} \] determine a functor $Z:\mathcal{V} \rightarrow \mathsf{Set}$. In the following result we prove that this functor is in fact representable. \begin{theo}\label{centrals bijection} Let $\mathcal{V}$ be a coextensive variety. Then, for every $\mathbf{A}\in \mathcal{V}$ there is a bijection between $Z(\mathbf{A})$ and $\mathcal{V}(\mathbf{0}\times\mathbf{0}, \mathbf{A})$. Moreover, the functor $Z:\mathcal{V}\rightarrow \mathsf{Set}$ is representable by $\mathbf{0}\times\mathbf{0}$. \end{theo} \begin{proof} Let $\mathbf{A}\in \mathcal{V}$ and consider the assignments $\varphi_{\mathbf{A}}:\mathcal{V}(\mathbf{0}\times\mathbf{0}, \mathbf{A})\rightarrow Z(\mathbf{A})$ and $\mu_{\mathbf{A}}:Z(\mathbf{A}) \rightarrow \mathcal{V}(\mathbf{0}\times\mathbf{0}, \mathbf{A})$ defined by $\varphi_{\mathbf{A}}(g)=g[\vec{0},\vec{1}]$ and $\mu_{\mathbf{A}}(\vec{e})=\text{!`}_{\vec{0},\vec{e}}\times \text{!`}_{\vec{1},\vec{e}}$, respectively. We claim that $\varphi$ and $\mu$ are natural transformations which are inverse to each other. 
\begin{displaymath} \xymatrix{ \mathbf{0} \ar[d]_-{\text{!`}_{\vec{0},\vec{e}}} & \ar[l]_-{\pi_{0}} \mathbf{0}\times\mathbf{0} \ar[d]^-{\text{!`}_{\vec{0},\vec{e}}\times \text{!`}_{\vec{1},\vec{e}}} \ar[r]^-{\pi_{1}} & \mathbf{0} \ar[d]^-{\text{!`}_{\vec{1},\vec{e}}} \\ \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e}) & \ar[r]_-{p_{1}} \ar[l]^-{p_{0}} \mathbf{A} & \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{1},\vec{e}) } \end{displaymath} We start by showing that $\varphi_{\mathbf{A}}$ and $\mu_{\mathbf{A}}$ are well defined. Since $[\vec{0},\vec{1}]\in Z(\mathbf{0}\times \mathbf{0})$, from Theorem \ref{charcoextensivity} (3) we get $g[\vec{0},\vec{1}]\in Z(\mathbf{A})$ for every $g\in \mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$, so $\varphi_{\mathbf{A}}$ is well defined. Similarly, since every $\vec{e}\in Z(\mathbf{A})$ induces a product decomposition of $\mathbf{A}$, the coextensivity of $\mathcal{V}$ implies that $\text{!`}_{\vec{0},\vec{e}}\times \text{!`}_{\vec{1},\vec{e}}$ is the unique morphism in $\mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$ making both squares of the diagram above pushouts, so $\mu_{\mathbf{A}}$ is also well defined. Now we prove that $\varphi_{\mathbf{A}}$ and $\mu_{\mathbf{A}}$ are mutually inverse. To do so, let $\vec{e}\in Z(\mathbf{A})$, $h=\text{!`}_{\vec{0},\vec{e}}\times \text{!`}_{\vec{1},\vec{e}}$, and consider $\varphi_{\mathbf{A}}(\mu_{\mathbf{A}}(\vec{e}))=h[\vec{0},\vec{1}]$. \begin{displaymath} \xymatrix{ \mathbf{0} \ar[d]_-{\text{!`}_{\vec{0},\vec{e}}} & \ar[l]_-{\pi_{0}} \mathbf{0}\times\mathbf{0} \ar[d]^-{h} \ar[r]^-{\pi_{1}} & \mathbf{0} \ar[d]^-{\text{!`}_{\vec{1},\vec{e}}} \\ \mathbf{P}_{0} & \ar[r]_-{j_{1}} \ar[l]^-{j_{0}} \mathbf{A} & \mathbf{P}_{1} } \end{displaymath} If $\mathbf{P}_{k}$ denotes the pushout of $\pi_{k}$ along $h$, for $0\leq k\leq 1$, then from Lemma 2.3 of \cite{Z2021} it is the case that: \[ \begin{array}{ccc} \mathbf{P}_{0}\cong \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{0},\varphi_{\mathbf{A}}(\mu_{\mathbf{A}}(\vec{e}))) \cong \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e}) \\ \mathbf{P}_{1}\cong \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{1},\varphi_{\mathbf{A}}(\mu_{\mathbf{A}}(\vec{e})))\cong \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{1},\vec{e}). \end{array} \] Therefore, for general reasons, we get: \[ \begin{array}{ccc} \mathsf{Cg}^{\mathbf{A}}(\vec{0},\varphi_{\mathbf{A}}(\mu_{\mathbf{A}}(\vec{e})))=\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e}) \\ \mathsf{Cg}^{\mathbf{A}}(\vec{1},\varphi_{\mathbf{A}}(\mu_{\mathbf{A}}(\vec{e})))=\mathsf{Cg}^{\mathbf{A}}(\vec{1},\vec{e}). \end{array} \] Hence, from Corollary 4 of \cite{V1999} it follows that $\varphi_{\mathbf{A}}(\mu_{\mathbf{A}}(\vec{e}))=\vec{e}$. On the other hand, let $g\in \mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$ and consider $\mu_{\mathbf{A}}(\varphi_{\mathbf{A}}(g))=\text{!`}_{\vec{0},\varphi_{\mathbf{A}}(g)}\times \text{!`}_{\vec{1},\varphi_{\mathbf{A}}(g)}$. Then we have $\mathbf{A}\cong \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{0},\varphi_{\mathbf{A}}(g))\times \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{1},\varphi_{\mathbf{A}}(g))$. Since $\mathcal{V}$ is coextensive, there exist unique $u:\mathbf{0}\rightarrow \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{0},\varphi_{\mathbf{A}}(g))$ and $v:\mathbf{0}\rightarrow \mathbf{A}/\mathsf{Cg}^{\mathbf{A}}(\vec{1},\varphi_{\mathbf{A}}(g))$ such that $g=u\times v$. 
Observe that, since $\mathbf{0}$ is initial in $\mathcal{V}$, it must be the case that $u=\text{!`}_{\vec{0},\varphi_{\mathbf{A}}(g)}$ and $v=\text{!`}_{\vec{1},\varphi_{\mathbf{A}}(g)}$, so $g=\mu_{\mathbf{A}}(\varphi_{\mathbf{A}}(g))$, as desired. The proof of the naturality of $\varphi$ and $\mu$ is straightforward. \end{proof} As an immediate application of Theorem \ref{centrals bijection} and Corollary 9.33 of \cite{A2010}, we obtain the following result. \begin{coro}\label{coro limits} If $\mathcal{V}$ is a coextensive variety, the functor $Z:\mathcal{V}\rightarrow \mathsf{Set}$ preserves all limits. Therefore $Z$ has a left adjoint. \end{coro} Something more can be said about the functor $Z$. \begin{lem}\label{centrals morphisms boole} Let $\mathcal{V}$ be a coextensive variety. If $\mathbf{A},\mathbf{B}\in \mathcal{V}$ and $f:\mathbf{A}\rightarrow \mathbf{B}$ is a homomorphism, then $f|_{Z(\mathbf{A})}:\mathbf{Z}(\mathbf{A})\rightarrow \mathbf{Z}(\mathbf{B})$ is a homomorphism of Boolean algebras. \end{lem} \begin{proof} We start by recalling that, from Theorem \ref{charcoextensivity} and Lemma 4.3 of \cite{Z2021}, for every $\mathbf{A}\in \mathcal{V}$ and every $\vec{e}\in Z(\mathbf{A})$ we have $\theta^{\mathbf{A}}_{\vec{0},\vec{e}}=\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e})$. Since $f$ is a homomorphism, it is clear that $f|_{Z(\mathbf{A})}$ preserves $\vec{0}$ and $\vec{1}$. Now, if $\vec{e}_{1},\vec{e}_{2}\in Z(\mathbf{A})$ and $\vec{a}=\vec{e}_{1}\wedge_{\mathbf{A}}\vec{e}_{2}$, then from Lemma \ref{Useful lema Centrals}, $[\vec{0},\vec{a}]\in \mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{e}_{1})$ and $[\vec{a},\vec{e}_{2}]\in \mathsf{Cg}^{\mathbf{A}}(\vec{1},\vec{e}_{1})$. From Theorem \ref{charcoextensivity} (3), we get $[\vec{0},f(\vec{a})]\in \mathsf{Cg}^{\mathbf{B}}(\vec{0},f(\vec{e}_{1}))$ and $[f(\vec{a}),f(\vec{e}_{2})]\in \mathsf{Cg}^{\mathbf{B}}(\vec{1},f(\vec{e}_{1}))$. Therefore, again by Lemma \ref{Useful lema Centrals}, we conclude that $f|_{Z(\mathbf{A})}$ preserves the meet of $\mathbf{Z}(\mathbf{A})$. The proof that $f|_{Z(\mathbf{A})}$ preserves the join of $\mathbf{Z}(\mathbf{A})$ is analogous. This concludes the proof. \end{proof} Observe that, as a result of Lemma \ref{centrals morphisms boole}, the functor $Z$ can be extended to a functor $\mathbf{Z}:\mathcal{V}\rightarrow \mathsf{Boole}$. We can take advantage of this fact in order to extend the representable functor $\mathcal{V}(\mathbf{0}\times\mathbf{0},-)$ to a functor from $\mathcal{V}$ to $\mathsf{Boole}$, which we will denote by $\mathcal{H}$. Indeed, if $\mathbf{A}\in \mathcal{V}$ we can endow $\mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$ with a Boolean algebra structure in such a way that the resulting algebra is isomorphic to $\mathbf{Z}(\mathbf{A})$. If $g,h\in \mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$, by using Theorem \ref{centrals bijection} we define: \[ \begin{array}{rcl} 0 & := & \text{!`}_{\mathbf{A}}\times \text{!`}_{\mathbf{1}} \\ 1 & := & \text{!`}_{\mathbf{1}}\times \text{!`}_{\mathbf{A}} \\ g^{c} & := & \text{!`}_{\vec{1},\varphi_{\mathbf{A}}(g)}\times \text{!`}_{\vec{0},\varphi_{\mathbf{A}}(g)} \\ g\wedge h & := & \text{!`}_{\vec{0},\varphi_{\mathbf{A}}(g)\wedge_{\mathbf{A}} \varphi_{\mathbf{A}}(h)}\times \text{!`}_{\vec{1},\varphi_{\mathbf{A}}(g)\wedge_{\mathbf{A}} \varphi_{\mathbf{A}}(h)} \\ g\vee h & := & \text{!`}_{\vec{0},\varphi_{\mathbf{A}}(g)\vee_{\mathbf{A}} \varphi_{\mathbf{A}}(h)}\times \text{!`}_{\vec{1},\varphi_{\mathbf{A}}(g)\vee_{\mathbf{A}} \varphi_{\mathbf{A}}(h)}. 
\end{array} \] Notice that, thanks to the natural isomorphism between $Z$ and the representable functor $\mathcal{V}(\mathbf{0}\times\mathbf{0},-)$, the functoriality of $\mathcal{H}$ is guaranteed. \begin{coro}\label{centrals representable} Let $\mathcal{V}$ be a coextensive variety and consider the functors $\mathbf{Z}$ and $\mathcal{H}$ from $\mathcal{V}$ to $\mathsf{Boole}$. Then $\mathbf{Z}$ and $\mathcal{H}$ are naturally isomorphic. \end{coro} \begin{proof} Immediate from Theorem \ref{centrals bijection} and Lemma \ref{centrals morphisms boole}. \end{proof} At this stage, one may wonder whether a characterization of coextensive varieties in terms of the representability of the functor $Z$ can be established. We bear in mind that in \cite{Z2021} it was shown that not every variety with BFC, $\vec{0}$ and $\vec{1}$ is center stable and that, even if it is, it may fail to be coextensive. We claim that the next result provides an effective answer to this question. \begin{theo}\label{converse coextensive} Let $\mathcal{V}$ be a center-stable variety with BFC, $\vec{0}$ and $\vec{1}$. Then, the following are equivalent: \begin{itemize} \item[(1)] $\mathcal{V}$ is coextensive. \item[(2)] The following conditions hold: \begin{itemize} \item[(i)] The functor $Z:\mathcal{V}\rightarrow \mathsf{Set}$ is representable by $\mathbf{0}\times \mathbf{0}$. \item[(ii)] The functor $\times: \mathcal{V}/\mathbf{0}\times \mathcal{V}/\mathbf{0} \rightarrow \mathcal{V}/(\mathbf{0}\times \mathbf{0})$ is full and faithful. \end{itemize} \end{itemize} \end{theo} \begin{proof} We only prove $(2)\Rightarrow (1)$, because the converse follows from Theorem \ref{centrals bijection} and the dual of Lemma 1 of \cite{CPR2001}. Let us assume $(2)$. We start by noticing that, from Theorem 3.4.5 of \cite{B1994}, $\mathcal{V}$ is cocomplete; in particular, it has pushouts along projections. In addition, by $(i)$ there exists a natural isomorphism $\varphi$ from $\mathcal{V}(\mathbf{0}\times \mathbf{0}, -)$ to $Z$. If we write $\mu$ for the inverse natural transformation of $\varphi$, then for every $\mathbf{A}, \mathbf{B}\in \mathcal{V}$, $g\in \mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$, $\vec{e}\in Z(\mathbf{A})$ and every homomorphism $f:\mathbf{A}\rightarrow \mathbf{B}$, the following identities hold: \begin{eqnarray} f(\varphi_{\mathbf{A}}(g))=\varphi_{\mathbf{B}}(fg) \label{1a} \\ f(\mu_{\mathbf{A}}(\vec{e}))= \mu_{\mathbf{B}}(f(\vec{e})) \label{1b} \end{eqnarray} Moreover, for every $\mathbf{B}_{0}, \mathbf{B}_{1}\in \mathcal{V}$, $\varphi_{\mathbf{B}_{0}\times \mathbf{B}_{1}}(\text{!`}_{\mathbf{B}_{0}}\times \text{!`}_{\mathbf{B}_{1}})=[\vec{0}_{\mathbf{B}_{0}},\vec{1}_{\mathbf{B}_{1}}]$. Since $\mathcal{V}$ is a variety with $\vec{0}$ and $\vec{1}$, it admits constant symbols; hence, from Proposition 2.1 of \cite{B2021}, products are codisjoint. Now, let $\mathbf{A},\mathbf{A}_{0},\mathbf{A}_{1}\in \mathcal{V}$, let $\mathbf{A}_{0} \xleftarrow[]{p_{0}} \mathbf{A} \xrightarrow[]{p_{1}} \mathbf{A}_{1}$ be a span, and let $g\in \mathcal{V}(\mathbf{0}\times \mathbf{0}, \mathbf{A})$. 
Consider the following diagram, in which the upper left and right squares are pushouts: \begin{displaymath} \xymatrix{ \mathbf{0} \ar[d]^-{\text{!`}_{\mathbf{A}_{0}}} \ar@/_1pc/[dd]_-{\text{!`}_{\mathbf{B}_{0}}} & \ar[l]_-{\pi_{0}} \ar[r]^-{\pi_{1}} \ar[d]_-{g} \mathbf{0}\times \mathbf{0} & \mathbf{0} \ar[d]_-{\text{!`}_{\mathbf{A}_{1}}} \ar@/^1pc/[dd]^-{\text{!`}_{\mathbf{B}_{1}}} \\ \mathbf{A}_{0} \ar@{-->}[d]^-{a_{0}} & \ar[l]^-{p_{0}} \ar[r]_-{p_{1}} \mathbf{A} \ar[d]_-{h} & \mathbf{A}_{1} \ar@{-->}[d]_-{a_{1}} \\ \mathbf{B}_{0} & \ar[l]^-{q_{0}} \ar[r]_-{q_{1}} \mathbf{B}_{0}\times \mathbf{B}_{1} & \mathbf{B}_{1} } \end{displaymath} We will prove that the aforementioned span is a product diagram. Since $\varphi_{\mathbf{A}}(g)\in Z(\mathbf{A})$, it determines a pair of complementary factor congruences on $\mathbf{A}$, so we have an isomorphism $h:\mathbf{A}\rightarrow \mathbf{B}_{0}\times \mathbf{B}_{1}$. Now we prove that $hg=\text{!`}_{\mathbf{B}_{0}}\times \text{!`}_{\mathbf{B}_{1}}$. To do so, we need to check that $q_{0}(hg)=\text{!`}_{\mathbf{B}_{0}}\pi_{0}$ and $q_{1}(hg)=\text{!`}_{\mathbf{B}_{1}}\pi_{1}$. We only check the first condition because the proof of the second one is similar. Observe that: \begin{displaymath} \begin{array}{ccll} q_{0}(hg) & = & q_{0}h(\mu_{\mathbf{A}}(\varphi_{\mathbf{A}}(g))), & \text{since $\mu_{\mathbf{A}}(\varphi_{\mathbf{A}}(g))=g$.} \\ & = & \mu_{\mathbf{B}_{0}}(q_{0}h(\varphi_{\mathbf{A}}(g))), & \text{from (\ref{1b}) applied to $q_{0}h$}. \\ & = & \mu_{\mathbf{B}_{0}}(q_{0}[\vec{0}_{\mathbf{B}_{0}},\vec{1}_{\mathbf{B}_{1}}]), & \text{since $h(\varphi_{\mathbf{A}}(g))=[\vec{0}_{\mathbf{B}_{0}},\vec{1}_{\mathbf{B}_{1}}]$}. \\ & = & \mu_{\mathbf{B}_{0}}(\vec{0}_{\mathbf{B}_{0}}). & \end{array} \end{displaymath} On the other hand, \begin{displaymath} \begin{array}{ccll} \varphi_{\mathbf{B}_{0}}(\text{!`}_{\mathbf{B}_{0}}\pi_{0}) & = & \varphi_{\mathbf{B}_{0}}(q_{0}(\text{!`}_{\mathbf{B}_{0}}\times \text{!`}_{\mathbf{B}_{1}})) & \\ & = & q_{0}(\varphi_{\mathbf{B}_{0}\times \mathbf{B}_{1}}(\text{!`}_{\mathbf{B}_{0}}\times \text{!`}_{\mathbf{B}_{1}})), & \text{from (\ref{1a}).} \\ & = & q_{0}[\vec{0}_{\mathbf{B}_{0}},\vec{1}_{\mathbf{B}_{1}}] & \\ & = & \vec{0}_{\mathbf{B}_{0}}. & \end{array} \end{displaymath} Therefore, combining both computations, we obtain: \[ \text{!`}_{\mathbf{B}_{0}}\pi_{0}=\mu_{\mathbf{B}_{0}}(\varphi_{\mathbf{B}_{0}}(\text{!`}_{\mathbf{B}_{0}}\pi_{0}))=\mu_{\mathbf{B}_{0}}(\vec{0}_{\mathbf{B}_{0}})=q_{0}(hg). \] So $hg=\text{!`}_{\mathbf{B}_{0}}\times \text{!`}_{\mathbf{B}_{1}}$, as claimed. Recall that the latter implies that $q_{0}(hg)=\text{!`}_{\mathbf{B}_{0}}\pi_{0}$ and $q_{1}(hg)=\text{!`}_{\mathbf{B}_{1}}\pi_{1}$. Thus, since each of the upper squares of the diagram above is a pushout by assumption, there exist unique $a_{j}:\mathbf{A}_{j}\rightarrow \mathbf{B}_{j}$, $j=0,1$, such that each of the lower squares of the diagram commutes. From $(ii)$ and the dual of Lemma 1 of \cite{CPR2001}, the outer left and right squares of the diagram are pushouts. So each of the lower squares of the diagram is a pushout. Since $q_{0}$ and $q_{1}$ are epi and $h$ is an iso, $a_{0}$ and $a_{1}$ must be isos too. Therefore, the span $\mathbf{A}_{0} \xleftarrow[]{p_{0}} \mathbf{A} \xrightarrow[]{p_{1}} \mathbf{A}_{1}$ is a product, as desired. Hence, from Proposition \ref{coextensive with zero}, the result follows. \end{proof}

We conclude this part by introducing a particular class of coextensive varieties. It is motivated by the intimate relation it bears to a concrete property of the functor $Z$.
Such a class will be related to some results in Section \ref{The Gaeta topos and fp-coextensive varieties}. \begin{defi}\label{center presentable coextensive} A coextensive variety $\mathcal{V}$ is said to be center presentable if $\mathbf{0}\times \mathbf{0}$ is finitely presentable. \end{defi} \begin{lem}\label{center presentable via Z} Let $\mathcal{V}$ be a coextensive variety. Then $\mathcal{V}$ is center presentable if and only if the functor $Z:\mathcal{V}\rightarrow \mathsf{Set}$ preserves filtering colimits. \end{lem} \begin{proof} From Theorem \ref{centrals bijection}, the functor $Z$ is representable by $\mathbf{0}\times \mathbf{0}$. The result follows from Proposition 3.8.14 of \cite{B1994}. \end{proof}

\subsection{coBoolean varieties and the Beth property} Let $\mathsf{C}$ be a category with finite limits. We recall that $\mathsf{C}$ has a \textit{subobject classifier} if there is a mono $\top: 1\rightarrow \Omega$ in $\mathsf{C}$ such that for every object $X$ and mono $m:S\rightarrow X$ in $\mathsf{C}$, there exists a unique $\chi_{m}: X\rightarrow \Omega$ such that the following diagram \begin{displaymath} \xymatrix{ S \ar[r]^-{!} \ar[d]_-{m} & 1 \ar[d]^-{\top} \\ X \ar[r]_-{\chi_{m}} & \Omega } \end{displaymath} is a pullback. \begin{defi}\label{quotient coclassifier} A category with finite colimits $\mathsf{D}$ has a quotient coclassifier if $\mathsf{D}^{\mathrm{op}}$ has a subobject classifier. \end{defi} The following definition is an adaptation of Definition 4.2 of \cite{CW1993}. \begin{defi}\label{coBoolean variety} A coextensive variety $\mathcal{V}$ is said to be coBoolean if the first projection $\pi: \mathbf{0}\times \mathbf{0}\rightarrow \mathbf{0}$ is a quotient coclassifier. \end{defi} Recall that in the case of a variety, quotients are completely determined by congruences. Therefore, if $\mathcal{V}$ is coextensive and coBoolean, by Theorem \ref{centrals bijection} it is the case that $\mathsf{Con}(\mathbf{A})\cong Z(\mathbf{A})$, for every $\mathbf{A}\in \mathcal{V}$. This observation motivates the following definition. \begin{defi}\label{varieties with boolean congruences} Let $\mathcal{V}$ be a variety with BFC. We say that $\mathcal{V}$ is congruence-factor if $\mathsf{Con}(\mathbf{A})\cong\mathsf{FC}(\mathbf{A})$, for every $\mathbf{A}\in \mathcal{V}$. \end{defi} \begin{rem} We say that a variety $\mathcal{V}$ has \textit{Boolean congruences} if $\mathsf{Con}(\mathbf{A})$ is a Boolean algebra, for every $\mathbf{A}\in \mathcal{V}$. If in particular $\mathcal{V}$ is congruence-distributive, then from Theorem 4 of \cite{KEa1986} it follows that $\mathcal{V}$ is semisimple. Therefore, it is immediate from Definition \ref{varieties with boolean congruences} that congruence-factor varieties are semisimple and arithmetical. \end{rem} Let $\mathsf{C}$ be a category with binary products. We say that a morphism $X\rightarrow Y$ of $\mathsf{C}$ is the \emph{projection of a product} if there exists a morphism $X\rightarrow Z$ in $\mathsf{C}$ such that the span $Y\leftarrow X\rightarrow Z$ is a product.
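For intuition, we include a simple example of our own (anticipating the variety $\mathcal{R}$ of commutative rings with unit treated in Section \ref{The Gaeta topos and fp-coextensive varieties}): in $\mathcal{R}$, the quotient $\mathbb{Z}/6\rightarrow \mathbb{Z}/2$ is the projection of a product, since by the Chinese remainder theorem the span \[ \mathbb{Z}/2 \longleftarrow \mathbb{Z}/6 \longrightarrow \mathbb{Z}/3 \] is a product diagram; by contrast, the quotient $\mathbb{Z}\rightarrow \mathbb{Z}/2$ is not the projection of a product, because $\mathbb{Z}$ has no idempotents other than $0$ and $1$ and hence admits no non-trivial binary product decomposition.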
Observe that when $\mathsf{C}$ is a variety, the projections of a product are unique up to isomorphism in the following way: if $f:\mathbf{A}\rightarrow \mathbf{B}$ is the projection of a product and $g:\mathbf{A}\rightarrow \mathbf{C}$ and $g':\mathbf{A}\rightarrow \mathbf{C}'$ are such that $\mathbf{A}\cong \mathbf{B}\times \mathbf{C}\cong \mathbf{B}\times \mathbf{C}'$, then $\mathsf{Ker}(f)\diamond \mathsf{Ker}(g)$ and $\mathsf{Ker}(f)\diamond \mathsf{Ker}(g')$, so $\mathsf{Ker}(g)=\mathsf{Ker}(g')$. Then, by general reasons (Lemma 3.3 of \cite{B2001}), there exists a unique isomorphism $i:\mathbf{C}\rightarrow \mathbf{C}'$ such that $ig=g'$. \\ The following result provides a characterization of coBoolean varieties by means of congruence-factor varieties. \begin{lem}\label{characterization coBoolean varieties} Let $\mathcal{V}$ be a coextensive variety. Then, the following are equivalent: \begin{itemize} \item[(1)] $\mathcal{V}$ is coBoolean. \item[(2)] Every epimorphism in $\mathcal{V}$ is the projection of a product. \item[(3)] $\mathcal{V}$ is congruence-factor and every epimorphism is surjective. \end{itemize} \end{lem} \begin{proof} $(1)\Leftrightarrow (2)$. This is a particular case of the dual of Proposition 4.4 of \cite{CW1993}. $(2)\Rightarrow (3)$. In order to check that $\mathcal{V}$ is congruence-factor, let $\mathbf{A}\in \mathcal{V}$ and $\theta\in \mathsf{Con}(\mathbf{A})$. By (2), the quotient map $\mathbf{A}\rightarrow \mathbf{A}/\theta$ is the projection of a product, so there exist $\mathbf{B}\in \mathcal{V}$ and $q:\mathbf{A}\rightarrow \mathbf{B}$ such that $\mathbf{A}/\mathsf{Ker}(q)\cong \mathbf{B}$ and $\mathbf{A}\cong \mathbf{A}/\theta\times \mathbf{B}$. Hence, $\theta$ and $\mathsf{Ker}(q)$ are complementary factor congruences, as desired. Finally, if $e:\mathbf{A}\rightarrow \mathbf{B}$ is an epimorphism, by (2) $e$ is the projection of a product, so in particular $e$ is surjective. $(3)\Rightarrow (2)$. If $e:\mathbf{A}\rightarrow \mathbf{B}$ is an epimorphism, by (3) $e$ is surjective, so $\mathbf{B}\cong \mathbf{A}/\mathsf{Ker}(e)$. Since $\mathcal{V}$ is congruence-factor by assumption, $\mathsf{Ker}(e)$ has a complementary factor congruence $\theta$. Thus $e$ coincides with the projection of the product $\mathbf{B}\times \mathbf{A}/\theta$, as claimed. \end{proof}

We conclude this section with an application of Lemma \ref{characterization coBoolean varieties} concerning algebraizable logics in the sense of Blok and Pigozzi and the Beth (definability) property. The following result reveals that coBoolean varieties can be useful to decide whether an algebraizable logic has the Beth (definability) property. \begin{theo}\label{Beth coBoolean} Let $\mathbb{L}$ be an algebraizable logic with equivalent variety semantics $\mathcal{V}$ and assume that $\mathcal{V}$ is coextensive. Then the following hold: \begin{itemize} \item[(1)] If $\mathcal{V}$ is coBoolean then $\mathbb{L}$ has the Beth (definability) property. \item[(2)] If $\mathcal{V}$ is congruence-factor and $\mathbb{L}$ has the Beth (definability) property, then $\mathcal{V}$ is coBoolean. \end{itemize} \end{theo} \begin{proof} The result follows by a straightforward application of Lemma \ref{characterization coBoolean varieties} (3) and Theorem \ref{Beth definability property}.
\end{proof} \section{The Gaeta topos and fp-coextensive varieties}\label{The Gaeta topos and fp-coextensive varieties} In this section we show that, given a coextensive variety $\mathcal{V}$, the characterization of coextensive varieties obtained in \cite{Z2021} brings a suitable axiomatization of the theory of $\mathcal{V}$-indecomposable objects. Thereafter, we restrict our study to fp-coextensive varieties. In this setting, we provide a criterion which allows us to decide, for a given fp-coextensive variety $\mathcal{V}$, whether the Gaeta topos classifies $\mathcal{V}$-indecomposable objects. Finally, with the aim of furnishing some examples, we apply our results to some particular coextensive varieties of interest in general algebra and algebraic logic. \\ We start by proving some technical results on coextensive varieties which will be used throughout this section. \begin{lem}\label{terms constants collapses in 0 and 1} Let $\mathcal{V}$ be a coextensive variety. Then, for every $n$-ary term $p(\vec{z})$ and constant symbols $c_{1},\ldots,c_{n}$ in the language of $\mathcal{V}$, there exists a $2n$-ary term $q(\vec{x},\vec{y})$ such that \[\mathcal{V} \models p(\vec{c}) = q(\vec{0},\vec{1}). \] \end{lem} \begin{proof} Since $\mathcal{V}$ is coextensive, by Theorem \ref{charcoextensivity} (2), $\mathcal{V}$ is a Pierce variety in which the relation $\vec{e}\diamond_{\mathbf{A}}\vec{f}$ is equationally definable. So, in particular, $\mathcal{V}$ is a variety with $\vec{0}$ and $\vec{1}$. Let \[\sigma(\vec{x},\vec{y})=\bigwedge_{i=1}^{n}p_{i}(\vec{x},\vec{y})=q_{i}(\vec{x},\vec{y})\] define the relation $\vec{e}\diamond_{\mathbf{A}}\vec{f}$ in $\mathcal{V}$. Since $\vec{0}$ and $\vec{1}$ are complementary central elements in $Z(\mathbf{0})$, for every $\mathbf{A}\in \mathcal{V}$ and $1\leq i\leq n$, \begin{equation}\label{equation first} p_{i}^{\mathbf{A}}(\vec{0}^{\mathbf{A}},\vec{1}^{\mathbf{A}})=q_{i}^{\mathbf{A}}(\vec{0}^{\mathbf{A}},\vec{1}^{\mathbf{A}}). \end{equation} Now we consider $X=\{\vec{x},\vec{y},\vec{z}\}$ and $\theta=\bigvee_{i=1}^{n}\mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(X)}(p_{i}(\vec{x},\vec{y}),q_{i}(\vec{x},\vec{y}))$. Let $\mathbf{H}=\mathbf{F}_{\mathcal{V}}(X)/\theta$. Observe that since $p_{i}^{\mathbf{H}}(\vec{x}/\theta,\vec{y}/\theta)=q_{i}^{\mathbf{H}}(\vec{x}/\theta,\vec{y}/\theta)$ for every $1\leq i\leq n$, we have $\vec{x}/\theta\diamond_{\mathbf{H}}\vec{y}/\theta$. Therefore, since \[(p(\vec{z})/\theta,p(\vec{z})/\theta)\in \nabla^{\mathbf{H}}=\mathsf{Cg}^{\mathbf{H}}(\vec{x}/\theta,\vec{0})\circ \mathsf{Cg}^{\mathbf{H}}(\vec{y}/\theta,\vec{0}),\] there exists a term $t(\vec{x},\vec{y},\vec{z})$ such that \begin{eqnarray}\label{equation second} (p^{\mathbf{H}}(\vec{z}/\theta),t^{\mathbf{H}}(\vec{x}/\theta,\vec{y}/\theta,\vec{z}/\theta))\in \mathsf{Cg}^{\mathbf{H}}(\vec{x}/\theta,\vec{0}) \end{eqnarray} and \[(p^{\mathbf{H}}(\vec{z}/\theta),t^{\mathbf{H}}(\vec{x}/\theta,\vec{y}/\theta,\vec{z}/\theta))\in \mathsf{Cg}^{\mathbf{H}}(\vec{y}/\theta,\vec{0}).\] Let $\mathbf{A}\in \mathcal{V}$. Then, from (\ref{equation first}) and Lemma \ref{technicality lemma}, there exists a unique $\Omega:\mathbf{H}\rightarrow \mathbf{A}$ such that $\Omega(\vec{x}/\theta)=\vec{0}^{\mathbf{A}}$, $\Omega(\vec{y}/\theta)=\vec{1}^{\mathbf{A}}$ and $\Omega(\vec{z}/\theta)=\vec{c}^{\mathbf{A}}$. Thus, from (\ref{equation second}), we obtain $p^{\mathbf{A}}(\vec{c}^{\mathbf{A}})=t^{\mathbf{A}}(\vec{0}^{\mathbf{A}},\vec{1}^{\mathbf{A}},\vec{c}^{\mathbf{A}})$.
Hence $\mathcal{V} \models p(\vec{c}) = t(\vec{0},\vec{1},\vec{c})$. Finally, if we define $q(\vec{x},\vec{y})=t(\vec{x},\vec{y},\vec{c})$, then from the latter it is the case that $\mathcal{V} \models p(\vec{c}) = q(\vec{0},\vec{1})$, as required. \end{proof}

Let $\mathcal{V}$ be a coextensive variety. Recall that, again from Theorem \ref{charcoextensivity}, the relation $\vec{e}\diamond_{\mathbf{A}}\vec{f}$ in $\mathcal{V}$ is equationally definable. Let \[\sigma(\vec{x},\vec{y})=\bigwedge_{i=1}^{n}p_{i}(\vec{x},\vec{y})=q_{i}(\vec{x},\vec{y})\] define the relation $\vec{e}\diamond_{\mathbf{A}}\vec{f}$ in $\mathcal{V}$. Now, for the rest of this section we consider: \[\theta=\bigvee_{i=1}^{n}\mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}(p_{i}(\vec{x},\vec{y}),q_{i}(\vec{x},\vec{y})),\] \[\mu=\mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}(\vec{x},\vec{0})\vee \mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}(\vec{y},\vec{1}),\] \[\lambda=\mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}(\vec{x},\vec{1})\vee \mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}(\vec{y},\vec{0}).\] \begin{lem}\label{pre useful lemma} Let $\mathcal{V}$ be a coextensive variety. Then, the following hold: \begin{itemize} \item[(1)] $\theta\subseteq \mu, \lambda$; \item[(2)] $\mu \circ \lambda =\lambda \circ \mu =\nabla^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}$; \item[(3)] $\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\mu \cong \mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\lambda \cong \mathbf{0}$; \item[(4)] $\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/(\mu\cap \lambda)\cong \mathbf{0}\times \mathbf{0}$. \end{itemize} \end{lem} \begin{proof} $(1)$ Since $\mathcal{V}$ is coextensive, by Theorem \ref{charcoextensivity} (2), $\mathcal{V}$ is a Pierce variety in which the relation $\vec{e}\diamond_{\mathbf{A}}\vec{f}$ is equationally definable. So, in particular, by Lemma 4.3 (1) of \cite{Z2021}, $\mathcal{V}$ is a variety with $\vec{0}$ and $\vec{1}$. Since $\vec{0}^{\mathbf{A}},\vec{1}^{\mathbf{A}}\in Z(\mathbf{A})$ for every $\mathbf{A}\in \mathcal{V}$, it is the case that \[\mathcal{V} \models \vec{x}=\vec{0} \wedge \vec{y}=\vec{1} \Longrightarrow \sigma(\vec{x},\vec{y})\] so in particular \[\mathcal{V} \models \vec{x}=\vec{0} \wedge \vec{y}=\vec{1} \Longrightarrow p_{i}(\vec{x},\vec{y})=q_{i}(\vec{x},\vec{y})\] for every $1\leq i\leq n$. Then from Lemma \ref{vey useful lemma} \[\mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}(p_{i}(\vec{x},\vec{y}),q_{i}(\vec{x},\vec{y}))\subseteq \mu\] for every $1\leq i\leq n$. Therefore, $\theta \subseteq \mu$ as required. The proof of $\theta \subseteq \lambda$ is analogous. $(2)$ Let $(s(\vec{x},\vec{y}),t(\vec{x},\vec{y}))\in \nabla^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}$. By Theorem \ref{charcoextensivity} (2), $\mathcal{V}$ is a Pierce variety, so there exists a term $U(x,y,\vec{z},\vec{w})$ in the language of $\mathcal{V}$ such that \[\begin{array}{ccc} U(x,y,\vec{0},\vec{1})=x & \text{and} & U(x,y,\vec{1},\vec{0})=y. \end{array}\] Let us consider $p(\vec{x},\vec{y})=U(s(\vec{x},\vec{y}),t(\vec{x},\vec{y}),\vec{x},\vec{y})$. Observe that $p(\vec{0},\vec{1})=s(\vec{0},\vec{1})$ and $p(\vec{1},\vec{0})=t(\vec{1},\vec{0})$. Thus, it follows that \[\mathcal{V}\models (\vec{x}=\vec{0} \wedge \vec{y}=\vec{1}) \Longrightarrow p(\vec{x},\vec{y})=s(\vec{x},\vec{y}) \] and \[\mathcal{V}\models (\vec{x}=\vec{1} \wedge \vec{y}=\vec{0}) \Longrightarrow p(\vec{x},\vec{y})=t(\vec{x},\vec{y}).
\] Thus, by Lemma \ref{vey useful lemma}, we get \[\begin{array}{ccc} (s(\vec{x},\vec{y}),p(\vec{x},\vec{y}))\in \mu & \text{and} & (p(\vec{x},\vec{y}),t(\vec{x},\vec{y}))\in \lambda. \end{array}\] Hence $\mu\circ \lambda = \nabla^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}$. Finally, we stress that if we take \[q(\vec{x},\vec{y})=U(s(\vec{x},\vec{y}),t(\vec{x},\vec{y}),\vec{y},\vec{x})\] it is not hard to see that $q(\vec{0},\vec{1})= t(\vec{0},\vec{1})$ and $q(\vec{1},\vec{0})= s(\vec{1},\vec{0})$. Therefore, by the same argument we employed before together with Lemma \ref{vey useful lemma}, we get $\lambda\circ \mu = \nabla^{\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})}$, as claimed. $(3)$ Let $\mathbf{H}=\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\mu$ and $\mathbf{H}'=\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\lambda$. We will prove that $\mathbf{H}$ and $\mathbf{H}'$ are both isomorphic to $\mathbf{0}$. We will only exhibit the details of $\mathbf{H}\cong \mathbf{0}$ because the proof of the other part is similar. We start by considering the map $h:H \rightarrow F_{\mathcal{V}}(\emptyset)$ defined as $h(q(\vec{x},\vec{y})/\mu)=q(\vec{0},\vec{1})$. We will show that $h$ is an isomorphism. In order to see that $h$ is well defined, let $q(\vec{x},\vec{y}),q'(\vec{x},\vec{y}) \in F_{\mathcal{V}}(\vec{x},\vec{y})$ and suppose that $q(\vec{x},\vec{y})/\mu=q'(\vec{x},\vec{y})/\mu$. Thus $(q(\vec{x},\vec{y}),q'(\vec{x},\vec{y}))\in \mu$. Then, from Lemma \ref{vey useful lemma}, we get \[\mathcal{V}\models (\vec{x}=\vec{0} \wedge \vec{y}=\vec{1}) \Longrightarrow q(\vec{x},\vec{y})=q'(\vec{x},\vec{y}), \] so, in particular, $q^{F_{\mathcal{V}}(\emptyset)}(\vec{0},\vec{1})=q'^{F_{\mathcal{V}}(\emptyset)}(\vec{0},\vec{1})$, as claimed. Moreover, notice that the same argument applied in the reverse direction allows us to prove that $h$ is injective. The surjectivity of $h$ follows from Lemma \ref{terms constants collapses in 0 and 1}. Finally, it is clear that $h$ is a homomorphism. Hence $\mathbf{H}\cong \mathbf{0} \cong \mathbf{H}'$, as desired. $(4)$ Immediate from $(2)$, $(3)$ and Lemma \ref{factor congruence quotinents}. \end{proof}

\subsection{$\mathcal{V}$-indecomposable objects in a topos}\label{Indecomposable models in a Topos} Let $\mathcal{V}$ be a coextensive variety. An algebra $\mathbf{A}$ of $\mathcal{V}$ is said to be \emph{$\mathcal{V}$-indecomposable} if $\mathbf{A}\ncong \mathbf{1}$ and it is indecomposable by binary products; i.e., if $\mathbf{A}\cong \mathbf{B}\times \mathbf{C}$, then $\mathbf{B}\cong \mathbf{1}$ or $\mathbf{C}\cong \mathbf{1}$. The following result shows that the theory of central elements provides an axiomatization of the class of $\mathcal{V}$-indecomposable objects. \begin{lem}\label{Theory of connected models} The class of $\mathcal{V}$-indecomposable objects is axiomatizable by a first order formula. \end{lem} \begin{proof} From Theorem \ref{charcoextensivity} (2), the relation $\vec{e}\diamond_{\mathbf{A}} \vec{f}$ is equationally definable in $\mathcal{V}$. So we can take $\sigma(\vec{x},\vec{y})$ as an equation defining such a relation.
It is immediate that $\mathbf{A}\in \mathcal{V}$ is $\mathcal{V}$-indecomposable if and only if in $\mathbf{A}$ the following sentence holds \begin{center} $\vec{0}\neq \vec{1}$ and $(\forall_{\vec{e},\vec{f}} \sigma(\vec{e},\vec{f})\Rightarrow ((\vec{e}=\vec{0} \wedge \vec{f}=\vec{1})\vee (\vec{e}=\vec{1} \wedge \vec{f}=\vec{0}))).$ \end{center} \end{proof} Observe that from Lemma \ref{Theory of connected models}, it follows that an algebra $\mathbf{A}$ in $\mathcal{V}$ is $\mathcal{V}$-indecomposable if and only if the sequents \[\vec{0}=\vec{1} \vdash \perp\] \[\sigma(\vec{x},\vec{y})\vdash_{\vec{x},\vec{y}} (\vec{x}=\vec{0} \wedge \vec{y}=\vec{1}) \vee(\vec{x}=\vec{1} \wedge \vec{y}=\vec{0})\] hold in $\mathbf{A}$. \\ Let $\mathcal{V}$ be a coextensive variety, $f$ be a symbol in the language of $\mathcal{V}$, $\mathsf{E}$ be a topos and $M$ be an object of $\mathsf{E}$. If we write $a_{f}$ for the arity of $f$ (which is a natural number), recall (see D1.2.1 of \cite{J2002}) that the interpretation of $f$ in $M$ is a morphism $f_M:M^{a_{f}}\rightarrow M$. Thus a \emph{$\mathcal{V}$-model in $\mathsf{E}$} is an object $M$ of $\mathsf{E}$ equipped with morphisms $f_M:M^{a_{f}}\rightarrow M$ for every symbol $f$ in the language of $\mathcal{V}$ for which the defining identities of $\mathcal{V}$ (expressed by diagrams in $\mathsf{E}$) hold. Moreover, a homomorphism between $\mathcal{V}$-models $M$ and $R$ in $\mathsf{E}$ is an arrow $h:M\rightarrow R$ in $\mathsf{E}$ making the diagram \begin{displaymath} \xymatrix{ M^{a_{f}}\ar[r]^-{h^{a_f}} \ar[d]_-{f_M} & R^{a_{f}} \ar[d]^-{f_R} \\ M \ar[r]_-{h} & R } \end{displaymath} commute for every symbol $f$ in the language of $\mathcal{V}$ (here $h^{a_f}$ denotes the $a_{f}$-fold product morphism of $h$). This information defines the category of $\mathcal{V}$-models in $\mathsf{E}$. Notice that, in particular, when regarding the topos $\mathsf{Set}$, the category of $\mathcal{V}$-models coincides with $\mathcal{V}$. In order to illustrate the latter, let us consider the variety $\mathcal{DL}_{01}$ of bounded distributive lattices. Then a $\mathcal{DL}_{01}$-model in $\mathsf{E}$ is an object $L$ of $\mathsf{E}$ endowed with arrows \begin{displaymath} \xymatrix{ 1 \ar@< 2pt> [r]^-{1_L} \ar@<-2pt> [r]_-{0_L} & L & \ar@< 2pt> [l]^-{\wedge_L} \ar@<-2pt> [l]_-{\vee_L} L\times L } \end{displaymath} such that the equations defining $\mathcal{DL}_{01}$ hold. For instance, the commutativity of the meet can be expressed by the commutativity of the following diagram in $\mathsf{E}$: \begin{displaymath} \xymatrix{ L\times L \ar[r]^-{\langle \pi_{1},\pi_{2}\rangle} \ar[d]_-{\langle \pi_{2},\pi_{1}\rangle} & L\times L \ar[d]^-{\wedge_{L}} \\ L\times L \ar[r]_-{\wedge_{L}} & L } \end{displaymath} Now, motivated by the observation made right after Lemma \ref{Theory of connected models}, we introduce the following: \begin{defi}\label{indecomposable sequents} Let $\mathsf{E}$ be a topos. A $\mathcal{V}$-model $M$ of $\mathsf{E}$ is $\mathcal{V}$-indecomposable if the sequents \[\vec{0}=\vec{1} \vdash \perp\] \[\sigma(\vec{x},\vec{y})\vdash_{\vec{x},\vec{y}} (\vec{x}=\vec{0} \wedge \vec{y}=\vec{1}) \vee(\vec{x}=\vec{1} \wedge \vec{y}=\vec{0})\] hold in the internal logic of $\mathsf{E}$. \end{defi} We stress that Definition \ref{indecomposable sequents} can be rendered in categorical terms.
To do so, let $M$ be a $\mathcal{V}$-model in a topos $\mathsf{E}$ and let $\vec{0}_{M}$, $\vec{1}_{M}$ and $[\sigma(\vec{x},\vec{y})]_{M}$ be the interpretations in $M$ of the constants $\vec{0}$ and $\vec{1}$ and of an equation $\sigma(\vec{x},\vec{y})$ defining the relation $\vec{e}\diamond_{\mathbf{A}}\vec{f}$ in $\mathcal{V}$, respectively (for details, see D1.2.6 of \cite{J2002}). Now, let us consider the elements $\langle \vec{0}_{M},\vec{1}_{M}\rangle: 1\rightarrow M^{N}\times M^{N}$ and $\langle \vec{1}_{M},\vec{0}_{M}\rangle: 1\rightarrow M^{N}\times M^{N}$. If $\alpha=[\langle \vec{0}_{M},\vec{1}_{M} \rangle, \langle \vec{1}_{M},\vec{0}_{M}\rangle]$ denotes the morphism from $1+1$ to $[\sigma(\vec{x},\vec{y})]_{M}$ induced by the coproduct, then a basic exercise in the internal logic of toposes shows the following: \begin{lem}\label{Connected VModels in a topos} Let $\mathsf{E}$ be a topos and let $M$ be a $\mathcal{V}$-model in $\mathsf{E}$. The following are equivalent: \begin{itemize} \item[(1)] $M$ is $\mathcal{V}$-indecomposable, \item[(2)] The diagram below \begin{displaymath} \xymatrix@1{ 0 \ar[r]^-{\text{!`}_{1}} & 1 \ar@<0.5ex>[r]^-{\vec{1}_M} \ar@<-0.5ex>[r]_-{\vec{0}_M} & M^{N} } \end{displaymath} is an equalizer in $\mathsf{E}$, and the morphism $\alpha: 1+1\rightarrow [\sigma(\vec{x},\vec{y})]_{M}$ is an isomorphism. \item[(3)] In the internal logic of $\mathsf{E}$, the following sequents hold: \begin{center} \begin{itemize} \item[(C1)] $\vec{0}=\vec{1} \vdash \perp$, \item[(C2)] $\sigma (\vec{x},\vec{y}) \vdash_{\vec{x},\vec{y}} (\vec{x}=\vec{0} \wedge \vec{y}=\vec{1}) \vee (\vec{x}=\vec{1} \wedge \vec{y}=\vec{0}).$ \end{itemize} \end{center} \end{itemize} \end{lem}

\subsection{The characterization} Let $\mathsf{C}$ be a small extensive category. For every object $X$ of $\mathsf{C}$, we say that $\{f_{i}:X_{i}\rightarrow X\mid i\in I\}\in K_{\mathcal{G}}(X)$ if and only if $I$ is finite and the induced arrow $\Sigma X_{i}\rightarrow X$ is an isomorphism. From the extensivity condition it follows that $K_{\mathcal{G}}$ is a basis for a Grothendieck topology over $\mathsf{C}$ (see III.2.1 of \cite{MM2012}). The topology $J_{\mathcal{G}}$ generated by such a basis is called the \emph{Gaeta topology}, and the \emph{Gaeta topos} $\mathcal{G}(\mathsf{C})$ is the topos of sheaves on the site $(\mathsf{C}, J_{\mathcal{G}})$. As observed in \cite{CPR2001}, $\mathcal{G}(\mathsf{C})$ is equivalent to the category $\mathsf{Lex}(\mathsf{C}^{\mathrm{op}},\mathsf{Set})$ of finite product preserving functors from the category $\mathsf{C}^{\mathrm{op}}$, which has finite products, to $\mathsf{Set}$. This fact implies that $J_{\mathcal{G}}$ is subcanonical. If $\mathsf{C}$ has a terminal object $1$, from Proposition 4.1 of \cite{CW1993}, it follows that $\mathsf{C}$ is extensive if and only if the canonical functors $1\rightarrow \mathsf{C}/0$ and $\mathsf{C}/(1+1)\rightarrow \mathsf{C}\times \mathsf{C}$ are equivalences. \\ \begin{rem}\label{Gaeta continous} Let $\mathsf{C}$ be an extensive category with a terminal object $1$ and let $\mathsf{E}$ be a topos. Notice that a finite limit preserving functor $G:\mathsf{C} \rightarrow \mathsf{E}$ is continuous (see VII.7 of \cite{MM2012}) with respect to the Gaeta topology over $\mathsf{C}$ if and only if $G(0)\cong 0$ and $G(1+1)\cong 1+1$; i.e., it preserves binary coproducts. \end{rem} Let $\mathcal{V}$ be a coextensive variety. We write $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})$ for the full subcategory of finitely presented algebras of $\mathcal{V}$.
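Before proceeding, let us record a concrete instance of a Gaeta cover, as an illustration of our own in the variety $\mathcal{R}$ of commutative rings with unit (treated in the applications below, where it is shown to be fp-coextensive, so that $\mathsf{C}=\textsf{Mod}_{\textsf{fp}}(\mathcal{R})^{\mathrm{op}}$ is extensive). Since binary coproducts in $\mathsf{C}$ are binary products of algebras, a finite family of quotients $\{\mathbf{A}\rightarrow \mathbf{A}_{i}\mid i\in I\}$ belongs to $K_{\mathcal{G}}(\mathbf{A})$ precisely when the induced homomorphism $\mathbf{A}\rightarrow \prod_{i\in I}\mathbf{A}_{i}$ is an isomorphism. For instance, by the Chinese remainder theorem, \[ \{\mathbb{Z}/12\rightarrow \mathbb{Z}/4,\ \mathbb{Z}/12\rightarrow \mathbb{Z}/3\}\in K_{\mathcal{G}}(\mathbb{Z}/12), \quad \text{since}\quad \mathbb{Z}/12\cong \mathbb{Z}/4\times \mathbb{Z}/3. \]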
Let $\mathsf{E}$ be a topos. Due to Lawvere's duality \cite{L1963}, it is known that the category of $\mathcal{V}$-models in $\mathsf{E}$ is equivalent to the category $\mathsf{Lex}(\textsf{Mod}_{\textsf{fp}}(\mathcal{V})^{\mathsf{op}}, \mathsf{E})$ of finite limit preserving functors. So, for every $\mathcal{V}$-model $M$ in $\mathsf{E}$, there exists an essentially unique finite limit preserving functor $\phi_{M}:\textsf{Mod}_{\textsf{fp}}(\mathcal{V})^{\mathsf{op}} \rightarrow \mathsf{E}$ such that $\phi_{M}(\textbf{F}_{\mathcal{V}}(x))\cong M$. In what follows we will refer to $\phi_{M}$ as the \emph{representative} of $M$. It is worth mentioning that in the case of $\mathsf{E}=\mathsf{Set}$, the representative of $\textbf{F}_{\mathcal{V}}(x)$ reflects isomorphisms. \begin{theo}\label{characterizaction coextensive fpmod coextensive} Let $\mathcal{V}$ be a coextensive variety. Then $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})$ is coextensive if and only if binary products of finitely generated free algebras of $\mathcal{V}$ are finitely presented. \end{theo} \begin{proof} Let $\mathbf{F}_{\mathcal{V}}(n)/\theta$ and $\mathbf{F}_{\mathcal{V}}(m)/\delta$ be finitely presented algebras of $\mathcal{V}$. Then, $\theta$ and $\delta$ are finitely generated congruences on $\mathbf{F}_{\mathcal{V}}(n)$ and $\mathbf{F}_{\mathcal{V}}(m)$, respectively. From Lemma 4.3 (4) of \cite{Z2021}, $\mathcal{V}$ has the Fraser--Horn property, thus \begin{equation}\label{auxiliar} \mathbf{F}_{\mathcal{V}}(n)/\theta \times \mathbf{F}_{\mathcal{V}}(m)/\delta \cong (\mathbf{F}_{\mathcal{V}}(n) \times \mathbf{F}_{\mathcal{V}}(m))/(\theta \times \delta). \end{equation} So, due to Theorem 3 (6) of \cite{FH1970}, $\theta \times \delta$ is finitely generated. Since $\mathbf{F}_{\mathcal{V}}(n) \times \mathbf{F}_{\mathcal{V}}(m)$ is finitely presented by assumption, there exist variables $x_{1},\ldots,x_{k}$ and a finitely generated congruence $\epsilon$ on $\mathbf{F}_{\mathcal{V}}(k)$ such that $\mathbf{F}_{\mathcal{V}}(n) \times \mathbf{F}_{\mathcal{V}}(m)\cong \mathbf{F}_{\mathcal{V}}(k)/\epsilon$. Thus, from Theorem 6.20 of \cite{BS1981}, there exists a compact congruence $\gamma$ on $\mathbf{F}_{\mathcal{V}}(k)$, with $\epsilon \subseteq \gamma$, such that $\theta \times \delta = \gamma/\epsilon$. Therefore, from Theorem 6.15 of \cite{BS1981} and (\ref{auxiliar}), we conclude that $\mathbf{F}_{\mathcal{V}}(n)/\theta \times \mathbf{F}_{\mathcal{V}}(m)/\delta$ is finitely presented, so $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})$ has finite products and, consequently, it is coextensive. On the other hand, if $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})$ is coextensive, it has finite products. Observe that since $\mathcal{V}$ is coextensive, from Lemma 4.3 of \cite{Z2021}, $\Delta^{\mathbf{A}}=\mathsf{Cg}^{\mathbf{A}}(\vec{0},\vec{0})$ for every $\mathbf{A}\in \mathcal{V}$. In particular, this implies that every finitely generated free algebra of $\mathcal{V}$ is finitely presented, so binary products of them must be finitely presented. This concludes the proof. \end{proof} A coextensive variety $\mathcal{V}$ is said to be \emph{fp-coextensive} if it satisfies any of the equivalent conditions of Theorem \ref{characterizaction coextensive fpmod coextensive}. The following result immediately establishes a relation between fp-coextensive varieties and center presentable varieties (see Definition \ref{center presentable coextensive}). \begin{coro}\label{fp implies center presentable} Every fp-coextensive variety is center presentable.
\end{coro} At this stage one may be wondering whether the finiteness of the type of a coextensive variety plays any r\^ole in deciding fp-coextensivity. The next result provides an answer to this question. \begin{prop}\label{locally finite and finite type center presentable} Let $\mathcal{V}$ be a coextensive variety of finite type. If $\mathcal{V}$ is locally finite, then it is fp-coextensive. So, in particular, the functor $Z$ preserves filtering colimits. \end{prop} \begin{proof} Let us assume that $\mathcal{V}$ is of finite type, coextensive and locally finite. Let $\mathbf{F}_{\mathcal{V}}(n)$ and $\mathbf{F}_{\mathcal{V}}(m)$ be finitely generated free algebras of $\mathcal{V}$. From Theorem 10.15 of \cite{BS1981}, the set $X=F_{\mathcal{V}}(n)\times F_{\mathcal{V}}(m)$ is finite. We stress that $\mathbf{F}_{\mathcal{V}}(n)\times \mathbf{F}_{\mathcal{V}}(m)$ is finitely presented because, from Corollary II.10.11 of \cite{BS1981}, such an algebra is in fact isomorphic to $\mathbf{F}_{\mathcal{V}}(X)$ quotiented by the finitely many conditions which describe the operations in $\mathbf{F}_{\mathcal{V}}(n)\times \mathbf{F}_{\mathcal{V}}(m)$. The last part follows from Corollary \ref{fp implies center presentable}. \end{proof}

Let $\mathcal{V}$ be an fp-coextensive variety. Let $p_{1},\ldots,p_{k},q_{1},\ldots, q_{k}$ be terms in the language of $\mathcal{V}$ in the variables $y_{1},\ldots,y_{l}$. If $\delta$ denotes the congruence $\bigvee_{i=1}^{k}\mathsf{Cg}^{\mathbf{F}_{\mathcal{V}}(\vec{y})}(p_{i}(\vec{y}),q_{i}(\vec{y}))$ and $\mathbf{A}$ denotes the algebra $\mathbf{F}_{\mathcal{V}}(\vec{y})/\delta$, observe that the representative $\phi_{M}$ of a $\mathcal{V}$-model $M$ in a topos $\mathsf{E}$ sends the finitely presented algebra $\mathbf{A}$ to the following equalizer in $\mathsf{E}$: \[\xymatrix{ \phi_{M}(\mathbf{A}) \ar@{>->}[r] & M^{l} \ar@< 2pt>[rr]^-{\langle p_{M_1},\ldots,p_{M_k}\rangle} \ar@<-2pt>[rr]_-{\langle q_{M_1},\ldots,q_{M_k}\rangle} & & M^{k} } \] where $p_{M_i}$ and $q_{M_i}$ denote the interpretations in $M$ of the terms $p_{i}$ and $q_{i}$, respectively, with $1\leq i\leq k$. That is, the image of $\mathbf{A}$ by $\phi_{M}$ essentially coincides in $\mathsf{E}$ with the interpretation in $M$ of the formula \[\varepsilon(\vec{y})=\bigwedge_{i=1}^{k} p_{i}(\vec{y})=q_{i}(\vec{y}).\] In what follows, we write $\mathcal{G}(\mathcal{V})$ for the Gaeta topos determined by the extensive category $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})^{\mathsf{op}}$. We recall that from VII.7.4 of \cite{MM2012} there is an equivalence between the category $\mathsf{Geo}(\mathsf{E},\mathcal{G}(\mathcal{V}))$ of geometric morphisms from $\mathsf{E}$ to $\mathcal{G}(\mathcal{V})$ and the category $\mathsf{LexCon}(\textsf{Mod}_{\textsf{fp}}(\mathcal{V})^{\mathsf{op}},\mathsf{E})$ of limit preserving functors from $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})^{\mathsf{op}}$ to $\mathsf{E}$ which are continuous with respect to the Gaeta topology over $\textsf{Mod}_{\textsf{fp}}(\mathcal{V})^{\mathsf{op}}$. \\ As a result of the above discussion, we can now restate Lemma \ref{Connected VModels in a topos} by means of the representative of a $\mathcal{V}$-model in a topos $\mathsf{E}$. \begin{lem}\label{V-indecomposable in a topos} Let $\mathcal{V}$ be an fp-coextensive variety, $\mathsf{E}$ be a topos and let $M$ be a $\mathcal{V}$-model in $\mathsf{E}$. Let $\phi_{M}$ be the representative of $M$.
Then, the following are equivalent: \begin{itemize} \item[(1)] $M$ is $\mathcal{V}$-indecomposable in $\mathsf{E}$. \item[(2)] $\phi_{M}(\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta)\cong 1+1$ and $\phi_{M}(\mathbf{1})\cong 0$. \end{itemize} Moreover, if $\mathsf{E}=\mathsf{Set}$ and $M=\mathbf{F}_{\mathcal{V}}(x)$, any of the above conditions is equivalent to the fact that $\phi_{\mathbf{F}_{\mathcal{V}}(x)}$ preserves finite coproducts. \end{lem} \begin{proof} Let $M$ be a $\mathcal{V}$-model in a topos $\mathsf{E}$. Notice that \[[\sigma(\vec{x},\vec{y})]_{M}\cong \phi_{M}(\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta)\] and \[[\vec{0}=\vec{1}]_{M} \cong \phi_{M}(\mathbf{0}/\mathsf{Cg}^{\mathbf{0}}(\vec{0},\vec{1})).\] Since $\mathcal{V}$ is a variety with $\vec{0}$ and $\vec{1}$, \[\mathbf{0}/\mathsf{Cg}^{\mathbf{0}}(\vec{0},\vec{1})\cong \mathbf{1}.\] Hence, from Lemma \ref{Connected VModels in a topos} and Remark \ref{Gaeta continous}, it is immediate that a $\mathcal{V}$-model $M$ in $\mathsf{E}$ is $\mathcal{V}$-indecomposable in such a topos if and only if $(2)$ holds. For the moreover part, we start by noticing that from Lemma \ref{pre useful lemma}, there exist arrows $f$ and $g$ from $\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta$ to $\mathbf{0}$. Now consider the following diagram in $\mathcal{V}$, in which the outer vertical arrows denote the identity of $\mathbf{0}$ and the middle vertical arrow is the arrow induced by the product. \begin{equation}\label{diagram1} \xymatrix{ \mathbf{0} \ar@{=}[d] & \mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta \ar[l]_-{f} \ar[r]^-{g} \ar[d] & \mathbf{0} \ar@{=}[d] \\ \mathbf{0} & \mathbf{0}\times \mathbf{0} \ar[l] \ar[r] & \mathbf{0} } \end{equation} We stress that Lemma \ref{pre useful lemma} $(1)$, $(3)$ and $(4)$ allow us to conclude that each of the squares of such a diagram is a pushout. Thus, if $\mathbf{F}_{\mathcal{V}}(x)$ is $\mathcal{V}$-indecomposable in $\mathsf{Set}$, by condition (2) we obtain that $\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta)$ is isomorphic to $1+1$ and $\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{1})$ is isomorphic to $\emptyset$. Thus, since $\phi_{\mathbf{F}_{\mathcal{V}}(x)}$ preserves finite limits, in order to prove that such a functor preserves finite coproducts we only need to show that $\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{0}\times \mathbf{0})$ is isomorphic to $1+1$. To do so, notice that, by Lemma \ref{pre useful lemma} and because $\phi_{\mathbf{F}_{\mathcal{V}}(x)}$ preserves pullbacks, the diagram above turns into the following diagram \begin{displaymath} \xymatrix{ 1 \ar@{=}[d] \ar[r] & \phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{0}\times \mathbf{0}) \ar[d] & 1 \ar@{=}[d] \ar[l] \\ 1 \ar[r] & 1+ 1 & 1 \ar[l] } \end{displaymath} in which both squares are pullbacks in $\mathsf{Set}$. Since $\mathsf{Set}$ is extensive, we conclude that $\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{0}\times \mathbf{0})$ must be isomorphic to $1+1$, as desired. On the other hand, if $\phi_{\mathbf{F}_{\mathcal{V}}(x)}$ preserves finite coproducts, then $\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{1})\cong \emptyset$. Thus, because both of the squares of diagram (\ref{diagram1}) are pushouts and again by the extensivity of $\mathsf{Set}$, we get \[\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{0}\times \mathbf{0})\cong 1+1\cong \phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta),\] as claimed.
\end{proof} Now we are ready to show the main result of this section. \begin{theo}\label{Main Theorem} Let $\mathcal{V}$ be an fp-coextensive variety. Then, the following are equivalent: \begin{itemize} \item[(1)] $\mathcal{G}(\mathcal{V})$ is a classifying topos for $\mathcal{V}$-indecomposable objects. \item[(2)] $\mathbf{F}_{\mathcal{V}}(x)$ is $\mathcal{V}$-indecomposable in $\mathsf{Set}$. \end{itemize} \end{theo} \begin{proof} For the sake of readability of the following proofs, we start by fixing some notation. Let $\mathsf{E}$ be a topos. We write $\mathsf{Mod}(\mathcal{V}_{I},\mathsf{E})$ for the category of $\mathcal{V}$-indecomposable objects in $\mathsf{E}$ and we will denote by $\mathsf{A}$ the category $\mathsf{Mod_{fp}}(\mathcal{V})$. $(1)\Rightarrow (2)$ Let us assume that $\mathcal{G}(\mathcal{V})$ classifies $\mathcal{V}$-indecomposable objects. Then, for every topos $\mathsf{E}$, the categories $\mathsf{Mod}(\mathcal{V}_{I},\mathsf{E})$ and $\mathsf{Geo}(\mathsf{E},\mathcal{G}(\mathcal{V}))$ are equivalent. Thus, in particular, $\mathsf{Mod}(\mathcal{V}_{I},\mathsf{Set})\approx \mathsf{Geo}(\mathsf{Set},\mathcal{G}(\mathcal{V}))$. On the other hand, from VII.7.2 of \cite{MM2012}, there is an equivalence between the category $\mathsf{Lex}(\mathsf{A}^{\mathrm{op}}, \mathsf{Set})$ and the category $\mathsf{Geo}(\mathsf{Set},\mathsf{Set}^{\mathsf{A}})$, so let $g:\mathsf{Set} \rightarrow \mathsf{Set}^{\mathsf{A}}$ be the geometric morphism corresponding to $\phi_{\textbf{F}_{\mathcal{V}}(x)}$ under this equivalence. We will prove that $\phi_{\textbf{F}_{\mathcal{V}}(x)}$ is continuous with respect to $\mathcal{J}_{\mathcal{G}}$. Because $\mathcal{J}_{\mathcal{G}}$ is subcanonical, $g$ factors through the inclusion from $\mathcal{G}(\mathcal{V})$ to $\mathsf{Set}^{\mathsf{A}}$; therefore, by VII.7.3 of \cite{MM2012}, we get that $g^{\ast}\circ \textbf{y}\cong \phi_{\textbf{F}_{\mathcal{V}}(x)}$ is continuous with respect to $\mathcal{J}_{\mathcal{G}}$, as claimed. Whence \[(g^{\ast}\circ \textbf{y})(\mathbf{F}_{\mathcal{V}}(x))\cong \phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{F}_{\mathcal{V}}(x))\cong \mathbf{F}_{\mathcal{V}}(x)\] is $\mathcal{V}$-indecomposable in $\mathsf{Set}$, as desired. $(2)\Rightarrow (1)$ If $\mathbf{F}_{\mathcal{V}}(x)$ is $\mathcal{V}$-indecomposable in $\mathsf{Set}$, then, from Lemma \ref{V-indecomposable in a topos} and Remark \ref{Gaeta continous}, $\phi_{\mathbf{F}_{\mathcal{V}}(x)}$ is continuous with respect to $\mathcal{J}_{\mathcal{G}}$. So, in order to prove our claim, we need to show that there is an equivalence between $\mathsf{Mod}(\mathcal{V}_{I},\mathsf{E})$ and $\mathsf{Geo}(\mathsf{E}, \mathcal{G}(\mathcal{V}))$, for every topos $\mathsf{E}$. Since $\mathsf{LexCon}(\mathsf{A}^{\mathrm{op}}, \mathsf{E})$ and $\mathsf{Geo}(\mathsf{E}, \mathcal{G}(\mathcal{V}))$ are equivalent from VII.7.4 of \cite{MM2012}, we only need to prove that $\mathsf{Mod}(\mathcal{V}_{I},\mathsf{E})$ and $\mathsf{LexCon}(\mathsf{A}^{\mathrm{op}}, \mathsf{E})$ are equivalent for every topos $\mathsf{E}$. To do so, let $\mathsf{E}$ be a topos and let $H:\mathsf{A}^{\mathrm{op}} \rightarrow \mathsf{E}$ be a finite limit preserving functor continuous with respect to $J_{\mathcal{G}}$. From Lawvere's duality, it follows that $M=H(\textbf{F}_{\mathcal{V}}(x))$ is a $\mathcal{V}$-model in $\mathsf{E}$ and also that $\phi_{M}\cong H$.
Then from the following calculation \begin{align*} [\vec{0}=\vec{1}]_{M}\cong \phi_{M}(\mathbf{0}/\mathsf{Cg}^{\mathbf{0}}(\vec{0},\vec{1}))\cong \phi_{M}(\mathbf{1})\cong H(\mathbf{1}) \cong 0, \end{align*} we obtain that in the internal logic of $\mathsf{E}$ the sequent $(C1)$ holds. Now, observe that from $(2)$ and the moreover part of Lemma \ref{V-indecomposable in a topos}, it is the case that $\phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta)\cong \phi_{\mathbf{F}_{\mathcal{V}}(x)}(\mathbf{0}\times \mathbf{0})$. So, since $\phi_{\mathbf{F}_{\mathcal{V}}(x)}$ reflects isomorphisms, we get $\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta \cong \mathbf{0}\times \mathbf{0}$. Therefore, from the following calculation \begin{align*} [\sigma(\vec{x},\vec{y})]_{M}\cong \phi_{M}(\mathbf{F}_{\mathcal{V}}(\vec{x},\vec{y})/\theta)\cong H(\mathbf{0}\times \mathbf{0})\cong 1+1, \end{align*} we get that the sequent $(C2)$ also holds in the internal logic of $\mathsf{E}$. Hence, by Lemma \ref{Connected VModels in a topos} $(3)$, we conclude that $M$ is indecomposable in $\mathsf{E}$. The proof that a $\mathcal{V}$-indecomposable model $M$ in $\mathsf{E}$ determines a functor in $\mathsf{LexCon}(\mathsf{A}^{\mathrm{op}}, \mathsf{E})$ is similar. Hence, from D3.1.9 in \cite{J2002}, the functor $\mathsf{LexCon}(\mathsf{A}^{\mathrm{op}}, \mathsf{E})\rightarrow \mathsf{Mod}(\mathcal{V}_{I},\mathsf{E})$ which sends $H$ to $H(\textbf{F}_{\mathcal{V}}(x))$ is an equivalence of categories. \end{proof}

\subsection{Applications}\label{Applications} \subsubsection{Bounded distributive lattices} Let $\mathcal{DL}_{01}$ be the variety of bounded distributive lattices. Straightforward calculations show that the term $U(x, y, z, w) = (x\vee z)\wedge (y\vee w)$, together with the constants $0$ and $1$, makes $\mathcal{DL}_{01}$ a Pierce variety. In addition, it is also well known that the relation $e\diamond_{\mathbf{A}}f$ in $\mathcal{DL}_{01}$ is defined by the equations $e\wedge f=0$ and $e\vee f=1$. Hence, by Theorem \ref{charcoextensivity}, $\mathcal{DL}_{01}$ is coextensive. Moreover, the only subdirectly irreducible member of $\mathcal{DL}_{01}$ is the two-element distributive lattice $\mathbf{2}$, so from Theorem 10.16 of \cite{BS1981} it follows that $\mathcal{DL}_{01}$ is locally finite. Thus, by Proposition \ref{locally finite and finite type center presentable}, $\mathcal{DL}_{01}$ is fp-coextensive. Finally, since $\textbf{F}_{\mathcal{DL}_{01}}(x)$ is indecomposable (it is the three-element chain), from Corollary \ref{fp implies center presentable} and Theorem \ref{Main Theorem} we can conclude: \begin{prop}\label{distributive classified gaeta} The variety $\mathcal{DL}_{01}$ is fp-coextensive and $\mathcal{G}(\mathcal{DL}_{01})$ classifies $\mathcal{DL}_{01}$-indecomposable objects. In particular, the functor $Z:\mathcal{DL}_{01}\rightarrow \mathsf{Set}$ preserves all limits and filtering colimits. \end{prop} It is worth mentioning that the first part of Proposition \ref{distributive classified gaeta} was stated without proof in Section 8 of \cite{L2008}; later on, a detailed proof was provided in \cite{Za2017}. \subsubsection{Integral rigs} A \emph{rig} is an algebra $\mathbf{A}=(A, +, \cdot,0, 1)$ of type $(2,2,0,0)$ such that $(A, \cdot, 1)$ and $(A, +, 0)$ are commutative monoids and ``product distributes over addition'' in the sense that $x \cdot 0 = 0$ and $x \cdot (y + z) = (x \cdot y) + (x \cdot z)$ for every $x, y, z \in A$.
One may think of such structures as ``(commutative) rings (with unit) without negatives''. A rig is said to be \emph{integral} if the equation $1+x=1$ holds for every $x\in A$. It is immediate from the latter that the class of integral rigs is a variety. We denote such a variety by $\mathcal{RN}$. Observe that the term $U(x, y, z, w) = (x+z)\cdot (y+w)$, together with the constants $0$ and $1$, makes $\mathcal{RN}$ a Pierce variety. In \cite{CMZ2016}, it was proved that the relation $e\diamond_{\mathbf{A}}f$ in $\mathcal{RN}$ is defined by the equations $e\cdot f=0$ and $e+ f=1$. Thus, by Theorem \ref{charcoextensivity}, $\mathcal{RN}$ is coextensive. Moreover, in Corollary 8.1 of \cite{M2021} it was proved that $\mathcal{RN}$ is fp-coextensive. Observe that $\mathbf{F}_{\mathcal{RN}}(x)$ can be identified with the chain $0 < \cdots < x^{n} < \cdots < x^{2} < x < x^{0} = 1$ with the obvious multiplication and addition, so the free integral rig on one generator $x$ is indecomposable. Therefore, by Corollary \ref{fp implies center presentable} and Theorem \ref{Main Theorem}, we have proved the following: \begin{prop}\label{irigs classified gaeta} The variety $\mathcal{RN}$ is fp-coextensive and $\mathcal{G}(\mathcal{RN})$ classifies $\mathcal{RN}$-indecomposable objects. In particular, the functor $Z:\mathcal{RN}\rightarrow \mathsf{Set}$ preserves all limits and filtering colimits. \end{prop} For a different proof of the fact that $\mathcal{G}(\mathcal{RN})$ classifies $\mathcal{RN}$-indecomposable objects, the reader may consult \cite{M2021}. \subsubsection{Commutative rings with unit} Let $\mathcal{R}$ be the variety of commutative rings with unit. It is well known that $\mathbf{A}\in \mathcal{R}$ is directly indecomposable if and only if the only idempotents in $A$ are the trivial ones. Observe that this is equivalent to saying that the formula \[ \sigma(x,y)=(xy=0) \wedge (x+ y=1) \] defines the relation $e\diamond_{\mathbf{A}}f$ in $\mathcal{R}$. Moreover, we can employ the same term used for integral rigs and the constants $0$ and $1$ in order to make $\mathcal{R}$ a Pierce variety. So from Theorem \ref{charcoextensivity} we get that $\mathcal{R}$ is coextensive. It is also well known that $\mathcal{R}$ is fp-coextensive. Recall that the free commutative ring with unit on one generator $x$ can be identified with the ring $\mathbb{Z}[x]$ of polynomials in the variable $x$ with coefficients in $\mathbb{Z}$, endowed with the usual sum and product of polynomials. It is straightforward to see that $\mathbb{Z}[x]$ is $\mathcal{R}$-indecomposable. Hence, due to Corollary \ref{fp implies center presentable} and Theorem \ref{Main Theorem}, we obtain the following: \begin{prop} The variety $\mathcal{R}$ is fp-coextensive and $\mathcal{G}(\mathcal{R})$ classifies $\mathcal{R}$-indecomposable objects. In particular, the functor $Z:\mathcal{R}\rightarrow \mathsf{Set}$ preserves all limits and filtering colimits. \end{prop} \subsubsection{Heyting algebras} A Heyting algebra is an algebra $\mathbf{A}=(A, \wedge, \vee, \rightarrow,0, 1)$ of type $(2,2,2,0,0)$ such that $(A,\wedge, \vee, 0, 1)$ is a bounded distributive lattice satisfying $x\wedge y\leq z$ if and only if $x\leq y\rightarrow z$. As usual, we denote $x\rightarrow 0$ by $\neg x$. Let $\mathcal{H}$ be the class of Heyting algebras. It is well known that $\mathcal{H}$ is a variety and that this class provides an algebraic semantics for intuitionistic logic.
It is not hard to see that the constants $0$ and $1$ and the term \[U(x,y,z,w)=(z\wedge y)\vee ( \neg z \wedge x) \] make $\mathcal{H}$ a Pierce variety and, moreover, that the formula \[ \sigma(x,y)=(x\wedge y=0) \wedge (x\vee y=1) \] defines the relation $e\diamond_{\mathbf{A}}f$ in $\mathcal{H}$. Then Theorem \ref{charcoextensivity} yields that $\mathcal{H}$ is coextensive. Further, by the dual of Proposition 5.5 of \cite{GS1995}, $\mathcal{H}$ is fp-coextensive, and by Theorem 3.2 of \cite{CDT2011}, all free algebras of $\mathcal{H}$ are $\mathcal{H}$-indecomposable. Therefore, we can conclude: \begin{prop} The variety $\mathcal{H}$ is fp-coextensive and $\mathcal{G}(\mathcal{H})$ classifies $\mathcal{H}$-indecomposable objects. In particular, the functor $Z:\mathcal{H}\rightarrow \mathsf{Set}$ preserves all limits and filtering colimits. \end{prop} \subsubsection{G\"{o}del algebras} A G\"{o}del algebra is an algebra $\mathbf{A}=(A, \wedge, \vee, \rightarrow,0, 1)$ of type $(2,2,2,0,0)$ such that $(A,\wedge, \vee, \rightarrow, 0, 1)$ is a Heyting algebra satisfying the \emph{prelinearity} condition, i.e., the equation $(x\rightarrow y)\vee (y\rightarrow x)=1$ holds for every $x,y\in A$. G\"{o}del algebras provide an algebraic semantics for G\"{o}del logic, which can be defined as the schematic extension of the intuitionistic propositional calculus by the prelinearity axiom $(\alpha \rightarrow \beta) \vee (\beta \rightarrow \alpha)$. We write $\mathcal{P}\mathcal{H}$ for the variety of G\"{o}del algebras. Observe that we can apply the same arguments used for $\mathcal{H}$ in order to prove that $\mathcal{P}\mathcal{H}$ is coextensive. Furthermore, since $\mathcal{P}\mathcal{H}$ is a locally finite variety of finite type, from Proposition \ref{locally finite and finite type center presentable} it follows that $\mathcal{P}\mathcal{H}$ is fp-coextensive. In \cite{FGR2019} it was shown that $\mathbf{F}_{\mathcal{P}\mathcal{H}}(x)$ can be identified with the lattice displayed in Fig.~\ref{ej1}. Observe that $\neg x$ and $\neg \neg x$ are non-trivial complementary central elements of $\mathbf{F}_{\mathcal{P}\mathcal{H}}(x)$, so the free algebra on one generator is not $\mathcal{P}\mathcal{H}$-indecomposable. We have proved: \begin{figure} \begin{center} \begin{tikzpicture} \filldraw [thick] (0,1) circle (2pt) node[left] {$\neg x$} -- (1,0) circle (2pt) node[right](x) {$0$} (1,2) (1,0)--(2,1) circle (2pt) node[right] {$x$} -- (1,2) circle (2pt) node[left] {$\neg x \vee x$}--(0,1) (2,1)--(3,2)circle (2pt) node[right] {$\neg \neg x$}-- (2,3)circle (2pt) node[left] {$1$}-- (1,2); \end{tikzpicture} \caption{$\mathbf{F}_{\mathcal{P}\mathcal{H}}(x)$}\label{ej1} \end{center} \end{figure} \begin{prop} The variety $\mathcal{P}\mathcal{H}$ is fp-coextensive, so in particular the functor $Z:\mathcal{P}\mathcal{H}\rightarrow \mathsf{Set}$ preserves all limits and filtering colimits. Nevertheless, $\mathcal{G}(\mathcal{P}\mathcal{H})$ does not classify $\mathcal{P}\mathcal{H}$-indecomposable objects. \end{prop} \subsubsection{MV-algebras} An MV-algebra is an algebra $(A, \oplus, \neg, 0)$ of type $(2,1,0)$ such that $(A, \oplus, 0)$ is a commutative monoid and the following equations hold: \begin{enumerate} \item $\neg \neg x=x$, \item $x\oplus \neg 0= \neg 0$, \item $\neg (\neg x \oplus y) \oplus y = \neg (\neg y \oplus x) \oplus x$. \end{enumerate} We write $\mathcal{M}\mathcal{V}$ for the variety of MV-algebras.
If we define the following operations \begin{displaymath} \begin{array}{ccc} x+y = \neg (\neg x \oplus y) \oplus y; & 1= \neg 0; & x\cdot y = \neg (\neg x \oplus \neg y), \end{array} \end{displaymath} straightforward calculations allow one to see that the constants $0$ and $1$ and the term \[U(x,y,z,w)=(x+ z)\cdot ( y+ w) \] make $\mathcal{M}\mathcal{V}$ a Pierce variety. It is known that $\mathcal{M}\mathcal{V}$ provides an algebraic semantics for \L ukasiewicz logic \cite{COM2000}. From Definition 1.5.2 and Theorem 1.5.3 of (\textit{op.cit.}) it can be proved that the formula \[ \sigma(x,y)=(x\cdot y=0) \wedge (x+ y=1) \] defines the relation $e\diamond_{\mathbf{A}}f$ in $\mathcal{M}\mathcal{V}$. So, by Theorem \ref{charcoextensivity}, $\mathcal{M}\mathcal{V}$ is coextensive. As a consequence of Theorem 3.4 of \cite{MS2013}, it follows that $\mathcal{M}\mathcal{V}$ is fp-coextensive. Finally, in \cite{CT2003} it was proved that every free MV-algebra is semisimple and directly indecomposable. Hence we get: \begin{prop} The variety $\mathcal{M}\mathcal{V}$ is fp-coextensive and $\mathcal{G}(\mathcal{M}\mathcal{V})$ classifies $\mathcal{M}\mathcal{V}$-indecomposable objects. In particular, the functor $Z:\mathcal{M}\mathcal{V}\rightarrow \mathsf{Set}$ preserves all limits and filtering colimits. \end{prop}
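We close the applications with a small computational illustration of our own, returning to the variety $\mathcal{R}$ of commutative rings with unit (a sketch, not part of the formal development; all function names are ours). For the finite rings $\mathbb{Z}/n\mathbb{Z}$, the complementary central pairs are exactly the pairs $(e,1-e)$ with $e$ idempotent, and $\mathbb{Z}/n\mathbb{Z}$ is $\mathcal{R}$-indecomposable precisely when its only idempotents are $0$ and $1$ (equivalently, when $n$ is a prime power). The following Python script checks this directly from the formula $\sigma(x,y)=(xy=0)\wedge(x+y=1)$:

\begin{verbatim}
# Illustrative sketch; checks sigma(x, y) from the text in Z/nZ.
def central_pairs(n):
    """Pairs (e, f) in Z/nZ with e*f = 0 and e + f = 1 (mod n)."""
    return [(e, f) for e in range(n) for f in range(n)
            if (e * f) % n == 0 and (e + f) % n == 1 % n]

def is_indecomposable(n):
    """Z/nZ is R-indecomposable iff its only central pairs are the
    trivial ones (0, 1) and (1, 0)."""
    return n > 1 and set(central_pairs(n)) <= {(0, 1 % n), (1 % n, 0)}

print(central_pairs(6))       # [(0, 1), (1, 0), (3, 4), (4, 3)]
print(is_indecomposable(6))   # False: Z/6 = Z/2 x Z/3
print(is_indecomposable(8))   # True:  8 is a prime power
\end{verbatim}

For instance, the non-trivial pair $(3,4)$ in $\mathbb{Z}/6$ corresponds to the product decomposition $\mathbb{Z}/6\cong \mathbb{Z}/2\times \mathbb{Z}/3$ used in the examples above.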
{ "timestamp": "2022-02-02T02:06:46", "yymm": "2202", "arxiv_id": "2202.00135", "language": "en", "url": "https://arxiv.org/abs/2202.00135" }
\section{Introduction} In many cases in equilibrium statistical physics, a steady-state solution of a master equation yields the equilibrium distribution. However, the formal steady-state solution may not be normalizable, especially for non-stationary stochastic processes found in the context of anomalous diffusion and non-normalizable Boltzmann states \cite{van1992stochastic, Kessler2010,lutz2013, Rebenshtok2014, Holz2015, Leibovich2019,Aghion2019,aghion2020infinite,aghion2021moses,Streissnin2021}. Such an unnormalized formal steady state is called an infinite invariant density, which is known from deterministic dynamical systems \cite{Thaler1983,Aaronson1997}. Interestingly, dynamical systems with infinite invariant densities exhibit non-stationary behaviors and trajectory-to-trajectory fluctuations of time averages, even though they are ergodic in the mathematical sense \cite{Aaronson1997}. The ergodic properties of dynamical systems with infinite invariant densities have been established in infinite ergodic theory \cite{Aaronson1997,Thaler1998,Thaler2002, Akimoto2008,Akimoto2015,Sera2019,Sera2020}, where distributional limit theorems for time-averaged quantities play an important role. The distributional limit theorems state that time-averaged observables obtained from single trajectories show trajectory-to-trajectory fluctuations. The distribution function of the fluctuations depends on whether the observable is integrable with respect to the infinite invariant measure \cite{Aaronson1981,Akimoto2008,Akimoto2010,Akimoto2012,Akimoto2015}. This distributional behavior of time averages is a characteristic feature of infinite ergodic theory. Similar distributional behaviors have been observed in experiments such as the fluorescence of quantum dots, diffusion in living cells, and interface fluctuations in liquid crystals \cite{Brok2003,stefani2009,Golding2006,Weigel2011,Jeon2011,Hofling2013,Manzo2015,takeuchi2016}. Subrecoil laser cooling is a powerful technique for cooling atoms \cite{cohen1990new, Bardou1994}. A key idea of this technique is to realize experimentally a heterogeneous random walk (HRW) of the atoms in momentum space. In a standard cooling technique such as Doppler cooling, a biased random walk is utilized to shift the momenta of atoms towards zero \cite{cohen1990new}. Thus, Doppler cooling is routinely modeled using a standard Fokker--Planck equation for the momentum distribution. In contrast to a homogeneous random walk, an HRW enables the accumulation of walkers at some point without an external force induced by the Doppler effect. In other words, the probability of finding a random walker at that point converges to one in the long-time limit due to an ingenious trapping mechanism that gives rise to a heterogeneous environment. Hence, for subrecoil laser cooling, an HRW, instead of a biased random walk, plays an essential role. This was a paradigm shift for cooling and proved useful for cooling beyond the lowest limit previously obtained with standard cooling techniques \cite{cohen1990new}. It is now well recognized that infinite ergodic theory provides a fundamental theory for subrecoil laser cooling \cite{Barkai2021,*barkai2021gas}. In \cite{Bardou2002}, three models of subrecoil laser cooling are proposed. One is based on the HRW; another, called the exponential model, is obtained from the HRW model by allowing long-range jumps; and the third, called the deterministic model, is a mean-field-like approximation of the exponential model.
It is known that the infinite density depends in principle on some details of the system \cite{Aghion2019,aghion2020infinite,aghion2021moses}. The question then remains: which elements of infinite ergodic theory remain universal? Such questions concerning the general validity of the theory are particularly important because we have at least two general classes of observables, i.e., those integrable and those non-integrable with respect to the infinite invariant measure. To unravel the universal features of subrecoil laser cooling, we explore here the three models of subrecoil laser cooling.

The rest of the paper is organized as follows. In Sec.~II, we introduce the three stochastic models of subrecoil laser cooling. In Sec.~III, we introduce the master equation and the formal steady-state solution, i.e., the infinite invariant density, in the HRW model. In Sections~IV and V, we present the infinite invariant densities and the distributional limit theorems for the time average of the kinetic energy in the deterministic and exponential model, respectively. While the master equations for the HRW and exponential model are different, we show that the propagators and the distributional behaviors of the time-averaged kinetic energy match very well. Section VI is devoted to the conclusion. In the Appendix, we give a derivation of the moments of the associated action as a function of time $t$. \section{Three stochastic models} Here, we introduce the three stochastic models of subrecoil laser cooling. All the models describe the stochastic dynamics of the momentum of an atom. First, the HRW model is a one-dimensional continuous-time random walk (CTRW) in momentum space. The CTRW is a random walk with continuous waiting times. Usually, in the CTRW the waiting times are independent and identically distributed (IID). In the HRW model, they are not IID random variables. In the HRW model, the waiting time between stochastic updates of the momentum, given momentum $p$, is exponentially distributed with mean waiting time $1/R(p)$.
After the waiting time has elapsed, the atom jumps and its momentum is modified. We assume that the jump distribution follows a Gaussian distribution: \begin{equation} G( \Delta p)=(2\pi \sigma ^{2})^{-1/2}\exp [-\Delta p^{2}/(2\sigma ^{2})], \end{equation} where $\Delta p$ is a jump of the momentum of an atom and $\sigma^2$ is the variance of the jumps. The heterogeneous rate $R(p)$ is essential for cooling atoms and can be realized by velocity-selective coherent population trapping in experiments \cite{AAK88}. In subrecoil laser cooling, the jump rate $R(p)$ is typically given by \begin{equation} R(p)\propto |p|^\alpha \label{atom-laser interaction} \end{equation} for $|p|\to 0$ \cite{Bardou2002}, where $\alpha$ is a positive constant. This constant can take any value in principle \cite{KaC92}; for instance, $\alpha=2$ in velocity-selective coherent population trapping \cite{AAK88}. In what follows, we consider a specific jump rate: \begin{equation} R(p)= \left\{ \begin{array}{ll} c^{-1}|p|^\alpha \quad &(|p| <p_0)\\ \\ c^{-1}|p_0|^\alpha &(|p| \geq p_0), \end{array} \right. \label{atom-laser interaction2} \end{equation} where $p_0$ is the width of the jump rate dip and $c$ is a positive constant (see Fig.~\ref{traj}). A typical trajectory in the HRW model is shown in Fig.~\ref{traj}. Since the HRW model is a non-biased random walk, the momentum would eventually reach high values. To prevent this, one introduces a confinement in an experimentally realizable way. \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{traj-Rp.eps} \caption{A typical trajectory of momentum $p(t)$ in the HRW model, where $R(p)=|p|^2$, $p_0=1$, $p_{\max}=2$, and $\sigma^2=0.01$; the trap size $p_{\rm trap} \cong 0.035$ is shown for reference. The blue and the yellow regions are the trapping and the recycling regions, respectively. The inset is a schematic illustration of the jump rate $R(p)$.} \label{traj} \end{figure} Next, we explain how we obtain the other two models, i.e., the exponential and the deterministic model, based on the HRW model. Momentum space can be divided into two regions, i.e., the trapping and the recycling region \cite{Bardou2002}. The trapping region is defined as $|p| \leq p_{\rm trap}$, where $p_{\rm trap} \ll \sigma$. In the recycling region, the atom undergoes a non-biased random walk, which will eventually lead the atom back to the trapping region with the aid of the confinement. In the HRW model, the trap size $p_{\rm trap}$ does not play any role. However, because $p_{\rm trap}\ll\sigma$, a jump of a random walker out of the trapping region is long-ranged in the sense that the momentum after a jump from the trapping region is approximately independent of the previous momentum. Therefore, the following assumption is quite reasonable: in the exponential and the deterministic models, the momentum after a jump in the trapping region is assumed to be an IID random variable. In particular, the probability density function (PDF) $\chi (p)$ for the momentum at every jump in the trapping region is assumed to be uniform: \begin{equation} \chi (p)= \frac{1}{2p_{\rm trap}} ~~{\rm for}~p\in[-p_{\rm trap},p_{\rm trap}]. \label{chi} \end{equation} A trajectory of the exponential model is similar to that of the HRW model. However, a crucial difference between the HRW model and the exponential model is in the nature of the waiting time: the waiting time is an independent random variable in the exponential model, whereas it is not in the HRW model. A minimal numerical sketch of the HRW dynamics defined above is given in the following listing.
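For concreteness, the HRW model can be simulated event by event: draw an exponential waiting time with rate $R(p)$, apply a Gaussian jump, and reflect at the boundaries. The following Python sketch is illustrative only; the parameter values match Fig.~\ref{traj}, the function names are ours, and the small numerical floor on the rate is an implementation convenience, not part of the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def jump_rate(p, alpha=2.0, c=1.0, p0=1.0):
    # R(p) of Eq. (atom-laser interaction2): c^{-1}|p|^alpha inside
    # the dip, constant outside; a tiny floor avoids division by zero
    return max(min(abs(p), p0) ** alpha / c, 1e-12)

def reflect(p, p_max=2.0):
    # reflecting boundary conditions at +/- p_max
    while abs(p) > p_max:
        p = np.sign(p) * 2.0 * p_max - p
    return p

def hrw_trajectory(t_max=1.0e4, sigma2=0.01, p_max=2.0):
    # exponential waiting time with mean 1/R(p), then a Gaussian
    # jump of variance sigma2, reflected at +/- p_max
    t, p = 0.0, rng.uniform(-1.0, 1.0)
    times, momenta = [t], [p]
    while t < t_max:
        t += rng.exponential(1.0 / jump_rate(p))
        p = reflect(p + rng.normal(0.0, np.sqrt(sigma2)), p_max)
        times.append(t)
        momenta.append(p)
    return np.array(times), np.array(momenta)

ts, ps = hrw_trajectory()
print("jumps:", ts.size - 1, " final momentum:", ps[-1])
\end{verbatim}
Long sojourns near $p=0$ alternate with bursts of jumps at larger $|p|$, reproducing the qualitative picture of Fig.~\ref{traj}.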
In the HRW model, the waiting time is not independent of the previous waiting time because the momentum depends on the previous one. Such a dependence of waiting times is also the nature of the quenched trap model (QTM), which is a random walk in a static random heterogeneous environment \cite{bouchaud90}. Note that the heterogeneous environment in the HRW model is static but not random. A difference between the exponential and the deterministic models is in the coupling between the waiting time and the momentum. In the exponential model, momentum and waiting time are stochastically coupled. Like the HRW model, the exponential model is a Markov model, and the conditional PDF of the waiting time given the momentum $p$ follows an exponential distribution with mean $1/R(p)$. On the other hand, the deterministic model is a non-Markov model. The waiting time given the momentum $p$ is deterministically prescribed as $\tau (p) = 1/R(p)$ \cite{Bardou2002}. In other words, the waiting time, which is a random variable in the exponential model, is replaced by its mean in the deterministic model. In this sense, the deterministic model is a mean-field-like model of the exponential model. Note that $1/R(p)$ thus has a double meaning: in the HRW and exponential models it is the mean waiting time, whereas in the deterministic model it is the exact waiting time for a given momentum $p$. \section{Heterogeneous Random Walk Model} Here, we consider the HRW model confined to the interval $[-p_{\max},p_{\max}]$ \cite{AAK88,Bardou1994}. The momentum $p(t)$ at time $t$ undergoes a non-biased random walk. Jumps of the momentum are attributed to photon scattering and spontaneous emissions. Importantly, the jump rate $R(p)$ follows Eq.~(\ref{atom-laser interaction2}) \cite{Bardou1994}. In this model, the conditional PDF $q(\tilde{\tau}|p)$ of the waiting time $\tilde{\tau}$ given $p$ follows the exponential distribution: \begin{equation} q(\tilde{\tau}|p)= R(p) \exp(-R(p) \tilde{\tau}). \label{conditional_PDF_HRW} \end{equation} Clearly, the mean waiting time given $p$ explicitly depends on $p$ when $|p| <p_0$. Thus, the random walk is heterogeneous. A confinement of atoms can also be achieved by Doppler cooling \cite{cohen1990new, Bardou1994}. However, for simplicity, we consider reflecting boundary conditions at $p=\pm p_{\max}$. As will be observed later, the size of the confinement and the width of the jump rate dip do not affect the asymptotic behavior of the scaling function of the propagator. More precisely, the scaling function and the fluctuations of the time-averaged energy do not depend on $p_{\max}$ and $p_0$. As shown in Fig.~\ref{traj}, the momentum of an atom remains constant for a long time when $|p|$ is small. On the other hand, momentum changes occur frequently when $|p|$ is away from zero. \subsection{Master equation and infinite invariant density} The time evolution of the probability density function (PDF) $\rho(p,t)$ of momentum $p$ at time $t$ is given by the master equation with gain and loss terms: \begin{equation} \frac{\partial \rho \left( p,t\right) }{\partial t}=\int_{-p_{\max}}^{p_{\max}} dp^{\prime }\left[ W(p^{\prime}\rightarrow p) \rho \left( p^{\prime },t\right) -W(p\rightarrow p^{\prime})\rho \left( p,t\right) \right], \label{Master} \end{equation} where $W(p\rightarrow p^{\prime})$ is the transition rate from $p$ to $p'$.
The jump rate and the transition rate are related by \begin{equation} R(p)=\int_{-p_{\max}}^{p_{\max}} dp^{\prime }W(p\rightarrow p^{\prime }) \label{jump rate} \end{equation}% and \begin{equation} W(p\rightarrow p^{\prime })= R(p)G(p'|p), \label{transition rate0} \end{equation} where $G(p'|p)$ is the conditional PDF of $p'$ given $p$; both the domain and the codomain of the function $G(p'|p)$ are $[-p_{\max}, p_{\max}]$ because of the confinement. Note that $G(p'|p)$ cannot depend solely on the difference $p'-p$ when a random walker reaches the reflecting boundary, i.e., when $|p+\Delta p| > p_{\max}$, where $\Delta p$ is a jump length. It follows that the master equation (Eq.~(\ref{Master})) of the HRW model takes the following form:% \begin{equation} \frac{\partial \rho \left( p,t\right) }{\partial t}=-R(p)\rho \left( p,t\right) +\int_{-p_{\max}}^{p_{\max}} dp^{\prime }\rho \left( p^{\prime },t\right) R(p^{\prime })G(p|p^{\prime }). \label{Master1} \end{equation} The stationary solution $\rho^*(p)$ is easily obtained from detailed balance in Eq.~(\ref{Master}), i.e., \begin{equation} W(p^{\prime }\rightarrow p)\rho ^{\ast }\left( p^{\prime }\right) -W(p\rightarrow p^{\prime })\rho ^{\ast }\left( p\right) =0. \label{detailed balance} \end{equation}% For $|p|$ and $|p'| \ll p_{\max}$, the conditional PDF $G(p|p')$ is approximately symmetric, i.e., $G(p|p')=G(p'|p)$. Therefore, for $|p|, |p'| \ll p_{\max}$, detailed balance yields \begin{equation} R(p^{\prime })\rho ^{\ast }\left( p^{\prime }\right) =R(p)\rho ^{\ast}\left( p\right) , \end{equation} which is fulfilled only if $R(p)\rho ^{\ast }\left( p\right) $ is constant. In subrecoil laser cooling, the jump rate $R(p)$ takes a power-law form near $p\cong 0$, i.e., Eq.~(\ref{atom-laser interaction}). For example, velocity-selective coherent population trapping gives $\alpha=2$ \cite{AAK88}, and Raman cooling experiments realize $\alpha=2$ and $4$ using 1D square pulses and Blackman pulses, respectively \cite{RBB95}. Therefore, for $|p| \ll p_{\max}$, the steady-state distribution $\rho^* (p)$ is formally given by \begin{equation} \rho^* (p) ={\rm const.}/R(p) \propto |p|^{-\alpha}. \label{steady-state} \end{equation} For $\alpha\geq 1$, it cannot be normalized because of the divergence at $p=0$, and $\rho^* (p)$ is therefore called an infinite invariant density. Although $\rho^* (p)$ is the formal steady state, a steady state in the conventional sense does not exist in the system for $\alpha\geq 1$. As will be shown below, a part of the infinite invariant density can be observed in the propagator at long times. Moreover, it will be shown that $t^{1-1/\alpha} \rho (p,t)$ converges to the infinite invariant density for $t\to\infty$.
Therefore, the infinite invariant density is not merely a formal solution but plays an important role in reality. Figure~\ref{prop-hrw} shows numerical simulations of the propagator in the HRW model. The propagator accumulates near zero, and $\rho (p,t)$ around $p\cong 0$ increases with time $t$. Moreover, the power-law form $|p|^{-\alpha}$ of the formal steady state $\rho^*(p)$ is observed at large $t$, except for $p\cong 0$ (see also Fig.~\ref{propagator-exp}). Since the infinite invariant density $\rho^*(p)$ cannot be normalized, the propagator never converges to $\rho^*(p)$. \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{prop-hrw.eps} \caption{Time evolution of the propagator in the HRW model ($p_0=p_{\max}=1$, $\sigma=1$, and $R(p)=|p|^2$). Symbols with lines are the numerical results of the HRW model obtained by simulating trajectories of random walkers. The solid line represents a part of the formal steady-state solution, $\rho^*(p) \propto |p|^{-\alpha}$, for reference. The dashed lines represent plateaus around $|p|= 0$, which shift up with time $t$. The initial momentum is chosen uniformly on $[-1,1]$.
} \label{prop-hrw} \end{figure}
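The convergence $t^{1-1/\alpha}\rho(p,t)\to{\rm const.}\times|p|^{-\alpha}$ can be reproduced with a short simulation. The Python sketch below is illustrative only (parameters of Fig.~\ref{prop-hrw}; the helper names are ours, and the rate floor is a numerical convenience); it histograms $|p|$ over many walkers and rescales the histogram by $t^{1-1/\alpha}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
ALPHA, C, P0, P_MAX, SIGMA = 2.0, 1.0, 1.0, 1.0, 1.0

def rate(p):
    return max(min(abs(p), P0) ** ALPHA / C, 1e-12)

def reflect(p):
    while abs(p) > P_MAX:
        p = np.sign(p) * 2.0 * P_MAX - p
    return p

def momentum_at(T):
    # momentum of a single walker observed at time T
    t, p = 0.0, rng.uniform(-1.0, 1.0)
    while True:
        t += rng.exponential(1.0 / rate(p))
        if t > T:
            return p
        p = reflect(p + rng.normal(0.0, SIGMA))

bins = np.logspace(-2, 0, 9)
mids = np.sqrt(bins[1:] * bins[:-1])
for T in (1.0e2, 1.0e3):
    ps = np.abs([momentum_at(T) for _ in range(10000)])
    hist, _ = np.histogram(ps, bins=bins, density=True)
    # away from the cooled peak (|p| well above t^{-1/alpha}),
    # t^{1-1/alpha} * rho * |p|^alpha should be roughly flat in |p|
    print(T, np.round(T ** (1.0 - 1.0 / ALPHA) * hist * mids ** ALPHA, 2))
\end{verbatim}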
\section{Exponential model} In this section, we give theoretical results for the exponential model, which were already shown in our previous study \cite{Barkai2021}. Here, we consider the Laplace transform of the propagator and perform the inverse transform to obtain the infinite invariant density and the scaling function. This derivation of the scaling function differs from that of the previous study \cite{Barkai2021}, where the master equation was solved directly. \subsection{Master equation, infinite invariant density, and scaling function} In the exponential model, unlike in the HRW model, the jump distribution is independent of the previous momentum. It follows that the transition rate of the exponential model becomes \begin{equation} W(p\rightarrow p^{\prime })= R(p)\chi (p^{\prime }). \label{transition rate exp} \end{equation} Accordingly, the master equation of the exponential model becomes \begin{equation} \frac{\partial \rho \left( p,t\right) }{\partial t}= - R(p)\rho(p,t) + \frac{1}{2p_{\rm trap}} \int_{-p_{\rm trap}}^{p_{\rm trap}} R(p') \rho(p',t) dp'. \label{Master-exp} \end{equation} The second term, i.e., the gain term, is different from that in the HRW model, Eq.~(\ref{Master1}). In the exponential model, the momentum remains constant until the next jump, and the conditional waiting time distribution given momentum $p$ follows an exponential distribution with mean $1/R(p)$, which is the same as in the HRW model, i.e., Eq.~(\ref{conditional_PDF_HRW}). Because the conditional waiting time distribution depends on $p$, the joint PDF of momentum $p$ and waiting time $\tilde{\tau}$, \begin{equation} \phi (p,\tilde{\tau} )=\left\langle \delta \left( p-p_{i}\right) \delta \left( \tilde{\tau}-\tilde{\tau}_{i}\right) \right\rangle , \label{jpdgen} \end{equation} where $\delta \left(.\right) $ is the $\delta$ function, plays an important role. It can be expressed as \begin{equation} \phi (p,\tilde{\tau} )= q(\tilde{\tau}|p) \chi (p) , \label{joint-pdf exp} \end{equation} where $q(\tilde{\tau}|p)$ is the conditional PDF of the waiting time $\tilde{\tau}$ given $p$, Eq.~(\ref{conditional_PDF_HRW}), and $\chi (p)$ is given by Eq.~(\ref{chi}). The unconditioned PDF of the waiting time is given by \begin{eqnarray} \psi (\tilde{\tau}) = \frac{1}{2p_{\rm trap}} \int_{-p_{\rm trap}}^{p_{\rm trap}} R(p) \exp (-R(p)\tilde{\tau})dp, \end{eqnarray} which follows from averaging the joint PDF over the uniform density $\chi(p)$. By a change of variables ($y=R(p)\tilde{\tau}$), we have \begin{eqnarray} \psi (\tilde{\tau}) &=& \frac{c^{\frac{1}{\alpha}}\tilde{\tau}^{-1-\frac{1}{\alpha}}}{\alpha p_{\rm trap}} \int_0^{\tilde{\tau} c^{-1} p_{\rm trap}^\alpha} y^{\frac{1}{\alpha}} \exp (-y)dy\\ &\sim& \frac{\gamma c^{\gamma}\Gamma (1+\gamma)}{ p_{\rm trap}} \tilde{\tau}^{-1-\gamma}\quad (\tilde{\tau}\to \infty), \end{eqnarray} where $\gamma=1/\alpha$. In what follows, we assume $\gamma\leq 1$, which implies that the mean waiting time diverges. Therefore, as will be shown, the dynamics of $p$ becomes non-stationary. The exponential model is a continuous-time Markov chain, which is a special type of semi-Markov process (SMP). Therefore, we utilize an SMP with continuous variables to obtain analytical results for the exponential model. In the SMP, the state value is determined by the waiting time, which is randomly selected, or equivalently, the waiting time is determined by the state value, which is randomly chosen.
In the latter case, the state value is renewed according to the PDF $\chi(p)$. In general, an SMP is characterized by the state distribution $\chi(p)$ and the joint PDF of the state value and the waiting time $\phi (p,\tau)$, Eq.~(\ref{joint-pdf exp}). The deterministic model, which we will treat in Sec.~V, is identical to the SMP with a deterministic coupling between the state value and the waiting time. On the other hand, the SMP with an exponential conditional PDF of waiting times given the state is equivalent to the exponential model. For the SMP with $\chi(p)$ and $\phi (p,\tau)$, the Laplace transform of the propagator with respect to $t$ is obtained as in Ref.~\cite{Akimoto2020}. Applying the result to the exponential model yields \begin{equation} \hat{\rho} (p,s) = \frac{1}{s} \frac{\chi(p) - \hat{\phi}(p,s)}{1-\hat{\psi}(s)}, \label{MW-SMP} \end{equation} where $\hat{\phi}(p,s)$ and $\hat{\psi}(s)$ are the Laplace transforms of $\phi(p,\tilde{\tau})$ and $\psi(\tilde{\tau})$ with respect to $\tilde{\tau}$, respectively. Here, the ordinary renewal process was used as the initial condition \cite{Akimoto2020,Cox1962}. In the exponential model, the Laplace transform of the joint PDF is given by \begin{equation} \hat{\phi}(p,s)= \frac{\chi (p) R(p)}{s+R(p)}. \label{JPDF-SMP} \end{equation} It follows from Eqs.~(\ref{MW-SMP}) and (\ref{JPDF-SMP}) that $\hat{\rho} (p,s)$ becomes \begin{equation} \hat{\rho} (p,s) = \frac{ \chi(p)}{s+R(p)} \frac{1}{1-\hat{\psi}(s)}. \end{equation} In the long-time limit ($s\to 0$), it becomes \begin{equation} \hat{\rho} (p,s) \cong \frac{ 1}{s+c^{-1}|p|^{\alpha}} \frac{1}{2\Gamma(1-\alpha^{-1})\Gamma(1+\alpha^{-1}) (cs)^{\alpha^{-1}}}, \end{equation} where $\chi(p)=1/(2p_{\rm trap})$ is used. Interestingly, the Laplace transform of the propagator does not depend on $p_{\rm trap}$. To obtain the exponential model from the HRW model, we assumed that $p_{\rm trap}$ is much smaller than $\sigma$. However, the asymptotic behavior of the propagator is independent of $p_{\rm trap}$ in the exponential model. Therefore, $p_{\rm trap}\ll \sigma$ can be assumed without loss of generality when we consider the asymptotic behavior of the propagator. In other words, the exponential model with the uniform approximation for $\chi(p)$ is a good approximation for the HRW model. By the inverse Laplace transform, we have \begin{equation} \rho(p,t) \cong \frac{\sin (\pi \alpha^{-1})}{2\pi c^{\alpha^{-1}} \Gamma (1+\alpha^{-1})} \int_0^t dt' e^{-c^{-1} |p|^{\alpha} (t-t')} t'^{\alpha^{-1} -1} \label{propagator_asympt-exp} \end{equation} for $t\to\infty$. Through a change of variables ($u=t'/t$), we obtain \begin{equation} \rho(p,t) \cong \frac{\sin (\pi \alpha^{-1}) t^{\alpha^{-1} }}{2\pi c^{\alpha^{-1}} \Gamma (1+\alpha^{-1})} \int_0^1 du e^{-c^{-1} |p|^{\alpha} t(1-u)} u^{\alpha^{-1} -1}. \label{propagator_asympt-exp2} \end{equation} Therefore, the cooled peak, i.e., $\rho(0,t)$, increases as $t^{\alpha^{-1}}$, which means that the probability of finding the cooled state ($p\cong 0$) increases with time; this is a signature of cooling. For $|p|>0$ and $t\gg 1$, the integral is approximated as \begin{equation} \rho(p,t) \cong \frac{\sin (\pi \alpha^{-1}) t^{\alpha^{-1} -1}}{2\pi c^{\alpha^{-1}-1} \Gamma (1+\alpha^{-1})} \frac{1}{|p|^{\alpha} }.
\label{propagator_asympt-exp3} \end{equation} Furthermore, an infinite invariant density is obtained as \begin{equation} \lim_{t\to \infty} t^{1-\alpha^{-1}} \rho(p,t) = I_{\rm exp} (p) \equiv \frac{ \sin (\pi \alpha^{-1} ) \left\vert p\right\vert ^{-{\alpha}}}{2 \pi c^{\alpha^{-1}-1} \Gamma (1+\alpha^{-1}) } \label{inf-d-exp} \end{equation} for $|p| \leq p_{\rm trap}$. The power-law form of Eq.~(\ref{inf-d-exp}), $I_{\rm exp} (p) \propto |p|^{-\alpha}$, in the exponential model matches the infinite invariant density, Eq.~(\ref{steady-state}), of the HRW model. Through a change of variables ($p'=t^{\alpha^{-1}} p/c^{\alpha^{-1}}$), we obtain the rescaled propagator $\rho_{\rm res} (p',t)$. In the long-time limit, the rescaled propagator converges to a time-independent function $g_{\rm exp} (p')$ (scaling function): \begin{equation} \rho_{\rm res} (p',t) \equiv \rho (c^{\alpha^{-1}} p'/t^{\alpha^{-1}},t) \left| \frac{dp}{dp'}\right| \to g_{\rm exp} (p') , \label{rescaling} \end{equation} where the scaling function is given by \begin{equation} g_{\rm exp} (p') \equiv \frac{\sin (\pi \alpha^{-1}) }{2\pi \Gamma (1+\alpha^{-1})} \int_0^1 du e^{- |p'|^\alpha (1-u)} u^{\alpha^{-1} -1}. \label{sf-exp} \end{equation} This scaling function describes the propagator near $p=0$. This result was previously obtained by a different approach \cite{Barkai2021,*barkai2021gas}. Here, we demonstrate that the theory of the exponential model describes the asymptotic behavior of the propagator in the HRW model surprisingly well. Figure~\ref{propagator-exp} shows that the propagator for the HRW model is in excellent agreement with the analytical result of the exponential model, i.e., Eq.~(\ref{propagator_asympt-exp2}). In the numerical simulations of the HRW model, we generated $10^8$ trajectories to obtain the propagator. The propagator exhibits two distinct parts. The propagator near $p=0$ increases with time $t$. On the other hand, the propagator for $p>0$ asymptotically approaches a power-law form, i.e., the infinite invariant density. Figure~\ref{propagator-rescale-exp} shows that the rescaled propagator of the HRW model for different values of $\sigma^2$ is well captured by the scaling function $g_{\rm exp} (p')$ without fitting parameters, where we generated $10^8$ trajectories to obtain the rescaled propagator. Because the scaling function describes the details of the propagator near $p=0$ and is universal in the sense that it does not depend on $p_{\rm trap}$ in the exponential model, the dynamics of the HRW model near $p=0$ should also be universal and should not depend on the details of the jump distribution $G(\Delta p)$. In fact, as shown in Fig.~\ref{propagator-rescale-exp}, the rescaled propagator does not depend on $\sigma^2$. This is one of the reasons why the uniform approximation works very well. Moreover, because the momentum almost surely approaches zero in the long-time limit, the assumption of $|p|\ll 1$ is correct for $t\gg 1$. Furthermore, it can be confirmed that Eq.~(\ref{propagator_asympt-exp2}) becomes a solution to the master equation, Eq.~(\ref{Master1}), in the long-time limit, where the momentum at every jump is approximately renewed according to $G(\Delta p)$. Therefore, the theory of the exponential model describes the propagator of the HRW model well. \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{prop-inf-hrw-exp.eps} \caption{Time evolution of the propagator, i.e.,
data from Fig.~\ref{prop-hrw}, multiplied by $t^{1-\gamma}$ in the HRW model for different times ($\alpha =2$, $c=1$, $p_0=p_{\max}=1$, and $\sigma^2 =1$). Symbols with lines are the results of numerical simulations for the HRW model. The dashed lines represent the infinite density, i.e., Eq.~(\ref{inf-d-exp}). The solid lines represent rescaled scaling functions, $t g_{\rm exp} (t^\gamma p)$. The dotted lines represent $t g_{\rm exp} (0)$ for different values of $t$. The initial momentum is chosen uniformly on $[-1,1]$. } \label{propagator-exp} \end{figure} \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{rescale-hrw-exp.eps} \caption{Rescaled propagator of the HRW model for different values of $\sigma^2$ ($\alpha =2$, $c=1$, $p_0=p_{\max}=1$, and $t=10^4$). Symbols with lines are the results of numerical simulations for the HRW model. The solid line represents the scaling function, i.e., Eq.~(\ref{sf-exp}). The initial position is chosen uniformly on $[-1,1]$. Note that the results for different $\sigma^2$ are indistinguishable.} \label{propagator-rescale-exp} \end{figure} \subsection{Ensemble and time averages of observables } In this subsection, we consider the ensemble average of an observable, which is defined as \begin{eqnarray} \langle {\mathcal O}(p(t)) \rangle &\equiv& \int_{-p_{\rm trap}}^{p_{\rm trap}} {\mathcal O}(p) \rho(p,t)dp. \label{ensemble-ave-def} \end{eqnarray} We assume that the observable is ${\mathcal O}(p) = C|p|^\beta$ with $\beta>-1$. For example, if $\beta=2$ we are considering the kinetic energy of an atom. Through a change of variables ($p'=t^{\alpha^{-1}} p/c^{\alpha^{-1}}$) and using the scaling function, Eq.~(\ref{sf-exp}), we have \begin{eqnarray} \langle {\mathcal O}(p(t)) \rangle \sim \int_{- \left(\frac{t}{c}\right)^{\alpha^{-1}} p_{\rm trap}}^{\left(\frac{t}{c}\right)^{\alpha^{-1}} p_{\rm trap}} {\mathcal O} \left( \frac{c^{\alpha^{-1}} p'}{t^{\alpha^{-1}}} \right) g_{\rm exp} (p')dp' \label{ensemble-ave-def-exp} \end{eqnarray} for $t\to\infty$. When $|p|^\beta$ is integrable with respect to $g_{\rm exp}(p)$, i.e., $\int_{-\infty}^\infty g_{\rm exp}(p) |p|^\beta dp<\infty$, $\beta$ satisfies $-1<\beta < \alpha -1$. In this case, the asymptotic behavior of the ensemble average becomes \begin{equation} \langle {\mathcal O}(p(t)) \rangle \sim \frac{C c^{\beta \alpha^{-1}}}{t^{\beta\alpha^{-1}}} \int_{-\infty}^\infty |p'|^\beta g_{\rm exp}(p')dp' \quad (t \to \infty). \label{en-ave-scaling} \end{equation} On the other hand, when $|p|^\beta$ is integrable with respect to $I_{\rm exp} (p)$, i.e., $\int_{-p_{\rm trap}}^{p_{\rm trap}} I_{\rm exp} (p) {\mathcal O}(p) dp < \infty$, $\beta$ satisfies $\beta > \alpha-1~(>0)$, implying that $|p|^\beta$ is not integrable with respect to the scaling function, i.e., $\int_{-\infty}^\infty g_{\rm exp}(p) |p|^\beta dp=\infty$. In this case, the asymptotic behavior of the ensemble average becomes \begin{equation} \langle {\mathcal O}(p(t)) \rangle \sim t^{\alpha^{-1}-1} \int_{-p_{\rm trap}}^{p_{\rm trap}} I_{\rm exp} (p) {\mathcal O}(p) dp \quad (t \to \infty). \label{en-ave-infty-exp} \end{equation} Therefore, the asymptotic behavior becomes \begin{equation} \langle {\mathcal O}(p(t)) \rangle \propto t^{-\lambda(\alpha,\beta)}\quad (t \to \infty), \end{equation} and the integrability of the observable with respect to the scaling function or the infinite invariant density determines the power-law exponent $\lambda(\alpha,\beta)$. These decay laws can be checked by a direct simulation of the exponential model, as sketched below.
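The following Python sketch estimates the decay exponent of $\langle p(t)^2\rangle$ in the exponential model from two measurement times and compares it with $\lambda(\alpha,2)$. It is illustrative only: the parameter values are arbitrary, the helper name is ours, and the two-point fit is a rough estimate because finite-time corrections are sizable.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def energy_at(T, alpha, c=1.0, p_trap=1.0):
    # exponential model: at each renewal the momentum is redrawn
    # uniformly on [-p_trap, p_trap], and the waiting time is
    # exponential with mean c/|p|^alpha
    t = 0.0
    while True:
        p = rng.uniform(-p_trap, p_trap)
        t += rng.exponential(c / abs(p) ** alpha)
        if t > T:
            return p * p

for alpha in (1.5, 4.0):
    # beta = 2: lambda = 1 - 1/alpha for alpha < 3, 2/alpha for alpha > 3
    lam = 1.0 - 1.0 / alpha if alpha < 3.0 else 2.0 / alpha
    e1 = np.mean([energy_at(1.0e2, alpha) for _ in range(10000)])
    e2 = np.mean([energy_at(1.0e4, alpha) for _ in range(10000)])
    print(f"alpha={alpha}: fitted {np.log(e1 / e2) / np.log(100.0):.2f}"
          f" vs lambda = {lam:.2f}")
\end{verbatim}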
In the case of $\beta = \alpha -1$, the integrals of the observable with respect to both the scaling function and the infinite invariant density diverge. In this case, the integral in Eq.~(\ref{ensemble-ave-def-exp}) grows logarithmically with $t$. Therefore, the leading order for $t\to\infty$ is \begin{equation} \langle {\mathcal O}(p(t)) \rangle \propto t^{\alpha^{-1}-1} \ln t. \end{equation} The power-law exponent $\lambda(\alpha,\beta)$ in the exponential model is given by \begin{equation} \lambda(\alpha,\beta) = \left\{ \begin{array}{ll} 1 - \alpha^{-1} & (\beta > \alpha-1)\\ \\ \beta \alpha^{-1} & (\beta < \alpha-1) . \end{array} \right. \label{decay-exp} \end{equation} As will be shown later, the decay process is universal in the sense that $\lambda(\alpha,\beta)$ is the same for the three models that we consider here. Moreover, the fastest decay, which corresponds to the maximum of $\lambda(\alpha,\beta)$, is realized at the transition point between integrability and non-integrability with respect to the infinite invariant measure, i.e., $\alpha=\beta + 1$. In particular, the fastest decay of the kinetic energy, i.e., $\beta=2$, is achieved for $\alpha=3$, which suggests that the cooling efficiency, in a sense, is optimized at this point. As shown in the previous subsection, the height of the cooled peak increases as $t^{\alpha^{-1}}$. Moreover, the half-width of the cooled peak in the momentum distribution decays as $t^{-\alpha^{-1}}$. If we use the half-width of the cooled peak in the momentum distribution to characterize the cooling efficiency, the optimal parameter is $\alpha=1$. Therefore, the most efficient cooling parameter depends on the definition of efficiency. \subsection{Distributional characteristics of time-averaged observables} Here, we construct a theory of the distribution of time averages in the exponential model. The time average of an observable ${\mathcal O}(p)$ is defined by \begin{equation} \overline{{\mathcal O}}(t) \equiv \frac{1}{t} \int_0^t {\mathcal O}(p(t'))dt'. \label{ta-def} \end{equation} We obtain the mean and variance for two cases: when the observable is integrable with respect to the infinite invariant density and when it is not. In what follows, we consider the kinetic energy as a specific example, i.e., ${\mathcal O}(p)=p^2$. The integrated value of an observable ${\mathcal O}(p)$, denoted by ${\mathcal S}(t)$, can be represented as \begin{eqnarray} {\mathcal S}(t) &=& \int_0^t {\mathcal O}(p(t'))dt'\\ &=& \sum_{i=1}^{N(t)} \Delta {\mathcal S}_i + {\mathcal O}(p_{N(t)+1}) (t-t_{N(t)}), \end{eqnarray} where $\Delta {\mathcal S}_i = {\mathcal O}(p_i) \tilde{\tau}_i $, $N(t)$ is the number of jumps until time $t$, $p_i$ is the momentum during $[t_{i-1}, t_{i})$, and $t_i=\tilde{\tau}_1 + \cdots + \tilde{\tau}_i$. The integrated value ${\mathcal S}(t)$ is a piecewise linear function of $t$ \cite{Barkai2021,*barkai2021gas} because ${\mathcal O}(p(t))$ is a piecewise constant function, where $p_i$ and $\tilde{\tau}_i$ are coupled stochastically. The joint PDF of $\Delta {\mathcal S}_i$, $\tilde{\tau}_i$, and $p_i$, denoted by $\phi_{3} (x,\tilde{\tau},p)$, is given by \begin{equation} \phi_{3} (x,\tilde{\tau},p) = \chi (p) R(p) e^{-R(p) \tilde{\tau}} \delta (x- {\mathcal O}(p)\tilde{\tau}).
\end{equation} The joint PDF of the integrated value of an elementary step and the waiting time $\tilde{\tau}$ is given by \begin{eqnarray} \phi_{2} (x, \tilde{\tau}) &=& \int_{-p_{\rm trap}}^{p_{\rm trap}} dp \phi_{3} (x,\tilde{\tau},p) \nonumber\\ &=& \frac{1}{2 p_{\rm trap}\sqrt{x\tilde{\tau}}} R(\sqrt{x/\tilde{\tau}}) e^{-R(\sqrt{x/\tilde{\tau}}) \tilde{\tau}} \quad (\sqrt{x/\tilde{\tau}}<p_{\rm trap}). \nonumber \end{eqnarray} Let $Q(x,t)$ be the PDF of $x={\mathcal S}(t)$ when a jump occurs exactly at time $t$; then, we have \begin{equation} Q(x,t) = \int_0^x dx' \int_0^t dt' \phi_{2} (x', t') Q(x-x', t-t') + Q_0(x,t), \end{equation} where $Q_0(x,t)=\delta(x)\delta(t)$. The PDF of ${\mathcal S}(t)$ at time $t$ is given by \begin{eqnarray} P(x,t) &=& \int_0^x dx' \int_0^t dt' \Phi_{2} (x', t') Q(x-x', t-t'), \end{eqnarray} where \begin{equation} \Phi_2 (x,t) = \int_t^\infty d\tilde{\tau} \int_{-p_{\rm trap}}^{p_{\rm trap}} dp \chi (p) R(p) e^{-R(p) \tilde{\tau}} \delta (x- {\mathcal O}(p)t) . \end{equation} The double-Laplace transform with respect to $x$ and $t$ ($u\leftrightarrow x$ and $s\leftrightarrow t$) yields \begin{equation} \widehat{P}(u,s) = \frac{\widehat{\Phi}_{2}(u,s)}{1- \widehat{\phi}_{2}(u,s)}, \label{montroll-weiss-like} \end{equation} where $\widehat{\phi}_{2}(u,s)$ and $\widehat{\Phi}_{2}(u,s)$ are the double-Laplace transforms of $\phi_{2} (x, \tilde{\tau} )$ and $\Phi_{2} (x,t)$, which are given by \begin{eqnarray} \widehat{\phi}_{2}(u,s) &=& \int_0^\infty dx \int_0^\infty d\tau \int_{-p_{\rm trap}}^{p_{\rm trap}} dp e^{-ux-s\tau} \phi_3(x,\tau,p)\nonumber\\ &=& \int_0^{p_{\rm trap}} \frac{c^{-1}p_{\rm trap}^{-1}p^\alpha}{s+up^2 + c^{-1}p^\alpha}dp \label{psi_laplace_ta} \end{eqnarray} and \begin{eqnarray} \widehat{\Phi}_{2}(u,s) &=& \int_0^{p_{\rm trap}} \frac{p_{\rm trap}^{-1}}{s+up^2 + c^{-1}p^\alpha}dp, \label{PSI_laplace_ta} \end{eqnarray} respectively. Eq.~(\ref{montroll-weiss-like}) is the exact form of the PDF of ${\mathcal S}(t)$ in Laplace space. Because $1-\widehat{\phi}_{2}(0,s)=s\widehat{\Phi}_{2}(0,s)$, normalization is indeed satisfied, i.e., $\widehat{P}(0,s)=1/s$. The Laplace transform of the first moment of ${\mathcal S}(t)$ can be obtained as \begin{equation} -\left. \frac{\partial \widehat{P}(u,s)}{\partial u} \right|_{u=0} = -\frac{\widehat{\Phi}_{2}'(0,s)}{1- \widehat{\phi}_{2}(0,s)} - \frac{\widehat{\phi}_{2}'(0,s)}{s[1- \widehat{\phi}_{2}(0,s)]}. \label{laplace-1st-moment-exp} \end{equation} For $\alpha<3$, $\widehat{\phi}_{2}'(0,0)$ is finite, whereas it diverges for $\alpha\geq 3$. Therefore, $\alpha=3$ is a transition point at which the asymptotic behavior of $\langle {\mathcal S}(t) \rangle$ changes its form. The asymptotic behavior of $1- \widehat{\phi}_{2}(0,s)$ for $s\to 0$ is given by \begin{eqnarray} 1- \widehat{\phi}_{2}(0,s) &=& s \int_0^{p_{\rm trap}} \frac{p_{\rm trap}^{-1}}{s + c^{-1}p^\alpha}dp \sim A_\alpha s^{1/\alpha}, \end{eqnarray} where $A_\alpha$ is given by \begin{equation} A_\alpha = \frac{c^{1/\alpha}p_{\rm trap}^{-1}\pi}{\alpha \sin (\pi/\alpha)}. \end{equation} For $\alpha<3$, the leading order of Eq.~(\ref{laplace-1st-moment-exp}) is \begin{equation} -\left. \frac{\partial \widehat{P}(u,s)}{\partial u} \right|_{u=0} \sim - \frac{\widehat{\phi}_{2}'(0,0)}{A_\alpha s^{1+\frac{1}{\alpha}}}, \label{pd-P} \end{equation} where the first term in Eq.~(\ref{laplace-1st-moment-exp}) is ignored because $\widehat{\Phi}_{2}'(0,s)\propto s^{3/\alpha -2}$.
Therefore, the asymptotic behavior of $\langle {\mathcal S}(t) \rangle$ becomes \begin{equation} \langle {\mathcal S}(t) \rangle \sim \frac{-\widehat{\phi}_{2}'(0,0)}{A_\alpha \Gamma(1+1/\alpha) } t^{\frac{1}{\alpha}} \label{1st-moment-X-a<3} \end{equation} for $t\to\infty$, where $-\widehat{\phi}_{2}'(0,0)=cp_{\rm trap}^{2-\alpha}/(3-\alpha)$. For $\alpha \geq 3$, on the other hand, the asymptotic behavior of $\langle {\mathcal S}(t) \rangle$ becomes different from Eq.~(\ref{1st-moment-X-a<3}). For $\alpha >3$, the asymptotic behaviors of $-\widehat{\phi}_{2}'(0,s)$ and $-\widehat{\Phi}_{2}'(0,s)$ for $s\to 0$ become \begin{eqnarray} -\widehat{\phi}_{2}'(0,s) &=& \int_0^{p_{\rm trap}} \frac{c^{-1}p_{\rm trap}^{-1}p^{2+\alpha}}{(s + c^{-1}p^\alpha)^2}dp \sim b_\alpha s^{3/\alpha -1} \end{eqnarray} and \begin{eqnarray} -\widehat{\Phi}_{2}'(0,s) &=& \int_0^{p_{\rm trap}} \frac{p_{\rm trap}^{-1}p^{2}}{(s + c^{-1}p^\alpha)^2}dp \sim B_\alpha s^{3/\alpha -2}, \end{eqnarray} where $b_\alpha$ and $B_\alpha$ are given by \begin{equation} b_\alpha = \frac{3c^{3/\alpha }\pi p_{\rm trap}^{-1}}{\alpha^2 \sin (3\pi/\alpha)} \end{equation} and \begin{equation} B_\alpha = \frac{(\alpha -3) \pi p_{\rm trap}^{-1}c^{3/\alpha}}{\alpha^2 \sin (3\pi/\alpha)}, \end{equation} respectively. Note that there is a logarithmic correction in the asymptotic behavior of $\langle {\mathcal S}(t) \rangle$ when $\alpha=3$. Therefore, the asymptotic behavior of $\langle {\mathcal S}(t) \rangle$ becomes \begin{eqnarray} \langle {\mathcal S}(t) \rangle &\sim& \frac{b_\alpha+B_\alpha}{A_\alpha \Gamma(2-2/\alpha) } t^{1-\frac{2}{\alpha}}\nonumber\\ &=& \frac{c^{2/\alpha}\sin(\pi/\alpha)}{ \Gamma(2-2/\alpha) \sin (3\pi/\alpha) } t^{1-\frac{2}{\alpha}} \label{1st-moment-X-a>3} \end{eqnarray} for $t\to\infty$. The Laplace transform of the second moment of ${\mathcal S}(t)$ can be obtained as \begin{eqnarray} \left. \frac{\partial^2 \widehat{P} (u,s)}{\partial u^2} \right|_{u=0} = \frac{\widehat{\Phi}_{2}''(0,s)}{1- \widehat{\phi}_{2}(0,s)} + \frac{2\widehat{\Phi}_{2}'(0,s)\widehat{\phi}_{2}'(0,s)}{[1- \widehat{\phi}_{2}(0,s)]^2} \nonumber\\ +\frac{\widehat{\Phi}_{2}(0,s)\widehat{\phi}_{2}''(0,s)}{[1- \widehat{\phi}_{2}(0,s)]^2} + \frac{2\widehat{\Phi}_{2}(0,s)\widehat{\phi}_{2}'(0,s)^2}{[1- \widehat{\phi}_{2}(0,s)]^3}. \label{laplace-2nd-moment-exp} \end{eqnarray} For $\alpha<3$, the last term gives the leading contribution. Therefore, we have \begin{eqnarray} \left. \frac{\partial^2 \widehat{P}(u,s)}{\partial u^2} \right|_{u=0} \sim \frac{2\widehat{\phi}_{2}'(0,0)^2}{s[1- \widehat{\phi}_{2}(0,s)]^2}\sim \frac{2\widehat{\phi}_{2}'(0,0)^2}{A_\alpha^2 s^{1+2/\alpha} } \end{eqnarray} for $s\to 0$. It follows that the asymptotic behavior of $\langle {\mathcal S}(t)^2 \rangle$ becomes \begin{equation} \langle {\mathcal S}(t)^2 \rangle \sim \frac{2\widehat{\phi}_{2}'(0,0)^2}{A_\alpha^2 \Gamma(1+2/\alpha) } t^{\frac{2}{\alpha}} \label{2nd-moment-X-a<3} \end{equation} for $t\to\infty$. Because the EB parameter is given by \begin{equation} {\rm EB}\equiv \frac{\langle \overline{\mathcal O}(t)^2 \rangle - \langle \overline{\mathcal O}(t) \rangle^2}{\langle \overline{\mathcal O}(t)\rangle^2} = \frac{\langle {\mathcal S}(t)^2 \rangle - \langle {\mathcal S}(t) \rangle^2}{\langle {\mathcal S}(t) \rangle^2}, \end{equation} we have the EB parameter for the kinetic energy: \begin{equation} {\rm EB}\to \frac{2 \Gamma(1+1/\alpha)^2}{\Gamma (1+2/\alpha)} -1 \label{eb-ML} \end{equation} for $t\to \infty$.
This is a consequence of the Darling--Kac theorem \cite{Darling1957}. Thus, this is a universal result that does not depend on which of the subrecoil-laser-cooling models considered here is used. On the other hand, for $\alpha\geq 3$, all the terms in Eq.~(\ref{laplace-2nd-moment-exp}) contribute to the asymptotic behavior of $\langle {\mathcal S}(t)^2 \rangle$. For $\alpha >3$, the asymptotic behaviors of $\widehat{\Phi}_{2}''(0,s)$ and $\widehat{\phi}_{2}''(0,s)$ for $s\to 0$ become \begin{eqnarray} \widehat{\phi}_{2}''(0,s) &=& \int_0^{p_{\rm trap}} \frac{2c^{-1}p_{\rm trap}^{-1}p^{4+\alpha}}{(s + c^{-1}p^\alpha)^3}dp \sim c_\alpha s^{5/\alpha -2} \end{eqnarray} and \begin{eqnarray} \widehat{\Phi}_{2}''(0,s) &=& \int_0^{p_{\rm trap}} \frac{2p_{\rm trap}^{-1}p^{4}}{(s + c^{-1}p^\alpha)^3}dp \sim C_\alpha s^{5/\alpha -3}, \end{eqnarray} where $c_\alpha$ and $C_\alpha$ are given by \begin{equation} c_\alpha = \frac{5(-5+\alpha) \pi p_{\rm trap}^{-1} c^{5/\alpha}}{\alpha^3 \sin (5\pi /\alpha)} \end{equation} and \begin{equation} C_\alpha = \frac{(-5+\alpha)(-5+2\alpha) \pi p_{\rm trap}^{-1} c^{5/\alpha}}{\alpha^3 \sin (5\pi /\alpha)}, \end{equation} respectively. It follows that \begin{eqnarray} \left. \frac{\partial^2 \widehat{P}(u,s)}{\partial u^2} \right|_{u=0} \sim \left(\frac{c_\alpha + C_\alpha}{A_\alpha} + \frac{2B_\alpha b_\alpha}{A_\alpha^2} + \frac{2b_\alpha^2}{A_\alpha^2} \right)s^{4/\alpha-3}\nonumber \end{eqnarray} for $s\to 0$. Therefore, in the long-time limit, \begin{equation} \langle {\mathcal S}(t)^2 \rangle \sim \left(\frac{c_\alpha + C_\alpha}{A_\alpha} + \frac{2B_\alpha b_\alpha}{A_\alpha^2} + \frac{2b_\alpha^2}{A_\alpha^2} \right) \frac{t^{2(1-\frac{2}{\alpha})}}{\Gamma(3-4/\alpha) } , \label{2nd-moment-X-a>3} \end{equation} and the EB parameter becomes \begin{equation} {\rm EB}\to \frac{2 \Gamma (2-2/\alpha)^2}{\alpha\Gamma (3-4/\alpha)} \left[ \frac{(-5+\alpha) \sin^2 (3\pi/\alpha)}{\sin (5\pi/\alpha) \sin (\pi/\alpha)} +3\right] -1 \label{EB-p2-a>3} \end{equation} for $t\to \infty$. In contrast to the universal result for $\alpha<3$, this result differs from that of the deterministic model, as will be shown later. These predictions can be checked by directly simulating time-averaged energies, as sketched below.
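The following Python sketch samples the time-averaged energy over many realizations of the exponential model and compares the relative variance with ${\rm ML}(\gamma)$ of Eq.~(\ref{eb-ML}); the parameters are illustrative, the function name is ours, and the finite-time estimate converges only slowly to the asymptotic EB value.
\begin{verbatim}
import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(3)

def time_averaged_energy(T, alpha, c=1.0, p_trap=1.0):
    # S(T)/T for O(p) = p^2; p is piecewise constant between renewals
    t, S = 0.0, 0.0
    while t < T:
        p = rng.uniform(-p_trap, p_trap)
        tau = rng.exponential(c / abs(p) ** alpha)
        S += p * p * min(tau, T - t)  # the last segment is cut at T
        t += tau
    return S / T

alpha = 2.0  # gamma = 1/2 > 1/3: Mittag-Leffler regime
g = 1.0 / alpha
obs = np.array([time_averaged_energy(1.0e4, alpha) for _ in range(5000)])
eb_sim = obs.var() / obs.mean() ** 2
eb_th = 2.0 * Gamma(1.0 + g) ** 2 / Gamma(1.0 + 2.0 * g) - 1.0
print(f"EB simulated: {eb_sim:.3f}  vs  ML(gamma): {eb_th:.3f}")
\end{verbatim}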
\begin{figure} \includegraphics[width=.95\linewidth, angle=0]{EB-gamma-all.eps} \caption{EB parameter as a function of $\gamma$ ($=1/\alpha$) for the kinetic energy, i.e., ${\mathcal O}(p)=p^2$. Symbols are the results of numerical simulations for the HRW, deterministic, and exponential models. The solid line represents ${\rm A}(\gamma)$ [Eq.~(\ref{eb-p2-det})] for $\gamma<1/3$ and ${\rm ML}(\gamma)$ [Eq.~(\ref{eb-ML})] for $\gamma>1/3$. The dashed line represents Eq.~(\ref{EB-p2-a>3}). } \label{eb-gamma} \end{figure} \section{Stochastic model with a deterministic coupling} Here, we consider a stochastic model with a deterministic coupling, i.e., the deterministic model. This model is obtained by replacing the conditional PDF of the waiting time given the momentum by a delta function at its mean. In this sense, this model is a mean-field-like model of the exponential model. In the deterministic model, the conditional PDF $q(\tilde{\tau}|p)$ of $\tilde{\tau}$ given $p$ becomes deterministic: \begin{equation} q(\tilde{\tau}|p) = \delta(\tilde{\tau} - R(p)^{-1}). \end{equation} Using Eq.~(\ref{joint-pdf exp}) and integrating over the momentum $p$ shows that the PDF of the waiting time follows a power law: \begin{equation} \psi (\tilde{\tau}) = \gamma p_{\rm trap}^{-1} c^\gamma \tilde{\tau}^{-1 -\gamma} \quad (\tilde{\tau}\geq c p_{\rm trap}^{-\gamma^{-1}}). \label{waiting-time-pdf-det} \end{equation} A minimal numerical sketch of the deterministic model is given below.
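In the deterministic model, the waiting time is a deterministic function of the uniformly redrawn momentum, so the power-law tail of Eq.~(\ref{waiting-time-pdf-det}) can be checked by direct sampling. The following Python sketch is illustrative (our variable names; parameters chosen for convenience); it compares the empirical survival probability with ${\rm Pr}(\tilde{\tau}>T)=c^{\gamma}T^{-\gamma}/p_{\rm trap}$, which follows from integrating Eq.~(\ref{waiting-time-pdf-det}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
ALPHA, C, P_TRAP = 2.0, 1.0, 1.0
G = 1.0 / ALPHA  # gamma

# waiting time is exactly tau(p) = 1/R(p) = c |p|^{-alpha}
p = rng.uniform(-P_TRAP, P_TRAP, size=10**6)
tau = C / np.abs(p) ** ALPHA

for T in (1.0e2, 1.0e4, 1.0e6):
    survival = np.mean(tau > T)
    theory = C ** G * T ** (-G) / P_TRAP
    print(f"T={T:.0e}: simulated {survival:.2e}, theory {theory:.2e}")
\end{verbatim}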
\subsection{Scaling function and infinite invariant density} The deterministic model is described by the SMP formalism introduced in Sec.~IV. Using Eq.~(\ref{MW-SMP}), we have \begin{equation} \hat{\rho} (p,s) = \frac{\chi(p)}{s} \frac{1 - e^{-s/R(p)}}{1-\hat{\psi}(s)}. \end{equation} Because $\psi(\tilde{\tau})$ follows a power law, i.e., Eq.~(\ref{waiting-time-pdf-det}), the asymptotic form of the Laplace transform $\hat{\psi}(s)$ for $s\to 0$ is given by \begin{equation} \hat{\psi}(s) = 1 - a s^\gamma + o(s^\gamma), \end{equation} where $a= \Gamma(1-\gamma) p_{\rm trap}^{-1}c^\gamma$. In the long-time limit, the propagator is expressed as \begin{equation} \rho(p,t) \sim \begin{cases} \dfrac{\sin (\pi \gamma) }{2\pi \gamma } \left(\dfrac{t}{c}\right)^{\gamma } \quad &(|p| \leq p_c(t))\\ \\ \dfrac{\sin (\pi \gamma)}{2 \pi\gamma } \dfrac{t^{\gamma} - (t- t_{c}(p))^\gamma }{c^\gamma} &(|p|>p_c(t)), \end{cases} \label{propagator_asympt1} \end{equation} where $p_{c}(t)=(t/c)^{-\gamma}$ and $t_c(p)= c|p|^{-\gamma^{-1}}$. We note that the derivative of $\rho (p,t)$ is discontinuous at $|p|=p_{c}(t)$, i.e., the propagator has a cusp there, in contrast to the HRW model. Importantly, the asymptotic behavior of the propagator, as expressed by Eq.~(\ref{propagator_asympt1}), does not depend on the details of the uniform approximation; i.e., $\rho(p,t)$ is independent of $p_{\rm trap}$. For any small $\varepsilon>0$, there exists $t$ such that $p_c(t) < \varepsilon$ because $p_c(t) \to 0$ for $t\to\infty$. Therefore, for any small $\varepsilon>0$, the probability of $|p|>\varepsilon$ vanishes for $t\to\infty$. More precisely, for $t\gg t_c(\varepsilon)$, using $t^{\gamma}-(t-t_c)^{\gamma}\simeq \gamma t_c t^{\gamma-1}$ in Eq.~(\ref{propagator_asympt1}), the probability is given by \begin{equation} \Pr (|p|>\varepsilon) \sim \frac{\gamma \sin (\pi \gamma)}{\pi(1-\gamma)}\, c^{1-\gamma}\left(\varepsilon^{1-\gamma^{-1}}-p_{\rm trap}^{1-\gamma^{-1}}\right)t^{\gamma -1}. \end{equation} Therefore, the temperature of the system approaches zero in the long-time limit. By changing the variables ($p'=t^{\gamma} p/c^\gamma$), we obtain the rescaled propagator $\rho_{\rm res} (p',t)$. In the long-time limit, the rescaled propagator converges to a time-independent function $g_{\rm det} (p')$ (scaling function): \begin{equation} \rho_{\rm res} (p',t) \equiv \rho (c^\gamma p'/t^{\gamma},t) \left| \frac{dp}{dp'}\right| \to g_{\rm det} (p') , \label{rescaling-det} \end{equation} where the scaling function is given by \begin{equation} g_{\rm det} (p') \equiv \left\{ \begin{array}{ll} \dfrac{ \sin (\pi \gamma )}{2\pi \gamma } ~&(|p'|<1)\\ \\ \dfrac{ \sin (\pi \gamma ) \{ 1 - (1-|p'|^{-\gamma^{-1}})^\gamma\}}{2\pi \gamma } &(|p'| \geq 1) . \end{array} \right. \label{master-curve} \end{equation} This scaling function describes the details of the propagator near $p=0$. Furthermore, an infinite invariant density is obtained as a formal steady state: \begin{equation} I_\infty(p) \equiv \lim_{t\to \infty} t^{1-\gamma} \rho(p,t) = \frac{ \sin (\pi \gamma ) \left\vert p\right\vert ^{-\gamma^{-1}}}{2 \pi c^{\gamma-1} } \label{inf-d} \end{equation} for $|p|<p_{\rm trap}$.
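For completeness, the amplitude in Eq.~(\ref{inf-d}) follows from Eq.~(\ref{propagator_asympt1}) in one line: for fixed $p\neq 0$ and $t\gg t_c(p)$, \begin{equation} t^{1-\gamma}\rho(p,t) \simeq \frac{\sin (\pi \gamma)}{2\pi \gamma c^{\gamma}}\, \gamma\, t_c(p) = \frac{\sin (\pi \gamma)\, c\, |p|^{-\gamma^{-1}}}{2\pi c^{\gamma}} = \frac{\sin (\pi \gamma)\, |p|^{-\gamma^{-1}}}{2\pi c^{\gamma-1}}, \end{equation} where we used $t^{\gamma}-(t-t_c)^{\gamma} \simeq \gamma t_c t^{\gamma-1}$ and $t_c(p)=c|p|^{-\gamma^{-1}}$.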
In the long-time limit, the propagator is almost described by the infinite invariant density, even though the former is normalized and the latter is not. The infinite invariant density $I_\infty(p)$ coincides with the formal steady state obtained using Eq.~(\ref{steady-state}). However, the propagator described by Eq.~(\ref{propagator_asympt1}) is not a solution of the master equation, Eq.~(\ref{Master1}). Figure~\ref{propagator} shows the scaled propagator of the deterministic model. In the numerical simulations, we generated $10^8$ trajectories to obtain the propagator. The propagator exhibits two regimes. For $|p|<p_c(t)$, the propagator increases with time $t$. For $|p|>p_c(t)$, the asymptotic form of the propagator follows $t^{\gamma-1}I_\infty(p)$. Because the prefactor $t^{\gamma -1}$ approaches zero in the long-time limit, the propagator outside $p_c(t)$ vanishes. A cusp exists at $|p|=p_c(t)$, in contrast to the HRW and the exponential models, where no cusp exists in the propagator. Figure~\ref{propagator-rescale} shows numerical simulations of the rescaled propagators in the deterministic case for different $\chi(p)$, i.e., for uniform and Gaussian distributions. The propagators are compared with the scaling function $g_{\rm det} (p')$ without fitting parameters, where we generated $10^8$ trajectories to obtain the rescaled propagator. Therefore, the scaling function describes the details of the propagator near $p=0$ and is universal in the sense that it does not depend on $\chi(p)$. \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{prop-inf-det.eps} \caption{Time evolution of the propagator multiplied by $t^{1-\gamma}$ in the deterministic model for different times ($\alpha=\gamma^{-1} =2, c=1$, and $p_{\rm trap}=1$). Symbols with lines represent the results of numerical simulations of the deterministic model. The dashed lines represent the infinite invariant density $I_\infty(p)$ given by Eq.~(\ref{inf-d}). The solid lines represent rescaled scaling functions, $t g_{\rm det} (t^\gamma p)$. The dotted lines represent $t g_{\rm det} (0)$ for different values of $t$. The initial position is chosen uniformly on $[-1,1]$. } \label{propagator} \end{figure} \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{sf-det-t=104.eps} \caption{Rescaled propagators for different distributions $\chi(p)$ ($\alpha=\gamma^{-1} =2$, $c=1$, and $p_{\rm trap}=1$), where we consider the uniform distribution $\chi(p)=1/2$ on $p\in [-1,1]$ and the Gaussian distribution $\chi (p) = \exp(-p^2/2)/\sqrt{2\pi}$. Symbols with lines are the results of the numerical simulations of the deterministic model with $t=10^4$. The solid line represents the scaling function given by Eq.~(\ref{master-curve}). The initial position is chosen uniformly on $[-1,1]$. Note that the results for different $\chi(p)$ are indistinguishable.} \label{propagator-rescale} \end{figure} \subsection{Ensemble and time averages of observables} Here, we consider the ensemble averages of observables and show that the scaling function and the infinite invariant density play an important role. In this subsection, we set $p_{\rm trap}=1$ for simplicity. The ensemble average of an observable ${\mathcal O}(p)$ is given by Eq.~(\ref{ensemble-ave-def}), which can be represented using the scaling function and the infinite invariant density.
To see this, we divide the integration range as \begin{widetext} \begin{equation} \langle {\mathcal O}(p(t)) \rangle = \int_{-p_c(t)}^{p_c(t)} \rho(p,t) {\mathcal O}(p) dp + \int_{|p|>p_c(t)} \rho (p,t) {\mathcal O}(p)dp. \label{ensemble-ave} \end{equation} In the long-time limit, using the scaling function and the infinite invariant density, we have \begin{equation} \langle {\mathcal O}(p(t)) \rangle \cong \int_{-1}^{1} g_{\rm det} (p') {\mathcal O}(c^\gamma p'/t^{\gamma}) dp' + t^{\gamma-1} \int_{|p|>p_c(t)} I_\infty (p) {\mathcal O}(p) dp, \label{ensemble-ave2} \end{equation} where we applied a change of variables in the first term and used Eqs.~(\ref{propagator_asympt1}), (\ref{master-curve}), and (\ref{inf-d}). \end{widetext} Here, we assume that ${\mathcal O}(p) \sim C|p|^\beta$ for $p\to 0$ and that it is bounded for $p\ne 0$. In particular, the energy and the absolute value of the momentum correspond to observables with $\beta=2$ and $\beta=1$, respectively. When $|p|^\beta$ is integrable with respect to $g_{\rm det}(p)$, i.e., $\int_{-\infty}^\infty g_{\rm det}(p) |p|^\beta dp<\infty$, $\beta$ satisfies the inequality $-1<\beta < \gamma^{-1}-1$. In this case, in parallel with Eq.~(\ref{en-ave-scaling}), the asymptotic behavior of the ensemble average becomes \begin{equation} \langle {\mathcal O}(p(t)) \rangle \sim C c^{\beta\gamma} t^{-\beta \gamma} \int_{-\infty}^\infty |p'|^\beta g_{\rm det}(p')dp' \quad (t \to \infty), \label{en-ave-scaling-det} \end{equation} where we used Eq.~(\ref{master-curve}): \begin{equation} \int_{-1}^{1} g_{\rm det}(p') {\mathcal O}(c^\gamma p'/t^{\gamma})dp' \sim C c^{\beta\gamma} t^{-\beta \gamma} \int_{-1}^1 g_{\rm det}(p') |p'|^{\beta} dp' \end{equation} for $t \to \infty$; the second term in Eq.~(\ref{ensemble-ave2}) supplies the remaining part of the integral, over $|p'|>1$, through the tail of $g_{\rm det}$, while the contribution from momenta of any fixed scale decays faster, as $t^{\gamma-1}$, because $-\beta \gamma>\gamma -1$. On the other hand, when ${\mathcal O}(p)$ is integrable with respect to $I_\infty (p)$, i.e., $\int_{-1}^1 I_\infty (p) {\mathcal O}(p) dp < \infty$, which requires $\beta > \gamma^{-1}-1~ (>0)$, the asymptotic behavior of the ensemble average becomes \begin{equation} \langle {\mathcal O}(p(t)) \rangle \sim t^{\gamma-1} \int_{-1}^1 I_\infty (p) {\mathcal O}(p) dp \quad (t \to \infty). \label{en-ave-infty} \end{equation} Therefore, the asymptotic behavior of the ensemble average is proportional to $ t^{-\lambda(\alpha,\beta)}$, and the integrability of the observable with respect to the scaling function or the infinite invariant density determines the power-law exponent $\lambda(\alpha,\beta)$. Recall that the exponent $\gamma$ is defined as $\gamma=1/\alpha$; the power-law exponent governing the decay of the ensemble- and time-averaged observables is therefore the same as in the exponential model, Eq.~(\ref{decay-exp}). In the case of $\beta = \gamma^{-1}-1$, the integrals of the observable with respect to both the scaling function and the infinite invariant density diverge. In this case, Eq.~(\ref{ensemble-ave2}) should be expressed as \begin{widetext} \begin{equation} \langle {\mathcal O}(p(t)) \rangle = \int_{-1}^{1} g_{\rm det} (p') {\mathcal O}(c^\gamma p'/t^{\gamma}) dp' + \int_{1<|p'|\leq t^\gamma/c^\gamma} g_{\rm det} (p') {\mathcal O}(c^\gamma p'/t^{\gamma}) dp' . \label{ensemble-ave3} \end{equation} \end{widetext} The first term decays as $t^{-\beta \gamma}$ because the integral of the observable from $-1$ to $1$ with respect to the scaling function is finite.
Returning to the marginal case, the second term in Eq.~(\ref{ensemble-ave3}) carries a logarithmic correction and therefore yields the leading order for $t\to\infty$: \begin{equation} \langle {\mathcal O}(p(t)) \rangle \sim \frac{C c^{\gamma^{-1}-\gamma-1} \gamma \sin(\pi\gamma)}{\pi } t^{\gamma-1} \ln t. \end{equation} Here, we discuss the decrease of the energy. When the observable is the energy, i.e., ${\mathcal O}(p)=p^2$, the asymptotic decay is \begin{equation} \langle p(t)^2 \rangle \propto t^{-2 \gamma}\quad(t\to\infty) \end{equation} or \begin{equation} \langle p(t)^2 \rangle \sim t^{\gamma-1} \int_{-1}^1 I_\infty (p)\, p^2\, dp \quad(t\to\infty) \end{equation} for $\gamma^{-1}>3$ and $\gamma^{-1}<3$, respectively. Thus, the ensemble average of the energy approaches zero in the long-time limit. Interestingly, the power-law exponent obeys the constraint $\lambda(2,\gamma) \leq 2/3$, where the equality holds at $\gamma^{-1}=\alpha=3$. For general observables, the power-law exponent is restricted as \begin{equation} \lambda(\beta,\gamma) \leq \frac{\beta}{\beta +1}. \end{equation} In the case of the absolute value of the momentum, it is bounded as $\lambda(1,\gamma) \leq 1/2$, where the maximum is attained at $\gamma^{-1}=2$. \subsection{Distributional characteristics of time-averaged observables} Distributional limit theorems for time-averaged observables in the SMP with continuous state variables were also considered in Ref.~\cite{Akimoto2020}, where the infinite invariant density plays an important role in discriminating classes of observables. For the SMP, the integral of ${\mathcal O}(p(t))$ is a piecewise linear function of $t$ and is called a continuous accumulation process \cite{Akimoto2015}. The ensemble average of an increment over one segment, i.e., \begin{equation} \left\langle \int_0^{\tilde{\tau}} {\mathcal O}(p(t'))dt'\right\rangle \equiv \int_0^\infty \tilde{\tau} {\mathcal O} \left( c^\gamma\tilde{\tau}^{-\gamma} \right) \psi (\tilde{\tau})d\tilde{\tau}, \end{equation} may diverge for some observables. When it is finite, the distribution function of the time-averaged observable follows the Mittag--Leffler distribution, which is well known in infinite ergodic theory \cite{Aaronson1997} and stochastic processes \cite{He2008, Miyaguchi2011, Miyaguchi2013, Akimoto2013a,AkimotoYamamoto2016a,Albers2018,Albers2022}. On the other hand, when it diverges, other non-Mittag--Leffler limit distributions are known \cite{Akimoto2008, Akimoto2015, Albers2018, Akimoto2020, Barkai2021, Albers2022}. This integrability condition for the increment can be represented by the integrability of the observable with respect to the infinite invariant density. Here, we consider the energy as a specific example. The distributional limit theorems derived in Ref.~\cite{Akimoto2020} can be straightforwardly applied to this case; a derivation is given in Appendix~A, and here we simply apply our previous results. For $\gamma>1/3$, the observable ${\mathcal O}(p)=p^2$ is integrable with respect to the infinite invariant density, i.e., $\int_0^1{\mathcal O}(p) I_\infty(p) dp <\infty$, and the ensemble average of the increment is finite. Therefore, the distribution of the time average follows the Mittag--Leffler distribution.
More precisely, the normalized time average defined by $\overline{{\mathcal O}}(t)/\langle \overline{\mathcal O}(t)\rangle$ converges in distribution: \begin{equation} \frac{\overline{\mathcal O}(t) }{\langle \overline{\mathcal O}(t)\rangle } \Rightarrow M_\gamma \end{equation} for $t\to\infty$, where $M_\gamma$ is a random variable distributed according to the Mittag--Leffler law \cite{Aaronson1997, Miyaguchi2013}. The ensemble average of the time average decays as $\langle \overline{\mathcal O}(t)\rangle \propto t^{\gamma-1}$ for $t\to\infty$ and, in general, $\langle \overline{\mathcal O}(t)^n \rangle \propto t^{n(\gamma-1)}$ for $t\to\infty$. Thus, $M_\gamma$ does not depend on time $t$ in the long-time limit. The mean of $M_\gamma$ is one by definition, and the variance is given by \begin{equation} {\rm ML}(\gamma) \equiv \frac{2\Gamma(1+\gamma)^2}{\Gamma (1+2\gamma)} -1. \label{eb-ML2} \end{equation} On the other hand, for $\gamma \leq 1/3$, the observable ${\mathcal O}(p)=p^2$ is not integrable with respect to the infinite invariant density, and the ensemble average of the increment also diverges. In this case, the normalized time average does not converge in distribution to $M_\gamma$ but rather to another random variable $C_\gamma$ \cite{Akimoto2020}: \begin{equation} \frac{\overline{\mathcal O}(t)}{\langle \overline{\mathcal O}(t)\rangle} \Rightarrow C_\gamma \end{equation} for $t\to\infty$. The ensemble average of the time average decays as $\langle \overline{\mathcal O}(t)\rangle \propto t^{-2\gamma}$ for $t\to\infty$ and, in general, $\langle \overline{\mathcal O}(t)^n \rangle \propto t^{-2n\gamma}$ for $t\to\infty$. The variance of $C_\gamma$ is given by \begin{equation} {\rm A}(\gamma)\equiv \frac{6\gamma \Gamma(2-2\gamma)^2}{\Gamma(3-4\gamma)} \left[\frac{ 3 \Gamma (2 - 5\gamma) \Gamma (1-\gamma) }{5\gamma \Gamma(1 -3\gamma)^2} + 1 \right] -1. \label{eb-p2-det} \end{equation} Since the distribution of the normalized time average $\overline{{\mathcal O}}(t)/\langle \overline{\mathcal O}(t)\rangle$ converges to $C_\gamma$ or $M_\gamma$ for $\gamma<1/3$ and $\gamma>1/3$, respectively, the ergodicity breaking (EB) parameter, defined as the relative variance of $\overline{\mathcal O}(t)$, i.e., $\langle \overline{\mathcal O}(t)^2 \rangle/\langle \overline{\mathcal O}(t) \rangle^2 -1$, is given by ${\rm A}(\gamma)$ and ${\rm ML}(\gamma)$ for $\gamma<1/3$ and $\gamma>1/3$, respectively. As shown in Fig.~\ref{eb-det}, the trajectory-to-trajectory fluctuations of $\overline{\mathcal O}(t)$ are suppressed by increasing $\gamma$ for $\gamma>1/3$ and vanish for $\gamma\to 1$. On the other hand, they show a non-trivial dependence on $\gamma$ for $\gamma<1/3$. We note that ${\displaystyle \lim_{\gamma\to1/3}{\rm A}(\gamma) = \lim_{\gamma\to1/3}{\rm ML}(\gamma)}$. \begin{figure} \includegraphics[width=.95\linewidth, angle=0]{EB-det.eps} \caption{EB parameter as a function of $\gamma$ for two observables, ${\mathcal O}(p)=p^2$ and ${\mathcal O}(p)=I(|p|>0.5)$, where $I(|p|>0.5)=1$ if $|p|>0.5$ and zero otherwise. The solid line represents ${\rm A}(\gamma)$ and ${\rm ML}(\gamma)$ for $\gamma<1/3$ and $\gamma>1/3$, respectively. The dashed line represents ${\rm ML}(\gamma)$ for $\gamma<1/3$. Note that $I(|p|>0.5)$ is integrable with respect to $I_\infty (p)$ for all $\gamma$.
} \label{eb-det} \end{figure} \begin{table*} \begin{tabular}{|p{30mm}|p{30mm}|p{30mm}|p{30mm}|} \hline & HRW & exponential model & deterministic model \\ \hline model & Markov & Markov & non-Markov \\ \hline invariant density & $\rho^*(p) \propto |p|^{-\alpha}$ & $\rho^*(p) \propto |p|^{-\alpha}$ & $\rho^*(p) \propto |p|^{-\alpha}$ \\ \hline scaling function & same as in the exponential model & Eq. (\ref{sf-exp}) & Eq.~(\ref{master-curve}) \\ \hline decay exponent & same as in the exponential model & Eq. (\ref{decay-exp}) & Eq. (\ref{decay-exp}) \\ \hline EB (integrable) & same as in the exponential model & $\dfrac{2\Gamma(1+\gamma)^2}{\Gamma(1+2\gamma)} -1$ & $\dfrac{2\Gamma(1+\gamma)^2}{\Gamma(1+2\gamma)} -1$ \\ \hline EB (non-integrable) & same as in the exponential model & Eq. (\ref{EB-p2-a>3}) & Eq. (\ref{eb-p2-det}) \\ \hline \end{tabular} \caption{Comparison of the infinite invariant density, the scaling function, the relaxation power-law exponent of the time- and ensemble-averaged energy, and the EB parameter in the three stochastic models.} \label{sum} \end{table*} \section{Conclusion} We investigated the accumulation process of the momentum of an atom in three stochastic models of subrecoil laser cooling. For the HRW and the exponential models, the formal steady state of the master equation cannot be normalized when $\alpha\geq 1$. For all the models, the scaled propagator defined by $t^{1-\gamma} \rho (p,t)$ converges to a time-independent function, i.e., an infinite invariant density. For the deterministic and exponential models, we derived the exact forms of the scaling function and the infinite invariant density. As a result, we found both universal and non-universal features in the three stochastic models. In particular, the power-law form of the infinite invariant density is universal across the three models, whereas there is a clear difference between the scaling functions of the deterministic and exponential models. A summary of the comparison of the three stochastic models is presented in Table~\ref{sum}. We numerically showed that the propagator obtained using the exponential model is in perfect agreement with that of the HRW model for large $t$, which means that the uniform approximation used in the exponential model is very useful for obtaining a deeper understanding of the HRW model. When we focus on the jumps of the momentum into the trapping region, the jump distribution can be taken as approximately uniform in the trapping region because the trap size $p_{\rm trap}$ can be arbitrarily small. We note that the uniform form of $\chi(p)$ is necessary, whereas the value of $p_{\rm trap}$ is irrelevant, for reproducing the statistical behavior of the HRW model. This is the reason why the uniform approximation can be applied to the HRW model. The relation between the exponential and the HRW models is similar to that between the continuous-time random walk (CTRW) and the quenched trap model (QTM). In particular, the waiting times in the exponential model and the CTRW are IID random variables, whereas those in the HRW model and the QTM are not. Moreover, it is known that the CTRW is a good approximation of the QTM when the dimension is greater than two or under a bias \cite{Machta1985}. We showed that the integrability of observables with respect to the infinite invariant density determines the power-law exponent of the decay of the ensemble averages of the observables in the exponential and deterministic models. As a result, we found that the power-law exponent has a maximum at the transition point for both models.
Furthermore, we found that the integrability of the observable with respect to the infinite invariant density plays an important role in characterizing the trajectory-to-trajectory fluctuations of the time averages in the three models. When the observable is integrable, the distribution is universal and described by the Mittag--Leffler distribution. On the other hand, when the observable is not integrable, the distribution differs between the exponential and the deterministic models. Using the EB parameter, we numerically showed that the distribution in the HRW model agrees with that in the exponential model even when the observable is not integrable. \section*{Acknowledgement} T.A. was supported by JSPS Grant-in-Aid for Scientific Research (C) (No.~JP18K03468). E.B. acknowledges the support of the Israel Science Foundation (Grant No.~1898/17).
{ "timestamp": "2022-02-02T02:13:40", "yymm": "2202", "arxiv_id": "2202.00274", "language": "en", "url": "https://arxiv.org/abs/2202.00274" }
\section{Introduction} Deep neural networks (DNNs) and convolutional neural networks (CNNs) have been widely used in various applications such as image classification, semantic segmentation, and object detection\cite{krizhevsky2017imagenet,liu2019recent, galvez2018object}. Training a high-performance model is not an easy task because it requires a large amount of data, powerful computational resources (GPUs), and efficient algorithms. Considering the expertise, cost, and time required for training, trained models are a form of intellectual property that should be protected. \par There are two approaches to the intellectual property protection of models: ownership verification and access control. The former aims to identify the ownership of a model, whereas the latter aims to protect a model from unauthorized use\cite{kiya2022overview}. Ownership verification methods are inspired by watermarking: a watermark is embedded into a model, and the embedded watermark is used to verify the ownership of the model\cite{uchida2017embedding,zhang2018protecting,darvish2019deepsigns,NEURIPS2019_75455e06,xue2021dnn,maung2021piracy}. However, ownership verification cannot restrict the execution of a model, so, in principle, attackers can freely exploit a stolen model for their own benefit or use it in adversarial attacks\cite{szegedy2013intriguing}. Therefore, in this paper, we focus on access control, which aims to protect models from unauthorized use.\par A number of access control methods have been proposed for model protection. By encrypting input images or feature maps with a secret key, a stolen model cannot be used to its full capacity without the correct key \cite{maungmaung_kiya_2021,maung2021protection,ito2021access}. However, these methods have never been applied to object detection models. In this paper, an access control method with encrypted feature maps is applied to object detection models for the first time, and the effectiveness of the method is confirmed experimentally. \begin{figure}[t] \centering \includegraphics[bb=0 0 750 357,scale=0.3]{figure/access_control.png} \caption{Overview of access control} \label{access_control} \end{figure} \section{Proposed Method} \subsection{Overview} An overview of access control for protecting trained models from unauthorized use is shown in Fig. \ref{access_control}. A protected model is trained with a secret key $K$. When authorized users input test images together with the correct key $K$, the protected model provides results equivalent to those of a model trained without any protection. In contrast, when unauthorized users input test images alone, or test images with a wrong key $K'$, the model provides only degraded performance.\par As access control methods using a secret key, the input image encryption method \cite{maungmaung_kiya_2021} and the feature map encryption method \cite{ito2021access} have been proposed. Maung's method\cite{maungmaung_kiya_2021} focuses on access control of image classification models: input images are divided into blocks and encrypted with a secret key using pixel shuffling, bit flipping, or format-preserving Feistel-based encryption (FFX)\cite{bellare2010addendum}, and the encrypted images are used as training and test images.
Since this method encrypts images block by block, it destroys spatial information, and it therefore cannot be used to protect the object detection models considered in this paper. \par Ito's method \cite{ito2021access} focuses on access control of semantic segmentation models: models are trained and tested while the channels of feature maps selected by a secret key are randomly permuted. This encryption is spatially invariant, a property confirmed to be very important for applications such as semantic segmentation \cite{ito2021access}. Although the method has been validated for semantic segmentation, it has not been validated for object detection. Therefore, in this paper, we propose an access control method for object detection models based on this method. \begin{figure*}[htb] \centering \includegraphics[bb=0 0 1327 305,scale=0.35]{figure/model.png} \caption{Architecture of object detection model (SSD300)} \label{proposed_model} \end{figure*} \begin{figure}[tb] \centering \includegraphics[bb=0 0 699 231,scale=0.35]{figure/perm_layer.png} \caption{Feature map encryption\cite{ito2021access}} \label{transform} \end{figure} \subsection{Encryption Method} There are multiple feature maps in a CNN, as shown in Fig. \ref{proposed_model}. In the proposed method, selected feature maps are transformed by using a secret key in accordance with the procedure of learnable image encryption\cite{maungmaung_kiya_2021,adv-def}. The encryption procedure is given below, where $x$ is a selected feature map of dimension $c \times h \times w$, $c$ is the number of channels, $h$ is the height, and $w$ is the width of the feature map. \begin{itemize} \item[1)] Generate a random vector of size $c$ using a secret key as in (1): \begin{equation} [\alpha_1,\ldots,\alpha_i,\ldots,\alpha_{i'},\ldots,\alpha_c], \quad \alpha_i \in \left\{1,\ldots,c\right\}, \end{equation} where $\alpha_i \ne \alpha_{i'}$ if $i\ne i'$.\par \item[2)] Replace each element $x(i,j,k)$ of $x$, $i\in \left\{1,\ldots,c\right\}, j \in \left\{1,\ldots,h\right\}, k \in \left\{1,\ldots,w\right\}$, with $x(\alpha_i,j,k)$ so that $x$ is transformed into a feature map $x'$ with elements $x'(i,j,k) = x(\alpha_i,j,k)$. \end{itemize} This encryption is a spatially invariant operation, so the spatial information of the feature map is maintained (see Fig. \ref{transform}). This property is very important in object detection tasks, which predict both the positions and the classes of objects. \subsection{Model Training and Testing} In the proposed method, the transformation described above is applied to the selected feature maps of an object detection model at every training iteration. SSD300\cite{liu2016ssd} based on VGG16\cite{simonyan2014very}, pretrained on the ILSVRC CLS-LOC dataset\cite{russakovsky2015imagenet}, is used as the object detection model in this paper; SSD300 has 11 feature maps, as illustrated in Fig. \ref{proposed_model}. \par In testing the trained model, authorized users have the same key that was used for training. When authorized users apply query images to the model, the feature maps selected for training are transformed with the key. If unauthorized users steal the protected model without the correct key, we assume that they transform the feature maps with an incorrect key or use the model without the transform.
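To make the procedure concrete, the following PyTorch sketch (our illustration, not the authors' implementation) shows the key-based channel permutation described above; here the secret key seeds a pseudorandom generator that produces the vector $[\alpha_1,\ldots,\alpha_c]$, and a batch dimension is added to the feature map.
\begin{verbatim}
import torch

def encrypt_feature_map(x: torch.Tensor, key: int) -> torch.Tensor:
    """Permute the channels of an (N, c, h, w) feature map with a secret key."""
    c = x.shape[1]
    g = torch.Generator().manual_seed(key)  # key -> deterministic permutation
    alpha = torch.randperm(c, generator=g)  # alpha_i != alpha_i' for i != i'
    return x[:, alpha, :, :]                # x'(i,j,k) = x(alpha_i, j, k)

# The same key always reproduces the same permutation, so applying the
# transform at every training and test forward pass plays the role of K.
x = torch.randn(2, 512, 38, 38)             # e.g., a VGG16 conv4_3 feature map
assert torch.equal(encrypt_feature_map(x, 42), encrypt_feature_map(x, 42))
\end{verbatim}
In an actual model, this function would be applied to the selected feature maps inside the forward pass, with the key fixed throughout training and testing.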
\subsection{Requirements of Protected Models} Protected models should meet the following requirements. \begin{itemize} \item They provide, to authorized users with the secret key, almost the same performance as models trained without any encryption. \item They provide degraded performance to unauthorized users without the correct key. \end{itemize} \section{Experiments and Results} \subsection{Setup} We used the PASCAL visual object classes (VOC) challenge 2007 \cite{everingham2010pascal} and 2012 \cite{everingham2015pascal} trainval datasets for training, and the PASCAL VOC 2007 test dataset for testing. For data augmentation, random sample cropping, horizontal flipping, and the photometric distortions described in \cite{liu2016ssd} were used for training. In addition, due to the input-size restriction of SSD300 shown in Fig. \ref{proposed_model}, input images were resized to $300\times300$ pixels.\par Models were trained using a stochastic gradient descent (SGD) optimizer with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 32. The learning rate was set to $10^{-3}$ for the first 60k iterations and then reduced to $10^{-4}$ for 20k iterations and to $10^{-5}$ for the final 40k iterations. The overall objective loss function is a weighted sum of the localization loss and the confidence loss, where the confidence loss is the cross-entropy loss over class confidences, and the localization loss is the smooth L1 loss between the predicted positions and the ground truth positions. \begin{figure*}[t] \scalebox{0.8}[0.8]{ \begin{tabular*}{50mm}{@{\extracolsep{\fill}}c|c|ccc} Ground Truth&Baseline&Correct ($K$)&Plain&Incorrect ($K'$) \\ \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 559 453,scale=0.18]{figure/1/gt.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 556 453,scale=0.18]{figure/1/baseline.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 452,scale=0.18]{figure/1/correct.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 559 453,scale=0.18]{figure/1/plain.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 452,scale=0.18]{figure/1/Incorrect.png} \end{minipage}\\ \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 369,scale=0.18]{figure/2/gt.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 368,scale=0.18]{figure/2/baseline.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 559 369,scale=0.18]{figure/2/correct.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 561 383,scale=0.18]{figure/2/plain.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 561 370,scale=0.18]{figure/2/Incorrect.png} \end{minipage}\\ \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 414 546,scale=0.18]{figure/3/gt.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 427 546,scale=0.18]{figure/3/baseline.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 405 548,scale=0.18]{figure/3/correct.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 409 550,scale=0.18]{figure/3/plain.png} \end{minipage} & \begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 410 564,scale=0.18]{figure/3/Incorrect.png} \end{minipage}\\ \end{tabular*} } \caption{Examples of experimental results (Model-4)} \label{detection_result} \end{figure*}
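For reference, the optimizer and learning-rate schedule above correspond to the following PyTorch sketch (the SSD300 network, data loader, and multibox loss are assumed to be defined elsewhere and appear only in comments; the placeholder module is ours):
\begin{verbatim}
import torch

model = torch.nn.Conv2d(3, 8, 3)  # placeholder standing in for SSD300
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60_000, 80_000], gamma=0.1)  # 60k/20k/40k split

# Training skeleton (hypothetical loader and losses):
# for images, targets in loader:
#     loss = loc_loss(images, targets) + conf_loss(images, targets)
#     optimizer.zero_grad(); loss.backward()
#     optimizer.step(); scheduler.step()  # stepped per iteration, not per epoch
\end{verbatim}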
\subsection{Detection Performance} Mean average precision (mAP) \cite{liu2016ssd}, with a range of $[0,1]$, was used as the metric for evaluating detection performance; a mAP value closer to 1 indicates higher accuracy. In the experiment, a selected feature map was transformed with a key $K$ in accordance with the procedure in Sec. II. In Table \ref{table:result}, Correct ($K$) indicates that the selected feature map was transformed with the same key $K$ as in training, Model-$i$ means that feature map $i$ was selected for encryption, and Baseline indicates that training and testing were performed without any encryption. Fig. \ref{detection_result} shows examples of detection results obtained with Model-4. \par From Table \ref{table:result} and Fig. \ref{detection_result}, the proposed method provides almost the same prediction results as the Baseline when the feature map is transformed using the correct key at test time. \begin{table}[bt] \caption{Detection accuracy (mAP) of proposed models} \label{table:result} \centering \begin{tabular}{|c|ccc|} \hline Selected feature map & Correct ($K$) & Plain & Incorrect ($K'$) \\ \hline Model-1& 0.7244 & 0.1363 & 0.0421 \\ Model-2& \textbf{0.7611} & \textbf{0.0091} & \textbf{0.0180} \\ Model-3& 0.7475 & 0.0091 & 0.0078 \\ Model-4& \textbf{0.7611} & \textbf{0.0023} & \textbf{0.0043} \\ Model-5& \textbf{0.7587} & \textbf{0.1672} & \textbf{0.1624} \\ Model-6& \textbf{0.7617} & \textbf{0.1732} & \textbf{0.1672} \\ Model-7& \textbf{0.7695} & \textbf{0.1768} & \textbf{0.1750} \\ Model-8& 0.7677 & 0.3529 & 0.3415\\ Model-9& 0.7705 & 0.5767 & 0.5678 \\ Model-10& 0.7705 & 0.7177 & 0.7027 \\ Model-11& 0.7512 & 0.7314 & 0.7252 \\ \hline Baseline& \multicolumn{3}{c|}{0.7690}\\ \hline \end{tabular} \end{table} \subsection{Robustness against Unauthorized Access} Two types of unauthorized access were considered in the experiment. Plain in Table \ref{table:result} indicates that an unauthorized user without the key applied query images to a protected model without transforming the selected feature map. Incorrect ($K'$) indicates that an unauthorized user without the key applied query images to a protected model after transforming the selected feature map with a randomly generated key $K'$; the results for Incorrect ($K'$) are averaged over 100 tests with random keys. \par From the table, Models 1--7 provided low detection accuracy for both Plain and Incorrect ($K'$). On the other hand, when a feature map in a deeper layer is transformed, the resistance to unauthorized access is lost. We consider that the reason lies in the structure of SSD300: to detect objects at various scales, SSD performs detection using features from multiple layers (see Fig.~\ref{proposed_model}). For example, in Model-9, the detection branches attached to feature maps 4, 7, and 8 can use the same features as in the Baseline; the deeper the encrypted feature map, the larger the number of features that remain identical to the Baseline.\par From Fig. \ref{detection_result}, the detection performance degraded significantly when the model was used without authorization. Accordingly, the proposed models were robust enough against unauthorized access.
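The robustness observed above also has a simple combinatorial interpretation, sketched below (our illustration; the channel count and key values are made up): a randomly generated incorrect key reproduces the training permutation at only about $1/c$ of the channel positions on average, so nearly all channels are misrouted relative to training.
\begin{verbatim}
import torch

def key_permutation(c: int, key: int) -> torch.Tensor:
    g = torch.Generator().manual_seed(key)
    return torch.randperm(c, generator=g)

c, K = 512, 42                              # channels and training key
train_perm = key_permutation(c, K)
fracs = [(key_permutation(c, k) == train_perm).float().mean().item()
         for k in range(1000, 1100)]        # 100 random incorrect keys K'
print(f"mean matching fraction: {sum(fracs) / len(fracs):.4f}")  # ~1/c
\end{verbatim}
This mirrors the Incorrect ($K'$) protocol in Table \ref{table:result}, although the actual impact on mAP depends on where the permuted feature map sits in the network, as discussed above.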
\begin{table}[tb] \caption{Detection accuracy (mAP) of models \\ with encrypted input images} \label{table:SHF} \centering \scalebox{0.9}{ \begin{tabular}{|c|c|ccc|} \hline method & block size & Correct ($K$) & Plain & Incorrect ($K'$) \\ \hline \multirow{5}{*}{pixel shuffling (SHF)} & 1 & 0.7710 & 0.7598 & 0.7603 \\ &4& 0.7154 & 0.5745 & 0.3883 \\ &12& 0.4891 & 0.1976 & 0.0910 \\ &20& 0.0083 & 0.0086 & 0.0065 \\ &60& 0.1284 & 0.0480 & 0.0416 \\ \hline \multicolumn{2}{|c|}{Proposed (Model-4)}& \textbf{0.7611} & \textbf{0.0023} & \textbf{0.0043} \\ \hline \multicolumn{2}{|c|}{Baseline} & \multicolumn{3}{c|}{0.7690}\\ \hline \end{tabular} } \end{table} \subsection{Comparison with Encryption of Input Images} The proposed method was compared with a method that protects models with encrypted input images, which was proposed for image classification models\cite{maungmaung_kiya_2021}. In that method, there are three block-wise operations for encrypting input images: pixel shuffling, bit flipping, and format-preserving Feistel-based encryption (FFX)\cite{bellare2010addendum}. \par In this paper, pixel shuffling (SHF) with a block size of 1, 4, 12, 20, or 60 was applied to input images, and the encrypted images were used for training and testing. \par The experimental conditions were the same as in Sec. III-A. From Table \ref{table:SHF}, the detection accuracy was significantly lower than that of the proposed method for almost all block sizes. When the block size was small, the detection accuracy was high, but the resistance to unauthorized access was weak, so the models were not protected \cite{maungmaung_kiya_2021}. In contrast, when the block size was large, the resistance to unauthorized access was stronger, but the detection accuracy was greatly degraded. Therefore, the conventional method with encrypted input images is not effective for object detection models. \section{Conclusion} We proposed an access control method with encrypted feature maps for object detection models for the first time. In the experiment, the proposed method was demonstrated not only to provide high detection accuracy to authorized users but also to be robust enough against two types of unauthorized access. \section*{Acknowledgement} This study was partially supported by JSPS KAKENHI (Grant Number JP21H01327). \bibliographystyle{IEEEtran}
\subsection{Abbreviations and Acronyms}\label{AA} Define abbreviations and acronyms the first time they are used in the text, even after they have been defined in the abstract. Abbreviations such as IEEE, SI, MKS, CGS, ac, dc, and rms do not have to be defined. Do not use abbreviations in the title or heads unless they are unavoidable. \subsection{Units} \begin{itemize} \item Use either SI (MKS) or CGS as primary units. (SI units are encouraged.) English units may be used as secondary units (in parentheses). An exception would be the use of English units as identifiers in trade, such as ``3.5-inch disk drive''. \item Avoid combining SI and CGS units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity that you use in an equation. \item Do not mix complete spellings and abbreviations of units: ``Wb/m\textsuperscript{2}'' or ``webers per square meter'', not ``webers/m\textsuperscript{2}''. Spell out units when they appear in text: ``. . . a few henries'', not ``. . . a few H''. \item Use a zero before decimal points: ``0.25'', not ``.25''. Use ``cm\textsuperscript{3}'', not ``cc''.) \end{itemize} \subsection{Equations} Number equations consecutively. To make your equations more compact, you may use the solidus (~/~), the exp function, or appropriate exponents. Italicize Roman symbols for quantities and variables, but not Greek symbols. Use a long dash rather than a hyphen for a minus sign. Punctuate equations with commas or periods when they are part of a sentence, as in: \begin{equation} a+b=\gamma\label{eq} \end{equation} Be sure that the symbols in your equation have been defined before or immediately following the equation. Use ``\eqref{eq}'', not ``Eq.~\eqref{eq}'' or ``equation \eqref{eq}'', except at the beginning of a sentence: ``Equation \eqref{eq} is . . .'' \subsection{\LaTeX-Specific Advice} Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead of ``hard'' references (e.g., \verb|(1)|). That will make it possible to combine sections, add equations, or change the order of figures or citations without having to go through the file line by line. Please don't use the \verb|{eqnarray}| equation environment. Use \verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}| environment leaves unsightly spaces around relation symbols. Please note that the \verb|{subequations}| environment in {\LaTeX} will increment the main equation counter even when there are no equation numbers displayed. If you forget that, you might write an article in which the equation numbers skip from (17) to (20), causing the copy editors to wonder if you've discovered a new method of counting. {\BibTeX} does not work by magic. It doesn't get the bibliographic data from thin air but from .bib files. If you use {\BibTeX} to produce a bibliography you must send the .bib files. {\LaTeX} can't read your mind. If you assign the same label to a subsubsection and a table, you might find that Table I has been cross referenced as Table IV-B3. {\LaTeX} does not have precognitive abilities. If you put a \verb|\label| command before the command that updates the counter it's supposed to be using, the label will pick up the last counter to be cross referenced instead. In particular, a \verb|\label| command should not go before the caption of a figure or a table. Do not use \verb|\nonumber| inside the \verb|{array}| environment. 
It will not stop equation numbers inside \verb|{array}| (there won't be any anyway) and it might stop a wanted equation number in the surrounding equation. \subsection{Some Common Mistakes}\label{SCM} \begin{itemize} \item The word ``data'' is plural, not singular. \item The subscript for the permeability of vacuum $\mu_{0}$, and other common scientific constants, is zero with subscript formatting, not a lowercase letter ``o''. \item In American English, commas, semicolons, periods, question and exclamation marks are located within quotation marks only when a complete thought or name is cited, such as a title or full quotation. When quotation marks are used, instead of a bold or italic typeface, to highlight a word or phrase, punctuation should appear outside of the quotation marks. A parenthetical phrase or statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) \item A graph within a graph is an ``inset'', not an ``insert''. The word alternatively is preferred to the word ``alternately'' (unless you really mean something that alternates). \item Do not use the word ``essentially'' to mean ``approximately'' or ``effectively''. \item In your paper title, if the words ``that uses'' can accurately replace the word ``using'', capitalize the ``u''; if not, keep using lower-cased. \item Be aware of the different meanings of the homophones ``affect'' and ``effect'', ``complement'' and ``compliment'', ``discreet'' and ``discrete'', ``principal'' and ``principle''. \item Do not confuse ``imply'' and ``infer''. \item The prefix ``non'' is not a word; it should be joined to the word it modifies, usually without a hyphen. \item There is no period after the ``et'' in the Latin abbreviation ``et al.''. \item The abbreviation ``i.e.'' means ``that is'', and the abbreviation ``e.g.'' means ``for example''. \end{itemize} An excellent style manual for science writers is \cite{b7}. \subsection{Authors and Affiliations} \textbf{The class file is designed for, but not limited to, six authors.} A minimum of one author is required for all conference articles. Author names should be listed starting from left to right and then moving down to the next line. This is the author sequence that will be used in future citations and by indexing services. Names should not be listed in columns nor group by affiliation. Please keep your affiliations as succinct as possible (for example, do not differentiate among departments of the same organization). \subsection{Identify the Headings} Headings, or heads, are organizational devices that guide the reader through your paper. There are two types: component heads and text heads. Component heads identify the different components of your paper and are not topically subordinate to each other. Examples include Acknowledgments and References and, for these, the correct style to use is ``Heading 5''. Use ``figure caption'' for your Figure captions, and ``table head'' for your table title. Run-in heads, such as ``Abstract'', will require you to apply a style (in this case, italic) in addition to the style provided by the drop down menu to differentiate the head from the text. Text heads organize the topics on a relational, hierarchical basis. For example, the paper title is the primary text head because all subsequent material relates and elaborates on this one topic. 
If there are two or more sub-topics, the next level head (uppercase Roman numerals) should be used and, conversely, if there are not at least two sub-topics, then no subheads should be introduced. \subsection{Figures and Tables} \paragraph{Positioning Figures and Tables} Place figures and tables at the top and bottom of columns. Avoid placing them in the middle of columns. Large figures and tables may span across both columns. Figure captions should be below the figures; table heads should appear above the tables. Insert figures and tables after they are cited in the text. Use the abbreviation ``Fig.~\ref{fig}'', even at the beginning of a sentence. \begin{table}[htbp] \caption{Table Type Styles} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Table}&\multicolumn{3}{|c|}{\textbf{Table Column Head}} \\ \cline{2-4} \textbf{Head} & \textbf{\textit{Table column subhead}}& \textbf{\textit{Subhead}}& \textbf{\textit{Subhead}} \\ \hline copy& More table copy$^{\mathrm{a}}$& & \\ \hline \multicolumn{4}{l}{$^{\mathrm{a}}$Sample of a Table footnote.} \end{tabular} \label{tab1} \end{center} \end{table} \begin{figure}[htbp] \centerline{\includegraphics{fig1.png}} \caption{Example of a figure caption.} \label{fig} \end{figure} Figure Labels: Use 8 point Times New Roman for Figure labels. Use words rather than symbols or abbreviations when writing Figure axis labels to avoid confusing the reader. As an example, write the quantity ``Magnetization'', or ``Magnetization, M'', not just ``M''. If including units in the label, present them within parentheses. Do not label axes only with units. In the example, write ``Magnetization (A/m)'' or ``Magnetization \{A[m(1)]\}'', not just ``A/m''. Do not label axes with a ratio of quantities and units. For example, write ``Temperature (K)'', not ``Temperature/K''. \section*{Acknowledgment} The preferred spelling of the word ``acknowledgment'' in America is without an ``e'' after the ``g''. Avoid the stilted expression ``one of us (R. B. G.) thanks $\ldots$''. Instead, try ``R. B. G. thanks$\ldots$''. Put sponsor acknowledgments in the unnumbered footnote on the first page. \section*{References} Please number citations consecutively within brackets \cite{b1}. The sentence punctuation follows the bracket \cite{b2}. Refer simply to the reference number, as in \cite{b3}---do not use ``Ref. \cite{b3}'' or ``reference \cite{b3}'' except at the beginning of a sentence: ``Reference \cite{b3} was the first $\ldots$'' Number footnotes separately in superscripts. Place the actual footnote at the bottom of the column in which it was cited. Do not put footnotes in the abstract or reference list. Use letters for table footnotes. Unless there are six authors or more give all authors' names; do not use ``et al.''. Papers that have not been published, even if they have been submitted for publication, should be cited as ``unpublished'' \cite{b4}. Papers that have been accepted for publication should be cited as ``in press'' \cite{b5}. Capitalize only the first word in a paper title, except for proper nouns and element symbols. For papers published in translation journals, please give the English citation first, followed by the original foreign-language citation \cite{b6}. \section{Introduction} This document is a model and instructions for \LaTeX. Please observe the conference page limits. \section{Ease of Use} \subsection{Maintaining the Integrity of the Specifications} The IEEEtran class file is used to format your paper and style the text. 
All margins, column widths, line spaces, and text fonts are prescribed; please do not alter them. You may note peculiarities. For example, the head margin measures proportionately more than is customary. This measurement and others are deliberate, using specifications that anticipate your paper as one part of the entire proceedings, and not as an independent document. Please do not revise any of the current designations. \section{Prepare Your Paper Before Styling} Before you begin to format your paper, first write and save the content as a separate text file. Complete all content and organizational editing before formatting. Please note sections \ref{AA}--\ref{SCM} below for more information on proofreading, spelling and grammar. Keep your text and graphic files separate until after the text has been formatted and styled. Do not number text heads---{\LaTeX} will do that for you. \subsection{Abbreviations and Acronyms}\label{AA} Define abbreviations and acronyms the first time they are used in the text, even after they have been defined in the abstract. Abbreviations such as IEEE, SI, MKS, CGS, ac, dc, and rms do not have to be defined. Do not use abbreviations in the title or heads unless they are unavoidable. \subsection{Units} \begin{itemize} \item Use either SI (MKS) or CGS as primary units. (SI units are encouraged.) English units may be used as secondary units (in parentheses). An exception would be the use of English units as identifiers in trade, such as ``3.5-inch disk drive''. \item Avoid combining SI and CGS units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity that you use in an equation. \item Do not mix complete spellings and abbreviations of units: ``Wb/m\textsuperscript{2}'' or ``webers per square meter'', not ``webers/m\textsuperscript{2}''. Spell out units when they appear in text: ``. . . a few henries'', not ``. . . a few H''. \item Use a zero before decimal points: ``0.25'', not ``.25''. Use ``cm\textsuperscript{3}'', not ``cc''.) \end{itemize} \subsection{Equations} Number equations consecutively. To make your equations more compact, you may use the solidus (~/~), the exp function, or appropriate exponents. Italicize Roman symbols for quantities and variables, but not Greek symbols. Use a long dash rather than a hyphen for a minus sign. Punctuate equations with commas or periods when they are part of a sentence, as in: \begin{equation} a+b=\gamma\label{eq} \end{equation} Be sure that the symbols in your equation have been defined before or immediately following the equation. Use ``\eqref{eq}'', not ``Eq.~\eqref{eq}'' or ``equation \eqref{eq}'', except at the beginning of a sentence: ``Equation \eqref{eq} is . . .'' \subsection{\LaTeX-Specific Advice} Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead of ``hard'' references (e.g., \verb|(1)|). That will make it possible to combine sections, add equations, or change the order of figures or citations without having to go through the file line by line. Please don't use the \verb|{eqnarray}| equation environment. Use \verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}| environment leaves unsightly spaces around relation symbols. Please note that the \verb|{subequations}| environment in {\LaTeX} will increment the main equation counter even when there are no equation numbers displayed. 
If you forget that, you might write an article in which the equation numbers skip from (17) to (20), causing the copy editors to wonder if you've discovered a new method of counting. {\BibTeX} does not work by magic. It doesn't get the bibliographic data from thin air but from .bib files. If you use {\BibTeX} to produce a bibliography you must send the .bib files. {\LaTeX} can't read your mind. If you assign the same label to a subsubsection and a table, you might find that Table I has been cross referenced as Table IV-B3. {\LaTeX} does not have precognitive abilities. If you put a \verb|\label| command before the command that updates the counter it's supposed to be using, the label will pick up the last counter to be cross referenced instead. In particular, a \verb|\label| command should not go before the caption of a figure or a table. Do not use \verb|\nonumber| inside the \verb|{array}| environment. It will not stop equation numbers inside \verb|{array}| (there won't be any anyway) and it might stop a wanted equation number in the surrounding equation. \subsection{Some Common Mistakes}\label{SCM} \begin{itemize} \item The word ``data'' is plural, not singular. \item The subscript for the permeability of vacuum $\mu_{0}$, and other common scientific constants, is zero with subscript formatting, not a lowercase letter ``o''. \item In American English, commas, semicolons, periods, question and exclamation marks are located within quotation marks only when a complete thought or name is cited, such as a title or full quotation. When quotation marks are used, instead of a bold or italic typeface, to highlight a word or phrase, punctuation should appear outside of the quotation marks. A parenthetical phrase or statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) \item A graph within a graph is an ``inset'', not an ``insert''. The word alternatively is preferred to the word ``alternately'' (unless you really mean something that alternates). \item Do not use the word ``essentially'' to mean ``approximately'' or ``effectively''. \item In your paper title, if the words ``that uses'' can accurately replace the word ``using'', capitalize the ``u''; if not, keep using lower-cased. \item Be aware of the different meanings of the homophones ``affect'' and ``effect'', ``complement'' and ``compliment'', ``discreet'' and ``discrete'', ``principal'' and ``principle''. \item Do not confuse ``imply'' and ``infer''. \item The prefix ``non'' is not a word; it should be joined to the word it modifies, usually without a hyphen. \item There is no period after the ``et'' in the Latin abbreviation ``et al.''. \item The abbreviation ``i.e.'' means ``that is'', and the abbreviation ``e.g.'' means ``for example''. \end{itemize} An excellent style manual for science writers is \cite{b7}. \subsection{Authors and Affiliations} \textbf{The class file is designed for, but not limited to, six authors.} A minimum of one author is required for all conference articles. Author names should be listed starting from left to right and then moving down to the next line. This is the author sequence that will be used in future citations and by indexing services. Names should not be listed in columns nor group by affiliation. Please keep your affiliations as succinct as possible (for example, do not differentiate among departments of the same organization). 
\subsection{Identify the Headings} Headings, or heads, are organizational devices that guide the reader through your paper. There are two types: component heads and text heads. Component heads identify the different components of your paper and are not topically subordinate to each other. Examples include Acknowledgments and References and, for these, the correct style to use is ``Heading 5''. Use ``figure caption'' for your Figure captions, and ``table head'' for your table title. Run-in heads, such as ``Abstract'', will require you to apply a style (in this case, italic) in addition to the style provided by the drop down menu to differentiate the head from the text. Text heads organize the topics on a relational, hierarchical basis. For example, the paper title is the primary text head because all subsequent material relates and elaborates on this one topic. If there are two or more sub-topics, the next level head (uppercase Roman numerals) should be used and, conversely, if there are not at least two sub-topics, then no subheads should be introduced. \subsection{Figures and Tables} \paragraph{Positioning Figures and Tables} Place figures and tables at the top and bottom of columns. Avoid placing them in the middle of columns. Large figures and tables may span across both columns. Figure captions should be below the figures; table heads should appear above the tables. Insert figures and tables after they are cited in the text. Use the abbreviation ``Fig.~\ref{fig}'', even at the beginning of a sentence. \begin{table}[htbp] \caption{Table Type Styles} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Table}&\multicolumn{3}{|c|}{\textbf{Table Column Head}} \\ \cline{2-4} \textbf{Head} & \textbf{\textit{Table column subhead}}& \textbf{\textit{Subhead}}& \textbf{\textit{Subhead}} \\ \hline copy& More table copy$^{\mathrm{a}}$& & \\ \hline \multicolumn{4}{l}{$^{\mathrm{a}}$Sample of a Table footnote.} \end{tabular} \label{tab1} \end{center} \end{table} \begin{figure}[htbp] \centerline{\includegraphics{fig1.png}} \caption{Example of a figure caption.} \label{fig} \end{figure} Figure Labels: Use 8 point Times New Roman for Figure labels. Use words rather than symbols or abbreviations when writing Figure axis labels to avoid confusing the reader. As an example, write the quantity ``Magnetization'', or ``Magnetization, M'', not just ``M''. If including units in the label, present them within parentheses. Do not label axes only with units. In the example, write ``Magnetization (A/m)'' or ``Magnetization \{A[m(1)]\}'', not just ``A/m''. Do not label axes with a ratio of quantities and units. For example, write ``Temperature (K)'', not ``Temperature/K''. \section*{Acknowledgment} The preferred spelling of the word ``acknowledgment'' in America is without an ``e'' after the ``g''. Avoid the stilted expression ``one of us (R. B. G.) thanks $\ldots$''. Instead, try ``R. B. G. thanks$\ldots$''. Put sponsor acknowledgments in the unnumbered footnote on the first page. \section*{References} Please number citations consecutively within brackets \cite{b1}. The sentence punctuation follows the bracket \cite{b2}. Refer simply to the reference number, as in \cite{b3}---do not use ``Ref. \cite{b3}'' or ``reference \cite{b3}'' except at the beginning of a sentence: ``Reference \cite{b3} was the first $\ldots$'' Number footnotes separately in superscripts. Place the actual footnote at the bottom of the column in which it was cited. Do not put footnotes in the abstract or reference list. 
Use letters for table footnotes. Unless there are six authors or more give all authors' names; do not use ``et al.''. Papers that have not been published, even if they have been submitted for publication, should be cited as ``unpublished'' \cite{b4}. Papers that have been accepted for publication should be cited as ``in press'' \cite{b5}. Capitalize only the first word in a paper title, except for proper nouns and element symbols. For papers published in translation journals, please give the English citation first, followed by the original foreign-language citation \cite{b6}. \section{Introduction} Deep neural networks (DNNs) and convolutional neural networks (CNNs) have been used widely in various applications such as image classification, semantic segmentation, and object detection\cite{krizhevsky2017imagenet,liu2019recent, galvez2018object}. Training high-performance models is not an easy task, because it requires a large amount of data, powerful computational resources (GPUs), and efficient algorithms. Considering the expertise, cost, and time required for training models, they are considered as a kind of intellectual property that should be protected. \par There are two approaches to intellectual property protection of models: ownership verification and access control. The difference between these two approaches is that the former aims to identify the ownership of the models, but the latter aims to protect models from unauthorized access\cite{kiya2022overview}. The ownership verification methods are inspired by watermarking, where a watermark is embedded in the models and the embedded watermark is used to verify the ownership of the models\cite{uchida2017embedding,zhang2018protecting,darvish2019deepsigns,NEURIPS2019_75455e06,xue2021dnn,maung2021piracy}. However, ownership verification does not have the ability to restrict the execution of the models. Thus, in principle, attackers can freely exploit the models for their own benefit, or use it in adversarial attacks\cite{szegedy2013intriguing}. Therefore, in this paper, we focus on access control, which aims to prevent models from unauthorized access.\par A number of access control methods have been proposed as a model protection method. By encrypting images or feature maps with a secret key, a stolen model cannot be used to its full capacity without a correct secret key \cite{maungmaung_kiya_2021,maung2021protection,ito2021access}. However, these methods have never been applied to object detection models. In this paper, an access control method with encrypted feature maps is applied to object detection models for the first time, and the effectiveness of the proposed method is confirmed in an experiment. \begin{figure}[t] \centering \includegraphics[bb=0 0 750 357,scale=0.3]{figure/access_control.png} \caption{Overview of access control} \label{access_control} \end{figure} \section{proposed method} \subsection{Overview} An overview of the access control to protect the trained models from unauthorized access is shown in Fig. \ref{access_control}. The protected models are trained with the secret key $K$. When authorized users enter test images and the correct key $K$ into the protected models, the results are equivalent to the models in the unprotected state (the access control is not assumed). 
In contrast, when unauthorized users without the key $K$ enter only test images or test images and a wrong key $K'$ into the protected models, lower performance results are provided.\par As access control methods using a secret key, the input image encryption method \cite{maungmaung_kiya_2021} and the feature map encryption method \cite{ito2021access} have been proposed. Maung's method\cite{maungmaung_kiya_2021} focuses on access control of image classification models, where input images are divided into blocks and encrypted with a secret key using methods such as pixel shuffling, bit flipping, and format-preserving Feistel-based encryption (FFX)\cite{bellare2010addendum}. These encrypted images are used as training and test images. Since this method encrypts the images block by block, it changes the spatial information and cannot be used to protect the object detection models described below. \par Ito's method \cite{ito2021access} focuses on access control of semantic segmentation models, where models are trained and tested by randomly permuting the channels of feature maps selected by a secret key. This encryption method is spatially invariant. This property was confirmed to be very important for some applications such as semantic segmentation \cite{ito2021access}. Although this method has been validated for semantic segmentation, it has not been validated for object detection models. Therefore, in this paper, we propose an access control method for object detection models based on this method. \begin{figure*}[htb] \centering \includegraphics[bb=0 0 1327 305,scale=0.35]{figure/model.png} \caption{Architecture of object detection model (SSD300)} \label{proposed_model} \end{figure*} \begin{figure}[tb] \centering \includegraphics[bb=0 0 699 231,scale=0.35]{figure/perm_layer.png} \caption{Feature map encryption\cite{ito2021access}} \label{transform} \end{figure} \subsection{Encryption Method} There are multiple feature maps in CNNs as shown in Fig. \ref{proposed_model}. In the proposed method, selected feature maps are transformed by using a secret key in accordance with the procedure of learnable image encryption\cite{maungmaung_kiya_2021,adv-def}. Below is the procedure of the encryption, where $x$ is a selected feature map with a dimension of ($c \times h \times w$), $c$ is the number of channels, $h$ is the height, and $w$ is the width of the feature map. \begin{itemize} \item[1)]Generate a random vector with a size of $c$ using a secret key as in (1). \begin{equation} [\alpha_1,.,\alpha_i,\alpha_{i'},...,\alpha_c],\alpha_i \in \left\{1,...,c\right\} \end{equation} where{ $\alpha_i \ne \alpha_{i'}$ }, if $i\ne i'$.\par \item[2)]Replace each element $x(i,j,k)$ of $x$, $i\in \left\{1,...,c\right\}, j \in \left\{1,...,h\right\}, k \in \left\{1,...,w\right\}$ with $x(\alpha_i,j,k)$ so that $x$ is transformed into a feature map $x'$. Note that elements of $x'$, $x'(i,j,k)$ is equal to $x(\alpha_i,j,k)$. \end{itemize} This encryption is a spatial-invariant operation, so the spatial information of feature maps can be maintained (see Fig. \ref{transform}). This property is very important in object detection tasks, which predict position and classes of objects. \subsection{Model Training and Testing} In the proposed method, the previously mentioned transformation method is applied to selected feature maps in an object detection model at each iteration for a training model. 
SSD300\cite{liu2016ssd} based on VGG16\cite{simonyan2014very}, which was pretrained on the ILSVRC CLS-LOC dataset\cite{russakovsky2015imagenet} is used as an object detection model in this paper, where SSD300 has 11 feature maps as illustrated in Fig. \ref{proposed_model}. \par In testing the trained model, authorized users have the same key that is used for the training. When Authorized users apply query images to the model, they transform the same feature maps that are selected for the training with the key. If unauthorized users without the correct key steal the protected model, we assume that they transform the feature maps with an incorrect key or use the model without the transform. \subsection{Requirements of Protected Models} Protected models should meet the following requirements. \begin{itemize} \item It provides almost the same performance as that of models trained with plain images to authorized users with the secret key. \item It provides a degraded performance to unauthorized users without the correct key. \end{itemize} \section{experiments and results} \subsection{Setup} We used the PASCAL visual object classes (VOC) challenge 2007 \cite{everingham2010pascal}, and 2012 \cite{everingham2015pascal} trainval datasets for training, and the PASCAL VOC 2007 test dataset for testing. For data augmentation, the random sample crop, horizontal flip, and some photometric distortions described in \cite{liu2016ssd} were used for training models. In addition, due to the restrictions of SSD300 shown in Fig. \ref{proposed_model}, input images were resized to $300\times300$ pixels.\par Models were trained by using a stochastic gradient descent (SGD) optimizer with an initial learning rate of $10^{-\:3}$, a momentum value of 0.9, a weight decay value of 0.0005, and a batch size of 32. Models were also trained with a learning rate of $10^{-\:3}$ for 60k iterations, then continue training for 20k iterations with $10^{-\:4}$ and 40k iterations with $10^{-\:5}$. The overall objective loss function is a weighted sum of the localization loss and the confidence loss. In this paper, the confidence loss was the cross-entropy loss over multiple classes confidences, and the localization loss was the Smooth L1 loss between the predicted position and the ground truth position. 
\begin{figure*}[t]
\scalebox{0.8}[0.8]{
\begin{tabular*}{50mm}{@{\extracolsep{\fill}}c|c|ccc}
Ground Truth&Baseline&Correct ($K$)&Plain&Incorrect ($K'$) \\
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 559 453,scale=0.18]{figure/1/gt.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 556 453,scale=0.18]{figure/1/baseline.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 452,scale=0.18]{figure/1/correct.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 559 453,scale=0.18]{figure/1/plain.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 452,scale=0.18]{figure/1/Incorrect.png} \end{minipage}\\
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 369,scale=0.18]{figure/2/gt.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 558 368,scale=0.18]{figure/2/baseline.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 559 369,scale=0.18]{figure/2/correct.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 561 383,scale=0.18]{figure/2/plain.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 561 370,scale=0.18]{figure/2/Incorrect.png} \end{minipage}\\
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 414 546,scale=0.18]{figure/3/gt.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 427 546,scale=0.18]{figure/3/baseline.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 405 548,scale=0.18]{figure/3/correct.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 409 550,scale=0.18]{figure/3/plain.png} \end{minipage} &
\begin{minipage}{4truecm} \centering \includegraphics[bb=0 0 410 564,scale=0.18]{figure/3/Incorrect.png} \end{minipage}\\
\end{tabular*}
}
\caption{Examples of experimental results (Model-4)}
\label{detection_result}
\end{figure*}
\subsection{Detection Performance}
Mean average precision (mAP) \cite{liu2016ssd}, which takes values in the range $[0,1]$, was used as the metric for evaluating detection performance; the closer a mAP value is to 1, the higher the accuracy. In the experiment, a selected feature map was transformed with a key $K$ in accordance with the procedure in Sec. II. In Table \ref{table:result}, Correct ($K$) indicates that the selected feature map was transformed with the same key $K$ as in training, Model-1 means that feature map 1 was selected for encryption (and similarly for Model-2 through Model-11), and Baseline indicates that training and testing were performed without any encryption. Fig. \ref{detection_result} shows examples of experimental results where Model-4 was used. \par
From Table \ref{table:result} and Fig. \ref{detection_result}, the proposed method provides almost the same prediction results as the Baseline when the feature map is transformed using the correct key at test time.
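As a reminder of how this metric is computed, the sketch below shows the 11-point interpolated average precision commonly used with the PASCAL VOC 2007 test protocol; mAP is the mean of the per-class values. The choice of the 11-point protocol and the function itself are illustrative assumptions, since the paper simply follows the evaluation of \cite{liu2016ssd}, and the matching of detections to ground truth by IoU is assumed to have been done beforehand.
\begin{verbatim}
import numpy as np

def voc07_ap(recall, precision):
    # 11-point interpolation: average the best precision attained at
    # recall levels 0.0, 0.1, ..., 1.0.
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        mask = recall >= t
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap

# mAP = mean of voc07_ap over the 20 VOC object classes.
\end{verbatim}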
\begin{table}[bt]
\caption{Detection accuracy (mAP) of proposed models}
\label{table:result}
\centering
\begin{tabular}{|c|ccc|}
\hline
Selected feature map & Correct ($K$) & Plain & Incorrect ($K'$) \\ \hline
Model-1& 0.7244 & 0.1363 & 0.0421 \\
Model-2& \textbf{0.7611} & \textbf{0.0091} & \textbf{0.0180} \\
Model-3& 0.7475 & 0.0091 & 0.0078 \\
Model-4& \textbf{0.7611} & \textbf{0.0023} & \textbf{0.0043} \\
Model-5& \textbf{0.7587} & \textbf{0.1672} & \textbf{0.1624} \\
Model-6& \textbf{0.7617} & \textbf{0.1732} & \textbf{0.1672} \\
Model-7& \textbf{0.7695} & \textbf{0.1768} & \textbf{0.1750} \\
Model-8& 0.7677 & 0.3529 & 0.3415\\
Model-9& 0.7705 & 0.5767 & 0.5678 \\
Model-10& 0.7705 & 0.7177 & 0.7027 \\
Model-11& 0.7512 & 0.7314 & 0.7252 \\ \hline
Baseline& \multicolumn{3}{c|}{0.7690}\\ \hline
\end{tabular}
\end{table}
\subsection{Robustness against Unauthorized Access}
Two types of unauthorized access were considered in the experiment. Plain in Table \ref{table:result} represents the case in which an unauthorized user without the key applied query images to the protected models without transforming the selected feature map. Incorrect ($K'$) in Table \ref{table:result} represents the case in which an unauthorized user without the key applied query images to the protected models after transforming the selected feature map with a randomly generated key $K'$. The results for Incorrect ($K'$) are average values over 100 tests with random keys. \par
From the table, Model-1 to Model-7 provided a low detection accuracy for both Plain and Incorrect ($K'$). On the other hand, when the feature map of a deeper layer was transformed, the resistance to unauthorized access was lost. We consider that the reason for this lies in the structure of SSD300. In order to detect objects of various scales, SSD performs detection using features from multiple layers (see Fig. \ref{proposed_model}). Therefore, for example, in Model-9, the detection branches attached to feature maps 4, 7, and 8 can use the same features as the Baseline. In other words, the deeper the encrypted feature map is, the more features identical to those of the Baseline are available for detection.\par
From Fig. \ref{detection_result}, the detection performance degraded significantly when the model was used illegally. Accordingly, the proposed models were robust enough against unauthorized access.
\begin{table}[tb]
\caption{Detection accuracy (mAP) of models \\ with encrypted input images}
\label{table:SHF}
\centering
\scalebox{0.9}{
\begin{tabular}{|c|c|ccc|}
\hline
method & block size & Correct ($K$) & Plain & Incorrect ($K'$) \\ \hline
\multirow{5}{*}{pixel shuffling (SHF)} & 1 & 0.7710 & 0.7598 & 0.7603 \\
&4& 0.7154 & 0.5745 & 0.3883 \\
&12& 0.4891 & 0.1976 & 0.0910 \\
&20& 0.0083 & 0.0086 & 0.0065 \\
&60& 0.1284 & 0.0480 & 0.0416 \\ \hline
\multicolumn{2}{|c|}{Proposed (Model-4)}& \textbf{0.7611} & \textbf{0.0023} & \textbf{0.0043} \\ \hline
\multicolumn{2}{|c|}{Baseline} & \multicolumn{3}{c|}{0.7690}\\ \hline
\end{tabular}
}
\end{table}
\subsection{Comparison with Encryption of Input Images}
The proposed method was compared with a method that protects models with encrypted input images, which was proposed for image classification models \cite{maungmaung_kiya_2021}. In that method, there are three block-wise encryption methods for input images: pixel shuffling, bit flipping, and format-preserving Feistel-based encryption (FFX) \cite{bellare2010addendum}. \par
In this paper, pixel shuffling (SHF) with a block size of 1, 4, 12, 20, or 60 was applied to input images, and the encrypted images were used for training and testing.
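To clarify the trade-off discussed next, a minimal sketch of block-wise pixel shuffling is given below: one key-derived permutation is applied to the pixel positions inside every $B\times B$ block. The NumPy realization and the per-block details are illustrative assumptions about the method of \cite{maungmaung_kiya_2021}; the point is that larger blocks scramble more of the spatial structure.
\begin{verbatim}
import numpy as np

def pixel_shuffle(img, block, key):
    # img: (H, W, C) image array; block: block size B; key: secret key.
    h, w, c = img.shape
    rng = np.random.default_rng(key)
    perm = rng.permutation(block * block)  # one permutation, all blocks
    out = img.copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            patch = img[y:y + block, x:x + block].reshape(-1, c)
            out[y:y + block, x:x + block] = \
                patch[perm].reshape(block, block, c)
    return out
\end{verbatim}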
\par The experimental conditions were the same as in \textit{A} of Sec. III. From Table \ref{table:SHF}, the detection accuracy was significantly lower than that of the proposed method for almost all block sizes. When the block size was small, the detection accuracy was high, but the resistance to unauthorized access was also degraded, so the models were not protected \cite{maungmaung_kiya_2021}. In contrast, when the block size was large, the resistance to unauthorized access was stronger, but the detection accuracy was greatly degraded. Therefore, the conventional method with encrypted input images is not effective for object detection models.
\section{Conclusion}
We proposed an access control method that uses encrypted feature maps for object detection models for the first time. In the experiments, the proposed access control method was demonstrated not only to provide a high detection accuracy to authorized users but also to be robust enough against two types of unauthorized access.
\section*{Acknowledgement}
This study was partially supported by JSPS KAKENHI (Grant Number JP21H01327).
\bibliographystyle{IEEEtran}
{ "timestamp": "2022-03-11T02:11:21", "yymm": "2202", "arxiv_id": "2202.00265", "language": "en", "url": "https://arxiv.org/abs/2202.00265" }
\section{Introduction}
\label{sec:intro}
Given a linear representation $(V,\rho)$ of a compact group $G$, the \emph{symmetry classes} (or isotropy classes) are defined as the conjugacy classes of the symmetry subgroups of $G$ (see \cite{Bredon1960} for more details). Explicitly finding the isotropy classes of a given representation has always been a difficult task, but it is known that there exist only finitely many of them. This fact was first conjectured by Montgomery \cite[problem 45]{eilenberg1949} and proved by Mostow in the case of the action of a compact Lie group on a compact manifold \cite{Mostow1957}, using the result of Floyd \cite{floyd1957}.

The explicit calculation of isotropy classes is an interesting problem not only in mathematics but also in mechanics, because the symmetry classes are connected to material symmetries. The problem of classifying materials according to their symmetries goes back to the work of Lord Kelvin. Since then, many authors have devoted great effort to formulating the problem, especially for $\SO(3)$ and $\OO(3)$ tensorial representations (used to model constitutive laws in mechanics). Surprisingly, it was only in 1996 that Forte and Vianello~\cite{forte1996} definitively solved the problem for the \emph{Elasticity tensor} (a fourth-order tensor). In that case, eight isotropy classes were found. Based on this method, the problem was solved for other constitutive laws. For instance, 16 isotropy classes were obtained for the \emph{Piezoelectricity tensor} (a third-order tensor)~\cite{Nye1985,ZB1994,Wel2004,New2005,ZTP2013}. Similar results were obtained for other constitutive tensor spaces \cite{forte1997symmetry,le2011number}.

However, the Forte-Vianello approach requires rather fine calculations and reasoning to establish the classification. This complexity makes its application difficult in more involved situations, such as constitutive tensors of order greater than 4 or coupled constitutive laws involving a family of tensors~\cite{Jur1974,EM1990}. A systematic way to calculate the isotropy classes was proposed by Chossat and Guyard in~\cite{Chossat1994}, using a binary operation between conjugacy classes. This operation was named the \emph{clips} operation in~\cite{Olive2013,Olive2014,Olive2019,Olive2021}, where it was generalized and used to determine isotropy classes for reducible representations.

Clips tables for $\SO(3)$-subgroups were obtained first in~\cite{Chossat1994}. The problem is more complicated for $\OO(3)$-subgroups, since there exist three types of subgroups (see \autoref{sec:O3-subgroups}), where type I corresponds to $\SO(3)$-subgroups. Clips tables for $\OO(3)$-subgroups were calculated in~\cite{Olive2019}, except for the ones between type II and type III subgroups. In this paper, we complete these results by providing the missing tables, which have never been obtained before. The clips tables \ref{tab:Clips} and \ref{tab:Clips2} given in this paper, together with the tables provided in~\cite{Olive2019}, furnish an exhaustive list of clips tables between $\OO(3)$-subgroups. Since the isotropy classes are known for the irreducible representations of $\OO(3)$~\cite{GSS1988,CLM1990}, this allows one, in practice, to determine the isotropy classes of any reducible representation of $\OO(3)$. As an original application, we used these results to determine the 25 isotropy classes of the full 3D Piezoelectricity law, in which three constitutive tensors are involved: the Elasticity, the Piezoelectricity and the Permittivity tensors.
\subsection*{Organization of the paper}
The paper is organized as follows. In \autoref{sec:Iso_classes}, we recall basic material on symmetry classes and we introduce the \emph{clips operation}. In \autoref{sec:main_result}, we provide (in tables \ref{tab:Clips} and \ref{tab:Clips2}) the list of clips between $\OO(3)$-subgroups of type II and III. The proofs and calculations for these clips are given in \autoref{sec:proof_clips}. The application to the full Piezoelectricity law is detailed in \autoref{sec:piezoelectricity}, where 25 symmetry classes are found. To be self-contained, the classification of closed $\OO(3)$-subgroups, up to conjugacy, is recalled in \autoref{sec:O3-subgroups}.
\section{Isotropy classes and Clips operation}
\label{sec:Iso_classes}
In this section, we recall the notions of isotropy groups and isotropy classes of a group representation, and we introduce the clips operation for sets of conjugacy classes. We consider a linear representation $(V,G,\rho)$ of a group $G$ on a finite dimensional vector space $V$, \textit{i.e.}, a group morphism
\begin{equation*}
\rho:\ G\ \rightarrow\ \mathrm{GL}(V)
\end{equation*}
with $\mathrm{GL}(V)$ the group of invertible linear mappings of $V$ into itself. The \emph{isotropy (or symmetry) group} of $\vv\in V$ is defined as
\begin{equation*}
G_{\vv} := \set{g\in G; \; \rho(g)\vv=\vv},
\end{equation*}
and the \emph{isotropy (or symmetry) class} of $\vv$ is the conjugacy class $[G_{\vv}]$ of its isotropy group
\begin{equation*}
[G_{\vv}] := \set{gG_{\vv}g^{-1};\; g\in G}.
\end{equation*}
For a given subgroup $H\subset G$, the conjugacy class $[H]$ is an \emph{isotropy class} of the linear representation $(V,G,\rho)$ if $H$ is conjugate to some isotropy group $G_{\vv}$. Let us then define $\J(V)$ to be the set of all isotropy classes of the representation $V$
\begin{equation*}
\J(V) := \set{[G_{\vv}];\; \vv\in V}.
\end{equation*}
For instance, for the standard representation of $\SO(3)$ on $V=\RR^3$, the isotropy group of a non-zero vector $\vv$ is the group of rotations around $\vv$, which is conjugate to $\SO(2)$, while the isotropy group of $\vv=0$ is $\SO(3)$ itself, so that $\J(\RR^3)=\set{[\SO(2)],[\SO(3)]}$.
As mentioned in the introduction, the finiteness of $\J(V)$ is proven when $G$ is a compact Lie group and $V$ is a compact manifold (see, for instance, \cite{Mostow1957}, \cite{Bredon1960}, \cite{Mann1962}). The result for a linear representation of a compact Lie group on a vector space follows immediately, since an invariant scalar product then exists, and the restriction of the representation to the unit sphere, which is a compact manifold, has the same isotropy classes as the full representation.
\begin{thm}
Let $(V,G)$ be a linear representation of a compact Lie group $G$ on a vector space $V$. Then there exists a finite number of isotropy classes
\begin{equation*}
\J(V)=\set{[H_1],\dotsc, [H_n]}.
\end{equation*}
\end{thm}
\begin{rem}
The set of conjugacy classes $[H]$ of closed subgroups of a compact group is endowed with a partial order relation (see~\cite{Bredon1960}), given by
\begin{equation*}
[H]\preceq [K]\iff \exists g\in G,\quad gHg^{-1}\subset K.
\end{equation*}
The set $\J(V)$ always has a smallest element, say $[H_1]$, and a biggest element, say $[H_n]$. Hence,
\begin{equation*}
[H_1]\preceq [H_k] \preceq [H_n], \qquad k=1,\dotsc,n.
\end{equation*}
\end{rem}
Given this finiteness result, it is important to explicitly calculate the isotropy classes of a given representation. In the specific case of $\SO(3)$ linear representations, Michel~\cite{Michel1980} obtained the isotropy classes for the \emph{irreducible representations}, and those for the $\OO(3)$ irreducible representations were obtained by Ihrig and Golubitsky~\cite{Ihrig1984}.
Thereafter, Chossat and Guyard \cite{Chossat1994} obtained the isotropy classes of a direct sum of two irreducible $\SO(3)$ representations. To do so, they introduced a binary operation on conjugacy classes of $\SO(3)$-subgroups that allows one to compute the set of isotropy classes $\J(V)$ of a direct sum $V = V_1\oplus V_2$ of linear representations of a group $G$, provided the isotropy classes of each individual irreducible representation are known. This operation generalizes to a binary operation on all conjugacy classes of subgroups of a given group $G$ and is defined as follows.
\begin{defn}\label{def:clips_operation}
For two subgroups $H_1$ and $H_2$ of a group $G$, we define the \emph{clips operation} of the conjugacy classes $[H_1]$ and $[H_2]$ as the subset of conjugacy classes
\begin{equation*}
[H_1]\circledcirc [H_2]:=\set{[H_1\cap gH_2g^{-1}],\quad g\in G}.
\end{equation*}
This definition immediately extends to two families (finite or infinite) $\mathcal{F}_1$ and $\mathcal{F}_2$ of conjugacy classes
\begin{equation*}
\mathcal{F}_1 \circledcirc \mathcal{F}_2=\underset{[H_i]\in \mathcal{F}_i}{\bigcup} [H_1]\circledcirc [H_2].
\end{equation*}
\end{defn}
For instance, for two copies of $\ZZ_2\subset \SO(3)$, the intersection $\ZZ_2\cap g\ZZ_2 g^{-1}$ equals $\ZZ_2$ when $g$ preserves the rotation axis and is trivial otherwise, so that $[\ZZ_2]\circledcirc [\ZZ_2]=\set{[\1],[\ZZ_2]}$.
The following lemma states the central result that serves in finding the isotropy classes of a reducible representation once we know the isotropy classes of the irreducible ones (see~\cite{Olive2019} for a proof).
\begin{lem}\label{lem:direct_sum}
Let $V_1$ and $V_2$ be two linear representations of $G$. Then the set $\J(V_1\oplus V_2)$ of isotropy classes of the diagonal representation of $G$ on $V_1\oplus V_2$ is given by
\begin{equation*}
\J(V_1\oplus V_2)=\J(V_1)\circledcirc\J(V_2).
\end{equation*}
\end{lem}
Using this result, one can find the isotropy classes of any representation $V$ provided we know
\begin{enumerate}
\item a stable decomposition $V=W_1\oplus \dotsc \oplus W_r$;
\item the isotropy classes of each representation $W_k$;
\item the clips table $[H_1]\circledcirc [H_2]$ for all subgroups of $G$.
\end{enumerate}
Lemma~\ref{lem:direct_sum} has already been applied to find the isotropy classes of some reducible $\OO(3)$ representations, which are essentially the standard $\OO(3)$ representations on \emph{odd order} tensor spaces on $\RR^3$ (see~\cite{Olive2014,Olive2019}). To extend this to all reducible $\OO(3)$ representations, new clips tables of closed $\OO(3)$-subgroups have to be established.
\section{Clips operation between closed $\OO(3)$-subgroups}
\label{sec:main_result}
As recalled in~\autoref{sec:O3-subgroups} (see also~\cite{Ihrig1984} for details), any closed $\OO(3)$-subgroup is either of type I, type II or type III. More specifically:
\begin{itemize}
\item Every type I closed $\OO(3)$-subgroup is conjugate to one of the following list:
\begin{equation}\label{eq:List_typeI}
\SO(3),\quad \OO(2),\quad \SO(2), \quad \DD_n, \quad \ZZ_n, \quad \tetra, \quad \octa, \quad \ico, \quad \text{or} \quad \1.
\end{equation}
\item Every type II closed $\OO(3)$-subgroup is conjugate to $H\oplus \ZZ_2^c$ with $H$ a type I subgroup and $\ZZ_2^c:=\set{-\id,\id}$.
\item Every type III closed $\OO(3)$-subgroup is conjugate to one of the following list:
\begin{equation}\label{eq:list_typeIII}
\ZZ_{2n}^{-},\quad \Dnz,\quad \Dnd,\quad \octa^-,\quad \OO(2)^-.
\end{equation}
\end{itemize}
Each closed subgroup from the lists~\eqref{eq:List_typeI} and~\eqref{eq:list_typeIII} is defined in \autoref{sec:O3-subgroups}.
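To fix ideas with the smallest instances (an illustration; it follows from the constructions recalled in \autoref{sec:O3-subgroups}): the group $\ZZ_2=\set{\id,\vR(\ee_3,\pi)}$, generated by the half-turn around $\ee_3$, is of type I; the group
\begin{equation*}
\ZZ_2\oplus \ZZ_2^c=\set{\id,\ \vR(\ee_3,\pi),\ -\id,\ -\vR(\ee_3,\pi)}
\end{equation*}
is of type II, since it contains the inversion $-\id$; and the group
\begin{equation*}
\ZZ_2^-=\set{\id,\ -\vR(\ee_3,\pi)},
\end{equation*}
which contains the reflection $-\vR(\ee_3,\pi)$ through the $xy$-plane but not $-\id$ itself, is of type III.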
The clips tables have already been established for two type I subgroups (see~\cite[Table 1]{Olive2019}) and for two type III subgroups (see~\cite[Table 2]{Olive2019}). The clips operation between a type I and a type III subgroup is deduced from~\cite[Lemma 5.4]{Olive2019}. The clips between a type I and a type II subgroup, or between two type II subgroups, are deduced from the clips between two type I subgroups; see the remark below.
\begin{rem}
Note that:
\begin{enumerate}
\item \label{rem:2typeII} If $H_1$ and $H_2$ are two subgroups of $\SO(3)$ of type I, then
\begin{equation*}
[H_1]\circledcirc [H_2\oplus \ZZ_2^c]=[H_1]\circledcirc [H_2] \text{ and } [H_1\oplus \ZZ_2^c]\circledcirc [H_2\oplus \ZZ_2^c]=([H_1]\circledcirc [H_2])\oplus \ZZ_2^c.
\end{equation*}
\item For every subgroup $H$ of $\SO(3)$, we have
\begin{equation*}
[H]\circledcirc [\SO(3)\oplus \ZZ_2^c]=\set{[H]} \text{ and } [\1]\circledcirc [H]=\set{[\1]}.
\end{equation*}
\end{enumerate}
\end{rem}
We now provide the clips operations between type II and type III subgroups.
\begin{thm}\label{thm:Clips_TypeII_III}
Let $\Gamma$ be a type III subgroup of $\OO(3)$ and $H\oplus \ZZ_2^c$ a type II subgroup of $\OO(3)$. Then the clips operation $[\Gamma]\circledcirc [H\oplus \ZZ_2^c]$ always contains the isotropy class $[\1]$, and all remaining elements of $[\Gamma]\circledcirc [H\oplus \ZZ_2^c]$ are given in Table~\ref{tab:Clips} and Table~\ref{tab:Clips2}, where we have used the notations
\begin{equation*}
d:=\mathrm{gcd}(m,n),\quad d_k:=\mathrm{gcd}(k,n),\quad d'_k:=\mathrm{gcd}(k,m),
\end{equation*}
\begin{equation*}\label{eq:Def_Gamma_n}
\mathcal{Z}(n):=\begin{cases} [\ZZ_{2}] & \text{ if } n \text{ even} \\ [\ZZ_{2}^-] & \text{ else} \\ \end{cases}, \quad \Gamma(m,n):=\begin{cases} [\DD_{2}],[\DD_2^z] & \text{if } m \text{ and } n \text{ even } \\ [\DD_{2}^z] &\text{if } m \text{ even and } n \text{ odd } \\ [\ZZ_{2}] &\text{if } m \text{ odd and } n \text{ even } \\ [\ZZ_{2}^-] &\text{if } m \text{ and } n \text{ odd } \end{cases}
\end{equation*}
and
\begin{equation*}
\mathsf{L}_{\octa}:=[\1],[\ZZ_2],[\ZZ_2^-],\Gamma(n,3),[\ZZ_{d_3}],[\DD_{d_3}],[\DD_{d_3}^z].
\end{equation*}
\end{thm}
\begin{rem}
In the tables, we have used the conventions
\begin{equation*}
[\ZZ_1]:=[\1],\quad [\DD_1]=[\ZZ_2],\quad [\DD_1^z]=[\ZZ_2^-], \quad [\DD_2^z]=[\DD_2^d].
\end{equation*}
\end{rem}
\begin{rem}
Note that in \cite[figure 3]{Olive2019} it was stated that $\DD_2^z\subset\DD_{2n}^d$ when $n$ is odd. However, this inclusion holds for all $n$.
\end{rem} \begin{table}[H] \footnotesize \centering \begin{tabular}{|c||>{\centering}p{3.8cm}|>{\centering}p{3.5cm}|c|} \toprule $\circledcirc$ & $[\ZZ_{2n}^-]$ & $[\Dnz]$ & $[\Dnd]$ \\ \hline $[\ZZ_m \oplus \ZZ_2^c]$ & \begin{tabular}{l} $ [\ZZ_{2d}^-] \text{ if } \frac{m}{d} \text{ even}$ \\ \\ \hline \\ $ [\ZZ_{d}] \quad \text{\footnotesize{ else}}$ \end{tabular} & \begin{tabular}{l} $ [\ZZ_d] \text{\footnotesize{ if }} m \text{\footnotesize{ odd}}$ \\ \\ \hline \\ $ [\ZZ_{d}],[\ZZ_2^-] \text{\footnotesize{ else}}$ \end{tabular} & \begin{tabular}{l} \\ $[\ZZ_2],[\ZZ_2^-],[\ZZ_{2d}^-] $ \footnotesize{ if $\frac{m}{d}$ even} \\ \\ \hline \\ $[\ZZ_2],[\ZZ_2^-], [\ZZ_d] $ \begin{tabular}{l} \footnotesize{if $m$ even} \\ \footnotesize{and $\frac{m}{d}$ odd} \end{tabular} \\ \\ \hline \\ $[\ZZ_{d}]$ \qquad \qquad \footnotesize{ else} \rule[-0.5cm]{0cm}{0cm} \end{tabular} \\ \hline $[\DD_m \oplus \ZZ_2^c]$ & \begin{tabular}{l} $[\ZZ_{2d}^-],\mathcal{Z}(n)$ \footnotesize{if $\frac{m}{d}$ even} \\ \\ \hline \\ $[\ZZ_{d}],\mathcal{Z}(n)$ \footnotesize{else} \end{tabular} & \begin{tabular}{l} \\ \begin{tabular}{l} $[\ZZ_d],[\ZZ_2^-]$ \\ $[\DD_{d_2}^z],[\DD_d^z]$ \end{tabular} \footnotesize{if $m$ even} \\ \\ \hline \\ \begin{tabular}{l} $[\ZZ_d],[\ZZ_2^-]$ \\ $[\ZZ_{d_2}],[\DD_d^z]$ \end{tabular}\footnotesize{else} \rule[-0.6cm]{0cm}{0cm} \end{tabular} & \begin{tabular}{l} \\ \begin{tabular}{l} $\Gamma(m,n),[\ZZ_2]$ \\ $[\ZZ_2^-],[\ZZ_{2d}^-],[\DD_{2d}^d] $ \end{tabular} \footnotesize{ if $\frac{m}{d}$ even} \\ \\ \hline \\ \begin{tabular}{l} $\Gamma(m,n),[\ZZ_2],[\ZZ_2^-]$ \\ $[\ZZ_d],[\DD_{d}],[\DD_{d}^z] $ \end{tabular} \begin{tabular}{l} \footnotesize{if $m$ even} \\ \footnotesize{and $\frac{m}{d}$ odd} \end{tabular} \\ \\ \hline \\ \ \ $\Gamma(m,n),[\ZZ_d],[\DD_{d}],[\DD_{d}^z]$ \quad \footnotesize{ else} \rule[-0.5cm]{0cm}{0cm} \end{tabular} \\ \hline $[\octa \oplus \ZZ_2^c]$ & \begin{tabular}{l} \\ \begin{tabular}{l} $[\ZZ_2]$ \\ $[\ZZ_{d_3}],[\ZZ_4]$\end{tabular} \footnotesize{if $4|n$} \\ \\ \hline \\ \begin{tabular}{l} $[\ZZ_2]$ \\ $[\ZZ_{d_3}],[\ZZ_4^-]$ \end{tabular} \begin{tabular}{l} \footnotesize{if $n$ even} \\ \footnotesize{and $4\nmid n$} \end{tabular} \\ \\ \hline \\ \ \ $[\ZZ_2^-],[\ZZ_{d_3}]$ \qquad \footnotesize{else}\end{tabular} & \begin{tabular}{l} $[\ZZ_{d_2}],[\ZZ_{d_3}],[\ZZ_{d_4}]$ \\ $[\ZZ_2^-],[\DD_{d_2}^z],[\DD_{d_3}^z],[\DD_{d_4}^z]$ \end{tabular} & \begin{tabular}{l} \\ \begin{tabular}{l} $\mathsf{L}_{\octa},[\DD_2],[\DD_2^z]$ \\ $[\ZZ_4],[\DD_4],[\DD_4^z]$ \end{tabular}\footnotesize{if $4|n$} \\ \\ \hline \\ \begin{tabular}{l} $\mathsf{L}_{\octa},[\DD_2]$ \\ $[\DD_2^z],[\ZZ_4^-],[\DD_4^d]$ \end{tabular} \begin{tabular}{l} \footnotesize{if $n$ even} \\ \footnotesize{and $4\nmid n$} \end{tabular} \\ \\ \hline \\ \begin{tabular}{l} $\mathsf{L}_{\octa},[\DD_2^z]$ \end{tabular} \footnotesize{else} \rule[-0.5cm]{0cm}{0cm} \end{tabular} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\tetra \oplus \ZZ_2^c]$ & $[\ZZ_{d_3}],\mathcal{Z}(n)$ & $[\ZZ_2^-],[\ZZ_{d_2}],[\ZZ_{d_3}],[\DD_{d_2}^z]$ & $[\ZZ_2],[\ZZ_2^-],\Gamma(2,n),\ZZ_{d_3}$ \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\ico \oplus \ZZ_2^c]$ & $\mathcal{Z}(n),[\ZZ_{d_3}],[\ZZ_{d_5}]$ & \begin{tabular}{l} \rule[0.5cm]{0cm}{0cm} $[\ZZ_{d_2}],[\ZZ_{d_3}],[\ZZ_{d_5}]$ \\ $[\ZZ_2^-],[\DD_{d_2}^z],[\DD_{d_3}^z],[\DD_{d_5}^z]$ \rule[-0.5cm]{0cm}{0cm} \end{tabular} & \begin{tabular}{l} $[\ZZ_2],[\ZZ_2^-],\Gamma(2,n),\Gamma(3,n),\Gamma(5,n)$ \\ $[\ZZ_{d_3}],[\DD_{d_3}],[\DD_{d_3}^z],[\ZZ_{d_5}],[\DD_{d_5}],[\DD_{d_5}^z]$ 
\end{tabular} \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\SO(2)\oplus \ZZ_2^c]$ & $[\ZZ_{2n}^-]$ & $[\ZZ_2^-], [\ZZ_n]$ & $[\ZZ_2], [\ZZ_2^-],[\ZZ_{2n}^-]$ \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\OO(2)\oplus \ZZ_2^c]$ & $[\mathcal{Z}(n)],[\ZZ_{2n}^-]$ & \rule[0.5cm]{0cm}{0cm} $ [\DD_{d_2}^z],[\DD_n^z] $ \rule[-0.5cm]{0cm}{0cm} & $\mathcal{Z}(n),[\DD_{d_2}],[\DD_2^z],[\Dnd]$ \rule[-0.5cm]{0cm}{0cm} \\ \bottomrule \end{tabular} \caption{Clips between type II and III $\OO(3)$-subgroups (first cases)} \label{tab:Clips} \end{table} \begin{table}[H] \footnotesize \centering \begin{tabular}{|c||c|c|} \toprule $\circledcirc$ & $[\octa^-]$ & $[\OO(2)^-]$ \\ \hline $[\ZZ_m \oplus \ZZ_2^c]$ & \begin{tabular}{l} \\ $[\ZZ_{d_3'}],[\ZZ_2^-],[\ZZ_4^-] $ \footnotesize{ if $4|m$} \\ \\ \hline \\ $[\ZZ_{d_2'}],[\ZZ_{d_3'}],[\ZZ_{d_2'}^-]$ \footnotesize{ else} \rule[-0.5cm]{0cm}{0cm} \end{tabular} & $[\ZZ_m],[\ZZ_{d_2'}^-]$ \\ \hline $[\DD_m \oplus \ZZ_2^c]$ & \begin{tabular}{l} \\ \begin{tabular}{l} $[\ZZ_2],[\ZZ_{d_3'}],[\ZZ_2^-]$ \\ $[\ZZ_4^-],[\DD_{d_3'}^z],[\DD_2^z],[\DD_4^d]$\end{tabular} \footnotesize{if $4|m$} \\ \\ \hline \\ \begin{tabular}{l} $[\ZZ_2],[\ZZ_{d_3'}],[\ZZ_2^-]$ \\ $[\DD_2],[\DD_{d_3'}^z],[\DD_2^z]$ \end{tabular} \begin{tabular}{l} \footnotesize{if $m$ even} \\ \footnotesize{and $4\nmid m$} \end{tabular} \\ \\ \hline \\ \ \ $[\ZZ_2],[\ZZ_{d_3'}],[\ZZ_2^-],[\DD_{d_3'}^z]$ \quad \footnotesize{else} \rule[-0.5cm]{0cm}{0cm}\end{tabular} & \begin{tabular}{l} \\ \begin{tabular}{l} $[\ZZ_m],[\ZZ_2^-]$ \\ $[\DD_2^z],[\DD_m^z]$ \end{tabular} \footnotesize{if $m$ even} \\ \\ \hline \\ \begin{tabular}{l} $[\ZZ_m],[\ZZ_2]$ \\ $[\ZZ_{2}^-],[\DD_m^z]$ \end{tabular} \quad \footnotesize{else} \rule[-0.6cm]{0cm}{0cm} \end{tabular} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\octa \oplus \ZZ_2^c]$ & \rule[0.5cm]{0cm}{0cm} $[\ZZ_2],[\ZZ_{3}],[\ZZ_2^-],[\ZZ_4^-],[\DD_2^z],[\DD_3^z],[\DD_4^d],[\octa^-]$ \rule[-0.5cm]{0cm}{0cm} & \rule[0.5cm]{0cm}{0cm} $[\ZZ_2],[\ZZ_{3}],[\ZZ_4],[\ZZ_2^-],[\DD_2^z],[\DD_3^z],[\DD_4^z]$ \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\tetra \oplus \ZZ_2^c]$ & $[\ZZ_2],[\ZZ_{3}],[\ZZ_2^-],[\DD_2],[\DD_2^z],[\tetra]$ & $ [\ZZ_2], [\ZZ_3], [\ZZ_2^-],[\DD_2^z]$ \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\ico \oplus \ZZ_2^c]$ & $[\ZZ_2],[\ZZ_2^-],[\DD_2],[\DD_2^z],[\ZZ_{3}],[\DD_3^z],[\tetra]$ & \rule[0.5cm]{0cm}{0cm} $[\ZZ_2],[\ZZ_{3}],[\ZZ_5],[\ZZ_2^-],[\DD_2^z],[\DD_3^z],[\DD_5^z]$ \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\SO(2)\oplus \ZZ_2^c]$ & $[\ZZ_3],[\ZZ_2^-],[\ZZ_4^-]$ & $ [\ZZ_2^-],[\SO(2)]$ \rule[-0.5cm]{0cm}{0cm} \\ \hline \rule[0.5cm]{0cm}{0cm} $[\OO(2)\oplus \ZZ_2^c]$ & $[\ZZ_2^-],[\DD_3^z], [\DD_4^d]$ & $ [\DD_2^z],[\OO(2)^-]$ \rule[-0.5cm]{0cm}{0cm} \\ \bottomrule \end{tabular} \caption{Clips between type II and III $\OO(3)$-subgroups (second cases)} \label{tab:Clips2} \end{table} \section{Proofs for clips operations of type II and III $\OO(3)$-subgroups} \label{sec:proof_clips} In this section, we provide the details of the proof of Theorem~\ref{thm:Clips_TypeII_III}. 
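Before turning to the proofs, let us illustrate how the tables are read on a small sanity check, using only the statements above. Take $\Gamma=\ZZ_4^-$ (so $n=2$ in the first column of Table~\ref{tab:Clips}) and $H\oplus \ZZ_2^c=\ZZ_2\oplus \ZZ_2^c$ (so $m=2$). Then $d=\gcd(2,2)=2$ and $\frac{m}{d}=1$ is odd, so the table gives
\begin{equation*}
[\ZZ_4^-]\circledcirc [\ZZ_2\oplus \ZZ_2^c]=\set{[\1],[\ZZ_2]}.
\end{equation*}
This agrees with a direct computation: a conjugate $g(\ZZ_2\oplus \ZZ_2^c)g^{-1}$ consists of $\pm\id$ and $\pm$ the half-turn around $g\ee_3$, while the only non-trivial rotation in $\ZZ_4^-$ is the half-turn around $\ee_3$, its improper elements having rotation angle $\pm\frac{\pi}{2}$; the intersection is therefore $\ZZ_2$ when $g\ee_3=\pm\ee_3$ and $\1$ otherwise.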
The notation $\vR(\nn,\theta)$, with $\theta\in [0;2\pi[$ and $\nn=(n_x,n_y,n_z)$ a unit vector, denotes the rotation by angle $\theta$ around $\nn$, given by the Rodrigues formula
\begin{equation*}
\vR(\nn,\theta)=\exp(\theta j(\nn))=\id+j(\nn)\sin(\theta)+j(\nn)^2(1-\cos(\theta)),
\end{equation*}
where $j(\nn)$ denotes the antisymmetric matrix
\begin{equation*}
j(\nn)=\begin{pmatrix} 0 & - n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{pmatrix}.
\end{equation*}
To find the clips between subgroups of type II and III, we use the following lemma.
\begin{lem}\label{lem:Inter_Type_I_et_III}
Let $\Gamma=\Gamma_+\cup (-\gamma \Gamma_+)$ be a subgroup of type III, where $\Gamma_+=\Gamma\cap \SO(3)$ and $-\gamma\in \Gamma \setminus \Gamma_+$. Then for every subgroup $H$ of $\SO(3)$ we have
\begin{equation*}
\Gamma\cap(H\oplus \ZZ_2^c)=(\Gamma_+\cap H)\cup(-((\gamma \Gamma_+)\cap H)).
\end{equation*}
\end{lem}
\begin{proof}
As we have $H\oplus \ZZ_2^c=H\cup (-H)$, we deduce that
\begin{align*}
\Gamma\cap (H\oplus \ZZ_2^c) & =(\Gamma_+\cup(-\gamma \Gamma_+))\cap(H\cup (-H)) \\
& =(\Gamma_+\cap H)\cup (\Gamma_+\cap(-H))\cup ((-\gamma \Gamma_+)\cap H)\cup ((-\gamma \Gamma_+)\cap(-H)) \\
& =(\Gamma_+\cap H)\cup (-((\gamma \Gamma_+)\cap H)),
\end{align*}
since $\Gamma_+\cap(-H)$ and $(-\gamma \Gamma_+)\cap H$ are empty (rotations cannot equal improper transformations), while $(-\gamma \Gamma_+)\cap(-H)=-((\gamma \Gamma_+)\cap H)$. Hence the result.
\end{proof}
In the next subsections, we focus on clips operations $[\Gamma] \circledcirc [H\oplus \ZZ_2^c]$, so from definition~\ref{def:clips_operation} and lemma~\ref{lem:Inter_Type_I_et_III} we have to consider
\begin{equation*}
\Gamma\cap (g H g^{-1}\oplus \ZZ_2^c)=( \Gamma_+\cap (gHg^{-1}))\cup (-(\gamma \Gamma_+\cap (gHg^{-1}))),\quad \Gamma_+=\Gamma \cap \SO(3),\quad -\gamma\in \Gamma \setminus \Gamma_+,
\end{equation*}
with $g\in \SO(3)$.
\begin{rem}\label{rem:Decomp_Intersection}
It follows that if $H=\bigcup_{i=1}^N H_i$, where $H, H_{1},\dots,H_{N}$ are subgroups of $\SO(3)$, then for any type III subgroup $\Gamma=\Gamma_+\cup (-\gamma \Gamma_+)$ we have
\begin{equation}\label{eq:Union_Intersection}
\Gamma\cap (g H g^{-1}\oplus \ZZ_2^c)=\bigcup_{i=1}^N (\Gamma_+\cap (gH_{i}g^{-1}))\cup (-(\gamma \Gamma_+\cap (gH_{i}g^{-1}))).
\end{equation}
\end{rem}
\subsection{Clips with $\ZZ_{2n}^-$}
First, let us construct $\ZZ_{2n}^-$ from the couple $(\ZZ_{2n},\ZZ_n)$ as in equation \ref{typeIII} of appendix \ref{sec:O3-subgroups}:
\begin{equation*}
\ZZ_{2n}^-=\ZZ_{n}\cup (-\gamma\ZZ_n),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{n}\right),
\end{equation*}
where we set $\ZZ_1:=\set{Id}$ for the case $n=1$.
\begin{lem}\label{lem:clips_Zm}
Let $n \geq 1$ and $m\geq 2$ be two integers and $d=\gcd(m,n)$. Then
\begin{align*}
[\ZZ_{2n}^{-}] \circledcirc [\ZZ_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_{2d}^-]} & \text{ if $\frac{m}{d}$ even} \\ \set{[\1],[\ZZ_d]} & \text{ else} \end{cases} .
\end{align*}
\end{lem}
\begin{proof}
We deduce from Lemma~\ref{lem:Inter_Type_I_et_III} that
\begin{equation*}
\ZZ_{2n}^- \cap (g\ZZ_m g^{-1}\oplus \ZZ_2^c)=(\ZZ_n\cap g \ZZ_m g^{-1})\cup (-(\gamma\ZZ_n\cap g \ZZ_m g^{-1})),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{n}\right).
\end{equation*}
In the case when $g\ee_3$ and $\ee_3$ are not collinear, this group reduces to $\1$, so we suppose now that $g\ee_3=\pm \ee_3$.
We thus have to consider
\begin{equation*}
(\ZZ_n\cap \ZZ_m )\cup (-(\gamma\ZZ_n\cap \ZZ_m)),
\end{equation*}
where $\ZZ_n\cap \ZZ_m=\ZZ_d$ and $\gamma\ZZ_n\cap \ZZ_m$ is obtained by solving, for unknowns $k_1,k_2\in \ZZ$, the equation
\begin{equation*}
\frac{2k_1+1}{n}=\frac{2k_2}{m} \iff (2k_1+1)d m_1=2k_2d n_1,\quad \mathrm{gcd}(m_1,n_1)=1.
\end{equation*}
We get solutions only if $m_1=\frac{m}{d}$ is even. Replacing $m_1$ by $2p$, we get that $p$ divides $k_2$, hence $k_2=p k'$ with $k'$ odd. On one hand, $\frac{2k_2}{m}=\frac{2p k'}{2pd}$, and on the other hand, $\frac{2k_1+1}{n}=\frac{k' n_1 }{dn_1}$. We deduce that
\begin{equation*}
\gamma\ZZ_n\cap \ZZ_m=\set{\vR\left(\ee_3,\frac{(2k+1)\pi}{d}\right);\; k\in\ZZ},
\end{equation*}
so that $(\ZZ_n\cap \ZZ_m )\cup (-(\gamma\ZZ_n\cap \ZZ_m))$ is conjugate to $\ZZ_{2d}^-$ when $\frac{m}{d}$ is even, and to $\ZZ_d$ otherwise.
\end{proof}
In the following, recall that (discarding the trivial class $[\1]$)
\begin{equation}\label{eq:Def_Zn}
\mathcal{Z}(n):=\ZZ_{2n}^{-}\circledcirc (\ZZ_2\oplus \ZZ_2^c)= \begin{cases} [\ZZ_{2}] & \text{ if } n \text{ even} \\ [\ZZ_{2}^-] & \text{ else} \\ \end{cases}.
\end{equation}
\begin{lem}\label{lem:Z2nmoinsclipsDm}
Let $n\geq 1$ and $m\geq 2$ be two integers and $d=\gcd(m,n)$. Then
\begin{equation*}
[\ZZ_{2n}^{-}] \circledcirc [\DD_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_{2d}^-],\mathcal{Z}(n)} & \text{ if $\frac{m}{d}$ even} \\ \set{[\1],[\ZZ_d],\mathcal{Z}(n)} & \text{ else} \end{cases} .
\end{equation*}
\end{lem}
\begin{proof}
Recall from~\cite[Appendix A]{Olive2019} that
\begin{equation*}
\DD_m=\ZZ_m \cup_{i=1}^{m} \ZZ_2^{\bb_i},\quad \ZZ_{2}^{\bb_i}:=\set{e,\vR(\bb_i,\pi)},
\end{equation*}
where the $\bb_i$ are called the secondary axes of the subgroup $\DD_m$, with
\begin{equation*}
\bb_1=\ee_1,\quad \bb_k=\vR\left(\ee_3,\frac{\pi}{m}\right)\bb_{k-1},\quad k=2,\dotsc,m.
\end{equation*}
We have
\begin{align*}
\ZZ_{2n}^- \cap (g\DD_m g^{-1}\oplus \ZZ_2^c)=\left(\ZZ_{2n}^- \cap (g\ZZ_m g^{-1}\oplus \ZZ_2^c) \right) \bigcup \left(\ZZ_{2n}^- \cap(g\ZZ_2^{\bb_i} g^{-1}\oplus \ZZ_2^c)\right).
\end{align*}
The first part of the union is deduced from lemma \ref{lem:clips_Zm}, and the second part gives $\mathcal{Z}(n)$ (when $g\bb_i=\pm \ee_3$), so we can conclude.
\end{proof}
\begin{lem}\label{lem:Z2nmClipsExceptionels}
Let $d_3=\gcd(3,n)$ and $d_5=\gcd(5,n)$. We have
\begin{align*}
[\ZZ_{2n}^-]\circledcirc [\tetra\oplus\ZZ_2^c] & = \set{[\1],[\ZZ_{d_3}],\mathcal{Z}(n)},\quad [\ZZ_{2n}^-]\circledcirc[\ico\oplus \ZZ_2^c]=\set{[\1],\mathcal{Z}(n),[\ZZ_{d_3}],[\ZZ_{d_5}]}. \\
[\ZZ_{2n}^-]\circledcirc[\octa\oplus \ZZ_2^c] & = \begin{cases} \set{[\1],[\ZZ_2],[\ZZ_{d_3}],[\ZZ_4]} & \text{ if $4|n$} \\ \set{[\1],[\ZZ_2],[\ZZ_{d_3}],[\ZZ_4^-]} & \text{ if $n$ is even and $4\nmid n$} \\ \set{[\1],[\ZZ_2^-],[\ZZ_{d_3}]} & \text{ if $n$ is odd} \end{cases}.
\end{align*}
\end{lem}
\begin{proof}
Let us consider the decomposition~\eqref{eq:Decomposition_tetra1} of the group $\tetra$. From remark~\ref{rem:Decomp_Intersection}, we thus have to consider
\begin{equation*}
(\ZZ_n\cap (g\DD_2 g^{-1}))\cup (-(\gamma \ZZ_n\cap (g\DD_2g^{-1})))\bigcup_{i=1}^4 (\ZZ_n\cap (g\ZZ_3^{\bs_{t_i}} g^{-1}))\cup (-(\gamma \ZZ_n\cap (g\ZZ_3^{\bs_{t_i}}g^{-1}))).
\end{equation*}
The only non--trivial cases are obtained for $g=\text{Id}$ or for $g$ such that $g\ZZ_3^{\bs_{t_1}} g^{-1}=\ZZ_3$ (for instance). It remains to consider separately
\begin{equation*}
(\ZZ_n\cap \DD_2)\cup (-(\gamma \ZZ_n\cap \DD_2)) \text{ or } (\ZZ_n\cap \ZZ_3)\cup (-(\gamma \ZZ_n\cap \ZZ_3)),
\end{equation*}
which are directly deduced from the previous lemmas.
For the cubic case, we deduce from the decomposition~\eqref{eq:Decomposition_Cube1} and remark~\ref{rem:Decomp_Intersection} that the intersection
\begin{equation*}
\ZZ_{2n}^- \cap (g\octa g^{-1}\oplus \ZZ_2^c)
\end{equation*}
always reduces either to
\begin{equation*}
\ZZ_{2n}^- \cap (g\DD_4^{(i)} g^{-1}\oplus \ZZ_2^c) \text{ or } \ZZ_{2n}^- \cap (g\DD_3^{(j)} g^{-1}\oplus \ZZ_2^c)
\end{equation*}
for some $i=1,2,3$ or $j=1,2,3,4$, taking for instance $g=\text{Id}$ or $g$ such that $g\bs_{t_1}=\ee_3$ (see \autoref{sec:O3-subgroups}). We thus obtain
\begin{equation*}
[\ZZ_{2n}^-]\circledcirc [\octa\oplus\ZZ_2^c]=\left([\ZZ_{2n}^-]\circledcirc [\DD_3\oplus\ZZ_2^c]\right)\cup \left([\ZZ_{2n}^-]\circledcirc [\DD_4\oplus\ZZ_2^c]\right),
\end{equation*}
and similarly, from the decomposition~\eqref{eq:Decomposition_Ico} of the subgroup $\ico$, we get
\begin{equation*}
[\ZZ_{2n}^-]\circledcirc [\ico\oplus\ZZ_2^c]=\left([\ZZ_{2n}^-]\circledcirc [\DD_2\oplus\ZZ_2^c]\right)\cup \left([\ZZ_{2n}^-]\circledcirc [\DD_3\oplus\ZZ_2^c]\right)\cup \left([\ZZ_{2n}^-]\circledcirc [\DD_5\oplus\ZZ_2^c]\right),
\end{equation*}
so the result is directly deduced from the previous lemmas.
\end{proof}
Finally, as a direct consequence of lemma~\ref{lem:Inter_Type_I_et_III}, we have the following.
\begin{lem}
For any integer $n\geq 1$ we have:
\begin{equation*}
[\ZZ_{2n}^-]\circledcirc [\SO(2)\oplus \ZZ_2^c]=\set{[\1],[\ZZ_{2n}^-]},\quad [\ZZ_{2n}^-]\circledcirc [\OO(2)\oplus \ZZ_2^c]=\set{[\1],\mathcal{Z}(n),[\ZZ_{2n}^-]},
\end{equation*}
with $\mathcal{Z}(n)$ given by~\eqref{eq:Def_Zn}.
\end{lem}
\subsection{Clips with $\Dnz$}
\label{sec:withDnz}
As explained in \autoref{sec:O3-subgroups}, the subgroup $\Dnz$ is obtained for $-\gamma=-\vR(\be_1,\pi)$, so that
\begin{equation*}
\Dnz=\ZZ_n \cup -\gamma \ZZ_n=\ZZ_n\cup \set{-\vR(\bb_1,\pi),\dotsc,\ -\vR(\bb_n,\pi)}, \quad \bb_1=\ee_1,\quad \bb_k=\vR\left(\ee_3,\frac{\pi}{n}\right)\bb_{k-1},
\end{equation*}
where the $\langle \bb_i \rangle$ are called the secondary axes of the dihedral subgroup $\DD_n$ (see~\cite[Appendix A]{Olive2019}).
\begin{lem}\label{lem:DnvClipsZm}
Let $m,\ n \geq 2$ be two integers and $d=\gcd(m,n)$. Then
\begin{align*}
[\Dnz] \circledcirc [\ZZ_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_d]} & \text{ if $m$ is odd} \\ \set{[\1],[\ZZ_{2}^-],[\ZZ_d]} & \text{ if $m$ is even} \end{cases}.
\end{align*}
\end{lem}
\begin{proof}
From lemma~\ref{lem:Inter_Type_I_et_III} we have to consider
\begin{align*}
\Dnz \cap (g\ZZ_m g^{-1} \oplus \ZZ_2^c)=(\ZZ_n\cap g \ZZ_m g^{-1})\cup (-(\gamma\ZZ_n\cap g \ZZ_m g^{-1})),\quad \gamma=\vR(\be_1,\pi),
\end{align*}
where $\gamma \ZZ_n=\set{\vR(\bb_1,\pi),\dotsc,\ \vR(\bb_n,\pi)}$. Here, the only non--trivial cases are obtained when the axis of $g\ZZ_m g^{-1}$ is $\ee_3$ (see lemma \ref{lem:clips_Zm}) or one of the secondary axes $\bb_i$, which leads directly to the result.
\end{proof}
\begin{lem}\label{lem:DnvClipsDm}
Let $m,\ n \geq 2$ be two integers, $d=\gcd(m,n)$ and $d_2=\gcd(2,n)$. Then
\begin{align*}
[\DD_n^z]\circledcirc [\DD_m\oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_d],[\ZZ_2^-],[\DD_{d_2}^z],[\DD_d^z]} \quad & \text{if $m$ is even} \\ \set{[\1],[\ZZ_d],[\ZZ_2^-],[\ZZ_{d_2}],[\DD_d^z]} \quad & \text{if $m$ is odd} \end{cases} .
\end{align*}
\end{lem}
\begin{proof}
By lemma \ref{lem:Inter_Type_I_et_III}, the result is deduced from the intersection
\begin{align*}
(\ZZ_n\cap g \DD_m g^{-1})\cup (-(\gamma\ZZ_n\cap g \DD_m g^{-1})),\quad \gamma\ZZ_n=\set{\vR(\bb_i,\pi),i=1,\dotsc,n}.
\end{align*}
By considering the decomposition $\DD_m=\ZZ_m \cup_{i=1}^{m} \ZZ_2^{\bb_i}$, where $\ZZ_{2}^{\bb_i}:=\set{e,\vR(\bb_i,\pi)}$, this intersection becomes
\begin{equation*}
(\ZZ_n\cap g \DD_m g^{-1})\cup (-(\gamma\ZZ_n\cap g \ZZ_m g^{-1})) \cup (-(\gamma\ZZ_n\cap g \cup_{i=1}^{m}\ZZ_2^{\bb_i} g^{-1})).
\end{equation*}
The first part of the union gives $\1$, $\ZZ_{d_2}$ or $\ZZ_d$ by \cite[lemma A.3]{Olive2019}. The second and third parts of the union give $\ZZ_{d_2'}$ only if $g\ee_3=\bb_i$ ($i=1,\dotsc,n$), and $\ZZ_2$ only if $g\bb_i=\bb_j$ ($i=1,\dotsc, n$ and $j=1,\dotsc,m$). We deduce the result by regrouping these subgroups according to the corresponding $g$.
\end{proof}
As in the proof of lemma~\ref{lem:Z2nmClipsExceptionels}, we can use the group decompositions~\eqref{eq:Decomposition_tetra1}--\eqref{eq:Decomposition_Cube1}--\eqref{eq:Decomposition_Ico} and the intersections~\eqref{eq:Union_Intersection}, so that the previous lemmas~\ref{lem:DnvClipsZm} and~\ref{lem:DnvClipsDm} lead to the following.
\begin{lem}
For any integer $n\geq 2$ we have
\begin{align*}
[\Dnz] \circledcirc [\tetra \oplus \ZZ_2^c] & =\set{[\1],[\ZZ_{2}^{-}],[\ZZ_{d_2}],[\ZZ_{d_3}],[\DD_{d_2}^{z}]} \\
[\Dnz] \circledcirc [\octa \oplus \ZZ_2^c] & =\set{[\1],[\ZZ_{d_2}],[\ZZ_{d_3}],[\ZZ_{d_4}],[\ZZ_{2}^{-}],[\DD_{d_2}^{z}],[\DD_{d_3}^z],[\DD_{d_4}^z] } \\
[\Dnz] \circledcirc [\ico \oplus \ZZ_2^c] & =\set{[\1],[\ZZ_{d_2}],[\ZZ_{d_3}],[\ZZ_{d_5}],[\ZZ_{2}^{-}],[\DD_{d_2}^z],[\DD_{d_3}^z],[\DD_{d_5}^z]}
\end{align*}
\end{lem}
\begin{lem}\label{lem:DnzwithO(2)}
For any integer $n\geq 2$, we have
\begin{align*}
[\DD_{n}^{z}] \circledcirc [\SO(2) \oplus \ZZ_2^c] & =\set{[\1],[\ZZ_2^-],[\ZZ_n]} \\
[\DD_{n}^{z}] \circledcirc [\OO(2) \oplus \ZZ_2^c] & = \set{[\1],[\DD_{d_2}^z],[\Dnz]}
\end{align*}
\end{lem}
\begin{proof}
We give here the proof for the clips operation $[\Dnz] \circledcirc [\OO(2) \oplus \ZZ_2^c]$, as the other one is obtained in the same way. First, using lemma~\ref{lem:Inter_Type_I_et_III}, we have to consider the intersection
\begin{equation}\label{eq:Intersec_O2_Dnv}
\left(\ZZ_n\cap (g\OO(2)g^{-1})\right)\cup (-(\gamma\ZZ_n\cap g \OO(2) g^{-1})),\quad \gamma=\vR(\be_1,\pi).
\end{equation}
For any $g\in \SO(3)$ we can write
\begin{equation}\label{eq:Groupe_Conj_O2}
g\OO(2)g^{-1}=\set{\vR(\uu,\theta),\quad \vR(\vv,\pi),\quad \theta\in [0;2\pi[,\quad \uu\cdot\vv=0,\quad \|\vv\|=1},
\end{equation}
where $\uu$ is some unit vector and $\uu\cdot\vv$ is the scalar product between $\uu$ and $\vv$. Only two non--trivial cases can occur:
\begin{itemize}
\item $\uu=\pm \ee_3$, in which case the intersection~\eqref{eq:Intersec_O2_Dnv} reduces to $\Dnz$.
\item $\uu=\pm\be_1$, in which case we have
\begin{equation*}
\ZZ_n\cap (g\OO(2)g^{-1})=\ZZ_{d_2},\quad \gamma\ZZ_n\cap g \OO(2) g^{-1}= \begin{cases} \set{\vR(\ee_1,\pi),\vR(\ee_2,\pi)} &\text{ if } n \text{ even} \\ \set{\vR(\ee_1,\pi)} &\text{ otherwise } \end{cases}
\end{equation*}
so we can conclude the proof.
\end{itemize}
\end{proof}
\subsection{Clips with $\Dnd$}
First, we have (see \autoref{sec:O3-subgroups})
\begin{align*}
\Dnd & =\DD_n\cup (-\gamma \DD_n),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{n}\right),
\end{align*}
where we can write
\begin{align*}
\DD_n & =\set{\vR\left(\ee_3,\frac{2k_1\pi}{n}\right),\vR(\bb_{2k_1+1},\pi);\quad k_1=0,\dotsc,n-1} \\
-\gamma \DD_n & =\set{-\vR\left(\ee_3,\frac{(2k_2+1)\pi}{n}\right),-\vR(\bb_{2(k_2+1)},\pi);k_2=0,\dotsc,n-1}
\end{align*}
with
\begin{equation}\label{eq:Secon_Axe_D2nm}
\bb_1=\ee_1 \text{ and } \bb_k=\vR\left(\ee_3,\frac{\pi}{2n}\right)\bb_{k-1}.
\end{equation}
\begin{lem}\label{lem:D2nhclipsZm}
Let $m,\ n \geq 2$ be two integers and $d=\gcd(m,n)$. Then
\begin{align*}
[\Dnd] \circledcirc [\ZZ_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_2],[\ZZ_{2}^-],[\ZZ_{2d}^-]} & \text{ if $\frac{m}{d}$ even} \\ \set{[\1],[\ZZ_2],[\ZZ_{2}^-],[\ZZ_{d}]} & \text{ if $m$ even and $\frac{m}{d}$ odd} \\ \set{[\1],[\ZZ_d]} & \text{ else} \end{cases}.
\end{align*}
\end{lem}
\begin{proof}
From lemma \ref{lem:Inter_Type_I_et_III} we have to consider the intersection
\begin{equation*}
\Dnd \cap (g\ZZ_m g^{-1} \oplus \ZZ_2^c)=(\DD_n\cap g \ZZ_m g^{-1})\cup (-(\gamma\DD_n\cap g \ZZ_m g^{-1})),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{n}\right),
\end{equation*}
which reduces to $\1$ for generic $g$. Otherwise, we only have to consider three cases:
\begin{itemize}
\item $g\be_3=\be_3$, and we deduce the intersection from lemma~\ref{lem:clips_Zm};
\item $g\be_3=\bb_{2k}$ for some $k$, and the intersection reduces to $\ZZ_2^-$ for $m$ even;
\item $g\be_3=\bb_{2k+1}$ for some $k$, and the intersection reduces to $\ZZ_2$ for $m$ even,
\end{itemize}
and we can conclude the proof.
\end{proof}
For the following, we recall the notation introduced in theorem \ref{thm:Clips_TypeII_III}:
\begin{equation}\label{eq:Gammamn}
\Gamma(m,n):=\begin{cases} [\DD_{2}],[\DD_2^z] &\text{ if } m \text{ and } n \text{ even } \\ [\DD_{2}^z] &\text{ if } m \text{ even and } n \text{ odd } \\ [\ZZ_{2}] &\text{ if } m \text{ odd and } n \text{ even } \\ [\ZZ_{2}^-] &\text{ if } m \text{ and } n \text{ odd } \end{cases}
\end{equation}
\begin{lem}\label{lem:D2nhclipsDm}
Let $m,\ n \geq 2$ be two integers, $d=\gcd(m,n)$ and $\Gamma(m,n)$ defined by~\eqref{eq:Gammamn}. Then
\begin{align*}
[\DD_{2n}^{d}] \circledcirc [\DD_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],\Gamma(m,n),[\ZZ_2],[\ZZ_2^-],[\ZZ_{2d}^-],[\DD_{2d}^d]} & \text{ if $\frac{m}{d}$ is even} \\ \set{[\1],\Gamma(m,n),[\ZZ_2],[\ZZ_2^-],[\ZZ_d],[\DD_{d}],[\DD_{d}^z]} & \text{ if $\frac{m}{d}$ is odd and $m$ is even} \\ \set{[\1],\Gamma(m,n),[\ZZ_d],[\DD_{d}],[\DD_{d}^z]} & \text{ else} \end{cases}.
\end{align*}
\end{lem}
\begin{proof}
We have to consider the intersection
\begin{equation*}
\left( \DD_n\cap (g\DD_m g^{-1}) \right)\cup \left(- ((\gamma\DD_n)\cap (g\DD_m g^{-1})) \right),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{n}\right),
\end{equation*}
where (see \eqref{eq:AxesZnDn})
\begin{equation*}
g\DD_m g^{-1}=\left\langle \vR\left( \uu,\frac{2\pi}{m}\right),\vR(\vv,\pi)\right\rangle.
\end{equation*}
Only seven non-trivial cases can occur:
\begin{itemize}
\item $\uu=\pm \ee_3$ and $\vv=\pm \bb_{2k+1}$ for some $k$ (see~\eqref{eq:Secon_Axe_D2nm}); in the same way as in the proof of lemma~\ref{lem:clips_Zm}, we obtain $[\DD_{2d}^d]$ if $m/d$ is even, and $[\DD_d]$ otherwise.
\item $\uu=\pm \ee_3$ and $\vv=\pm \bb_{2k}$ for some $k$ (see~\eqref{eq:Secon_Axe_D2nm}); in the same way as in the proof of lemma~\ref{lem:clips_Zm}, we obtain $[\DD_{2d}^d]$ if $m/d$ is even, and $[\DD_d^z]$ otherwise.
\item $\uu=\pm \ee_3$ only; using again the proof of lemma~\ref{lem:clips_Zm}, we obtain $[\ZZ_{2d}^-]$ if $m/d$ is even, and $[\ZZ_d]$ otherwise.
\item $\uu=\bb_{2k+1}$ for some $k$ and $\vv=\be_3$, in which case we obtain
\begin{equation*}
\begin{cases} [\DD_{2}] &\text{ if } m \text{ and } n \text{ even } \\ [\DD_{2}^z] &\text{ if } m \text{ even and } n \text{ odd } \\ [\ZZ_{2}] &\text{ if } m \text{ odd and } n \text{ even } \\ [\ZZ_{2}^-] &\text{ if } m \text{ and } n \text{ odd } \end{cases}
\end{equation*}
\item $\uu=\bb_{2k+1}$ for some $k$ only, and we obtain $[\ZZ_2]$.
\item $\uu=\bb_{2k+2}$ for some $k$ and $\vv=\be_3$, in which case we obtain
\begin{equation*}
\begin{cases} [\DD_{2}^z] &\text{ if } m \text{ even } \\ [\ZZ_{2}] &\text{ if } m \text{ odd and } n \text{ even } \\ [\ZZ_{2}^-] &\text{ if } m \text{ and } n \text{ odd } \end{cases}
\end{equation*}
\item $\uu=\bb_{2k+2}$ for some $k$ only, where we obtain $[\ZZ_2^-]$ if $m$ is even, and $[\1]$ otherwise; so we can conclude the proof.
\end{itemize}
\end{proof}
From the group decompositions~\eqref{eq:Decomposition_tetra1}--\eqref{eq:Decomposition_Cube1}--\eqref{eq:Decomposition_Ico} and remark~\ref{rem:Decomp_Intersection}, using the same arguments as in the proof of Lemma~\ref{lem:Z2nmClipsExceptionels}, we get
\begin{align*}
[\Dnd]\circledcirc [\tetra\oplus\ZZ_2^c] & =\left([\Dnd]\circledcirc [\DD_2\oplus\ZZ_2^c]\right)\cup \left([\Dnd]\circledcirc [\ZZ_3\oplus\ZZ_2^c]\right) \\
[\Dnd]\circledcirc [\octa\oplus\ZZ_2^c] & =\left([\Dnd]\circledcirc [\DD_3\oplus\ZZ_2^c]\right)\cup \left([\Dnd]\circledcirc [\DD_4\oplus\ZZ_2^c]\right) \\
[\Dnd]\circledcirc [\ico\oplus\ZZ_2^c] & =\left([\Dnd]\circledcirc [\DD_5\oplus\ZZ_2^c]\right)\cup \left([\Dnd]\circledcirc [\DD_3\oplus\ZZ_2^c]\right)\cup \left([\Dnd]\circledcirc [\DD_2\oplus\ZZ_2^c]\right).
\end{align*}
As a direct consequence of the previous lemmas, we thus get
\begin{lem}
For any integer $n\geq 2$ we have
\begin{align*}
[\Dnd]\circledcirc [\tetra\oplus\ZZ_2^c] & =\set{[\1],[\ZZ_2],[\ZZ_2^-],\Gamma(2,n),[\ZZ_{d_3}]} \\
[\DD_{2n}^{d}] \circledcirc [\octa \oplus \ZZ_2^c] & = \begin{cases} \set{\mathsf{L}_{\octa},[\DD_2],[\DD_2^z],[\ZZ_4],[\DD_4],[\DD_4^z]} & \text{if $4|n$} \\ \set{\mathsf{L}_{\octa},[\DD_2],[\DD_2^z],[\ZZ_4^-],[\DD_4^d]} & \text{if $n$ is even and $4\nmid n$} \\ \set{\mathsf{L}_{\octa},[\DD_2^z]} & \text{if $n$ is odd} \end{cases} \\
& \mathsf{L}_{\octa}:=[\1],[\ZZ_2],[\ZZ_2^-],\Gamma(n,3),[\ZZ_{d_3}],[\DD_{d_3}],[\DD_{d_3}^z] \\
[\Dnd]\circledcirc [\ico\oplus\ZZ_2^c] & =\left\lbrace [\1],[\ZZ_2],[\ZZ_2^-],\Gamma(2,n),\Gamma(3,n),\Gamma(5,n),[\ZZ_{d_3}],[\DD_{d_3}],[\DD_{d_3}^z], \right. \\
& \hspace*{2cm} \left. [\ZZ_{d_5}],[\DD_{d_5}],[\DD_{d_5}^z] \right\rbrace
\end{align*}
where $\Gamma(m,n)$ is given by~\eqref{eq:Def_Gamma_n} and $d_k=\text{gcd}(n,k)$ for $k=3,5$.
\end{lem}
Finally, we have
\begin{lem}
For any integer $n\geq 2$ we have
\begin{align*}
[\DD_{2n}^{d}] \circledcirc [\SO(2) \oplus \ZZ_2^c] & =\set{[\1],[\ZZ_2],[\ZZ_2^-],[\ZZ_{2n}^-]} \\
[\DD_{2n}^{d}] \circledcirc [\OO(2) \oplus \ZZ_2^c] & =\set{[\1],\mathcal{Z}(n),[\DD_{d_2}],[\DD_2^z],[\Dnd]}.
\end{align*}
\end{lem}
\begin{proof}
The first clips operation is obtained in the same way as the one given by lemma~\ref{lem:D2nhclipsZm}.
We now have to consider the intersection
\begin{equation*}
\left( \DD_n\cap (g\OO(2) g^{-1}) \right)\cup \left(- ((\gamma\DD_n)\cap (g\OO(2) g^{-1})) \right),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{n}\right),
\end{equation*}
where $g\OO(2) g^{-1}$ is given by~\eqref{eq:Groupe_Conj_O2}. Four non--trivial cases can occur:
\begin{itemize}
\item $\uu=\pm \ee_3$, and we obtain $[\Dnd]$;
\item $\uu=\bb_{2k+1}$ for some $k$, say $\uu=\bb_1=\ee_1$ (see~\eqref{eq:Secon_Axe_D2nm}). Then, if $n$ is even, there exists $k'$ such that $\bb_{2k'+1}=\ee_2$; otherwise there exists $k'$ such that $\bb_{2k'}=\ee_2$. We deduce that
\begin{equation*}
\DD_n\cap (g\OO(2) g^{-1})= \begin{cases} \DD_2 & \text{ if } n \text{ is even} \\ \set{\text{Id},\vR(\ee_1,\pi)} & \text{ otherwise} \end{cases}
\end{equation*}
and
\begin{equation*}
(\gamma\DD_n)\cap (g\OO(2) g^{-1})= \begin{cases} \emptyset & \text{ if } n \text{ is even} \\ \set{\vR(\ee_3,\pi),\vR(\ee_2,\pi)} & \text{ otherwise} \end{cases}
\end{equation*}
and we thus obtain $[\DD_2]$ if $n$ is even, or $[\DD_2^z]$ if $n$ is odd.
\item $\uu=\bb_{2k}$ for some $k$, say $\uu=\bb_2$ (see~\eqref{eq:Secon_Axe_D2nm}). In this case, whatever the parity of $n$, we always obtain $[\DD_2^z]$.
\item $\vv=\pm \ee_3$; since $\vv=g\bb_i$, we have two cases:
\begin{itemize}
\item if $\bb_i$ is one of the secondary axes of $\DD_{2n}$, then the discussion is similar to the previous cases;
\item if $\bb_i$ is not one of the secondary axes of $\DD_{2n}$, then we obtain $\mathcal{Z}(n)$.
\end{itemize}
\end{itemize}
\end{proof}
\subsection{Clips with $\octa^-$}
First, we introduce a useful decomposition of the subgroup $\octa^-$, deduced from~\eqref{eq:Decomposition_Cube}:
\begin{equation}\label{eq:Decomposition_CubeMoins}
\octa^-=\bigcup_{i=1}^3 (\ZZ_4^{\ee_i})^- \bigcup_{j=1}^4 \ZZ_3^{\bs_{t_j}} \bigcup_{k=1}^6 (\ZZ_2^{\pmb{a}_{c_k}})^-
\end{equation}
where
\begin{equation*}
(\ZZ_4^{\ee_i})^-=\set{\text{Id},\vR(\ee_i,\pi),-\vR\left(\ee_i,\frac{\pi}{2}\right),-\vR\left(\ee_i,\frac{3\pi}{2}\right)},\quad (\ZZ_2^{\pmb{a}_{c_k}})^-=\set{\text{Id},-\vR(\pmb{a}_{c_k},\pi)}.
\end{equation*}
\begin{lem}\label{lem:octaclipsZm}
Let $m\geq 2$ be an integer and $d_k'=\gcd(k,m)$ ($k=2,3$). Then we have
\begin{align*}
[\octa^-] \circledcirc [\ZZ_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_{2}^-],[\ZZ_{d_3'}],[\ZZ_4^-]} & \text{if $4|m$} \\ \set{[\1],[\ZZ_{d'_2}],[\ZZ_{d'_2}^-],[\ZZ_{d_3'}]} & \text{otherwise} \end{cases}.
\end{align*}
\end{lem}
\begin{proof}
For any $g\in \SO(3)$, let us write $g\ZZ_m g^{-1}=\ZZ_m^{\uu}$. When considering the intersection
\begin{equation*}
\octa^{-} \cap (\ZZ_m^{\uu}\oplus \ZZ_2^c),
\end{equation*}
the only non-trivial cases are when $\uu=\pm\ee_i$, $\uu=\pm\bs_{t_j}$ or $\uu=\pm \pmb{a}_{c_k}$ (see~\eqref{eq:Decomposition_CubeMoins}), and the problem then reduces to the clips operations $[\ZZ_{2n}^-]\circledcirc [\ZZ_m\oplus \ZZ_2^c]$ for $n=1,2$ or $[\ZZ_3]\circledcirc [\ZZ_m\oplus \ZZ_2^c]$, which are already known (see lemma~\ref{lem:clips_Zm} and~\cite{Olive2019}), so we can conclude.
\end{proof}
For the following, instead of using the decomposition~\eqref{eq:Decomposition_CubeMoins}, we write $\octa^-$ using $\tetra$ (see \autoref{sec:O3-subgroups}):
\begin{equation*}
\octa^-=\tetra \cup (-\gamma\tetra),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{2}\right).
\end{equation*}
\begin{lem}
Let $m\geq 2$ be an integer.
We have
\begin{align*}
[\octa^-] \circledcirc [\DD_m \oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_2],[\ZZ_{2}^-],[\ZZ_{d_3'}],[\DD_{d_3'}^z],[\ZZ_4^-],[\DD_2^z],[\DD_4^d]} & \text{if $4|m$} \\ \set{[\1],[\ZZ_2],[\ZZ_{2}^-],[\ZZ_{d_3'}],[\DD_{d_3'}^z],[\DD_2],[\DD_2^z]} & \text{if $m$ is even and $4\nmid m$} \\ \set{[\1],[\ZZ_2],[\ZZ_2^-],[\ZZ_{d_3'}],[\DD_{d_3'}^z]} & \text{if $m$ is odd} \end{cases}.
\end{align*}
\end{lem}
\begin{proof}
For any $g\in \SO(3)$, let us write $g\DD_m g^{-1}=\DD_m^{\uu,\vv}$ with $\uu$, $\vv$ two non--zero orthogonal vectors (see~\eqref{eq:AxesZnDn}). From lemma~\ref{lem:Inter_Type_I_et_III}, we then have to consider the intersection
\begin{equation*}
\left(\tetra\cap \DD_{m}^{\uu,\vv}\right)\cup \left(-(\gamma\tetra\cap \DD_{m}^{\uu,\vv})\right),
\end{equation*}
and eight non--trivial cases can occur:
\begin{itemize}
\item $\uu=\pm\ee_k$, say $\pm\ee_3$, and $\vv=\pm\ee_j$ with $j\neq k$, say $\vv=\pm\ee_1$. If $4\mid m$, then the intersection reduces to $\DD_4^d$, while if $m$ is even and $4\nmid m$ we obtain $\DD_2$, and we get $\1$ otherwise.
\item $\uu=\pm\ee_3$ (for instance) and $\vv=\pm \pmb{a}_{c_k}$, say $\pm\pmb{a}_{c_1}$. Once again, if $4\mid m$, then the intersection reduces to $\DD_4^d$, but if $m$ is even and $4\nmid m$ we obtain a subgroup conjugate to $\DD_2^z$; otherwise we get $\ZZ_2^-$.
\item $\uu=\pm\ee_3$ only, in which case we obtain a subgroup conjugate to $\ZZ_4^-$ if $4\mid m$, $\ZZ_2$ if $m$ is even and $4\nmid m$, and $\1$ otherwise.
\item $\uu=\pm\pmb{a}_{c_1}$ (for instance) and $\vv=\pm \ee_3$, leading to a subgroup conjugate to $\DD_2^z$ if $m$ is even, or $\ZZ_2$ if $m$ is odd.
\item $\vv=\pm\ee_3$ only, so we obtain a subgroup conjugate to $\ZZ_2$.
\item $\vv=\pmb{a}_{c_1}$ only, and we obtain a subgroup conjugate to $\ZZ_2^-$.
\item $\uu=\pm\bs_{t_1}$ (for instance) and $\vv=\pmb{a}_{c_2}$, so we get a subgroup conjugate to $\DD_3^z$ if $3\mid m$; otherwise we get a subgroup conjugate to $\ZZ_2^-$.
\item $\uu=\bs_{t_1}$ only, and we finally get $\ZZ_{d'_3}$.
\end{itemize}
\end{proof}
For the subgroups $\octa,\ \tetra$ and $\ico$, we use, as before, the decompositions~\eqref{eq:Decomposition_tetra1}--\eqref{eq:Decomposition_Cube1}--\eqref{eq:Decomposition_Ico}.
\begin{lem}
We have
\begin{align*}
[\octa^-] \circledcirc [\octa \oplus \ZZ_2^c]&=\set{[\1],[\ZZ_2],[\ZZ_2^-],[\ZZ_3],[\ZZ_4^-],[\DD_2^z],[\DD_3^z],[\DD_4^d],[\octa^-]}. \\
[\octa^-] \circledcirc [\tetra \oplus \ZZ_2^c]&=\set{[\1],[\ZZ_2],[\ZZ_2^-],[\DD_2],[\DD_2^z],[\ZZ_3],[\tetra]}. \\
[\octa^-] \circledcirc [\ico \oplus \ZZ_2^c]&=\set{[\1],[\ZZ_2],[\ZZ_2^-],[\ZZ_3],[\DD_2],[\DD_2^z],[\DD_3^z],[\tetra]}.
\end{align*}
\end{lem}
\begin{lem}\label{lem:octaclipso(2)-}
We have
\begin{align*}
[\octa^-] \circledcirc [\SO(2) \oplus \ZZ_2^c]&=\set{[\1],[\ZZ_3],[\ZZ_2^-],[\ZZ_4^-]}. \\
[\octa^-] \circledcirc [\OO(2) \oplus \ZZ_2^c]&=\set{[\1],[\ZZ_2^-],[\DD_3^z],[\DD_4^d]}.
\end{align*}
\end{lem}
\begin{proof}
The first clips operation is obtained in the same way as the one given by lemma~\ref{lem:octaclipsZm}.
We now have to consider the intersection
\begin{equation*}
\left( \tetra\cap (g\OO(2) g^{-1}) \right)\cup \left(- ((\gamma\tetra)\cap (g\OO(2) g^{-1})) \right),\quad \gamma=\vR\left(\ee_3,\frac{\pi}{2}\right),
\end{equation*}
where $g\OO(2) g^{-1}$ is given by~\eqref{eq:Groupe_Conj_O2}, $\tetra$ by \eqref{eq:Decomposition_tetra}, and $\gamma\tetra$ by
\begin{equation*}
\gamma\tetra=\set{\vR(\pmb{a}_{c_k},\pi),\vR\left(\ee_i,\frac{\pi}{2}\right),\vR\left(\ee_i,\frac{3\pi}{2}\right)}.
\end{equation*}
Three non--trivial cases can occur:
\begin{itemize}
\item $\uu=\pm \bs_{t_i}$, and we obtain $\DD_3^z$.
\item $\uu=\pm \ee_i$, say $\uu=\ee_3$; then we obtain $\DD_4^d$.
\item $\uu=\pm \pmb{a}_{c_k}$ for some $k$; then we obtain $\ZZ_2^-$.
\end{itemize}
\end{proof}
\subsection{Clips with $\OO(2)^-$}
We construct $\OO(2)^-$ as follows:
\begin{align*}
\OO(2)^- & =\SO(2)\cup -(\gamma \SO(2)), \quad \text{where } \gamma=\vR(\ee_1,\pi) \\
& =\set{\vR(\ee_3,\theta),\ \theta \in [0,2\pi],\ -\vR(\bb,\pi)},
\end{align*}
where $\bb$ runs over the unit vectors of the $xy$-plane, so that the elements $-\vR(\bb,\pi)$ are the reflections through the planes containing $\ee_3$. The argumentation for the calculation of the clips with $\OO(2)^-$ is very similar to that for $\DD_n^z$ presented in \autoref{sec:withDnz}.
\begin{lem}
Let $m\geq 2$ be an integer and $d_2'=\gcd(2,m)$. We have
\begin{equation*}
[\OO(2)^-]\circledcirc [\ZZ_m \oplus \ZZ_2^c]= \set{ [\1],[\ZZ_m],[\ZZ_{d_2'}^-] }
\end{equation*}
\end{lem}
\begin{proof}
We get the result by considering the following intersection (from lemma \ref{lem:Inter_Type_I_et_III}):
\begin{equation*}
\OO(2)^-\cap (g\ZZ_m g^{-1} \oplus \ZZ_2^c)=(\SO(2) \cap g \ZZ_m g^{-1})\cup (-(\gamma\SO(2)\cap g\ZZ_mg^{-1})),
\end{equation*}
where $\gamma=\vR(\ee_1,\pi)$ and $\gamma\SO(2)=\set{\vR(\bb,\pi)}$.
\end{proof}
The proof of the following lemma is similar to the proof of lemma \ref{lem:DnvClipsDm}.
\begin{lem}
For any integer $m\geq 2$, we have
\begin{align*}
[\OO(2)^-]\circledcirc [\DD_m\oplus \ZZ_2^c]= \begin{cases} \set{[\1],[\ZZ_m],[\ZZ_2^-],[\DD_2^z],[\DD_m^z]} \quad & \text{if $m$ is even} \\ \set{[\1],[\ZZ_m],[\ZZ_2],[\ZZ_2^-],[\DD_m^z]} \quad & \text{if $m$ is odd} \end{cases} .
\end{align*}
\end{lem}
Using the decompositions~\eqref{eq:Decomposition_tetra1}--\eqref{eq:Decomposition_Cube1}--\eqref{eq:Decomposition_Ico} of the subgroups $\tetra,\ \octa$ and $\ico$, we get the following.
\begin{lem}
We have
\begin{align*}
& [\OO(2)^-]\circledcirc [\tetra\oplus \ZZ_2^c]=\set{[\1],[\ZZ_2],[\ZZ_3],[\ZZ_2^-],[\DD_2^z]}. \\
& [\OO(2)^-]\circledcirc [\octa\oplus \ZZ_2^c]=\set{[\1],[\ZZ_2],[\ZZ_3],[\ZZ_4],[\ZZ_2^-],[\DD_2^z],[\DD_3^z],[\DD_4^z]}. \\
& [\OO(2)^-]\circledcirc [\ico\oplus \ZZ_2^c]=\set{[\1],[\ZZ_2],[\ZZ_3],[\ZZ_5],[\ZZ_2^-],[\DD_2^z],[\DD_3^z],[\DD_5^z]}.
\end{align*}
\end{lem}
Finally, we deduce the clips with $\SO(2)\oplus \ZZ_2^c$ and $\OO(2)\oplus \ZZ_2^c$ in the same way as in lemma \ref{lem:DnzwithO(2)}.
\begin{lem}
We have
\begin{align*}
& [\OO(2)^-]\circledcirc [\SO(2)\oplus \ZZ_2^c]=\set{[\1],[\ZZ_2^-],[\SO(2)]}. \\
& [\OO(2)^-]\circledcirc [\OO(2)\oplus \ZZ_2^c]=\set{[\1],[\DD_2^z],[\OO(2)^-]}.
\end{align*}
\end{lem}
\section{Application to Piezoelectricity}
\label{sec:piezoelectricity}
We propose here to apply the clips operation to the specific case of the \emph{Piezoelectricity law}. We introduce the space of constitutive tensors occurring in the mechanical description of Piezoelectricity, which describes the electrical behavior of a material subject to mechanical stress.
It is defined by a triplet of tensors: an elasticity tensor, a Piezoelectricity tensor and a permittivity tensor. Such a space is naturally endowed with an $\OO(3)$ representation, and the finite set of isotropy classes is obtained in theorem~\ref{lem:J(Piez)} below. We now recall the Piezoelectricity law; details can be found in~\cite{Schouten1951,Landau2013,Roy99,Ieee88}. First, the mechanical state of a material is characterized by two fields of symmetric second order tensors: the stress tensor $\bsigma$ and the strain tensor $\beps$. The relation between these two fields forms the constitutive law that describes the mechanical behavior of a specific material. In linear elasticity, the relation is linear, given by \begin{equation*} \bsigma=\bE:\beps, \end{equation*} which is known as the generalized Hooke's law. Such a fourth order elasticity tensor $\bE$ has the index symmetries \begin{equation*} \bE_{ijkl}=\bE_{jikl}=\bE_{ijlk}=\bE_{klij}, \end{equation*} and we define the associated space of elasticity tensors $\Ela$, which is a 21-dimensional vector space. Similarly to the mechanical state, the electrical state of a material is described by two vector fields: the electric displacement field $\bd$ and the electric field $\be$. These two fields are related, and the relation between them forms the constitutive law that describes the electrical behavior of a material. In the linear case, it is given by \begin{equation*} \bd=\bS.\be, \end{equation*} where the second order symmetric tensor $\bS$ is the \emph{permittivity} tensor. We define $\Sym$ to be the vector space of permittivity tensors, which is of dimension 6. Finally, the Piezoelectricity law is given by the coupled law \begin{equation*} \begin{cases} \bsigma=\bE:\beps-\be.\bP \\ \bd=\bP:\beps+\bS.\be \end{cases} \end{equation*} which involves a third order tensor $\bP$ called the \emph{Piezoelectricity} tensor, satisfying the index symmetry \begin{equation*} \bP_{ijk}=\bP_{ikj}. \end{equation*} The vector space of Piezoelectricity tensors is an 18-dimensional vector space, denoted $\Piez$. As a consequence, the linear electromechanical behavior of any homogeneous material is defined by a triplet $\mathcal{P}$ of constitutive tensors \begin{equation*} \mathcal{P}:=(\bE,\bP,\bS)\in \Ela \oplus \Piez \oplus \Sym, \end{equation*} and we define $\mathcal{P}$iez to be the space of Piezoelectricity constitutive tensors: \begin{equation*} \mathcal{P}\text{iez}=\Ela \oplus \Piez \oplus \Sym. \end{equation*} The natural $\OO(3)$ representation on $(\bE,\bP,\bS)\in \mathcal{P}\text{iez}$ is given in any orthonormal basis by \begin{align}\label{eq:New_Tensors} (\rho(g)\bE)_{ijkl} & :=g_{ip}g_{jq}g_{kr}g_{ls}E_{pqrs},\quad (\rho(g)\bP)_{ijk}:=g_{ip}g_{jq}g_{kr}P_{pqr},\quad (\rho(g)\bS)_{ij}:=g_{ip}g_{jq}S_{pq}, \end{align} where $g\in\OO(3)$. As a consequence of lemma~\ref{lem:direct_sum}, the isotropy classes $\J(\mathcal{P}\text{iez})$ can be deduced from the isotropy classes of $\Ela$, $\Piez$ and $\Sym$: \begin{equation*} \J(\mathcal{P}\text{iez})=\left(\J(\Ela)\circledcirc \J(\Piez)\right) \circledcirc \J(\Sym). \end{equation*} Recall from~\cite{Olive2021} the isotropy classes of $\Piez$ (the notations and definitions of $\OO(3)$-subgroups have been moved to \autoref{sec:O3-subgroups}) \begin{multline*} \J(\Piez)=\left\{[\1],[\ZZ_2],[\ZZ_3],[\DD_2^z],[\DD_3^z],[\ZZ_2^{-}],[\ZZ_4^-],[\DD_2],[\DD_3],[\DD_4^d],[\DD_6^d],\right.\\ \left.[\SO(2)],[\OO(2)],[\OO(2)^-],[\octa^-],[\OO(3)]\right\}.
\end{multline*} The symmetry classes for the $\OO(3)$ representations on $\Ela$ and $\Sym$ are the same as the symmetry classes for the corresponding $\SO(3)$ representations, which can be found in \cite{forte1997symmetry} and \cite{OKDD2018a}, except that each type I subgroup occurring in the list of isotropy classes has to be replaced by the corresponding type II subgroup (see \autoref{sec:O3-subgroups}). Indeed, $-\id$ acts trivially on $\Ela$ and $\Sym$, and we have \begin{equation*} \J(\Ela)=\set{[\1],[\ZZ_2\oplus \ZZ_2^c],[\DD_2\oplus \ZZ_2^c],[\DD_3\oplus \ZZ_2^c],[\DD_4\oplus \ZZ_2^c],[\octa\oplus \ZZ_2^c],[\OO(2)\oplus \ZZ_2^c],[\SO(3)\oplus \ZZ_2^c]}, \end{equation*} and \begin{equation*} \J(\Sym)=\set{[\DD_2\oplus \ZZ_2^c],[\OO(2)\oplus\ZZ_2^c],[\SO(3)\oplus \ZZ_2^c]}. \end{equation*} We deduce the isotropy classes of the Piezoelectricity law $\mathcal{P}\text{iez}=\Ela \oplus \Piez \oplus \Sym$ from lemma \ref{lem:direct_sum} by calculating the clips operations between the isotropy classes of $\Ela$, $\Piez$ and $\Sym$ (see \autoref{tab:Clips} and \autoref{tab:Clips2} for clips between type II and type III $\OO(3)$-subgroups, \cite[table 1]{Olive2019} for clips between two type I subgroups, and remark~\ref{rem:2typeII} for clips of a type I with a type II subgroup and of two type II subgroups). \begin{thm}\label{lem:J(Piez)} There exist 25 isotropy classes for the Piezoelectricity law $\Ela \oplus \Piez \oplus \Sym$, given by \begin{multline*} \J(\mathcal{P}\text{iez})=\left\{[\1],[\ZZ_2],[\ZZ_3],[\ZZ_4],[\DD_2],[\DD_3],[\DD_4],[\SO(2)],[\OO(2)],[\OO(3)],\right. \\ \left.[\ZZ_2\oplus \ZZ_2^c],[\DD_2\oplus \ZZ_2^c],[\DD_3\oplus \ZZ_2^c], [\DD_4\oplus \ZZ_2^c],[\octa\oplus \ZZ_2^c],[\OO(2)\oplus \ZZ_2^c], \right. \\ \left.[\ZZ_2^-],[\ZZ_4^-],[\DD_2^z], [\DD_3^z],[\DD_4^z],[\DD_2^d],[\DD_4^d],[\DD_6^d],[\octa^-],[\OO(2)^-]\right\}. \end{multline*} \end{thm}
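As a quick sanity check on the dimensions quoted above ($\dim\Ela=21$, $\dim\Piez=18$, $\dim\Sym=6$, so that $\mathcal{P}\text{iez}$ has dimension 45), the following short Python script (ours, purely illustrative, and independent of the group-theoretic derivation) counts the independent components of each tensor space by enumerating index orbits under the stated index symmetries:
\begin{verbatim}
from itertools import product

def count_independent(rank, generators):
    # Number of index orbits of a tensor on R^3 of the given rank,
    # under the permutation group generated by `generators`
    # (tuples that permute index positions).
    reps = set()
    for idx in product(range(3), repeat=rank):
        orbit, frontier = {idx}, [idx]
        while frontier:  # close the orbit under the generators
            cur = frontier.pop()
            for perm in generators:
                img = tuple(cur[p] for p in perm)
                if img not in orbit:
                    orbit.add(img)
                    frontier.append(img)
        reps.add(min(orbit))  # canonical orbit representative
    return len(reps)

# E_ijkl = E_jikl = E_ijlk = E_klij  ->  21
print(count_independent(4, [(1, 0, 2, 3), (0, 1, 3, 2), (2, 3, 0, 1)]))
# P_ijk = P_ikj                      ->  18
print(count_independent(3, [(0, 2, 1)]))
# S_ij = S_ji                        ->   6
print(count_independent(2, [(1, 0)]))
\end{verbatim}
Running the script prints 21, 18 and 6, consistent with $\dim(\Ela \oplus \Piez \oplus \Sym)=45$.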
\section{Introduction} \label{sec:intro} Indoor spaces play a crucial role in our everyday lives and serve a number of essential purposes. The problem of indoor layout design (apartment layout, workplace layout) has been tackled for a long time across several disciplines, including architectural design~\cite{pile2013history} and ergonomics~\cite{kroemer2017fitting}. That is because the furniture arrangement in a room is inherently connected with the geometry and functionality of the space, but also with other aspects, like usability, aesthetics, cost-effectiveness, or quality. In this paper, we address the problem of data-driven layout synthesis, which recently has again become a research focus in computer graphics due to the advent of the next generation of generative machine learning~\cite{paschalidou2021atiss, para2021generative}. Despite recent progress, interior layout synthesis is still challenging for machine learning methods since indoor scenarios are characterized by high variability, making them very high-dimensional. Consequently, generative algorithms require large amounts of reliable data to learn the probability distributions adequately to synthesize realistic results. Additionally, indoor design requires expert knowledge, like architectural or ergonomic design principles, to ensure the created layouts allow high living quality. At the same time, indoor layout training data is difficult and expensive to obtain. Especially high-quality designs need to be crafted manually by professionals, making the process labor- and time-intensive. Readily available datasets are often not well suited for the demanding training task and often lack aspects of expert knowledge, like ergonomic design principles. The data may have been created by non-experts and necessary design principles can be missing (cf. Figure~\ref{fig:bad_example}). It may further contain errors and geometric flaws, like incorrect overlaps, intersections, or misplaced objects, making it unsuitable or unreliable for further digital processing. We address these problems by using an autoregressive Transformer architecture with additional information ``injected'' into the data-driven training process that is not contained in the data. Transformers are generative models originally proposed for natural language processing that have proven very successful in a wide range of domains~\cite{vaswani2017attention}. Recently, several methods have successfully used transformers for layout generation~\cite{para2021generative,wang2020sceneformer,paschalidou2021atiss}. \begin{figure}[t] \centering \includegraphics[width=0.999\columnwidth]{figures/bad_example_v3} \caption{Examples of potential errors in ground truth data (from the 3DFRONT dataset \cite{fu2021future,fu2021front}). Ergonomic issues (left room): (1) A window directly behind the TV causes glare on sunny days, making it difficult to watch due to the large contrast in brightness. (2) Insufficient illumination for reading a book without a light source behind or beside the bed. Geometric issues (right room): (1) The desk intersects the bed and the closet; (2) the closet covers the door. } \label{fig:bad_example} \end{figure} We use data-driven learning, since a dataset distribution often captures properties of layouts that would be hard to describe with manually designed rules, but at the same time it may contain an undesirable bias or other properties that are unwanted for a given task.
In our approach, a layout $L$ is defined as a set of discrete elements $L \coloneqq \{F_0, \dots, F_N\}$, each represented with a fixed-length parameter vector. A generative model learns to generate new layouts according to a probability distribution $p(L)$ that approximates the probability distribution of the dataset $p(L) \approx p_{\text{data}}(L)$. We propose to encode additional prior knowledge about a layout problem to obtain a learned distribution $p'(L)$ that reflects both the dataset distribution and the prior knowledge. It can be based on expert knowledge and allows biasing the learned probability distribution such that specific properties of layouts are emphasized or diminished. In Section~\ref{sec:scores} we derive a set of ergonomic rules from expert literature~\cite{kroemer2017fitting}, which we convert into differentiable cost functions that can be integrated into the Transformer training. We integrate prior information into the loss function to train a transformer network in two ways: (1) we utilize it as a weighting factor of the input training samples. In other words, if a layout does not match well with the given goal, its contribution to the learning process is diminished. Further (2), we use it to assess the quality of samples proposed during the training process. In the second case, the expert knowledge is defined to be differentiable w.r.t. the predicted probabilities. In this manner, it serves as a prior in the loss function. We discuss the details in Section~\ref{sec:gen}. In Section~\ref{sec:results} we evaluate the proposed method and compare it to a recent data-driven method that does not utilize additional knowledge~\cite{paschalidou2021atiss}. We show that with our approach we can improve the realism of generated room layouts. Finally, we generalize the manual loss to other examples, like overlap minimization to compensate for potential geometric errors contained in the training data. In summary, the contributions of this paper are: \begin{itemize} \item We introduce an ergonomic loss for indoor layout design that improves the ergonomic quality of the layouts. We derive this loss from expert knowledge in ergonomics (Section~\ref{sec:scores}). \item We integrate the manually designed differentiable loss into the training of a Transformer network, which augments the data-driven information and allows controlling the learned probability distribution (Section~\ref{sec:gen}). We also show that this can be generalized to various other (differentiable) functions, like minimizing geometric overlaps. \item We empirically show that we can train a generative model which creates samples that have a realism level similar to the ground truth data but an increased ergonomic quality, and we generalize the introduced manual loss to other functions (Section~\ref{sec:results}). \end{itemize} \section{Related Work}\label{sec:related} Interior spaces and their layouts are part of everyday life. Hence it is not surprising that such layouts are also an important part of multiple virtual domains, ranging from entertainment and architecture to retail. For example, organizations such as Ikea and Wayfair are actively working toward understanding their customers' needs~\cite{ataer2019room}. Typically, each domain has different requirements and needs, which require manual design~\cite{wayfairroomplanner}.
In practice, designing layouts is a laborious task due to the high-dimensional design space, ranging from selecting relevant furniture pieces to arranging the target space to fit the design goals. To alleviate such manual workflows, researchers have proposed multiple computational methods to assist in layout design. Below we classify previous work based on its approach. \subsection{Deep Learning Methods} With the rise of deep neural networks, numerous works increasingly rely on data to synthesize layouts. Typically, such methods employ neural networks, in which the network learns layout patterns from images, graphs, or other data. The 3d scene data and its modality are an important factor in deep learning~\cite{fu2021front}. Early deep learning work utilizes top-down images of layouts to understand object-object layout relationships~\cite{wang2018deep}. However, images do not naturally contain sufficient detail for the network to synthesize complex human-centered layouts. Graphs have also been proposed as a means to encode spatial layout information. Hence, a scene synthesis problem is transformed into predicting appropriate graph nodes and edges~\cite{wang2019planit,zhou2019scenegraphnet}. While graphs provide finer-grained control for synthesis than images, they do not readily encode ergonomic qualities. In addition to images and graphs, researchers have explored how to use other 3d scene data representations for synthesis. \cite{li2019grains} synthesize scenes by sampling from a vector that represents the spatial structure of a scene. Such a structure encodes a hierarchy of geometrical and co-occurrence relations of layout objects. \cite{zhang2020deep} proposed a hybrid approach that combines such a vector representation with an image-based approach. The authors claim that images are needed to better capture local spatial relations. \cite{yang2021scene} combine such a vector representation with Bayesian optimization to improve furniture placement predictions of the generative network. Most recently, researchers have proposed to use neural networks based on transformers~\cite{wang2020sceneformer,paschalidou2021atiss}. The authors mention that an advantage of transformers is a faster synthesis compared to other deep-learning approaches. However, their work does not account for ergonomic qualities, which results in misplaced furniture items. We demonstrate this point further in Section~\ref{sec:results}. \subsection{Other Approaches} Before the era of deep learning, early work considered layout as a mathematical optimization problem, where a set of constraints describes the layout quality in terms of energy~\cite{yu2011make,merrell2011interactive,weiss2018fast}. The layout is then optimized via a stochastic or deterministic optimization process. Other researchers proposed data-driven methods. In such methods, abstract layout structure and object-object relations are extracted from scene data sets. Qi et al.~\shortcite{qi2018human} use interaction affordance maps for each layout object for stochastic layout synthesis. However, they only take into account static poses and spatial relationships between furniture pieces. Similarly, Fisher et al.~\shortcite{fisher2015activity} used annotated 3d scans of rooms to identify which activities an environment supports. Based on such activity maps they synthesize small object arrangements. Other researchers also learn layout structure from 3d scans for scene synthesis~\cite{kermani2016learning}.
They extract manually defined geometric relationships between objects from such scans, which are then placed using a stochastic optimization. Regrettably, the previously mentioned synthesis work does not readily account for human-centered ergonomic qualities, except for accessibility and spacing around furniture. Recently, researchers have attempted to incorporate human-centered considerations for 3d scene synthesis. Fu et al.~\shortcite{fu2017adaptive} use a graph of objects to guide a layout synthesis process. The authors signal that object-related activities play a role in human-object relations. However, they only consider static human poses in relation to such activities. Zhang et al.~\shortcite{zhang2021joint} and Liang et al.~\shortcite{liang2019functional} focus on optimal work-space design. While the authors demonstrate a novel use of simulation and of dynamically captured agent-in-action metrics, they only focus on mobility- and accessibility-based factors. In \cite{puig2018virthome}, the authors demonstrate how to evaluate the functionality of layouts. However, this work does not include 3d scene synthesis. While 3d scene synthesis work has made impressive progress in understanding how to create new virtual layouts, it is still a challenging problem, since it is difficult to objectively measure the quality of a resulting scene. Our work proposes to directly combine such qualities with recent novel deep learning architectures. \section{Ergonomic Rules}\label{sec:scores} Ergonomics is the scientific discipline concerned with understanding interactions among humans and other elements of a system. Instead of expecting humans to adapt to a design that might turn out to be uncomfortable, ergonomics aims to improve the design such that it suits its function and the needs of its users. In our approach, we study the literature of ergonomic guidelines~\cite{kroemer2017fitting} and derive a set of rules used to quantify the ergonomic quality of a design. To evaluate how a given layout suits the ergonomic rules, we define a set of activities a typical user would perform in the given room. An activity is a set or sequence of actions that need to be performed to accomplish a specific goal \cite{puig2018virthome}. An activity could be, for instance, reading a book or watching TV. Please refer to Table~\ref{tab:activities} for the association of the activities we consider with the set of ergonomic rules we introduce. \begin{table}[b] \centering \small \caption{Associations of rules to activities that can be performed in an environment. Not all activities require all rules to be fulfilled. } \begin{tabular}{ p{6em}| c | c | c | c } & Reach & Visibility & Lighting & Glare \\ \hline Read book & & & yes & yes \\ \hline Watch TV & & yes & & yes \\ \hline Use computer & yes & yes & & yes \\ \hline Work at desk & yes & yes & yes & \\ \end{tabular}% \label{tab:activities}% \end{table}% \subsection{Implemented Rules}\label{sec:scores:scores} We consider the following ergonomic rules, which are expressed as scalar cost functions in the range of $[0,1]$, where a lower value indicates a better score: (1) Reach, (2) Visibility, (3) Lighting, and (4) Glare. Please refer to Figure~\ref{fig:scores} for an illustration. We choose these four rules as examples for two reasons. First, they are all relevant for the kinds of activities that are often performed in the prevalent room types that are included in publicly available indoor layout datasets, e.g.
reading a book in the bedroom, watching TV in the living room or using the computer in the library. The second reason is a practical one, since these rules can be defined as differentiable scalar functions in the range $[0,1]$, which suits our needs perfectly. Additional rules that can be formulated in such a fashion can easily be incorporated into our framework. The overall cost for a \scene\ is computed in a hierarchical manner as a combination of costs for certain activities, which themselves are combinations of individual ergonomic costs. In this section, we first describe the individual ergonomic cost functions for each rule, followed by the activities we use to evaluate the layouts. \subsubsection{Reach}\label{sec:scores:reach} While being seated, a person only has limited mobility, and thus objects that need to be interacted with should be within a distance that is easy to reach without the need to stand up. We can broadly categorize the area around a seated person into $3$ zones. In the inner zone, objects can be reached without much effort, while objects in the outer zone are beyond reach. Objects in the middle zone can still be reached, but require more effort the further away they are. We model this reach loss $\loss{R}$ as a sigmoid function that measures how difficult it is to reach an object at position $q$ from position $p$: \begin{equation} \loss{R} = \frac{1}{1 + \exp \left( -\beta_R \left( \norm{q - p} - d_R \right) \right)} \,. \end{equation} The function is centered at $d_R$ with scaling parameter $\beta_R$. We use $d_R = 0.8$ and $\beta_R = 15$ to model the zones of easy and extended reach. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{figures/ergo_rules_v5} \caption{Ergonomic rules implemented in our system. We chose these guidelines as they are essential in most indoor scenarios, like reading a book, watching TV, or working at the desk or the computer. We convert the rules to scalar cost functions (cf. Section~\ref{sec:scores}) and evaluate them using activities (cf. Table~\ref{tab:activities}). } \label{fig:scores} \end{figure} \subsubsection{Visibility}\label{sec:scores:vis} Visibility cost measures how visible a target object is from the viewpoint of the avatar given by position $p$ and viewing direction $u$. This measure is important for activities like watching TV or using the computer (cf. Table~\ref{tab:activities}), since seating furniture with sub-optimal positions or orientations may require the user to take on unhealthy postures. To introduce this cost as a smooth scalar function $E_V$ that can be minimized, we define the cost to increase with the angle between the two vectors $u$ and $v = \frac{q-p}{\norm{q-p}}$: \begin{equation} E_V = 1 - \left(\frac{ 1 + \langle u, v \rangle }{2} \right)^2 \,. \end{equation} \subsubsection{Lighting}\label{sec:scores:light} Lighting cost measures how well an object is illuminated by light sources in the room. Ideally, when looking at an object, the viewer and the light source should be positioned in the same half-space of the viewed object, as otherwise the object itself would partially obstruct the direct illumination and cause self-shadowing.
A light source $b_i$ is thus well suited for illuminating the object at position $q$ when viewed from position $p$ as long as the position-to-object vector $v = \frac{q-p}{\norm{q-p}}$ and the vector $l_i = \frac{q-b_i}{\norm{q-b_i}}$ pointing from a light source at position $b_i$ to $q$ do not point in opposite directions: \begin{equation*} e^L_i = \left(1 - \frac{ 1 + \langle v, l_i \rangle }{2} \right)^4 \,. \end{equation*} Since multiple light sources can contribute to this cost, we compute their contribution by applying the $\softmin$ function to the vector $e^L = [e^L_i]_{i \in B}$ and using the resulting values as weights for computing the weighted sum: \begin{equation} E_L = \langle e^L, \softmin(\beta \cdot e^L) \rangle , \end{equation} with $\beta$ being a temperature parameter that determines the hardness of the $\softmin$ function. We use $\beta = 10$. Since the computation of indirect illumination is prohibitively expensive, we only consider direct lighting. \subsubsection{Glare}\label{sec:scores:glare} Glare cost $E_G$ measures the decrease in visual performance from strong brightness contrast caused by having bright light sources in the field of view. Given the position-to-object vector $v = \frac{q-p}{\norm{q-p}}$ and the glare vector $g_i = \frac{b_i-p}{\norm{b_i-p}}$ pointing from $p$ to the light source at $b_i$, the cost increases as the angle between the vectors decreases: \begin{equation*} e^G_i = \left(\frac{ 1 + \langle v, g_i \rangle }{2} \right)^4 \,. \end{equation*} Similar to the lighting cost, we compute the weighted sum over multiple light sources, with $e^G = [e^G_i]_{i \in B}$, using the $\softmax$ function for computing the weights: \begin{equation} E_G = \langle e^G, \softmax(\beta \cdot e^G) \rangle \,. \end{equation} For simplicity, we do not consider indirect glare, such as light sources that are reflected by a computer screen. Ceiling lights such as chandeliers are also excluded from this rule since light sources positioned above the field of view have a smaller impact on visual performance \cite{kroemer2017fitting}. \subsection{Activity Evaluation} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{sitting_locations_v3} \caption{Human activity in the room based on the example of \emph{Watch TV}. For all possible sitting locations $p_j$ an avatar is sampled and the ergonomic rules for visibility and glare are evaluated. The final contribution is the weighted sum of costs over every combination of a sitting possibility $p_j$ and all TVs $q_k$. Please refer to Section~\ref{sec:scores} for more details.} \label{fig:activity} \end{figure} We evaluate the ergonomic \score\ of a \scene\ in the context of activities that are typically performed in rooms of a given category. Based on research on this topic \cite{puig2018virthome}, we select $4$ such activities which we label as \emph{Read book}, \emph{Watch TV}, \emph{Use computer} and \emph{Work at desk}. To evaluate an activity, it is necessary to compute the ergonomic costs relevant to that activity (cf. Table \ref{tab:activities}). We furthermore use a logarithmic function to re-scale the ergonomic cost functions to more strongly punish scenes with high costs, for example \begin{equation}\label{eq:scaling} \bar{E}_{R} = -\ln (1 + \epsilon - E_{R}) , \end{equation} with the scaling functions for the other rules defined analogously. We use $\epsilon = \exp(-5)$, so that when $E_{R} = 1$, then $\bar{E}_{R} = 5$.
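To make the rule costs concrete, the following NumPy sketch implements the four cost functions and the logarithmic rescaling as defined above (all function names are ours and purely illustrative, not the published implementation; $p$, $q$, $u$ and the light positions are arrays, with $u$ assumed normalized):
\begin{verbatim}
import numpy as np

def softmin_weights(e, beta=10.0):
    # Weights emphasizing the smallest entries of e, i.e. softmin(beta * e).
    w = np.exp(-beta * np.asarray(e, dtype=float))
    return w / w.sum()

def reach_cost(p, q, d_R=0.8, beta_R=15.0):
    # E_R: sigmoid centered at distance d_R with scaling beta_R.
    return 1.0 / (1.0 + np.exp(-beta_R * (np.linalg.norm(q - p) - d_R)))

def visibility_cost(p, u, q):
    # E_V: grows with the angle between view direction u and p -> q.
    v = (q - p) / np.linalg.norm(q - p)
    return 1.0 - ((1.0 + u @ v) / 2.0) ** 2

def lighting_cost(p, lights, q, beta=10.0):
    # E_L: softmin-weighted combination of per-light terms e^L_i.
    v = (q - p) / np.linalg.norm(q - p)
    e = np.array([(1.0 - (1.0 + v @ ((q - b) / np.linalg.norm(q - b))) / 2.0) ** 4
                  for b in lights])
    return float(e @ softmin_weights(e, beta))

def glare_cost(p, lights, q, beta=10.0):
    # E_G: softmax-weighted combination of per-light terms e^G_i.
    v = (q - p) / np.linalg.norm(q - p)
    e = np.array([((1.0 + v @ ((b - p) / np.linalg.norm(b - p))) / 2.0) ** 4
                  for b in lights])
    w = np.exp(beta * e)
    return float(e @ (w / w.sum()))

def rescale(E, eps=np.exp(-5)):
    # Logarithmic rescaling \bar{E} = -ln(1 + eps - E); maps E = 1 to 5.
    return -np.log(1.0 + eps - E)
\end{verbatim}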
For the activity \emph{Read book}, proper illumination conditions are the most important factor, so we need to apply the rules for lighting and glare. Given the position $p_j$ of seating furniture (like beds, chairs, or sofas), an associated object position $q_j$ (a book close to $p_j$) and light sources $B$, we define \begin{equation*} e^{book}_j = \frac{\bar{E}_L \left(p_j,B,q_j \right) + \bar{E}_G \left(p_j,B,q_j \right)}{2} \,. \end{equation*} Since we do not require all possible positions to have a good score for every activity, we once again use the $\softmin$ function to compute a weighted sum of costs for the \scene. That way, if there is only one position that is suitable for an activity, it will be the only one with a large contribution to the \scene\ cost, while multiple suitable positions will contribute equally. For a set of positions $p_j \in P$ we therefore have \begin{equation} E_{book} = \langle e^{book}, \softmin(\beta \cdot e^{book}) \rangle , \end{equation} with $e^{book} = [e^{book}_j]_{j \in P}$ and using $\beta = 10$. The other activities are defined similarly. For \emph{Watch TV}, we require the TV to be visible from a piece of seating furniture, and there should not be a light source in the field of view. We therefore compute the visibility and glare costs for positions $p_j$ with orientation $u_j$ (for chairs, beds, sofas) and TVs with position $q_k$: \begin{equation*} e^{tv}_{j,k} = \frac{\bar{E}_V \left(p_j,u_j,q_k \right) + \bar{E}_G \left(p_j,B,q_k \right)}{2} \,. \end{equation*} Since there can be multiple TVs in a room in addition to multiple pieces of seating furniture, we need to compute the weighted sum of costs over every combination of $p_j$ and $q_k$, using $e^{tv} = [e^{tv}_{j,k}]_{j \in P,k \in Q}$: \begin{equation} E_{tv} = \langle e^{tv}, \softmin(\beta \cdot e^{tv}) \rangle \,. \end{equation} The same rules are required for the activity \emph{Use computer}, in addition to the reach rule, since the seating furniture and the computer should be in close proximity. We do not evaluate the lighting rule because the direction from which the light illuminates the computer is not as important, since the computer screen is already illuminated. Using $q_k$ to denote the positions of computers, we define \begin{equation*} e^{comp}_{j,k} = \frac{\bar{E}_V \left(p_j,u_j,q_k \right) + \bar{E}_G \left(p_j,B,q_k \right) + \bar{E}_R \left(p_j,q_k \right)}{3} \,. \end{equation*} Finally, for the activity \emph{Work at desk} we apply the rules visibility, lighting and reach. Since the viewing angle is mostly directed downward toward the desk during this activity, it is not necessary to consider direct glare caused by light sources in the room. Given table positions $q_k$ and light sources $B$ we compute \begin{equation*} e^{work}_{j,k} = \frac{\bar{E}_V \left(p_j,u_j,q_k \right) + \bar{E}_L \left(p_j,B,q_k \right) + \bar{E}_R \left(p_j,q_k \right)}{3} . \end{equation*} In order to compute the overall \score\ $E$ for a \scene\ we take the average of all activity costs that are possible in the \scene\ (e.g. if there is no computer in the scene, we do not evaluate the cost for \emph{Use computer}): \begin{equation*} E = \frac{\sum_a \delta_{a} E_{a}}{\sum_a \delta_{a}} , \end{equation*} with $a \in \braces{book,\ tv,\ comp,\ work}$ and $\delta_{a} = 1$ if the corresponding activity can be performed in the \scene\ and $\delta_{a} = 0$ otherwise.
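Continuing the sketch above, the per-activity softmin aggregation and the overall scene score $E$ could look as follows (names are again ours and purely illustrative):
\begin{verbatim}
import numpy as np

def activity_cost(pair_costs, beta=10.0):
    # Softmin-weighted sum over all (seat, target) combinations, so a
    # single well-suited seat already yields a good activity score.
    e = np.asarray(pair_costs, dtype=float).ravel()
    w = np.exp(-beta * e)
    return float(e @ (w / w.sum()))

def scene_score(activity_costs):
    # Average over the activities possible in the scene; an entry is
    # None when the required objects (TV, computer, ...) are absent.
    present = [c for c in activity_costs.values() if c is not None]
    return sum(present) / len(present) if present else 0.0

# Example: E = scene_score({"book": 0.31, "tv": 0.12,
#                           "comp": None, "work": 0.54})
\end{verbatim}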
\section{\Scene\ Generation with Expert Knowledge} \label{sec:gen} A loss designed by an expert, such as the ergonomic cost, defines desirable properties of \scene s that may not be fully realized in a dataset. However, while minimizing the expert loss may be necessary to obtain a desirable \scene, it is usually not sufficient, since a manually defined loss can usually not describe \emph{all} desirable properties of a \scene\ exhaustively. Thus, our goal is to combine the expert loss with a data-driven generative model for \scene s. We use Transformers~\cite{vaswani2017attention} as a generative model, which are currently the state-of-the-art for \scene\ generation~\cite{para2021generative, wang2020sceneformer, paschalidou2021atiss}. We first present our Transformer-based generative model and then describe how we integrate our ergonomic cost into our training setup. \subsection{Layout Representation} Transformers are sequence generators that originate from natural language processing. A \scene\ is generated step-wise as a sequence of discrete tokens $S=(s_1, \dots, s_n)$, one token $s_i$ at a time. Thus, we first need to define a sequence representation of our \scene s. \begin{figure} \centering \includegraphics[width=0.488\textwidth,trim={0 23 0 0},clip]{images/transformer/sequences_v4} \caption{A \scene\ is represented as a sequence $S=(s_1, \dots, s_n)$. Each individual token $s_i$ in the sequence represents an attribute of a \furnobj, such as its category, orientation, dimensions or position. } \label{fig:sequences} \end{figure} \subsubsection{Sequence representation} Each \furnobj\ is represented as a 6-tuple $\furn{i} = (\cat{i},\ori{i},\width{i},\depth{i},\xpos{i},\ypos{i})$, with $\cat{i}$ indicating the object category, such as \emph{chair} or \emph{table}, $\ori{i}$ the orientation, $\width{i}$ the width, $\depth{i}$ the depth, and $\xpos{i}$ and $\ypos{i}$ being the x- and y-coordinates of the bottom left corner of the \furnobj\ (cf.~Figure~\ref{fig:sequences}). The \furnobj s in each scene are ordered based on the object category, with categories that tend to have larger objects placed before categories of smaller objects. If two objects have the same category, their order is arbitrary. Previous work~\cite{paschalidou2021atiss} has shown that randomizing the order of objects that do not admit a consistent ordering can be beneficial. The shape of the room itself is represented as the \furnobj\ $\furn{0}$ and is always at the beginning of the sequence. We concatenate the 6-tuples of the ordered \furnobj s and add a special stop token to the end of the sequence to obtain the sequence $S$. Similar to previous work~\cite{wang2020sceneformer}, we use two additional parallel sequences to provide context for each token in $S$: a position sequence $S^P = (1, 2, \dots, n)$ that provides the global position in the sequence, and an index sequence $S^I = (1, 2, \dots, 6, 1, 2, \dots, 6, \dots)$ that describes the index of a token inside the 6-tuple of a \furnobj. An example of these sequences can be seen in Figure \ref{fig:sequences}. \subsubsection{Quantization} Transformers typically operate with discrete token values. By learning to predict a probability for each possible value of a token, a transformer can model arbitrary distributions over token values. To obtain discrete values, we quantize all object parameters except orientations $o_i$ and categories $c_i$ uniformly between the minimum and maximum extent of the room along the longest axis, plus a margin of one quantization level on either side (i.e.
below the minimum and above the maximum extent) to allow for windows and doors to extend slightly beyond the bounds of the room. Orientations $o_i$ are uniformly quantized in $[0, 2\pi)$, adjusting the resolution to preserve axis-aligned orientations as integer values. We use a resolution of $r = 256$. Categories $c_i$ do not require quantization as they are already integers. For more details, please refer to Appendix \ref{app:quantization}. \subsubsection{Sequence generation} Our Transformer-based sequence generator $f_\theta$ factors the probability distribution over sequences $S$ into a product of conditional probabilities over individual tokens: \begin{equation*} p(S| \theta) = \prod_i p(s_i | s_{<i}, \theta), \end{equation*} where $s_{<i} \coloneqq s_1, \dots, s_{i-1}$ is the partial sequence up to (excluding) $i$. Given a partial sequence $s_{<i}$, our model predicts the probability distribution over all possible discrete values for the next token, $p(s_i | s_{<i}, \theta) = f_\theta(s_{<i}, s_{<i}^{P}, s^I_{<i})$, which can be sampled to obtain the next token $s_i$. Here $s_{<i}^{P}$ and $s^I_{<i}$ are the corresponding partial position and index sequences that are fully defined by the index $i$. We implement $f_\theta$ as a GPT-2 model~\cite{radford2019language} using the implementation included in the Huggingface library \cite{wolf2020transformers}. \subsection{Ergonomic Loss} We face two main challenges when trying to use the ergonomic cost as a loss for a Transformer-based generative model. First, the ergonomic cost is defined on an entire \scene, while the generative model receives a loss for each generated token. Second, the ergonomic cost is defined over continuous \furnobj\ parameters, while the generative model outputs a distribution over discrete values, making gradient propagation from the ergonomic cost to the generative model difficult. To tackle the first challenge, we observe that transformers are typically trained with a strategy called \emph{teacher forcing}, where the partial sequence $s_{<i}$ preceding the current token $s_i$ is taken from a ground truth \scene. Thus, when generating a token $s_i$, we can evaluate the ergonomic cost on the \scene\ defined by $s_{<i}, s_i, s_{>i}$, where only $s_i$ is generated and both the preceding tokens $s_{<i}$ and the following tokens $s_{>i}$ are taken from the ground truth, effectively evaluating $s_i$ in the context of the ground truth \scene. \begin{figure} \centering \includegraphics[width=0.46\textwidth]{interpolation_v5} \caption{To propagate the ergonomic loss back to the token probabilities, we take the discrete token value with the highest predicted probability and weight its neighborhood with a Gaussian kernel centered at this value. } \label{fig:interpolation} \end{figure} To solve the second challenge, we need an ergonomic loss that is differentiable w.r.t. the probabilities $p(s_i | s_{<i}, \theta)$ predicted by our generative model. A straightforward solution computes the expected value of the ergonomic cost $E$ over all possible values $v_j$ of a token, $\sum_j E(s_{<i},\ v_j,\ s_{>i})\, P(s_i=v_j | s_{<i}, \theta)$. This solution is differentiable w.r.t. the probabilities, but requires an evaluation of the ergonomic cost for each possible value of a token, which is prohibitively expensive. Instead, we opt for a less exact but much more efficient approach, where only a single evaluation of the ergonomic cost per token is needed.
We compute the ergonomic loss $\mathcal{L}_E$ as the ergonomic cost for the expected value of a token in a small window around the most likely value of the token: \begin{gather} \mathcal{L}_E = E(s_{<i},\ \bar{v},\ s_{>i}), \text{ with}\\ \bar{v} = \frac{\sum_{j} \left( \mathcal{N}(v_j | \hat{v}, \sigma)\ P(s_i=v_j | s_{<i}, \theta)\ v_j \right)}{\sum_{j} \left( \mathcal{N}(v_j | \hat{v}, \sigma)\ P(s_i=v_j | s_{<i}, \theta) \right)}, \nonumber \end{gather} where $\mathcal{N}(x|\hat{v}, \sigma)$ is the normal distribution centered at $\hat{v}$ with standard deviation $\sigma$. $\hat{v}$ is the token value with the highest probability, and $\sigma$ is set to $1/32$ of the full value range in our experiments. Figure \ref{fig:interpolation} illustrates the approach. This loss provides gradients for all token values in a smooth window around $\hat{v}$. Note that increasing the size of the window by increasing $\sigma$ would propagate the gradient to a larger range of token values, but could also result in expected token values $\bar{v}$ that are in low-probability regions of the distribution $p(s_i | s_{<i}, \theta)$, since the distribution may be multi-modal. The total loss function $\mathcal{L}$ is then given by \begin{equation}\label{eq:total_loss} \begin{aligned} \mathcal{L} \parent{\sample{k}} = \beta_T \mathcal{L}_T \parent{\sample{k}} + \beta_E \mathcal{L}_E \parent{\sample{k}}, \end{aligned} \end{equation} with $\mathcal{L}_T$ being the cross-entropy loss, $\mathcal{L}_E$ being our proposed ergonomic loss and $\beta_T, \beta_E$ being weights that determine the influence of the two loss terms on the overall loss. We use $\beta_T = 1-E\parent{\sample{k}}$ and $\beta_E = E\parent{\sample{k}}$, such that the cross-entropy loss has a higher influence for training samples with a better ergonomic \score, while the ergonomic loss is more important for samples with a lower ergonomic \score. Essentially, we want the network to learn about the general target distribution from examples that are already considered good, while learning how to improve the ergonomic \score\ from bad examples. Please note that we do not apply the scaling function defined in Eq. \ref{eq:scaling} when computing the ergonomic \score\ for the weights, so that they remain in the range of $[0,1]$. In Section \ref{sec:res:ablation}, we discuss the influence of the weights $\beta_T$ and $\beta_E$ in more detail. \subsection{Training and Inference} \subsubsection{Training.} We train our models using the 3DFRONT dataset \cite{fu2021future,fu2021front} as training data. Since some room types in the dataset only contain few samples, we make use of a transfer learning strategy. We first train a base model on training samples of all room types using a learning rate of $0.00005$. This base model is then fine-tuned for each room type using a learning rate of $0.00002$ for the Bedrooms dataset and $0.00001$ for the other room types to prevent overfitting to the smaller datasets. The effect of this strategy is discussed in Section \ref{sec:res:transfer_learning}. As hyperparameters for our networks we use $12$ hidden layers, $8$ attention heads, an embedding dimensionality of $256$, a dropout probability of $0.1$ and a batch size of $32$. Each network is trained for $10$ epochs, with the number of training samples per epoch being $8$ times the number of samples in the training set, so that each augmented variation of a \scene\ is seen once per epoch (cf. Section \ref{sub:results_dataset} for details).
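As a brief aside before the remaining training details: the windowed expectation $\bar{v}$ and the weighted total loss defined above can be sketched as follows (NumPy notation with our own names; an actual implementation would use a differentiable framework such as PyTorch so that gradients reach the predicted probabilities):
\begin{verbatim}
import numpy as np

def smoothed_token_value(probs, values, sigma):
    # \bar{v}: expectation of the token value under the predicted
    # distribution, restricted by a Gaussian window centered on the
    # most likely value \hat{v}.
    probs = np.asarray(probs, dtype=float)
    values = np.asarray(values, dtype=float)
    v_hat = values[np.argmax(probs)]
    w = np.exp(-0.5 * ((values - v_hat) / sigma) ** 2) * probs
    return float((w * values).sum() / w.sum())

def total_loss(ce_loss, ergo_loss, ergo_score):
    # Eq. (total_loss) with the V3 weighting beta_T = 1 - E and
    # beta_E = E, where ergo_score is the unscaled scene score in [0, 1].
    return (1.0 - ergo_score) * ce_loss + ergo_score * ergo_loss
\end{verbatim}
With the value range quantized to $r=256$ levels, setting $\sigma$ to $1/32$ of the range corresponds to $8$ quantization steps.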
For the learning rate, we use a linear rate of decay and a warm-up period of $1$ epoch. These parameters were determined empirically in preliminary experiments. For \scene\ synthesis, we always choose the learned network parameters of the epoch with the smallest validation loss during training. Our networks are trained on Google Colab, using a machine with an NVIDIA Tesla P100 GPU. When only using the cross-entropy loss, training for one epoch takes $115$ seconds on average. Adding our ergonomic loss increases training times to $578$ seconds per epoch on average, since we cannot make use of parallelization for \scene\ evaluation as easily. There is room for further optimization in this respect. \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{images/plots/bedrooms_ergo} \caption{Training loss, validation loss and ergonomic loss for each version of our network used in the ablation study, evaluated on the Bedrooms dataset. Networks V2 and V3, which include our proposed ergonomic loss term, significantly decrease the ergonomic \score\ of \scenes\ during training.} \label{fig:plot_bedrooms} \end{figure*} \subsubsection{Inference.} During inference, we follow a similar approach to the strategy proposed by Sceneformer \cite{wang2020sceneformer}, using top-p nucleus sampling with $p=0.9$ for the object categories, as well as the attributes of the room, doors and windows. For the attributes of other object categories, we always pick the token with the highest probability. The \scenes\ synthesized by the transformer network often include intersecting objects, which greatly disturb the perceived realism of a \scene. We therefore follow the approach of similar methods like Sceneformer and check for object intersections during inference. After the attributes of a \furnobj\ have been generated, we check if the object can be inserted into the scene without causing large intersections. If this is not the case, we resample the category and other attributes of the current object. If this resampling approach fails too often (we choose a limit of $20$ attempts experimentally), we discard the entire \scene\ and start anew. Certain pairs of object categories are excluded from this check, e.g. chairs can be put underneath a table and thus do not cause collisions. In terms of computation time, the intersection-detection process is the bottleneck of inference. If we do check for intersections during inference, it takes $1653$ seconds for our models to synthesize $1000$ \scene\ sequences, i.e., $1.653$ seconds per \scene\ on average. If we do not perform intersection-checks between objects, we can make use of parallelization to greatly reduce inference time. In such a setup, our networks can synthesize $1000$ \scene\ sequences in $27$ seconds, i.e., $0.027$ seconds per \scene\ on average. \subsubsection{Scene reconstruction.} Since our networks only generate the 2d bounding boxes of \furnobjs, we use an additional post-processing step to reconstruct a 3d scene from the generated \scene. For each \furnobj, we select the 3d model of the same category with the smallest difference in bounding box dimensions from the models in the 3DFRONT dataset~\cite{fu2021future,fu2021front}. For categories not included in the dataset, such as doors and windows, we handpick a few suitable models from online sources~\cite{turbosquid}. As a final step, the vertical position of each object is adjusted based on its category. The positions of some categories, like windows and chandeliers, are set to a fixed height.
We label some categories as supporting objects (like tables and stands) and others as supported objects (like indoor lamps and TVs). If there is an intersection between a supporting and a supported object, the vertical position of the supported object is adjusted to be placed on top of the supporting object. \section{Results and Evaluation} \label{sec:results} \subsection{Dataset} \label{sub:results_dataset} We use the 3DFRONT dataset \cite{fu2021future,fu2021front} to evaluate our proposed approach. In a pre-processing step, we parse the data to extract rooms belonging to the categories Bedroom, Dining Room, Living Room and Library. For this purpose we use the filter criteria provided by ATISS \cite{paschalidou2021atiss}, consisting of a list of rooms for each category, as well as a split into training, testing and validation data. We use the rooms marked as \emphasis{train} for our training sets and combine those marked as \emphasis{test} and \emphasis{val} for our validation sets. Since we opt to only use rectangular rooms, we filter out rooms with more complex shapes. For the Bedrooms dataset, this results in $4040$ rooms for the training set and $324$ rooms for the validation set. For most \furnobjs, their attributes such as the category and the transformation of the corresponding 3d model data can be directly extracted from the room data. Since separate 3d models for doors and windows are not provided with the dataset, we extract their positions and bounding box dimensions from the mesh data with corresponding labels. Since doors are only provided per house and not attached to individual rooms, we include a door with the \furnobjs\ of a room if its distance to the closest wall of the room is lower than a chosen threshold and its orientation is aligned with that of the wall. Additionally, we group some of the object categories in the dataset that are very similar to each other, while filtering out some others that occur only in very few rooms, for a total of $31$ categories that we use across all room types. Since the dataset typically lacks object categories that are necessary to properly evaluate the ergonomic \score\ of a \scene, we augment the dataset with additional objects in the following way. For each \scene, there is a $50\%$ chance to place a \furnobj\ of the indoor lamp category in the center of every stand and side-table object. In the same manner, a computer object is placed at the center of each desk object in a \scene\ with a probability of $50\%$. Finally, every TV stand object is augmented with a TV object. \subsection{Ablation}\label{sec:res:ablation} To evaluate the influence of our proposed ergonomic loss, we define $4$ versions of our network that are trained with different loss functions. Recall that the total loss function given in Eq. \ref{eq:total_loss} is defined as the weighted sum of the cross-entropy loss $\mathcal{L}_T$ and the ergonomic loss $\mathcal{L}_E$ with weights $\beta_T, \beta_E$. Using these weight parameters, we define the following $4$ versions of our network: \begin{itemize} \item V0, with $\beta_T = 1$ and $\beta_E = 0$, \item V1, with $\beta_T = 1-E\parent{\sample{k}}$ and $\beta_E = 0$, \item V2, with $\beta_T = 1$ and $\beta_E = 1$, \item V3, with $\beta_T = 1-E\parent{\sample{k}}$ and $\beta_E = E\parent{\sample{k}}$.
\end{itemize} In other words, V0 only uses the cross-entropy loss with each input sample having equal weight, and V1 uses the cross-entropy loss with each sample being weighted by its ergonomic \score. V2 uses the sum of the cross-entropy loss and the ergonomic loss, and V3 uses a weighted sum of both, with the weights determined by the ergonomic \score\ of each sample. Figure \ref{fig:plot_bedrooms} depicts the cross-entropy loss and ergonomic loss evaluated on both the training and validation sets for each version, using the Bedroom dataset for training. The results show a significant decrease in ergonomic loss for both V2 and V3, which make use of our ergonomic loss term during training. While both versions perform similarly in terms of ergonomic loss, the validation loss of V2 is much higher than that of the other versions, suggesting that V2 performs worse at learning the target distribution, which can decrease the perceived realism of synthesized scenes. V1 only yields a small decrease of the ergonomic loss during training, since weighting the training samples by their ergonomic \score\ only reduces the influence of bad training samples without teaching the network how to improve them. However, this still has a noticeable effect on the synthesized scenes, as we will discuss in Section \ref{sub:user_study}. \subsection{Room-conditioned Layout Synthesis}\label{sub:user_study} \begin{figure}[t] \centering \includegraphics[width=0.98\columnwidth]{images/plots/user_study_ergo_realism_gt.pdf} \caption{Room-conditioned \scene\ synthesis. We synthesize $20$ \scene\ variations for each floor plan in the Bedrooms validation set and evaluate the ergonomic \score. The left chart shows the mean ergonomic loss of the synthesized \scenes, with the $80\%$ confidence interval of the mean shown in black. The realism of the synthesized \scenes\ is evaluated in a user study. The right chart shows how the \scenes\ synthesized using each method are perceived compared to the ground truth, with a negative value meaning that the ground truth is seen as more realistic. Our proposed approach (V3) improves the ergonomic \score\ of the scenes, while still being perceived as similarly realistic as the ground truth. } \label{fig:user_study_ergo} \end{figure} We use the $4$ versions of our network introduced in the previous section for \scene\ synthesis and evaluate the results in terms of both realism and ergonomic loss. In order to evaluate the realism of our generated results, we perform a perceptual study in which we ask participants to compare pairs of Bedroom \scenes\ with the question of which \scene\ is more realistic. We compare \scenes\ from $6$ sources in this study: the ground truth \scenes\ from the 3DFRONT dataset \cite{fu2021future,fu2021front}, \scenes\ generated with ATISS \cite{paschalidou2021atiss}, which we train using the code provided on their website, as well as the $4$ versions of our proposed method, which we label V0, V1, V2 and V3. To allow for a direct comparison, we use the attributes of the room, doors and windows from the ground truth data for each \scene\ and only generate the rest of the \furnobjs\ using the selected methods. For each \scene\ in the validation set, we generate $20$ variations with ATISS and with each of our trained networks, and create sets of size $6$ that contain one \scene\ per method, generated from the same floor plan.
Since ATISS does not handle collisions between \furnobjs, and even some of the ground truth \scenes\ may contain such collisions, we discard the entire set if one of its \scenes\ contains an intersection between \furnobjs\ larger than a threshold, which we set as $20\%$ of the smaller bounding box area. For our networks, we perform intersection-checks during inference, only discarding a set if an intersection-free \scene\ cannot be generated after $20$ attempts. Since our networks may also try to generate additional windows or doors, we simply resample the category in such cases. Finally, the ATISS \scenes\ are augmented with additional objects such as indoor lamps and computers in the same manner as explained in Section \ref{sub:results_dataset}. For the user study, we randomly select $50$ sets from all sets of synthesized \scenes\ and ask users to compare the \scenes\ in terms of realism. In each comparison, the user is shown a pair of \scenes\ from the same set, each represented by a top-view image and an animated 3d rendering with the camera rotating around the scene. Users are asked which \scene\ is more realistic on a $7$-point scale. We use Amazon Mechanical Turk to conduct the user study. A total of $327$ users participated in the study. Each pair of \scenes\ was shown twice to $5$ users each for a total of $10$ comparisons per scene pair. The left side of Figure \ref{fig:user_study_ergo} shows the mean ergonomic \score\ of all \scenes\ created for the user study. As can be seen, our networks V1, V2 and V3 perform better at generating \scenes\ with a lower ergonomic \score, reducing the mean ergonomic \score\ by $26.6\%$, $39.8\%$ and $48.9\%$ respectively, compared to the ground truth data. The right side of Figure \ref{fig:user_study_ergo} shows how the users perceive the realism of synthesized \scenes\ compared to those of the ground truth, with a negative value meaning that the ground truth is seen as more realistic. The responses show that ATISS is considered significantly less realistic than the ground truth. On the other hand, \scenes\ generated by our networks V0, V1 and V3 are seen as similarly realistic, while V2, though still considered to be more realistic than ATISS, is seen as less realistic than the ground truth. Though both V2 and V3 synthesize \scenes\ with better ergonomic \scores\ than the other methods, only V3 also manages to preserve the perceived realism of the generated \scenes. We therefore conclude that our proposed approach V3 is the most suitable for fulfilling the objective of synthesizing realistic \scenes\ with a good ergonomic \score. \subsection{Generalization to Other Loss}\label{sec:res:overlap} \begin{figure}[t] \centering \includegraphics[width=0.98\columnwidth]{images/plots/user_study_overlap_realism_gt.pdf} \caption{Evaluation of \scenes\ synthesized by networks using our geometric intersection loss. The left chart shows that our networks V1, V2, and V3, which all make use of this loss, generate \scenes\ with fewer intersections between \furnobjs. The right chart shows how the realism of the \scenes\ is perceived by users compared to the ground truth, with a negative value meaning that the ground truth is seen as more realistic. \Scenes\ generated by V2 and V3 are seen as similarly realistic as the ground truth.
} \label{fig:user_study_intersection} \end{figure} To show that our proposed approach can also be used with other loss terms, we perform another experiment in which we replace the ergonomic loss term with a geometric term that aims to reduce intersections between \furnobjs\ in the generated \scenes. This is especially useful since both our and existing approaches that use transformers for indoor scene synthesis \cite{wang2018deep,wang2020sceneformer} have shown difficulties in generating intersection-free \scenes\ without additional post-processing. To compute the intersection loss $\mathcal{L}_X$ between $2$ \furnobjs\ $\furn{i}$ and $\furn{j}$, we take $9$ sample points $q_{i,h}$ of $\furn{i}$, consisting of the bounding box center, the edge midpoints and the corners. Then the weighted sum of the signed distance of each sample point to $\furn{j}$ is computed using \begin{equation}\label{eq:loss_intersections} \begin{aligned} \mathcal{L}_X\parent{\furn{i},\furn{j}} = \begin{cases} \frac{\sum \limits_{h=0}^{8} \beta_h \cdot \max \parent{ d \parent{q_{i,h},\furn{j}}, 0} }{\sum \limits_{h=0}^{8} \beta_h} &\text{for}\ i > 0, j = 0 \\[4pt] \frac{\sum \limits_{h=0}^{8} \beta_h \cdot \max \parent{ -d \parent{q_{i,h},\furn{j}}, 0} }{\sum \limits_{h=0}^{8} \beta_h} &\text{for}\ i > 0, i \neq j > 0 \\[4pt] 0 &\text{otherwise}, \end{cases} \end{aligned} \end{equation} with $d \parent{q_{i,h},\furn{j}}$ as the signed distance from sample point $q_{i,h}$ to \furnobj\ $\furn{j}$ (negative in the interior of $\furn{j}$), and $\beta_h$ denoting the weight for each sample point. We use $\beta_h = 16$ for the center, $\beta_h = 4$ for the edge midpoints and $\beta_h = 1$ for the corners. The first case of Equation \ref{eq:loss_intersections} penalizes \furnobjs\ outside the room boundaries, while the second case penalizes intersections between objects. Since this function has no upper limit, we clamp the intersection loss at $1.0$ when we use it to weight the training samples. We conduct another perceptual study in the same manner as described in Section \ref{sub:user_study}, using the intersection loss instead of the ergonomic loss during training of the network and for the evaluation of the \scenes. For the generation of the results, we also skip the intersection-detection step this time. Figure \ref{fig:user_study_intersection} shows that the additional loss term allows the transformer networks to generate \scenes\ with significantly fewer intersections between \furnobjs\ (V1, V2, V3) compared to those without the additional loss (ATISS, V0). The results of our user study show that \scenes\ generated by V2 or V3 are generally seen as more realistic and closer to the ground truth than those generated by ATISS, V0 or V1. \subsection{Unconditional Layout Synthesis} \begin{figure}[t] \centering \includegraphics[width=0.98\columnwidth]{images/plots/generated_both_mean_losses.pdf} \caption{\Scene\ synthesis without room-conditioning, meaning that the room dimensions, windows and doors are synthesized as well. We generate and evaluate $10000$ \scenes\ for each method and compare their \score\ to that of scenes in the training and validation sets. On the left, we use the ergonomic loss as an additional term for our models, while on the right we use the intersection loss as an additional term. The y-axis shows the corresponding mean \score\ of synthesized \scenes\ for each method, with the $80\%$ confidence interval of the mean shown in black.
\subsection{Unconditional Layout Synthesis}

\begin{figure}[t]
\centering
\includegraphics[width=0.98\columnwidth]{images/plots/generated_both_mean_losses.pdf}
\caption{\Scene\ synthesis without room-conditioning, meaning that the room dimensions, windows and doors are synthesized as well. We generate and evaluate $10000$ \scenes\ for each method and compare their \score\ to that of scenes in the training and validation sets. On the left, we use the ergonomic loss as an additional term for our models, while on the right we use the intersection loss as an additional term. The y-axis shows the corresponding mean \score\ of synthesized \scenes\ for each method, with the $80\%$ confidence interval of the mean shown in black.}
\label{fig:generated_bedrooms_ergo}
\end{figure}

In Sections \ref{sub:user_study} and \ref{sec:res:overlap} we have demonstrated our networks' capability to synthesize \scenes\ when given a partial sequence including the room, doors and windows as a starting condition. However, our models are also capable of generating entire \scenes\ including these types of elements from scratch. Examples of such \scenes\ can be seen in Figure \ref{fig:res:ours:other_cat}, where we show synthesized bedrooms in the top row in addition to examples for other room types. For each of our trained models, we generate $10000$ \scenes\ and evaluate the results using our ergonomic loss. The mean ergonomic \score\ of the resulting \scenes\ can be seen on the left of Figure \ref{fig:generated_bedrooms_ergo}. Compared to the training data that the networks learned from, the mean ergonomic \score\ of the \scenes\ synthesized by our networks V1, V2 and V3 is $39.1\%$, $66.7\%$ and $61.8\%$ smaller, respectively.

Additionally, we evaluate \scenes\ generated using our alternative geometric loss term, which aims to reduce intersections between objects, in the same manner. The right of Figure \ref{fig:generated_bedrooms_ergo} shows the mean intersection loss of the $10000$ scenes synthesized by each version of our network. As can be seen, the intersection loss of \scenes\ generated with V0 is $62.2\%$ higher than that of the training data \scenes. The other versions of our network all yield an improvement compared to V0, with V2 \scenes\ having a $19.0\%$ higher intersection loss than the training \scenes, and V1 and V3 having a $21.2\%$ and $18.7\%$ lower intersection loss. While V3 produces better results than V1 for room-constrained \scene\ synthesis (Figure \ref{fig:user_study_intersection}), the two versions perform similarly for unconditioned synthesis. We reason that V1 can already provide a significant improvement for simple loss functions, like the geometric intersection loss, while V2 is better at improving scenes when the loss function is more complex, such as our proposed ergonomic loss. Since V3 combines the advantages of both V1 and V2, and has proven to be effective in both of our studies, we conclude that it is the model best suited for the general case.

\subsection{Evaluation of Transfer Learning}
\label{sec:res:transfer_learning}

\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{images/plots/plot_transfer_learning.pdf}
\caption{By pre-training the network on a general dataset containing samples from all room types and then fine-tuning the network for a specific room type, the validation loss can be decreased significantly, especially for small datasets.}
\label{fig:transfer_learning}
\end{figure*}

To evaluate the effectiveness of our proposed transfer learning strategy, we train networks from scratch using only the training data from each individual room category and compare the cross-entropy loss to that of our networks, which are first trained on a general set of training data before being fine-tuned for a room category. Figure~\ref{fig:transfer_learning} shows that the transfer learning strategy already yields a lower training and validation loss after the first epoch of fine-tuning. While the training loss for networks that are trained from scratch eventually approaches that of the pre-trained network, the validation loss remains higher throughout. This effect is less pronounced when the number of training samples is sufficiently large, as is the case with the Bedrooms dataset. For small training datasets, however, transfer learning proves to be a good strategy for improving the training process.
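The two-stage schedule can be sketched as follows, using the learning rates from our training setup (Section~\ref{sec:results}). The wrapper \texttt{train\_one\_model} is a hypothetical stand-in for our standard training loop; this is an illustration of the strategy, not our exact implementation.
\begin{verbatim}
import copy

def train_with_transfer(model, all_rooms, per_category, train_one_model):
    # Stage 1: pre-train a single base model on all room types.
    train_one_model(model, all_rooms, lr=5e-5, epochs=10)
    specialized = {}
    # Stage 2: fine-tune a copy of the base model per room category.
    for category, data in per_category.items():
        m = copy.deepcopy(model)
        lr = 2e-5 if category == "Bedroom" else 1e-5
        train_one_model(m, data, lr=lr, epochs=10)
        specialized[category] = m
    return specialized
\end{verbatim}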
\newcommand\figw{0.99}
\begin{figure*}[!t]
\centering
\begin{subfigure}[t]{\figw\textwidth}
\centering
\caption*{Results Ours V3: Bedrooms}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V3_Room1}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V3_Room15}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V3_Room17}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V3_Room40}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V3_Room41}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V3_Room45} \\
\hfill \includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V3_Room1_4} \hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V3_Room15_2}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V3_Room17_1}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V3_Room40_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V3_Room41_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V3_Room45_1}\hfill
\label{fig:res_v3}
\end{subfigure}
\begin{subfigure}[t]{\figw\textwidth}
\centering
\caption*{Results Ours V0: Bedrooms}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V0_Room1}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V0_Room15}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V0_Room17}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V0_Room40}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V0_Room41}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_V0_Room45} \\
\hfill \includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V0_Room1_4} \hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V0_Room15_2}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V0_Room17_1}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V0_Room40_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V0_Room41_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 40 0 20},clip]{images/renderings/conditional_picks_512/Main_V0_Room45_1}\hfill
\label{fig:res_v0}
\end{subfigure}
\begin{subfigure}[t]{\figw\textwidth}
\centering
\caption*{Results ATISS: Bedrooms}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_atiss_Room1}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_atiss_Room15}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_atiss_Room17}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_atiss_Room40}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_atiss_Room41}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80},clip]{images/renderings/conditional_picks_512/Main_atiss_Room45} \\
\hfill \includegraphics[width=0.15\textwidth,trim={0 20 0 20},clip]{images/renderings/conditional_picks_512/Main_atiss_Room1_4} \hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20},clip]{images/renderings/conditional_picks_512/Main_atiss_Room15_2}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20},clip]{images/renderings/conditional_picks_512/Main_atiss_Room17_1}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20},clip]{images/renderings/conditional_picks_512/Main_atiss_Room40_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20},clip]{images/renderings/conditional_picks_512/Main_atiss_Room41_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20},clip]{images/renderings/conditional_picks_512/Main_atiss_Room45_1}\hfill
\label{fig:res_atiss}
\end{subfigure}
\begin{subfigure}[t]{\figw\textwidth}
\centering
\caption*{Ground Truth: Bedrooms}
\includegraphics[width=0.16\textwidth,trim={0 80 0 90}, clip]{images/renderings/conditional_picks_512/Main_gt_Room1}
\includegraphics[width=0.16\textwidth,trim={0 80 0 90}, clip]{images/renderings/conditional_picks_512/Main_gt_Room15}
\includegraphics[width=0.16\textwidth,trim={0 80 0 90}, clip]{images/renderings/conditional_picks_512/Main_gt_Room17}
\includegraphics[width=0.16\textwidth,trim={0 80 0 90}, clip]{images/renderings/conditional_picks_512/Main_gt_Room40}
\includegraphics[width=0.16\textwidth,trim={0 80 0 90}, clip]{images/renderings/conditional_picks_512/Main_gt_Room41}
\includegraphics[width=0.16\textwidth,trim={0 80 0 90}, clip]{images/renderings/conditional_picks_512/Main_gt_Room45} \\
\hfill \includegraphics[width=0.15\textwidth,trim={0 20 0 20}, clip]{images/renderings/conditional_picks_512/Main_gt_Room1_4} \hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20}, clip]{images/renderings/conditional_picks_512/Main_gt_Room15_2}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20}, clip]{images/renderings/conditional_picks_512/Main_gt_Room17_1}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20}, clip]{images/renderings/conditional_picks_512/Main_gt_Room40_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20}, clip]{images/renderings/conditional_picks_512/Main_gt_Room41_3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 20}, clip]{images/renderings/conditional_picks_512/Main_gt_Room45_1}\hfill
\label{fig:res_gt}
\end{subfigure}
\caption{Selected synthesis results used in the conditional user study as described in Section~\ref{sec:results}. The input to all algorithms was the same floor plan, including the given positions of windows and doors. From top to bottom, we show the results of our Transformer variants V3 and V0, followed by ATISS~\cite{paschalidou2021atiss} and the ground truth. Please refer to the supplemental material for all results, including our V0, V1 and V2 variants.
} \label{fig:res:comp:gt_at_v3} \end{figure*} \newcommand\figwidth{1.0} \begin{figure*}[!t] \centering \begin{subfigure}[t]{\figwidth\textwidth} \centering \caption*{Results Ours V3: Bedrooms} \includegraphics[width=0.16\textwidth,trim={0 90 0 80}, clip]{images/renderings/bedroom_free_picks_512/bedroom0} \includegraphics[width=0.16\textwidth,trim={0 90 0 80}, clip]{images/renderings/bedroom_free_picks_512/bedroom5} \includegraphics[width=0.16\textwidth,trim={0 90 0 80}, clip]{images/renderings/bedroom_free_picks_512/bedroom6} \includegraphics[width=0.16\textwidth,trim={0 90 0 80}, clip]{images/renderings/bedroom_free_picks_512/bedroom10} \includegraphics[width=0.16\textwidth,trim={0 90 0 80}, clip]{images/renderings/bedroom_free_picks_512/bedroom11} \includegraphics[width=0.16\textwidth,trim={0 90 0 80}, clip]{images/renderings/bedroom_free_picks_512/bedroom15}\\ \hfill \includegraphics[width=0.15\textwidth,trim={0 0 0 40}, clip]{images/renderings/bedroom_free_picks_512/bedroom0_1}\hfill \includegraphics[width=0.15\textwidth,trim={0 0 0 40}, clip]{images/renderings/bedroom_free_picks_512/bedroom5_3}\hfill \includegraphics[width=0.15\textwidth,trim={0 0 0 40}, clip]{images/renderings/bedroom_free_picks_512/bedroom6_3}\hfill \includegraphics[width=0.15\textwidth,trim={0 0 0 40}, clip]{images/renderings/bedroom_free_picks_512/bedroom10_1}\hfill \includegraphics[width=0.15\textwidth,trim={0 0 0 40}, clip]{images/renderings/bedroom_free_picks_512/bedroom11_2}\hfill \includegraphics[width=0.15\textwidth,trim={0 0 0 40}, clip]{images/renderings/bedroom_free_picks_512/bedroom15_1}\hfill \label{fig:res:other_bed} \end{subfigure} \begin{subfigure}[t]{\figwidth\textwidth} \centering \caption*{Results Ours V3: Living Rooms} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/living1_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/living2_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/living3_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/living4_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/living5_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/living6_top}\\ \hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/living1}\hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/living2}\hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/living3}\hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/living4}\hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/living5}\hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/living6}\hfill \label{fig:res:other_living} \end{subfigure} \begin{subfigure}[t]{\figwidth\textwidth} \centering \caption*{Results Ours V3: Dining Rooms} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/dining1_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/dining2_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/dining3_top} \includegraphics[width=0.16\textwidth,trim={0 80 0 
80}, clip]{images/renderings/other_512_jpg/dining4_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/dining5_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/dining6_top}\\
\hfill \includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/dining1}\hfill
\includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/dining2}\hfill
\includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/dining3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/dining4}\hfill
\includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/dining5}\hfill
\includegraphics[width=0.15\textwidth,trim={0 10 0 30},clip]{images/renderings/other_512_jpg/dining6}\hfill
\label{fig:res:other_dining}
\end{subfigure}
\begin{subfigure}[t]{\figwidth\textwidth}
\centering
\caption*{Results Ours V3: Libraries}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/library1_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/library2_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/library3_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/library4_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/library5_top}
\includegraphics[width=0.16\textwidth,trim={0 80 0 80}, clip]{images/renderings/other_512_jpg/library6_top}\\
\hfill \includegraphics[width=0.15\textwidth,trim={0 20 0 40}, clip]{images/renderings/other_512_jpg/library1}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 40}, clip]{images/renderings/other_512_jpg/library2}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 40}, clip]{images/renderings/other_512_jpg/library3}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 40}, clip]{images/renderings/other_512_jpg/library4}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 40}, clip]{images/renderings/other_512_jpg/library5}\hfill
\includegraphics[width=0.15\textwidth,trim={0 20 0 40}, clip]{images/renderings/other_512_jpg/library6}\hfill
\label{fig:res:other_libs}
\end{subfigure}
\caption{Selected samples of different room categories freely synthesized using our variant V3 and transfer learning. For room categories other than bedrooms, we pre-train the network on all categories and then specialize it on the specific category (see Section~\ref{sec:results} for a detailed description).}
\label{fig:res:ours:other_cat}
\end{figure*}

\section{Results and Evaluation}
\label{sec:results}

\subsection{Dataset}
\label{sub:results_dataset}
We use the 3DFRONT dataset \cite{fu2021future,fu2021front} to evaluate our proposed approach. In a pre-processing step, we parse the data to extract rooms belonging to the categories Bedroom, Dining Room, Living Room and Library. For this purpose, we use the filter criteria provided by ATISS \cite{paschalidou2021atiss}, consisting of a list of rooms for each category, as well as a split into training, testing and validation data.
We use the rooms marked as \emphasis{train} for our training sets and combine those marked as \emphasis{test} and \emphasis{val} for our validation sets. Since we opt to use only rectangular rooms, we filter out rooms with more complex floor shapes. For most \furnobjs, attributes such as the category and the transformation of the corresponding 3d model data can be directly extracted from the room data. Since separate 3d models for doors and windows are not provided with the dataset, we extract their positions and bounding box dimensions from the mesh data with corresponding labels. Since doors are only provided with each house and not attached to individual rooms, we include a door with the \furnobjs\ of a room if its distance to the closest wall of the room is lower than a chosen threshold and its orientation is aligned with that of the wall. Additionally, we group some of the object categories in the dataset that are very similar to each other, while filtering out some others that occur only in very few rooms (please refer to the supplemental material for a full list).

Since the dataset lacks some object categories that are necessary to properly evaluate the ergonomic \score\ of a \scene, we augment the dataset with additional objects in the following way. For each \scene, every stand and side-table object has a $50\%$ chance of receiving a \furnobj\ of the indoor lamp category placed at its center. In the same manner, a computer object is placed at the center of each desk object in a \scene\ with a probability of $50\%$. Finally, every TV stand object is augmented with a TV object.

\subsection{Training and inference}
We train all of our networks using $12$ hidden layers, $8$ attention heads, an embedding dimensionality of $256$, a dropout probability of $0.1$ and a batch size of $32$. Each network is trained for $10$ epochs, with the number of training samples per epoch being $8$ times the number of samples in the training set, so that each augmented variation of a \scene\ is seen once per epoch. For the learning rate, we use linear decay with a warm-up period of $1$ epoch. We first train a base model on training samples of all room types using a learning rate of $0.00005$. This base model is then fine-tuned for each room type, using a learning rate of $0.00002$ for the Bedrooms dataset and $0.00001$ for the other room types. For \scene\ generation, we always choose the network parameters that yielded the smallest validation loss during training.

During inference, we follow a strategy similar to the one proposed by Sceneformer \cite{wang2020sceneformer}, using top-p nucleus sampling with $p=0.9$ for the object categories, as well as for the attributes of the room, doors and windows. For the attributes of other object categories, we always pick the token with the highest probability.
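As an illustration, sampling a single token with this scheme can be sketched as follows (a minimal NumPy sketch with illustrative names; the greedy branch corresponds to the attributes for which we always pick the most probable token):
\begin{verbatim}
import numpy as np

def sample_token(logits, top_p=0.9, greedy=False, rng=None):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if greedy:                         # used for most object attributes
        return int(probs.argmax())
    order = np.argsort(probs)[::-1]    # tokens sorted by probability
    cum = np.cumsum(probs[order])
    # Smallest prefix of tokens whose total probability is >= top_p.
    keep = order[:int(np.searchsorted(cum, top_p)) + 1]
    rng = rng or np.random.default_rng()
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))
\end{verbatim}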
The \scenes\ generated by the transformer network often include intersecting objects, which greatly disturb the perceived realism of a \scene. We therefore follow the approach of similar methods like Sceneformer and check for object intersections during inference. After the attributes of a \furnobj\ have been generated, we check if the object can be inserted into the scene without causing large intersections. If this is not the case, we resample the category and other attributes of the current object. If this resampling approach fails too often (we choose a limit of $20$ attempts experimentally), we discard the entire \scene\ and start anew. Certain pairs of object categories are excluded from this check; e.g., chairs can be put underneath a table and thus do not cause collisions.

Since our networks only generate the 2d bounding boxes of \furnobjs, we use an additional post-processing step to reconstruct a 3d scene from the generated \scene. For each \furnobj, we select the 3d model of the same category with the smallest difference in bounding box dimensions from the models in the 3DFRONT dataset. For categories not included in the dataset, such as doors and windows, we handpick a few suitable models from the Turbosquid website \cite{xxx}. As a final step, the vertical position of each object is adjusted based on its category. The positions of some categories, like windows and chandeliers, are set to a fixed height. We label some categories as supporting objects (like tables and stands) and others as supported objects (like indoor lamps and TVs). If there is an intersection between a supporting and a supported object, the vertical position of the supported object is adjusted so that it is placed on top of the supporting object.

\subsection{Ablation study}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{images/plots/bedrooms_ergo}
\caption{Training loss, validation loss and ergonomic loss of generated scenes for each version of our network used in the ablation study.}
\label{fig:plot_bedrooms}
\end{figure*}
We evaluate the influence of our proposed ergonomic loss using an ablation study. The combined loss function for an input sample $\sample{k}$ is given as
\begin{equation}
\begin{aligned}
L \parent{\sample{k}} = \beta_T L_T \parent{\sample{k}} + \beta_E L_E \parent{\sample{k}},
\end{aligned}
\end{equation}
with $L_T$ being the cross-entropy loss, $L_E$ being our proposed ergonomic loss, and $\beta_T, \beta_E$ being weights that determine the contribution of the two loss terms to the overall loss. We train $4$ versions of our network for this ablation study:
\begin{itemize}
\item V0, with $\beta_T = 1$ and $\beta_E = 0$,
\item V1, with $\beta_T = 1-E\parent{\sample{k}}$ and $\beta_E = 0$,
\item V2, with $\beta_T = 1$ and $\beta_E = 1$,
\item V3, with $\beta_T = 1-E\parent{\sample{k}}$ and $\beta_E = E\parent{\sample{k}}$.
\end{itemize}
In other words, V0 only uses the cross-entropy loss with each input sample having equal weight; V1 uses the cross-entropy loss with each sample weighted by its ergonomic \score, such that samples with a lower (better) \score\ receive a higher weight; V2 uses the sum of the cross-entropy loss and the ergonomic loss; and V3 uses a weighted sum of the cross-entropy and ergonomic losses, weighted by the ergonomic \score\ of each sample.
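A minimal sketch of this per-sample weighting, assuming the ergonomic \score\ $E\parent{\sample{k}}$ is normalized to $[0,1]$ (an assumption of the sketch, not stated above):
\begin{verbatim}
def combined_loss(ce_loss, ergo_loss, ergo_score, variant):
    # L = beta_T * L_T + beta_E * L_E; ergo_score is E(s_k) in [0, 1].
    beta_t, beta_e = {
        "V0": (1.0, 0.0),
        "V1": (1.0 - ergo_score, 0.0),
        "V2": (1.0, 1.0),
        "V3": (1.0 - ergo_score, ergo_score),
    }[variant]
    return beta_t * ce_loss + beta_e * ergo_loss
\end{verbatim}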
Figure \ref{fig:plot_bedrooms} shows the training loss, validation loss and mean ergonomic \score\ of generated scenes for each version. The ergonomic loss is evaluated once per training epoch by randomly generating $10000$ \scenes\ without performing any intersection-detection during inference. The results show that V1, where we assign a higher weight to samples with a lower ergonomic \score, reduces the mean ergonomic \score\ of generated scenes by $23.6\%$ compared to V0, which does not incorporate any information on the ergonomic \score\ of the input samples. V2 and V3, which both make use of our proposed ergonomic loss function during training, can further improve this to a $43.9\%$ and $48.1\%$ reduction, respectively. While V2 and V3 achieve a similar performance in terms of ergonomic loss, both the training and validation loss of V3 are lower, suggesting that it is better than V2 at generating \scenes\ similar to the target distribution. We also compare the ergonomic \score\ of scenes generated by our models with intersection-detection during inference enabled. For this comparison, we use the model parameters of the epoch with the lowest validation loss for each version of our networks. Figure \ref{fig:boxplot_ergo_bedrooms} shows the resulting ergonomic \scores.

\subsection{User study}
\label{sub:user_study}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{images/plots/user_study_ergo_loss_mean.pdf}
\includegraphics[width=0.47\textwidth]{images/plots/user_study_ergo_realism.pdf}
\caption{Room-conditioned \scene\ synthesis. We synthesize $20$ \scene\ variations for each floor plan in the Bedrooms validation set and evaluate the ergonomic \score. The left chart shows the mean ergonomic \score\ of the synthesized \scenes, with the $95\%$ confidence interval of the mean shown in black. The realism of the synthesized \scenes\ is evaluated in a user study. The right chart shows how users perceive the realism of the \scenes\ synthesized by each method compared to the ground truth, with a negative value meaning that the ground truth is seen as more realistic.}
\label{fig:user_study_ergo}
\end{figure}
In order to evaluate the realism of our generated results, we perform a user study in which we ask participants to compare pairs of Bedroom \scenes\ and decide which \scene\ is more realistic. We compare \scenes\ from $6$ sources in this user study: the ground truth \scenes\ from the 3DFRONT dataset \cite{fu2021future,fu2021front}, \scenes\ generated with ATISS \cite{paschalidou2021atiss}, which we train using the code provided by its authors, and the $4$ versions of our proposed method, which we label V0, V1, V2 and V3. To allow for a direct comparison, we use the attributes of the room, doors and windows from the ground truth data for each \scene\ and only generate the rest of the \furnobjs\ using the selected methods.
\section{Limitations and Conclusions}
\label{sec:discussion}
\paragraph{Limitations}
Our proposed approach has a number of limitations. Designing layouts is a complex, high-dimensional problem that spans several modalities, including the selection of 3D furniture models that fit well together stylistically~\cite{weiss2020image,lun2015elements}; architectural elements such as room shapes, walls and floor plans~\cite{wu2019data}; and various aspects of lighting and illumination~\cite{vitsas2020illumination}. While important, such methods are orthogonal to our focus on layout synthesis. The implementation of our model also has a few technical limitations. We only demonstrate support for rectangular rooms, 2-dimensional \scenes\ and sorted sequences of \furnobjs. Solutions to these problems have already been discussed in recent work \cite{wang2018deep,wang2020sceneformer,paschalidou2021atiss} and are not inherently incompatible with our approach, though the effect of extending the problem domain in these directions needs to be further examined.
Furthermore, while our ergonomic loss functions are derived from the ergonomics literature, they are only theoretical models and have not been evaluated in a real-life setting. We think that translating the vast number of ergonomic rules and interior design guidelines into differentiable functions that quantify the ergonomic quality of indoor layouts is a promising topic for further research. While we have demonstrated that our approach of incorporating expert knowledge into the Transformer training process produces promising results, we think that this is only the first step in combining data-driven and rule-based learning using state-of-the-art deep-learning models such as Transformers. We believe that future research in this direction can help make data-driven learning approaches more applicable to domains where large amounts of high-quality data with the desired properties are not readily available.

\paragraph{Conclusions}
We presented a novel method for the synthesis of indoor layouts, which combines data-driven learning and manually designed expert knowledge. To our knowledge, we are the first to propose such a solution to the problem. The main benefit of our approach is that it allows emphasizing features that might be underrepresented in the data or might not be present at all. At the same time, we maintain the benefits of a data-driven approach, which is important for layout generation, a high-dimensional and ill-defined problem. Manually crafting all design rules needed to synthesize comparable results would itself be a high-dimensional problem; more importantly, it would be very difficult to define all necessary rules by hand. Hence, combining expert knowledge with a distribution learned from data gives us the benefits of both worlds.

As a technical contribution, we proposed a modern Transformer network that can be trained using a loss function composed of a cross-entropy term and additional expert knowledge. In particular, we demonstrated that simply adding the additional loss term can decrease the network's capability of synthesizing realistic results, since the two loss terms may serve conflicting objectives. We have shown that weighting the two loss terms on a per-sample basis leads to results that fulfill the additional objective well and still maintain a high degree of realism. Further, we introduced expert knowledge in the form of cost functions derived from ergonomics, which aim to make layouts more usable and comfortable for humans. We also introduced another loss that minimizes the overlap of objects in the room. This demonstrates the generality of our approach and, at the same time, serves as another application for improving datasets containing geometric errors. We described the details of our implementation (we will release our code on GitHub), and we evaluated the method thoroughly. We introduced four variants of our novel loss and provided a rigorous ablation study. We showed quantitative results and performed two user studies (each with 327 participants on Amazon Mechanical Turk) in which the variants of our method outperform recent related work. We also used our system to synthesize a large set of realistic results. Our method is meant to help both professionals and amateurs address the problem of interior layout design.

\section{Sequence Representation Details}
\label{app:quantization}
The category $\cat{i}$ of a \furnobj\ is obtained by simply assigning each category a unique integer value.
To obtain the integer-valued \furnobj\ attributes from the real-valued attributes of the objects in the \scene, we employ the following quantization scheme. Since the real-valued orientation $\rori{i}$ is within the range $(-\pi,\pi]$, we use
\begin{equation*}
\begin{aligned}
\ori[k]{i} = \frac{\rori[k]{i} - \parent{\frac{2\pi}{\res} -\pi}}{2\pi - \frac{2\pi}{\res}} \parent{\res-1} ,
\end{aligned}
\end{equation*}
to obtain the integer-valued orientation $\ori{i}$, with $\res$ being the resolution of the quantization, such that $\ori[k]{i} \in \braces{0,\ldots,\res-1}$. This formulation guarantees that the 4 cardinal directions, which are the most common orientations for \furnobjs, are each represented by an integer value if $\res \bmod 4 = 0$.

For the other attributes, instead of setting a predetermined range of possible values, we determine the range in relation to the size of the room. Intuitively, it can be understood as dividing the room using a uniform grid, and placing all other objects in alignment with this grid (Figure \ref{fig:quantization_grid}). To achieve this, we need to treat the quantization of the room separately from the other \furnobjs. We assume that the bounding box of each input room is axis-aligned, with the bottom left corner of the room positioned at $(0,0)$ and the orientation of the room aligned with the positive y-axis, which corresponds to real-valued attributes $\rori[k]{0} = 0$, $\rxpos[k]{0} = 0$ and $\rypos[k]{0} = 0$. Parsing the ground truth data, we extract the minimum and maximum room dimensions $\rwidth{min} = \min_{k} \rwidth[k]{0}$, $\rdepth{min} = \min_{k} \rdepth[k]{0}$, $\rwidth{max} = \max_{k} \rwidth[k]{0}$ and $\rdepth{max} = \max_{k} \rdepth[k]{0}$. The grid cell size $\cell{k}$ used for the quantization of the \furnobjs\ $\furn[k]{i}$ is then determined using the greater dimension of each room. For $\rwidth[k]{0} \geq \rdepth[k]{0}$ we obtain
\begin{equation}\label{eq:quant_room}
\begin{aligned}
\cell{k} = \frac{\rwidth[k]{0}}{\res-2} , \\
\width[k]{0} = \frac{\rwidth[k]{0} - \rwidth{min}}{\rwidth{max} - \rwidth{min}} \parent{\res-2} ,
\end{aligned}
\end{equation}
with the opposite case defined analogously, using the depth instead of the width to compute the grid cell size $\cell{k}$. In practice, since it is necessary to differentiate between the two possible cases $\rwidth[k]{0} \geq \rdepth[k]{0}$ and $\rwidth[k]{0} < \rdepth[k]{0}$ when reconstructing the \scene, we store this information by setting the real-valued room orientation $\rori[k]{0} = -\frac{\pi}{2}$ and swapping the width $\width[k]{0}$ and depth $\depth[k]{0}$ values in the latter case, such that $\width[k]{0}$ always indicates the greater dimension of the room. Please note that, since $\rwidth[k]{0} = \parent{\res-2} \cell{k}$, the room does not occupy the entire range of the grid, even along its greater dimension. The reason for this is that windows and doors may be positioned outside the room boundaries, so we ensure that there is at least one row or column of the grid available beyond each wall of the room.
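As an illustration, the orientation and room quantization above can be sketched as follows; the resolution value and the use of simple rounding are assumptions of this sketch, not prescribed by the equations.
\begin{verbatim}
import math

R = 256  # quantization resolution; a hypothetical value for illustration

def quantize_orientation(theta):
    # Map theta in (-pi, pi] to an integer in {0, ..., R-1}.
    lo = 2 * math.pi / R - math.pi
    return round((theta - lo) / (2 * math.pi - 2 * math.pi / R) * (R - 1))

def room_cell_and_width_token(width, w_min, w_max):
    # Grid cell size and width token for a room whose width is its
    # greater dimension; the opposite case uses the depth instead.
    cell = width / (R - 2)
    token = round((width - w_min) / (w_max - w_min) * (R - 2))
    return cell, token
\end{verbatim}
The remaining object attributes are then expressed as multiples of the cell size $\cell{k}$, as given by the equations below.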
It can be understood as fitting a grid with a fixed number of cells into the room and expressing the object attributes as multiples of the cell size of the grid.} \label{fig:quantization_grid} \end{figure} After computing the grid cell size $\cell{k}$ from the dimensions of the room, the integer-valued attributes of all other \furnobjs\ are then given by \begin{equation} \begin{aligned} \width[k]{i} &= \frac{\rwidth[k]{i} - \cell{k}}{\cell{k}} & \text{for}\ &i > 0, \\ \depth[k]{i} &= \frac{\rdepth[k]{i} - \cell{k}}{\cell{k}} & \text{for}\ &i \geq 0, \\ \xpos[k]{i} &= \frac{\rxpos[k]{i} + \cell{k}}{\cell{k}} & \text{for}\ &i \geq 0, \\ \ypos[k]{i} &= \frac{\rypos[k]{i} + \cell{k}}{\cell{k}} & \text{for}\ &i \geq 0 \,. \end{aligned} \end{equation} Windows and doors are treated slightly differently: we set their depth $\rdepth[k]{i} = \cell{k}$ and adjust their position such that they always touch the room boundary, since their actual position in the \scene\ can vary depending on the thickness of the walls. Since all sequences need to be of the same length in order to be used as input to the transformer network, we append the padding token with value $\res$ to each sequence until it reaches the desired maximum length. In our case, we want to represent a maximum of $\nfurn = 21$ \furnobjs\ including the room, resulting in a length of $\ntoken = 126$. In order to reconstruct a \scene\ from a given integer-valued sequence, the quantization process is reversed (a code sketch of both directions is given at the end of this section). The orientation of a \furnobj\ is given by \begin{equation} \begin{aligned} \rori[k]{i} = \parent{\frac{2\pi}{\res} -\pi} + \frac{\ori[k]{i}}{\res-1} \parent{2\pi - \frac{2\pi}{\res}} \,. \end{aligned} \end{equation} To reconstruct the room, we first need to check whether $\rori[k]{0} = -\frac{\pi}{2}$. If this is not the case, the real-valued attributes of the room are obtained using \begin{equation} \begin{aligned} \rwidth[k]{0} = \rwidth{min} + \frac{\width[k]{0}}{\res-2} \parent{\rwidth{max}-\rwidth{min}} , \\ \cell{k} = \frac{\rwidth[k]{0}}{\res-2} \,. \end{aligned} \end{equation} Otherwise, we swap the width $\width[k]{0}$ and depth $\depth[k]{0}$ values, set $\rori[k]{0} = 0$ and compute the grid cell size $\cell{k}$ using the depth instead of the width of the room. Finally, the attributes of the other \furnobjs\ are recovered using \begin{equation} \begin{aligned} \rwidth[k]{i} &= \width[k]{i} \cell{k} + \cell{k} & \text{for}\ &i > 0, \\ \rdepth[k]{i} &= \depth[k]{i} \cell{k} + \cell{k} & \text{for}\ &i \geq 0,\\ \rxpos[k]{i} &= \xpos[k]{i} \cell{k} - \cell{k} & \text{for}\ &i \geq 0,\\ \rypos[k]{i} &= \ypos[k]{i} \cell{k} - \cell{k} & \text{for}\ &i \geq 0\,.
\end{aligned} \end{equation} \begin{figure*}[t] \centering \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: Ours V3} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: Ours V0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: ATISS~\cite{paschalidou2021atiss}} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room23} 
\includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_atiss_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_atiss_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \caption*{Realism Study: Ground Truth} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/main/Main_gt_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_gt_Room40_3} \end{subfigure} \caption{Selected synthesis results used in the conditional user study. The input to all algorithms was the same: the floor plan, including the given positions of windows and doors. From top to bottom, we show the results of our Transformer variant V3, variant V0, ATISS~\cite{paschalidou2021atiss}, and the ground truth. Please refer to the supplemental material for all results, including our V0, V1 and V2 variants.
} \label{fig:study:M:gt_at_v3} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: Ours V3} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V3_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V3_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: Ours V2} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V2_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V2_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: Ours V1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V1_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V1_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V1_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 
20},clip]{images/renderings/main/Main_V1_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V1_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V1_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V1_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V1_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Realism Study: Ours V0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room33} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room37} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/main/Main_V0_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room0_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room1_4} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room27_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room33_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room37_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/main/Main_V0_Room40_3} \end{subfigure} \caption{Selected synthesis results used in the conditional user study. The input to all algorithms was the same: the floor plan, including the given positions of windows and doors. From top to bottom, we show the results of our Transformer variants V3, V2, V1 and V0.
} \label{fig:study:M:v0123} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ours V3} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20} ,clip]{images/renderings/overlap/Overlap_V3_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ours V0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_V0_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: ATISS~\cite{paschalidou2021atiss}} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room0} 
\includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_atiss_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_atiss_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ground Truth} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20}, clip]{images/renderings/overlap/Overlap_gt_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10}, clip]{images/renderings/overlap/Overlap_gt_Room40_3} \end{subfigure} \caption{Selected synthesis results used in the conditional user study testing intersections. The input to all algorithms was the same: the floor plan, including the given positions of windows and doors. From top to bottom, we show the results of our Transformer variant V3, variant V0, ATISS~\cite{paschalidou2021atiss}, and the ground truth. Please refer to the supplemental material for all results, including our V0, V1 and V2 variants.
} \label{fig:study:OL:gt_at_v3} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ours V3} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V3_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V3_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ours V2} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V2_Room40}\\ \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V2_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ours V1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V1_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 
20},clip]{images/renderings/overlap/Overlap_V1_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V1_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V1_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V1_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V1_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V1_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V1_Room40_3} \end{subfigure} \begin{subfigure}[t]{1.0\textwidth} \centering \caption*{Intersection Study: Ours V0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room0} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room1} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room23} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room27} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room8} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room42} \includegraphics[width=0.135\textwidth,trim={0 20 0 20},clip]{images/renderings/overlap/Overlap_V0_Room40}\\
\includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room0_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room1_3} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room23_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room27_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room8_1} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room42_2} \includegraphics[width=0.135\textwidth,trim={0 20 0 10},clip]{images/renderings/overlap/Overlap_V0_Room40_3} \end{subfigure} \caption{Selected synthesis results used in the conditional user study testing intersections. The input to all algorithms was the same: the floor plan, including the given positions of windows and doors. From top to bottom, we show the results of our Transformer variants V3, V2, V1 and V0. } \label{fig:study:OL:v0123} \end{figure*}
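To make the mapping of Appendix~\ref{app:quantization} concrete, the following is a minimal, self-contained Python sketch of the quantization and reconstruction described above. It is an illustration only, not our released implementation: the helper names are ours, and rounding to the nearest integer is our assumption (the equations above specify only the continuous mapping).
\begin{verbatim}
import numpy as np

R = 64  # quantization resolution r; divisible by 4 so that the four
        # cardinal directions are represented exactly by integers

def quantize_orientation(theta, r=R):
    # map theta in (-pi, pi] to an integer in {0, ..., r-1}
    lo = 2 * np.pi / r - np.pi
    return int(round((theta - lo) / (2 * np.pi - 2 * np.pi / r) * (r - 1)))

def dequantize_orientation(o, r=R):
    lo = 2 * np.pi / r - np.pi
    return lo + o / (r - 1) * (2 * np.pi - 2 * np.pi / r)

def quantize_object(width, depth, x, y, cell):
    # express extents and positions as multiples of the grid cell size
    return (int(round((width - cell) / cell)),
            int(round((depth - cell) / cell)),
            int(round((x + cell) / cell)),
            int(round((y + cell) / cell)))

def dequantize_object(w, d, xq, yq, cell):
    return (w * cell + cell, d * cell + cell,
            xq * cell - cell, yq * cell - cell)

# the cell size follows from the larger room dimension: width_0 / (r - 2)
room_width = 4.2
cell = room_width / (R - 2)
print(quantize_object(0.9, 0.6, 1.5, 2.0, cell))
\end{verbatim}
Reserving two cells (the $\res-2$ divisor) leaves one grid row or column beyond each wall for doors and windows, as noted above.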
{ "timestamp": "2022-02-02T02:09:22", "yymm": "2202", "arxiv_id": "2202.00185", "language": "en", "url": "https://arxiv.org/abs/2202.00185" }
\section{Background} \label{sec:background} \subsection{Feature Learning} \label{subsec:feature_learning} We consider three complementary feature learning algorithms. The first is the VAE, which learns an $L$-dimensional reduction of a dataset by optimizing a reconstruction objective. It posits a generative model $p\left(z\right)p_{\xi}\left(x \vert z\right)$ of the data; $p\left(z\right)$ is a prior on latent features and $p_{\xi}\left(x \vert z\right)$ is a likelihood parameterized by $\xi$. The algorithm finds a pair $\xi, \varphi$ maximizing the lower bound, \begin{align*} \log p_{\xi}\left(x\right) \geq \mathbb{E}_{q_{\varphi}}\left[\log p_{\xi}(x \mid z)\right]-D_{KL}\left(q_{\varphi}(z \mid x) \| p(z)\right) \end{align*} where $q_{\varphi}\left(z \vert x\right) = \mathcal{N}\left(\mu_{\varphi}\left(x\right), \sigma^{2}_{\varphi}\left(x\right)\right)$ maps raw data examples to distributions in a latent space. This problem is nonconvex, and the solution is non-deterministic. There are many implementations of VAEs; our experiments follow \cite{van2017neural}. Second, we learn supervised features through a CNN. A CNN regressor optimizes an empirical estimate of $\mathbf{E}\|y - f_{W_{1:J}}\left(x\right)^{T}\beta\|_{2}^{2}$ over $W_{1:J}$ and $\beta$. Here, $f_{W_{1:J}}$ transforms the raw input into the ``final layer'' features, and is defined recursively according to \begin{align*} f^{j}_{W_{1:j}}\left(x\right) &= \sigma\left(W_{j}f^{j - 1}_{W_{1:(j - 1)}}\left(x\right)\right)\\ f^{0}\left(x\right) &= x \end{align*} where $\sigma\left(x\right) := x \indic{x \geq 0}$ and the matrices $W_{j}$ are restricted to the set of convolutions. As in the VAE, this is solved through first-order optimization methods. Our implementation is the CBR architecture from \cite{raghu2017svcca}. Third, we use a random convolutional features (RCF) model \cite{rahimi2008weighted}. A random sample of $L$ training examples $x_{i_1}, \dots, x_{i_L} \in \reals^{w \times h \times c}$ is selected; the $x_{i}$'s are assumed to be $c$-channel images with dimension $w\times h$. For each sample $l$, a random $s \times s$ patch, denoted $w_{l} \in \reals^{s \times s \times c}$, is extracted. For any $c$-channel image $x$, the $l^{th}$ feature $z_{l}$ is found by convolving $x$ with $w_{l}$ and spatially averaging over activations. This model uses random training image patches as convolutional kernels, rather than learning them from scratch. The features $z_{1}, \dots, z_{L}$ are analogous to the features $f_{W_{1:J}}\left(x\right)$ in the CNN. To train an RCF, the training data are featurized into $\*Z \in \reals^{n \times L}$. Then, a ridge regression model is trained from $\*Z$ to the responses $y$, giving an estimate $\hat{\beta}$. For a new example $x^{\ast}$, the same image patches $w_{1}, \dots, w_{L}$ are used to form $z^{\ast}$, and predictions are made with $z^{\ast T}\hat{\beta}$. This model does not require gradient-based training, and it can serve as a fast baseline.
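To make the RCF construction concrete, here is a minimal Python sketch. It is a toy illustration under our own naming, not the implementation used in our experiments; in particular, the image sizes and the ridge penalty are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, w, h, c, L, s = 20, 16, 16, 3, 16, 5
X = rng.random((n, w, h, c))   # toy c-channel "images"
y = rng.random(n)              # toy responses

# sample L random s x s patches from randomly chosen training images
patches = []
for _ in range(L):
    img = X[rng.integers(n)]
    i, j = rng.integers(w - s + 1), rng.integers(h - s + 1)
    patches.append(img[i:i + s, j:j + s, :])

def featurize(x):
    # convolve x with each patch and spatially average the activations
    z = np.empty(L)
    for l, p in enumerate(patches):
        acts = [np.sum(x[i:i + s, j:j + s, :] * p)
                for i in range(w - s + 1) for j in range(h - s + 1)]
        z[l] = np.mean(acts)
    return z

Z = np.stack([featurize(x) for x in X])   # n x L feature matrix

# ridge regression from the random features to the response
lam = 1.0
beta_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(L), Z.T @ y)
z_star = featurize(rng.random((w, h, c)))  # features for a new example
y_hat = z_star @ beta_hat
\end{verbatim}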
\subsection{Procrustes Analysis} \label{subsec:procrustes} Given centered $\mathbf{X}$ and $\mathbf{Y}$, the Procrustes problem finds a rotation $\mathbf{R}$ solving, \begin{align*} \min_{\mathbf{R} \in \mathcal{O}\left(p, p\right)} \|\mathbf{X} - \mathbf{Y}\mathbf{R}\|^{2}_{F}, \end{align*} where $\mathcal{O}\left(p, p\right)$ is the space of $p\times p$ orthonormal matrices. The solution can be shown to be $\hat{\mathbf{R}} = \mathbf{V}\mathbf{U}^{T}$ for $\mathbf{U}$ and $\mathbf{V}$ obtained by the SVD $\mathbf{X}^{T}\mathbf{Y} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}$ \cite{friedman2001elements, gower1975generalized}. For $B$ matrices $\mathbf{X}_{1}, \dots, \mathbf{X}_{B}$, the generalized Procrustes problem finds $B$ rotations $\mathbf{R}_{1}, \dots, \mathbf{R}_{B}$ and a mean $\mathbf{M}$ solving \begin{align*} \min_{\mathbf{R}_{1}, \dots, \mathbf{R}_{B} \in \mathcal{O}\left(p, p\right), \mathbf{M}} \sum_{b = 1}^{B} \|\mathbf{X}_{b}\mathbf{R}_{b} - \mathbf{M}\|_{F}^{2}. \end{align*} While there is no closed form solution, the optimization can be solved by cyclically updating each $\mathbf{R}_{b}$ via standard Procrustes problems and then updating $\mathbf{M} = \frac{1}{B} \sum_{b = 1}^{B} \mathbf{X}_{b} \mathbf{R}_{b}$ \cite{friedman2001elements}.
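Both alignment steps are easy to express in code. The sketch below is a minimal NumPy illustration under our own naming (it is not the alignment code used in our experiments); the final lines preview how the aligned replicates are summarized into per-sample means and covariances in the procedures that follow.
\begin{verbatim}
import numpy as np

def procrustes_rotation(X, Y):
    # orthonormal R minimizing ||X - Y R||_F for centered X and Y
    U, _, Vt = np.linalg.svd(X.T @ Y)   # X^T Y = U S V^T
    return Vt.T @ U.T                   # R_hat = V U^T

def generalized_procrustes(Ls, n_iter=20):
    # cyclically align a list of n x K matrices to their evolving mean
    M = np.mean(Ls, axis=0)
    for _ in range(n_iter):
        Rs = [procrustes_rotation(M, L) for L in Ls]
        M = np.mean([L @ R for L, R in zip(Ls, Rs)], axis=0)
    return Rs, M

# toy usage: B noisy, randomly rotated copies of shared coordinates
rng = np.random.default_rng(1)
base = rng.normal(size=(50, 2))
Ls = [(base + 0.05 * rng.normal(size=base.shape))
      @ np.linalg.qr(rng.normal(size=(2, 2)))[0] for _ in range(30)]
Rs, M = generalized_procrustes(Ls)
aligned = np.stack([L @ R for L, R in zip(Ls, Rs)])  # (B, n, K)
centers = aligned.mean(axis=0)                       # per-sample means
covs = [np.cov(aligned[:, i, :].T) for i in range(aligned.shape[1])]
\end{verbatim}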
\subsection{PCA and the Bootstrap} \label{subsec:pca_bootstrap} Several approaches are available for bootstrapping PCA. The total bootstrap computes $B$ principal planes by applying PCA to $B$ resampled versions of the data \cite{chateau1996assessing}. For each replication, rows are sampled with replacement, viewed as draws from a larger population. The associated principal axes may be reflected or swapped with one another, so the associated sample coordinates are not directly comparable, and coordinates must be aligned. This is often accomplished through a Procrustes or conjoint analysis \cite{elguero1988confidence}. In either case, the cloud of $B$ points associated with each sample in the resulting reference space is used to form a confidence region for it. In contrast, fixed-effects PCA views the rows of the data matrix as the entire population of interest \cite{josse2016confidence}. The source of randomness in this case is measurement noise around a low-rank model, not sampling from a larger population of rows. By fitting a measurement noise model and resampling residuals, a parametric bootstrap provides confidence regions for the true latent coordinates in the low-rank model. We also note that Bayesian factor analysis approaches sample from the posterior of the latent coordinates \cite{ren2017bayesian, ren2020bayesian}. Like the fixed-effects PCA model, these approaches specify an explicit low-rank model with measurement noise. Like the total bootstrap, underlying factors may be swapped, and alignment is necessary. \section{Methods} \label{sec:methods} This section describes ways to adapt the bootstrap approaches above to the feature learning context. Our raw data are $n$ samples $x_i \in \mathcal{X}$, where $\mathcal{X}$ is the raw data domain, e.g., images, text sentences, or audio signals. A corresponding set $y_i \in \reals$ of responses may be available. The full data are $\mathcal{D} = \left(x_i, y_i\right)_{i = 1}^{n}$. A \textit{feature learner} is a parameterized mapping $T\left(\cdot; \theta\right): \mathcal{X} \to \reals^{L}$ taking data from $\mathcal{X}$ and representing it in $\reals^{L}$. For example, in a text data application, we expect the learner to transform a set of raw word sequences into a vector of features reflecting the topic of the document. $\theta$ is estimated from data, typically through an optimization, \begin{align} \label{eq:optim} \hat{\theta} := \arg\min_{\theta \in \Theta} \mathcal{L}\left(\mathcal{D}, T\left(\cdot; \theta\right)\right) \end{align} for some loss $\mathcal{L}$. In an unsupervised feature learner, candidates $\theta \in \Theta$ are functions of $x_{1}, \dots, x_{n}$ alone. For a supervised feature learner, the class includes functions of both $x_{1}, \dots, x_{n}$ and $y$. To simplify notation, we will write $z_{i} = T\left(x_{i}; \hat{\theta}\right) \in \reals^{L}$ to denote the learned features for observation $i$. A challenge is that the learned features are not the same from one run to the next; the $l^{th}$ learned feature from run 1 need not have any relationship with the $l^{th}$ feature from run 2. This is a consequence of using stochastic optimization in equation \ref{eq:optim}. However, even if there is no direct correspondence across runs, the features may all reflect the same underlying latent structure. In particular, projections from dimensionality reductions of the learned features may be similar across runs, after applying an appropriate alignment. Suppose that data have been split into a feature learning set, indexed by $I \subset \{1, \dots, n\}$, and an inference set, indexed by $I^{C}$. The fraction $\frac{1}{n}\absarg{I}$ used for feature learning is a hyperparameter whose influence is empirically studied below. The learning set $\left(x_{i}\right)_{i \in I}$ is resampled $B$ times, leading to $B$ different feature extractors $T\left(\cdot; \hat{\theta}^{b}\right)$, which can then be applied to the full dataset, yielding learned features $\*Z_{b} \in \reals^{n \times L}$. \subsection{Nonparametric Bootstrap} \label{subsec:nonparametric_bootstrap} Like the total bootstrap, one approach to comparing embeddings across feature extractors is to perform a dimensionality reduction on each and then align the projections. For each $b$, compute a singular value decomposition, \begin{align*} \*Z_{b, I^{C}} &= \hat{\*U}_{b}\hat{\*\Sigma}_{b}\hat{\*V}_{b}^{T} \end{align*} where the index $I^{C}$ means that only features associated with the inference set are used. Define coordinates for sample $i$ with respect to the top $K$ right singular vectors using $l_{i}^{b} = \left(\hat{u}^{b}_{i1}\hat{\sigma}_{1}^{b}, \dots, \hat{u}^{b}_{iK}\hat{\sigma}_{K}^{b}\right)$. These can be stacked into $\*L_{b} \in \reals^{\absarg{I^{C}} \times K}$. A Procrustes analysis applied to $\*L_{1}, \dots, \*L_{B}$ learns a series of rotation matrices $\*R_{1}, \dots, \*R_{B}$ aligning the projections. For each sample $i$, compute a mean and covariance matrix based on the $B$ vectors $\*R_{b}l_{i}^{b}$. These are used to create $1-\alpha$ level confidence areas for each inference sample in the $K$-dimensional projection space. This approach plugs in an estimate for a ``true'' low-dimensional $\*L$, assuming that this representation is noisily observed and then subject to arbitrary rotations. Note that if the true latent representations are subject to more general transformations (e.g., translations) across runs of the extractor, this assumption may not be appropriate. The advantage of this approach is that it does not require a parametric model for simulating new versions of $\*Z_{b}$. The price to pay is that it is necessary to train $B$ feature extraction models $T\left(\cdot, \hat{\theta}_{b}\right)$, which can be a computational burden, even if it is parallelizable. Further, confidence areas are not computed for samples in the feature learning set $\left(x_{i}\right)_{i \in I}$. However, if the uncertainty of sample-level projections is assumed to vary smoothly, then a heuristic is to consider the uncertainty of a sample in $I$ as comparable to that of its nearby samples in $I^{C}$. \begin{figure} \centering \includegraphics[width=1\textwidth]{combined_summary_graphic.png} \caption{A summary of the proposed bootstrap procedures.
All begin by splitting data into training and inference sets, used for feature learning and confidence region construction, respectively. The nonparametric bootstrap (a) trains $B$ separate feature learners, each of which is used for feature extraction and dimensionality reduction before being aligned. The parametric bootstrap (b) trains a single feature learner and then simulates and aligns an ensemble of $B$ latent coordinates for each sample. The compromise (c) trains a smaller set of feature learners but further resamples residuals (as in the parametric bootstrap) to increase the number of apparent bootstrap replicates.} \label{fig:combined_summary_graphic} \end{figure} \subsection{Parametric Bootstrap} \label{subsec:parametric_bootstrap} To avoid the computational complexity associated with training $B$ feature extractors, we consider a parametric bootstrap, which simulates $\*Z_{b}$ by resampling residuals from a fitted low-rank model, analogous to the fixed-effects PCA approach \cite{josse2016confidence}. Suppose that variation across $x_{i}$ is induced by latent features $l_{i} \in \reals^{K}$. The feature learning process is modeled as, \begin{align} \label{eq:para_boot} \*Z &= \*L \*V^{T} + \*E & E_{ij} &\sim \mathcal{N}\left(0, \sigma_{\*E}^2\right) \\ y &= \*L \beta + \epsilon & \epsilon_{i} &\sim \mathcal{N}\left(0, \sigma_{\epsilon}^2\right) \end{align} where $\*L \in \reals^{n \times K}$ stacks the $l_i$ and $E_{ij}$ is the $ij^{th}$ element of $\*E$. Only $\*Z$ is available for predicting the response $y$. To simulate $\*Z_{b}$ based on a single set of observed features $\*Z$, we resample rows of $\*Z$ in the inference set $I^{C}$ and compute the associated rank-$K$ truncated SVD, $\hat{\*U}\hat{\*\Sigma}\hat{\*V}^T$. Then we draw, \begin{align*} \*Z_{b} = \left(\hat{\*U}\hat{\*\Sigma} + \*E_{b}\right)\*\Pi_{b}, \end{align*} where $\*E_{b} \in \reals^{n \times K}$ is obtained by resampling entries of $\*Z - \hat{\*Z}$ and $\*\Pi_{b} \in \reals^{K \times K}$ is a random permutation matrix, reflecting the fact that coordinates of the feature extractor need not match from one run to the next. Alignment and confidence area construction then proceed as in section \ref{subsec:nonparametric_bootstrap}. \subsection{Compromise} \label{subsec:compromise} We adapt the mechanism above to simulate $\*Z_{1}, \dots, \*Z_{B}$ in the case where we have more than one trained feature extractor $T\left(\cdot, \hat{\theta}_s\right)$, for $s = 1, \dots, S$. Set $S < B$, so the feature learning phase is less costly than in section \ref{subsec:nonparametric_bootstrap}. Begin by extracting $\*Z_{s}$ on resampled versions of the inference set, using the $S$ extractors $T\left(\cdot, \hat{\theta}_{s}\right)$. Then compute their truncated, rank-$K$ SVDs $\hat{\*U}_{s}\hat{\*\Sigma}_{s}\hat{\*V}_{s}^T$. New feature sets are simulated from, \begin{align*} \*Z_{b}= \left(\hat{\*U}_{s\left(b\right)}\hat{\*\Sigma}_{s\left(b\right)} + \*E_{b}\right)\*\Pi_{b}, \end{align*} where $s\left(b\right)$ is drawn uniformly from $1, \dots, S$ and $\*E_{b}$ resamples entries across all $\*Z_{s} - \hat{\*Z}_{s}$. Given the $B$ resampled $\*Z_{b}$, we generate confidence regions as before.
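To summarize the simulation mechanism shared by the last two subsections, here is a minimal Python sketch of the parametric simulation step. The names and the residual-resampling details are ours, and the sketch omits the subsequent alignment, which proceeds exactly as in the nonparametric case.
\begin{verbatim}
import numpy as np

def parametric_bootstrap(Z, K, B, rng):
    # simulate B feature matrices from a rank-K model of the observed Z
    n, _ = Z.shape
    draws = []
    for _ in range(B):
        rows = rng.integers(n, size=n)        # resample rows of Z
        U, S, Vt = np.linalg.svd(Z[rows], full_matrices=False)
        scores = U[:, :K] * S[:K]             # U_hat Sigma_hat
        resid = (Z[rows] - scores @ Vt[:K]).ravel()
        E = rng.choice(resid, size=(n, K))    # resampled residual entries
        Pi = np.eye(K)[rng.permutation(K)]    # random K x K permutation
        draws.append((scores + E) @ Pi)
    return draws

Z = np.random.default_rng(2).normal(size=(200, 32))
draws = parametric_bootstrap(Z, K=2, B=100, rng=np.random.default_rng(3))
\end{verbatim}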
\section{Simulations} \label{sec:simulation} We conduct two simulation studies. The first uses a low-rank model and permits calculation of coverage rates, but it is less representative of realistic feature learning settings. The second generates images using a spatial point process with variation reflecting a small set of latent parameters. The distributed feature learning associated with this setting prevents us from computing the coverage of confidence ellipsoids, but its complexity more accurately reflects practice. \subsection{Low-rank model} \label{subsec:low_rank_model} The first simulation generates samples $\*X \in \reals^{n \times D}$ using, \begin{align} \label{eq:low_rank1} \*X &= \*U\*\Sigma \*V^{T} + \*E & \*\Sigma &= \text{diag}\left(c\*1_{K}\right) & \*U &\sim \textnormal{Haar}\left(n, K\right)\\ y & = \*U \*\Sigma \beta + \epsilon & \beta &= \left(b \*1_{\frac{K}{2}}, -b \*1_{\frac{K}{2}}\right) & \*V &\sim \textnormal{Haar}\left(D, K\right)\\ &&&& E_{ij} &\sim \mathcal{N}\left(0, \sigma_{\*E}^{2} \right)\\ &&&& \epsilon_{i} &\sim \mathcal{N}\left(0, \sigma_{y}^2\right) \end{align} where $\textnormal{Haar}\left(n, K\right)$ denotes a random orthonormal matrix with $n$ rows and $K$ columns. $\*X$ is a random rank-$K$ matrix observed with Gaussian noise, and $y$ is a response depending on the latent coordinates of each row. The specific parameters we use are $n = 1000, D = 100, c = 100, b = 1, K = 2, \sigma^2_{\*E} = 0.1$. Note that this is a model of the data $\*X$, not the features $\*Z$, as in equation \ref{eq:para_boot}. As a feature extractor, we use a randomly perturbed and permuted SVD-based estimate of the latent coordinates, \begin{align} \begin{split} \label{eq:low_rank2} \*X &= \hat{\*U}\hat{\*\Sigma}\hat{\*V}^T \\ \*Z &:= \left(\hat{\*U}_{\hat{K}}\hat{\*\Sigma}_{\hat{K}} + \tilde{\*E}\right) \*\Pi \end{split} \end{align} where the subscript $\hat{K}$ denotes that only the top $\hat{K}$ left singular vectors and values are used and $\tilde{E}_{ij} \sim \mathcal{N}\left(0, 0.1^2\right)$. The permutation $\*\Pi$ and noise $\tilde{\*E}$ mimic the variation across retrained feature extractors. Given this source data and feature extractor, we apply all three bootstrap methods, generating $B = 1000$ bootstrap replicates in each case. For the compromise approach, we train $S = 100$ separate feature extractors. The resulting 95\% confidence ellipses are shown in Figure \ref{fig:low_rank_projections}. Qualitatively, the parametric and nonparametric bootstrap approaches provide similar output. A gradient across the colors of $y$ reflects accurate estimation of the true latent factors. We have Procrustes aligned the true coordinates $\*U \*\Sigma$ (squares) with the $B$ bootstrap replicates, and the fact that most squares are contained within ellipses suggests that the bootstrap accurately reflects uncertainty in the estimated projections. In fact, for the parametric and nonparametric approaches, the empirical coverage rates of these ellipses are 96.4\% and 95.2\%, respectively. On the other hand, the compromise approach appears to be overly conservative, with a coverage of 99.9\%. This behavior arises in the remaining simulations and data analysis as well. \begin{figure} \centering \includegraphics[width=\textwidth]{low_rank_projections.png} \caption{Projections from the low-rank data simulation. Each ellipse gives the confidence area for the latent coordinates of one sample. Squares are the positions of the true low-rank coordinates after Procrustes rotating them to align with the centers of the ellipses. The confidence areas for the nonparametric bootstrap are smaller than those for the parametric bootstrap.
Those for the compromise method are conservative.} \label{fig:low_rank_projections} \end{figure} \subsection{Spatial point process} \label{subsec:point_process} In this simulation, we generate a collection of images using a point process whose parameters vary from image to image. Intuitively, each image represents cells viewed through a microscope, and different latent parameters influence the cell ecosystem. A single response value $y$ is associated with these latent parameters. Example images for varying $y$ are given in Figure \ref{fig:matern_example}. We generate 10,000 of these $64 \times 64 \times 3$-dimensional RGB images. \subsubsection{Generation} \label{subsubsec:generation} Locations of cells are governed by an intensity function drawn from a two-dimensional marked Log Cox Matern Process (LCMP) \cite{diggle2013spatial}. Recall that a Matern process is a Gaussian process with covariance function, \begin{align} \label{eq:cov_lcmp} C_{\nu, \alpha}(\|x - y\|)=\sigma^{2} \frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2 \nu} \frac{\|x - y\|}{\alpha}\right)^{\nu} K_{\nu}\left(\sqrt{2 \nu} \frac{\|x - y\|}{\alpha}\right), \end{align} where $\alpha$ acts like a bandwidth parameter and $\nu$ controls roughness. Our LCMP has $R$ classes (cell types) and can be constructed as follows. First, a nonnegative process $\Lambda\left(x\right)$ is simulated along the image grid, $\Lambda\left(x\right) \sim \exp\left(\mathcal{N}\left(0, \mathbf{C}_{\nu_{\Lambda}, \alpha_{\Lambda}}\right)\right)$, where $\mathbf{C}_{\nu_{\Lambda}, \alpha_{\Lambda}}$ is the covariance matrix induced by equation \ref{eq:cov_lcmp}. This is a baseline intensity that determines the location of cells, regardless of cell type. $R$ further processes are then simulated, $B_{r}\left(x\right) \sim \exp\left(\beta_{r} + \mathcal{N}\left(0, \mathbf{C}_{\nu_{B}, \alpha_{B}}\right)\right)$. These processes reflect the relative frequencies of the $R$ classes at any location $x$; the intercept $\beta_r$ makes a class either more or less frequent across all positions $x$. Given these intensity functions, we can simulate $N$ cell locations by drawing from an inhomogeneous Poisson process with intensity $\Lambda\left(x\right)$. For a cell at location $x$, we assign it cell type $r$ with probability $\frac{B_{r}^{\tau}\left(x\right)}{\sum_{r^\prime = 1}^{R} B^{\tau}_{r^\prime}\left(x\right)}$. Here we have introduced a temperature $\tau$ controlling the degree of mixedness between cell types at a given location. \begin{figure} \centering \includegraphics[width=\textwidth]{generation_mechanism} \caption{Example images, for low (top), average (middle), and high (bottom) values of $y_i$. For each sample, three relative intensity functions $B_{r}\left(x\right)$ are generated, shown as greyscale heatmaps. Samples drawn from each process are overlaid as circles. The final images combine points across processes, removing the underlying intensity function, which is not available to the feature learner. Small $y_i$ values are associated with smoother, less structured intensity functions.} \label{fig:matern_example} \end{figure} To complete the procedure for simulating images, we add two final sources of variation: the number of cells and the cell size. The number of cells per image is drawn uniformly from 50 to 1000. The cells from class $r$ are drawn with a random $\text{Gamma}\left(5, \lambda_{r}\right)$ radius. A summary of all parameters used to generate each image is given in Supplementary Table \ref{tab:sim_params}, and a minimal sketch of the generation procedure follows below.
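The sketch below illustrates this generation procedure in Python under our own naming and simplifications: it is not the simulation code from the compendium, and it approximates the inhomogeneous Poisson draw by sampling grid locations in proportion to the discretized intensity.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv

def matern_cov(pts, sigma2=1.0, nu=1.5, alpha=0.2):
    # Matern covariance from the equation above, on a set of grid points
    d = cdist(pts, pts)
    d[d == 0] = 1e-10
    u = np.sqrt(2 * nu) * d / alpha
    return sigma2 * 2 ** (1 - nu) / gamma(nu) * u ** nu * kv(nu, u)

def simulate_image(n_cells, R=3, tau=1.0, grid=32, seed=0):
    rng = np.random.default_rng(seed)
    xs = np.linspace(0, 1, grid)
    pts = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
    chol = np.linalg.cholesky(matern_cov(pts) + 1e-6 * np.eye(len(pts)))
    lam = np.exp(chol @ rng.normal(size=len(pts)))       # baseline intensity
    Bs = [np.exp(rng.normal() + chol @ rng.normal(size=len(pts)))
          for _ in range(R)]                             # relative intensities
    # approximate Poisson sampling: pick grid locations proportional to lam
    locs = rng.choice(len(pts), size=n_cells, p=lam / lam.sum())
    weights = np.stack([b[locs] ** tau for b in Bs], axis=1)
    types = np.array([rng.choice(R, p=row / row.sum()) for row in weights])
    return pts[locs], types

locs, types = simulate_image(200)
\end{verbatim}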
Each parameter is drawn uniformly within its range, which has been chosen to provide sufficient variation in image appearance. These parameters are the ``true'' underlying features associated with the simulated images; they give the most concise description of the variation observed across the images. The response $y$ is a hand-specified linear combination of these parameters; the reasoning behind the combination is discussed in the \texttt{generate.Rmd} script in the accompanying compendium, see Supplementary Section \ref{sec:reproducibility}. \subsubsection{Experiments} \label{subsubsec:experiments} We study the influence of the following parameters, \begin{itemize} \item Learning vs. inference split sizes. We vary the proportion of data used for learning and inference. We sample $I$ so that $\frac{1}{n}\absarg{I} \in \{0.15, 0.5, 0.9\}$. \item Models trained. For feature extractors, we train CNN, VAE, and RCF models on the learning split $I$. \item Model complexity. We train VAEs whose hidden layer has dimensionality $L \in \{32, 64, 128\}$. Similarly, we vary the number of first-layer convolutional filters in the CNN model across $L \in \{ 32, 64, 128\}$. For the RCF, we use $L \in \{256, 512, 1024\}$ random features. This increase reflects the fact that more random features must be considered before a subset of predictive ones is identified. \item Inference strategy. We use the parametric, nonparametric, and compromise bootstrap strategies from section \ref{sec:methods} to estimate confidence areas for the projections obtained by feature learners. \end{itemize} Figure \ref{fig:distributed_hm} shows the activations of learned features across 2000 images for two perturbed versions of the training data when 90\% of the data are used for inference and $L = 64$ (CNN, VAE) and $512$ (RCF). The learned features correspond to, \begin{itemize} \item CNN: Activations from the final hidden layer of neurons, used directly as input for the regression. \item VAE: Spatially-pooled activations from the middle, encoding layer of the variational autoencoder. \item RCF: The spatially-pooled activations corresponding to each random convolutional feature. \end{itemize} Note that, across algorithms, there is no simple correspondence between learned and source features (i.e., parameters of the underlying simulation). Instead, there are clusters of learned features, each corresponding to a pattern across multiple source features. We also find subsets of features across all models that are only weakly correlated with any source feature. This has been referred to as distributed representation learning \cite{hinton1984distributed, le2014distributed}. Certain source features appear ``easier'' to represent than others, in the sense that more of the learned features are strongly correlated with them. Many features are correlated with $N_{i}$, the total number of cells in the image, and $\lambda_{i1}$, the size of the cells from Process 1. Depending on the model, the bandwidth $\alpha_{ir}$, roughness $\nu_{ir}$, and prevalence $\beta_{ik}$ parameters are either only weakly or not at all correlated with learned features. Even when features detect variation in $\alpha_{ir}$ and $\nu_{ir}$, they cannot disambiguate between these two parameters. Finally, the CNN and VAE features tend to be more clustered, with strong correlation across several source features. In contrast, the RCF features show more gradual shifts in correlation strength.
They also show relatively little variation in correlation strength across features other than $\lambda_{i1}$ and $N_{i}$. \begin{figure} \centering \includegraphics[width=\textwidth]{combined_hm} \caption{Each feature learning algorithm learns a distributed representation of the true underlying features in the simulation. Within each heatmap, rows correspond to the parameters in Supplementary Table \ref{tab:sim_params}. Columns are activations of learned features; they have been reordered using the \texttt{superheat} package \protect\cite{barter2018superheat}. The color of a cell gives the correlation between true and learned features. Blue and burgundy encode positive and negative correlations, respectively.} \label{fig:distributed_hm} \end{figure} Example confidence areas across models and bootstrapping approaches are given in Figure \ref{fig:simulation_projection_combined}a. In contrast to Figure \ref{fig:low_rank_projections} for the low-rank simulation, the areas from the nonparametric bootstrap are larger than those from the parametric bootstrap. This disagreement suggests that the proposed mechanism of equations \ref{eq:low_rank1} and \ref{eq:low_rank2} is insufficient for characterizing the differences in learned features that arise between runs of more complex feature extractors; multiple runs must be used to account for randomness in algorithmic feature learning. As before, the compromise bootstrap has larger confidence areas than either the parametric or nonparametric approach on its own. In general, the RCF tends to have smaller confidence areas compared to the CNN and VAE. \begin{figure} \centering \includegraphics[width=\textwidth]{simulation_projection_combined.png} \caption{(a) The 95\% confidence areas associated with projections from the spatial point process simulation. Each point corresponds to one image. Only the setting with 90\% of the data used for feature learning and the midsize models ($L = 64$ for the CNN and VAE, $L = 512$ for the RCF) is shown. (b) A view of confidence areas for the CNN across a range of learning split fractions ($0.15, 0.5, 0.9$) and model complexities ($L = 32, 64, 128$), all using the nonparametric approach.} \label{fig:simulation_projection_combined} \end{figure} Figure \ref{fig:simulation_projection_combined}b shows confidence regions for a single model (CNN) and bootstrap procedure (parametric) across a range of model complexities and split proportions. For larger $L$, projections are further from the origin, suggesting larger activations on average. The fraction of data used for feature learning does not appear to affect the strength of the association with the response $y$ or the size of the projection uncertainties. Corresponding figures for the other models are provided in Supplementary Section \ref{sec:supplementary_figures}. \section{Data Analysis} \label{sec:data_analysis} We next analyze the spatial proteomics dataset reported in \cite{keren2018structured}, which found a relationship between the spatial organization of Triple Negative Breast Cancer (TNBC) tissues and disease progression. In a classical proteomics study, the expression levels for a set of proteins are measured for a collection of cells, but the cell locations are unknown. In contrast, these data provide, for each patient, (1) an image delineating cell boundaries and (2) the protein expression levels associated with each cell in the images. We work only with the spatial cell delineations, not the protein expression levels.
This allows us to study feature learning within the images without having to worry about linking expression and image data, which is itself a complex integration problem. The data are $2048 \times 2048$-dimensional images, one for each of 41 patients. Each pixel has a value 1 through 7 encoding to which of 7 categories of tumor or immune cell types the pixel belongs. To ensure that the cell types are treated as categorical, we transform pixels to their one-hot encodings, resulting in a collection of $2048 \times 2048 \times 7$ binary matrices. \begin{figure} \centering \includegraphics[width=\textwidth]{example_cells} \caption{Example patches from the TNBC data. Panels are ordered by $y_i$, the (log) fraction of cells they contain that belong to the tumor. This provides signal for the supervised algorithms, whose goal is to correctly place patches from new patients along this gradient.} \label{fig:example_cells} \end{figure} To set up a prediction problem, we split each image into $512 \times 512 \times 7$ patches. These patches are our $x_{i}$. Patches from 32 of the patients are reserved for feature learning. Four among these 32 are used as a development split, to tune parameters of the feature learning algorithms. As a response variable, we use $y_{i} = \log\left(\frac{\#\{\text{Tumor cells in }x_{i}\}}{\#\{\text{Immune cells in }x_i\}}\right)$. These $y_i$ provide signal for the supervised feature learners. Example cell patches are shown in Figure \ref{fig:example_cells}. We fit the same models (CNN, VAE, and RCF) as discussed in section \ref{sec:simulation}, varying model complexity over the same parameters as before. As a baseline, we compare against a ridge regression with pixelwise composition features. We train a model with $y$ as a response and the average number of pixels belonging to each of the cell-type categories as a $7$-dimensional feature vector. This helps to determine whether the model has learned interesting features for counting cells, like cell size and boundaries, rather than simply averaging across pixel values. Indeed, Figure \ref{fig:tnbc_baseline} shows that, except in models with low capacity $L$, performance is improved when learning features algorithmically. \begin{figure} \centering \includegraphics[width=\textwidth]{tnbc_baseline} \caption{Relative performance of feature learning strategies on the TNBC data. Linked points come from the same bootstrap replicate. The black line gives the baseline ridge regression approach using the manually generated raw pixel counts for each of the cell types as predictors. Predictions from the development split are omitted; this split has few patches, and the estimated MSEs have high variance.} \label{fig:tnbc_baseline} \end{figure} To characterize features and their uncertainty, we perform $B = 100$ iterations for each of the parametric, nonparametric, and compromise bootstrap strategies. In each case, the samples are generated from patches reserved for inference. Two-dimensional projections for fixed model complexities are given in Figure \ref{fig:64_coordinates}. As visible from the color gradients, all methods learn to differentiate between patches with small and large values of $y_{i}$, even the VAE, which is unsupervised. Comparing rows, the RCF appears to give the most stable representations, while the coordinates for the VAE have larger confidence areas in general. For all models, some projections appear to be more uncertain than others.
Moreover, certain axis directions tend to be more uncertain than others, as reflected by the eccentricity of the ellipses. For example, viewing estimates from the nonparametric approach, the VAE projections have the highest uncertainty for low values of Dimension 2. Analogously, high values of Dimension 2 have high uncertainty in the RCF. For the CNN and RCF, the three bootstrap approaches give qualitatively similar conclusions about regions with higher and lower uncertainty, though the average sizes of the confidence areas differ. The size of confidence areas for the compromise approach in this case seems intermediate between those of the parametric and nonparametric approaches. For the VAE, the bootstrap approaches do not appear to agree. The compromise approach generally gives much larger confidence areas, potentially reflecting a failure of the Procrustes alignment in this case. Though this figure only displays one $L$ for each model, we find few differences in the projection uncertainty across models with different complexities; see Supplementary Figure \ref{fig:nonparametric_coordinates}. Figure \ref{fig:tnbc_imagegrid-PCA-1-2} overlays example patches onto aligned coordinates. In the CNN, samples in the bottom right have a high fraction of immune cells, those on the top left are mostly tumor, and those at the top right have lower cell density. In light of the confidence regions in Figure \ref{fig:nonparametric_coordinates}, embeddings of immune-cell-rich images tend to show more variability across bootstraps. In the RCF, the first dimension similarly reflects the tumor vs. immune gradient. The lower uncertainty of projections along this axis suggests that this gradient is reliably captured across runs of the feature extractor. The lower right region of the VAE has high cell diversity, and the upper left has lower density. It seems that regions with higher cell diversity and larger proportions of immune cells also have more uncertain embeddings. From the top right to the bottom left, there is again a tumor-to-immune transition. \begin{figure} \centering \includegraphics[width=\textwidth]{64_coordinates.png} \caption{Confidence areas from the TNBC application. Points are shaded by $y_{i} = \log\left(\frac{\#\{\text{Tumor cells in }x_{i}\}}{\#\{\text{Immune cells in }x_i\}}\right)$, which provides the supervisory signal to the CNN and RCF during feature extraction. Models and bootstrap procedures are arranged along rows and columns, respectively. Only the models with intermediate complexity ($L = 64$ for the CNN and VAE, $L = 512$ for the RCF) are shown. Analogous figures for other $L$ are given in the supplementary materials.} \label{fig:64_coordinates} \end{figure} \begin{figure} \centering \makebox[\textwidth][c]{\includegraphics[width=\textwidth]{tnbc_image_grid}} \caption{A version of the nonparametric bootstrap column from the dimensionality reductions in Figure \ref{fig:64_coordinates}, overlaying representative samples across the learned feature space. Cells are color coded as in Figure \ref{fig:example_cells}. Note that the overall shape of the region in which images are displayed mirrors the shape of the cloud of points in Figure \ref{fig:64_coordinates}.} \label{fig:tnbc_imagegrid-PCA-1-2} \end{figure} \section{Discussion} \label{sec:discussion} We have adapted existing approaches for evaluating the uncertainty of projections in dimensionality reduction for use in the context of algorithmically learned features.
We have conducted an empirical study using simulations of varying complexity and a spatial proteomics data analysis problem, applying a representative suite of feature learning algorithms. We found that in more complex settings, a parametric bootstrap based on a single set of learned features does not reflect the degree of uncertainty present when comparing features derived from independently trained models. Our results raise several questions for further study. It is natural to ask to what extent similar behaviors are exhibited across other data domains, model types, or training regimes. For example, it would not be unusual to represent the cell data in our case study using a marked graph linking neighboring cells. Do the features learned by a graph autoencoder have similar stability properties? More generally, are there classes of feature learning algorithms that share similar behavior from the stability perspective? In other domains, we may ask whether our methods can be adapted to text or audio data. Though the proposed bootstraps provide similar conclusions in the low-rank simulation, they differ in the point process simulation and the spatial proteomics data analysis. This suggests that the mechanisms in equations \ref{eq:low_rank1} and \ref{eq:low_rank2} may not reflect the behavior of repeated feature learning in more complex situations. The assumption that features can be rotated to align runs may also be problematic, and more general transformations across feature learning runs are plausible. For the CNN and RCF models in the data analysis example, we find that confidence areas for the compromise approach are intermediate between those for the parametric and nonparametric bootstraps. However, in both simulations, it tends to be larger, and we do not have an explanation for this behavior. Finally, though we have empirically found coverage rates for the parametric and nonparametric bootstraps to be acceptable in the low-rank simulation, we have not theoretically studied the properties of these procedures. Such a study could clarify differences that arise in projection uncertainties between bootstrap and feature learning approaches. \section*{Acknowledgments} The author thanks Susan Holmes, Karl Rohe, three reviewers, and the editor for feedback which improved the manuscript. Research was performed with assistance of the UW-Madison Center For High Throughput Computing (CHTC). \bibliographystyle{apacite}
{ "timestamp": "2022-02-02T02:08:59", "yymm": "2202", "arxiv_id": "2202.00180", "language": "en", "url": "https://arxiv.org/abs/2202.00180" }
\section{Introduction} Successful training of large neural networks heavily depends on the choice of a good optimizer as well as careful tuning of hyperparameters such as the learning rate, momentum, weight decay, etc. Among the hyperparameters, the learning rate schedule is one of the most important elements for achieving the best accuracy. In many cases, the schedule consists of an initial ramp-up followed by a number of stair-case decays throughout the training phase. The schedule is typically tailored manually for the problem and requires re-training the model numerous times. The most recently proposed optimizers for deep neural networks (RMSProp~\cite{rmsprop}, AdaGrad~\cite{adagrad}, Adam~\cite{adam}, etc.) have been based on adapting the gradient via a (diagonal) pre-conditioner matrix. However, these techniques still require a carefully tuned learning rate schedule to achieve the optimal performance. Furthermore, little has been done toward adapting the per-coordinate or the overall step-size. The common idea among the few available approaches~\cite{dbd,rprop,nicol,hypergrad} is to move faster along directions that are making progress and punish those that alternate often. However, for the previous methods, 1) the formulation is not general enough to be compatible with different optimizers; and 2) the update equations are based on inferior heuristics or, in some cases, use incorrect gradients~\cite{nicol} (as will be discussed later). In this paper, we aim to unify such approaches in a more rigorous manner. Concretely, we propose an abstraction in the form of a momentum update by passing the pre-conditioned gradient proposed by an arbitrary internal optimizer to the meta algorithm. We then make the step-size adaptive by introducing the following hyperparameters: one overall step-size scale as well as local gain factors for each coordinate. The scale and gains are non-negative and trained using the Unnormalized Exponentiated Gradient (EGU) updates~\cite{eg}. As a brief introduction, EGU minimizes a function $f$ by adding a relative entropy (a.k.a. Kullback-Leibler) divergence~\cite{kl} as an \emph{inertia} term. The goal of adding the inertia term is to keep the updated parameters $\bm{\theta}^{t+1}$ close to the previous parameters $\bm{\theta}^t$ at step $t$: \begin{equation} \label{eq:min} \bm{\theta}^{t+1} = \argmin_{\,\tilde{\thet} \succeq \bm{0}}\big\{\sfrac{1}{\eta}\, D_{\text{\tiny RE}}(\tilde{\thet}, \bm{\theta}^t) + f(\tilde{\thet})\big\}\,, \end{equation} where $\eta > 0$ is a learning rate parameter and \[ D_{\text{\tiny RE}}(\u, \v) = \sum_i\big(u_i \log\frac{u_i}{v_i} - u_i + v_i\big)\, . \] The update multiplies each parameter by an exponentiated gradient factor: \begin{equation} \label{eq:egu} \bm{\theta}^{t+1} = \bm{\theta}^t \odot \exp\big(-\eta\, \nabla_{\bm{\theta}} f(\bm{\theta}^t)\big)\, , \qquad\text{(EGU)} \end{equation} where $\nabla_{\bm{\theta}} f(\bm{\theta}^t)$ denotes the gradient of the objective function $f(\bm{\theta})$ w.r.t. $\bm{\theta}$ evaluated at $\bm{\theta}^t$ and $\odot$ denotes element-wise product\footnote{The exact minimization of \eqref{eq:min} uses the gradient $\nabla_{\bm{\theta}} f(\bm{\theta}^{t+1})$ which is then approximated by $\nabla_{\bm{\theta}} f(\bm{\theta}^t)$ in the EGU update. More on this later.}. The multiplicative form of the update ensures $\bm{\theta}^{t+1} \succcurlyeq \bm{0}$ at any time.
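To make the update concrete, the following minimal NumPy sketch (our illustration; the quadratic objective, target, and step size are arbitrary choices, not taken from the text) applies the EGU update~\eqref{eq:egu} to a toy problem, showing how the multiplicative form keeps the parameters non-negative while damping coordinates whose optimum is at zero:
\begin{verbatim}
import numpy as np

def grad_f(theta, target):
    # Gradient of the toy objective f(theta) = 0.5 * ||theta - target||^2.
    return theta - target

theta = np.array([1.0, 1.0, 1.0])
target = np.array([2.0, 0.0, 0.5])  # illustrative: one large, one zero, one small
eta = 0.1

for _ in range(200):
    # EGU step: theta <- theta * exp(-eta * grad f(theta))
    theta = theta * np.exp(-eta * grad_f(theta, target))

print(theta)  # stays non-negative; the coordinate with target 0 is damped toward 0
\end{verbatim}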
The properties of the EGU update have been studied extensively in the online learning literature~\cite{eg,hedge,percwin,singer,pnorm,warmuth2008}. Specifically, it has been shown that the EGU update converges significantly faster than gradient descent in cases when only a small subset of the dimensions is relevant~\cite{winnow,matrixwinnow,wincolt}. As a result, EGU is extremely efficient in discovering the relevant dimensions while immediately damping the irrelevant ones. Also, the EGU update naturally maintains the non-negativity of the parameters. Finally, the multiplicative form of the update allows exploring a wider range of values for the parameters more rapidly. \paragraph{Contributions} In this paper, we build upon the ideas of~\cite{dbd,nicol,hypergrad} and introduce a unified approach for step-size adaptation based on the unnormalized exponentiated gradient updates. The main goal of the paper is to revisit the previously developed ideas in different domains and show cases where such updates are effective on large deep neural networks. In summary: \vspace{-0.1cm} \begin{itemize}[leftmargin=4mm] \item We introduce a step-size adaptation framework with per-coordinate gains as well as an overall step-size scale for the update. We apply the EGU updates to these hyperparameters. \item Our formulation is versatile and accepts any pre-conditioned gradient by an adaptive gradient optimizer as input. Thus, it can be coupled with a wide range of commonly used optimizers. \item We show extremely promising use cases for our adaptive step-size method and discuss potential extensions of such adaptive methods. \item We show the efficacy of our method by conducting an extensive set of experiments on large-scale neural networks on benchmark datasets and publish the code for reproducibility at:~\url{https://users.soe.ucsc.edu/~eamid/funnel.html}. \end{itemize} \subsection{Related Work} Introducing per-coordinate gains dates back to the Delta-Bar-Delta (DBD) method~\cite{dbd} where the gains are adaptively updated using the sign agreements of the current gradient and an exponential moving average (EMA) of the past gradients. DBD uses a mixture of additive and multiplicative updates where the gains for the coordinates with agreeing gradient signs are increased by a constant amount and the remaining gains are damped by a multiplicative factor. Later, these ideas were extended to different settings~\cite{survey-dbd,sutton}. More relevantly, a local gain adaptation method was introduced in~\cite{nicol} where the goal was to update the gains using the EGU updates. However, the wrong gradient term (gradient w.r.t. $\log$-gains instead of gradient w.r.t. gains) was used in the final updates, i.e. \[ \bm{\theta}^{t+1} = \bm{\theta}^t \odot \exp\big(-\eta\, \nabla_{\log\bm{\theta}} f(\bm{\theta}^t)\big) = \bm{\theta}^t \odot \exp\big(-\eta\, \bm{\theta}^t \odot \nabla_{\bm{\theta}} f(\bm{\theta}^t)\big)\, .\qquad\text{(incorrect EGU)} \] Thus this update simply amounts to GD updates on the $\log$-gains of the parameters followed by an exponentiation. A more recent approach~\cite{precond} uses a PSD pre-conditioner gain matrix, which is trained with gradient descent updates on the factorized form. In this paper, we only consider the diagonal pre-conditioner and leave the extensions to the matrix case to future work.
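To make the distinction concrete, here is a small illustrative comparison (ours, with arbitrary toy values) of the correct EGU update against the ``incorrect EGU'' variant above, which scales the exponent by $\bm{\theta}$ and therefore barely moves coordinates that are currently small:
\begin{verbatim}
import numpy as np

theta = np.array([0.01, 1.0])   # one tiny and one moderate parameter
grad = np.array([-1.0, -1.0])   # the same negative gradient for both coordinates
eta = 0.5

egu = theta * np.exp(-eta * grad)               # correct: exponent is -eta * grad
log_gain = theta * np.exp(-eta * theta * grad)  # incorrect: exponent scaled by theta

print(egu)       # both coordinates grow by the same factor exp(0.5)
print(log_gain)  # the tiny coordinate is almost frozen by the extra theta factor
\end{verbatim}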
In terms of step-size scale adaptation, the more recent Hypergradient Descent method~\cite{hypergrad} introduces a single adaptive scale parameter into the existing optimizers and applies gradient descent updates on the scale. The authors also propose a multiplicative form of the update which is proportional to the value of the scale parameter. We show that this multiplicative form of the update is a crude approximation of the EGU update, used in our method. \section{A Meta Algorithm for Adaptive Step-size} Our adaptive learning rate meta algorithm accepts as input a pre-conditioned gradient from an \emph{internal optimizer} $\mathcal{D}$ based on the value of the current weight parameters $\bm{w}^t$ (and possibly, cumulative statistics of all the previous steps). The internal optimizer interacts with the meta algorithm only via the values of the parameters, and its role is only to generate the pre-conditioned gradients. Let $\tilde{\g}^t \coloneqq \widetilde{\nabla}_{\bm{w}} L(\bm{w}^t|\, \mathcal{X}^t)$ denote the pre-conditioned gradient generated by the internal optimizer at step $t$ using the batch of data $\mathcal{X}^t$. Our \textbf{``Funnel''} meta algorithm applies the following update: \begin{equation} \label{eq:egdd-main} \boxed{ \begin{split} \bm{\nu}^{t+1} & = \mu\, \bm{\nu}^t + \eta\, \big(\bm{p}^{t+1} \odot \tilde{\bm{g}}^t\big)\\ \bm{w}^{t+1} & = \bm{w}^t - s^{t+1}\, \bm{\nu}^{t+1}\, . \end{split} } \end{equation} The update~\eqref{eq:egdd-main} in fact resembles a heavy-ball momentum update\footnote{A similar abstraction can be applied using Nesterov momentum~\cite{nesterov}.} with \emph{base learning rate} $\eta$ and \emph{momentum} hyperparameter $\mu$, with the addition of two extra elements: 1) a non-negative per-coordinate \emph{gain} vector $\bm{p} \succcurlyeq \bm{0}$ which component-wise multiplies the pre-conditioned gradient vector, and 2) a non-negative step-size \emph{scale} $s \geq 0$ which scales the final step. The goal of the gain hyperparameter $\bm{p}$ is to independently scale each coordinate of the pre-conditioned gradient vector. On the other hand, the step-size scale $s$ adjusts the final update that is applied to the parameters. The gain as well as the scale hyperparameters are updated along with the weights at each step. Since both gains and scale are non-negative, a natural choice for the update is the EGU update~\eqref{eq:egu}. The multiplicative form of EGU updates allows these hyperparameters to effectively adapt to the dynamics of training by moving over a wider range of values more rapidly. The updates are applied by first calculating the gradient of the loss w.r.t. each hyperparameter. That is, \begin{equation*} \begin{split} \label{eq:dbd-gain-up} \nabla_{\bm{p}} L(\bm{w}^{t}|\, \mathcal{X}) & = -\nabla_{\bm{w}} L(\bm{w}^{t}|\, \mathcal{X}) \odot s^{t+1}\,\frac{\partial}{\partial \bm{p}}\big(\underbrace{\mu\, \bm{\nu}^t + \bm{p} \odot \nabla_{\bm{w}} L(\bm{w}^{t-1}|\, \mathcal{X})}_{\bm{\nu}^{t}}\big)\\[-2mm] & \approx -s^{t+1}\nabla_{\bm{w}} L(\bm{w}^{t}|\, \mathcal{X}) \odot \nabla_{\bm{w}} L(\bm{w}^{t-1}|\, \mathcal{X})\, , \end{split} \end{equation*} where we omit the long-term dependencies on the gain hyperparameters. Let $\bm{g}^t \coloneqq \nabla_{\bm{w}} L(\bm{w}^t|\, \mathcal{X}^t)$ denote the gradient of the loss using the batch of data $\mathcal{X}^t$. We also remove the $s^{t+1}$ term from the gradient to reduce the inter-dependency of the gains and the scale.
Thus, the exponentiated gradient gain update becomes \begin{equation} \label{eq:gain-pre} \bm{p}^{t+1} = \bm{p}^t \odot \exp\big(\gamma_p\,\bm{g}^t \odot \tilde{\bm{g}}^{t-1} \big)\, , \end{equation} where $\gamma_p \geq 0$ is the gain learning rate hyperparameter. Notice that update~\eqref{eq:gain-pre} depends on the value of the gradient on the current batch $\mathcal{X}^t$ and the value of the pre-conditioned gradient at the previous batch $\mathcal{X}^{t-1}$. To account for the stochasticity of the gradients due to different batches of data, we replace the second term in the gradient by an exponential moving average (EMA) of all the past pre-conditioned gradients, that is, \begin{equation} \label{eq:gain-up} \bm{p}^{t+1} = \bm{p}^t \odot \exp\big(\gamma_p\,\bm{g}^t \odot \emat^t \big)\, , \end{equation} where \[ \bm{m}^{t+1} = \beta\, \bm{m}^t + (1 - \beta)\, \tilde{\g}^t \text{ \,\, and \,\, } \emat^{t+1} = \frac{\bm{m}^{t+1}}{1 - \beta^{t+1}}\, . \] The hyperparameter $0 \leq \beta \leq 1$ is the decay factor for the pre-conditioned gradient EMA and $\emat^t$ corrects the initialization bias of $\bm{m}^t$ at zero. Similarly, for the step-size scale hyperparameter $s$, we have \[ \nabla_{s} L(\bm{w}^{t}|\, \mathcal{X}) = -\nabla_{\bm{w}} L(\bm{w}^{t}|\, \mathcal{X}) \cdot \bm{\nu}^t\, . \] Thus, applying the EGU updates results in \begin{equation} \label{eq:scale-up} s^{t+1} = s^t \exp\big(\gamma_s\,\bm{g}^t \cdot \bm{\nu}^{t} \big)\, , \end{equation} where $\gamma_s \geq 0$ is the scale learning rate hyperparameter. The pseudo-code for the Funneled Stochastic Gradient Descent with Momentum is shown in Algorithm~\ref{alg:funnel}.
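The following is a minimal NumPy sketch (ours) of one unnormalized Funnel step with plain SGD as the internal optimizer, so that the pre-conditioned gradient equals the raw gradient; all constants are illustrative defaults rather than values from the paper:
\begin{verbatim}
import numpy as np

def funnel_step(w, g, g_tilde, state, t, eta=0.1, mu=0.9, beta=0.9,
                gamma_p=1e-4, gamma_s=1e-3):
    """One unnormalized Funnel step; g is the gradient and g_tilde the
    pre-conditioned gradient (equal to g when the internal optimizer is SGD)."""
    nu, p, s, m = state["nu"], state["p"], state["s"], state["m"]
    # Bias-corrected EMA of past pre-conditioned gradients (zero at t = 0).
    m_hat = m / (1.0 - beta ** t) if t > 0 else np.zeros_like(m)
    p = p * np.exp(gamma_p * g * m_hat)        # EGU update of per-coordinate gains
    s = s * np.exp(gamma_s * np.dot(g, nu))    # EGU update of the step-size scale
    nu = mu * nu + eta * (p * g_tilde)         # momentum on the gained gradient
    w = w - s * nu                             # final update, scaled by s
    m = beta * m + (1.0 - beta) * g_tilde      # update the EMA for the next step
    state.update(nu=nu, p=p, s=s, m=m)
    return w

# Illustrative usage on a toy quadratic loss L(w) = 0.5 * ||w||^2.
w = np.ones(3)
state = dict(nu=np.zeros(3), p=np.ones(3), s=1.0, m=np.zeros(3))
for t in range(100):
    g = w.copy()                   # gradient of the toy loss
    w = funnel_step(w, g, g, state, t)
print(w)
\end{verbatim}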
\section{Discussion of the Updates} In this section, we provide an intuitive explanation of the updates~\eqref{eq:gain-up} and \eqref{eq:scale-up} in terms of the gradient flow. We also discuss normalized updates and their connection to previous methods. \begin{figure*}[t!] \vspace{-1.0cm} \begin{center} \subfigure[]{\includegraphics[width=0.38\textwidth]{figs/fig1.pdf}} \subfigure[]{\includegraphics[width=0.38\textwidth]{figs/fig2.pdf}} \vspace{-0.25cm} \end{center} \vspace{0.8cm} \caption{Intuitive explanation of the updates in terms of adjusting the scale and gains based on the gradient flow: (a) Adjusting the per-coordinate gain based on the alignments of the gradients in each direction; this requires an independence assumption among different coordinates. (b) Adjusting the step-size scale based on the alignment of the gradient flow. An implicit gradient update corresponds to the case where the previous update step and the new gradient are equal.}\label{fig:flow} \end{figure*} \subsection{Coordinate-wise Descent Directions} Notice that the gain hyperparameters $\bm{p}$ in~\eqref{eq:egdd-main} are multiplied element-wise by the pre-conditioned gradient. This is equivalent to introducing a coordinate-wise scale on the pre-conditioned gradients, which implies an independence assumption across the coordinates. In Figure~\ref{fig:flow}(a), we pictorially show the negative gradient $-\bm{g}^t = -\nabla_{\bm{w}}L(\bm{w}^t)$ as well as the negative pre-conditioned gradient $-\tilde{\g}^t = -\widetilde{\nabla}_{\bm{w}}L(\bm{w}^t)$ at $\bm{w}^t$. Assuming that we take a step along the negative pre-conditioned gradient, we arrive at the new point $\bm{w}^{t+1}$, at which we plot the new negative gradient $-\bm{g}^{t+1} = -\nabla_{\bm{w}}L(\bm{w}^{t+1})$. We also re-draw the negative pre-conditioned gradient $-\tilde{\g}^t$ at the previous step with a dashed vector. Looking at each coordinate as an independent one-dimensional optimization problem, the descent direction for each coordinate can be obtained by decomposing $-\bm{g}^{t+1}$. Combining this with the update~\eqref{eq:gain-pre} implies that those coordinates for which the corresponding component of $-\bm{g}^t$ would still have been a \emph{descent direction} at the new point $\bm{w}^{t+1}$ are assigned a higher gain value.
For the remaining coordinates for which the descent direction switches, the gain values are reduced\footnote{Note that a better approach would be to consider the inter-dependency of all coordinates for updating the gain values. This requires applying a matrix form of gains on the gradients, which in some settings would be drastically more expensive. We consider extensions of our formulation to the gain matrix pre-conditioner and its efficient approximations as a future research direction.}. \subsection{Adjusting the Discretization Step-size of the Gradient Flow} When minimizing a loss function $L$ in a full-batch setting, the steepest descent update can be motivated as an iterative procedure that minimizes the loss while remaining close to the current value of the parameters in terms of Euclidean distance. This is achieved by minimizing the loss $L$ plus a squared Euclidean divergence as the inertia term, \begin{equation} \label{eq:gd-motovation} \bm{w}^{t+1} = \argmin_{\wt \in \mathbb{R}^d}\, \sfrac{1}{2s^t}\,\Vert\!\wt - \bm{w}^t\Vert_2^2 + L(\wt)\, . \end{equation} Minimizing~\eqref{eq:gd-motovation} directly results in \begin{align} \label{eq:gd-implicit} \sfrac{1}{s^t}\,\big(\bm{w}^{t+1} - \bm{w}^t\big) + \nabla_{\bm{w}}L(\bm{w}^{t+1}) = \bm{0}\, ,\ \text{\,\,\, i.e.\,\,\, } \bm{w}^{t+1} - \bm{w}^t = - s^t\, \nabla_{\bm{w}}L(\bm{w}^{t+1})\, , \end{align} called the \emph{implicit} gradient descent update~\cite{pnorm,he2008explicit}. The term implicit implies that the update is motivated using the gradient of the loss function $L$ at a \emph{future} point. In practice, the update is approximated by the \emph{explicit} form in which the gradient at $\bm{w}^{t+1}$ is replaced by the gradient at $\bm{w}^t$, that is $\nabla_{\bm{w}}L(\bm{w}^t)$. It has been shown in many cases that the implicit update results in superior convergence compared to the explicit form~\cite{pnorm,implicit}. The difference between the implicit and explicit updates stems from the discretization error of the \emph{gradient flow} in continuous-time, \begin{equation} \label{eq:gd-flow} \dot{\bm{w}}(t) = -\nabla_{\bm{w}}L(\bm{w}(t))\,, \end{equation} where $\dot{\bm{w}} \coloneqq \frac{\partial \bm{w}}{\partial t}$ denotes the time derivative of $\bm{w}$. Note that the implicit update~\eqref{eq:gd-implicit} can also be recovered as a \emph{backward Euler} approximation of the gradient flow~\eqref{eq:gd-flow} with \emph{step-size} $s$, while a \emph{forward Euler} approximation results in the explicit update. The difference between the two approximations depends on the \emph{smoothness} of the gradients as well as the step-size $s$. That is, for smooth regions where the change in gradient from $\bm{w}^t$ to $\bm{w}^{t+1}$ is small, a larger step-size $s$ can be adopted and vice versa. Now consider an update $\Delta^t$ proposed by a given optimizer at step $t$, that is, \begin{equation} \label{eq:delta-update} \bm{w}^{t+1} - \bm{w}^t = -s^t\, \Delta^t\, . \end{equation} This is shown pictorially in Figure~\ref{fig:flow}(b). Note that the update $\Delta^t$ is not necessarily the steepest descent direction and can be generated by any pre-conditioning of the gradient and/or addition of momentum terms internally by the optimizer. However, when $\Delta^t \approx \nabla_{\bm{w}}L(\bm{w}^{t+1})$, the update~\eqref{eq:delta-update} closely approximates the implicit update~\eqref{eq:gd-implicit}.
This implies that the gradient of the function $L$ in the neighborhood of $\bm{w}^t$ is \emph{smooth} enough such that a larger step-size $s^{t+1}$ in the next iteration is plausible. This property can be roughly quantified in terms of alignments of the directions by only considering the cosine of the angle $\psi$ between the two vectors, as discussed in the next section. \begin{figure*} \vspace{-0.4cm} {\centering \begin{minipage}{0.85\linewidth} \begin{algorithm}[H] \centering \caption{Funnelled Stochastic Gradient Descent with Momentum}\label{alg:funnel} \begin{algorithmic} \State \textbf{Input:} Loss function $L$, internal optimizer $\mathcal{D}$, initial parameter $\bm{w}^0$, base learning rate $\eta$, pre-conditioned gradient EMA decay factor $\beta$, gain and step-scale learning rate hyperparameters $(\gamma_p, \gamma_s)$\smallskip \State $t, \bm{m}^0, \bm{p}^0, s^0 \gets 0, \bm{0}, \bm{1}, 1$ \Comment{Initialization} \While{\,\,$\bm{w}^t$ not converged}\,\, \State Obtain\, $\bm{g}^t = \nabla_{\bm{w}} L(\bm{w}^t|\, \mathcal{X}^t)$ \Comment{Gradient} \State Obtain\, $\tilde{\g}^t = \widetilde{\nabla}_{\bm{w}} L(\bm{w}^t|\, \mathcal{X}^t)$ from $\mathcal{D}(L, \bm{w}^t, \mathcal{X}^t)$ \Comment{Pre-conditioned gradient} \State $\bm{p}^{t+1} \gets \begin{cases}\bm{p}^t\odot\, \exp\big(\gamma_p\, \bm{g}^t \odot \emat^{t} \big) & \text{(Unnormalized)}\\[1mm] \bm{p}^t\odot\, \exp\big(\gamma_p \sign(\bm{g}^t) \odot \sign(\bm{m}^t) \big) & \text{(Normalized)} \end{cases}$ \State $s^{t+1} \gets \begin{cases}s^t\exp\Big(\gamma_{s}\,\bm{g}^t\cdot \bm{\nu}^t\Big)& \text{(Unnormalized)}\\[2mm] s^t\exp\Big(\gamma_{s}\,\frac{\bm{g}^t}{\Vert\bm{g}^t\Vert}\cdot \frac{\bm{\nu}^t}{\Vert \bm{\nu}^t\Vert}\Big)& \text{(Normalized)}\end{cases}$ \State $\bm{m}^{t+1} \gets \beta\,\bm{m}^t + (1-\beta)\, \tilde{\g}^t$ \State $\bm{\nu}^{t+1} \gets \mu\, \bm{\nu}^t + \eta\, \big(\bm{p}^{t+1}\, \odot\, \tilde{\g}^t\big)$ \State $\bm{w}^{t+1} \gets \bm{w}^t - s^{t+1}\, \bm{\nu}^{t+1}$ \Comment{Parameter update} \State $t \gets t+1$ \EndWhile \State \textbf{return}\, $\bm{w}^t$ \end{algorithmic} \end{algorithm} \end{minipage} \par } \end{figure*} \subsection{Normalized Updates} The updates~\eqref{eq:gain-up} and~\eqref{eq:scale-up} are highly dependent on the norm of the gradients at each layer and may therefore require carefully tuned $(\gamma_p, \gamma_s)$ hyperparameters in each layer. In order to make the updates applicable across different layers with the same $(\gamma_p, \gamma_s)$, we consider normalized versions of our gain update~\eqref{eq:gain-up} as well as the learning rate scale update~\eqref{eq:scale-up}. We can approximate the gain update~\eqref{eq:gain-up} by normalizing $\bm{g}^t$ and $\bm{m}^{t-1}$ along each coordinate. This corresponds to $\bm{g}^t \oslash \vert \bm{g}^t \vert = \sign(\bm{g}^t)$ and $\bm{m}^{t-1} \oslash \vert \bm{m}^{t-1} \vert = \sign(\bm{m}^{t-1})$, i.e., using the signs of the gradient and the EMA term\footnote{Note that the EMA $\bm{m}^{t-1}$ term and the bias corrected version $\emat^{t-1}$ have the same sign.}. This yields the normalized gain update, \begin{equation} \label{eq:eg-gain-norm} \bm{p}^{t+1} = \bm{p}^t\odot\, \exp\big(\gamma_p\, \sign(\bm{g}^t) \odot \sign(\emat^t) \big)\, . \end{equation} We can also apply a similar normalization to the learning rate scale update~\eqref{eq:scale-up}. That is, we apply a normalized update by dividing $\bm{g}^t$ and $\bm{\nu}^t$ by their $L_2$-norms, i.e.
only considering the directions and replacing the inner-product by the \emph{cosine similarity}, \begin{equation} \label{eq:eg-scale-norm} s^{t+1} = s^t\exp\Big(\gamma_{s}\,\frac{\bm{g}^t }{\Vert\bm{g}^t \Vert}\cdot \frac{\bm{\nu}^t}{\Vert \bm{\nu}^t\Vert}\Big)\, . \end{equation} The normalized update~\eqref{eq:eg-scale-norm} is especially useful for optimizing multi-layer deep neural networks where the sizes of the layers vary significantly across the network. The normalization ensures that a single hyperparameter $\gamma_s$ can be applied across layers. Note that the learning rate update in~\cite{hypergrad} can be recovered as an approximation of~\eqref{eq:eg-scale-norm}. That is, using the approximation $\exp(x) \approx 1 + x$ yields \[ s^{t+1} \approx s^t\Big(1 + \gamma_{s}\,\frac{\bm{g}^t }{\Vert\bm{g}^t \Vert}\cdot \frac{\bm{\nu}^t}{\Vert \bm{\nu}^t\Vert}\Big)\, , \] which is the adaptive update proposed in Hypergradient Descent~\cite{hypergrad}. A similar approximation on the normalized gain updates~\eqref{eq:eg-gain-norm} resembles the multiplicative update form of the DBD method~\cite{dbd}. \begin{figure} \vspace{-0.3cm} \centering \subfigure[]{ \includegraphics[width=0.22\linewidth]{figs/rot0.pdf}\label{fig:rot0} }\,\, \subfigure[]{ \includegraphics[width=0.22\linewidth]{figs/rot45.pdf}\label{fig:rot45} }\,\, \subfigure[]{ \includegraphics[width=0.22\linewidth]{figs/rot90.pdf}\label{fig:rot90} } \caption{Samples from the rotated MNIST datasets: random background with (a) 0 degrees, (b) 45 degrees, and (c) 90 degrees rotation.} \label{fig:mnist-digits} \end{figure} \begin{figure}[t!] \vspace{-0.4cm} \centering \subfigure[]{ \includegraphics[width=0.315\linewidth]{figs/mnist_4k.pdf}\label{fig:mnistk} } \subfigure[]{ \includegraphics[width=0.315\linewidth]{figs/lrscale.pdf}\label{fig:mnist-lr-scales} } \subfigure[]{ \includegraphics[width=0.315\linewidth]{figs/gains.pdf}\label{fig:mnist-gains} } \caption{(a) Top-1 accuracy on the shifting MNIST dataset; every 100k steps ($\sim$100 epochs) we shift the dataset. (b) Evolution of learning rate scales and (c) gain values for some of the parameters throughout training.} \label{fig:mnist-results} \end{figure} \begin{figure}[t!] \vspace{-0.6cm} \centering \subfigure[]{ \includegraphics[width=0.45\linewidth]{figs/scales-mob.pdf}\label{fig:mob_lr_scales} } \subfigure[]{ \includegraphics[width=0.45\linewidth]{figs/gains-mob.pdf}\label{fig:mob_gains} } \caption{Scale and gain hyperparameters of the MobileNetV1 model: (a) step-size scales for different layers, (b) gains for a subset of coordinates in two different layers. Funnel successfully discovers a decay schedule for all the layers, with a different decay rate for each. Notice that the gains vary at a different rate for each layer and each coordinate, and that some of the gains ramp up initially for the first $\sim$5k steps before starting to decay.} \label{fig:mobnet} \vspace{-0.4cm} \end{figure} \section{Experiments} \label{sec:experiments} In this section, we show two cases where our adaptive step-size Funnel method proves to be extremely effective. In the first part of the experiments, we consider the problem of distribution shift in the data during training. For this, we create a synthetic dataset by rotating the MNIST dataset of handwritten digits. We show how Funnel can effectively improve the performance by adapting to the new distribution more rapidly. Next, we consider the setting where we remove the learning rate schedule for training of large-scale models for image classification.
We show how the performance of the baseline (as well as the adaptive gradient methods) deteriorates with this change, while Funnel can successfully discover a good schedule adaptively. \subsection{Distribution Shift} Adaptive optimization methods work well on standard datasets where the distribution of the data is fixed. The typical tuning procedure for static datasets involves a decaying learning rate schedule in the case of Adam, or an implicit decay schedule for AdaGrad due to the accumulation of gradient statistics. On real-world problems where models are trained on a stream of freshly arriving data (e.g., click-through rate prediction in online advertisement or recommender systems), there is a natural shift in the distribution of the dataset over time (e.g., user preferences can change). This requires the optimization method to be adaptive to the changing distribution. To illustrate the advantage of our proposed adaptive step-size mechanism in such cases, we simulate distribution shift on the MNIST dataset of handwritten digits~\cite{mnist}. We split each of the train, validation, and test sets into three disjoint subsets. For each subset, we replace pixel values less than $10^{-2}$ with a value drawn uniformly at random from $[0, 1]$ and rotate the images by 0, 45, and 90 degrees, respectively, as shown in Figure~\ref{fig:mnist-digits}. We train a logistic regression model with AdaGrad~\cite{adagrad} as well as with Funnel (with AdaGrad as the internal optimizer) at batch size 10k, for 100 epochs for each set sequentially (90 degrees first, followed by 0 degrees, and finally 45 degrees). Results are presented in Figure~\ref{fig:mnist-results}. The results indicate that AdaGrad's performance deteriorates when we switch the training set, whereas Funnel obtains higher top-1 accuracy. We also show the evolution of the learning rate scale and the per-coordinate gains throughout the training and find that the optimization method is able to adjust these hyperparameters in a data-dependent way (see Figure~\ref{fig:mnist-lr-scales} and Figure~\ref{fig:mnist-gains}). \subsection{Adaptive Learning Rate Schedule} In this section, we conduct experiments on large-scale convolutional neural networks where the baseline model is trained with a highly tuned learning rate schedule. We compare the performance of different optimizers on the same network when the learning rate schedule is removed. For each experiment, we tune the remaining hyperparameters of the optimizers independently. We also repeat each experiment for the best tuning 5 times and average the results. We consider the following models for the experiments: 1) a MobileNetV1 model on the CIFAR-10 dataset, and 2) a ResNet50 model on the ImageNet dataset. \subsubsection{MobileNetV1 on the CIFAR-10 Dataset} For the MobileNetV1 model trained on CIFAR-10, we consider the SGD Momentum optimizer as the baseline optimizer. The baseline learning rate schedule consists of two staircase decays without any initial ramp-ups. We train the baseline model for 150k steps with a batch size of 50. Next, we remove the learning rate schedule and train the model using the SGD Momentum optimizer as well as the Funneled SGD Momentum. For the vanilla Momentum, we use the same learning rate and momentum values as the baseline. We also use the same learning rate and momentum values for Funnel and tune the gain and scale learning rate values in the range $[10^{-5},\, 10^{-3}]$. We set $\beta=0.9$ for the gradient EMA. We also allow the gains and the scale to vary in the range $[0,\, 10^3]$.
We repeat each experiment 5 times for 150k iterations, using the same batch size of 50. The best performance for Funnel is achieved with $(\gamma_p, \gamma_s) = (10^{-4}, 10^{-3})$. The results are shown in Table~\ref{tab:mob}. As can be seen from the table, the top-1 accuracy of the baseline Momentum model drops by around 5\% when the learning rate schedule is removed. However, our Funnel method can match the baseline performance without using a learning rate schedule. We plot the scale and gain values for a subset of the layers in Figure~\ref{fig:mobnet}. In Figure~\ref{fig:mob_lr_scales}, we plot the scales for 2 depthwise and 2 pointwise convolutional layers. As can be seen from the figure, Funnel is able to recover a decay schedule for all the layers, with a different decay rate for each. In Figure~\ref{fig:mob_gains}, we show the gains for a subset of coordinates of a depthwise as well as a pointwise convolutional layer. Notice that the gains vary at a different rate for each layer and each coordinate. Also, some of the gains ramp up initially for the first $\sim$5k steps before starting to decay. \subsubsection{ResNet50 on ImageNet} We also consider the ResNet50 model trained on the ImageNet dataset. The baseline optimizer corresponds to SGD Momentum with an initial ramp-up followed by a stair-case decay learning rate schedule. We train the model for 100 epochs using a batch size of 4096. Next, we retrain the model using the SGD Momentum, AdaGrad-EMA, and Funneled SGD Momentum optimizers while removing the learning rate schedule. The reason for choosing AdaGrad is the fact that the effective learning rate naturally decays for this optimizer, thus mimicking a decaying schedule. Note that AdaGrad-EMA is a slight variant of AdaGrad where we apply an EMA on the pre-conditioned gradients. The original formulation of AdaGrad performs poorly in this setting. We similarly do a hyperparameter search for all the models. The best-performing learning rate for AdaGrad-EMA is $10^{-3}$. For Funnel, we use the original learning rate of the baseline ($\eta = 0.1$) and set $(\gamma_p, \gamma_s) = (10^{-4}, 5\times 10^{-3})$. The results are shown in Table~\ref{tab:resnet}. As can be seen from the table, Funnel achieves the best performance among the optimizers considered when the learning rate schedule is removed. \begin{table}[t!] \vspace{-0.4cm} \caption{MobileNetV1 top-1 and top-5 accuracy on the CIFAR-10 dataset: the top baseline result is obtained using a learning rate schedule. We compare the performance of different optimizers when the learning rate schedule is removed.} \label{tab:mob} \begin{center} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcc} \toprule Method & Top-1 Test Accuracy & Top-5 Test Accuracy\\ \midrule Momentum (with lr-schedule) & $89.51 \pm 0.18$ & $99.35 \pm 0.05$\\ \midrule Momentum (without lr-schedule) & $84.14 \pm 0.49$ & $98.93 \pm 0.35$\\ Funneled Momentum (without lr-schedule) & $\mathbf{89.61 \pm 0.28}$ & $\mathbf{98.94 \pm 0.26}$\\ \bottomrule \end{tabular} } \end{center} \vspace{-0.4cm} \end{table} \begin{table}[t!] \caption{ResNet50 top-1 and top-5 accuracy on the ImageNet dataset: the top baseline result is obtained using a learning rate schedule.
We compare the performance of different optimizers when the learning rate schedule is removed.} \label{tab:resnet} \begin{center} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcc} \toprule Method & Top-1 Test Accuracy & Top-5 Test Accuracy\\ \midrule Momentum (with lr-schedule) & $76.57 \pm 0.16$ & $93.21 \pm 0.04$\\ \midrule Momentum (without lr-schedule) & $53.80 \pm 0.86$ & $78.67 \pm 0.74$\\ AdaGrad-EMA (without lr-schedule) & $70.70 \pm 0.24$ & $89.74 \pm 0.17$\\ Funneled Momentum (without lr-schedule) & $\mathbf{72.39 \pm 0.09}$ & $\mathbf{90.97 \pm 0.06}$\\ \bottomrule \end{tabular} } \end{center} \vspace{-0.4cm} \end{table} \section{Conclusion and Future Work} We provided an adaptive method that unifies existing ideas in the domain of learning rate adaptation in a more rigorous manner. This is done by introducing a per-coordinate gain as well as an overall step-size scale, and updating these hyperparameters using the well-known unnormalized exponentiated gradient updates. Our meta algorithm can easily adapt to many widely used optimizers, without special modification. We present very promising experimental results for our new adaptive method, e.g., for adapting to distribution shift and finding effective learning rate schedules. A long-term goal is to also replace the common gradient descent based optimizers by updates from the exponentiated gradient family. For this, a more rigorous study with a focus on large-scale applications is needed. The advantage of using EG updates is that for this family many tools are already available for handling distribution shifts in the data~\cite{herbsterwarmuth,bousquetwarmuth}. \bibliographystyle{plain}
{ "timestamp": "2022-02-02T02:06:57", "yymm": "2202", "arxiv_id": "2202.00145", "language": "en", "url": "https://arxiv.org/abs/2202.00145" }
\section{Introduction} \label{s:intro} Exploiting extra data, e.g., labeled data from a related task, or unlabeled data from the same task, is a powerful way of reducing the number of training data required to learn a given task. This idea lies at the heart of burgeoning fields like transfer, meta-, semi- and self-supervised learning, and these fields have developed a wide variety of methods to incorporate such extra information. To give a few examples, methods for transfer learning fine-tune a representation that was pretrained on labeled data from another---ideally related---task. Methods for semi-supervised learning pretrain the representation using unlabeled data, which may come from the same task or from other related tasks, before using the labeled data. In this paper, we ask the question: what is the \emph{best} way to exploit extra data for learning a task? In other words, if we have \emph{some} pool of data---be it labeled or unlabeled, from the same task, or from another task---what is the \emph{optimal} way to pretrain a representation? As posed, the answer to the question above depends upon the downstream task that we seek to solve. But we can ask a more reasonable question by recognizing that a pretrained representation can be thought of as a Bayesian prior (or a sample from it). Fundamentally, a prior restricts the set of models that can be fitted upon the task. So we could instead ask: \emph{how to best use the extra data to restrict the set of models that we could fit on the desired task}. This paper formalizes the question using the concept of reference priors and makes the following contributions. \begin{enumerate}[(1), nosep,wide,labelwidth=0ex, labelindent=\parindent] \item We \textbf{formalize the problem of ``how to best pretrain a model''} using the theory of reference priors, which are objective, uninformative Bayesian priors computed by maximizing the mutual information between the task and the weights. We show how these priors maximize the KL-divergence between the posterior computed from the task and the prior, on average over the distribution of the unknown future data. This allows the samples from the task to maximally influence the posterior. We discuss how reference priors are supported on a discrete set of atoms in the weight space. We \textbf{develop a method to compute reference priors for deep networks}. To our knowledge, this is the \textbf{first instantiation of reference priors for deep networks that preserves their characteristic discrete nature}. \item We \textbf{formalize semi-supervised learning as computing a reference prior} where the learner is given access to a pool of unlabeled data and seeks to compute a prior using this data. This formulation sheds light upon the \textbf{theoretical underpinnings of existing state of the art methods such as FixMatch}. We show that techniques such as consistency regularization and entropy minimization which are commonly used in practice can be directly understood using the reference prior formulation. \item We \textbf{formalize transfer learning as building a two-stage reference prior} where the learner gets access to data in two stages and computes a prior that is optimal for data from the second stage. Such a prior has the flavor of ignoring certain parts of the weight space depending upon whether data from the first stage was similar to that from the second stage, or not. 
This formulation is useful because it is an information-theoretically optimal way to pretrain using a source task for the goal of transferring to the target task. This objective is closely related to the predictive Information Bottleneck principle. \item We show an empirical study of our formulations on the CIFAR-10 and CIFAR-100 datasets. We show that \textbf{our methods to compute reference priors provide results that are competitive with state of the art} methods for semi-supervised learning, e.g., we obtain an \textbf{accuracy of 85.45\% on CIFAR-10 with 5 labeled samples/class}. We obtain significantly better accuracy than well-tuned fine-tuning for transfer learning, even for very small sample sizes. \end{enumerate} \section{Related Work and Discussion} \label{s:related} \textbf{Reference priors in Bayesian statistics} We build upon the theory of reference priors which was developed in the objective Bayesian statistics literature~\citep{bernardo1979reference,berger1988priors,berger2009formal}. The main idea used in our work is that non-asymptotic reference priors allow us to exploit the finite samples from the task in a fundamentally different way than classical Bayesian inference. If the number of samples from the task available to the learner is finite, then the prior should also select only a finite number of models. Reference priors are not common in the machine learning literature. A notable exception is~\citet{nalisnick2017variational} who optimize a variational lower bound and demonstrate results on small-scale problems. The main technical distinction of our work is that we explicitly use the discrete prior instead of a variational approximation. \textbf{Information theory} Discreteness is seen in many problems with an information-theoretic formulation, e.g., capacity of a Gaussian channel under an amplitude constraint~\citep{smith1971information}, neural representations in the brain~\citep{laughlin1981simple}, and biological systems~\citep{mayer2015well}. \citet{mattingly2018maximizing,abbottScalingLawDiscrete2019} have developed these ideas to study how reference priors select ``simple models'' which lie on certain low-dimensional ``edges'' of the model space. We believe that the methods developed in our paper are effective \emph{because} of this phenomenon. Our choice of using a small order $n$ for the prior is directly motivated by their examples. Our formulation of \textbf{semi-supervised learning} sheds light on the workings of current SSL methods. For example, the reference prior can automatically enforce consistency regularization of predictions across augmentations~\citep{tarvainen2017mean, berthelot2019mixmatch}, as we discuss in~\cref{s:impl}. Similarly, minimizing the entropy of predictions on unlabeled data, either explicitly~\citep{grandvalet2005semi, miyato2018virtual} or using pseudo-labeling methods~\citep{lee2013pseudo, sajjadi2016mutual}, is another popular technique. This is automatically achieved by the objective in~\cref{eq:ref_ssl}. Disagreement-based methods~\citep{zhou2010semi} employ multiple models and use confident models to soft-annotate unlabeled samples for others. Disagreements in our formulation are encouraged by the entropy $H(y^n \,|\, x^n)$ in~\cref{eq:ref_ssl}. If $p(y^n \,|\, x^n)$ is uniform, which is encouraged by the reference prior objective, particles disagree strongly with each other.
While \textbf{transfer learning} is a key component of a large number of applications today, e.g.,~\citep{devlin2019bert,kolesnikovbigtransferbit2020}, a central question that remains unanswered is how one should pretrain a model if the eventual goal is to transfer to a target task. There have been some attempts at addressing this via the Information Bottleneck, e.g.,~\citet{gao2020free}. This question becomes particularly challenging when transferring across domains, or for small sample sizes~\citep{davatzikos2019machine}. Reference priors are uniquely suited to tackle this question: our two-stage experiment in~\cref{s:two_stage} is the \emph{optimal} way to pretrain on the source task. As our experiments show, this is better than fine-tuning in the low-sample regime (\cref{s:expt:transfer}). \section{Methods} \label{s:methods} This section discusses a key property of reference priors that enables us to calculate them numerically, namely that they are supported on a discrete set in the weight space (\cref{s:discrete}). It then formulates reference priors for semi-supervised (\cref{s:ssl}) and transfer learning (\cref{s:two_stage,s:transfer}). \subsection{Existence and discreteness of reference priors} \label{s:existence} \label{s:discrete} Rigorous theoretical development of reference priors has been done in the statistics literature. We focus on their applications. We however mention some technical conditions under which our development remains meaningful. A reference prior does not exist if $I_\pi(w; z^n)$ is infinite~\citep{berger1988priors}. For the concept of a reference prior to remain meaningful, we make the following technical assumptions. (i) $\pi$ is supported on a compact set $\Omega \subset \reals^p$, and (ii) if $p_{\pi}( z^n) = \int_{\Omega} \dd{w} \pi(w) p(z^n \,|\, w)$ is the marginal, then $\text{KL}( p_w, p_\pi )$ is a continuous function of $w$ for any $\pi$. Under these conditions, the $n$-order prior $\pi_n^*$ exists and $I_{\pi_n}(w; z^n)$ is finite; see~\citep[Lemma 2.14]{zhang1994discrete}. Now assume that $\pi_n^*$ exists and is unique up to a set of measure zero. Let $\Omega_n = \{ w \in \Omega: \pi_n^*(w) > 0 \}$ be the support of $\pi_n^*$ and $z^n$ be a discrete random variable with $C$ atoms. If $\{ p(z^n \,|\, w): w \in \Omega_n\}$ is compact, then $\pi_n^*$ is discrete with no more than $C$ atoms~\citep[Lemma 2.18]{zhang1994discrete}. \begin{remark}[Blahut-Arimoto algorithm with particles] \label{rem:ba_particles} Since the optimal prior is discrete, we can maximize the mutual information directly by identifying the best set of atoms. We set the prior to have the form $\pi_n^* = \sum_{i=1}^K K^{-1} \delta(w-w^i)$ where $\{w^1,\ldots, w^K \}$ are the $K$ atoms. We call these atoms ``particles''. Using standard back-propagation, we can then compute the gradient of the objective in~\cref{eq:ref_pin} with respect to each particle (note that each particle's gradient depends upon all other particles). \end{remark} \subsection{Visualizing the reference prior for deep networks} \label{rem:visua_prior} One cannot directly visualize the high-dimensional particles $w$ in $\pi_n^*$. But we can think of each particle $w$ as representing a probability distribution $f(w) \in \reals^{nC}$ given by \[ \rbr{\sqrt{p_w(y=1 \,|\, x_1)}, \sqrt{p_w(y=2 \,|\, x_1)}, \ldots, \sqrt{p_w(y=C \,|\, x_n)}}.
\] and use a method for visualizing such distributions developed in~\citet{Quinn13762}, which computes a principal component analysis (PCA) of the vectors $\{ f(w^1), \ldots, f(w^K)\}$; the result is shown in~\cref{fig:manifold_boundary}. See~\cref{s:app:visualizing} for more details. \begin{figure}[htpb] \centering \includegraphics[width=0.8\linewidth]{fig/Manifold_boundary_rasterized.pdf} \vspace*{-1em} \caption{\textbf{Reference prior (green) for binary classification on MNIST}. A three-dimensional embedding of the probability distributions of $K=3000$ atoms in the reference prior after 50,000 iterations of the BA algorithm (green) for a binary classification problem on MNIST (digits 3 vs. 5). Particles were initialized randomly (blue); they lie close together in this embedding because at initialization, the logits of each particle are uniformly distributed. Orange shows particle locations after 5,000 iterations. As the reference prior objective in~\cref{eq:ref_pin} is optimized, the particles make increasingly diverse predictions (orange) and towards the end (green) they spread apart in the prediction space.} \label{fig:manifold_boundary} \end{figure} This experiment demonstrates that we can instantiate reference priors for deep networks in a scalable fashion even for a large number of particles $K$. It provides a visual understanding of how atoms of the prior are diverse models in prediction space, just like the atoms in~\cref{fig:coin_convergence_jeffreys}. \textbf{How to choose the number of atoms $K$ in the reference prior?} Each particle in this paper is a deep network, so we must be careful to ensure that we do not maintain an unduly large number of atoms in the prior. \citet{abbottScalingLawDiscrete2019} suggest a scaling law for $K$ in terms of the number of samples $n$, e.g., $K \sim n^{4/3}$ for a problem with two biased coins. We will instead treat $K$ as a hyper-parameter. This choice is motivated by the emergent low-dimensional structure of the green particles in~\cref{fig:manifold_boundary}; see the further analysis in~\cref{s:expt:analysis}. \begin{remark}[Variational approximations of reference priors] \label{rem:variational_approximation} \citet{nalisnick2017variational} maximize a lower bound on $I_\pi(w; z)$ and replace the term $p(z) = \int \dd{w} \pi(w) p(z \,|\, w)$ in~\cref{eq:ref_defn} by the so-called VR-max estimator $\max_w \log p(z \,|\, w)$, where the maximum is evaluated across a set of samples from $\pi(w)$~\citep{liEnyiDivergenceVariational2016}. They use a continuous variational family parameterized by neural networks. However, reference priors are supported on a discrete set. Using a continuous variational family, e.g., a Gaussian distribution, to approximate $\pi_n^*$ is computationally beneficial but detrimental to the primary purpose of the prior, namely to discover diverse models. This is also seen in~\cref{fig:manifold_boundary}, where it would be difficult to construct a variational family whose distributions put mass mostly on the green points. We therefore do not use variational approximations. \end{remark} \begin{remark}[Reference prior depends upon the number of samples and its atoms are diverse models] \label{rem:small_diverse} \cref{eq:ref_defn} encourages the likelihood $p(z^n \,|\, w)$ of each atom in the reference prior to be maximally different from that of the other atoms. This gives us intuition as to why the prior should have a finite number of atoms.
Consider the covering number in learning theory~\citep{bousquet2003introduction}, where we endow the model space with a metric that measures disagreement between two hypotheses over $n$ samples. The smaller the number of samples $n$, the smaller the covering number, and the smaller the effective set of models considered. The reference prior is similar. If we have only a few samples $n$, then it is not possible for the likelihood in Bayes law to distinguish between a large set of models and assign them different posterior probabilities. The prior therefore puts probability mass only on a finite set of atoms, and just like the coin-tossing experiment in~\cref{eg:bias}, these atoms have diverse outputs on the $n$ samples. This ability of the prior to select a small set of representative models is extremely useful for training deep networks with few data, and it was our primary motivation. \end{remark} \subsection{Reference priors for semi-supervised learning} \label{s:ssl} Consider the situation where we are \textbf{given inputs $x^n$, their corresponding labels $y^n$ and unlabeled inputs $x^u$}. Our goal is semi-supervised learning, i.e., to use $x^u$ to build a prior $\pi^*(w)$ that selects models that can be learned using the labeled data $(x^n, y^n)$. Recall that since $\pi^*$ is a prior, it should not depend on $(x^n, y^n)$. Just like the construction of the reference prior in~\cref{s:reference_priors}, we can maximize \beq{ \aed{ I_\pi(y^n, x^n; w) & = \E_{x^n, (y^n \,|\, x^n, w), w \sim \pi} \sbr{\log \f{p(y^n \,|\, x^n, w)}{p_\pi(y^n \,|\, x^n)}}\\ & = \a \E_{x^u} \sbr{H(y^u \,|\, x^u)} - \E_{x^u, w \sim \pi} \sbr{H(y^u \,|\, x^u, w)}, } \label{eq:ref_ssl} } where $p_\pi(y^n \,|\, x^n) = \int \dd{w} \pi(w) \prod_{i=1}^n p(y_i \,|\, x_i, w)$ and likewise for $p_\pi(y^u \,|\, x^u)$. The first step is simply the definition of $I_\pi$: it is the KL-divergence of the posterior after seeing $(x^n, y^n)$ with respect to the prior $\pi(w)$, averaged over the data. The second step is the key idea and its rationale is as follows. If we know that the inputs $x^u$ and $x^n$ come from the same task, then we can use samples $x^u$ to compute the expectation over $x^n$. For the same reason, we can average over the outputs $y^u$, which are predicted by the network, in place of the fixed labels $y^n$. Let us emphasize that both $x^u$ and $y^u$ are averaged out in the objective above. Predictions on new samples $x$ are made using the Bayesian posterior predictive distribution \beq{ \aed{ p(y \,|\, x, x^n, y^n) \propto \int \dd{w} \pi_n^*(w) p(y \,|\, x, w) p(y^n \,|\, x^n, w). } \label{eq:bayes_posterior_predictive_distribution} } \textbf{An intuitive understanding of~\cref{eq:ref_ssl}} Assume for now that we know the number of classes $C$ (although the objective is valid even if that is not the case). If our prior has $K$ particles, then the second term is the average of the per-particle entropy of the predictions. The objective encourages each particle $w_i$ to predict confidently, i.e., to have a small entropy in its output distribution $p_{w_i}(y \,|\, x)$. The first term is the entropy of the averaged prediction $p_\pi(y^n \,|\, x^n)$; it is large if particles predict different outputs $y^n$ for the same inputs $x^n$, i.e., if they disagree with each other. We treat the constant $\a$ (which should be 1 in the definition of mutual information) as a hyper-parameter to allow control over this phenomenon.
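As an illustration, the following is a minimal PyTorch-style sketch (not our exact training code) of how the objective in~\cref{eq:ref_ssl} can be estimated with the particles of~\cref{rem:ba_particles}; the list \texttt{particles}, the unlabeled batch \texttt{x\_u}, and the grouping of the batch into sets of \texttt{order} samples are placeholders.
\begin{verbatim}
import torch

def ref_prior_mi(particles, x_u, alpha=0.1, order=2):
    # Class probabilities of each particle, shape (K, B, C).
    probs = torch.stack([torch.softmax(m(x_u), dim=-1)
                         for m in particles])
    K, B, C = probs.shape

    # Group the batch into sets of `order` samples and form the
    # joint distribution over y^n via outer products; the result
    # has shape (K, B//order, C**order).
    G = B // order
    probs = probs[:, :G * order].reshape(K, G, order, C)
    joint = probs[:, :, 0]
    for i in range(1, order):
        joint = (joint.unsqueeze(-1)
                 * probs[:, :, i].unsqueeze(-2)).flatten(-2)

    # Second term: per-particle entropy H(y^n | x^n, w),
    # averaged over particles and groups.
    h_w = -(joint * torch.log(joint + 1e-12)).sum(-1).mean()

    # First term: entropy of the ensemble prediction
    # p_pi(y^n | x^n) = (1/K) sum_k p(y^n | x^n, w_k).
    p_bar = joint.mean(dim=0)
    h = -(p_bar * torch.log(p_bar + 1e-12)).sum(-1).mean()

    # Estimate of alpha * H(y^n|x^n) - E_w[H(y^n|x^n,w)],
    # to be maximized over the particles.
    return alpha * h - h_w
\end{verbatim}
The returned scalar is differentiable with respect to all particles at once, which is the sense in which each particle's gradient depends upon all other particles.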
\textbf{The reference prior semi-supervised learning objective encourages particles to be dissimilar but confident models (not necessarily correct ones).} \subsection{Reference priors for a two-stage experiment} \label{s:two_stage} We first develop the idea using generic random variables $z^n$. Consider a situation when we \textbf{see data in two stages, first $z^m$, and then $z^n$}. How should we select a prior, and thereby the posterior of the first stage, such that the posterior of the second stage makes maximal use of the new $n$ samples? We can extend the idea in~\cref{s:ssl} in a natural way to address this question. We can \textbf{maximize the KL-divergence between the posterior of the second stage and the posterior after the first stage, on average, over samples $z^n$.} Since we have access to samples $z^m$, we need not average over them; we can compute the posterior $p(w \,|\, z^m)$ from these samples given the prior $\pi(w)$. First, notice that $p( w, z^n \,|\, z^m) = p(w \,|\, z^{m+n}) p(z^n \,|\, z^m) = p(z^n \,|\, w) p(w \,|\, z^m) $. We can now write \beq{ \aed{ \pi_{n \mid m}^* &= \argmax_\pi I_{p(w \,|\, z^m)}(w; z^n)\\ &:= \int \dd{z^n} p(z^n \,|\, z^m )\ \text{KL}(p(w \,|\, z^{m+n}), p(w \,|\, z^m ) ) \\ & = \int \dd{w} p(w \,|\, z^m) \int \dd{z^n}p( z^n \,|\, w ) \log \f{ p( z^n \,|\, w ) }{ p( z^n \,|\, z^m )}, } } where $p(w \,|\, z^m) \propto p( z^m \,|\, w) \pi(w)$ and $p( z^n \,|\, z^m ) = \int \dd{w} p( z^n \,|\, w) p(w \,|\, z^m)$. The key observation is that if the reference prior objective~\cref{eq:ref_pin} has a unique solution, then the optimal $p(w \,|\, z^m) \equiv \pi_n^*(w)$. This leads to \beq{ \pi_{n \mid m}^*(w) \propto \pi_n^*(w)\ p( z^m \,|\, w)^{-1}. \label{eq:ref_pinm} } This prior puts \emph{less} probability on regions which have a high likelihood on the old data $z^m$, whereby the posterior is maximally informed by the new samples $z^n$. Given knowledge of the old data, the prior \emph{downweights regions} in the weight space that could bias the posterior of the new data. We also have $\pi_{n \mid m}^* = \pi_n^*$ for $m = 0$, which is consistent with~\cref{eq:ref_pin}. As $m \to \infty$, this prior ignores the part of the weight space that was ideal for $z^m$. See~\cref{s:app:two_stage_coin_tossing} for an example. \begin{remark}[Averaging over $z^m$ in the two-stage experiment] If we do not know the outcomes $z^m$ yet, the prior should be calculated by averaging over both $z^m$ and $z^n$: \beq{ \aed{ &\pi^* = \argmax_\pi \int \dd{z^m} p(z^m) I_{p(w \,|\, z^m)}(w; z^n)\\ &:= I_\pi(w; z^{m+n}) - I_\pi(w; z^m) = H(w \,|\, z^m) - H(w \,|\, z^{m+n}). } \label{eq:ref_pinm_avg} } This encourages multiple explanations of the initial data $z^m$, i.e., a high $H(w \,|\, z^m)$, so as to let the future samples $z^n$ select the best one among these explanations, i.e., reduce the entropy $H(w \,|\, z^{m+n})$. It is interesting to note that this two-stage prior is neither equivalent to maximizing $I_\pi(w; z^{m+n})$, nor simply the optimal prior corresponding to the objectives $I_\pi(w; z^m)$ or $I_\pi(w; z^n)$. Both~\cref{eq:ref_pinm,eq:ref_pinm_avg} therefore indicate that two-stage priors are useful when we have \emph{some} data \emph{a priori}; this can be either unlabeled samples from the same task, or labeled samples from some other task.
\end{remark} \begin{remark}[A softer version of the two-stage reference prior] The objective in~\cref{eq:ref_pinm_avg} resembles the predictive information bottleneck (IB) of~\citet{bialek2001predictability}, or its variational version in~\citet{alemi2020variational}, which seeks to learn a representation, say $w$, that maximally forgets past data while remaining predictive of future data: \beq{ \textstyle \max_{p(w \,|\, z^m)} I(w; z^n) - \b I(w; z^m). \label{eq:pib} } The parameter $\b$ in~\cref{eq:pib} gives this objective control over how much information from the past is retained in $w$. We take inspiration from this and construct a variant of~\cref{eq:ref_pinm} \beq{ \aed{ \pi_{n \mid m}^{\b}(w) &\propto \pi_n^*(w) p( z^m \,|\, w)^{-\b}\quad \text{for}\ \b \in (0,1).\\ \implies p( w \,|\, z^{m+n} ) &\propto p(z^n \,|\, w) p(z^m \,|\, w)^{1-\b} \pi_n^*(w). } \label{eq:ref_pinm_beta} } We should use $\b=0$ when we expect that the data from the first stage $z^m$ is similar to the data $z^n$ from the second stage. This allows the posterior to \emph{benefit} from past samples. If we expect the data to be different, then $\b=1$ ignores regions in the weight space that predict well for $z^m$. This is similar to the predictive IB, where a small $\b$ encourages remembering the past and $\b=1$ encourages forgetting it. \end{remark} \subsection{Reference priors for transfer learning} \label{s:transfer} Consider the two-stage experiment where in the first stage we obtain $m$ samples $(\xm_s, \ym_s)$ from a ``source'' task $P^s$ and the second stage consists of $n$ samples $(\xn_t, \yn_t)$ from the ``target'' task $P^t$. Our goal is to calculate a prior $\pi(w)$ that best utilizes the target task data. Bayesian inference for this problem involves first computing the posterior $p(w \,|\, \xm_s, \ym_s )\propto p(\ym_s \,|\, w, \xm_s) \pi(w)$ from the source task and then using it as a prior to compute the posterior for the target task $p(w \,|\, \xn_t, \yn_t, \xm_s, \ym_s)$. Just like~\cref{s:reference_priors}, \textbf{the key idea again is to maximize the KL-divergence between the two posteriors} $\text{KL} \rbr{p(w \,|\, \xn_t, \yn_t, \xm_s, \ym_s),\ p(w \,|\, \xm_s, \ym_s)}$, but averaged over samples $\xm_s$ and $\xn_t$. \textbf{Case 1: Access to unlabeled data from the source task $\xm_s$ and the target task $\xn_t$} We should average the KL-divergence over both the source and target predictions $\ym_s$ and $\yn_t$ and maximize \beq{ \aed{ \E_{\xm_s, \xn_t, \ym_s \,|\, \xm_s, \yn_t \,|\, \xn_t} \text{KL} \rbr{p(w \,|\, \xn_t, \yn_t, \xm_s, \ym_s ), p( w \,|\, \xm_s, \ym_s)} } \label{eq:ref_pi_transfer_1} } over the prior $\pi$. Here $p_{\pi}( \ym_s \,|\, \xm_s) = \E_{w \sim \pi} p(\ym_s \,|\, \xm_s, w)$ and $p_{\pi}( \yn_t \,|\, \xn_t) = \E_{w \sim \pi} p(\yn_t \,|\, \xn_t, w)$. Note that the averages over $\xm_s$ and $\xn_t$ are computed using samples, while the averages over $\ym_s \,|\, \xm_s$ and $\yn_t \,|\, \xn_t$ are computed using the model's predictions. \textbf{Case 2: $\xm_s, \ym_s$ are fixed and known, and we have a pool of unlabeled target data $\xn_t$} Since we already know the labels for the source task, we only average over $\xn_t$ and $\yn_t$ and maximize \beq{ \aed{ \E_{ \xn_t, \yn_t \,|\, \xn_t} \text{KL} \rbr{p(w \,|\, \xn_t, \yn_t, \xm_s, \ym_s), p( w \,|\, \xm_s, \ym_s )}; } \label{eq:ref_pi_transfer_2} } here $p_{\pi}(\yn_t \,|\, \xn_t) = \int \dd{w} \pi(w) p(\yn_t \,|\, \xn_t, w)$.
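For a discrete prior, the reweighting in~\cref{eq:ref_pinm_beta} takes a particularly simple form. Here is a minimal sketch, assuming that the atoms of $\pi_n^*$ have equal mass $1/K$ and that a placeholder vector \texttt{loglik\_m} holds the log-likelihood $\log p(z^m \,|\, w^k)$ of the first-stage data under each particle.
\begin{verbatim}
import torch

def two_stage_weights(loglik_m, beta=0.5):
    # pi_{n|m}^beta(w_k) is proportional to
    # (1/K) * p(z^m | w_k)^(-beta): particles that fit the
    # first-stage data z^m well are downweighted, so that the
    # second-stage posterior is maximally informed by z^n.
    return torch.softmax(-beta * loglik_m, dim=0)

# Example: three particles with first-stage log-likelihoods
# -10, -20, -30; the best-fitting particle gets the least mass.
w = two_stage_weights(torch.tensor([-10., -20., -30.]))
print(w)  # increasing weights, summing to 1
\end{verbatim}
Setting $\b=0$ recovers the uniform weights of $\pi_n^*$, and $\b=1$ recovers~\cref{eq:ref_pinm}.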
\begin{remark}[Connecting~\cref{eq:ref_pi_transfer_1,eq:ref_pi_transfer_2} to practice] Both objectives can be written as \beq{ \pi^* = \argmax_\pi I_\pi(w; \yn_t, \xn_t, \xm_s, \ym_s) - I_\pi(w; \xm_s, \ym_s) \label{eq:ref_transfer_mi} } with the distinction that in Case 1 we average over all quantities, namely $p(\xm_s), p(\ym_s), p(\xn_t), p(\yn_t)$, while in Case 2 we fix $\xm_s$ and $\ym_s$ to the provided data from the source task. Case 2 is what is typically called transfer learning. Case 1, where one has access to \emph{only unlabeled data} from a source task \emph{that is different from the target task}, is not typically studied in practice. Like~\cref{eq:ref_pinm_beta}, we can again introduce a coefficient $\b$ on the second term in~\cref{eq:ref_transfer_mi} to handle the relatedness between the source and target tasks. \end{remark} \subsection{Practical tricks for implementing reference priors} \label{s:impl} The reference prior objective is conceptually simple but difficult to implement directly using deep networks and modern datasets. We next discuss some practical tricks that we have developed. \textbf{(1) Order of the prior $n$ versus the number of samples} \citet{bernardo1979reference} set the order of the prior $n$ to be the same as the number of samples. We make a distinction between the two and restrict our experiments to orders $n = 2, 3$. Mathematically, this amounts to computing the averages in~\cref{eq:ref_pin} or~\cref{eq:ref_ssl} over only sets of $n$ samples at a time rather than all of them. This significantly reduces the class of models considered in the reference prior by \emph{pretending} that there is a small number of samples available for training the task, which is useful, and also true in practice, for over-parametrized deep networks. This choice is also motivated by the low-dimensional structure in the reference prior in~\cref{fig:manifold_boundary}. Note that we are \emph{not} restricting ourselves to a small order $n$ for computational reasons: computing the expectation over all classes $y^n$ in~\cref{eq:ref_ssl} can be done in a single forward pass. \textbf{(2) Using the cross-entropy loss to bias particles towards good parts of the weight space} The posterior predictive distribution~\cref{eq:bayes_posterior_predictive_distribution} suggests that we should first compute the prior, and then weight each particle by the likelihood of the labeled data. In practice, we combine these two steps into a single objective \beq{ \aed{ \max_\pi \g I_\pi(w; y^u, x^u) + \E_{w \sim \pi}\sbr{\log p(y^n \,|\, x^n, w)}, } \label{eq:ref_mi_ce} } where $\g$ is a hyper-parameter and $x^n, y^n$ are labeled samples. \cref{eq:ref_mi_ce} allows us to directly obtain particles that both have a high probability under the prior and a high likelihood. This is different from the correct Bayesian posterior (which would set $\g = 1$; we use $\g = 1/2$) but it is a trick often employed in the SSL literature. The second term restricts the search space for the particles in $\pi(w)$. \textbf{(3) Data augmentation} State-of-the-art SSL methods use heavy data augmentation, e.g., RandAugment~\citep{cubuk2020randaugment} and CTAugment~\citep{berthelot2019remixmatch}, which both have about 20 transformations. Some are weak augmentations such as mirror flips and crops, while others are strong augmentations such as color jitter. Methods such as FixMatch~\citep{sohn2020fixmatch} or MixMatch~\citep{berthelot2019mixmatch} use weak augmentations to get soft labels for predictions on strong augmentations.
We compute the entropy term $H(y^u \,|\, x^u, w)$ in~\cref{eq:ref_ssl} using the distribution $p_G(y \,|\, x, w) = \E_{g \sim G} \sbr{p(y \,|\, g(x), w)}$ where $G = G_1 \cup G_2$ is the set of weak ($G_1$) and strong ($G_2$) augmentations. Let $g_i \sim G_i$ be an augmentation and denote $p_{g_i} \equiv p(y \,|\, g_i(x), w)$ for $i \in \{1, 2\}$. In every mini-batch we use $p_G(y \,|\, x, w) \approx \t p_{g_1} + (1 - \t) p_{g_2}$ where $\t$ is a hyper-parameter. This gives reasonable accuracy (about 87\% for 500 samples), but a bit lower than that of state-of-the-art SSL methods. We noticed that if we instead use an upper bound on the entropy from Jensen's inequality \beq{ \scalemath{1}{ - \E_{x^u} \int \dd{y^u} p_G(y^u \,|\, x^u, w)\sbr{ \t \log p_{g_1} + (1 - \t)\log p_{g_2}}} \label{eq:tau} } then we can close this gap in accuracy (see~\cref{tab:cifar10_ssl}). This is perhaps because the cross-entropy terms, e.g., $-p_{g_1} \log p_{g_2}$, force the predictions of the particles to be consistent across both types of augmentations, just like the objectives of FixMatch or MixMatch. Our formulation is thus useful not only to understand SSL but also to tweak it to perform as well as current methods, and thereby shed light on the theoretical underpinnings of their performance. \textbf{(4) Computing $H(y^u \,|\, x^u, w)$} A number of SSL methods work by creating pseudo-labels from weakly augmented data, which seems to be a key ingredient of good accuracy in our experience with these methods. We use two heuristics to compute the entropy term $H(y^u \,|\, x^u, w)$ that are motivated by this. First, we follow FixMatch and only use unlabeled data with confident predictions to compute $H(y^u \,|\, x^u, w)$: a datum $x$ contributes to the objective only if $\max_y p(y \,|\, g_1(x), w) > 0.95$. Second, if $G_1$ is the set of weak augmentations (see the previous point), methods like FixMatch and MixMatch use $\argmax_y p(y \,|\, g_1(x), w)$ as a pseudo-label but do not update it using the back-propagation gradient. This prevents the more reliable predictions on $G_1$ from changing. We also employ this trick. As a result, the entropy term $-\t^2 p_{g_1} \log p_{g_1}$ is a constant in~\cref{eq:tau}. To normalize the terms coming from $\t$ in~\cref{eq:tau}, we set $\g$ in~\cref{eq:ref_mi_ce} to $1/(1 - \t^2)$ instead of 1. We have also developed an argument to choose the appropriate value $\t=1/3$, which we explain in~\cref{s:app:setup}. \section{Empirical Study} \label{s:expt} \subsection{Setup} We evaluate on CIFAR-10 and CIFAR-100~\citep{krizhevsky2009learning}. For SSL, we use 50--1000 labeled samples, i.e., 5--100 samples/class, and use the rest of the samples in the training set as unlabeled samples. For transfer learning, we construct five-way classification tasks from CIFAR-100 (20 source--target pairs) and use 1000 labeled samples from the source and 100 labeled samples from the target task. All experiments use the WRN 28-2 architecture~\citep{zagoruyko2016wide}, the same as in~\citet{berthelot2019mixmatch}. For all our experiments, the reference prior is of order $n=2$ and has $K=4$ particles. We run all our methods for 200 epochs, with $\t=1/3$ in~\cref{eq:tau} and $\a=0.1$ in~\cref{eq:ref_ssl}. We set $\g = (1 - \t^2)^{-1}$ as discussed in~\cref{s:impl}. For inference, each particle maintains an exponential moving average (EMA) of the weights (this is common in SSL~\citep{tarvainen2017mean}). \cref{s:app:setup} provides more details.
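To see how these pieces fit together, here is a hedged PyTorch-style sketch of the per-mini-batch loss that combines~\cref{eq:ref_mi_ce} with the Jensen upper bound of~\cref{eq:tau}; the augmentation functions \texttt{weak} and \texttt{strong} and the particle list are placeholders, and the grouping into sets of $n$ samples from~\cref{eq:ref_ssl} is omitted for readability.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mixed_entropy(p1, p2, tau):
    # Jensen upper bound on H(y | x, w) from Eq. (tau) with
    # p1 = predictions on weak augmentations (detached) and
    # p2 = predictions on strong augmentations.
    lp1 = torch.log(p1 + 1e-12)
    lp2 = torch.log(p2 + 1e-12)
    return -(tau**2 * p1 * lp1            # constant (stop-grad)
             + tau * (1 - tau) * (p1 * lp2 + p2 * lp1)
             + (1 - tau)**2 * p2 * lp2).sum(-1)

def ssl_loss(particles, x_l, y_l, x_u, weak, strong,
             alpha=0.1, tau=1/3, thresh=0.95):
    gamma = 1.0 / (1.0 - tau**2)
    ce, h_w, p_mix = 0.0, 0.0, []
    for model in particles:
        # Cross-entropy on labeled data biases the particles
        # towards good parts of the weight space.
        ce = ce + F.cross_entropy(model(x_l), y_l)
        p1 = torch.softmax(model(weak(x_u)), -1).detach()
        p2 = torch.softmax(model(strong(x_u)), -1)
        # Only confidently predicted unlabeled samples contribute.
        mask = (p1.max(-1).values > thresh).float()
        h_w = h_w + (mask * mixed_entropy(p1, p2, tau)).mean()
        p_mix.append(tau * p1 + (1 - tau) * p2)
    ce, h_w = ce / len(particles), h_w / len(particles)
    # Ensemble entropy H(y^u | x^u) encourages disagreement.
    p_bar = torch.stack(p_mix).mean(0)
    h = -(p_bar * torch.log(p_bar + 1e-12)).sum(-1).mean()
    # Minimize cross-entropy minus gamma*(alpha*H - E_w[H_w]).
    return ce - gamma * (alpha * h - h_w)
\end{verbatim}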
\subsection{Semi-supervised learning} \label{s:expt:ssl} \textbf{Baseline methods} We compare to a number of recent methods such as FixMatch~\citep{sohn2020fixmatch}, MixMatch~\citep{berthelot2019mixmatch}, DASH~\citep{xu2021dash}, SelfMatch~\citep{kim2021selfmatch}, Mean Teacher~\citep{tarvainen2017mean}, Virtual Adversarial Training~\citep{miyato2018virtual}, and Mixup~\citep{berthelot2019mixmatch}. { \renewcommand{\arraystretch}{1.25} \begin{table}[htpb] \centering \Large \rowcolors{1}{}{black!5} \resizebox{\linewidth}{!}{ \begin{tabular}{l|lllll} \rowcolor{white} \toprule Method & \multicolumn{5}{c}{Samples} \\ \rowcolor{white} & 50 & 100 & 250 & 500 & 1000 \\ \midrule Mixup & - & - & 52.57 & 63.86 & 74.28 \\ VAT & - & - & 63.97 & 73.89 & 81.32 \\ Mean Teacher & - & - & 52.68 & 57.99 & 82.68 \\ MixMatch & 64.21\tstar & 80.29\tstar & 88.91\tstar & 90.35\tstar & 92.25\tstar \\ FixMatch (RA) & \entry{86.19}{3.37} (40) & 90.12\tstar & \entry{94.93}{0.65} & 93.91\tstar & 94.3\tstar \\ FixMatch (CTA) & \entry{88.61}{3.35} (40) & - & \entry{94.93}{0.33} & - & - \\ DASH (RA) & \entry{86.78}{3.75} (40) & - & \entry{95.44}{0.13} & - & - \\ DASH (CTA) & \entry{90.84}{4.31} (40) & - & \entry{95.22}{0.12} & - & - \\ SelfMatch & \entry{93.19}{1.08} (40) & - & \entry{95.13}{0.26} & - & - \\ FlexMatch & \entry{95.03}{0.06} (40) & - & \entry{95.02}{0.09} & - & - \\ \midrule Deep Reference Prior & \entry{85.45}{2.12} & \entry{88.53}{0.67} & \entry{92.13}{0.39} & \entry{92.94}{0.22} & \entry{93.48}{0.24} \\ \bottomrule \end{tabular} } \caption{ \textbf{Classification accuracy of different semi-supervised learning methods on CIFAR-10.} \textbf{Note:} RA and CTA in the methods column indicate that RandAugment or CTAugment were used for augmentations. Entries with * were evaluated by us using open-source implementations from the original authors for 256 epochs. All other entries are from the original papers. Entries with ``(40)'' indicate that 40 labeled samples were used instead of 50. } \label{tab:cifar10_ssl} \end{table} } \cref{tab:cifar10_ssl} compares the accuracy of different SSL methods on CIFAR-10. We find that the reference prior approach is competitive with a number of existing methods, e.g., it is remarkably close to FixMatch at all sample sizes (notice the error bars). There is a gap in accuracy at small sample sizes (40--50) when compared to recent methods. It is important to note that these recent methods employ a number of additional tricks: FlexMatch implements curriculum learning on top of FixMatch; DASH and FlexMatch use different thresholding for weak augmentations (this increases their accuracy by 2--5\%); SelfMatch has higher accuracies because of a self-supervised pretraining stage; and FixMatch (CTA) outperforms its RA variant by 1.5\%, which indicates that CTA augmentation is beneficial (we used RA). It is also extremely expensive to train SSL algorithms for 1000 epochs (all methods in~\cref{tab:cifar10_ssl} do so); we trained for 200 epochs. This experiment shows that our approach to SSL can obtain results that are competitive with sophisticated empirical methods without being explicitly formulated to enforce properties like label consistency with respect to augmentations. This also indicates that reference priors could be a good way to explain the performance of these existing methods, which is one of our goals in this paper.
\subsection{Transfer learning} \label{s:expt:transfer} Just like we did in~\cref{s:impl} for SSL, we instantiate~\cref{eq:ref_pinm_beta} and~\cref{eq:ref_pi_transfer_2} by combining prior selection, pretraining on the source task, and the likelihood of the target task into one objective, \beq{ \aed{ \textstyle \g I_\pi(w; y^u_t, x^u_t) &+ \E_{w \sim \pi} \sbr{\log p(\yn_t \,|\, \xn_t, w)}\\ &+ (1 - \b) \E_{w \sim \pi} \sbr{\log p(\ym_s \,|\, \xm_s, w)}, } \label{eq:transfer_expt_obj} } where $\g = 1/2$ and $\b=1/2$ are hyper-parameters, $(\xm_s, \ym_s)$ are labeled data from the source task ($m=1000$), $(\xn_t, \yn_t)$ are labeled data from the target task ($n=100$) and $x^u_t$ are unlabeled samples from the target task (all other samples). \textbf{As baselines}, we use three methods: (a) fine-tuning, which is a very effective strategy for transfer learning~\citep{dhillon2019a,kolesnikovbigtransferbit2020} but cannot use unlabeled target data, (b) using only labeled target data (this is standard supervised learning), and (c) using only labeled and unlabeled target data without any source data (this is simply SSL, or $\b=1$ in~\cref{eq:transfer_expt_obj}). \cref{fig:matrix} compares the performance for pairwise transfer across 5 tasks from CIFAR-100. Our reference prior objective in~\cref{eq:transfer_expt_obj} obtains much better accuracy than fine-tuning, which indicates that it leverages the unlabeled target data effectively. For each task, the accuracy is much better than both standard supervised learning and semi-supervised learning using our own reference prior approach (\cref{eq:ref_mi_ce}); both of these indicate that the labeled source data is being used effectively in~\cref{eq:transfer_expt_obj}. \begin{figure}[!h] \centering \includegraphics[width=0.4\linewidth]{fig/refprior_source.pdf} \includegraphics[width=0.4\linewidth]{fig/finetune.pdf} { \renewcommand{\arraystretch}{1.2} \begin{table}[H] \centering \LARGE \rowcolors{1}{}{black!5} \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l|rrrrr} \rowcolor{white} \toprule \rowcolor{white} Method \qquad \qquad Task ($\rightarrow$) & Vehicles-1 & Vehicles-2 & Fish & People & Aq. Mammals \\ \midrule Supervised Learning & 42.2 & 63.2 & 56.8 & 31.0 & 42.6 \\ Deep Reference Prior (SSL) & 63.6 & 75.2 & 54.6 & 34.0 & 47.4 \\ \bottomrule \end{tabular} } \end{table} } \caption{\textbf{Top: Accuracy (\%) of deep reference priors (left) and fine-tuning (right) for transfer learning tasks in CIFAR-100}. Cells are colored red/green relative to the median accuracy of each row. Darker shades of green indicate that the source task is more suitable for transfer. For example, Vehicles-1 as source is the best for all tasks according to the reference prior (left), which is optimal in theory, but fine-tuning cannot replicate this. The accuracy of cells in the left panel is better than that of the corresponding cells on the right, e.g., the gap in accuracy is 34.8\% for Vehicles-2 $\rightarrow$ Vehicles-1. \textbf{Bottom: Accuracy (\%) of supervised learning and SSL for all 5 tasks}. Each number here should be compared to the corresponding row of the matrices in the top panel, e.g., Vehicles-2 has 86\% accuracy when transferred from Vehicles-1 using our transfer method (left); it has 66\% accuracy from fine-tuning (right), while the same task achieves 63.2\% accuracy when trained by itself using supervised learning (table first row) and 75.2\% accuracy when trained using unlabeled target data (table second row).
Therefore the reference prior-based transfer objective can leverage both labeled source data and unlabeled target data. This pattern is consistent across all tasks. } \label{fig:matrix} \end{figure} \subsection{Ablation and analysis} \label{s:expt:analysis} This section presents ablation and analysis experiments for SSL on CIFAR-10 with 1000 labeled samples. We study the reference prior in different settings: (i) varying the order $n$ of the prior, (ii) varying the number of particles $K$ in the BA algorithm, and (iii) exponential moving averaging of the weights of each particle. We also study the two entropy terms in the reference prior objective individually. We use a reference prior of order $n=2$ in all our experiments. We see in~\cref{tab:app:order} that \textbf{changing the order of the prior} leads to marginal (about 1\%) changes in the accuracy. \begin{table}[htpb] \centering \rowcolors{1}{}{black!5} \resizebox{0.85\linewidth}{!}{ \begin{tabular}{l|rrrr} \rowcolor{white} \toprule Method \qquad \qquad Order ($\rightarrow$) & 2 & 3 & 4 & 5 \\ \midrule Deep Reference Prior ($K=2$) & 91.76 & 90.53 & 91.51 & 91.36 \\ \bottomrule \end{tabular} } \caption{The order of the reference prior has a minimal impact on the accuracy.} \label{tab:app:order} \end{table} \begin{table}[htpb] \centering \rowcolors{1}{}{black!5} \resizebox{0.85\linewidth}{!}{ \begin{tabular}{l|rrrr} \rowcolor{white} \toprule \rowcolor{white} Method \qquad \qquad \#Particles ($\rightarrow$) & 2 & 4 & 8 & 16 \\ \midrule Deep Reference Prior ($n=2$) & 91.3 & 91.76 & 89.79 & 90.72 \\ \bottomrule \end{tabular} } \caption{The number of particles has a minimal impact on the accuracy.} \label{tab:app:particles} \end{table} We next \textbf{vary the number of particles} in the prior in~\cref{tab:app:particles} and find that the accuracy is relatively consistent as the number of particles varies from $K=2$ to $K=16$. This seems surprising because a reference prior should ideally have an infinite number of atoms when it approximates Jeffreys prior. We should not a priori expect $K=2$ particles to be sufficient to span the prediction space of deep networks. But our experiment in~\cref{fig:manifold_boundary} provides insight into this phenomenon. It shows that the manifold of diverse predictions is low-dimensional. Particles of the reference prior only need to span these few dimensions, so we can fruitfully implement our approach using very few particles. \textbf{Effect of exponential moving averaging (EMA)} We use EMA on the weights of each particle (independently). \cref{tab:app:ema} analyzes the impact of EMA. As noticed in other semi-supervised learning works~\citep{berthelot2019mixmatch,sohn2020fixmatch}, EMA improves the accuracy by 2--3\% regardless of the number of labeled samples used.
\begin{table}[htpb] \centering \rowcolors{1}{}{black!5} \resizebox{\linewidth}{!}{ \begin{tabular}{l|rrrrr} \rowcolor{white} \toprule \rowcolor{white} Method \qquad \#Samples ($\rightarrow$) & 50 & 100 & 250 & 500 & 1000 \\ \midrule EMA & \entry{85.45}{2.12} & \entry{88.53}{0.67} & \entry{92.13}{0.39} & \entry{92.94}{0.22} & \entry{93.48}{0.24} \\ No EMA & \entry{82.36}{2.13} & \entry{85.64}{0.43} & \entry{89.75}{0.36} & \entry{90.06}{1.71} & \entry{91.57}{0.25} \\ \bottomrule \end{tabular} } \caption{Using EMA on the weights of each particle is beneficial and improves accuracy by 2--3\%.} \label{tab:app:ema} \end{table} \begin{figure}[htpb] \centering \includegraphics[width=0.495\linewidth]{fig/particles_acc_v2.pdf} \includegraphics[width=0.495\linewidth]{fig/entropies.pdf} \vspace*{-2em} \caption{\textbf{(Left)} Accuracy of individual particles in the prior during training (250 labeled samples). The individual particles have diverse predictions due to the entropy term $H(y^n \,|\, x^n)$; the accuracy of the ensemble is larger than the accuracy of any single particle. \textbf{(Right)} Evolution of the entropy terms $H(y^u \,|\, x^u, w)$ and $H(y^u \,|\, x^u)$ for two cases (500 labeled samples and 50 labeled samples). While $H(y^u \,|\, x^u)$ is expected to be larger than $H(y^u \,|\, x^u, w)$ in~\cref{eq:ref_ssl} since the KL-divergence is non-negative, this is not always the case here because we approximate $H(y^u \,|\, x^u, w)$ by an upper bound obtained from Jensen's inequality for data augmentation, as discussed in~\cref{s:impl}. } \label{fig:particle_accuracy_entropy} \end{figure} \textbf{The two entropy terms in the reference prior objective} \cref{fig:particle_accuracy_entropy} (left) shows how, because of the entropy term $H(y^u \,|\, x^u)$, the accuracies of the particles differ during training. Particles have different predictive abilities (a 7\% range in test error), but the Bayesian posterior predictive distribution has a higher accuracy than any of them. \cref{fig:particle_accuracy_entropy} (right) tracks the two entropy terms in the objective. For a large number of labeled samples (500, blue), the entropy $H(y^u \,|\, x^u)$, which should always be higher than $H(y^u \,|\, x^u, w)$ in~\cref{eq:ref_ssl}, is in fact lower (this is not the case for 50 samples, red). This is likely a result of the cross-entropy term in the modified objective in~\cref{eq:ref_mi_ce}, which narrows the search space of the particles. This experiment also gives an important insight into the workings of existing semi-supervised learning methods, all of which have a similar cross-entropy term in their formulation. It points to the fact that at large sample sizes, the cross-entropy loss, and not the semi-supervised learning objective, could dominate the training procedure. \section{Background} \label{s:background} \subsection{Setup} Consider a dataset $\hat{P}_n = \cbr{(x_i, y_i)}_{i=1}^n$ with $n$ samples that consists of inputs $x_i \in \reals^d$ and labels $y_i \in \cbr{1,\ldots,C}$. Each sample of this dataset is drawn from a joint distribution $P(x, y)$, which we define to be the ``task''. We will use the shorthand $x^n = (x_1,\ldots,x_n)$ and $y^n = (y_1,\ldots,y_n)$ to denote all inputs and labels. Let $w \in \reals^p$ be the weights of a probabilistic model which evaluates $p_w(y \,|\, x)$. We will use a random variable $z$ with a probabilistic model $p_w(z)$ when we do not wish to distinguish between inputs and labels.
Given a prior on the weights $\pi(w)$, Bayes law gives the posterior \( p(w \,|\, x^n, y^n) \propto p(y^n \,|\, x^n, w) \pi(w). \) The Fisher Information Matrix (FIM) $g \in \reals^{p \times p}$ has entries \[ g(w)_{kl} = \f{1}{n} \sum_{i=1}^n \sum_{y=1}^C p_w(y \,|\, x_i)\, \partial_{w_k} \log p_w(y \,|\, x_i)\, \partial_{w_l} \log p_w(y \,|\, x_i). \] It can be used to define the Jeffreys prior $\pi_J(w) \propto \sqrt{\det g(w)}$. Jeffreys prior is reparameterization invariant, i.e., it assigns the same probability to a set of models irrespective of our choice of parameterization of those models. It is an uninformative prior in the sense that it imposes only some generic structure on the problem (reparameterization invariance). \subsection{Reference Priors} \label{s:reference_priors} To make the choice of a prior more objective, \citet{bernardo1979reference} suggested that uninformative priors should maximize some divergence, say the Kullback-Leibler (KL) divergence $\text{KL}(p(w \,|\, z), \pi(w)) = \int \dd{w} p(w \,|\, z) \log \rbr{p(w \,|\, z)/\pi(w)}$, between the prior and the posterior for data $z$. The rationale for doing so is to allow the data to dominate the posterior rather than our choice of the prior. Since we do not know the data \emph{a priori} while picking the prior, we should maximize the \emph{average} KL-divergence over the data distribution $p(z)$. This amounts to maximizing the mutual information \beq{ \aed{ &\pi^* = \argmax_\pi I_\pi(w; z) \\ &:= \int \dd{z} \dd{w} p(z) p(w \,|\, z) \log \f{p(w \,|\, z)}{\pi(w)} = H(w) - H(w \,|\, z) } \label{eq:ref_defn} } where $p(z) = \int \dd{w} \pi(w) p(z \,|\, w)$ and $H(w) = -\int \dd{w} \pi(w) \log \pi(w)$ is the Shannon entropy; the conditional entropy $H(w \,|\, z)$ is defined analogously. Mutual information is a natural quantity for measuring the amount of information about $w$ that is provided by the data $z$ if the initial belief was $\pi$. The prior $\pi^*(w)$ is known as a reference prior. It is invariant to a reparameterization of the weight space because mutual information is invariant to reparameterization. The reference prior does not depend upon the samples $\hat{P}_n$ but only on their distribution $P$. The objective to calculate the reference prior $\pi^*$ above may not be analytically tractable, and therefore Bernardo also suggested computing $n$-reference priors. We call $n$ the ``order'' and deliberately overload the notation for the number of samples $n$; the reason will be clear soon. \beq{ \aed{ \textstyle \pi_n^* = \argmax_\pi I_\pi(w; z^n) &= H(w) - H(w \,|\, z^n), } \label{eq:ref_pin} } using $n$ samples, and then setting $\pi^* := \lim_{n \to \infty} \pi_n^*$ under appropriate technical conditions~\citep{berger1988priors}. Reference priors are asymptotically equivalent to Jeffreys prior for one-dimensional problems. In general, they differ for multi-dimensional problems, but it can be shown that Jeffreys prior is the continuous prior that maximizes the mutual information~\citep{clarke1994jeffreys}. \subsection{Blahut-Arimoto algorithm} \label{s:blahut_arimoto} The Blahut-Arimoto (BA) algorithm~\citep{arimoto1972algorithm,blahut1972computation} is a method for maximizing functionals like~\cref{eq:ref_defn} and leads to iterations of the form $\pi^{t+1}(w) \propto \exp \rbr{\text{KL}(p(z \,|\, w), p(z))} \pi^{t}(w)$. It is typically implemented for discrete variables, e.g., in the Information Bottleneck~\citep{tishbyInformationBottleneckMethod1999}.
In this case, maximizing the mutual information is a convex problem and therefore the BA algorithm is guaranteed to converge. Such discretization is difficult for high-dimensional deep networks. We therefore implement the BA algorithm using particles; see~\cref{rem:ba_particles}. \begin{example}[Estimating the bias of a coin] \label{eg:bias} To ground intuition, consider the estimation of the bias of a coin $w \in [0,1]$ using $n$ trials. If $z$ denotes the number of heads (which is a sufficient statistic), we have $p(z \,|\, w) = w^z (1-w)^{n-z} n!/(z! (n-z)!)$. For $n=1$, the outcome is a single bit and therefore $I_\pi(w; z^1) \leq \log 2$; we can see that \( \pi^*_1(w) = (\delta(w) + \delta(1-w))/2 \) is the reference prior that achieves this upper bound. This result is intuitive: if we \emph{know} that we have only one observation, then the optimal uninformative prior should put equal probability mass on the two extreme outcomes $w=0$ (the coin always lands tails) and $w=1$ (the coin always lands heads). We can numerically calculate $\pi_n^*$ for different values of $n$ using the BA algorithm (\cref{fig:coin_convergence_jeffreys}). \end{example} \begin{figure}[htpb] \centering \includegraphics[width=0.325\linewidth]{fig/Coin_N=1.pdf} \includegraphics[width=0.325\linewidth]{fig/Coin_N=10.pdf} \includegraphics[width=0.325\linewidth]{fig/Coin_N=50.pdf} \caption{\textbf{Reference prior for the coin-tossing model} for $n=1, 10, 50$ (from left to right) computed using the Blahut-Arimoto algorithm. Atoms are critical points of the gray line, which is $\text{KL}(p(z^n \,|\, w), p(z^n))$. The prior is discrete for finite order $n < \infty$~\citep{mattingly2018maximizing}. Atoms of the prior are maximally different from each other, e.g., for $n=1$, they are on opposite corners of the parameter space. As the number of samples increases, the separation between atoms of the prior reduces. The prior converges to Jeffreys prior $\pi_J(w) \propto \rbr{w (1-w)}^{-1/2}$ as $n \to \infty$.} \label{fig:coin_convergence_jeffreys} \end{figure}
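To make the BA iteration concrete, the following is a small self-contained sketch (assuming a grid discretization of $w$ and SciPy's binomial likelihood; it is not the exact code behind~\cref{fig:coin_convergence_jeffreys}) that reproduces the discreteness of the order-$n$ prior for the coin.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def coin_reference_prior(n, grid=501, iters=2000):
    # Order-n reference prior for a coin with bias w in [0, 1];
    # z is the number of heads in n tosses.
    w = np.linspace(0.0, 1.0, grid)
    z = np.arange(n + 1)
    lik = binom.pmf(z[None, :], n, w[:, None])   # p(z | w)
    pi = np.full(grid, 1.0 / grid)               # uniform start
    for _ in range(iters):
        marg = pi @ lik                          # p(z) under pi
        kl = (lik * np.log((lik + 1e-300)
                           / (marg + 1e-300))).sum(axis=1)
        pi = pi * np.exp(kl)                     # BA update
        pi = pi / pi.sum()
    return w, pi

w, pi = coin_reference_prior(n=1)
# Mass concentrates near w=0 and w=1 with weight 1/2 each.
print(w[pi > 1e-3], pi[pi > 1e-3])
\end{verbatim}
Each iteration is exactly the update $\pi^{t+1}(w) \propto \exp \rbr{\text{KL}(p(z \,|\, w), p(z))} \pi^{t}(w)$ from~\cref{s:blahut_arimoto}, restricted to the grid.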
The references at the end of this document give examples for journal articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports \cite{mitchell80}, and dissertations \cite{kearns89}. Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. Make sure that each reference includes all relevant information (e.g., page numbers). Please put some effort into making references complete, presentable, and consistent, e.g., use the actual current names of authors. If using bibtex, please protect capital letters of names and abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz in your .bib file. \section*{Accessibility} Authors are kindly asked to make their submissions as accessible as possible for everyone including people with disabilities and sensory or neurological differences. Tips on how to achieve this and what to pay attention to will be provided on the conference website \url{http://icml.cc/}. \section*{Software and Data} If a paper is accepted, we strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, \textbf{do not} include URLs that reveal your institution or identity in your submission for review. Instead, provide an anonymous URL or upload the material as ``Supplementary Material'' into the CMT reviewing system. Note that reviewers are not required to look at this material when writing their review. \section*{Acknowledgements} \textbf{Do not} include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. In this case, please place such acknowledgements in an unnumbered section at the end of the paper. Typically, this will include thanks to reviewers who gave useful comments, to colleagues who contributed to the ideas, and to funding agencies and corporate sponsors that provided financial support. \nocite{langley00} \section{Details of the experimental setup} \label{s:app:setup} \paragraph{Architecture} For experiments on CIFAR-10 and CIFAR-100 (\cref{s:expt}), we consider a modified version of the Wide-Resnet 28-2 architecture~\citep{zagoruyko2016wide}, which is identical to the one used in \citet{berthelot2019mixmatch}. This architecture differs from the standard Wide-Resnet architecture in a few important aspects. The modified architecture has Leaky-ReLU with slope 0.1 (as opposed to ReLU), no activations or batch normalization before any layer with a residual connection, and a momentum of 0.001 for the batch-normalization running mean and standard deviation (as opposed to 0.1; in other words, these statistics are made to change very slowly). We observed that the change to batch-normalization momentum has a very large effect on the accuracy of semi-supervised learning. For experiments on MNIST (\cref{s:app:mnist}), we use a fully-connected network with one hidden layer of size 32. We use the hardtanh activation in place of ReLU for this experiment; this is because maximizing the mutual information has the effect of increasing the magnitude of the activations for ReLU networks.
One may use weight decay to control the scale of the weights and thereby that of the activations, but in an effort to implement the reference prior exactly, we did not use weight decay in this model. Note that the nonlinearities for the CIFAR models are ReLUs. \paragraph{Datasets} For semi-supervised learning, we consider the CIFAR-10 dataset with the number of labeled samples varying from 50--1000 (i.e., 5--100 labeled samples per class). Semi-supervised learning experiments use all samples that are not part of the labeled set as unlabeled samples. For transfer learning, we construct two tasks from MNIST (task one is a 5-way classification task for digits 0--4, and task two is another 5-way classification task for digits 5--9). For this experiment, we use labeled source data but do not use any labeled target data. This makes our approach using a reference prior similar to a purely unsupervised method. The CIFAR-100 dataset is also utilized in the transfer learning setup (\cref{s:expt:transfer}). We consider five 5-way classification tasks from CIFAR-100 constructed using the super-classes. The five tasks considered are Vehicles-1, Vehicles-2, Fish, People, and Aquatic Mammals. The selection of these tasks was motivated by the fact that some pairs of tasks are known to positively impact each other (Vehicles-1, Vehicles-2), while other pairs are known to be detrimental to each other (Vehicles-2, People); see the experiments in~\citet{rameshModelZooGrowing2022}. \paragraph{Optimization} SGD with Nesterov momentum on a cosine-annealed learning rate schedule with warmup was used in our experiments on CIFAR-10 and CIFAR-100. The initial learning rate was set to $0.03 \times K$ where $K$ denotes the number of particles. The scaling factor of $K$ exists to counteract the normalization constant in the objective from averaging across all particles. The momentum coefficient for SGD was set to 0.9 and weight decay to $5K^{-1}\times 10^{-4}$. Mixed-precision (32-bit weights, 16-bit gradients) was used to expedite training. Training was performed for 200 epochs unless specified otherwise. Experiments on MNIST also used SGD for computing the reference prior. SGD was used with a constant learning rate of 0.001 with Nesterov's acceleration, a momentum coefficient of 0.9, and weight decay of $10^{-5}$. \paragraph{Definition of a single epoch} Note that since we iterate over the unlabeled and labeled data (each with a different number of samples), the notion of an epoch needs to be defined differently. In our work, one epoch refers to 1024 weight updates, where each weight update is calculated using a batch size of 64 for the labeled data and a batch size of 448 for the unlabeled data. \paragraph{Exponential Moving Average (EMA)} In all CIFAR-10 and CIFAR-100 experiments, we also implement the Exponential Moving Average (EMA)~\citep{tarvainen2017mean}. In each step, the EMA model is updated such that the new weights are the weighted average of the old EMA model weights and the latest trained model weights. The weights for averaging used in our work (and most other methods) are 0.999 and 0.001, respectively. Note that EMA only affects the particle when it is used for testing; it does not affect how weight updates are calculated during training. We exclude batch-normalization running mean and variance estimates in EMA.
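For concreteness, this EMA update can be sketched as follows (a minimal sketch assuming PyTorch-style modules; the function name \texttt{ema\_update} and its arguments are our own illustrative choices, not the code used for the experiments):
\begin{verbatim}
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    # New EMA weights = 0.999 * old EMA weights + 0.001 * latest weights.
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)
    # As stated above, the batch-normalization running mean/variance
    # buffers are deliberately excluded from the averaging.
\end{verbatim}
The EMA weights are used only at test time; gradients during training are always computed with the latest weights.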
\paragraph{Data Augmentations} We use random-horizontal flips and random-pad-crop (padding of 4 pixels on each side) as weak augmentations for the CIFAR-10 and CIFAR-100 datasets. For SSL experiments on CIFAR-10, we use RandAugment~\citep{cubuk2020randaugment} for strong augmentations. No data augmentations were used for MNIST. \paragraph{Picking the value of $\t$ in~\cref{eq:tau}} Let $G_1$ and $G_2$ be the sets of weak and strong augmentations, respectively. For $g_1 \sim G_1$ and $g_2 \sim G_2$, let us write down the upper bound in~\cref{eq:tau} from Jensen's inequality in detail: \[ \E_{x^u} \int \dd{ y^u} \sbr{- \t^2 p_{g_1} \log p_{g_1} - \t( 1- \t) p_{g_2} \log p_{g_1} - (1-\t)\t p_{g_1} \log p_{g_2} - (1 - \t )^2 p_{g_2} \log p_{g_2} }. \] The upper bound is thus a weighted sum of the entropy terms $-p_{g_1} \log p_{g_1}, -p_{g_2} \log p_{g_2}$, and cross entropy terms $-p_{g_2} \log p_{g_1}, -p_{g_1} \log p_{g_2}$. If we were to pick $\t = 1/2$ like FixMatch, then since $(1 - \t )^2 + \t^2 = 2 \t (1- \t)$ for $\t=1/2$, the entropy and cross entropy terms will contribute equally to the loss function. However, in practice, since we do not update $p_{g_1}$ using the back-propagation gradient to protect the predictions from deteriorating on the weakly augmented images, one of the entropy terms $- p_{g_1} \log p_{g_1}$ is dropped. In such a situation, to ensure that cross entropy and entropy terms provide an equal contribution to the gradient, we would like $(1- \t)^2 = 2 \t (1- \t)$, which gives $\t = 1/3$. \section{Visualizing the reference prior} \label{s:app:visualizing} We can think of each particle $w$ as representing a probability distribution \[ \reals^{nC} \ni f(w) = \rbr{\sqrt{p_w(y=1 \,|\, x_1)}, \sqrt{p_w(y=2 \,|\, x_1)}, \ldots, \sqrt{p_w(y=C \,|\, x_n)}} \] and use a method for visualizing such distributions developed in~\citet{Quinn13762} that computes a principal component analysis (PCA) of such vectors $\{ f(w^1), \ldots, f(w^K)\}$. This method computes an isometric embedding of the space of probability distributions. The rationale behind the choice of $f(w)$ is that for two weight vectors $w, w'$, the squared Euclidean distance between $f(w)$ and $f(w')$ is the average squared Hellinger distance between the respective probability distributions, \beqs{ \aed{ \norm{f(w) - f(w')}^2 &= \f{1}{2 n} \sum_{i=1}^{n} \sum_{k=1}^{C} \rbr{ \sqrt{p_w( y = k \,|\, x_i )} - \sqrt{p_{w'}( y = k \,|\, x_i )}}^2 \\ & = \f{1}{n} \sum_{i=1}^{n} d_H^2\rbr{p_w( \cdot \,|\, x_i ), p_{w'}( \cdot \,|\, x_i )}, } } where \[ d^2_H(P, Q) = \f{1}{2} \int \rbr{\sqrt{\dd{P}} - \sqrt{\dd{Q}}}^2 \] is the squared Hellinger distance. In other words, the prediction vector $f(w)$ maps the weights $w$ into a $(nC)$-dimensional space. The Euclidean metric in this space corresponds to the Hellinger distance in the space of probability distributions. We can therefore compute the principal component analysis (PCA) of these vectors and project the vectors $f(w)$ into lower dimensions to visualize them, as done in~\cref{fig:manifold_boundary}. \section{Additional Experiments} \label{s:app:expt} \subsection{Unsupervised transfer learning on MNIST} \label{s:app:mnist} For the following experiments on MNIST, the reference prior is of order $n= 2$ and has $K= 50$ particles. We run our methods for $1024$ epochs. We first compare deep reference priors with fine-tuning for transfer learning. The parameter $\b$ controls the degree to which the posterior in~\cref{eq:ref_pinm_beta} is influenced by the target data.
If we have $\beta=1$, then the posterior is maximally influenced by target data after being pretrained on the source data. We instantiate~\cref{eq:ref_pinm_beta} by combining prior selection and pretraining on the source task into one objective, \beq{ \aed{ \textstyle \max_\pi \g I_\pi(w; y^u, x^u) + (1 - \b) \E_{w \sim \pi}\log p(w; y^s \,|\, x^s), } \label{eq: mnist_transfer} } where $\g$ and $\b$ are hyper-parameters. Solving~\cref{eq: mnist_transfer} requires no knowledge of the target data labels; the setting here is therefore purely unsupervised clustering of the target-task dataset. We compare this objective to fine-tuning, which adapts a model trained on the labeled source data to the labeled target data. In this experiment, all samples from the source task (about 30,000 images across 5 classes) were used for both the reference prior and fine-tuning. \begin{table}[H] \centering \rowcolors{1}{}{black!5} \resizebox{0.65\linewidth}{!}{ \begin{tabular}{l|rrrrr} \rowcolor{white} \toprule \rowcolor{white} Method \qquad \# Labeled target data ($\rightarrow$) &0 & 50 & 100 & 250 & 500 \\ \midrule \textbf{Source (0--4) to Target (5--9)}\\ Fine-Tuning & - & 71.1 & 78.8 & 86.6 & 93.0\\ Deep Reference Prior Unsupervised Transfer & 87.4 & - & - & -& - \\ \midrule \textbf{Source (5--9) to Target (0--4)}\\ Fine-Tuning & - & 90.2 & 92.4 & 94.7 & 96.2\\ Deep Reference Prior Unsupervised Transfer & 95.2 & - & - & -& - \\ \bottomrule \end{tabular} } \caption{ \textbf{Accuracy (\%) of unsupervised reference-prior based transfer} between the source and target tasks (digits 0--4 and 5--9, in both directions). Transfer using labeled source data and unlabeled target data via the reference prior performs as well as fine-tuning with labeled source data and 250 labeled target samples. Even though MNIST is a simple dataset, this is a remarkable demonstration of how effective the reference prior is at making use of both the labeled source data and unlabeled target data. } \label{tab:mnist_transfer1} \end{table} \section{Two-stage experiment for coin tossing} \label{s:app:two_stage_coin_tossing} In~\cref{s:two_stage}, we consider a situation where we obtain data in two stages, first $z^m$, and then $z^n$. We propose a prior $\pi^*$ in~\cref{eq:ref_pinm_avg} such that the posterior of the second stage makes the maximal use of the new $n$ samples. In this section, we visualize $\pi^*$ in the parameter space using a two-stage coin tossing experiment. Consider the estimation of the bias of a coin $w \in [0,1]$ using $m+n$ trials obtained in two stages: $m$ trials in the first stage and $n$ trials in the second. If $z$ denotes the number of heads in total, we have $p(z \,|\, w) = w^z (1-w)^{m+n-z} (m+n)!/(z! (m+n-z)!)$. We numerically find $\pi^*$ for different values of $m$ and $n$ using the Blahut--Arimoto (BA) algorithm (\cref{fig:app:coin_two_stage1} and \cref{fig:app:coin_two_stage2}). \begin{figure}[htpb] \centering \includegraphics[width=0.3\linewidth]{fig/Coin_M=1_N=1.pdf} \includegraphics[width=0.3\linewidth]{fig/Coin_M=1_N=10.pdf} \includegraphics[width=0.3\linewidth]{fig/Coin_M=1_N=40.pdf} \caption{\textbf{Reference prior for the two-stage coin-tossing model (see~\cref{eq:ref_pinm_avg})} for $m=1$ and $n=1, 10, 40$ (from left to right) computed using the Blahut--Arimoto algorithm. Atoms are critical points of the gray line, which is $\text{KL}(p(z^{m+n}), p(z^{m+n} \,|\, w))- \text{KL}(p(z^m), p(z^m \,|\, w))$. The prior is again discrete for finite order $n < \infty$.
We see how this reference prior behaves for different values of $\a = m/n$, e.g., for $\a \to 0$ the prior $\pi^*$ is close to $\pi_n^*$ in~\cref{eq:ref_pin}, although some differences remain. This shows that the two-stage reference prior is not the same as the single-stage reference prior.} \label{fig:app:coin_two_stage1} \end{figure} \begin{figure}[htpb] \centering \includegraphics[width=0.3\linewidth]{fig/Coin_M=10_N=1.pdf} \includegraphics[width=0.3\linewidth]{fig/Coin_M=30_N=1.pdf} \caption{\textbf{Reference prior for the two-stage coin-tossing model} (see~\cref{eq:ref_pinm_avg}) for $n=1$ and $m=10, 30$ (from left to right) computed using the Blahut--Arimoto algorithm. Atoms are critical points of the gray line, which is $\text{KL}(p(z^{m+n}), p(z^{m+n} \,|\, w))- \text{KL}(p(z^m), p(z^m \,|\, w))$.} \label{fig:app:coin_two_stage2} \end{figure}
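To make the computation above concrete, the following is a minimal sketch of the standard single-stage Blahut--Arimoto iteration for the binomial model (NumPy/SciPy assumed; the grid size, iteration count, and variable names are our own illustrative choices, and the two-stage prior of~\cref{eq:ref_pinm_avg} would replace the exponent by the difference of the two KL terms shown in the captions):
\begin{verbatim}
import numpy as np
from scipy.stats import binom

n = 10                                      # number of tosses
w = np.linspace(1e-3, 1.0 - 1e-3, 501)      # discretized coin bias
z = np.arange(n + 1)                        # number of heads
lik = binom.pmf(z[None, :], n, w[:, None])  # p(z | w)

pi = np.full(w.size, 1.0 / w.size)          # start from a uniform prior
for _ in range(2000):
    marg = pi @ lik                         # p(z) = sum_w pi(w) p(z | w)
    kl = np.sum(lik * (np.log(lik) - np.log(marg)), axis=1)
    pi *= np.exp(kl)                        # Blahut-Arimoto update
    pi /= pi.sum()
# pi concentrates its mass on a few atoms, matching the discreteness
# of finite-order reference priors noted in the captions above.
\end{verbatim}
Each iteration reweights the prior toward biases whose likelihoods are far (in KL divergence) from the current marginal.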
{ "timestamp": "2022-02-02T02:09:29", "yymm": "2202", "arxiv_id": "2202.00187", "language": "en", "url": "https://arxiv.org/abs/2202.00187" }
\section{Introduction} \label{sec:introduction} Recent progress in 3D acquisition and reconstruction technology makes the capture of 3D scenes ubiquitous. In dynamic point clouds, each frame consists of a list of data points with 3D coordinates and RGB color values. Since point clouds in raw format would require a huge amount of bandwidth for transmission, there has been significant interest in point cloud compression techniques, which has led to MPEG standardization efforts considering both video-based point cloud compression (V-PCC) and geometry-based point cloud compression (G-PCC) \cite{overview2019,overview2020}. Methods for inter-frame (temporal) prediction have been proposed to achieve efficient compression of dynamic point clouds. These methods can be grouped into three main categories. In \textit{voxel-based} schemes \cite{Dorina2016}, where a motion vector (MV) is estimated for each voxel, a few points in both the prediction and reference frames are selected as anchors to establish correspondence via spectral matching, leading to a set of sparse MVs. Then, using a smoothness constraint, a dense set of MVs can be obtained from the sparse set to provide motion for all remaining points. In \textit{patch-based} techniques \cite{Xu2020}, motion estimation (ME) is considered an unsupervised 3D point registration process wherein an MV is estimated by iterative closest point (ICP) \cite{ICP} for each patch generated by K-means clustering. In this paper, we focus on \textit{block-based} methods, where frames to be predicted are partitioned into several non-overlapping 3-dimensional blocks of a given size. For each block, the best matching block in a reference frame is selected according to specific matching criteria, which can be based purely on geometry, e.g., an ICP-based approach that generates rigid transforms \cite{Rufael2017}, or can use a combination of geometry and color attribute information \cite{Ricardo2017}. Recent work has also focused on block-based motion search speedup, including both efficient search pattern design and search window reduction \cite{Camilo2018,Camilo2019,Souto2020,SantosICIP2021}. Our work is motivated by the observation that ME with sub-pixel accuracy is an essential tool for modern video coding \cite{Girod1993}, while all the aforementioned ME methods for dynamic point clouds are based on integer-voxel displacements. There are two main reasons why a direct extension of video-based fractional ME to 3D contexts is not straightforward. First, point clouds are irregularly distributed within each frame, i.e., only those voxels that correspond to the surfaces of objects in the scene contain attribute information. Thus, while interpolation of attributes at new voxel locations can be based on conventional methods, we have the additional challenge of choosing only those new voxel locations that are consistent with object surfaces, even though those surfaces are not explicitly known. For example, we would like to avoid creating additional, fractional-accuracy voxels \textit{inside} an object. Second, voxels are inconsistent across frames, i.e., both the number of voxels and their distribution in space are different from frame to frame. Thus, since two matching blocks in consecutive frames will in general contain different numbers of voxels with attribute information, we will need to develop alternatives to the one-to-one pixel (or sub-pixel) matching commonly used for conventional video.
In this paper, we focus on fractional-voxel motion estimation (FvME) under the assumption that integer-voxel MVs (IvMVs) have already been obtained using an existing integer-voxel motion estimation (IvME) scheme \cite{Ricardo2017,Camilo2018,Camilo2019,Souto2020}. Specifically, we use precomputed IvMVs from a public database \cite{IMVdataset}. In our approach, we start by creating fractional voxels between pairs of \textit{neighboring} occupied integer voxels. Neighboring voxels are used to favor consistency with object surfaces, without requiring explicit estimation of the surfaces. Then, a higher-resolution point cloud is obtained by interpolating attributes at each fractional voxel from the values at nearby integer voxels. FvME is implemented by searching fractional-voxel MVs (FvMVs) around the positions given by IvMVs and selecting the fractional displacement leading to the lowest motion-compensated prediction error. Motion-compensated prediction is implemented by directly copying, as the attribute for a voxel in a block in the current frame, the attribute of the \textit{nearest} voxel in the matched block in the reference frame. Our proposed FvME scheme leads to improved performance over transform-based approaches without inter or intra prediction and is also significantly better than temporal prediction methods based on the IvMVs from \cite{IMVdataset}. \section{Fractional-Voxel Motion Estimation and Compensation} \label{sec:FME} \subsection{Motivation} Real-world scenes and objects are captured by multiple calibrated and synchronized RGB or RGB-D camera clusters from various viewing angles \cite{GROOT,8idataset}. After stitching and voxelization, dynamic point clouds are generated on integer grids. Note that the 3D voxel coordinates are obtained as integer approximations to the ``true'' positions of the object in 3D space, while the optimal displacement between frames is unlikely to be exactly integer. Thus, a fractional voxel displacement can be better than an integer one, so that higher-resolution MVs have the potential to provide more accurate motion and hence more accurate MC prediction. Furthermore, distortion due to lossy coding in previously reconstructed point cloud frames can lead to higher prediction errors, while camera noise, lighting changes in the capture environment, object movements, etc., may also result in noisy color attributes and in imperfect matches during motion search \cite{Zhao2017}. Thus, as for conventional video, where it is well known that fractional motion compensation contributes to noise removal, the process of generating higher-resolution point clouds and attributes at fractional voxel locations can contribute to denoising and lead to improvements in the quality of the reference frames. \subsection{Occupied fractional voxels} \label{sec:OFV} In this section, we define fractional voxels and describe our proposed method for interpolation. Based on the same design philosophy used for images and videos, fractional voxels are created at selected intermediate locations between voxels on the integer resolution grid. We define a fractional voxel of $1/2$ resolution (1/2-voxel) as a voxel at the midpoint between any two neighboring integer voxels. As noted in the introduction, not all integer voxels are ``occupied'' in 3D space, and those that are occupied typically correspond to object surfaces.
Thus, in our proposed method, new fractional voxels are created only at locations in 3D space that are (approximately) consistent with the surfaces implied by the locations of occupied integer voxels, and attributes are interpolated only at these newly created fractional voxels. We say that two integer voxels with coordinates $v_j$ and $v_k$ are neighbors if their distance is below a threshold $\rho$. Then, a fractional voxel is created only between neighbors $v_j$ and $v_k$ (assumed to be close enough so that they are likely to belong to the same surface) and the corresponding interpolated color attribute is computed as: \begin{equation} \begin{split} & C(v_i) = \frac{1}{2} \left( C(v_j) + C(v_k) \right), \\ & \text{with } \; L(v_i) = \frac{1}{2}(L(v_j)+L(v_k)) \;\; \text{and} \;\; \text{dist}(v_j,v_k) \leq \rho, v_j,v_k\in V_i, \label{equ:identification} \end{split} \end{equation} where $v_i$ is a voxel in the fractional-voxel set $V_f$ with color signal $C(v_i)$, $v_j$ and $v_k$ are voxels in the integer-voxel set $V_i$ with color signals $C(v_j)$ and $C(v_k)$, respectively. $L(\cdot)$ represents the coordinates of the voxel. $\rho$ is the distance threshold and $\text{dist}(v_j,v_k)$ measures the Euclidean distance between the coordinates of $v_j$ and $v_k$. Note that different pairs of integer voxels may produce the same fractional voxel. Thus, to remove repeated fractional voxels after interpolation, attributes that belong to the same fractional voxel and are obtained by interpolation from different pairs of neighboring voxels are merged by averaging. Fig.~\ref{fig:identification} shows several examples of possible fractional-voxel locations, where we can see that interpolation based on neighboring integer voxels tends to favor increasing the voxel resolution on the (implicit) surface where the voxels are located. \begin{figure}[htb] \begin{subfigure}[t]{0.42\textwidth} \includegraphics[width = 1\linewidth]{./figures/halfvoxel.png} \caption{} \label{fig:halfvoxel} \end{subfigure} \begin{subfigure}[t]{0.53\textwidth} \includegraphics[width = 1\textwidth]{./figures/identification_v2_cropperd_eduardo.png} \caption{} \label{fig:identification} \end{subfigure} \caption{Integer and fractional voxels. Figure \ref{fig:halfvoxel} depicts all possible candidate integer and 1/2-voxel positions. Figure \ref{fig:identification} shows three examples of occupied integer voxel positions with corresponding fractional voxels obtained from neighboring integer voxels. Note that these interpolated fractional voxels are more likely to belong to the same surface as the neighboring integer voxels they were obtained from.} \end{figure} \subsection{ME with fractional-voxel accuracy} Due to the inconsistency of voxel distributions in consecutive frames, it is difficult to establish exact one-to-one correspondences between the voxels in two matching blocks. To generalize MC prediction for fractional motion in 3D space, we start by super-resolving the reference frame as described in Section~\ref{sec:OFV}. As we can see from Fig.~\ref{fig:superresolution}, the continuity among voxels and their corresponding attributes is significantly increased along the underlying surfaces, which provides better predictors when high-resolution motion is available. The low-pass filtering used for interpolation also contributes to attribute noise removal.
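A minimal sketch of this interpolation and merging step is given below (NumPy/SciPy assumed; the function name and array layout are our own choices, not the paper's implementation, and the default $\rho=\sqrt{3}$ matches the setting used in the experiments of Section~\ref{sec:experiments}):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def super_resolve(coords, colors, rho=np.sqrt(3)):
    # Create a 1/2-voxel at the midpoint of every pair of occupied
    # integer voxels closer than rho, averaging their colors as in
    # the equation above.
    pairs = np.array(sorted(cKDTree(coords).query_pairs(r=rho)))
    mids = 0.5 * (coords[pairs[:, 0]] + coords[pairs[:, 1]])
    cols = 0.5 * (colors[pairs[:, 0]] + colors[pairs[:, 1]])
    # Different pairs may produce the same fractional voxel: merge
    # repeated locations by averaging their interpolated attributes.
    keys, inv = np.unique(np.round(2 * mids).astype(int), axis=0,
                          return_inverse=True)
    merged = np.zeros((len(keys), cols.shape[1]))
    np.add.at(merged, inv, cols)
    merged /= np.bincount(inv)[:, None]
    return np.vstack([coords, keys / 2.0]), np.vstack([colors, merged])
\end{verbatim}
The super-resolved reference block is then the union of the occupied integer voxels and these merged fractional voxels.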
\begin{figure*}[h] \centering \includegraphics[width=1\textwidth]{./figures/SR.png} \caption{Comparison between the original and super-resolved reference block.} \label{fig:superresolution} \end{figure*} Next, we estimate MVs in fractional precision for MC. The entire ME process is a coarse-to-fine procedure, including IvME and FvME. Each estimated MV is obtained as the sum of an IvMV and an FvMV displacement. Assuming the IvMV $MV_i$ is given, the optimal FvMV $MV^{opt}_f$ is selected from a set of candidate fractional displacements. Since we super-resolve the reference frame in 1/2-voxel precision, each coordinate of a fractional displacement $MV_f$ can take values in $\lbrace -\frac{1}{2}, 0, \frac{1}{2} \rbrace$, resulting in $27$ possible displacements. For a given fractional displacement $MV_f$, we predict each attribute in the current block from its nearest voxel in the translated super-resolved reference block, as depicted in Fig.~\ref{fig:correspondence}. Then the displacement with the smallest prediction error is chosen, that is, \begin{equation} \begin{split} MV^{opt}_f & = \arg\min_{MV_f} \sum_{v_i\in V(B_p)}{E_{pred}(C(v_i),C(v_{j'}))}, \\ ~\text{s.t. } \; & j' = \arg\min_{j} (\text{dist}(v_i,v_j)),v_j\in V(B_{rMC}^s), \\ & L_b(B_{rMC}^s) = L_b(B_{r}^s) + MV,\\ & MV = MV_f + MV_i, \label{equ:FvME} \end{split} \end{equation} where $B_r^s$ and $B_{rMC}^s$ represent the super-resolved reference block before and after translation with $MV$, respectively, $v_i$ and $v_j$ are voxels with color signals $C(v_i)$ and $C(v_j)$ in blocks $B_p$ and $B_{rMC}^s$, respectively. $E_{pred}(\cdot,\cdot)$ is the function for measuring the prediction error. $\text{L}_b(\cdot)$ represents the coordinates of the block while $\text{V}(\cdot)$ represents the set of voxels within the block. \begin{figure}[htb] \begin{center} \includegraphics[width = 1\linewidth]{./figures/MC2.png} \caption{Motion-compensated prediction.} \label{fig:correspondence} \end{center} \end{figure} \subsection{MC prediction with fractional-voxel accuracy} Finally, we apply MC prediction using the obtained MVs in fractional precision. Specifically, once the voxels in the reference block are translated using the integer motion vector $MV_i$, they are further shifted by the obtained optimal fractional displacement $MV^{opt}_f$, as shown in \eqref{equ:FvME}. Then, temporal correspondences are established from voxels in the predicted block $B_p$ to their nearest neighbors in the translated super-resolved reference block $B_{rMC}^s$ for motion-compensated prediction. The attribute of each voxel in the predicted block is predicted by copying the attribute of its corresponding voxel in the reference frame, that is, \begin{equation} \begin{split} \forall v_i \in B_p \text{ , } C(v_i) = C(v_{j'}) & ~\text{s.t. } j' = \arg\min_{j} (\text{dist}(v_i,v_j)),v_j\in B_{rMC}^s. \label{equ:directcopy} \end{split} \end{equation} \section{Experiments} \label{sec:experiments} \subsection{Dataset} In this section, we evaluate the proposed FvME scheme for compression of color attributes of the dataset of \cite{8idataset}, which consists of four sequences: \textit{longdress}, \textit{redandblack}, \textit{loot}, and \textit{soldier}. Each sequence contains 300 frames. Note that we assume IvMVs are given and are used to estimate FvMVs. Since IvMVs derived using different algorithms may lead to different FvMVs with disparate coding performance, we start from the publicly available 3D motion vector database \cite{IMVdataset}.
The IvMVs in \cite{IMVdataset} are selected to minimize a hybrid distance metric, $\delta = \delta_g + 0.35\delta_c$, which combines $\delta_g$, the average Euclidean distance between the voxels, and $\delta_c$, the average color distance in the Y channel\footnote{Note that the resulting IvMVs aim to select matching blocks with similar geometry ($\delta_g$) and color attributes ($\delta_c$), but there is no guarantee that this metric, and in particular the relative weight between the distances ($0.35$), is the optimal choice in terms of coding efficiency. Thus, as will be shown in our motion compensation experiments, these IvMVs can sometimes lead to performance below that of encoding methods that do not use motion compensation. In these cases, performance can be improved by local refinement of the IvMVs from the database.}. We only consider motion for $16\times16\times16$-sized blocks. We implement a conventional inter-coding system where previously decoded frames are used as references. \subsection{Experimental Settings} Following the MPEG Call for Proposal (CfP) for point cloud compression \cite{CTC2017}, we evaluate the proposed block-based FvME scheme (Proposed FvME) in groups of 32 frames, with the first frame coded in intra mode, and the rest coded using inter prediction. The threshold distance between integer voxels for interpolating fractional voxels is set to $\rho=\sqrt{3}$ in \eqref{equ:identification}. Colors are transformed from RGB to YUV space, and each of the Y, U, and V channels is processed independently. When searching for the best candidate FvMV in \eqref{equ:FvME}, we use the squared distance to measure prediction errors. All blocks in the intra-coded frames undergo the region adaptive graph Fourier transform (RA-GFT) \cite{RAGFT} while, in the inter-coded frames, all blocks are motion-compensated. After MC prediction, the residues are transformed using the graph Fourier transform (GFT) \cite{zhang2014point}. To compute the GFT, a threshold graph is built inside every block wherein voxels are connected to each other if their Euclidean distance is less than or equal to $\sqrt{3}$. If, after thresholding, the resulting graph for a block is not connected, a complete graph is built instead, which results in the transformed coefficients consisting of a single approximation coefficient (DC) and multiple detail coefficients (ACs) for each block. The DC coefficients of all blocks are concatenated and encoded together. Then, the AC coefficients are coded block by block. This approach is equivalent to a single-level RA-GFT \cite{RAGFT}. For all transforms, we perform uniform quantization and entropy code the coefficients using the adaptive run-length Golomb-Rice algorithm (RLGR) \cite{RLGR}. As for FvMV overhead, since there are 27 possible FvMVs, 8 bits are used to signal each FvMV. For IvMVs, we use 4 bits to signal the magnitude and 1 bit to signal the sign for each axis; therefore, 15 bits in total are used to represent an IvMV. The overheads of FvMVs and IvMVs are entropy coded by the Lempel--Ziv--Markov chain algorithm \cite{LZMA}. We considered the following baselines: IvME using the database motion (DM) for MC prediction, and IvME using DM with additional integer local refinement (DM+RF) for MC prediction. The local refinement uses a different criterion that aims to minimize color errors only, instead of the hybrid errors used in the database. DM is refined by an additional local search in integer precision to improve its matching accuracy over the original MVs.
The local refinement range for each axis is set to $[-1,1]$, which entirely encloses the fractional positions searched in the proposed FvME scheme. To evaluate the benefits of high-resolution references and FvMVs, we evaluate two inter-coding schemes that use super-resolved reference blocks, with and without fractional motion vectors for motion-compensated prediction. First, to evaluate the super-resolution method, we implement a scheme that performs IvME using the integer locally refined DM and super-resolved reference blocks for prediction, which is denoted by ``DM+RF+SR''. The difference between DM+RF and DM+RF+SR is the resolution of the reference block. Then, to evaluate the benefits of FvMVs, we implement a scheme that uses fractional resolution in both reference blocks and motion vectors, which is denoted by ``proposed FvME''. For a fair comparison between inter-coding schemes, all other test conditions are the same. Additionally, to make our performance evaluation more complete, we include two state-of-the-art (all-intra) anchor solutions, namely, RA-GFT \cite{RAGFT} and the region adaptive Haar transform (RAHT) \cite{raht}. For RA-GFT, block size 16 is used. The residues are entropy coded by RLGR. \subsection{Evaluation Metrics} The evaluation metrics are the number of bits per voxel (bpv) and the average peak signal-to-noise ratio of the Y component (PSNR-Y), \begin{equation} PSNR_Y = -10 \log_{10} \left (\frac{1}{T} \sum_{t = 1}^T \frac{ \| Y_t - \hat{Y_t} \|_2^2 }{ 255^2 N_t} \right), bpv = \frac{\sum_{t = 1}^T b_t}{\sum_{t = 1}^T N_t}, \label{equ:bpv} \end{equation} where $Y_t$ and $\hat{Y_t}$ represent the original and reconstructed signals on the same voxels of the $t$-th frame, respectively, $T$ is the total number of frames, $b_t$ is the number of bits required to encode the YUV components of the $t$-th frame, including IvMV and FvMV overhead when necessary, and $N_t$ is the total number of occupied voxels in the $t$-th frame. The Bjontegaard-Delta \cite{BDBR} results for bitrate (BD-rate) are also reported. \subsection{Experimental Results and Analysis} Rate distortion (RD) curves are shown in Fig.~\ref{fig:RDcurves}. We first note that using only the original IvME from the database \cite{IMVdataset} results in sub-optimal performance compared to RAHT and RA-GFT. This is in part due to the criterion used in \cite{IMVdataset} to choose the optimal MV based on geometry and color information. After local refinement with integer precision, the performance of IvME (DM+RF) improves significantly with respect to IvME (DM) but it is still far from being competitive with other techniques. Further improvements have been shown to be achievable by using per-block intra/inter mode decision \cite{Souto2020}.
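For clarity, the two sequence-level metrics in \eqref{equ:bpv} can be computed as in the following sketch (NumPy assumed; all names are our own illustrative choices):
\begin{verbatim}
import numpy as np

def psnr_y_and_bpv(Y_true, Y_rec, bits):
    # Y_true / Y_rec: per-frame luma values on the occupied voxels;
    # bits: per-frame bit counts, including any MV overhead.
    terms = [np.sum((yt.astype(float) - yr.astype(float)) ** 2)
             / (255.0 ** 2 * yt.size)
             for yt, yr in zip(Y_true, Y_rec)]
    psnr_y = -10.0 * np.log10(np.mean(terms))  # average PSNR-Y
    bpv = sum(bits) / sum(yt.size for yt in Y_true)
    return psnr_y, bpv
\end{verbatim}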
\begin{figure}[htb] \begin{center} \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width = \linewidth]{./figures/longdress_extra.png} \vspace{-6mm} \caption{longdress} \end{subfigure} \hspace{-6mm} \vspace{-1mm} \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width = \linewidth]{./figures/soldier_extra.png} \vspace{-6mm} \caption{soldier} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width = \linewidth]{./figures/redandblack_extra.png} \vspace{-6mm} \caption{redandblack} \end{subfigure} \hspace{-6mm} \vspace{-1mm} \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width = \linewidth]{./figures/loot_extra.png} \vspace{-6mm} \caption{loot} \end{subfigure} \end{center} \vspace{-5mm} \caption{Rate distortion curves of 8iVFBv2 sequences.} \label{fig:RDcurves} \vspace{-2mm} \end{figure} \begin{table}[H] \centering \resizebox{0.8\linewidth}{!}{ \begin{tabular}{ |p{3.5cm} |p{1.5cm}|p{1.5cm} |p{1.5cm}|p{1.5cm} |} \hline \textbf{\scriptsize anchors $\backslash$ sequences} & \textit{\scriptsize longdress} & \textit{\scriptsize soldier} & \textit{\scriptsize redandblack} & \textit{\scriptsize loot} \\ \hline { \scriptsize {IvME(DM+RF)} } & { \scriptsize $-43.93\%$} & { \scriptsize $-63.96\%$} & { \scriptsize $-57.98\%$} & { \scriptsize $-64.43\%$}\\ \hline { \scriptsize {RAHT} } & { \scriptsize $-39.94\%$} & { \scriptsize $-81.91\%$} & { \scriptsize $-51.10\%$} & { \scriptsize $-73.69\%$}\\ \hline { \scriptsize {RA-GFT(b=16)} } & { \scriptsize $-16.19\%$} & { \scriptsize $-72.46\%$} & { \scriptsize $-24.37\%$} & { \scriptsize $-61.44\%$}\\ \hline \end{tabular}} \caption{BD-rate performance of the proposed scheme relative to the baselines} \label{tab:BDBR} \end{table} After the reference blocks are super-resolved, the performance of the proposed IvME (DM+RF+SR) is further improved with respect to DM+RF, even without increasing MV resolution. The DM+RF+SR scheme can be better than the intra schemes in some cases, with the advantage of lower complexity than the proposed FvME. Finally, after we increase MV resolution to 1/2-voxel, further coding gains are obtained, outperforming the intra-coding baselines, RA-GFT and RAHT, with average gains of $2.8$~dB and $4.6$~dB, respectively. The method is always better than DM+RF+SR but at the cost of higher complexity due to the additional motion search. The results show that both interpolated fractional voxels and high-resolution MVs lead to higher coding gains and outperform both inter coding with IvME and non-predictive transform-based schemes. Table~\ref{tab:BDBR} summarizes the performance of the proposed method over IvME (DM+RF), RAHT, and RA-GFT in terms of BD-rate. The proposed FvME can achieve $57\%$ average bitrate reduction over IvME (DM+RF). Compared with the prior art, the proposed scheme can achieve $61\%$ and $43\%$ bitrate reduction on average over RAHT and RA-GFT, respectively. Compared with IvME (DM+RF), the proposed FvME scheme increases the number of voxels at most eightfold (because of super-resolution) and requires evaluating 27 additional fractional displacements for ME. Therefore, the complexity of the proposed FvME is larger than the complexity of IvME (DM+RF) by a constant factor (independent of the point cloud size). \section{Conclusions} \label{sec:conclusion} This paper describes a fractional-voxel motion estimation scheme tailored for attribute compression in 3D dynamic point clouds.
Our scheme defines and identifies the fractional voxels to be interpolated and provides a motion-compensated prediction method based on super-resolution and temporal correspondence. Extensive experiments show superior performance over the prior art. Our work reveals the benefits of high-resolution references and the further improvements from fractional-voxel motion vectors for dynamic point cloud color compression. \section{References} \bibliographystyle{IEEEbib}
{ "timestamp": "2022-02-02T02:08:32", "yymm": "2202", "arxiv_id": "2202.00172", "language": "en", "url": "https://arxiv.org/abs/2202.00172" }
\section{Introduction} \label{sec:intro} Musical instrument sound synthesizers based on deep neural networks (DNNs) have been actively studied \cite{donahue2018adversarial,newt,htp,drumgan,u_net, crash}. Such synthesizers can generate high-quality musical instrument sounds and also allow us to edit the sounds by appropriately changing their inputs and parameters. Recently, an approach called differentiable digital signal processing (DDSP) has gathered attention \cite{ddsp_ref}. This approach utilizes classical signal processing components for a DNN-based sound synthesizer and enables us to train the synthesizer in an end-to-end manner. The DDSP autoencoder is one of the state-of-the-art DNN-based synthesizers categorized in this approach \cite{ddsp_ref}. It reconstructs an input audio signal by a classical signal processing technique called spectral modeling synthesis (SMS) \cite{sms}, which separately models harmonic and inharmonic parts of the signal. The control signals of the SMS are computed by a DNN. As a latent representation, this DNN transforms the input into three interpretable parameters corresponding to pitch, timbre, and loudness: fundamental frequency ($F_0$), timbre feature, and loudness. We call these parameters the synthesis parameters. By appropriately changing the synthesis parameters, we can flexibly edit the pitch, timbre, and loudness of the input signal. However, the DDSP autoencoder cannot be applied directly to a mixture of harmonic sounds because it is designed only for a monophonic harmonic signal. One straightforward method to solve this problem is to separate the mixture into individual sources and apply the DDSP autoencoder to each of them. Despite the recent progress of DNN-based audio source separation methods \cite{bunrireport}, it is difficult to always obtain separated signals indistinguishable from the clean ones. Since the DDSP autoencoder is trained with clean musical instrument audio signals, the artifacts and interferer signals included in the separated signals can degrade the performance of the DDSP autoencoder. In fact, the separated signals obtained with a state-of-the-art score-informed source separation method partly included interferer signals, and the signals reconstructed by the DDSP autoencoder were considerably different from the target signals, as we will show later in section \ref{sec:eval}. Furthermore, in practice, we often need to edit mixtures of sounds made by the same instruments. Although the separation of such mixtures has been studied recently \cite{A1,A2}, it is more difficult than the separation of sounds made by different instruments. In this paper, we propose a method for directly estimating the synthesis parameters of the individual sources from a mixture audio signal. We take not the \textit{separation-and-analysis} approach described above but an \textit{analysis-by-synthesis} approach. That is, we construct a model that describes a generative process of the mixture, and we estimate the synthesis parameters by fitting the mixture generated with the model to an observed mixture. By removing the part of the DDSP autoencoder that extracts the synthesis parameters from the input, we can use the remaining part to synthesize a source signal from its synthesis parameters. The proposed model represents the mixture as the sum of the outputs of the source synthesizers driven with their own synthesis parameters. We call this model the \textit{DDSP mixture model}.
Using the pretrained source synthesizers, we fit the output of the proposed model to the observed mixture by a gradient descent algorithm. Owing to the interpretability of the synthesis parameters, we can use musical score information for the initialization of the synthesis parameters. Recent source separation literature has shown that the use of score information improves the separation performance \cite{gakuhukouka1, gakuhukouka2, gakuhukouka3,simsd}, and the same may hold for our problem. We experimentally examine the effect of the score-based initialization of the synthesis parameters. \section{RELATED WORK} \label{sec:relatedwork} \subsection{DDSP Autoencoder} \label{sec:ddsp} The DDSP autoencoder consists of an encoder, a decoder, and an SMS module. Fig. \ref{fig:ddsp} shows a schematic illustration of the architecture of the DDSP autoencoder. The encoder extracts the synthesis parameters of $T$ frames from an input signal $\bm{x}\in \mathbb{R}^{N}$ with a length of $N$. Let $t=1,\ldots,T$ be the frame index. The $F_0$ at frame $t$, denoted by $f_t\geq 0$, is computed by a pretrained CREPE model \cite{crepe}, which is one of the state-of-the-art $F_0$ estimators. The timbre feature of size $D$, $\bm{z}_t\in\mathbb{R}^D$, is calculated by a timbre encoder, which computes mel-frequency cepstral coefficients (MFCCs) from the input signal and feeds them into a DNN. The loudness $l_t\in\mathbb{R}$ is computed by applying A-weighting to the power spectrum of the input signal and taking its logarithm. The decoder is a DNN that transforms the synthesis parameters into the control signals of the SMS module in frames. See \cite{ddsp_ref} for the detailed architecture of the decoder and timbre encoder. The SMS module separately generates harmonic and inharmonic signals and adds them together. The harmonic signal is generated as the sum of sinusoids with piecewise linear frequencies and amplitudes. These frequencies are computed by linearly interpolating $f_{t}$ and its harmonics up to the signal time resolution. The amplitudes are the linearly interpolated versions of framewise amplitudes outputted by the decoder. To generate the inharmonic signal, the decoder outputs the magnitude frequency responses of a time-varying finite impulse response filter in frames. We apply a Hann window to the discrete Fourier transforms of these responses and convolve them with a white noise signal in the frequency domain. A reverb module implemented by a convolutional layer is optionally applied to the sum of the harmonic and inharmonic signals and outputs a synthesized signal $\bm{\hat{x}}\in\mathbb{R}^{N}$. The timbre encoder, decoder, and reverb module are trained so that the multiscale spectral loss between $\bm{x}$ and $\bm{\hat{x}}$ is minimized \cite{ddsp_ref}. This loss uses short-time Fourier transforms (STFTs) of the two inputs with frames of $I$ different lengths. It is defined as \begin{align} L(\bm{x},\bm{\hat{x}}) &= \sum_{i=1}^{I}L_i(\bm{x},\bm{\hat{x}}), \label{eq:mss_loss}\\ L_i(\bm{x},\bm{\hat{x}}) &= \lVert \mathcal{F}_i\bm{x} - \mathcal{F}_i\bm{\hat{x}}\rVert_1 + \lVert \log \mathcal{F}_i\bm{x} - \log \mathcal{F}_i \bm{\hat{x}}\rVert_1, \end{align} where $\mathcal{F}_i$ returns the magnitude STFT of the signal with the $i$th frame length. \begin{figure}[t] \centering \includegraphics[scale=0.29]{ddsp.pdf} \caption{Architecture of the DDSP autoencoder.
Red blocks are DNNs to be trained, and the ``CREPE'' block is a pretrained DNN.} \vspace{-9pt} \label{fig:ddsp} \end{figure} \subsection{Audio Source Separation} Most conventional music audio editing systems use audio source separation methods as preprocessors to extract the target sources from a mixture \cite{timbre_replacement, equalizer, changing_timbre, time_domain}. This approach can be applied to our problem. The recent literature has shown that the use of musical scores enhances the separation performance \cite{gakuhukouka1,gakuhukouka2,gakuhukouka3,simsd}. The method using nonnegative matrix factorization (NMF) presented in \cite{simsd} is one of the state-of-the-art score-informed source separation methods. This method trains NMF bases with isolated instrument sounds in advance and separates the input mixture while aligning the performance and score information. Although the DNN-based methods show superior performance in the usual supervised source separation setting \cite{bunrireport}, in the score-informed setting, this NMF-based method works better than DNN-based methods \cite{simsd}. The audio source separation method presented in \cite{gp} uses pretrained instrument sound synthesizers based on generative adversarial networks (GANs). The GANs convert random vectors into audio signals. These vectors are thus difficult to interpret, and it is difficult to introduce prior musical knowledge into the inputs. Furthermore, GANs are usually unstable during training, which requires painstaking hyperparameter exploration \cite{gan_overview}. \section{PROPOSED METHOD} \subsection{Motivation and Strategy}\label{sec:motivation} \begin{figure*}[t] \centering \includegraphics[scale=0.52]{proposed.pdf} \caption{DDSP mixture model with $R$ sources.} \vspace{-9pt} \label{proposed} \end{figure*} One straightforward approach to using the DDSP autoencoder for polyphonic audio signals is to decompose the mixture into the source signals and apply the DDSP autoencoder to them. Since the DDSP autoencoder is trained with only clean instrument sounds, its synthesis performance is strongly affected by the artifacts and interferer signals included in the separated signals. Although the introduction of DNNs has rapidly increased the performance of source separation methods \cite{bunrireport}, the separated signals obtained even with the latest methods often include artifacts and interferer signals, and sometimes lack part of the target source signals. These separation failures lead to performance degradation of the DDSP autoencoder, as we will show later in section \ref{sec:eval}. To avoid this problem, we take an approach in which the synthesis parameters of the sources are directly extracted from the mixture. We focus on the fact that the part subsequent to the encoder of the DDSP autoencoder can be seen as a source audio synthesizer using the synthesis parameters (see Fig.~\ref{fig:ddsp}). We call it the source synthesizer. Using multiple source synthesizers, we construct a generative model of the harmonic sound mixture as shown in Fig.~\ref{proposed}. We also formulate the synthesis parameter extraction problem as an inverse problem using the proposed model. \subsection{DDSP Mixture Model}\label{sec:algo} The proposed DDSP mixture model represents the mixture audio signal of $R$ harmonic sources as the sum of the outputs of the source synthesizers driven with source-specific synthesis parameters. Let $r=1,\ldots,R$ denote the source index and $h_r$ represent the source synthesizer of source $r$.
To distinguish the synthesis parameters and synthesized signals of each source, we hereafter add a subscript $r$ to $f_t,\bm{z}_t,l_t,$ and $\bm{\hat{x}}$. Fig. \ref{proposed} shows the architecture of the DDSP mixture model. The synthesis parameters of source $r$, $\{f_{r,t},\bm{z}_{r,t},l_{r,t}\}_{t=1}^T$, are fed into $h_r$, and the synthesized signal of source $r$, $\bm{\hat{x}}_r$, is generated. Adding all the synthesized source signals yields the synthesized mixture signal $\bm{\hat{y}}\in\mathbb{R}^{N}$: \begin{align} \bm{\hat{y}} &= \sum_{r=1}^R\bm{\hat{x}}_r \label{eq:ddsp_mixture}, \\ \bm{\hat{x}}_r &= h_r(\{f_{r,t},\bm{z}_{r,t},l_{r,t}\}_{t=1}^T). \label{eq:ddsp_r} \end{align} Note that although all $h_r$ are separately depicted in Fig. \ref{proposed}, we can use the same pretrained source synthesizer for all sources when the DDSP autoencoder is trained with multiple instrument sounds. The DDSP mixture model describes the forward process of the generation of the harmonic sound mixture. Thus, the synthesis parameter extraction problem amounts to the problem of finding the synthesis parameters of the sources so that they minimize the loss between the output of the DDSP mixture model $\bm{\hat{y}}$ and the observed mixture $\bm{y}\in\mathbb{R}^N$. As a loss function, we can use the multiscale spectral loss defined in \eqref{eq:mss_loss}. In summary, the problem of interest is formulated as \begin{align} \min_{\{f_{r,t},\bm{z}_{r,t},l_{r,t}\}_{r=1,t=1}^{R,T}} & L(\bm{y},\bm{\hat{y}}). \label{eq:formulation} \end{align} Since $h_r$ and this loss are differentiable, we can use a gradient descent algorithm for this problem. Note that all $h_r$ are trained in advance and fixed during this minimization. To distinguish the DDSP pretraining and this step, we call the latter the fitting step. \subsection{Initialization of $F_0$ and Loudness Using Score Information}\label{sec:param_init} Owing to the recent development of automatic music transcription \cite{B1,B2,B3}, accurate score information can be extracted from polyphonic music signals. Since the proposed method uses the interpretable synthesis parameters, we can utilize the available score information for the initialization of $f_{r,t}$ and $l_{r,t}$. For simplicity, the score information is given in a musical instrument digital interface (MIDI) format and is assumed to be aligned in time with the input mixture. Let $p_{r,t}=-1,0,\ldots,127$ denote the MIDI note number of source $r$ at time $t$, where $p_{r,t}=-1$ means that there are no played notes at that time, i.e., silence. By converting $p_{r,t}$ into the corresponding frequency, we can initialize $f_{r,t}$ as \begin{equation} f_{r,t} = \begin{cases} 440\times 2^{(p_{r,t}-69)/12} & (p_{r,t}\geq0) \\ 440\times 2^{(p^{(\text{sil})}_r-69)/12} & (p_{r,t}=-1), \end{cases} \end{equation} where $p_r^{\text{(sil)}}$ is the time average of nonnegative $p_{r,t}$s. The loudnesses $l_{r,t}$ are initialized with $l^{(\text{high})}$ for the active notes and $l^{(\text{low})}$ for the silences. \begin{table*}[tb] \caption{Averages and standard errors of MAEs in $F_0$, MFCC, and loudness obtained with separation-based and proposed methods} \label{tab: result} \hbox to\hsize{\hfil \begin{tabular}{ccc|c|ccc}\noalign{\hrule height 1.4pt} Label & Instruments & Total dur. [s]& Method & $F_0$ [cent] & MFCC & Loudness $[$dB$]$\\ \hline \multirow{3}{*}{Va./\rule{0.2cm}{0.15mm}.} & & &SISS+DDSP & $\mathbf{125\pm{35}}$ & $3.52\pm{0.12}$& $\mathbf{10.21\pm{1.75}}$\\ & Va./Db. (Mahler), Va./Fl.
(Mahler) & 240 &SISS+Proposed & $129\pm{34}$ & $2.84\pm{0.13}$ & $12.06\pm{1.89}$ \\ & & &SI-Proposed & $217\pm{48}$ & $\mathbf{2.79\pm{0.13}}$ & $10.86\pm{0.63}$\\\hline \multirow{3}{*}{Fl./\rule{0.2cm}{0.15mm}.} & & &SISS+DDSP & $130\pm{45}$ & $2.04\pm{0.27}$& $33.43\pm{2.92}$\\ & Fl./Bn. (Mozart), Fl./Va. (Mahler) & 300 &SISS+Proposed & $132\pm{45}$ & $\mathbf{2.01\pm{0.18}}$ & $34.31\pm{3.28}$ \\ & & &SI-Proposed & $\mathbf{90\pm{18}}$ & $2.25\pm{0.17}$ & $\mathbf{10.54\pm{0.59}}$\\\hline \multirow{3}{*}{Db./\rule{0.2cm}{0.15mm}.} & & &SISS+DDSP & $1588\pm{1286}$ & $2.88\pm{0.15}$& $16.33\pm{1.96}$\\ & Db./Vc. (Beethoven), Db./Va. (Mahler) & 300 &SISS+Proposed & $902\pm{682}$ & $\mathbf{2.03\pm{0.15}}$ & $17.52\pm{2.61}$\\ & & &SI-Proposed & $\mathbf{164\pm{56}}$ & $2.05\pm{0.15}$& $\mathbf{10.41\pm{0.61}}$\\\hline \multirow{3}{*}{Vc./\rule{0.2cm}{0.15mm}.} & & &SISS+DDSP & $1572\pm{983}$ & $3.29\pm{0.09}$& $10.49\pm{2.04}$\\ & Vc./Db. (Beethoven), Vc./Bn. (Mozart) & 260 &SISS+Proposed & $1178\pm{777}$ & $2.32\pm{0.13}$ & $10.29\pm{2.29}$\\ & & &SI-Proposed & $\mathbf{111\pm{24}}$ & $\mathbf{2.21\pm{0.11}}$ & $\mathbf{8.09\pm{0.26}}$\\\hline \multirow{3}{*}{Bn./\rule{0.2cm}{0.15mm}.} & & &SISS+DDSP & $1043\pm{506}$ & $2.45\pm{0.11}$& $24.24\pm{1.65}$\\ & Bn./Fl. (Mozart), Bn./Vc. (Mozart) & 260 &SISS+Proposed & $911\pm{490}$ & $\mathbf{1.90\pm{0.13}}$ & $31.39\pm{1.94}$\\ & & &SI-Proposed & $\mathbf{113\pm{10}}$ & $2.25\pm{0.15}$ & $\mathbf{10.47\pm{0.59}}$\\\noalign{\hrule height 1.0pt} \multirow{3}{*}{Va./Va.} & & &SISS+DDSP & $\mathbf{140\pm{72}}$ & $3.83\pm{0.13}$& $13.38\pm{1.16}$\\ & Va. (Mahler)/Va. (Mozart) & 120 &SISS+Proposed & $146\pm{71}$ & $3.03\pm{0.12}$ & $13.16\pm{1.65}$ \\ & & &SI-Proposed & $155\pm{38}$ & $\mathbf{2.89\pm{0.11}}$ & $\mathbf{11.84\pm{0.62}}$\\\hline \multirow{3}{*}{Fl./Fl.} & & &SISS+DDSP & $129\pm{68}$ & $1.72\pm{0.27}$& $31.32\pm{4.05}$\\ & Fl. (Mahler)/Fl. (Mozart) & 120 &SISS+Proposed & $116\pm{57}$ & $1.62\pm{0.21}$ & $33.84\pm{4.33}$\\ & & &SI-Proposed & $\mathbf{90\pm{26}}$ & $\mathbf{1.44\pm{0.23}}$ & $\mathbf{9.13\pm{0.52}}$\\\hline \multirow{3}{*}{Db./Db.} & & &SISS+DDSP & $844\pm{754}$ & $2.33\pm{0.17}$ & $15.74\pm{2.86}$\\ & Db. (Beethoven)/Db. (Mahler) & 120 &SISS+Proposed & $877\pm{752}$ & $\mathbf{1.91\pm{0.13}}$ & $20.23\pm{3.32}$ \\ & & & SI-Proposed & $\mathbf{195\pm{68}}$ & $1.97\pm{0.13}$ & $\mathbf{10.51\pm{0.81}}$\\\hline \multirow{3}{*}{Vc./Vc.} & & &SISS+DDSP & $3438\pm{1851}$ & $2.79\pm{0.18}$ & $20.05\pm{3.21}$\\ & Vc. (Beethoven)/Vc. (Mahler) & 120 &SISS+Proposed & $2775\pm{1737}$ & $\mathbf{2.27\pm{0.14}}$ & $24.35\pm{3.85}$\\ & & &SI-Proposed & $\mathbf{347\pm{144}}$ & $2.32\pm{0.16}$ & $\mathbf{12.06\pm{0.91}}$\\\hline \multirow{3}{*}{Bn./Bn.} & & &SISS+DDSP & $1139\pm{527}$ & $3.36\pm{0.15}$ & $9.75\pm{2.02}$\\ & Bn./Bn. (Beethoven) & 180 &SISS+Proposed & $1151\pm{528}$ & $\mathbf{2.38\pm{0.15}}$ & $10.87\pm{2.16}$\\ & & &SI-Proposed & $\mathbf{567\pm{248}}$ & $2.39\pm{0.16}$ & $\mathbf{8.54\pm{0.32}}$\\\noalign{\hrule height 1.4pt} \end{tabular}\hfil} \vspace{-10pt} \end{table*} \section{EXPERIMENTAL EVALUATION}\label{sec:eval} \subsection{Experimental Conditions} To evaluate the effectiveness of the proposed method, we conducted synthesis parameter extraction experiments on mixtures of two harmonic sources. We created test data using the PHENICX-Anechoic dataset \cite{gakuhukouka3,testdata2}. This dataset includes separate audio recordings of the individual instruments of several classical works: Symphony no. 1, fourth movement by G.
Mahler (Mahler), an aria of Donna Elvira from the opera Don Giovanni by W. A. Mozart (Mozart), and Symphony no. 7, first movement by L. van Beethoven (Beethoven). It also includes time-aligned MIDI data. The test data consisted of mixtures of two different instruments or two of the same instrument. We used audio signals played with the viola (Va.), flute (Fl.), double bass (Db.), cello (Vc.), and bassoon (Bn.) and downsampled all audio signals to $16$ kHz. We divided each mixture into $12$-s segments from the beginning to the end and applied synthesis parameter extraction methods to them. The segments shorter than $12$ s were not used for the evaluation. We compared the following three methods. \noindent\textbf{SISS+DDSP:} We separated the mixtures by the state-of-the-art score-informed source separation method (SISS) presented in \cite{simsd} and applied the DDSP autoencoder to the separated signals. In our preliminary experiment, we found that this method provided higher separation performance than the recent DNN-based score-informed source separation method presented in \cite{si_dnn}. We used the official implementation of SISS available at \url{https://github.com/AntonioJMM/OISS_Minus-One.github.io} and the same hyperparameters as those used in \cite{simsd}. \noindent\textbf{SISS+Proposed:} We initialized $f_{r,t}$ and $l_{r,t}$ with those obtained with SISS+DDSP and ran the proposed method. The initial values of $\bm{z}_{r,t}$ were drawn from a standard normal distribution. We used the Adam optimizer and set its learning rate to $0.1$, which was decreased to $0.01$ at the $1000$th iteration and $0.001$ at the $2000$th iteration. For the multiscale spectral loss, we used Hann windows of $8, 16, 32, 64, 128,$ and $256$-ms lengths and hop sizes of half of the corresponding frame lengths. \noindent\textbf{SI-Proposed:} This model is a score-informed (SI) version of the proposed method. We initialized $f_{r,t}$ and $l_{r,t}$ with the score information described in section \ref{sec:param_init} and ran the proposed method. We experimentally determined that $l^{\text{(high)}}=-6$ and $l^{\text{(low)}}=-10$. The other conditions were the same as those of SISS+Proposed. These methods used the same pretrained DDSP autoencoder. It was trained using the University of Rochester multimodal music performance (URMP) dataset \cite{urmp}, which consists of $44$ classical chamber music pieces and audio signals played with $13$ musical instruments. We used $35$ out of the $44$ music pieces as the training data (total $11523$ s) and divided all instrument signals into $12$-s segments. The segments shorter than $12$ s were zero-padded up to $12$-s length. We trained for $3000$ epochs using the Adam optimizer with a learning rate of $0.001$. We used the multiscale spectral loss with the same frame lengths as in the fitting step. The synthesis parameters were computed at $32$-ms intervals ($T=375$). As evaluation measures, we used the mean absolute errors (MAEs) in $F_0$, MFCC, and loudness between the estimates and the ground truths extracted from the source signals. The ground truths of $F_0$ were extracted using the CREPE model. Following \cite{ddsp_ref}, the $F_0$ MAEs were computed at the frames for which the confidences of the ground truths of $F_0$ were greater than or equal to $0.85$. Note that since the estimated $f_{r,t}$ values may be negative, we floored them at $10^{-7}$ Hz. The MFCC estimates were computed from the audio signals synthesized with the estimated synthesis parameters.
We calculated the MFCCs using the log-mel-spectrogram with $128$-ms frames, a $32$-ms hop size, and $128$ frequency bins ranging from $20$ to $8000$ Hz, and we used the first $30$ coefficients. \subsection{Results} Table~\ref{tab: result} shows the results of all methods, where the evaluation measures were computed for each segment; we report their averages and standard errors. Here, an instrument name followed by a slash and an underline denotes the mixtures of two different instruments, and an instrument name followed by a slash and the same instrument name denotes the mixtures of two instances of the same instrument. The names inside the parentheses of the ``Instruments'' column indicate the music pieces in which the instrument audio signals are included. Although SISS+DDSP provided moderate performance in terms of $F_0$ for the Va. and Fl. mixtures, it showed much lower performance for the other mixtures. A similar tendency was observed for MFCC and loudness. When we listened to the separated signals of SISS+DDSP, we found that these signals lacked part of the target sources and included artifacts and interference from the other sources. We also found that some of the synthesized signals of SISS+DDSP were considerably different from the target source signals. These results show that the separation-based method has unstable performance and that separation failures greatly degrade the synthesis parameter extraction performance. Compared with SISS+DDSP, SISS+Proposed and SI-Proposed provided comparable or higher performance for most of the mixtures, particularly in terms of MFCC, showing the effectiveness of the proposed methods. Although the $F_0$ and loudness performance of SISS+Proposed was still low for the mixtures that SISS failed to separate, SI-Proposed consistently provided higher performance on all measures. Furthermore, we observed that the source signals synthesized by the proposed method had timbres much more similar to the target sources and audibly outperformed those obtained with SISS+DDSP. These results clearly show that the proposed method works stably and effectively compared with the separation-based method. Some synthesized examples are available at \url{https://sarulab-audio.github.io/DDSP_Mixture_Model/}. Importantly, SI-Proposed had much lower standard errors in $F_0$ and loudness than the other methods for most of the mixtures. This result shows that the score information is useful for the proposed method and can decrease the deviations of the estimates. \section{CONCLUSION} We proposed the DDSP mixture model that represents the generation process of a mixture of harmonic audio signals, using part of the pretrained DDSP autoencoder as a source audio synthesizer. We also developed a synthesis parameter extraction method by fitting the output of the DDSP mixture model to the observed mixture. Through experiments on mixtures of different instruments and of the same instrument, we showed that the proposed method outperforms a straightforward method that applies the DDSP autoencoder to signals separated with an existing audio source separation method. \vfill\pagebreak \bibliographystyle{IEEE}
\section{Introduction} Many objects in everyday environments are made for human hands. Mugs have handles to grasp; the stove has dials to push and turn; a needle has a small hole through which to weave a tiny thread. In order for robots to assist people in human-centric environments, and in order for them to reach new levels of adept manipulation skill, \emph{multi-fingered dexterous robot hands} are of great interest as a physical embodiment~\cite{gupta2016learning,rajeswaran2017learning,jain2019learning, zhu2019dexterous,nagabandi2019deep,akkaya2019solving,andrychowicz2020learning}. Unlike common end effectors like parallel jaw grippers or suction cups, a dexterous hand has the potential to execute complex behaviors beyond pushing, pulling, and picking, and to grasp objects with complex geometries in functionally useful ways~\cite{brahmbhatt2019contactgrasp,kokic2020learning}. The flexibility of a dexterous robotic hand, however, comes with significant learning challenges. With articulated joints offering 24 to 30 degrees of freedom (DoF), the action space is formidable. At the same time, interacting with new objects having unfamiliar shapes demands a high level of generalization. Both factors have prompted exciting research in deep reinforcement learning (RL), where an agent dynamically updates its manipulation strategy using closed-loop feedback control with visual sensing while attempting interactions with different objects~\cite{rajeswaran2017learning,nagabandi2019deep,andrychowicz2020learning}. To mitigate sample complexity---since many exploratory hand pose trajectories will yield no reward---current methods often incorporate imitation learning~\cite{gupta2016learning, rajeswaran2017learning, jain2019learning, zhu2019dexterous,state-only}. With imitation, expert (human) demonstrations provided by teleoperation in virtual reality~\cite{zhu2018reinforcement,nair-2017}, mocap~\cite{gupta2016learning,handa2019dexpilot}, or kinesthetic manipulation of the robot's body~\cite{vecerik-2018,mime} are used to steer the RL agent towards desirable state-action sequences. Such demonstrations can noticeably accelerate robot learning. \begin{figure}[t] \centering \begin{center} \includegraphics[width=\linewidth]{figures/intro.pdf} \end{center} \vspace*{-0.15in} \caption{\textbf{Main idea}: We learn dexterous grasping by watching human-object interactions in YouTube how-to videos. Using hand poses extracted from a repository of curated human grasp images (left), we train a dexterous robotic agent to learn to grasp objects in simulation (right). The key benefits include improved grasping performance and the ability to quickly scale the method to new objects. } \vspace*{-0.2in} \label{fig:intro} \end{figure} However, the existing paradigms for human demonstrations have inherent shortcomings. First, they require some degree of specialized setup: a motion glove for the human demonstrator to wear, a virtual reality platform matched to the target robot, a high precision hand and arm tracker, and/or physical access to the robot equipment itself. This in turn restricts demonstrations to lab environments, assumes certain expertise and resources, and entails repeated overhead to add new manipulations or objects. Second, there is an explicit layer of indirection: in conventional methods, a person does not do a task with their own hands, but instead enacts a proxy under the constraints of the hardware/software used to collect the demonstration. 
For example, in VR, the person needs to watch a screen to judge their success in manipulating a simulated hand and may receive limited or no force feedback~\cite{rajeswaran2017learning,kumar2015mujoco}. Similarly, advanced visual teleoperation systems that observe the bare hand~\cite{handa2019dexpilot} still separate the person's hand from the real-world object being manipulated. In a kinesthetic demonstration, the human expert is furthest removed, manually guiding the robot's end effector, which has a very different embodiment~\cite{mime,pathak2019}. In light of these challenges, we propose to learn dexterous robot grasping policies by watching people interact with objects in video (see Fig.~\ref{fig:intro}). The main idea is to observe people's hand poses as they use objects in the real world in order to establish a 3D hand pose prior that a robot might attempt to match during functional grasping. Rather than enlist special-purpose demonstrations (e.g., record videos at lab tabletops), we turn to in-the-wild Internet videos as the source of the visual prior. Our method automatically extracts human hand poses from video frames using a state-of-the-art computer vision technique. We then define a deep reinforcement learning model that augments a grasp success reward with a reward favoring human-like hand poses upon object contact, while also preferring to grasp the object around affordance regions predicted from an image-based model. In short, by watching video of people performing everyday activities with a variety of objects, our agents learn how to approach objects effectively using their own multi-fingered hand, while also accelerating training. Aside from providing accurate policies and faster learning, our approach has several conceptual advantages. First, it removes the indirection discussed above. There is no awkwardness or artificiality of VR, kinesthetic manipulations, etc.; people in the videos are simply interacting with objects in the context of their real activity. This also promotes learning \emph{functional} grasps---those that prepare the object for subsequent use---as opposed to grasps that simply lift an object in an arbitrary manner. New demonstrations are also easy to curate whenever new objects become of interest, since it is a matter of downloading additional video. In addition, because our visual model focuses on 3D hand pose, it tolerates viewpoint variation and is agnostic to the complex visual surroundings in training data (e.g., variable kitchens, clutter, etc.). Finally, the proposed method requires only visual input from the human data---no state-action sequences. We demonstrate our approach trained with frames from YouTube how-to videos to learn grasps for a 30-DoF robot in simulation on 27 diverse objects. The resulting policy outperforms state-of-the-art methods for learning from demonstration and visual affordances~\cite{rajeswaran2017learning,mandikal2020graff}, while also being $20\%$ more efficient to train. The learned behavior resembles that of natural human-object interactions and offers an encouraging step in the direction of robot learning from in-the-wild Internet data.
\section{Related Work} \noindent \textbf{Learning to grasp objects} Early grasping work explicitly reasons about an object's 3D shape against the gripper~\cite{eigengrasps,bicchi2000robotic}, whereas learning methods often estimate an object or hand pose followed by model-based planning~\cite{levine2018learning,brahmbhatt2019contactgrasp}, optionally using supervised learning on visual inputs to predict successful grasps~\cite{levine2018learning,wu-nips2020,lenz2015deep,redmon,mahler2017dex}. Most planning methods cater to simple non-dexterous end-effectors like parallel jaw grippers or suction cups, which make a control policy easier to codify but need not yield functional grasps. Rather than plan motions to achieve a grasp, reinforcement learning (RL) methods act in a closed loop with sensing, which has the advantage of dynamically adjusting to object conditions~\cite{kalashnikov2018qt, quillen2018deep, merzic2019leveraging}. Only limited work explores RL for dexterous manipulation~\cite{nagabandi2019deep,andrychowicz2020learning,mandikal2020graff}. Our work addresses functional dexterous grasping with RL, but unlike the existing methods it primes agent behavior according to videos of people. \noindent \textbf{Imitation and learning from demonstration} To improve sample complexity, imitation learning from expert demonstrations is often used, whether for non-dexterous~\cite{finn-corl2017,stadie2017,liu2018,pathak-iclr2018,tcn,mime} or dexterous~\cite{gupta2016learning, rajeswaran2017learning, jain2019learning, zhu2019dexterous} end effectors. Researchers often use demonstrations to explore dexterous manipulation in simulation~\cite{rajeswaran2017learning, jain2019learning}, and recent work shows the promise of sim2real transfer~\cite{zhu2018reinforcement,andrychowicz2020learning}. Like learning from demonstrations (LfD), our approach aims to learn from human experts, but unlike traditional LfD, our method does so without full state-action trajectories, relying instead only on a visual prior for ``good'' hand states. Furthermore, our use of in-the-wild video as the source of human behavior is new and has all the advantages discussed above. \noindent \textbf{Imitating visual observations} Learning to imitate \emph{observations}~\cite{ifo} relaxes the requirement of capturing state sequences in demonstrations. This includes ideas for overcoming viewpoint differences between first and third person visual data~\cite{tcn,stadie2017,liu2018}, multi-task datasets to learn correspondences between video of people and kinesthetic trajectories of a robot arm~\cite{mime}, few-shot learning~\cite{finn-corl2017,pathak-iclr2018}, and shaping reward functions with video~\cite{tcn,goo-niekum,mime,liu2018}. However, none of the prior work uses in-the-wild video to learn dexterous grasping as we propose. By connecting real video of human-object interactions to a dexterous robotic hand, our method capitalizes on both the naturalness of the demonstrations as well as the near-shared embodiment. Furthermore, unlike our approach, the existing (almost exclusively non-dexterous) methods require paired data for the robot and person's visual state spaces~\cite{tcn,mime,jain2019learning}, assume the demonstrator and robot share a visual environment (e.g., a lab tabletop)~\cite{finn-corl2017,pathak-iclr2018,stadie2017,liu2018,jain2019learning}, and/or tailor the imitation for a specific object~\cite{jain2019learning}.
\noindent \textbf{Visual object affordances} As a dual to hand pose, priors for the object regions that afford an interaction can also influence grasping. Vision methods explore learning affordances from visual data~\cite{affordancenet,Brahmbhatt_2019_CVPR,nagarajan2019grounded}, though they stop short of agent action. Visual affordances can successfully influence a pick-and-place robot~\cite{wu2020affordance,48887} and help grasping with simple grippers~\cite{kokic2020learning,levine2018learning,lenz2015deep,redmon} or a dexterous hand~\cite{brahmbhatt2019contactgrasp,mandikal2020graff}. Object-centric affordances predicted from images also benefit the proposed model, but they are complementary to our novel contribution---the hand-pose prior learned from video. \noindent \textbf{Estimating hand poses} Detecting human hands and their poses is explored using a variety of visual learning methods~\cite{cai-rss2016,kitani-gesture,hasson19_obman,rong2020frankmocap,zhou-cvpr2020}. Many methods jointly reason about the shape of the object being grasped~\cite{hasson19_obman,kokic2020learning,dexycb2021chao,ilija-arxiv2020}. Recent work provides large-scale datasets to better understand human hands~\cite{fouhey-cvpr2020,Brahmbhatt_2020_ECCV,taheri-eccv2020}. We rely on the state-of-the-art FrankMocap method~\cite{rong2020frankmocap} to extract 3D hand poses from video. Our contribution is not a computer vision method to parse pose, but rather a machine learning framework to produce dexterous robot grasping behavior. \section{Approach} \begin{figure}[t] \centering \begin{center} \includegraphics[width=0.95\linewidth]{figures/overview.pdf} \end{center} \vspace*{-0.1in} \caption{\textbf{Overview of \textsc{DexVIP}.} We use grasp poses inferred from Internet video to train a dexterous grasping policy. An actor-critic network (blue) processes sensory observations from visual and motor streams (green) to estimate agent actions. Human hand pose priors derived from how-to videos (red) encourage the agent to explore worthwhile grasp poses via an auxiliary reward (purple).} \label{fig:overview} \vspace*{-0.25in} \end{figure} We consider the task of dexterous grasping with an articulated 30-DoF multi-fingered robotic hand. Our goal is to leverage abundant human interaction videos to provide the robot with a prior over meaningful grasp poses. To this end, we propose \textsc{DexVIP}, an approach to learn \textbf{Dex}terous grasping using \textbf{V}ideo \textbf{I}nformed \textbf{P}ose priors. We first lay out the formulation of the reinforcement learning problem for dexterous grasping (Section~\ref{subsec:problem_formulation}). Then, we describe how we leverage human hand pose priors derived from in-the-wild YouTube videos for this task (Section~\ref{subsec:policy_learning}). \subsection{Reinforcement Learning Framework for Dexterous Grasping} \label{subsec:problem_formulation} \noindent \textbf{Background} Our dexterous grasping task is structured as a reinforcement learning (RL) problem, where an agent interacts with the environment according to a policy in order to maximize a specified reward (Fig.~\ref{fig:overview}). At each time step $t$, the agent receives an observation $o_t$ and samples an action $a_t$ from its policy $\pi$. It then receives a scalar reward $R_{t+1}$ and next observation $o_{t+1}$ from the environment. This feedback loop continues until the episode terminates after $T$ time steps.
The goal of the agent is to determine the optimal stochastic policy that maximizes the expected sum of rewards. \noindent \textbf{Observations} Our task setup consists of a robotic hand positioned above a tabletop, with an object of interest resting on the table. At the start of each episode, we sample an object and randomly rotate it from its canonical orientation. The observations $o_t^r$ (Fig.~\ref{fig:overview}, green block) at each time step $t$ are a combination of visual and motor inputs to the robot. The visual stream is from an egocentric hand-mounted camera. It consists of an RGB image of the scene $I_t^r$ and the corresponding depth map $D_t^r$. Additionally, we provide a binary affordance map $A_t^r$ that is inferred from $I_0^r$ using an affordance prediction network~\cite{mandikal2020graff} to guide the agent towards functional grasp regions on the object. The motor inputs are a combination of the robot proprioception $P_t^r$ and the hand-object contact distances $d_t^r$. $P_t^r$ comprises the robot joint angles (pose) $p_t^r$ and the angular velocities $v_t^r$ of the actuator, while $d_t^r$ is the pairwise distance between the object affordance regions and contact points on the hand. The latter assumes the object is tracked in 3D once its affordance region is detected, following~\cite{mandikal2020graff}. The agent also has 21 touch sensors $T^r$ spread uniformly across the palm and fingers. \noindent \textbf{Action space} At each time step $t$, the policy $\pi$ processes observations $o_t^r$ and estimates actions $a_t^r$---30 continuous joint angle values---which are applied to the actuated joints. The robotic manipulator we consider is the Adroit hand~\cite{kumar2013fast}, a 30-DoF position-controlled dexterous hand. With a five-fingered 24-DoF actuator attached to a 6-DoF arm, the morphology of the robot hand closely resembles that of the human hand. This congruence opens up an exciting avenue to infuse a grasp pose prior learned from human-object interaction videos, as we present later in Sec.~\ref{subsec:policy_learning}. \noindent \textbf{Feature and policy learning} We adopt an actor-critic model for learning the grasping policy. The visuo-motor observations $o_t^r$ are processed separately using two neural networks, $f_V$ and $f_M$ (Fig.~\ref{fig:overview}, blue block). Specifically, the visual inputs encompassing \{$I_t^r, D_t^r, A_t^r$\} are concatenated and fed to a three-layer CNN $f_V$ that encodes them to obtain a visual embedding $V_t$. The motor stream comprising \{$P_t^r, d_t^r$\} is processed by a two-layer fully connected network $f_M$ that encodes them to a motor embedding $M_t$. Finally, $V_t$ and $M_t$ are concatenated and fed to the actor and critic networks to estimate the policy distribution $\pi_{\theta}(a_t^r|o_t^r)$ and state values $V_{\theta}(o_t^r)$, respectively, at each time step. The resulting policy $\pi$ outputs a 30-D unit-variance Gaussian whose mean is inferred by the network; we sample from this distribution to obtain the robot's next action $a_t^r$. We train the complete RL network with PPO~\cite{schulman2017proximal} using a reward that encourages successful grasping, touching object affordance regions, and mimicking human hand poses, as we will detail below.
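For concreteness, the following is a minimal PyTorch sketch of this visuo-motor actor-critic. The three-layer CNN with filter sizes $[8,4,3]$, the 512-D embeddings, and the 30-D unit-variance Gaussian head follow the description above and the implementation details in Sec.~\ref{sec:expts}; the strides, channel counts, and input dimensions are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class VisuoMotorActorCritic(nn.Module):
    # Sketch of f_V, f_M, and the actor/critic heads. Filter sizes
    # [8, 4, 3] and the 512-D embeddings follow the paper; strides,
    # channel counts, and motor_dim are assumptions.
    def __init__(self, motor_dim=60, act_dim=30):
        super().__init__()
        # f_V: RGB (3) + depth (1) + binary affordance map (1) channels.
        self.f_V = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU())      # visual embedding V_t
        # f_M: proprioception P_t plus hand-affordance distances d_t.
        self.f_M = nn.Sequential(
            nn.Linear(motor_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU())     # motor embedding M_t
        self.actor_mean = nn.Linear(1024, act_dim)  # Gaussian mean
        self.critic = nn.Linear(1024, 1)            # state value V(o_t)

    def forward(self, image, motor):
        h = torch.cat([self.f_V(image), self.f_M(motor)], dim=-1)
        mean = self.actor_mean(h)
        # 30-D unit-variance Gaussian policy; actions are sampled from it.
        dist = torch.distributions.Normal(mean, torch.ones_like(mean))
        return dist, self.critic(h)
\end{verbatim}
An action $a_t^r$ is then drawn with \texttt{dist.sample()}, and both heads are trained jointly under the PPO objective.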
\noindent \textbf{Robot hand simulator} We conduct experiments in MuJoCo~\cite{todorov2012mujoco}, a physics simulator commonly used in robotics research. Due to lack of access to a real robotic hand, we perform all experiments in simulation. The successful transfer of dexterous policies trained purely in simulation to the real world~\cite{zhu2018reinforcement,andrychowicz2020learning,akkaya2019solving} supports the value of simulator-based learning in research today. In addition, we conduct numerous experiments with noisy sensing and actuation settings that might occur in the real world to illustrate the robustness of the policy to non-ideal scenarios (see Sec.~\ref{sec:expts} and Supp.). \subsection{Object-Specific Human Hand Pose Priors from Video} \label{subsec:policy_learning} We now describe how we leverage human-object interaction videos to enable robust dexterous grasping. The morphological similarity between the human hand and the dexterous robot holds immense potential to learn meaningful pose priors for grasping. To facilitate this, we first curate a human-object interaction dataset of video frames to infer 3D grasp hand poses for a variety of objects. We then transfer these poses to the robotic hand using inverse kinematics, and finally develop a novel RL reward that favors human-used poses when learning to grasp. The proposed approach allows us to leverage readily available Internet data to learn robotic grasping. \noindent\textbf{Object dataset} We consider household objects often encountered in daily-life activities (and hence online instructional videos) and for which 3D models are available. We acquire 3D object models from multiple public repositories: ContactDB~\cite{Brahmbhatt_2019_CVPR}, 3DNet~\cite{wohlkinger20123dnet}, YCB~\cite{ycb2017ijrr}, Free 3D~\cite{free3d}, and 3D Warehouse~\cite{3dwarehouse}. We specifically include the 16 ContactDB objects with one-hand grasps used in recent grasping work to facilitate concrete comparisons~\cite{mandikal2020graff}. We obtain a total of 27 objects to be used for training the robotic grasping policy, all 16 from ContactDB plus 11 additional objects. \noindent \textbf{Video frame dataset} We use the HowTo100M dataset~\cite{miech19howto100m} to curate images containing human-object grasps for the objects of interest. HowTo100M is a large-scale dataset consisting of 13.6M instructional YouTube videos across categories such as cooking, entertainment, hobbies and crafts, etc. We focus on videos featuring commonly used household objects---tools and kitchen utensils such as mug, hammer, jug, etc. The idea is to capture objects in active use during natural human interactions so that we can obtain functional hand poses. The grasp images contain the object in its canonical upright position (e.g., pan on stove), which is also the initial vertical orientation of the object on the tabletop in the simulator. Using the above criteria, we curate an object interaction repository $\mathcal{I}_h$ of 715 video frames from HowTo100M where the human hand is grasping one of the 27 total objects, yielding on average 26 grasp images per object. For instance, to collect expert data for grasping a \textit{pan}, we curate grasp images from task ids such as \say{\textit{care for nonstick pans}}, \say{\textit{buy cast iron pans}}, etc. While we found it effective to use simple filters based on the weakly labeled categories and specific task ids in HowTo100M, the curation step could be streamlined further by deploying vision methods for detecting hands, actions, and objects in video~\cite{fouhey-cvpr2020,bojanowski,miech19howto100m}.
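As a rough illustration of this keyword-based curation, the sketch below selects candidate videos from a metadata table; the file name, column names, and task phrases are assumptions for illustration, not the exact pipeline.
\begin{verbatim}
import csv

# Hypothetical task phrases per object; the "pan" entries follow the
# examples given in the text.
OBJECT_TASKS = {
    "pan": ["care for nonstick pans", "buy cast iron pans"],
    "mug": ["clean a coffee mug"],
}

def select_video_ids(metadata_csv, object_name):
    # Return ids of videos whose weakly labeled task description
    # mentions one of the object's task phrases.
    phrases = OBJECT_TASKS[object_name]
    keep = []
    with open(metadata_csv) as f:
        for row in csv.DictReader(f):
            task = row["task_description"].lower()
            if any(p in task for p in phrases):
                keep.append(row["video_id"])
    return keep
\end{verbatim}
Frames from the selected videos are then screened for a visible right-hand grasp of the object of interest, as described above.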
\begin{figure}[t] \centering \begin{center} \includegraphics[width=0.95\linewidth]{figures/dataset.pdf} \end{center} \vspace*{-0.05in} \caption{\textbf{Human hand pose priors informing the agent action.} Each row shows three example images and extracted human hand poses for each object category (left) and the corresponding consensus robot hand pose rewarded by our method and its application by the agent in action (right).} \label{fig:dataset} \vspace*{-1em} \end{figure} \noindent \textbf{Target hand pose acquisition} We propose to use the obtained HowTo100M grasp images $\mathcal{I}_h$ to provide a learning signal for robotic grasping. To that end, for each image we first infer its 3D human hand pose $p^h$. In particular, we employ FrankMocap~\cite{rong2020frankmocap} to estimate 3D human hand poses (see Fig.~\ref{fig:dataset}, left). FrankMocap is a near real-time method for 3D hand and body pose estimation from monocular video; it returns 3D joint angles for detected hands in each frame. While alternative pose estimation methods could be plugged in here, we use FrankMocap in our implementation due to its efficiency and good empirical performance. We keep right-hand detections only, since our robot is one-handed; we leave handling bimanual grasps for future work. This step yields a collection of different hand poses per object for a variety of objects found in the videos. Let $\mathcal{P}(c) = \{p_{c_1}^h,\dots,p_{c_n}^h\}$ be the set of human hand poses associated with object class $c$. The poses within an object class are often quite consistent since the videos naturally portray people using the object in its standard functional manner, e.g., gripping a pot handle elicits the same pose for most people. However, some objects elicit a multi-modal distribution of poses (e.g., a knife held with or without an index finger outstretched). See Fig.~\ref{fig:dataset}. In order to automatically discover the ``consensus'' pose for an object, we next apply k-medoids clustering on each set $\mathcal{P}(c)$. We consider the medoid hand pose of the largest cluster to be the consensus target hand pose $p_{c^\ast}^h$ and use its associated target robot pose $p_{c^\ast}^r$ (obtained using a joint re-targeting mechanism described in Supp.~Sec.~D) during policy learning for object $c$. \noindent \textbf{Video-informed reward function} To exert the video prior's influence in our RL formulation, we incorporate an auxiliary reward function favoring robot poses similar to the human ones in video. In this way, the reward function not only signals \textit{where} to grasp a particular object, but also guides the agent on \textit{how} to grasp effectively. To realize this, we combine three rewards: $R_{succ}$ (positive reward when the object is lifted off the table), $R_{aff}$ (negative reward denoting the hand-affordance contact distance obtained from~\cite{mandikal2020graff}, see Supp.~Sec.~F), and---most notably---$R_{pose}$, a reward that is maximized when the agent's pose $p_t^r$ matches the target grasp pose $p_{c^\ast}^r$ for that object (implemented as a penalty on the pose error; see Supp.~Sec.~B). Our total reward function is: \setlength{\abovedisplayskip}{1em} \setlength{\belowdisplayskip}{1em} \begin{equation} R = \alpha R_{succ} + \beta R_{aff} + \gamma R_{pose} + \eta R_{entropy}, \label{eq:reward} \end{equation} where $\alpha,\beta,\gamma,\eta$ are scalars weighting the rewards, set by validation, and $R_{entropy}$ rewards entropy over the target action distribution to encourage the agent to explore the action space.
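To make the composition in Eq.~\ref{eq:reward} concrete, the sketch below combines the four terms at one time step. It assumes the simulator exposes a lift flag, the hand--affordance Chamfer distance, the pose error to $p_{c^\ast}^r$, and the touch-sensor readings; the default weights follow the implementation details in Sec.~\ref{sec:expts}, and the individual terms are detailed next and in Supp.~Sec.~B.
\begin{verbatim}
import numpy as np

def total_reward(lifted, chamfer_dist, pose_err, touch, entropy,
                 alpha=1.0, beta=1.0, gamma=1.0, eta=0.001):
    # lifted:       object held off the table (R_succ)
    # chamfer_dist: hand <-> affordance-region distance (R_aff = -dist)
    # pose_err:     weighted per-joint angle error to p_c* (R_pose = -err)
    # touch:        binary readings of the 21 touch sensors
    # entropy:      entropy of the current action distribution
    r_succ = 1.0 if lifted else 0.0
    r_aff = -chamfer_dist
    # The pose term is gated: it applies only once at least 30% of the
    # touch sensors are active, i.e., the hand is already on the object.
    r_pose = -pose_err if np.mean(touch) >= 0.3 else 0.0
    return alpha * r_succ + beta * r_aff + gamma * r_pose + eta * entropy
\end{verbatim}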
Through $R_{aff}$, the agent is incentivized to explore areas of the object within the affordance region, while $R_{pose}$ encourages the agent to reach hand states that are most suitable for grasping those regions. For $R_{pose}$, we compute the mean per-joint angle error at time $t$ between the robot joints $p^r_t$ and the target grasp pose $p_{c^\ast}^r$. We ignore the azimuth and elevation values of the arm in the pose error since they are specific to the object orientation in $I^h$, which may be different from the robot's viewpoint $I^r$. This provides more flexibility during object reaching to orient the arm while trying to match the hand pose alone. Furthermore, $R_{pose}$ is applied only when 30\% of the robot's touch sensors are activated. This encourages the robot to assume the target hand pose once it is close to the object and in contact with it. Following~\cite{mandikal2020graff}, $R_{aff}$ is computed as the Chamfer distance between points on the hand and points on the inferred object affordance region. See Supp.~Sec.~B for reward function details. The proposed approach permits imitation by visual observation of the human how-to videos, yet without requiring access to the state-action trajectories of the human activity. Its mode of supervision is therefore much lighter than that of conventional teleoperation or kinesthetic teaching, as we will see in the results. Furthermore, because our model can incorporate priors for new objects by obtaining new training images, it scales well to add novel objects. \section{Experiments} \label{sec:expts} We present experiments to validate the impact of learning pose priors from video and to gauge \textsc{DexVIP}'s performance relative to existing methods and baselines. \noindent \textbf{Compared methods} We compare to the following methods: \begin{enumerate*}[label=(\textbf{\arabic*})] \item \textsc{CoM}: uses the center of mass of the object as a grasp location prior, since this location can lead to stable grasps~\cite{kanoulas2018center}. We implement this by penalizing the hand-CoM distance for $R_{aff}$ in Eq.~\ref{eq:reward}, and removing $R_{pose}$. \item \textsc{Touch}: uses only the touch sensors $T^r$ on the hand to reward the agent with $+1$ for object interaction when 30\% of them are activated, but imposes no supervision on the hand pose. \item \textsc{GRAFF}~\cite{mandikal2020graff}: a state-of-the-art RL grasping model that trains a policy to grasp objects at inferred object-centric affordance regions. Unlike our method, \textsc{GRAFF} does not enforce any prior over the agent's hand pose. All the above three RL methods use the same architecture as our model for the grasping policy, allowing for a fair comparison. \item \textsc{DAPG}~\cite{rajeswaran2017learning}: a hybrid imitation+RL model that uses motion-glove demonstrations collected from a human expert in VR. It is trained with object-specific mocap demonstrations collected by~\cite{mandikal2020graff} for grasping ContactDB objects (25 demos per object). For objects beyond those in ContactDB, we use demos from the most similar-shaped ContactDB object. \end{enumerate*} \noindent \textbf{Metrics} We report four metrics: \begin{enumerate*}[label=(\textbf{\arabic*})] \item Grasp Success: when the object is lifted off the table by the hand for at least the last 50 time steps (a quarter of the episode length), allowing time to reach the object and pick it up.
\item Grasp Stability: the firmness with which the object is held by the robot, discounting grasps in which the object can easily be dropped. We apply perturbation forces of $1$ Newton in six orthogonal directions on the object after an episode completes (see the sketch below). If the object continues to be grasped by the agent, the grasp is deemed stable. \item Functionality: the percentage of successful grasps in which the hand lies close to the GT affordance region. This metric evaluates the utility of the grasp for post-grasp functional use. \item Posture: the distance between the target human hand pose $p_{c^\ast}^h$ and the agent's hand pose $p_T^r$ after a successful grasp. It tells us how human-like the learned grasps are. \end{enumerate*} We normalize all metrics on a $[0,100\%]$ scale, where higher is better. We evaluate 100 episodes per object with the objects placed at different initial orientations ranging from $0$ to $180\degree$. We report the mean and standard deviation of the metrics across all models trained with four random seeds.
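For reference, the Grasp Stability test can be summarized by the following sketch; \texttt{apply\_force} and \texttt{object\_still\_grasped} stand in for the corresponding simulator calls (e.g., writing an external force on the object body in MuJoCo) and are assumptions of this sketch.
\begin{verbatim}
import numpy as np

# 1 N perturbations along the six orthogonal directions.
DIRECTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)

def is_stable_grasp(env, settle_steps=50):
    # A grasp is deemed stable only if the object stays held under
    # every perturbation applied after the episode completes.
    for d in DIRECTIONS:
        env.apply_force(1.0 * d)          # 1 Newton push
        for _ in range(settle_steps):
            env.step_physics()
        env.apply_force(np.zeros(3))      # remove the perturbation
        if not env.object_still_grasped():
            return False
    return True
\end{verbatim}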
\noindent \textbf{Implementation details} The visual encoder $f_V$ has filters of size [8,4,3] with a 512-D bottleneck and ReLU activations. The motor encoder $f_M$ has dimensions [512,512]. For the hand-object contacts, we use 10 and 20 uniformly sampled points on the hand and affordance region, respectively, following~\cite{mandikal2020graff}. The entire network is optimized using Adam with a learning rate of $5\times10^{-5}$. A single grasping policy is trained on all the curated objects for 150M agent steps with an episode length of 200 time steps. The coefficients in the reward function (Eq.~\ref{eq:reward}) are set as: $\alpha=1,\beta=1,\gamma=1,\eta=0.001$. We train for four random seed initializations. Further details are provided in Supp.~Sec.~E.1. \noindent \textbf{Grasping policy performance} We take the policy trained on all 27 objects and first evaluate it on the 16 objects from ContactDB~\cite{Brahmbhatt_2019_CVPR}. Since these objects have ground truth affordances (used when training \textsc{GRAFF} and our model) and mocap demonstrations (used when training \textsc{DAPG}), they represent the best-case scenario for the existing models to train with clean expert data. Note that our method always uses hand poses inferred from YouTube videos, even for the ContactDB objects. \begin{wrapfigure}{!rb}{0.5\textwidth} \centering \vspace*{-0.2in} \includegraphics[width=\linewidth]{figures/grasp_results.pdf} \caption{\textbf{Grasping performance.} Example frames for the grasping task. Our \textsc{DexVIP} policy guided by pose priors is able to successfully grasp objects in natural human-like poses, while the other methods may either generate unusual poses or fail to grasp effectively. Please see Supp.~video. \vspace*{-0.25in} } \label{fig:comparison} \end{wrapfigure} Fig.~\ref{fig:grasp_metrics} (left) shows the results. \textsc{DexVIP} consistently outperforms all the methods on all metrics. The grasp success and stability rates experience a significant boost even compared to \textsc{GRAFF}~\cite{mandikal2020graff}, which utilizes object affordances but does not enforce any constraints on the hand pose. The Functionality values are similar, as both methods encourage the agent to grasp the object at the affordance regions. Our method also scores well on the Posture metric, indicating that the learned policies indeed demonstrate human-like behavior during grasping. See Supp.~Sec.~G and Sec.~E.2 for additional results and Sec.~C for TSNE plots illustrating our model's human-like poses. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/bar_plot_curve.pdf} \vspace*{-0.1in} \caption{\textbf{Grasping success and learning speed.} Left: The proposed model outperforms all the baselines, including recent methods for learning from demonstrations (\textsc{DAPG}~\cite{rajeswaran2017learning}) or favoring visual affordance regions (\textsc{GRAFF}~\cite{mandikal2020graff}). Relative results remain stable under noisy sensing and actuation (shaded bars). Right: Our model trains faster than the others, showing that the Internet videos of people help kickstart learning even before the agent begins attempting its own grasps. } \label{fig:grasp_metrics} \vspace{-1.5em} \end{figure} Fig.~\ref{fig:comparison} shows grasp policies from sample episodes. We can see that \textsc{DexVIP} is able to grasp objects with a human-like natural posture compared to the other methods. Contrast this with \textsc{CoM} or \textsc{GRAFF}, which rely only on object-centric cues: those agents often end up in unusual poses that reduce their applicability to post-grasp functional tasks using the object. Failure cases arise when some objects are in orientations not amenable to the target hand pose. For instance, a knife with the handle on the left would ideally be picked up differently and then re-oriented for use. \noindent \textbf{Training speed} Fig.~\ref{fig:grasp_metrics} (right) shows the training curves. Our method learns successful policies $20\%$ faster than the next best method, \textsc{GRAFF}~\cite{mandikal2020graff}. This underscores how the hand pose priors enable our agent to approach objects in easily graspable configurations, thus improving sample efficiency. \noindent \textbf{Expert data: teleoperation vs.~YouTube video} Compared to traditional demonstrations, a key advantage of our video-based approach is the ease of data collection and scalability to new objects. We analyze the time taken to collect demonstrations for \textsc{DAPG}~\cite{rajeswaran2017learning} versus the time taken to curate videos for \textsc{DexVIP}. On average, it takes 5 minutes to collect a single demonstration for \textsc{DAPG} owing to the complex setup, while a video or image for \textsc{DexVIP} is collected in a few seconds. Fig.~\ref{fig:demo_acc} quantifies the impact of the efficiency of human demonstrations for our model (trained with video frames) compared to traditional state-action demonstrations (trained with VR demos). Plotting success rates as a function of accumulated demo experience, we see how quickly the proposed image-based supervision translates to grasping success on an increasing number of new objects, whereas with traditional demonstrations reaching peak performance takes much longer. This highlights the significant gains that can be realized by shifting from tele-op to video supervision for robot learning. \begin{wrapfigure}{r}{0.42\textwidth} \centering \vspace*{-0.2in} \includegraphics[width=\linewidth]{figures/demo_acc.png} \caption{\textbf{Demo time vs.~success rate.} \textsc{DexVIP} benefits from easily available Internet data to scale up supervision.} \vspace*{-0.15in} \label{fig:demo_acc} \end{wrapfigure} This efficiency also means new objects are quick to add. To illustrate, we further evaluate \textsc{DexVIP} and \textsc{DAPG} on all 11 non-ContactDB objects, for which mocap demonstrations are not available.
\textsc{DAPG}'s success rate drops from $59\%$ to $50\%$---a $15\%$ relative drop in performance---while \textsc{DexVIP} experiences a marginal $4\%$ drop from $68\%$ to $65\%$, remaining comparable to its performance on ContactDB. Because \textsc{DAPG} is trained on sub-optimal demonstrations for these objects, it fails to generalize well to objects for which expert data is absent. \textsc{DexVIP}, on the other hand, is able to benefit from easily available Internet images to scale up supervision. \vspace{-.0in} \noindent \textbf{Ablations} Next we investigate how different components of the reward influence the dexterous grasping policy. Using only $R_{aff}$, the success rate is 60\% on ContactDB. When we successively add touch and hand pose priors to the reward function (Eq.~\ref{eq:reward}), we obtain success rates of 63\% and 68\%, respectively. Thus our full model is the most effective. \noindent \textbf{Noisy sensing and actuation} Finally, to mimic non-ideal real-world scenarios, we induce multiple sources of noise into our agent's sensory and actuation modules during training and testing, following prior work~\cite{zhu2018reinforcement,andrychowicz2020learning,akkaya2019solving,mandikal2020graff}. These include Gaussian noise on the robot's proprioception $P_t^r$, object tracking $d_t^r$, and actuation $a_t^r$, as well as pixel perturbations on the image observations $I_t^r$. Under heavy noise, \textsc{DexVIP} still yields a grasp success rate of $64\%$, even outperforming noise-free models of the other methods (Fig.~\ref{fig:grasp_metrics}, shaded bars). This encouraging result in a noise-induced simulation environment lends support for potentially transferring the learned policies to the real world~\cite{akkaya2019solving,andrychowicz2020learning,tobin-iros2017} were we to gain access to a dexterous robot. Please see Supp.~Sec.~A for more details. \section{Conclusion} \vspace{-.0in} We proposed an approach to learn dexterous robotic grasping from human hand pose priors derived from video. By leveraging human-object interactions in YouTube videos, we showed that our dexterous grasping policies outperformed methods that did not have access to these priors, including two state-of-the-art models for RL-based grasping. Key advantages of our approach are 1) humans are observed directly doing real object interactions, without the interference of conventional demonstration tools; and 2) expert information from video sources scales well with new objects. This is an encouraging step towards training robotic manipulation agents from weakly supervised and easily scalable in-the-wild expert data available on the Internet. In the future, we are interested in expanding the repertoire of tasks beyond grasping to learn fine-grained manipulation skills from human interaction videos. \section*{\centering \LARGE{Supplementary Material}} \textit{Note: Please see the supplementary video on the project page for example episodes: \url{https://vision.cs.utexas.edu/projects/dexvip-dexterous-grasp-pose-prior}.} \section{Noisy sensing and actuation} We are so far unable to deploy our system on a real robot, since we lack access to a dexterous robot hand. Instead, we provide experiments with a popular realistic simulator and further stress-test our approach with noisy sensing and actuation. Those results appear in the main paper; here we elaborate on the noise models. Robots can encounter a number of non-ideal scenarios when executing policies in the real world.
The ever-changing nature of the real world, coupled with faults in hardware systems, poses a daunting challenge to real-world deployment. These discrepancies often occur in the form of sensing and actuation failures: variations in sensory systems such as perception modules, as well as fluctuations in actuation control. Before robots can successfully be deployed into the real world, they must be capable of handling such variations. In Section 4 of the main paper, we describe the setup for inducing noise into our agent's sensory and actuation modules during training and testing, following prior work~\cite{zhu2018reinforcement,andrychowicz2020learning,akkaya2019solving,mandikal2020graff}. Here, we further describe each of the noise sources in detail. \begin{enumerate} \item Proprioceptive noise: We apply additive Gaussian noise of mean $0$ and standard deviation $0.01$ on the robot's joint angles and angular velocities. This simulates the sensing and signal failures that can arise in the system; training with such injected noise makes the policy more robust to sensing imperfections at deployment time. \item Actuation noise: Similar to the proprioceptive noise, we apply additive Gaussian noise on the robot actuation values. Such a noise model accounts for fluctuations in actuation control experienced in real-world deployment. \item Perception noise: For each RGB image $I_t^r$ that is processed by the vision module $f_V$, we apply pixel perturbations in the range $[-5, 5]$ and clip all pixel values between $[0, 255]$. This more closely resembles noise arising during camera sensing~\cite{zhu2018reinforcement}. \item Tracking noise: The object tracking points are perturbed with Gaussian noise of mean $0$ and standard deviation $1$ cm. Additionally, we freeze these tracking points for 20 frames at random intervals to further challenge our system. This simulates the effect of tracking failures arising from momentary occlusions of the object during interaction. \end{enumerate} As can be seen in Fig.~5 (left) of the main paper, even under substantial noise, our proposed method \textsc{DexVIP} still yields a high grasp success rate comparable to its performance under noise-free conditions, and even outperforms noise-free models of the other methods. This demonstrates the robustness of the trained policy to non-ideal realistic conditions. Since our lab does not have access to a real robot, we perform all our experiments in simulation. However, using the noise-induction techniques described above, we are able to stress-test our method under realistic conditions. Factors that could still differ between simulation and the real world include friction coefficients, damping factors, and so forth, which could be further accounted for using automatic domain randomization techniques as in~\cite{akkaya2019solving}. Despite the lack of access to a real robot, the encouraging performance of \textsc{DexVIP} in a noise-induced simulation environment lends support for potentially transferring the learned policies to the real world~\cite{akkaya2019solving,andrychowicz2020learning,tobin-iros2017} were we to gain access to a robot.
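For reference, a minimal sketch of the four noise models above (the standard deviation of the actuation noise is stated only as ``similar'' in the text, so reusing $0.01$ is an assumption):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def noisy_proprioception(p):
    # Additive Gaussian noise (mean 0, std 0.01) on joint angles
    # and angular velocities.
    return p + rng.normal(0.0, 0.01, size=p.shape)

def noisy_action(a):
    # Actuation noise; std assumed equal to the proprioceptive case.
    return a + rng.normal(0.0, 0.01, size=a.shape)

def noisy_image(img):
    # Integer pixel perturbations in [-5, 5], clipped to [0, 255].
    jitter = rng.integers(-5, 6, size=img.shape)
    return np.clip(img.astype(int) + jitter, 0, 255).astype(np.uint8)

def noisy_tracking(points, last_valid=None):
    # Gaussian position noise (std 1 cm = 0.01 m); passing last_valid
    # emulates the 20-frame tracking freezes.
    if last_valid is not None:
        return last_valid
    return points + rng.normal(0.0, 0.01, size=points.shape)
\end{verbatim}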
\section{Reward function details} We describe the various components of the reward function in Eq.~\ref{eq:reward} of the main paper in more detail. \begin{enumerate} \item $R_{succ}$: This is a positive reward determining whether the object has been grasped by the agent. At a particular time step $t$, if there is contact established between the hand and the object and no contact between the object and the table, the agent gets a $+1$ reward for that time step. This ensures that scenarios where the object is resting on the table, as well as those in which the object is in the air but out of the agent's reach, do not get counted as grasp successes. \item $R_{aff}$: This is a negative reward denoting the hand-affordance contact distance between points on the hand and object affordance regions. Following~\cite{mandikal2020graff}, we compute $R_{aff}$ as the negative of the Chamfer distance between the set $M$ of points on the hand and the set $N$ of points on the object. It is defined as follows: \begin{equation} R_{aff} = - d_{Chamfer}(M,N) = - \sum_{m\in M}\min_{n\in N}{||m-n||}^2_2 - \sum_{n\in N}\min_{m\in M}{||m-n||}^2_2. \label{eq:chamfer} \end{equation} We use $|M|=10$ and $|N|=20$ in our experiments, following~\cite{mandikal2020graff}. In essence, the agent experiences a higher penalty if it is away from the object affordance region, which drops to 0 as it gets closer. This encourages the agent to explore useful object regions. \item $R_{pose}$: As discussed in the main paper, $R_{pose}$ is a mean per-joint angle error applied on the robot joint angles $p_t^r$ upon making contact with the object, so that it matches the target robot pose $p_{c^\ast}^r$. We ignore the azimuth and elevation values of the arm in the pose error since they are specific to the object orientation in $I^h$, which may be different from the robot's viewpoint $I^r$. This joint error is also hierarchically weighted over the joint angles such that errors on the parent joints are more heavily penalized compared to those on the child joints. This encourages the agent to align parent joints first before aligning their children and thus adopts a global-to-local approach for pose matching. $R_{pose}$ is therefore defined as follows: \begin{equation} R_{pose} = -\Big( \gamma_1 l_{wrist} + \sum_{j=1}^{5} \big( \gamma_2 l^j_{knuckle} + \gamma_3 l^j_{middle} + \gamma_4 l^j_{distal} \big) \Big). \end{equation} Here, $j$ spans all five fingers of the hand and $l^j_i$ is the error between joint $i$ of the $j^{th}$ finger of the robot pose $p_t^r$ and target robot pose $p_{c^\ast}^r$. In our experiments, we set $\gamma_1=1.0, \gamma_2=0.75, \gamma_3=0.5, \gamma_4=0.25$. In this way, the poses align starting from the root joint. $R_{pose}$ is a negative reward that penalizes the agent for having poses that are distant from the target pose $p_{c^\ast}^r$. Furthermore, $R_{pose}$ is applied only when 30\% of the robot's touch sensors $T^r$ are activated. This encourages the robot to assume the target hand pose once it is close to the object and in contact with it. \item $R_{entropy}$: This reward is used while training the PPO agent so as to encourage exploration of the action space. This is implemented by maximizing the entropy over the target action distribution. \end{enumerate}
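A compact sketch of the two shaped terms above, using the stated point counts and hierarchy weights (the joint-name keys are an assumed naming convention):
\begin{verbatim}
import numpy as np

def chamfer(hand_pts, aff_pts):
    # Symmetric Chamfer distance of Eq. (2); hand_pts is 10x3 and
    # aff_pts is 20x3 in our setting.
    d2 = ((hand_pts[:, None, :] - aff_pts[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

# Hierarchical weights: parents are penalized more than children.
GAMMAS = {"wrist": 1.0, "knuckle": 0.75, "middle": 0.5, "distal": 0.25}

def pose_reward(err):
    # err maps joint names such as "f3_middle" (assumed naming) to
    # per-joint angle errors; R_pose is the negative weighted sum.
    total = GAMMAS["wrist"] * err["wrist"]
    for j in range(1, 6):                      # five fingers
        for level in ("knuckle", "middle", "distal"):
            total += GAMMAS[level] * err["f%d_%s" % (j, level)]
    return -total
\end{verbatim}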
\section{TSNE on hand poses} We perform a TSNE analysis on all the human hand poses $p^h \in \mathcal{P}^h$ in our curated YouTube dataset and the hand poses of our trained grasping agent in Fig.~\ref{fig:tsne}. We observe a meaningful distribution across object morphologies (e.g., clenched fist -- mug, pan, teapot, vs.~loose fist -- apple, mouse). This behavior is also reflected in the trained \textsc{DexVIP} agent, for which we analyze the robot's pose $p_T^r$ at the last time step of all successful grasps. The distribution for \textsc{DexVIP} is also more clustered since it uses the cluster center per object category $p_{c^\ast}^r$ as the target pose during training. This shows that the proposed approach of injecting human hand pose priors derived from in-the-wild object interaction videos can successfully guide dexterous robotic agents. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figures/tsne.png} \caption{\textbf{TSNE over hand poses.} A good distribution of human and robot poses across morphologies is observed (e.g., clenched fist -- mug, pan, teapot, vs.~loose fist -- apple, mouse). } \label{fig:tsne} \end{figure*} \section{Hand pose retargeting from FrankMocap to Adroit} To use the human pose inferred from FrankMocap in our simulator, we need to re-target the pose from FrankMocap to Adroit. Although both the human hand and the robot hand share a common five-finger morphology, their joint hierarchy trees are different. Fig.~\ref{fig:pose_retargeting}.i shows the two kinematic chains. We briefly describe each morphology below: \begin{itemize} \item \textbf{FrankMocap}: It uses the hand model from SMPL-X [4] to represent the human hand pose inferred from video frames. It consists of three ball joints in each of the five fingers, each having 3 DoF. This yields 15 ball joints in total with 45 DoF. Additionally, the root joint at the base of the hand $j^h_0$ has 6 DoF. The joint space of the human hand in 3D is thus represented as $\vect{J}^h \in \mathbb{R}^{21\times3}$ --- a wrist, 15 finger joints, and 5 fingertips. Note that the fingertips are not joint locations, but are used for computing joint angles for their parent joints. \item \textbf{Adroit}: The Adroit hand in the simulator is actuated by 24 revolute joints having 1 DoF each, resulting in 24 DoF in total. The hand is attached to a 6-DoF robotic arm, yielding 30 DoF in total. The joint space of the robot hand is thus represented by $\vect{J}^r \in \mathbb{R}^{30}$. \end{itemize} As we can see, the FrankMocap model has many more degrees of freedom compared to the Adroit hand. Keeping these differences in mind, we design a joint retargeting mechanism to bridge the gap between the FrankMocap and Adroit models and effectively use the human pose to train the robot. The hand pose retargeting mechanism is depicted in Fig.~\ref{fig:pose_retargeting}.ii. The different stages of this retargeting pipeline are as follows: \begin{enumerate}[label=\textbf{\alph*)}] \item \textbf{Hand pose in world coordinates}: We infer the FrankMocap pose from the input video frame $I^h$ and obtain the human hand pose $p^h \in \vect{J}^h_w$ as a set of 3D joint keypoints in world coordinates. \item \textbf{World to root relative}: We convert raw X, Y, Z positions of the joints in world coordinates to root-relative coordinates by shifting the origin to the wrist joint $j^h_0$ to obtain $p^h_{rr}$ in the root-relative pose $\vect{J}^h_{rr} \in \mathbb{R}^{21\times3}$. To determine the orientation of the Adroit arm, we first construct a plane through the wrist $j^h_0$, forefinger knuckle $j^h_4$, and ring finger knuckle $j^h_{10}$. The palm is taken to lie in this plane. The palmar plane along with its normal defines the orientation of the arm. The axis angles obtained during this transformation are used to set the rotational joints of the Adroit arm in $\vect{J}^r \in \mathbb{R}^{30}$.
\item \textbf{Root relative to parent relative}: A sequential rotational transform is applied about the X, Y, and Z axes, with the angular changes $\alpha$, $\beta$, and $\gamma$ respectively, on the joint positions in the root-relative coordinate system to get the corresponding skeleton in the parent-relative coordinate system. These angular changes are computed such that the Z-axis lies along the child joint and the Y-axis points outward at every finger joint. At every level of the joint hierarchy, coordinate transformations of the parent are applied to the child joints so that after successively parsing through the entire tree, the root-relative coordinates $p_{rr}^h$ are transformed into a parent-relative coordinate frame $p_{pr}^h$ in $\vect{J}^h_{pr} \in \mathbb{R}^{21\times3}$. Here every joint $j_i$ is expressed relative to a coordinate frame defined at its parent joint $P(j_i)$. \item \textbf{Joint angle transfer}: The polar coordinates (azimuth and elevations) computed in the parent-relative system yield local joint angles that are mapped onto the revolute joints in the Adroit space $\vect{J}^r \in \mathbb{R}^{30}$. Most revolute joints in Adroit can be mapped from the azimuth or elevation values of different joints in $p^h_{pr}$. As an example, consider the middle joint on the forefinger in Adroit, i.e., $j^r_9$ in Fig.~\ref{fig:pose_retargeting}.i. This joint angle can be obtained by computing the elevation of $j^h_6$ with respect to the coordinate frame defined at its parent joint $j^h_5$ in FrankMocap, while ignoring the azimuth and tilt angles. Other joints can similarly be obtained from the joint definitions in $p^h_{pr}$. For the little finger metacarpal $j^r_{19}$, which is not modeled in FrankMocap, we set it to $0.25$ of the elevation at $j^h_{13}$. \end{enumerate} Using the above re-targeting scheme, we are able to successfully transfer the FrankMocap pose from the human pose space to the Adroit pose space. Samples can be seen in Fig.~\ref{fig:dataset} of the main paper. While the mapping is approximate---due to the inherent differences in kinematic chains as discussed above---we find that this re-targeting mechanism generates Adroit poses that closely match the human pose and works well for our purpose. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/pose_retargeting.pdf} \caption{ \textbf{Pose retargeting from FrankMocap to Adroit}. \textbf{i)} Joint hierarchies for FrankMocap (left) and Adroit (right). FrankMocap has 15 ball joints with 3 DoF each, a root joint with 6 DoF, and 5 fingertips. Adroit has 24 revolute joints with 1 DoF each and an arm (not shown) with 6 DoF. \textbf{ii) Pose retargeting mechanism for transforming FrankMocap joints to the Adroit joint space.} a) We first obtain the FrankMocap pose, i.e., 3D joint locations in the world coordinate frame. b) This is converted to a root-relative coordinate frame through a simple coordinate translation. c) We then compute the palmar plane in FrankMocap to obtain the arm orientation for Adroit. Subsequently, the structure of the kinematic tree is used to successively transform the root-relative coordinate frame to a parent-relative frame centered at each joint. d) The polar coordinates (azimuth and elevations) computed in the parent-relative system yield local joint angles that are mapped onto the revolute joints in Adroit. } \label{fig:pose_retargeting} \end{figure}
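The core geometric steps admit a short sketch. The palmar-plane construction follows step b) above; the elevation computation is a simplified stand-in for step d), since the full pipeline walks the kinematic tree and the axis conventions assumed here are illustrative.
\begin{verbatim}
import numpy as np

def palmar_normal(joints):
    # Step b): plane through the wrist j0, forefinger knuckle j4,
    # and ring-finger knuckle j10 of the 21x3 root-relative skeleton.
    u = joints[4] - joints[0]
    v = joints[10] - joints[0]
    n = np.cross(u, v)
    return n / np.linalg.norm(n)   # normal fixes the arm orientation

def elevation(child_pos, parent_rot):
    # Step d), simplified: express the child joint in its parent's
    # frame (a 3x3 rotation with Z along the parent bone and Y
    # outward, per the stated convention) and read off the elevation
    # angle, which maps onto one Adroit revolute joint.
    local = parent_rot.T @ child_pos
    local = local / np.linalg.norm(local)
    return np.arcsin(np.clip(local[1], -1.0, 1.0))
\end{verbatim}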
\section{Physical parameters in the simulator} \subsection{Parameter settings} We report the physical parameter settings for the simulated environment in Table~\ref{tab:sim_params}. All environment physical parameters are taken from~\cite{rajeswaran2017learning}. The sliding friction acts along both axes of the tangent plane. The torsional friction acts around the contact normal. The rolling friction acts around both axes of the tangent plane. Regular friction is present between the hand, object, and table. The joints within the hand are assumed to be frictionless with respect to each other. \begin{table}[] \setlength{\tabcolsep}{20pt} \centering \begin{center} \begin{tabular}{ll} \toprule Parameter & Value \\ \midrule Sliding friction & $1N$ \\ Torsional friction & $0.5N$ \\ Rolling friction & $0.01N$ \\ Hand wrist damping & $0.5N$ \\ Hand fingers damping & $0.05N$ \\ Object rotational damping & $0.1N$ \\ Object mass & $1$~kg \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Physical parameter settings used in the simulator}} \label{tab:sim_params} \end{table} \subsection{Robustness to parameters} To further demonstrate the robustness of the trained policy to different masses and scales of the objects, we evaluate the trained policy on objects with varying masses and scales. Specifically, we vary the mass between $0.5$~kg and $1.5$~kg and the scale between $0.8\times$ and $1.2\times$ of the training size. Results can be seen in Figure~\ref{fig:mass_scale_plot}. We observe that \textsc{DexVIP} remains fairly robust to large variations in these physical properties. As expected, lighter objects are easier to grasp than heavier ones. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/mass_scale_plot.pdf} \caption{\textbf{Robustness to changes in physical parameters}. We evaluate \textsc{DexVIP} on a range of object mass and scale values. \textsc{DexVIP} remains fairly robust to large variations in such physical object properties.} \label{fig:mass_scale_plot} \end{figure} \section{Hand-Affordance distance} The distance input $d_t^r$ is the pairwise distance between the agent's hand and the object affordance region as defined in GRAFF~\cite{mandikal2020graff}. Here the object affordance region is a 2D binary affordance map that is inferred by a model from~\cite{mandikal2020graff} trained for affordance anticipation on ContactDB objects. As in~\cite{mandikal2020graff}, we find that this model works reasonably well for objects outside ContactDB as well. We obtain affordance points by back-projecting the affordance map to 3D points in the camera coordinate system using the depth map at $t_0$. We sample $20$ points from these back-projected points. We then track these points throughout the rest of the episode. In the noise experiments, in addition to the stated noise models, we also induce tracking failure on the affordance points to relax the tracking assumption as in~\cite{mandikal2020graff}. For points on the hand, we sample $10$ regular points on the surface of the palm and fingers. The number of points at every time step remains the same across all objects. \section{Additional results} \subsection{Effect of pose prediction on performance} \textsc{DexVIP} uses hand poses inferred from video frames for obtaining target pose priors. Since these poses are inferred, there can be errors in these predictions. To examine the effects of those errors on our policies, we train a model on ground truth (GT) hand poses from ContactPose~\cite{Brahmbhatt_2020_ECCV} captured using mocap for all ContactDB objects. Using these GT poses to train the \textsc{DexVIP} policy provides an upper bound for grasp performance with perfect pose.
Results are reported in Table~\ref{tab:contactpose_metrics}. We find that the policy trained using inferred poses performs comparably to the one trained on GT poses, showing that \textsc{DexVIP} is fairly robust to errors in pose prediction. We further note that the hand pose clustering process that we perform effectively filters out bad/outlier poses, so that we obtain a representative hand pose for each object (Fig.~\ref{fig:outlier_poses}). \begin{table}[] \centering \begin{center} \begin{tabular}{lllll} \toprule Pose Supervision & Success & Stability & Functionality & Posture \\ \midrule Inferred pose & 68 & 51 & 64 & 62 \\ GT pose & 70 & 53 & 65 & 65 \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Effect of hand pose supervision on grasping performance}. \textsc{DexVIP} uses hand poses inferred from video frames for supervision. Using ground truth poses captured using mocap to train the \textsc{DexVIP} policy provides an upper bound for grasp performance. The policy trained using inferred poses performs comparably to the one trained on GT poses, indicating robustness to pose errors.} \label{tab:contactpose_metrics} \end{table} \begin{figure}[] \centering \includegraphics[width=0.8\linewidth]{figures/outlier_poses.pdf} \vspace*{-0.05in} \caption{\textbf{Outlier poses.} The clustering mechanism effectively filters out outlier poses that are produced due to errors in pose prediction. } \label{fig:outlier_poses} \end{figure} \subsection{Effect of object shape variation on performance} We use keywords containing the object class label to obtain grasp images for each object. We use the same object category in simulation as well. Note that we do not require an exact match to the object instance in the video; a generic object mesh from the same category works quite well. To illustrate this, we show samples of a few objects in the video frame and within the simulator, along with their grasp success rates, in Fig.~\ref{fig:object_video_sim}. We find that \textsc{DexVIP} remains fairly robust to variations in object shape between the video and the simulator. For instance, even though objects like the teapot, flashlight and saucepan do not have an exact match in the simulator, the grasp policy works quite well on these objects. \begin{figure}[] \centering \includegraphics[width=\linewidth]{figures/objects_video_sim.pdf} \vspace*{-0.1in} \caption{\textbf{Effect of object shape variation on performance.} \textsc{DexVIP} remains fairly robust to variations in object shape between the video and simulator. For instance, even though objects like the teapot, flashlight and saucepan do not have an exact match in the simulator, the grasp policy works quite well on these objects. } \label{fig:object_video_sim} \vspace{-1.5em} \end{figure} \subsection{Ablation evaluation} We report all metrics for the ablations of the main paper in Table~\ref{tab:ablation_metrics}. We observe that the full \textsc{DexVIP} model gains substantially in the success, stability and posture metrics, while maintaining the functionality score of the policy trained using only affordance.
\begin{table}[] \centering \begin{center} \begin{tabular}{lllll} \toprule Model & Success & Stability & Functionality & Posture \\ \midrule Affordance only & 60 & 41 & 63 & 48 \\ Affordance + Touch & 63 & 45 & 64 & 49 \\ \begin{tabular}[c]{@{}c@{}}Affordance + Touch + Pose prior\\(full \textsc{DexVIP} model)\end{tabular} & \textbf{68} & \textbf{51} & \textbf{64} & \textbf{62} \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Metrics for \textsc{DexVIP} ablations.} The full \textsc{DexVIP} policy is able to leverage the touch-informed pose prior to perform well across all metrics.} \label{tab:ablation_metrics} \end{table} \subsection{Performance on additional objects} In addition to comparing performance on non-ContactDB objects with \textsc{DAPG}, we provide a comparison against all methods in Figure~\ref{fig:additional_objects}. Note that all 11 non-ContactDB objects belong to object classes not found in ContactDB. Compared to its performance on ContactDB, \textsc{DexVIP} maintains its performance and experiences only a marginal drop, whereas the other methods undergo substantial drops in performance. As reported in L318-320, \textsc{DAPG}'s success rate drops from 59\% to 50\% (a 15\% relative drop), while \textsc{DexVIP} sees only a marginal 4\% drop, from 68\% to 65\%. Furthermore, \textsc{GRAFF} also sees a large 12\% drop, from 60\% to 53\%. These results indicate that \textsc{DexVIP} can effectively leverage hand poses for a variety of objects. \begin{figure}[] \centering \includegraphics[width=0.6\linewidth]{figures/additional_objects.png} \vspace*{-0.1in} \caption{\textbf{Grasping results on additional non-ContactDB objects.} Compared to the performance on ContactDB, \textsc{DexVIP} is able to maintain its performance and experiences only a marginal drop, whereas the other methods take substantial hits to performance. While the success rates of \textsc{DAPG} and \textsc{GRAFF} drop by 15\% and 12\% respectively, the drop for \textsc{DexVIP} is only 4\%. These results indicate that \textsc{DexVIP} can effectively leverage hand poses for a variety of different objects. } \label{fig:additional_objects} \end{figure}
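Finally, as referenced in the Hand-Affordance distance section, we sketch the affordance back-projection step. This is a minimal illustration assuming a pinhole camera model; the intrinsics $(f_x, f_y, c_x, c_y)$, function names, and sampling details are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import numpy as np

def backproject_affordance(aff_map, depth, fx, fy, cx, cy, M=20, seed=0):
    # aff_map: (H, W) binary affordance map; depth: (H, W) depth map at t0.
    vs, us = np.nonzero(aff_map)           # pixel coords of affordance region
    z = depth[vs, us]
    x = (us - cx) * z / fx                 # pinhole back-projection
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)      # 3D points in the camera frame
    rng = np.random.default_rng(seed)      # assumes at least M region pixels
    return pts[rng.choice(len(pts), size=M, replace=False)]

def hand_affordance_distance(hand_pts, aff_pts):
    # Pairwise distances between N hand points and M affordance points.
    diff = hand_pts[:, None, :] - aff_pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)   # (N, M) distance matrix
\end{verbatim}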
{ "timestamp": "2022-02-02T02:08:26", "yymm": "2202", "arxiv_id": "2202.00164", "language": "en", "url": "https://arxiv.org/abs/2202.00164" }
\section{Introduction} Modern developments in relativistic femtosecond lasers\cite{Yoon2019,Yoon2021} and microstructure fabrication\cite{Fritzler2019,Fu2016} have expanded the scope of high energy density physics\cite{Snyder2019,Zhou2021,Sugimoto2020}. Recently, studies have utilized these developments to investigate ion acceleration\cite{Adli2019}, magnetic field generation\cite{Jiang2021}, ultra-high density compression\cite{Murakami2018,Murakami2019}, and pair-creation\cite{Koga2020}. Generating magnetic fields on the 100-kT scale is exciting because it enables the study of fundamental phenomena such as magnetic reconnection\cite{Fujioka2013,Gu2021,Pei2016}. Magnetic fields on this scale are observed in the accretion disks of black holes, which makes them valuable for laboratory astrophysics experiments\cite{Law2020}. Irradiating an ``escargot'' target with a laser is a well-known scheme to produce a strong magnetic field\cite{Korneev2015,Korneev2017}, and it has been used in a laboratory experiment as a magnetic field source\cite{Law2020}. Microtube implosion (MTI) is another method for magnetic field generation\cite{Murakami2020,Weichman2020}. In MTI, the implosion of the inner layer of a microtube amplifies a seed magnetic field to the megatesla scale, enhancing its strength by a few orders of magnitude\cite{Shokov2021}. Although the two setups take different approaches to generating a magnetic field, an essential factor in both is the formation of a surface current\cite{Weichman2020_2,Korneev2015,Weichman2020}. In this work, we propose a paisley design that generates a magnetic field without a seed field. Our paisley design is described mathematically by the function \begin{equation}\label{eq:paisley} f(k) = \begin{cases} -\frac{R_0}{2}\left(\exp{\left[i\left(4\pi k - \pi/2\right) \right]} + 1\right) & k \in \left(0, \frac{1}{4} \right) \\ -\frac{R_0}{2}\left(\exp{\left[i\left(4\pi k - \pi/2\right) \right]} - 1\right) & k \in \left(\frac{1}{4}, \frac{1}{2} \right) \\ R_0\exp{\left[i\left(2\pi k - \pi/2\right) \right]} & k \in \left(\frac{1}{2},1 \right) \end{cases}, \end{equation} where $R_0$ is the radius, and $k\in \left(0,1 \right)$ is a parametric variable. The $x$- and $y$-coordinates are the real and imaginary parts of Eq.~\eqref{eq:paisley}, respectively. Figure~\ref{fig:single}(a) graphically depicts Eq.~\eqref{eq:paisley}. In this design, surface currents produce a magnetic field on the concave side of the target, which makes the magnetic field easily accessible. The open area makes it easier for incoming particles to interact with the magnetic field. Additionally, the accessible location permits two or more targets to be connected in a modular fashion, allowing the generated magnetic fields to interact with each other. Thus, the design is suitable for experiments requiring the interaction of two or more magnetic field sources. Various arrangements can be used to study magnetic field phenomena such as magnetic reconnection, magnetic mirrors, and other laboratory astrophysics experiments. To study the magnetic field generation of the paisley design, we used the 2.5D particle-in-cell (PIC) code EPOCH\cite{Arber:2015hc}. The laser parameters were $\lambda_L =\SI{800}{\nano\meter},\ I_L = \SI{1e21}{\watt\per\square\centi\meter}$, and $\tau_L = \SI{100}{\femto\second}$ for the wavelength, peak intensity, and full-width at half-maximum (FWHM), respectively.
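As an aside, the target contour of Eq.~\eqref{eq:paisley} is straightforward to generate numerically; the following minimal sketch (assuming NumPy; the sample count and the value of $R_0$ are arbitrary) returns boundary points whose real and imaginary parts give the $x$- and $y$-coordinates.
\begin{verbatim}
import numpy as np

def paisley(k, R0=1.0):
    # Piecewise contour f(k) of Eq. (1); k in (0, 1), returns complex points.
    k = np.asarray(k, dtype=float)
    small = np.exp(1j * (4 * np.pi * k - np.pi / 2))   # inner lobes
    large = np.exp(1j * (2 * np.pi * k - np.pi / 2))   # outer arc
    return np.where(k < 0.25, -0.5 * R0 * (small + 1),
           np.where(k < 0.5,  -0.5 * R0 * (small - 1), R0 * large))

k = np.linspace(1e-3, 1 - 1e-3, 2000)
f = paisley(k, R0=2.5)          # x = f.real, y = f.imag (e.g., in microns)
\end{verbatim}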
The simulations used a $\SI{30}{\micro\meter}\times \SI{30}{\micro\meter}$ box with a cell size of $\frac{\lambda_L}{100}$, $100$ pseudo-ions and $200$ pseudo-electrons per cell, and a laser propagating in the +x direction. The target consisted of fully ionized carbon with a density of $\SI{1e23}{\per\cubic\centi\meter}$. The paisley target generates a surface current via local charge separation. The thickness gradient along the tip creates a larger charge separation around the apex upon laser irradiation [Fig.~\ref{fig:single}(b)]. The laser strips most of the electrons from the thin sections of the paisley target, but it cannot penetrate the thicker areas. Hence, more electrons are ejected close to the apex. This causes the surface electrons to flow towards the apex [Fig.~\ref{fig:single}(c)]. The curvature of the surface causes a positive (negative) magnetic field to form on its concave (convex) side [Fig.~\ref{fig:single}(d)]. Although the magnetic field covers only a few square microns, using a larger target will increase its coverage. When using larger targets, materials with low electron densities, such as foam, are preferred because low-density materials have a larger skin depth. The larger skin depth enables the larger target to maintain the charge separation gradient across the tip. Additionally, if the target is rotated $180^{\circ}$ about $y=0$, the polarity of the magnetic fields flips. To predict the magnitude of the magnetic field in \si{\kilo\tesla}, we developed a simple analytic model. The magnetic field strength is $B_z\sim j_e R_0$, and the estimated current density, $j_e$, is $j_e\sim n_{he} c$. The hot-electron density, $n_{he}$, is related to $I_L$ by $n_{he} = {\eta_a I_L}/{\mathcal{E}c}$\cite{Forslund1977}. For relativistic electrons, the average kinetic energy, $\mathcal{E}$, is approximately $3T_e$, where $T_e$ is the electron temperature. If the electron temperature is estimated using the ponderomotive scaling\cite{Wilks1992}, the model reduces to \begin{equation} B_z = 30.3 \frac{\eta_a\sqrt{I_{L20}}R_{0\mu m}}{\lambda_{\mu m}},\label{eq:model} \end{equation} where $I_{L20}$ is the laser intensity normalized to $10^{20}$~\si{\watt\per\square\centi\meter}, $\eta_a$ is the absorption efficiency, $R_{0 \mu m}$ is the characteristic radius given by Eq.~\eqref{eq:paisley} in \si{\micro\meter}, and $\lambda_{\mu m}$ is the laser wavelength in \si{\micro\meter}. The magnetic field strength scales as $B_z \sim \sqrt{I_L}$, and linearly with $R_0$. Figure~\ref{fig:model}(a) shows that the peak magnetic field increases with the laser intensity. The FWHM of the magnetic field pulse is $\sim 2\tau_L$. Figure~\ref{fig:model}(b) shows that the PIC simulation results agree well with Eq.~\eqref{eq:model} for $\eta_a = 0.4$. According to the model, an absorption efficiency of $0.8$ or higher is necessary to reach the megatesla scale for $I_L = 10^{22}~\si{\watt\per\square\centi\meter}$. However, at this intensity, the model may be inaccurate because non-linear effects are no longer negligible. Comparing Fig.~\ref{fig:model}(a) with the simulation parameters, the peak of the magnetic field coincides in time with the laser maximum. Additionally, the magnetic field drops sharply once the laser stops interacting with the target. This results in a relatively short magnetic field lifetime. However, using two paisley targets prolongs the magnetic field lifetime [Fig.~\ref{fig:double}(c)]. In this case, two lasers hit a pair of paisley targets from the $+x$ and $-x$ directions.
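Before detailing the double-target geometry further, we note that Eq.~\eqref{eq:model} admits a quick numerical check; in the sketch below (assuming NumPy), the absorption efficiency is the fitted value $\eta_a=0.4$, while the value of $R_0$ is an illustrative assumption.
\begin{verbatim}
import numpy as np

def bz_kilotesla(I_L, R0_um, lam_um, eta_a):
    # Eq. (2): peak magnetic field in kT; I_L in W/cm^2.
    I20 = I_L / 1e20                    # intensity in units of 1e20 W/cm^2
    return 30.3 * eta_a * np.sqrt(I20) * R0_um / lam_um

# For the simulated laser (1e21 W/cm^2, 0.8 um) and an assumed R0 = 2.5 um:
print(bz_kilotesla(1e21, R0_um=2.5, lam_um=0.8, eta_a=0.4))   # ~120 kT
\end{verbatim}
The result is on the 100-kT scale, consistent with the simulated field strengths discussed above.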
The two targets are separated by a gap to minimize the possibility of electrons flowing directly from the body of one target to the tip of the other. The magnetic field generated by the double paisley setup almost completely covers the void [Figs.~\ref{fig:double}(a) \& (b)]. Additionally, the magnetic field is sustained for much longer than the laser pulse duration. As the system evolves, electrons flow towards the center [Fig.~\ref{fig:double}(d)] and form a partial current loop [Fig.~\ref{fig:double}(e)]. This loop is stable and extends the magnetic field's lifetime to the picosecond scale [Fig.~\ref{fig:double}(f)]. Although the maximum magnetic field strength has a long lifetime, Fig.~\ref{fig:double}(e) shows that the magnetic field leaks from the confined space. This results in a gradual reduction of the total magnetized area. By \SI{1}{\pico\second}, the 100-kT region is estimated to be 20\% of the area at \SI{400}{\femto\second}, and the magnetic field is reduced by one order of magnitude by \SI{2}{\pico\second}. Figure~\ref{fig:single}(b) shows that a positive patch forms on the concave side. It is attributed to the imploding ions. The imploded ions attract electrons whose trajectories are bent by the magnetic field generated by the surface current\cite{Gu2021_2}. In the single paisley case, the electron gyromotion around the imploded ions works to sustain the magnetic field for a brief period after the laser has disappeared. For the double paisley targets, the imploded ion region is more pronounced, which helps sustain a partial current loop [Fig.~\ref{fig:double}(f)]. In addition, two paisley targets form a more confined region, which delays the expansion of the current loop. The electron collision frequency is calculated using a simple formula\cite{Weichman2020_2}. This formula gives the characteristic time scale of electron collisions, $\nu_e^{-1}$, which is several picoseconds long. Thus, dissipation due to Coulomb collisions is negligible for the duration of the magnetic field. For comparison, we also conducted simulations of a \SI{5}{\micro\meter} ``escargot'' target. It generates a magnetic field of \SI{150}{\kilo\tesla} with a picosecond lifetime using the same laser parameters. A drawback of the paisley design is its intricate shape, which is challenging to fabricate. However, a simplified design yields comparable results. Figure~\ref{fig:simp}(a) shows that a quarter of a rectangular microtube can be used as a simplified paisley target. Although this design differs from the original one, the core concept of utilizing the thickness gradient to guide the surface current remains. Despite the major change in appearance, the magnetic field strength produced by the simplified target is comparable to that of the paisley design [Fig.~\ref{fig:simp}]. However, it has a slightly smaller cross-section and a shorter lifetime. Although the simplified design might be easier to fabricate, the original paisley structure is still interesting from a theoretical viewpoint. Due to its small size, the paisley target is prone to pre-expansion when interacting with the laser's pre-pulse. To approximate this effect, we modified the initial plasma distribution profile of the paisley target. Figure~\ref{fig:pre} shows the simulation results of a paisley target with a modified initial density profile [Fig.~\ref{fig:pre}(a)].
Although the area of the magnetic field in Fig.~\ref{fig:pre}(b) is smaller than that in Fig.~\ref{fig:single}(d), the results show that even when the initial distribution is not ideal, the target can still produce 100-kT magnetic fields. Potential 3D effects are another factor to consider for these targets. The influence of 3D effects should be most prominent on the top and bottom ($z$-axis) ends of the target, because plasma expansion along $z$ alters the electron dynamics close to the ends. This effect can be mitigated by choosing a relatively long (high-aspect-ratio) target. For experimental verification, the magnetic field strength can be evaluated by measuring the deflection of a passing ion beam\cite{Law2020}. The paisley target is a robust design for generating a magnetic field without a seed field. However, further optimization can still be performed to maximize the generated magnetic field. The design has potential for multiple applications due to its flexibility and modularity. The double paisley target performs similarly to the ``escargot'' target in both magnetic field intensity and lifetime. However, the double paisley target requires two lasers, which is less efficient than the ``escargot'' target. Nevertheless, the paisley target's advantage lies in its flexibility. Although the current double paisley setup is used to prolong the magnetic field lifetime, flipping one of the paisley targets would result in two magnetic fields with opposing polarities. This configuration is suitable for studying magnetic reconnection. Additionally, different configurations of the paisley targets may be realized to study other magnetic field interactions. \begin{acknowledgments} The authors acknowledge Didar Shokov for the fruitful discussions. Computational resources were provided by the Cybermedia Center, Osaka University. This work was supported by the Japan Society for the Promotion of Science (JSPS). PIC simulations were performed using EPOCH, developed under UK EPSRC (Grant Nos. EP/G054940, EP/G055165, and EP/G056803). \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Author Declarations} \subsection*{Conflict of Interest} The authors have no conflicts to disclose.
{ "timestamp": "2022-02-02T02:09:41", "yymm": "2202", "arxiv_id": "2202.00193", "language": "en", "url": "https://arxiv.org/abs/2202.00193" }
\section{Introduction and main results}\label{sec:intro} Stein's lemma \cite{stein,liu} is a well-known identity for the multivariate normal distribution, with applications in statistics, see e.g. \cite{zhang}, \cite{new_stein}. Stein's lemma is stated as follows. \begin{theorem}[Multivariate Stein's lemma]\label{thm:Stein} For an $N$-dimensional Gaussian random vector $\bm{X}$, with mean value vector $\bm{\mu}$ and autocovariance matrix $\bm{C}$, it holds true that \begin{equation}\label{eq:stein} \mathbb{E}\left[g(\bm{X})X_m\right]=\mu_m\mathbb{E}\left[g(\bm{X})\right]+\sum_{j=1}^NC_{mj}\mathbb{E}\left[\frac{\partial g(\bm{X})}{\partial X_j}\right], \ \ m=1,\ldots,N, \end{equation} where $\mathbb{E}\left[\bm{\cdot}\right]$ denotes the average operator, and $\partial g(\bm{X})/\partial X_j$ is the partial derivative of $g(\bm{X})$ with respect to the component $X_j$. \end{theorem} \begin{proof} In Appendix \ref{A:stein}, we prove Stein's lemma using integration by parts. \end{proof} By Stein's lemma \eqref{eq:stein}, the presence of the $m$th component inside the average is eliminated, with $\mathbb{E}\left[g(\bm{X})X_m\right]$ being expressed in terms of the average of $g(\bm{X})$ and the averages of its first-order partial derivatives with respect to each of the components of $\bm{X}$. The topic of the present work is to generalize Stein's lemma, providing a formula that expresses the average $\mathbb{E}\left[g(\bm{X})X_1^{n_1}\cdots X_N^{n_N}\right]$ in terms of the averages of partial derivatives of $g(\bm{X})$. In the recent work \cite{mamis_stein}, we generalized the counterpart of Eq.~\eqref{eq:stein} for the univariate normal distribution, expressing the average $\mathbb{E}\left[g(X)X^n\right]$, where $X$ is a scalar Gaussian random variable, in terms of the averages of the derivatives of the scalar function $g(X)$, up to the $n$th order. Furthermore, the formula generalizing Stein's lemma for $\mathbb{E}\left[g(\bm{X})X_1^{n_1}\cdots X_N^{n_N}\right]$ has to be compatible with identities expressing higher-order moments of a Gaussian random vector in terms of its first two moments (mean value vector and autocovariance matrix). The central result in this direction is Isserlis theorem \cite{isserlis}, also known in the physics literature as Wick's theorem \cite{wick}, on the moments $\mathbb{E}\left[X_1\cdots X_N\right]$ of a zero-mean Gaussian random vector. For a review of the related literature see also \cite{triant}. Recently, Song and Lee \cite{song} derived a formula in closed, compact form for the product Gaussian moments $\mathbb{E}\left[X_1^{n_1}\cdots X_N^{n_N}\right]$. Subsequently, in corollaries \ref{cor:isserlis}, \ref{cor:song}, the Isserlis and Song \& Lee formulas for Gaussian moments are easily rederived from our generalizing formula \eqref{eq:formula}, by setting $g(\bm{X})=1$. \begin{theorem}[Generalizing formula]\label{thm:formula} Let $\bm{X}$ be an $N$-dimensional Gaussian random vector with mean value vector $\bm{\mu}$ and autocovariance matrix $\bm{C}$. The diagonal elements of matrix $\bm{C}$ (the variances of the components $X_i$) are denoted as $\sigma^2_i:=C_{ii}$.
For a smooth enough function $g:\mathbb{R}^N\rightarrow\mathbb{R}$, for a given $\bm{n}=(n_1,\ldots,n_N)$ with $n_i\geq 0$, $i=1,\ldots,N$, and under the assumption that all averages involved exist, it holds true that: \begin{align}\label{eq:formula} &\mathbb{E}\left[g\left(\bm{X}\right)\left(\prod_{i=1}^NX_i^{n_i}\right)\right]=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\times\nonumber\\ &\times\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \end{align} where $\partial_i^mg(\bm{X}):=\partial^mg(\bm{X})/\partial X_i^m$, $\lfloor\bm{\cdot}\rfloor$ is the floor function, and $\binom{n}{r,\ell_{1},\ldots,\ell_{N}}=\frac{n!}{r!\ell_1!\cdots\ell_N!}$ is the multinomial coefficient with $N+1$ factors. In Eq.~\eqref{eq:formula}, the orders $a_i$ of partial derivatives $\partial_i^{a_i}$ are defined as \begin{equation}\label{a_i} a_i=\sum_{j=1}^N\left[\ell_{ji}-(1+\delta_{ij})k_{ij}\right], \ \ \text{with} \ \ k_{ij}=k_{ji}, \end{equation} and $\delta_{ij}$ being Kronecker's delta; $\delta_{ii}=1$, $\delta_{ij}=0$ for $i\neq j$. Coefficients $H_{\ell,k}$, $G_{\ell_1,\ell_2,k}$ are \begin{equation}\label{eq:hermite_numbers} H_{\ell,k}=\frac{\ell!}{2^kk!(\ell-2k)!}, \ \ k=0,\ldots,\lfloor \ell/2\rfloor, \end{equation} and \begin{equation}\label{eq:glue_numbers} G_{\ell_1,\ell_2,k}=\binom{\ell_1}{k}\binom{\ell_2}{k}k!, \ \ k=0,\ldots,\min\{\ell_1,\ell_2\}, \end{equation} where $\binom{\ell}{k}=\frac{\ell!}{k!(\ell-k)!}$ is the binomial coefficient. Sum $\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}$ is over all combinations of nonnegative integers $\left\{\{r_i,\ell_{i1},\ldots,\ell_{iN}\}_{i=1}^N\right\}$ with $r_i+\sum_{j=1}^N\ell_{ij}=n_i$, $i=1,\ldots,N$. \end{theorem} \begin{proof} In Sec.~\ref{sec:induction}, we prove Eq.~\eqref{eq:formula} rigorously, by multidimensional mathematical induction on $\bm{n}\in\mathbb{N}^N$. In addition to this proof, and in order to provide the reader with more insight, we present, in Sec.~\ref{sec:construction}, a constructive formal proof of theorem \ref{thm:formula}. This constructive proof is based on treating the mean value operator not as an integral, but as the sequential action of a number of pseudodifferential operators that are introduced in definition \ref{def1} via the moment-generating function of the Gaussian random vector. The action of these pseudodifferential operators is determined by their Taylor series expansions, under the formal assumption that all infinite series involved are summable. As we see in the proofs of lemmata \ref{lem:Tii}, \ref{lem:Tij}, coefficients $H_{\ell,k}$ and $G_{\ell_1,\ell_2,k}$ arise naturally in the constructive proof, when evaluating the terms appearing in the series expansions of the pseudodifferential operators. 
\end{proof} \begin{remark}[On the orders $a_i$ of partial derivatives] By Eq.~\eqref{a_i}, we observe that the order $a_i$ of the partial derivative with respect to $X_i$ in the right-hand side of Eq.~\eqref{eq:formula} depends on: \textbf{i)} $\ell_{ii}$, which belongs to the partition of the power $n_i$ of the same $X_i$ in the left-hand side of Eq.~\eqref{eq:formula}, and \textbf{ii)} all $\ell_{ji}$, $j\neq i$, each of which belongs to the partition of the power $n_j$ of one of the remaining $X_j$, $j\neq i$. The second dependence of each $a_i$ is a consequence of the correlation between the components of the Gaussian random vector $\bm{X}$; in corollary \ref{cor:uncorr} for the uncorrelated case, we shall see that each $a_i$ depends on $\ell_{ii}$ only. \end{remark} \begin{remark}[On the coefficients $H_{\ell,k}$, $G_{\ell_1,\ell_2,k}$] By the combinatorial interpretation of binomial coefficients, see e.g. \cite[p. 110]{char}, coefficients $G_{\ell_1,\ell_2,k}$, defined by Eq.~\eqref{eq:glue_numbers}, are identified as the number of ways to pair $k$ elements from a set of $\ell_1$ elements with $k$ elements from another set of $\ell_2$ elements. In \cite[p. 62]{knuth}, $H_{\ell,k}$, defined by Eq.~\eqref{eq:hermite_numbers}, are identified as the number of partitions of a set of $\ell$ elements into $k$ unordered pairs and $(\ell-2k)$ singletons. Also, by virtue of \cite[expression 22.3.11]{stegun}, $H_{\ell,k}$ are identified as the absolute values of the coefficients appearing in the $\ell$th order probabilist's Hermite polynomial; $\mathrm{He}_{\ell}(x)=\sum_{k=0}^{\lfloor \ell/2\rfloor}(-1)^k H_{\ell,k}x^{\ell-2k}$. Last, in the On-Line Encyclopedia of Integer Sequences \cite{OEIS}, $H_{\ell,k}$ are referred to as the Bessel numbers. \end{remark} We shall now review simplifications of formula \eqref{eq:formula} for some specific cases: \begin{corollary}[Formula \eqref{eq:formula} for zero-mean $\bm{X}$] For $\bm{\mu}=\bm{0}$, Eq.~\eqref{eq:formula} reads \begin{align}\label{eq:formula_zero} &\mathbb{E}\left[g\left(\bm{X}\right)\left(\prod_{i=1}^NX_i^{n_i}\right)\right]=\sum_{\substack{\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right]. \end{align} \end{corollary} We show how to apply the generalizing formula by deriving the following example. \begin{example}\label{example} For $N=2$, $(n_1,n_2)=(1,2)$ and $(\mu_1,\mu_2)=(0,0)$, Eq.~\eqref{eq:formula_zero} results in \begin{align}\label{eq:example} &\mathbb{E}\left[g(X_1,X_2)X_1X_2^2\right]=\left(\sigma_1^2\sigma_2^2+2C_{12}^2\right)\mathbb{E}\left[\frac{\partial g(X_1,X_2)}{\partial X_1}\right]+3\sigma_2^2C_{12}\mathbb{E}\left[\frac{\partial g(X_1,X_2)}{\partial X_2}\right]+\nonumber\\ &+\left(2\sigma_1^2\sigma_2^2C_{12}+C_{12}^3\right)\mathbb{E}\left[\frac{\partial^3 g(X_1,X_2)}{\partial X_1^2\partial X_2}\right]+\left(\sigma_1^2\sigma_2^4+2\sigma_2^2C_{12}^2\right)\mathbb{E}\left[\frac{\partial^3 g(X_1,X_2)}{\partial X_1\partial X_2^2}\right]+\nonumber\\ &+\sigma_1^2C_{12}^2\mathbb{E}\left[\frac{\partial^3 g(X_1,X_2)}{\partial X_1^3}\right]+\sigma_2^4C_{12}\mathbb{E}\left[\frac{\partial^3 g(X_1,X_2)}{\partial X_2^3}\right].
\end{align} \end{example} \begin{proof} The derivation of Eq.~\eqref{eq:example} from Eq.~\eqref{eq:formula_zero} is performed in Appendix \ref{A:example}. \end{proof} \begin{corollary}[Formula \eqref{eq:formula} for uncorrelated $X_i$]\label{cor:uncorr} Consider the case where the components of $\bm{X}$ are uncorrelated: $C_{ij}=0$ for $i\neq j$. Thus, in Eq.~\eqref{eq:formula}, $\ell_{ij}=0$ for $i\neq j$, and the first summation in its right-hand side is simplified into $\sum_{\substack{r_i+\ell_{ii}=n_i\\i=1,\ldots,N}}$. Now, by defining $\ell_i:=\ell_{ii}$, and expressing $r_i=n_i-\ell_{i}$, the said summation is simplified into $\sum_{\ell_{1}=0}^{n_1}\stackrel{(N)}{\cdots}\sum_{\ell_{N}=0}^{n_N}$, which can be denoted in a contracted way by using multi-index notation, see e.g. \cite[p. 319]{reed}. By introducing the $N$-dimensional multi-indices $\bm{n}=(n_1,\ldots,n_N)$, $\bm{\ell}=(\ell_1,\ldots,\ell_N)$, $\bm{k}=(k_1,\ldots,k_N)$, and denoting $\lfloor\bm{\ell}/2\rfloor:=(\lfloor\ell_1/2\rfloor,\ldots,\lfloor\ell_N/2\rfloor)$, Eq.~\eqref{eq:formula} for the uncorrelated case reads \begin{align}\label{eq:uncorr} \mathbb{E}\left[g\left(\bm{X}\right)\bm{X}^{\bm{n}}\right]=\sum_{\bm{\ell}\leq\bm{n}}\binom{\bm{n}}{\bm{\ell}}\bm{\mu}^{\bm{n}-\bm{\ell}} \sum_{\bm{k}\leq\lfloor\bm{\ell}/2\rfloor}H_{\bm{\ell},\bm{k}}\bm{\sigma}^{2(\bm{\ell}-\bm{k})}\mathbb{E}\left[\partial^{\bm{\ell}-2\bm{k}}g\left(\bm{X}\right)\right]. \end{align} In Eq.~\eqref{eq:uncorr}, the partial ordering of multi-indices $\bm{\ell}\leq\bm{n}$ implies that $\ell_i\leq n_i$, for $i=1,\ldots,N$. Furthermore $\binom{\bm{n}}{\bm{\ell}}=\prod_{i=1}^N\binom{n_i}{\ell_i}$, $\bm{X}^{\bm{n}}=\prod_{i=1}^NX_i^{n_i}$, $\bm{\mu}^{\bm{n}-\bm{\ell}}=\prod_{i=1}^N\mu_i^{n_i-\ell_i}$, $\bm{\sigma}^{2(\bm{\ell}-\bm{k})}=\prod_{i=1}^N\sigma_i^{2(\ell_i-k_i)}$, $H_{\bm{\ell},\bm{k}}=\prod_{i=1}^NH_{\ell_i,k_i}$, and $\partial^{\bm{\ell}-2\bm{k}}=\prod_{i=1}^N\partial_i^{\ell_i-2k_i}$. For $N=1$, Eq.~\eqref{eq:uncorr} coincides with the extension of Stein's lemma for the scalar case, which we derived recently in \cite{mamis_stein}. \end{corollary} In the following corollary, we see that Stein's lemma \ref{thm:Stein} is a special case of Eq.~\eqref{eq:formula}. \begin{corollary}[Rederivation of Stein's lemma]\label{rem:Stein} For the average $\mathbb{E}\left[g(\bm{X})X_m\right]$, the first summation in the right-hand side of Eq.~\eqref{eq:formula} is over all sets $\left\{\{r_i,\ell_{i1},\ldots,\ell_{iN}\}_{i=1}^N\right\}$ where $r_i=\ell_{i1}=\cdots=\ell_{iN}=0$ for all $i\neq m$, and over all tuples $\{r_m,\ell_{m1},\ldots,\ell_{mN}\}$ with exactly one entry equal to 1 and all the rest equal to 0. Since the multinomial coefficient corresponding to each of these integer combinations equals 1, $H_{1,0}=G_{1,0,0}=G_{0,1,0}=1$, and by using the symmetry of the autocovariance matrix $\bm{C}$, Eq.~\eqref{eq:formula} for $\mathbb{E}\left[g(\bm{X})X_m\right]$ reads \begin{equation}\label{eq:stein_2} \mathbb{E}\left[g(\bm{X})X_m\right]=\mu_m\mathbb{E}\left[g(\bm{X})\right]+\sigma^2_m\mathbb{E}\left[\partial_mg(\bm{X})\right]+\sum_{\substack{j=1\\j\neq m}}^NC_{mj}\mathbb{E}\left[\partial_jg(\bm{X})\right]. \end{equation} Under the notation $C_{mm}:=\sigma_m^2$, Eq.~\eqref{eq:stein_2} results in Eq.~\eqref{eq:stein}.
\end{corollary} We shall now derive Isserlis theorem \cite{isserlis} and the Song \& Lee formula \cite{song} from formula \eqref{eq:formula}, by setting $g(\bm{X})=1$ in it and identifying the nonzero terms in its right-hand side. For $g(\bm{X})=1$, the nonzero terms in the right-hand side of Eq.~\eqref{eq:formula} are only those that contain the zeroth-order derivatives of $g$; i.e., those with $a_i=0$ for all $i=1,\ldots,N$. \begin{corollary}[Isserlis theorem]\label{cor:isserlis} The higher order moments of an $N$-dimensional Gaussian random vector $\bm{X}$ with zero mean value are expressed in terms of its autocovariance matrix $\bm{C}$ as \begin{equation}\label{eq:isserlis} \mathbb{E}\left[\prod_{i=1}^NX_i\right]= \left\{ \begin{array}{ll} 0 & \text{for } N\text{ odd}, \\ \sum_{P\in\wp_N^2}\prod_{\{i,j\}\in P}C_{ij} & \text{for } N\text{ even},\\ \end{array} \right. \end{equation} with $\wp_N^2$ being the set of all partitions of $\{1,\ldots,N\}$ into unordered pairs. \end{corollary} \begin{proof} Eq.~\eqref{eq:isserlis} is proven in Isserlis' work \cite{isserlis}. In Appendix \ref{A:isserlis}, we derive Eq.~\eqref{eq:isserlis} from formula \eqref{eq:formula}, by setting $\bm{\mu}=\bm{0}$, $n_i=1$ for $i=1,\ldots,N$, and $g(\bm{X})=1$. \end{proof} \begin{corollary}[Product moment formula for Gaussian vectors]\label{cor:song} As a generalization of Isserlis theorem \ref{cor:isserlis}, and for a Gaussian random vector whose mean value $\bm{\mu}$ is in general non-zero, the following formula for its product moments holds true: \begin{equation}\label{eq:song} \mathbb{E}\left[\prod_{i=1}^NX_i^{n_i}\right]=\sum_{\bm{m}\in S_{\bm{n}}}d_{\bm{n},\bm{m}}\left(\prod_{i=1}^N\prod_{j=i}^NC_{ij}^{m_{ij}}\right)\left(\prod_{i=1}^N\mu_i^{r_i}\right), \end{equation} where \begin{equation}\label{r_i} r_i=n_i-\sum_{j=1}^N(1+\delta_{ij})m_{ij}, \end{equation} and with coefficients $d_{\bm{n},\bm{m}}$ defined as \begin{equation}\label{d_coeff} d_{\bm{n},\bm{m}}=\frac{\left(\prod_{i=1}^Nn_i!\right)}{2^{\sum_{i=1}^Nm_{ii}}\left(\prod_{i=1}^N\prod_{j=i}^Nm_{ij}!\right)\left(\prod_{i=1}^Nr_i!\right)}. \end{equation} In Eq.~\eqref{eq:song}, summation $\sum_{\bm{m}\in S_{\bm{n}}}$ is over the set $S_{\bm{n}}$ of all $\bm{m}=\left\{m_{ij}\right\}_{i,j=1}^N$ with $m_{ij}=m_{ji}$, for which $r_i$, $i=1,\ldots,N$ are nonnegative. \end{corollary} \begin{proof} Eq.~\eqref{eq:song} has been proven by Song and Lee in \cite{song}, using results from the work of Price \cite{price}. In Appendix \ref{A:song}, we derive Eq.~\eqref{eq:song} by setting $g(\bm{X})=1$ in formula \eqref{eq:formula}, as we have done for the derivation of Isserlis theorem \ref{cor:isserlis}. \end{proof} \section{Proof of theorem \ref{thm:formula} by mathematical induction}\label{sec:induction} Theorem \ref{thm:formula} for $\mathbb{E}\left[g\left(\bm{X}\right)\left(\prod_{i=1}^NX_i^{n_i}\right)\right]$ can be proven by multidimensional mathematical induction on the multi-index of exponents $\bm{n}=(n_1,n_2,\ldots,n_N)\in\mathbb{N}^N$. As the base case, we choose $|\bm{n}|:=\sum_{i=1}^Nn_i=1$. As we have seen in corollary \ref{rem:Stein}, Eq.~\eqref{eq:formula} for $|\bm{n}|=1$ results in Stein's lemma, Eq.~\eqref{eq:stein}, which is proven in theorem \ref{thm:Stein}. Our inductive hypothesis is that Eq.~\eqref{eq:formula} holds true for $|\bm{n}|$. Then, we have to prove Eq.~\eqref{eq:formula} for $|\bm{n}|+1$; that is, we have to prove Eq.~\eqref{eq:formula} with $n_m$ augmented by 1, for every $m=1,\ldots,N$.
In this section, we prove Eq.~\eqref{eq:formula} for $n_1$ augmented by 1; the proof for the remaining $m=2,\ldots,N$ is similar. We start from the identity \[\mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]=\mathbb{E}\left[\left(g\left(\bm{X}\right)X_1\right)\left(\prod_{i=1}^NX_i^{n_i}\right)\right],\] and by using the inductive hypothesis: \begin{align}\label{eq:formula1} &\mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\times\nonumber\\&\times\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}\left(g\left(\bm{X}\right)X_1\right)\right]. \end{align} By the general Leibniz rule \cite[expression 3.3.8]{stegun}, we calculate the $\partial_1^{a_1}$ derivative inside the average in the right-hand side of Eq.~\eqref{eq:formula1} as \begin{equation}\label{eq:leibniz0} \partial_1^{a_1}\left(g\left(\bm{X}\right)X_1\right)=\sum_{p=0}^{a_1}\binom{a_1}{p}\left(\partial_1^{a_1-p}g(\bm{X})\right)\left(\partial_1^pX_1\right). \end{equation} Since $\partial_1^0X_1=X_1$, $\partial_1^1X_1=1$, and $\partial_1^pX_1=0$ for $p\geq 2$, Eq.~\eqref{eq:leibniz0} is simplified into \begin{align}\label{eq:leibniz} \partial_1^{a_1}\left(g\left(\bm{X}\right)X_1\right)&=\left(\partial_1^{a_1}g(\bm{X})\right)X_1+a_1\partial_1^{a_1-1}g(\bm{X})=\nonumber\\&=\left(\partial_1^{a_1}g(\bm{X})\right)X_1+\left(\left(\ell_{11}-2k_{11}\right)+\sum_{\substack{m=2}}^N\left(\ell_{m1}-k_{1m}\right)\right)\partial_1^{a_1-1}g(\bm{X}).
\end{align} By using Eq.~\eqref{eq:leibniz}, we rewrite Eq.~\eqref{eq:formula1} as \begin{equation}\label{eq:As} \mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]=A+\sum_{m=1}^NB_m, \end{equation} where \begin{align}\label{eq:A} &A=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\left(\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right)X_1\right], \end{align} \begin{align}\label{eq:B1} &B_1=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\sum_{k_{11}=0}^{\lfloor(\ell_{11}-1)/2\rfloor}(\ell_{11}-2k_{11})H_{\ell_{11},k_{11}}\sigma_{1}^{2(\ell_{11}-k_{11})}\right)\times\nonumber\\ &\times\left(\prod_{i=2}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_1^{a_1-1}\prod_{i=2}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \end{align} \begin{align}\label{eq:Bm} &B_m=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\sum_{k_{1m}=0}^{\min\{\ell_{1m},\ell_{m1}-1\}}(\ell_{m1}-k_{1m})G_{\ell_{1m},\ell_{m1},k_{1m}}C_{1m}^{\ell_{1m}+\ell_{m1}-k_{1m}}\right)\times\nonumber\\ &\times\left(\prod_{\substack{j=2\\j\neq m}}^N\sum_{k_{1j}=0}^{\min\{\ell_{1j},\ell_{j1}\}}G_{\ell_{1j},\ell_{j1},k_{1j}}C_{1j}^{\ell_{1j}+\ell_{j1}-k_{1j}}\right)\left(\prod_{i=2}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\times\nonumber\\&\times\mathbb{E}\left[\partial_1^{a_1-1}\prod_{i=2}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \ \ m=2,\ldots,N. \end{align} \begin{remark}[Upper limit of $k_{1m}$-sum in each $B_m$] In $B_1$, the term for $k_{11}=\ell_{11}/2$ in the $k_{11}$-sum is equal to zero; such a term appears only for even $\ell_{11}$. In order to exclude it, the upper limit of the $k_{11}$-sum, which is $\lfloor\ell_{11}/2\rfloor$ for odd $\ell_{11}$, is changed to $\lfloor\ell_{11}/2\rfloor-1$ for even $\ell_{11}$. These two values are expressed in a unified way as $\lfloor(\ell_{11}-1)/2\rfloor$. In every $B_m$ for $m>1$, the term for $k_{1m}=\ell_{m1}$ in the $k_{1m}$-sum is equal to zero. This term is present in the sum for $\ell_{m1}\leq\ell_{1m}$. In order to exclude this term, the upper limit is changed to $\min\{\ell_{1m},\ell_{m1}-1\}$.
\end{remark} By applying Stein's lemma \eqref{eq:stein} to the average appearing in the right-hand side of Eq.~\eqref{eq:A}, we obtain \begin{equation}\label{eq:Ass} A=A_0+\sum_{m=1}^NA_m, \end{equation} where \begin{align}\label{eq:A0} &A_0=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\mu_1^{r_1+1}\prod_{i=2}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\&\times\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \end{align} \begin{align}\label{eq:A1} &A_1=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\sum_{k_{11}=0}^{\lfloor\ell_{11}/2\rfloor}H_{\ell_{11},k_{11}}\sigma_{1}^{2(\ell_{11}+1-k_{11})}\right)\times\nonumber\\ &\times\left(\prod_{i=2}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_1^{a_1+1}\prod_{i=2}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \end{align} \begin{align}\label{eq:Am} &A_m=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\sum_{k_{1m}=0}^{\min\{\ell_{1m},\ell_{m1}\}}G_{\ell_{1m},\ell_{m1},k_{1m}}C_{1m}^{\ell_{1m}+1+\ell_{m1}-k_{1m}}\right)\left(\prod_{\substack{j=2\\j\neq m}}^N\sum_{k_{1j}=0}^{\min\{\ell_{1j},\ell_{j1}\}}G_{\ell_{1j},\ell_{j1},k_{1j}}C_{1j}^{\ell_{1j}+\ell_{j1}-k_{1j}}\right)\times\nonumber\\ &\times\left(\prod_{i=2}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_m^{a_m+1}\prod_{\substack{i=1\\i\neq m}}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \ \ m=2,\ldots,N. \end{align} By performing the change of index $\tilde{k}_{1m}=k_{1m}+1$ in each $B_m$, we obtain: \begin{align}\label{eq:BB1} &B_1=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\times\nonumber\\ &\times\left(\sum_{k_{11}=1}^{\lfloor(\ell_{11}+1)/2\rfloor}(\ell_{11}-2k_{11}+2)H_{\ell_{11},k_{11}-1}\sigma_{1}^{2(\ell_{11}+1-k_{11})}\right)\left(\prod_{i=2}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\&\times\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_1^{a_1+1}\prod_{i=2}^N\partial_i^{a_i}g\left(\bm{X}\right)\right]. \end{align} For Eq.~\eqref{eq:BB1} we used the fact that $\lfloor(\ell_{11}-1)/2\rfloor+1=\lfloor(\ell_{11}+1)/2\rfloor$.
\begin{align}\label{eq:BBm} &B_m=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\sum_{k_{1m}=1}^{\min\{\ell_{1m}+1,\ell_{m1}\}}(\ell_{m1}+1-k_{1m})G_{\ell_{1m},\ell_{m1},k_{1m}-1}C_{1m}^{\ell_{1m}+1+\ell_{m1}-k_{1m}}\right)\times\nonumber\\ &\times\left(\prod_{\substack{j=2\\j\neq m}}^N\sum_{k_{1j}=0}^{\min\{\ell_{1j},\ell_{j1}\}}G_{\ell_{1j},\ell_{j1},k_{1j}}C_{1j}^{\ell_{1j}+\ell_{j1}-k_{1j}}\right)\left(\prod_{i=2}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\times\nonumber\\&\times\mathbb{E}\left[\partial_m^{a_m+1}\prod_{\substack{i=1\\i\neq m}}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \ \ m=2,\ldots,N. \end{align} For Eq.~\eqref{eq:BBm} we used the fact that $\min\{\ell_{1m},\ell_{m1}-1\}+1=\min\{\ell_{1m}+1,\ell_{m1}\}$. \begin{remark}[Zero element conventions on $H_{\ell,k}$, $G_{\ell_1,\ell_2,k}$] In order to be able to calculate $(A_m+B_m)$, $m=1,\ldots,N$, the $k_{1m}$-sums in both $A_m$, $B_m$ should have the same range. We achieve this under the conventions that $H_{\ell,k}=0$ for $k<0$ or $k>\ell/2$ and $G_{\ell_1,\ell_2,k}=0$ for $k<0$ or $k>\min\{\ell_1,\ell_2\}$. \end{remark} Under these conventions, we calculate \begin{align}\label{eq:AB1} A_1+B_1&=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=2}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\sum_{k_{11}=0}^{\lfloor(\ell_{11}+1)/2\rfloor}\left[H_{\ell_{11},k_{11}}+(\ell_{11}-2k_{11}+2)H_{\ell_{11},k_{11}-1}\right]\sigma_{1}^{2(\ell_{11}+1-k_{11})}\right)\times\nonumber\\&\times\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_1^{a_1+1}\prod_{i=2}^N\partial_i^{a_i}g\left(\bm{X}\right)\right]. \end{align} \begin{align}\label{eq:ABm} &A_m+B_m=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\sum_{k_{1m}=0}^{\min\{\ell_{1m}+1,\ell_{m1}\}}\left[G_{\ell_{1m},\ell_{m1},k_{1m}}+(\ell_{m1}+1-k_{1m})G_{\ell_{1m},\ell_{m1},k_{1m}-1}\right]C_{1m}^{\ell_{1m}+1+\ell_{m1}-k_{1m}}\right)\times\nonumber\\ &\times\left(\prod_{\substack{j=2\\j\neq m}}^N\sum_{k_{1j}=0}^{\min\{\ell_{1j},\ell_{j1}\}}G_{\ell_{1j},\ell_{j1},k_{1j}}C_{1j}^{\ell_{1j}+\ell_{j1}-k_{1j}}\right)\left(\prod_{i=2}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\times\nonumber\\&\times\mathbb{E}\left[\partial_m^{a_m+1}\prod_{\substack{i=1\\i\neq m}}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \ \ m=2,\ldots,N. 
\end{align} \begin{lemma}[Recurrence relation for $H_{\ell,k}$]\label{recur} For $\ell\in\mathbb{N}$, $k=0,\ldots,\lfloor(\ell+1)/2\rfloor$, and with $H_{\ell,k}=0$ for $k<0$ or $k>\ell/2$, it holds true that \begin{equation}\label{H} H_{\ell+1,k}=H_{\ell,k}+(\ell-2k+2)H_{\ell,k-1}. \end{equation} \end{lemma} \begin{proof} See \cite{OEIS}; Eq.~\eqref{H} has also been derived in our recent work \cite[lemma 1]{mamis_stein}. \end{proof} \begin{lemma}[Recurrence relation for $G_{\ell_1,\ell_2,k}$]\label{lem:G} For $\ell_1,\ell_2\in\mathbb{N}$, $k=0,\ldots,\min\{\ell_1+1,\ell_2\}$, and with $G_{\ell_1,\ell_2,k}=0$ for $k<0$ or $k>\min\{\ell_1,\ell_2\}$, it holds true that \begin{equation}\label{eq:rec_G} G_{\ell_1+1,\ell_2,k}=G_{\ell_1,\ell_2,k}+(\ell_2-k+1)G_{\ell_1,\ell_2,k-1}. \end{equation} \end{lemma} \begin{proof} See Appendix \ref{B}. \end{proof} By using recurrence relations \eqref{H}, \eqref{eq:rec_G}: \begin{align}\label{eq:AB11} &A_1+B_1=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\sum_{k_{11}=0}^{\lfloor(\ell_{11}+1)/2\rfloor}H_{\ell_{11}+1,k_{11}}\sigma_{1}^{2(\ell_{11}+1-k_{11})}\right)\times\nonumber\\&\times\left(\prod_{i=2}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_1^{a_1+1}\prod_{i=2}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \end{align} and, for $m=2,\ldots,N$: \begin{align}\label{eq:ABmm} &A_m+B_m=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=1,\ldots,N}}\left[\prod_{i=1}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\times\nonumber\\ &\times\left(\sum_{k_{1m}=0}^{\min\{\ell_{1m}+1,\ell_{m1}\}}G_{\ell_{1m}+1,\ell_{m1},k_{1m}}C_{1m}^{\ell_{1m}+1+\ell_{m1}-k_{1m}}\right)\left(\prod_{\substack{j=2\\j\neq m}}^N\sum_{k_{1j}=0}^{\min\{\ell_{1j},\ell_{j1}\}}G_{\ell_{1j},\ell_{j1},k_{1j}}C_{1j}^{\ell_{1j}+\ell_{j1}-k_{1j}}\right)\times\nonumber\\ &\times\left(\prod_{i=2}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\partial_m^{a_m+1}\prod_{\substack{i=1\\i\neq m}}^N\partial_i^{a_i}g\left(\bm{X}\right)\right].
\end{align} We change indices $\tilde{r}_1=r_1+1$ for $A_0$, and $\tilde{\ell}_{1m}=\ell_{1m}+1$ for $(A_m+B_m)$, $m=1,\ldots,N$, and thus we have \begin{equation}\label{eq:ABs} \mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]=A_0+\sum_{m=1}^N(A_m+B_m), \end{equation} with \begin{align}\label{eq:A0_new} &A_0=\sum_{\substack{r_1+\sum_{j=1}^N\ell_{1j}=n_1+1\\ r_i+\sum_{j=1}^N\ell_{ij}=n_i,\ i=2,\ldots,N}}\binom{n_1}{(r_1-1),\ell_{11},\ldots,\ell_{1N}}\left[\prod_{i=2}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\times\nonumber\\&\times\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right], \end{align} and for $m=1,\ldots,N$: \begin{align}\label{eq:ABm_new} &A_m+B_m=\nonumber\\&=\sum_{\substack{r_1+\sum_{j=1}^N\ell_{1j}=n_1+1\\ r_i+\sum_{j=1}^N\ell_{ij}=n_i,\ i=2,\ldots,N}}\binom{n_1}{r_1,\ell_{11},\ldots,\ell_{1(m-1)},(\ell_{1m}-1),\ell_{1(m+1)},\ldots,\ell_{1N}}\left[\prod_{i=2}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\left(\prod_{i=1}^N\mu_i^{r_i}\right)\times\nonumber\\ &\times\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right]. \end{align} Note that the index change $\tilde{\ell}_{1m}=\ell_{1m}+1$ in $(A_m+B_m)$ also changed the order of the derivative from $a_m+1$ to $a_m$, since the integer $\ell_{1m}$ appears in the definition of $a_m$, see Eq.~\eqref{a_i}. Under Eqs.~\eqref{eq:A0_new}, \eqref{eq:ABm_new}, Eq.~\eqref{eq:ABs} is rewritten as \begin{align}\label{eq:aver} &\mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]=\sum_{\substack{r_i+\sum_{j=1}^N\ell_{ij}=n_i\\ i=2,\ldots,N}}\left[\prod_{i=2}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\times\nonumber\\&\times\sum_{\substack{r_1+\sum_{j=1}^N\ell_{1j}=n_1+1}}\left[\binom{n_1}{(r_1-1),\ell_{11},\ldots,\ell_{1N}}+\sum_{m=1}^N\binom{n_1}{r_1,\ell_{11},\ldots,\ell_{1(m-1)},(\ell_{1m}-1),\ell_{1(m+1)},\ldots,\ell_{1N}}\right]\times\nonumber\\ &\times\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right]. \end{align} The proof of Eq.~\eqref{eq:formula} for $\mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]$ is completed by the following lemma. \begin{lemma}[Addition of multinomial coefficients]\label{lem:multi} For $r_1+\sum_{j=1}^N\ell_{1j}=n_1+1$, it holds true that \begin{align}\label{eq:multi} \binom{n_1+1}{r_1,\ell_{11},\ldots,\ell_{1N}}&=\binom{n_1}{(r_1-1),\ell_{11},\ldots,\ell_{1N}}+\nonumber\\&+\sum_{m=1}^N\binom{n_1}{r_1,\ell_{11},\ldots,\ell_{1(m-1)},(\ell_{1m}-1),\ell_{1(m+1)},\ldots,\ell_{1N}}. \end{align} \end{lemma} \begin{proof} See Appendix \ref{C}.
\end{proof} By substituting Eq.~\eqref{eq:multi} into Eq.~\eqref{eq:aver}, we obtain \begin{align}\label{eq:aver2} &\mathbb{E}\left[g\left(\bm{X}\right)\left(X_1^{n_1+1}\prod_{i=2}^NX_i^{n_i}\right)\right]=\sum_{\substack{r_1+\sum_{j=1}^N\ell_{1j}=n_1+1\\r_i+\sum_{j=1}^N\ell_{ij}=n_i,\\ i=2,\ldots,N}}\binom{n_1+1}{r_1,\ell_{11},\ldots,\ell_{1N}}\left[\prod_{i=2}^N\binom{n_i}{r_i,\ell_{i1},\ldots,\ell_{iN}}\right]\times\nonumber\\ &\times\left(\prod_{i=1}^N\mu_i^{r_i}\right)\left(\prod_{i=1}^N\sum_{k_{ii}=0}^{\lfloor\ell_{ii}/2\rfloor}H_{\ell_{ii},k_{ii}}\sigma_{i}^{2(\ell_{ii}-k_{ii})}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\sum_{k_{ij}=0}^{\min\{\ell_{ij},\ell_{ji}\}}G_{\ell_{ij},\ell_{ji},k_{ij}}C_{ij}^{\ell_{ij}+\ell_{ji}-k_{ij}}\right)\mathbb{E}\left[\prod_{i=1}^N\partial_i^{a_i}g\left(\bm{X}\right)\right]. \end{align} Eq.~\eqref{eq:aver2} is Eq.~\eqref{eq:formula} for $n_1+1$. The proof of Eq.~\eqref{eq:formula} for the remaining $n_m+1$, $m=2,\ldots,N$, is performed in a similar way following the same steps, thus completing the proof of theorem \ref{thm:formula} by mathematical induction. \section{Constructive formal derivation of theorem \ref{thm:formula}}\label{sec:construction} Our alternative, constructive derivation of formula \eqref{eq:formula} is based on the following definition for the mean value operator. \begin{definition}[Mean value as the action of averaged shift operators]\label{def1} Let $\bm{X}$ be an $N$-dimensional Gaussian random vector with mean value vector $\bm{\mu}$ and autocovariance matrix $\bm{C}$, and $g$ be a $C^{\infty}\left(\mathbb{R}^N\rightarrow\mathbb{R}\right)$ function. The average $\mathbb{E}\left[g(\bm{X})\right]$ is expressed as \begin{equation}\label{eq:mean_value_action} \mathbb{E}\left[g(\bm{X})\right]=\left(\prod_{i=1}^N\mathcal{T}_{ii}\right)\left(\prod_{i=1}^N\prod_{j>i}^N\mathcal{T}_{ij}\right)g(\bm{\mu}), \end{equation} where $\mathcal{T}_{ij}$ are the pseudodifferential averaged shift operators defined as \begin{equation}\label{Tii} \mathcal{T}_{ii}=\exp\left(\frac{\sigma^2_{i}}{2}\partial^2_i\right), \ \ i=1,\ldots,N, \end{equation} \begin{equation}\label{Tij} \mathcal{T}_{ij}=\exp\left(C_{ij}\partial_i\partial_j\right), \ \ i,j=1,\ldots,N, \ j\neq i, \end{equation} whose action is to be understood by their series forms \begin{equation}\label{Tii_series} \mathcal{T}_{ii}=\sum_{m=0}^{\infty}\frac{\sigma^{2m}_{i}}{2^mm!}\partial_i^{2m}, \ \ i=1,\ldots,N, \end{equation} \begin{equation}\label{Tij_series} \mathcal{T}_{ij}=\sum_{m=0}^{\infty}\frac{C_{ij}^m}{m!}\partial_i^m\partial_j^m, \ \ i,j=1,\ldots,N, \ j\neq i. \end{equation} \end{definition} \begin{proof} We formally derive Eq.~\eqref{eq:mean_value_action} in Appendix \ref{A:def1}, by using the moment-generating function of the $N$-dimensional Gaussian vector $\bm{X}$. The infinite-dimensional counterpart of definition \ref{def1}, regarding Gaussian processes, is presented in \cite{mamis_NF}, and it is also found in \cite[Ch. 4]{klya} as a concept. \end{proof} \begin{remark}[Properties of $\mathcal{T}_{ij}$ operators]\label{rem:Tproperties} Under the formal assumption that all infinite series involved are summable, and by employing the linearity of derivatives, we can easily see that the $\mathcal{T}_{ij}$ operators are linear, commute with the differentiation operators $\partial_i$, and also commute with each other (see also \cite[lemmata 1-3]{mamis_NF}).
\end{remark} \begin{lemma}[Action of $\mathcal{T}_{ii}$ operator]\label{lem:Tii} It holds true that \begin{equation}\label{eq:Tii_action} \mathcal{T}_{ii}\left[g(\bm{x})x_i^{n_i}\right]=\sum_{\ell=0}^{n_i}\binom{n_i}{\ell}x_i^{n_i-\ell}\sum_{k=0}^{\lfloor\ell/2\rfloor}H_{\ell,k}\sigma_i^{2(\ell-k)}\mathcal{T}_{ii}\left[\partial_i^{\ell-2k}g(\bm{x})\right]. \end{equation} \end{lemma} \begin{proof} See Appendix \ref{A:Tii}. \end{proof} \begin{lemma}[Action of $\mathcal{T}_{ij}$, $j\neq i$ operator]\label{lem:Tij} It holds true that \begin{align}\label{eq:Tij_action} &\mathcal{T}_{ij}\left[g(\bm{x})x_i^{n_i}x_j^{n_j}\right]=\nonumber\\&=\sum_{\ell_i=0}^{n_i}\binom{n_i}{\ell_i}x_i^{n_i-\ell_i}\sum_{\ell_j=0}^{n_j}\binom{n_j}{\ell_j}x_j^{n_j-\ell_j}\sum_{k=0}^{\min\{\ell_i,\ell_j\}}G_{\ell_i,\ell_j,k}C_{ij}^{\ell_i+\ell_j-k}\mathcal{T}_{ij}\left[\partial_i^{\ell_j-k}\partial_j^{\ell_i-k}g(\bm{x})\right]. \end{align} \end{lemma} \begin{proof} See Appendix \ref{A:Tij}. \end{proof} By expressing the average $\mathbb{E}\left[g\left(\bm{X}\right)\left(\prod_{i=1}^NX_i^{n_i}\right)\right]$ via definition \ref{def1}, we understand that, for its evaluation, it suffices to sequentially apply operators $\mathcal{T}_{ii}$, $\mathcal{T}_{ij}$, $i,j=1,\ldots,N$, $j>i$ at the product $g\left(\bm{x}\right)\left(\prod_{i=1}^Nx_i^{n_i}\right)$, and set $\bm{x}=\bm{\mu}$ afterwards. After algebraic manipulations and using the operator properties of remark \ref{rem:Tproperties}, we obtain Eq.~\eqref{eq:formula}. \section{Conclusions and future works} In the present work, we derived formula \eqref{eq:formula} generalizing Stein's lemma for the evaluation of avarages $\mathbb{E}\left[g(\bm{X})\prod_{i=1}^NX_i^{n_i}\right]$, where $\bm{X}$ is an $N$-idimensional Gaussian random vector. By our generalizing formula, the said average is expressed in terms of the averages of partial derivatives of $g(\bm{X})$, as well as the mean value vector and autocovariance matrix of $\bm{X}$. Furthermore, by setting $g(\bm{X})=1$, generalizing formula \eqref{eq:formula} results in Isserlis theorem \cite{isserlis} and Song \& Lee formula \cite{song} for Gaussian product moments $\mathbb{E}\left[\prod_{i=1}^NX_i^{n_i}\right]$. A direction for future works is the generalization of the infinite-dimensional analog of Stein's lemma, called the Novikov-Furutsu theorem (see \cite[Sec. 11.5]{scott}, \cite{mamis_NF}). In the infinite-dimensional case, $X$ is a Gaussian random process of time argument $t$, whose mean value is the function $\mu(t)$, and its two-time autocovariance function is $C(t_1,t_2)$. Thus, for $g$ being a functional of $X$ over the time interval $[t_0,t]$, Novikov-Furutsu theorem reads: \begin{equation}\label{eq:NF} \mathbb{E}\left[g[X]X(t)\right]=\mu(t)\mathbb{E}\left[g(X)\right]+\int_{t_0}^tC(t,s)\mathbb{E}\left[\frac{\delta g[X]}{\delta X(s)}\right]\mathrm{d}s, \end{equation} where $\delta g[X]/\delta X(s)$ is the Volterra functional derivative of $g[X]$ with respect to a local perturbation of process $X$ centered at time $s$ (see e.g. \cite[Appendix A]{mamis_NF} for more on Volterra calculus). Novikov-Furutsu theorem, Eq.~\eqref{eq:NF}, is the main tool in deriving evolution equations, that resemble the classical Fokker-Planck equation, for the response probability density of dynamical systems under Gaussian random excitation, see e.g. \cite[Eq.(3.19)]{hanggi}, \cite{Mamis2019,Mamis2021}. Recently \cite[Ch. 
3]{Mamis2020}, we extended the Novikov-Furutsu theorem to averages that contain the Gaussian argument at various times: $\mathbb{E}\left[g[X]\prod_{i=1}^NX(t_i)\right]$. Its generalization to averages $\mathbb{E}\left[g[X]\prod_{i=1}^NX^{n_i}(t_i)\right]$ will be the topic of a future work. As we have already shown in \cite{mamis_NF}, the introduction and use of averaged shift operators is very helpful in constructing such generalizations of the Novikov-Furutsu theorem.
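For the reader's convenience, we close with a small numerical sanity check (ours, not part of the derivations above): a Monte Carlo verification of the simplest instance of formula \eqref{eq:formula}, namely the classical multivariate Stein lemma $\mathbb{E}\left[g(\bm{X})X_i\right]=\mu_i\mathbb{E}\left[g(\bm{X})\right]+\sum_{j}C_{ij}\mathbb{E}\left[\partial_jg(\bm{X})\right]$, here for $N=2$ and a smooth, bounded test function:
\begin{verbatim}
# Monte Carlo check of E[g(X) X_1] = mu_1 E[g] + sum_j C_1j E[d_j g]
# for a 2-dimensional Gaussian vector X.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])
C = np.array([[1.0, 0.3],
              [0.3, 2.0]])  # autocovariance matrix

g   = lambda x: np.sin(x[:, 0]) * np.exp(-x[:, 1]**2 / 4)
d1g = lambda x: np.cos(x[:, 0]) * np.exp(-x[:, 1]**2 / 4)
d2g = lambda x: g(x) * (-x[:, 1] / 2)

X = rng.multivariate_normal(mu, C, size=2_000_000)

lhs = np.mean(g(X) * X[:, 0])
rhs = (mu[0] * np.mean(g(X))
       + C[0, 0] * np.mean(d1g(X))
       + C[0, 1] * np.mean(d2g(X)))
print(lhs, rhs)  # the two estimates agree up to sampling error
\end{verbatim}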
{ "timestamp": "2022-02-02T02:09:30", "yymm": "2202", "arxiv_id": "2202.00189", "language": "en", "url": "https://arxiv.org/abs/2202.00189" }
\section{Introduction} Refactoring is defined as restructuring software to improve its quality without altering its external behavior~\cite{opdyke1992refactoring}. The need to restructure software can come from such diverse goals as improving software quality, migrating to new platforms like cloud, containerizing software for DevOps, incorporating new technologies, or extracting capabilities for strategic reuse. Many of these scenarios involve refactoring at a large scale and imply broad changes to the system that cannot be accomplished through local code changes. This paper focuses on these larger refactoring efforts, which we refer to as \textbf{large-scale refactoring (LSR)}; it is the first study of large refactoring efforts (mean estimated effort greater than 1500 staff days) spanning multiple industry organizations. Such broad changes, however, are often hindered by software complexity and require labor-intensive efforts to complete. Consequently, developers continue to desire more time to conduct refactoring activities~\cite{dagstuhlDig14}, often combine refactoring with new feature development to gain approval to proceed~\cite{Kim2012FSE}, and, as previous empirical studies reveal, seek more tool support while not trusting the tools that are available~\cite{Murphy2008tools,kim2014tse}. Well-known refactoring types described by Martin Fowler (e.g., rename, move function, extract class) are frequently used by developers~\cite{Fowler1999refactoringBook}. Integrated development environments (IDEs) like IntelliJ IDEA, Eclipse, VS Code, and Visual Studio all include features that change code to apply primitive refactoring types as directed by users. However, these tools have varying levels of acceptance by developers. For example, in a study done with 328 Microsoft developers, Kim et al. found that 86\% refactor manually, with minimal use of features that implement the refactoring types they intend to use~\cite{kim2014tse}. These results mirror earlier studies showing that, in industry, manual effort dominates use of available tool support for refactoring~\cite{Murphy2006eclipse, Vakilian2012refactorings, murphy-hill07, Murphy2008tools, murphy2012tse}. These well-established refactoring types are also used in large-scale refactoring~\cite{kim2014tse,hyrum2013,hyrum2019}. Prior studies show use of such refactorings to address evolution of APIs~\cite{Dig2005api,Weissgerber2006api,Kim2011api}, design~\cite{bavota2014rss}, and architecture~\cite{Arcelli2018easier,bavota2014rss,lin2016interactive,mkaouer2016use,terra2012recommending,Zimmermann2017arch}. However, while large-scale refactoring is anecdotally performed in industry, as these studies demonstrate, it is not explicitly studied as part of software evolution or refactoring tool support. To understand how developers engage with large-scale refactoring and how they use tools to support the different activities involved, we conducted a developer survey. Our findings confirm existing research on the challenges of smaller-scale refactoring activities. However, our results demonstrate that, when performing large-scale refactoring, developers use several categories of tools beyond those that implement refactorings in code. Tool support varies across the different activities that are involved in large-scale refactoring, with some particularly challenging activities seeing little use of tools in practice.
While developers broadly agree that better tools are desired, they vary in the activities and degree of intelligence they want in tools. Our contributions include the following: \begin{itemize} \item Our study is the first to specifically focus on large-scale refactoring, demonstrate its prevalence with industry empirical data, and position it as part of refactoring research and tool agendas. This contribution provides empirical data that challenges the assumption that research should mostly focus on small-scale (floss) refactoring. \item We identify common reasons for deciding to perform or forgo large-scale refactoring. We further identify common consequences of forgoing such refactoring, which adds a missing perspective on industry's motivation for engaging in large-scale refactoring. \item We identify which refactoring activities are most challenging, most time-consuming, and see the greatest and least use of tools. We further identify the categories of tools that are used today, which improves our understanding of gaps in refactoring tool support. \item Lastly, we share our data. \end{itemize} This paper is organized as follows. Section \ref{sec:background} summarizes research in different kinds of refactoring and the refactoring process, and introduces large-scale refactoring. Section \ref{sec:methodology} describes our study approach. Section \ref{sec:results} presents our analysis results, whose implications are discussed in Section \ref{sec:discussion}. Section \ref{sec:conclusion} presents our conclusions. \section{Background} \label{sec:background} Refactoring is a complex activity involving problem recognition, problem analysis, decision making, implementation, and evaluation~\cite{Haendler2018process}. Despite increasing research interest in providing tool support for refactoring in terms of design, composition, and decision making~\cite{Mens2004survey}, developers continue to be reluctant to adopt automated refactoring support~\cite{jetbrainsSurvey2021}. Murphy-Hill and Black~\cite{Murphy2008tools} introduced two different notions of refactoring: the need to continually tweak code while making other changes (floss refactoring) and infrequent but focused changes to improve unhealthy code (root-canal refactoring). Floss and root-canal refactoring are primarily differentiated by the nature of the changes made -- floss refactoring intermingles refactoring with other changes, like feature development, while root-canal refactoring is almost entirely about refactoring. Root-canal refactoring is also typically described as correcting unhealthy code, emphasizing quality improvements rather than other motivations. Multiple empirical studies that use analysis of commit histories or IDE usage have found more evidence of floss refactoring than of root-canal refactoring \cite{murphy2012tse,liu2012reftactics,sousa2020archref}. Murphy-Hill et al.'s study \cite{murphy2012tse} further suggests that "studies should focus on floss refactoring for the greatest generality." Tools that implement refactorings in code are available for many popular programming languages through IDE context menu options that provide a list of available refactoring types from which developers choose, such as those included in IntelliJ IDEA, Eclipse, VS Code, and Visual Studio. According to a recent JetBrains survey, developers do in fact refactor their code every week or even almost every day, and refactoring sessions often last an hour or longer.
Despite this tool support, developers frequently refactor their code manually, often due to a lack of trust in what tools would do~\cite{jetbrainsSurvey2021}. Furthermore, studies analyzing GitHub contributions reveal that refactorings are driven more often by changing requirements than by code smells~\cite{Silva_2016}. There is clearly an aspect of refactoring that is broader in scope than the local code improvements that its original definition and floss refactoring recommendations implied. We define \textbf{large-scale refactoring} as restructuring software, without introducing functionality, for the purpose of improving non-functional quality or changing architecture. Large-scale refactoring involves either pervasive changes across a codebase or extensive changes to a substantial element of the system (e.g., greater than 10k LOC). Large-scale refactoring often involves a substantial commitment of resources, requiring management approval. One example is the need to partition legacy monoliths into smaller pieces to create separately deployable, scalable, and evolvable units. Another is restructuring interfaces and communication patterns to enable replacement of a legacy feature by an improved or less proprietary alternative. Large-scale refactoring is closely related to root-canal refactoring in that both focus on structural improvements that are not intermingled with other changes. However, we distinguish it from the common use of root-canal refactoring in scale and motivation. Many examples of root-canal refactoring in the literature do not represent particularly large efforts that require management support and a significant commitment of resources; instead, they focus on system-wide code quality improvements. Using this distinction in our survey, we were able to capture data for refactoring efforts that were estimated to require a mean of more than 1500 staff days of effort. Such large refactoring efforts are often motivated by broader business concerns than quality improvement, and we wanted to be more inclusive of other business motivations. Architecture refactoring can be considered a form of large-scale refactoring. The work of Lin et al. is closest in its motivation to the large-scale refactoring notion that our survey investigated~\cite{lin:fse2016}, but their work is focused on developing a research tool that relies on recommendations of a limited number of refactorings. Other work in architecture refactoring mostly addresses code and architecture smell detection, which often focuses narrowly on quality symptoms to hint at opportunities for architecture-level changes~\cite{sousa2020archref, lucaArchSmellRefactIWR18}. Our study, in contrast, takes a broad perspective on the range of activities and supporting tools from the perspective of developers performing large-scale refactoring in industry. A recent study of how software developers make decisions proposed a decision-making framework for refactoring~\cite{Leppanen2015framework}. The authors identified stages of decision-making that consist of a pain zone that triggers the decision to refactor, situation analysis, refactoring planning, refactoring implementation, and follow-up to assess the effort. Factors that lead to decision making are influenced by scale. More recently, Haendler and Frysak~\cite{Haendler2018process} provided a theoretical perspective on applying concepts from decision-making research to deconstruct the refactoring process.
They provide a more general interpretation of the software maintenance process~\cite{Kitchenham1999maintenance} and different refactoring stages~\cite{Leppanen2015framework}: problem recognition, problem analysis, decision-making, implementation, and evaluation. Furthermore, the model introduces a second dimension to account for the primary decisions in refactoring at management and operational levels: whether to refactor, what to refactor, and how to refactor. The authors then group the many tools and techniques available for refactoring by the following characteristics: smell detection and refactoring recommendation tools, code-quality and design-critique tools, refactoring tools, technical debt management and analysis tools, automated regression testing frameworks, and documented knowledge on refactoring rules. Our survey also reveals tools used across these categories and confirms that the support is not ideal. These studies commonly point out that the refactoring process consists of activities that span several decision-making stages as well as activities along the software development lifecycle. Abid et al. recently completed a literature survey spanning 30 years of refactoring research that emphasized a lifecycle view of refactoring~\cite{abid202030}. In our survey, we build on these studies and focus on the following activities. \begin{itemize} \item Determining where changes were needed \item Choosing what changes to make \item Implementing the changes \item Generating new tests \item Migrating existing tests \item Validating refactored code (inspection, executing tests, etc.) \item Re-certifying refactored code (common to industry in regulated domains) \item Updating documentation \end{itemize} Throughout the rest of the paper, we use these activities to understand the prevalence of large-scale refactoring, its challenges, and gaps in existing tools that support large-scale refactoring activities. Our survey is not the first survey study to focus on refactoring. Kim et al.~\cite{kim2014tse} conducted a survey study with 328 Microsoft developers in 2014 to understand the benefits of refactoring and developer perceptions. Their conclusions included that the definition of refactoring in practice is broader than behavior-preserving program transformations and includes system-wide changes. In addition, they showed that developers need various types of refactoring support beyond the refactoring features provided by IDEs. More recently, Golubev et al. surveyed 1183 IntelliJ users and reaffirmed that many developers do not trust automated refactoring features. Our survey study is similar in its methodology to those of Kim et al. and Golubev et al.; however, it differs in its motivation and is the first to explicitly target large-scale refactoring, establish it as a distinct refactoring category, and provide insights into the tooling challenges it entails. \section{Methodology}% \label{sec:methodology} Our goals in this study include assessing how developers perform large-scale refactoring and understanding the tools they use to support the process and their shortcomings. To achieve these goals, we ask the following research questions. \textbf{RQ1:} Is large-scale refactoring common in industry and what drives decision making? \textbf{RQ2:} How do developers use tools to aid their large-scale refactoring efforts? \textbf{RQ3:} What tools and support, if any, do developers desire to aid their large-scale refactoring efforts?
In our first research question, we look for overarching business and technical goals, reasons for and against refactoring, and risks and challenges associated with large-scale refactoring. Our other questions then focus on refactoring process activities, examine the role of tools in supporting these activities, and ask what kinds of tools would better support them. \begin{figure} \begin{center} \newcommand*\rot{\rotatebox{90}} {\small \begin{tabular}{|l|l|} \hline \multirow{5}{*}{\rot{\textbf{RQ1}}} & $\bullet$ What were the business goals of the refactoring? \\ & $\bullet$ Have you ever wanted to perform a large-scale refactoring \\ & but were unable to? \\ & $\bullet$ What consequences, if any, did you observe from not \\ & performing the refactoring? \\ \hline \multirow{3}{*}{\rot{\textbf{RQ2}}} & $\bullet$ What tools, if any, did you use to assist your large-scale \\ & refactoring efforts? \\ & $\bullet$ To what extent do you use tools for the following activities? \\ \hline \multirow{4}{*}{\rot{\textbf{RQ3}}} & $\bullet$ What kind of automation, if available, would have most \\ & improved your large-scale refactoring? \\ & $\bullet$ What are the strengths and weaknesses of the tools you used \\ & to support large-scale refactoring? \\ \hline \end{tabular} } \end{center} \caption{A sample of our survey questions and their corresponding research question (RQ).} \label{fig:survey} \end{figure}% \textbf{Survey Design}. To answer our research questions, we performed an online survey of members of the software engineering community between November 2020 and February 2021. To ensure that we collected meaningful and informative results, we followed several survey design best practices by explicitly deriving survey questions from our research questions, conducting a series of iterative pilot surveys on a representative population of sample respondents, and refining our survey design until reaching saturation~\cite{principles-survey-research,survey-design-experiences}. A sample of our survey questions is given in \Cref{fig:survey}. We used a branching design to elicit separate experiences in which participants had performed large-scale refactoring and those in which they had been unable to do so. Those who had performed large-scale refactoring were presented with questions related to the challenges, the outcomes, and the extent to which tools supported the process. Those who were unable to do so answered questions about why not and about the consequences of not refactoring. \textbf{Recruitment}. Our survey targeted an industry audience. We distributed our survey to members of the software engineering community via email (dlist: 7,700), LinkedIn (subscribers: 16,012), Twitter (followers: 5,383), research colleagues (for redistribution to their industry collaborators), and company-internal technical interest groups. A total of 107 participants took part in the survey. 96\% of participants were software engineers and/or software architects (both of whom we refer to as developers) and 74\% worked in industry. 79\% of the participants had 10+ years of experience (\autoref{fig:demographics}). These demographics demonstrate that our participants represent a wealth of collective industry experience, which increases confidence in our findings.
\begin{figure} \begin{tabular}{lrr} \toprule \textbf{Years of experience} & \textbf{\#} & \textbf{\%} \\ \midrule Less than three years & 6 & 6\% \\ Between three and ten years & 16 & 15\% \\ Ten or more years & 85 & 79\% \\ \bottomrule \end{tabular} \caption{Demographics of our 107 survey participants in terms of their years of experience in the software industry.} \label{fig:demographics} \end{figure} \textbf{Qualitative Analysis}. \label{sec:methodology:analysis} To analyze responses to the qualitative parts of our survey, we used a descriptive coding approach~\cite{saldana2015coding}. We first tagged each response to open-ended survey questions with one or more labels, known as codes, describing the topics of that response. We then performed adjudication and code mapping to collapse our codes into a consistent set of categories. Finally, we used axial coding to identify relationships between categories and to identify a small number of overarching themes. Throughout this process, we performed continual analysis, comparison, and discussion of data until reaching thematic saturation (i.e., no new perspectives, dimensions, or relationships were identified). These responses reflect developers' perceptions and experiences of large-scale refactoring. We report the frequency of our coding of this data only to demonstrate the prevalence of themes in our data, not to suggest generalized conclusions. To allow others to understand the logic behind our analysis process, we also provide sample quotes throughout the paper. \textbf{Study Artifacts}. To promote further research and allow others to inspect and replicate our methodology and findings, we provide a detailed audit trail of our study artifacts, which include our survey questionnaire, recruitment materials, codebook, anonymized survey data, and the Jupyter notebook used to produce the figures in the paper. Our study artifacts are available at: \hfill \break \url{https://github.com/ArchitecturePractices/lsr\_survey\_artifacts}. \section{Results}% \label{sec:results} In this section, we report our analysis and key insights on the data from 107 responses. \subsection{RQ1: Is large-scale refactoring common in industry and what drives decision making?} \label{sec:results:rq1} The common perception is that business priorities and natural system evolution drive the need to conduct large-scale changes in industrial software. But do developers resonate with the concept of large-scale refactoring, and how frequently, if at all, do they engage in deliberate large-scale refactoring activities? We wanted to understand the business and technical triggers, the challenges surrounding the decision to perform or forgo large-scale refactoring, and the consequences of those decisions. \textbf{Prevalence of LSR}. 82\% of respondents had participated in large-scale refactoring at least once. Of the 61\% who reported participating in large-scale refactoring more than once, 12\% had engaged in such refactoring five or more times. These refactorings were performed on large systems (34\% were larger than 1M LOC and 38\% ranged from 100K-1M LOC) and consumed significant resources, ranging from 2 days to 20,000 staff days as shown in \autoref{fig:lsr_time}. Furthermore, 57\% of systems on which respondents had performed large-scale refactoring had undergone large-scale refactoring multiple times (16\% twice, 36\% three to five times, and 5\% more than five times).
Half of respondents reported that they are still working on the same system on which they had performed large-scale refactoring, 42\% of whom have worked on this system for more than five years. The release frequencies for these systems ranged from several times a month (25\%) to several times a year (49\%). These results confirm the common wisdom that industrial software systems go through major changes and support the conclusion that organizations do commonly conduct large-scale refactoring. \begin{figure}[ht!] \centering \centering \includegraphics[width=1\columnwidth]{images/LSR_time.pdf} \caption{Estimated effort (in staff days) that teams required to complete their large-scale refactorings.} \label{fig:lsr_time} \end{figure} \textbf{Reasons for LSR}. Reducing the cost of change and the time to deliver were the top business reasons to refactor, reported both by respondents who had the opportunity to refactor and by those who wanted to refactor but could not (\autoref{fig:bizreasons}). As for technical reasons for refactoring, improving understandability and migrating to a new architecture were cited most often (\autoref{fig:techreasons}). \begin{figure}[ht!] \centering \centering \includegraphics[width=1\linewidth]{images/BusinessReasons.pdf} \caption{Business reasons for large-scale refactoring.} \label{fig:bizreasons} \end{figure} Our analysis revealed an interesting relationship between the top business and technical reasons: 78\% of those who reported that reducing cost of change was a business reason to refactor also reported improving code understandability as a technical reason to refactor. Among those who did undertake large-scale refactoring, 70\% reported both improving code understandability and migrating to a new architecture as top technical reasons to refactor. These results further demonstrate the relationship between the kind of architectural change that requires large-scale refactoring and the potential impact of such change on business goals. \begin{figure}[ht!] \centering \includegraphics[width=1\linewidth]{images/TechnicalReasons.pdf} \caption{Technical reasons for large-scale refactoring.} \label{fig:techreasons} \end{figure} \textbf{Forgoing LSR}. Having established that industry systems undergo multiple large-scale refactorings, we looked at how often organizations had wanted to perform refactoring but had decided not to do so. 71\% of respondents reported that there were occasions on which they wanted to conduct large-scale refactoring but were unable to do so. Sharma's study reported a similarly high portion of respondents (76\%) identifying prioritization of features over refactoring as an obstacle to undertaking refactoring \cite{Sharma2015industry}. The reasons for deciding not to perform large-scale refactoring centered on opportunity cost: new features were prioritized, and the anticipated cost was too high. 35\% of respondents reported both as driving reasons, indicating that when resources are scarce, new features are commonly preferred over other investments. Interestingly, among the reasons to not refactor, only 6\% of the participants indicated that the anticipated value of refactoring was too low (\autoref{fig:reasonswhynot}). \begin{figure}[ht!] \centering \includegraphics[width=1\linewidth]{images/why_not.pdf} \caption{Reasons why organizations forgo large-scale refactoring.} \label{fig:reasonswhynot} \end{figure} \textbf{Consequences of Forgoing LSR}.
Given business realities, these results are not surprising, and they align with previous refactoring research~\cite{kim2014tse,Sharma2015industry}. When resource constraints (especially time and cost) force choices, new features are prioritized over refactoring. However, there are consequences to not performing needed refactoring, as our participants reported through open-ended questions. When we analyzed these responses through a coding exercise (\autoref{table:consequences}), we found that the most common long-term consequences were related to an inability to deliver new features or a slowing pace of delivery (56\%). Instances of deteriorating internal (54\%) and external (32\%) quality were often accompanied by references to increasing operating or development costs, which are expected consequences of quality deterioration. 90\% of respondents reported delivery and/or internal quality problems, both of which reflect slowing development velocity, as consequences of not refactoring. These consequences undermine the perceived opportunity to divert resources from refactoring to new features. The consequences that participants shared also clearly exemplify the need for large-scale refactoring. \begin{itemize} \item\textit{We are stuck on outdated technologies. It is difficult to keep up with the "startup" companies that provide features that we are not able to create on the old tech stack.} \item \textit{...modernization cycle was held back by 4 years....maintenance cost stayed high....cost to implement, deploy, and validate continue to increase}. \item \textit{Feature delivery took longer as it required changes to multiple parts of the system.} \end{itemize} Not surprisingly, the long-term consequences of not refactoring include jeopardizing the top-priority business concern of reducing time to deliver new features, as well as increased cycle time and costs. \begin{table} \begin{tabularx}{\columnwidth}{lXr} \toprule \textbf{Category} & \textbf{Description} & \textbf{\%} \\ \midrule Delivery & Slow feature delivery, inability to \newline develop features & 56\% \\ Internal quality & Low productivity, duplicated code, non-bug design flaws & 54\% \\ External quality & Degraded user experience, bugs, \newline performance issues & 32\% \\ Staffing & Low morale, increased onboarding time, difficulty hiring or retaining staff & 22\% \\ \bottomrule \end{tabularx} \caption{Consequences of forgoing large-scale refactoring, by fraction of respondents reporting each category.} \label{table:consequences} \end{table} \medskip \textbf{Findings} \begin{itemize} \item 82\% of respondents had performed large-scale refactoring. Of the systems on which they had performed large-scale refactoring, 57\% had undergone multiple large-scale refactorings. \item Large-scale refactorings are substantial efforts. 71\% had refactored systems of at least 100K LOC. The mean time to complete refactoring was estimated at more than 1500 staff days. \item Forgoing large-scale refactoring is also common in industry, as 71\% of respondents had wanted to perform refactoring but were unable to do so. \item While prioritizing new features over refactoring was the most common reason for forgoing large-scale refactoring, 56\% of respondents reported the inability or slowing pace of delivering features as a consequence of forgoing refactoring.
\end{itemize} \subsection{RQ2: How do developers use tools to aid their large-scale refactoring efforts?}% \label{sec:results:rq2} Refactoring has been a familiar concept to developers for decades~\cite{Fowler1999refactoringBook}, but adoption of tools to support refactoring remains less common \cite{murphy2012tse,kim2014tse}. While studies have focused more on support for low-level refactoring than on large-scale refactoring, a study by Kim et al. included an analysis of interviews with a team that had performed system-wide refactoring on a very large system \cite{kim2014tse}. Their analysis indicates that refactoring at this scale involves far more than applying low-level refactorings. Instead, refactoring involved understanding the system, performing dependency analysis, creating a desired architecture structure, performing multiple gate checks, educating other developers, and developing custom refactoring tools. We sought to understand whether the kinds of tools used in large-scale refactoring differ from those used in other refactoring, the different activities involved in refactoring, and how those tools support those activities. \textbf{Tools Used}. We used two open-ended questions to collect a list of tools that respondents used for refactoring at any scale and for large-scale refactoring. We used coding to categorize each tool into one of the categories shown in \autoref{fig:tools-used}, which contrasts the fraction of respondents using at least one tool in each category for refactoring at any scale with that for large-scale refactoring. There is little difference between the fraction of respondents using tools for large-scale refactoring and refactoring at any scale for most tool categories. The exceptions are IDEs and text editors (greater use in refactoring at any scale), testing tools (greater use in large-scale), and other tools (much greater use in large-scale). The other tools category includes custom scripts and tools on which custom tools were likely built (static code analyzers and abstract syntax trees). \begin{figure}[ht!] \centering \includegraphics[width=1 \linewidth]{images/tools_used.pdf} \caption{Categories of tools used to support refactoring.} \label{fig:tools-used} \end{figure} The most commonly used category of tool is the IDE; more than half of all respondents reported using IDEs for refactoring (68.7\% for any scale and 54.3\% for large-scale). In contrast, fewer than 10\% of respondents reported using tools that are designed specifically for refactoring like ReSharper and JDeodorant (8.4\% for any scale and 4.3\% for large-scale) or called out refactoring features of IDEs (6\% for any scale and 6.5\% for large-scale). The portion of tools falling into the other category was substantially higher for large-scale refactoring (50\%) than for refactoring at any scale (18\%). \textbf{Refactoring Activities}. We next looked at the work that respondents perform as part of large-scale refactoring activities. We listed the refactoring activities found in \autoref{fig:activities_most} and asked respondents to report how much time they spent in each, how challenging they found each, and the extent to which they used tools for each. \autoref{fig:activities_most} shows the fraction of respondents reporting each activity affirmatively for each question (i.e., most time spent, most challenging, and extensive use of tools).
The top three activities in terms of taking the most time, being the most challenging, and making the greatest use of tools all come from the following four: (1) determining where changes are needed, (2) choosing what changes to make, (3) implementing changes, and (4) validating refactored code. \begin{figure}[ht!] \centering \includegraphics[width=1\linewidth]{images/activities_most.pdf} \caption{Refactoring activities that take the most time, are most challenging, and make the most use of tools.} \label{fig:activities_most} \end{figure} Respondents commonly reported choosing what change to make as most time-consuming (48\%) and most challenging (50\%), but only 25\% reported it as making extensive use of tools. In fact, looking at the negative responses, 58\% report this activity as making the least use of tools. The activity for which respondents reported the least use of tools was updating documentation (75\%), which was also commonly noted as taking the least time (63\%) and being least challenging (61\%). \textbf{Tool Effectiveness}. Ideally, there is a relation among how much tools are used for an activity, how much time it takes, and how challenging it is. Highly effective tools can dramatically reduce the time spent and the perceived challenge. Activities that remain highly challenging and time-consuming can suggest shortcomings in or under-use of tools. We compared the tools that respondents used for large-scale refactoring (\autoref{fig:tools-used}) with the time spent, challenge, and extent of tool use for refactoring activities (\autoref{fig:activities_most}). We also applied our judgment regarding the degree of support each category of tool provides (based on the specific tools listed by respondents) to each activity as additional context. Respondents report determining where changes are needed and implementing changes as highly challenging activities (50\% and 43.8\%) for which they use tools extensively (37.7\% and 49.2\%), and yet both take significant time (46\% and 59\%). Of note, respondents report relatively little use of tools that are specifically designed to support these tasks. Refactoring, code smell analysis, and dependency exploration tools better support determining where changes are needed, but are used by only 4.3\%, 15.2\%, and 0\% of respondents. Refactoring tools and IDE refactoring features better support implementing changes, but are used by only 4.3\% and 6.5\% of respondents. Both observations indicate that while respondents use tools extensively for two of the three most challenging activities, they rely more heavily on general-purpose tools like IDEs and on manual effort than on tools specifically designed for refactoring. \medskip \textbf{Findings} \begin{itemize} \item Significantly more respondents use general-purpose tools like IDEs for large-scale refactoring (54.3\%) than use tools designed specifically for refactoring (less than 10\%). \item 50\% of respondents performing large-scale refactoring report use of other tools, which are dominated by custom tools, scripts, and packages on which they build their own tools. \item Choosing which changes to make is one of the most challenging and time-consuming activities, while also one of the activities for which developers make the least use of tools. \item Updating documentation sees the least use of tools, but is also the least challenging and time-consuming activity.
\end{itemize} \subsection{RQ3: What tools and support, if any, do developers desire to aid their large-scale refactoring efforts?} \label{sec:results:rq3} To understand what kinds of tools would aid large-scale refactoring, we looked at the challenges respondents faced during their refactoring, the strengths and weaknesses of current tools, and how respondents directly answered the question. \textbf{Activity Challenges}. After asking respondents to rate how challenging each refactoring activity was, we asked them, through an open-response question, what made their most challenging activities challenging. \autoref{table:challenges} shows our coding of these responses. Unsurprisingly, the most common challenge is the poor quality of the software being refactored, a challenge that refactoring exercises inherit as a starting point. The second most common challenge is the difficulty of understanding code and the implications of a change. One respondent emphasized this as \textit{The hardest part was gaining a conceptual grasp of the overall code structure, and code flow, and understanding how one basic change – no matter how simple it appeared on the surface – might create consequences throughout the system.} While this challenge is more dependent on the tools and processes used for refactoring, the starting quality of the code can exacerbate it. Half of the respondents reporting code comprehension as a challenge also reported poor code quality or lack of documentation as a challenge. A need for code comprehension often stems from inheriting code written by someone else. Most respondents reported that the software on which they had performed large-scale refactoring was relatively old when they started working on it (for 27\% it was already 5-10 years old and for 25\% it was already more than 10 years old). Respondents reported challenges that closely relate to code artifacts more than twice as often (code quality and comprehension at 34\% and 26\%) as challenges that relate to making decisions (scoping refactoring and decision criteria at 15\% and 6\%). \begin{table} {\small \begin{tabularx}{\columnwidth}{lXr} \toprule \textbf{Category} & \textbf{Description} & \textbf{\%} \\ \midrule Code Quality & Poor quality of code being refactored, excessive dependencies that complicate changes & 34\% \\ Comprehension & Difficulties in understanding code structure, flow, and possible side-effects & 26\% \\ Tests & Lack of tests to ensure behavior & 19\% \\ Communication & Need to persuade management and teammates, gaining user trust & 19\% \\ Scoping & Managing expectations, deciding how much refactoring to do & 15\% \\ Documentation & Poor documentation, unclear intent & 13\% \\ Techniques & Lack of well-defined refactoring techniques & 11\% \\ Decision Criteria & Choosing the right changes & 6\% \\ \bottomrule \end{tabularx} } \caption{What made large-scale refactoring challenging, by fraction of respondents reporting each category.} \label{table:challenges} \end{table} \textbf{Current Tools}. We next looked at participant responses to a question on the strengths and weaknesses of the refactoring tools that they currently use. \autoref{table:str_weak} shows our coding of these responses. Fewer than half of the respondents reported any strengths, while only three respondents reported strengths without weaknesses.
The top categories for reported strengths were modification (16\%, automation of changes) and planning what to refactor (12\%, identifying opportunities for refactoring). The top categories for reported weaknesses were usability (33\%, learning curve, poor interfaces for tasks) and modification (21\%, lack of control over or unacceptable results from automated refactoring). This corroborates a finding of Pinto and Kamei's study, which identified usability as a key barrier to the adoption of refactoring tools \cite{Pinto2013stackoverflow}. Several responses directly contrasted available refactoring support for small-scale changes with the needs of large-scale refactorings. Examples include: \begin{itemize} \item \textit{They address refactoring efforts at a component level. They don't address end to end scenarios and analysing dynamics. Todays tools I have used provide quite a lot indicators for increasing complexity and structure loss, but these are not enough to make large scale decision with reducing these effects leading to system failure.} \item \textit{The tools I use don’t offer any guides or hints related to large-scale refactoring. Their analysis features usually present only low level code smells that often don’t offer a considerable improvement in the quality of the software.} \item \textit{The tools I've got are too focused on munging text, or on refactoring that is syntactically simple enough that I don't really need help with it (maybe it saves time on typing, but typing time isn't the problem).} \end{itemize} \begin{table} \begin{tabularx}{\columnwidth}{Xrr} \toprule \textbf{Category} & \textbf{Strengths} & \textbf{Weaknesses} \\ \midrule Usability & 7\% & 33\% \\ Modification & 16\% & 21\% \\ Planning what to refactor & 12\% & 19\% \\ Analysis & 9\% & 16\% \\ Large-scale refactoring & 5\% & 16\% \\ Comprehension & 2\% & 16\% \\ Testing & 2\% & 5\% \\ Planning how to refactor & 0\% & 5\% \\ Scoping refactoring & 0\% & 5\% \\ \bottomrule \end{tabularx} \caption{Strengths and weaknesses of tools respondents use for refactoring, by fraction of respondents reporting each category.} \label{table:str_weak} \end{table} \textbf{Desired Tools}. Although 80\% of respondents who had performed large-scale refactoring reported achieving their goals, the activity challenges (\autoref{table:challenges}) and the weaknesses of the tools used (\autoref{table:str_weak}) point to room for improvement. We asked participants what kind of tools would have most improved their experience. \autoref{table:wants} shows our coding of these open-ended responses. The three most common categories focused directly on the code being refactored. Testing (46\%) focused on testing automation, modification (26\%) focused on automating code changes, and analysis (23\%) focused on understanding the code (e.g., static and data flow analyses). In contrast, tools that included recommending actions were much less common: planning what to refactor (9\%, recommending where changes are needed) and planning how to refactor (6\%, recommending specific changes). Pinto and Kamei's analysis of Stack Overflow questions on refactoring identified generating refactoring recommendations as a desirable feature at a similarly low rate (13\%) \cite{Pinto2013stackoverflow}. Fewer respondents expressed interest in tools that make decisions for them than in tools that act as directed by a developer, like performing requested analyses, making specified changes, and confirming the results of changes.
This preference aligns with \autoref{table:challenges}, which summarizes what made refactoring challenging. Challenges with code comprehension and tests align with the top wants. Decision criteria was the least common challenge, aligning with the lack of wants for tools that recommend changes. However, this preference is somewhat at odds with where respondents report spending the most time in \autoref{fig:activities_most}. Two of the four activities on which they spend the most time (implementing changes and validation) align with two of the top three wants (testing and modification). The other two activities on which they spend the most time (determining where changes are needed and choosing the changes to make) align with two of the least common wants (planning what and how to refactor). This may reflect a lack of trust in tools' ability to make good recommendations, as evidenced by the following comment: \begin{itemize} \item \textit{I'm still highly skeptical that a tool that can effectively automatically suggest a collection of refactoring that would solve a specific problem can be written. Refactoring is highly contextual... Until you can create a program that can figure out the context of the problem just by analyzing the structure of the code (i.e. make the tool read people's minds), I doubt such a tool will ever exist.} \end{itemize} \begin{table} \begin{tabularx}{\columnwidth}{Xr} \toprule \textbf{Category} & \textbf{\%} \\ \midrule Testing & 46\% \\ Modification & 26\% \\ Analysis & 23\% \\ Comprehension & 17\% \\ Planning what to refactor & 9\% \\ Build Automation & 9\% \\ Planning how to refactor & 6\% \\ \bottomrule \end{tabularx} \caption{What kinds of tools would improve large-scale refactoring, by fraction of respondents reporting each category.} \label{table:wants} \end{table} When asked how useful they would find a tool that automatically suggests a collection of refactorings to solve a problem that they specified, 73\% of respondents replied affirmatively. Regardless of any skepticism about what tools can do, responses to the question of what tools would help reflected a genuine, if sometimes plaintive, need for help (e.g., \textit{Any at all} and \textit{Almost anything :-)}). \textbf{Findings} \begin{itemize} \item The most commonly reported refactoring challenge is the starting quality of software. This challenge is closely followed by the difficulty of understanding that software. \item Of the tools respondents use today, the most common strengths reported are in automating changes, while the most common weaknesses are in usability. \item Despite identifying many challenges and weaknesses in today's refactoring tools, 80\% of respondents report having achieved their large-scale refactoring goals. \end{itemize} \section{Discussion} \label{sec:discussion} Industry software goes through periodic structural changes as part of continuous evolution \cite{iversFSE2020}. While intuitively we know that refactorings support these changes, we also know that they are significantly larger in scale than the kinds of changes common in floss refactoring and so may have different implications for desirable tool support. We discuss the implications of our survey analysis and findings around the need to recognize and study large-scale refactoring, the gaps in tool support for industry's refactoring needs, and the role that tools can play in improving the state of practice. \textbf{Recognizing LSR}.
Our findings confirm that large-scale refactoring is a major undertaking that industry software can go through multiple times in its lifetime. Likewise, they confirm that the business consequences of forgoing needed refactoring are often substantial, impacting the ability to deliver features, team productivity and morale, and product quality as seen by users. While existing low-level refactoring knowledge and tool support inform large-scale refactoring, our survey helps identify two distinct characteristics of large-scale refactoring: it is tightly coupled to business needs, and its scope of work is broader than local code improvements. The following response summarizes these differences clearly: \begin{itemize} \item \textit{Agile development practices encourage continuous micro refactorings to make the code a little bit better all the time. ... Refactoring is just part of the job ... like a surgeon washing her hands. ...teams should be continuously refactoring in the small without the need for explicit investment or direction from the wider business. I think larger scale "refactoring" is different in that there is an opportunity cost to doing or not doing the work. It becomes a business decision as to where to invest.} \end{itemize} Large-scale refactoring is a distinct activity for which significant resources need to be allocated, rather than something that developers can easily weave into their day-to-day work. This context is often not apparent from development artifacts like commit messages or logs of tool use, hampering researchers' ability to study it the way that floss refactoring has been studied. It includes activities like persuading stakeholders of benefits and managing expectations; reasoning activities that span understanding code, requirements, and intent; and integrating data from and capabilities of different tools. In short, it is reasonable to think of it as having a project-like scope that includes building blocks like those of floss refactoring, but also many other activities. As many of these activities are orchestrated across a team (recall that the mean effort estimated for large-scale refactoring in \autoref{fig:lsr_time} is more than 1500 staff days), studying large-scale refactoring and understanding the breadth of tool support that is needed requires collecting and integrating data from more sources. Large-scale refactoring also has architectural implications, as demonstrated by the following general feedback response: \begin{itemize} \item \textit{I believe in such refactorings to be a highly creative task, which often requires with coming up with brand new architectural ideas, or even suggesting actual product changes to make the architecture cleaner ... I think that large-scale refactorings are more about such things than the code itself.} \end{itemize} \textbf{Tool Support for LSR}. The finding that developers make little use of existing refactoring tools for large-scale refactoring was unsurprising, as it mirrors studies of smaller-scale refactoring. As part of our study, we also sought to understand whether the kinds of tools used in large-scale refactoring differ from those used in other refactoring, the different activities involved in refactoring, and how the tools being used support those activities. Our findings demonstrate that developers typically use multiple tools as part of large-scale refactoring to address the range of activities that are involved.
The broad range of tools offered by the respondents goes beyond IDEs, including diverse tools such as static code analyzers, issue trackers and wikis, testing tools, and custom scripts. References to custom scripts were far more common for large-scale refactoring, suggesting significant tool gaps that become more apparent at scale. Additional research is needed to explore the range of capabilities of such scripts and whether a more general capability could be provided by tool vendors or result from future research. Responses included several doubts about the feasibility of tool support for large-scale refactoring. A key challenge in developing tools that support large-scale refactoring is improving our understanding of what motivates such activities. Tools for smaller-scale refactoring often start with the assumption that the goal is to remove specific code smells or improve specific code quality metrics. While these improvements offer business value in the form of improved software maintainability and developer productivity, they are not always what motivates businesses to invest and hence may be misaligned with project needs. Tools that address different motivations or allow users to express their improvement goals could provide more options for developers. \textbf{Deciding to Forgo LSR}. Existing research focusing on floss refactoring has identified barriers and risks that influence decisions about whether to refactor, including missing resources, risk of introducing an error, difficulty of performing the refactoring, unclear value, constraints set by management, and lack of appropriate tools~\cite{Tempero2017barriers}. Our results show similar factors influencing decisions about whether to perform large-scale refactoring. Our respondents reported that prioritizing new features over refactoring and perceiving the cost of refactoring as too high were both the most common and most important reasons that their organizations decided to forgo refactoring. This is not a surprising result to developers in industry or to researchers. However, while prioritizing new features over refactoring was the most common reason for forgoing large-scale refactoring, 56\% of respondents reported the inability or slowing pace of delivering features as a consequence of forgoing refactoring. One of our respondents expressed this passionately when asked about the consequences of not refactoring: \textit{Codebase became shit!} Better tools can help change these business decisions and avoid the consequences that follow. As most of the reasons provided boil down to cost-benefit decisions, tools that reduce cost can shift the balance. Our respondents estimated that a mean of more than 1500 staff days of effort was spent on large-scale refactorings, suggesting many opportunities to reduce the work involved in refactoring activities. While the common wisdom is to focus research on floss refactoring because it is more common (orders of magnitude more so), this perspective neglects the cost difference (large-scale refactorings being orders of magnitude larger). Researchers and tool vendors would benefit from more granular data on the activities that developers spend the most time on, and from analyses of which activities they can reliably support with trusted tools. \textbf{Threats to Validity}. \label{sec:methodology:threats} Our threats to validity include: \textit{Internal Validity.} Our analysis of survey responses represents a potential threat to internal validity.
To mitigate this threat and to ensure the reliability of our qualitative findings, we implemented and consistently adhered to established guidelines and best practices for conducting qualitative research, including comprehensive data use, constant comparison, the use of tables, and refinement of codes through adjudication and investigator triangulation. \textit{External Validity.} Our findings are based on the data we collected from 107 survey respondents. We do not make a generalizability claim, but position our findings as observations supported by our data and the research literature. \textit{Conclusion Validity.} We distributed our survey to a broad audience to collect the most relevant data for our goals. To ensure that we asked the right questions and to avoid introducing our own biases into the wording and selection of questions, we conducted a series of iterative pilots to identify and address shortcomings in the survey design. Furthermore, we included several open-ended questions to allow participants to share their views and experiences. \section{Conclusion} \label{sec:conclusion} In order to understand the prevalence, challenges, and tool support for large-scale refactoring, we conducted a survey with industry developers. Our analysis of data from 107 respondents, 79\% of whom report at least 10 years of experience, confirms that large-scale refactoring is not an unusual occurrence. While floss refactoring is certainly orders of magnitude more common, industry systems undergo multiple large-scale refactorings over their lifetimes, and the magnitude of effort involved in each is considerable. Refactoring tools designed to support smaller-scale refactoring efforts are not enough to address the breadth of activities that developers consider a part of large-scale refactoring, and developers encounter a wide range of challenges despite using many different kinds of tools. The anticipated cost of such refactoring, along with business priorities that favor new features, commonly results in organizations forgoing large-scale refactoring, which in turn commonly results in troubling consequences. Our study demonstrates a clear need for better tools and an opportunity for refactoring researchers to make a difference in industry. The results we summarize in this paper are one concrete step toward this goal. \begin{acks} This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. References herein to any specific commercial product, process, or service by trade name, trade mark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute. DM21-0915 \end{acks} \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2022-02-16T02:26:56", "yymm": "2202", "arxiv_id": "2202.00173", "language": "en", "url": "https://arxiv.org/abs/2202.00173" }
\section{Introduction}\label{sect:intro} \setcounter{equation}{0} Let $k$ be a field of characteristic $p\ge 0$, $k[\x] =k[x_1,\ldots ,x_n]$ the polynomial ring in $n$ variables over $k$, and $\Aut _kk[\x] $ the automorphism group of the $k$-algebra $k[\x] $. For $\phi \in \Aut _kk[\x] $, we consider the invariant ring $k[\x] ^{\phi }:=\{ f\in k[\x] \mid \phi (f)=f\} $. We say that $\phi \in \Aut _kk[\x] $ is \noindent $\bullet $ {\it affine} if $\deg \phi (x_i)=1$ for $i=1,\ldots ,n$; \noindent $\bullet $ {\it elementary} if $x_1,\ldots ,x_{n-1}\in k[\x] ^{\phi }$ and $\phi (x_n)\in x_n+k[x_1,\ldots ,x_{n-1}]$; \noindent $\bullet $ {\it exponential} if $\phi $ is induced by a ${\bf G}_a $-action on the affine space ${\bf A} _k^n$ (cf.~\S \ref{sect:Ga-action}); \noindent $\bullet $ {\it of characteristic-order} if $\langle \phi \rangle \simeq {\bf Z} /p{\bf Z} $ or $\phi ={\rm id} $. Let $\Aff _n(k)$ (resp.\ $\El _n(k)$, $\Ex _n(k)$, and $\Ch _n(k)$) be the set of $\phi \in \Aut _kk[\x] $ which are affine (resp.\ elementary, exponential, and of characteristic-order). Then, we have $$ \T _n(k):=\langle \Aff _n(k)\cup \El _n(k)\rangle \subset \langle \Aff _n(k)\cup \Ex _n(k)\rangle \subset \langle \Aff _n(k)\cup \Ch _n(k)\rangle \subset \Aut _kk[\x] , $$ since $\El _n(k)\subset \Ex _n(k)\subset \Ch _n(k)$ (cf.~\S \ref{sect:Ga-action}). We call $\T _n(k)$ the {\it tame subgroup}. The {\it Tame Generators Problem} asks whether $\T _n(k)=\Aut _kk[\x] $. This is clear if $n=1$. Jung~\cite{Jung} and van der Kulk~\cite{Kulk} showed that $\T _2(k)=\Aut _kk[x_1,x_2]$. In 2004, Shestakov-Umirbaev~\cite{SU} showed that the automorphism of Nagata~\cite{Nagata} does not belong to $\T _3(k)$ if $p=0$, thereby solving the problem in the negative when $n=3$ and $p=0$. At present, the problem is open when $n=3$ and $p>0$, and when $n\ge 4$. It is well known that Nagata's automorphism is exponential. Hence, $\T_3(k)$ is a proper subgroup of $\langle \Aff _3(k)\cup \Ex _3(k)\rangle $ if $p=0$. The {\it Exponential Generators Conjecture} asserts that $\Aut _kk[\x] =\langle \Aff _n(k)\cup \Ex _n(k)\rangle $ (cf.~\cite[\S 2.1]{Essen}), which is open for all $n\ge 3$. If $p=0$, then we have $\Aut _kk[\x] =\langle \Aff _n(k)\cup \Ch _n(k)\rangle $, since $\Ch _n(k)$ contains $\phi \in \Aut _kk[\x] $ whenever the Jacobian of $\phi$ is not a root of unity. It is not known whether the same holds when $p>0$ and $n\ge 3$. To study $\Aut _kk[\x] $ when $p>0$, we regard $\Ch _n(k)$ as important. The automorphisms of order $p$ are equivalent to the ${\bf Z} /p{\bf Z} $-actions, and there is a large body of research on this and related subjects. For example, Miyanishi~\cite{M1} investigated ${\bf Z} /p{\bf Z} $-actions on a normal affine domain of characteristic $p$ from the viewpoint of Artin-Schreier coverings (see also Takeda~\cite{Takeda}). Miyanishi-Ito~\cite{MI} contains more background in this direction. Tanimoto~\cite{Tani} studied ${\bf Z} /p{\bf Z} $-actions on ${\bf A} _k^n$ motivated by modular invariant theory. He classified the triangular ${\bf Z} /p{\bf Z} $-actions on ${\bf A} _k^3$ and showed that their invariant rings are generated by at most four elements. We also mention that Maubach~\cite{Maubach} showed that the invariant rings for a certain class of ${\bf Z} /p^n{\bf Z} $-actions on ${\bf A} _k^n$ are isomorphic to $k[\x] $.
In general, for an action of a finite group $G$ with $p\nmid |G|$ on ${\bf A} _k^n$, it is difficult to describe the structure of the invariant ring, even for a linear action (cf.~\cite{MIT}). It should also be stressed that, to study properties of $\phi \in \Aut _kk[\x] $, the information about $k[\x] ^{\phi }$ is of great use. Recently, some researchers remarked that, if $p>0$, then $\phi \in \Ch _2(k)$ is always a conjugate of an elementary automorphism (cf.~Theorem~\ref{thm:Osaka}, \cite{M1}, \cite{Maubach}). Hence, there exists $\sigma \in \Aut _kk[x_1,x_2]$ such that $\sigma (x_1)\in k[x_1,x_2]^{\phi }$. Then, the following question naturally arises. Here, for each $k$-subalgebra $A$ of $k[\x] $, we define $$ \gamma (A):=\max \{ N\mid \exists \sigma \in \Aut _kk[\x] \text{ such that\ } \sigma (k[x_1,\ldots ,x_N])\subset A\} . $$ \begin{q}\label{question}\rm Does $\gamma (k[\x] ^{\phi })\ge 1$ hold for all $\phi \in \Ch _n(k)$ when $p>0$ and $n\ge 3$? \end{q} This paper makes three main contributions. \noindent 1) We give the first counterexample to Question~\ref{question} for $n=3$. To explain the result, we recall some known results about ${\bf G}_a $-actions on ${\bf A} _k^n$. The {\it rank} of a ${\bf G}_a $-action on ${\bf A} _k^n$ is defined to be $n-\gamma (k[\x] ^{{\bf G}_a })$ (cf.~\cite{Frank}). Then, every nontrivial ${\bf G}_a $-action on ${\bf A} _k^2$ is of rank one if $p=0$ by Rentschler~\cite{Rentschler}, and if $p>0$ by Miyanishi~\cite{MiyanishiNagoya}. When $p=0$, Freudenburg~\cite{Frank} gave the first example of a ${\bf G}_a $-action on ${\bf A} _k^n$ of rank $n$ for each $n\ge 3$. When $p>0$, our result says that every rank three ${\bf G}_a $-action on ${\bf A} _k^3$ yields a family of $\phi \in \Ex _3(k)$ with $\gamma (k[\x] ^{\phi })=0$ (Theorem~\ref{thm:main}). Here, we emphasize that $\gamma (k[\x] ^{{\bf G}_a })=0$ does not immediately imply $\gamma (k[\x] ^{\phi })=0$ for an induced $\phi \in \Ex _3(k)$, because $k[\x] ^{{\bf G}_a }\subsetneq k[\x] ^{\phi }$. We construct a family of rank three ${\bf G}_a $-actions on ${\bf A} _k^3$ when $p>0$, and give counterexamples to Question~\ref{question}. See (\ref{eq:simple example}) for simple concrete examples. \noindent 2) The plinth ideal $\pl (\phi )$ for an automorphism $\phi $ (cf.~(\ref{eq:def pl phi})) is an analogue of the plinth ideal for a derivation (cf.~\cite[\S 1.1]{Plinth}), and carries useful information about $\phi $. For $\phi \in \Ex _3(k)$ induced by the rank three ${\bf G}_a $-action on ${\bf A} _k^3$ stated above, we show that $\pl (\phi )$ is principal if and only if $k[\x] ^{\phi }$ is isomorphic to $k[\x] $ under some mild assumptions (Theorems~\ref{thm:plinth rank 3}, \ref{thm:rank3 invariant ring} and \ref{thm:rank3 invariant ring2}). This result is of interest in its own right, because ${\bf G}_a $-actions on ${\bf A} _k^n$ of rank $n$ are in general difficult to analyze and remain mysterious. \noindent 3) Let $R$ be a domain, $0\ne a\in R$, $0\ne \theta (x_2)\in x_2R[x_2]$ and $0\ne F\in R[ax_1+\theta (x_2)]$. Then, there exists $\psi \in \Aut _RR[x_1,x_2]$ such that $\psi (x_1)=x_1+a^{-1}(\theta (x_2)-\theta (x_2+aF))$ and $\psi (x_2)=x_2+aF$ (cf.~\S \ref{sect:Nagata construction}); this $\psi $ is called the {\it Nagata type automorphism}. It is known that $\psi $ is exponential. Hence, $\psi ^p={\rm id} $ holds if $\ch R=p>0$. Nagata's automorphism is equal to $\psi $ with $(R,a,\theta (x_2),F)=(k[x_3],x_3,x_2^2,x_1x_3+x_2^2)$. With this notation, we have the following result.
\begin{thm}\label{thm:Nagata Main} Assume that $R$ is a UFD with $\ch R>0$, and let $\psi $ be as above. \noindent{\rm (i)} The invariant ring $R[x_1,x_2]^{\psi }$ is generated by at most three elements over $R$. \noindent{\rm (ii)} The following are equivalent: {\rm (a)} The ideal $I:=(a,d\theta (x_2)/dx_2)$ of $R[x_1,x_2]$ is principal. {\rm (b)} The plinth ideal $\pl (\psi )$ is a principal ideal of $R[x_1,x_2]^{\psi }$. {\rm (c)} $R[x_1,x_2]^{\psi }$ is isomorphic to $R[x_1,x_2]$ as an $R$-algebra. \noindent{\rm (iii)} If $R=k[x_3,\ldots ,x_n]$, then {\rm (a)}, {\rm (b)} and {\rm (c)} in {\rm (ii)} are equivalent to the following: {\rm (d)} $R[x_1,x_2]^{\psi }$ is isomorphic to $R[x_1,x_2]$ as a $k$-algebra. \end{thm} This paper is organized as follows. In Section~\ref{sect:prelim}, we recall basic notions and results used in this paper. In Section~\ref{sect:key}, we discuss how to derive a counterexample to Question~\ref{question} from a rank three ${\bf G}_a $-action on ${\bf A} _k^3$. In Sections~\ref{sect:rank3 family} and \ref{sect:invariant ring}, we construct a family of rank three ${\bf G}_a $-actions on ${\bf A} _k^3$, and study their exponential automorphisms. Section~\ref{sect:Nagata} is devoted to the study of the Nagata type automorphisms. In Section~\ref{sect:remark}, we list some questions and conjectures. \section{Preliminary}\label{sect:prelim} \setcounter{equation}{0} Throughout this paper, all rings and algebras are commutative, and $k$ denotes a field. If $B\subset B'$ are domains, $\trd _BB'$ denotes the transcendence degree of $B'$ over $B$, and $Q(B)$ denotes the quotient field of $B$. For rings $B\subset B'$ and a ring homomorphism $\phi :B\to B'$, we define \begin{equation}\label{eq:B^phi def} B^{\phi }:=\{ b\in B\mid \phi (b)=b\} . \end{equation} \subsection{${\bf G}_a $-action and exponential automorphisms}\label{sect:Ga-action} Let $R$ be a ring, $B$ an $R$-algebra, and $T$ and $U$ indeterminates. Recall that a homomorphism $\epsilon :B\to B[T]$ of $R$-algebras defines an action of the additive group ${\bf G}_a :=\Spec R[T]$ on $\Spec B$ if and only if the following conditions hold for each $a\in B$. Here, we write $\epsilon (a)=\sum _{i\ge 0}a_iT^i$, where $a_i\in B$. \smallskip (A1) $a_0=a$. \qquad (A2) $\sum_{i\ge 0}\epsilon (a_i)U^i =\sum _{i\ge 0}a_i(T+U)^i$ in $B[T,U]$. \smallskip \noindent If this is the case, the ${\bf G}_a $-invariant ring is $B^{\epsilon }$. We call this $\epsilon $ a ${\bf G}_a $-{\it action on} $B$. Let $\epsilon :B\to B[T]$ be a ${\bf G}_a $-action on $B$. For each $a\in B^{\epsilon }$, we define $$ \epsilon _a:B\ni b\mapsto \epsilon (b)|_{T=a}\in B, $$ where $\epsilon (b)|_{T=a}$ is the value of $\epsilon (b)\in B[T]$ at $T=a$. Clearly, we have $B^{\epsilon }\subset B^{\epsilon _a}$. Note that $\epsilon _0={\rm id} $ by (A1), and $\epsilon _a\circ \epsilon _b=\epsilon _{a+b}$ for all $a,b\in B^{\epsilon }$ by (A2). Hence, $\epsilon _a$ has the inverse $\epsilon _{-a}$, and $B^{\epsilon }\ni a\mapsto \epsilon _a\in \Aut _RB$ is a group homomorphism. \begin{definition}\label{def:exp}\rm We say that $\phi \in \Aut _RB$ is {\it exponential} if $\phi =\epsilon _a$ for some ${\bf G}_a $-action $\epsilon $ on $B$ and $a\in B^{\epsilon }$. If this is the case, we have $B^{\epsilon }\subset B^{\phi }$. If moreover $\ch R=p>0$, then we have $\phi ^p={\rm id} $, since $(\epsilon _a)^p=\epsilon _{pa}=\epsilon _0={\rm id} $. 
\end{definition} \begin{example}\label{example:action on A[x]}\rm Let $A$ be an $R$-domain, and $A[x]$ the polynomial ring in one variable over $A$. For $a\in A\setminus \{ 0\} $, we define $\widetilde{\epsilon }:A[x]\ni f(x)\mapsto f(x+aT)\in A[x][T]$. \noindent (i) $\widetilde{\epsilon }$ is a ${\bf G}_a $-action on $A[x]$ with $A[x]^{\widetilde{\epsilon }}=A$. \noindent (ii) $\widetilde{\epsilon }_b$ is equal to $A[x]\ni f(x)\mapsto f(x+ab)\in A[x]$ for each $b\in A$. \noindent (iii) Let $S$ be an $R$-subalgebra of $A[x]$ with $\widetilde{\epsilon }(S)\subset S[T]$. Then, $\widetilde{\epsilon }$ restricts to a ${\bf G}_a $-action $\epsilon $ on $S$ with $S^{\epsilon }=A\cap S$. In this case, $\epsilon _b$ is the restriction of $\widetilde{\epsilon }_b$ to $S$ for each $b\in A\cap S$. \end{example} The elementary automorphisms of $k[\x] $ are the exponential automorphisms for the ${\bf G}_a $-actions as in Example~\ref{example:action on A[x]} with $A=k[x_1,\ldots ,x_{n-1}]$ and $x=x_n$. Finally, we recall the following well-known fact (cf.~\cite{M1}). \begin{rem}\label{rem:Miyanishi}\rm Let $B$ be a $k$-domain, and $\epsilon $ a ${\bf G}_a $-action on $B$. \noindent (i) $B^{\epsilon }$ is {\it factorially closed} in $B$, i.e., $ab\in B^{\epsilon }$ implies $a,b\in B^{\epsilon }$ for each $a,b\in B\setminus \{ 0\} $. \noindent (ii) $B^{\epsilon }$ is {\it algebraically closed} in $B$, i.e., $a\in B$ belongs to $B^{\epsilon }$ if $f(a)=0$ for some $f(T)\in B^{\epsilon }[T]\setminus B^{\epsilon }$. \noindent (iii) If $B^{\epsilon }\ne B$ and $\trd _kB<\infty $, then we have $\trd _kB^{\epsilon }=\trd _kB-1$. \end{rem} \subsection{One and two variable cases} Let $p$ be a prime number, $R$ a ring with $\ch R=p$, and $R[x]$ the polynomial ring in one variable over $R$. For $a\in R\setminus \{ 0\} $, we define $\phi \in \Aut _RR[x]$ by $\phi (x)=x+a$. Then, $\phi $ is of order $p$, since $\bF _p\subset R$. Moreover, we have $x^p-a^{p-1}x\in R[x]^{\phi }$, since \begin{equation}\label{eq:x^p-ax} \phi (x^p-a^{p-1}x) =(x^p+a^p)-a^{p-1}(x+a) =x^p-a^{p-1}x. \end{equation} With this notation, the following lemma holds. \begin{lem}\label{lem:1var} If $a$ is not a zero-divisor of $R$, then we have $R[x]^{\phi }=R[x^p-a^{p-1}x]$. \end{lem} \begin{proof} We show that $f$ belongs to $R[x^p-a^{p-1}x]$ for all $f\in R[x]^{\phi }$ by induction on $l:=\deg f$. The assertion is clear if $l\leq 0$. Assume that $l\geq 1$, and let $b\in R\setminus \{ 0\} $ be the leading coefficient of $f$. Then, we have $0=\phi (f)-f=labx^{l-1}+\cdots $. Since $a$ is not a zero-divisor of $R$, this implies $p\mid l$. Set $f':=f-b(x^p-a^{p-1}x)^{l/p}\in R[x]^{\phi }$. Then, $\deg f'$ is less than $l$. Hence, $f'$ belongs to $R[x^p-a^{p-1}x]$ by the induction hypothesis. Therefore, $f=f'+b(x^p-a^{p-1}x)^{l/p}$ belongs to $R[x^p-a^{p-1}x]$. \end{proof} The following theorem\footnote{ The author announced this theorem, together with a counterexample to Question~\ref{question}, on the occasion of the 13th meeting of Affine Algebraic Geometry at Osaka on March 5, 2015 (see \cite{Miyanishi3}).} (cf.~\cite{Miyanishi3}, \cite{Maubach}) is based on the well-known fact that $\Aut _kk[x_1,x_2]$ is the amalgamated free product of $\Aff _2(k)$ and the triangular subgroup over their intersection. \begin{thm}\label{thm:Osaka} Let $k$ be a field of characteristic $p>0$, and let $\phi \in \Aut _kk[x_1,x_2]$ be of order $p$. Then, there exist $X_1,X_2\in k[x_1,x_2]$ and $f\in k[X_1]\setminus \{ 0\} $ such that $k[X_1,X_2]=k[x_1,x_2]$, $\phi (X_1)=X_1$ and $\phi (X_2)=X_2+f$.
\end{thm} \subsection{Plinth ideal}\label{sect:plinth} To begin with, let $B\subset B'$ be any rings, $\iota :B\to B'$ the inclusion map, and $\phi :B\to B'$ a ring homomorphism. We define $\delta :=\phi -\iota : B\ni b\mapsto \phi (b)-b\in B'$. Then, $\delta $ is $B^{\phi }$-linear, with $\ker \delta =B^{\phi }$. Moreover, the following (1) through (6) hold: \smallskip \noindent (1) $\delta (B)\cap B^{\phi }$ is an ideal of $B^{\phi }$. Indeed, since $\delta (B)$ and $B^{\phi }$ are $B^{\phi }$-submodules of $B'$, we see that $\delta (B)\cap B^{\phi }$ is a $B^{\phi }$-submodule of $B'$ contained in $B^{\phi }$, and hence an ideal of $B^{\phi }$. \noindent (2) $\delta (b^l)=\phi (b)^l-b^l =\delta (b)\sum _{i=0}^{l-1}\phi (b)^ib^{l-1-i}$ holds for each $b\in B$ and $l\ge 1$. \noindent (3) If $\ch B=p$ is a prime number, then $\delta (b^{p^e})=(\phi (b)-b)^{p^e}=\delta (b)^{p^e}$ holds for each $b\in B$ and $e\ge 0$. \noindent (4) $\delta (ab) =(\phi (a)-a)b+\phi (a)(\phi (b)-b) =\delta (a)b+\phi (a)\delta (b) =\delta (a)b+(\delta (a)+a)\delta (b)$ for each $a,b\in B$. \noindent (5) If $B=R[b_1,\ldots ,b_n]$ for some subring $R$ of $B^{\phi }$ and $b_1,\ldots ,b_n\in B$, then $\delta (B)\subset \sum _{i=1}^n\delta (b_i)A$ holds for $A:=B[\delta (b_1),\ldots ,\delta (b_n)]$. In fact, using (4), we can prove $\delta (b_1^{i_1}\cdots b_n^{i_n})\in \sum _{i=1}^n\delta (b_i)A$ for all $i_1,\ldots ,i_n\ge 0$ by induction on $i_1+\cdots +i_n$. \noindent (6) For a ${\bf G}_a $-action $\epsilon :B\to B[T]$ and $a\in B^{\epsilon }$, we set $\delta :=\epsilon -\iota :B\to B[T]$ and $\delta _a:=\epsilon _a-{\rm id} :B\to B$. Then, we have $\delta (B)\subset TB[T]$ by (A1), and so $\delta _a(B)\subset aB$. \smallskip Next, we consider the case where $B=B'$. For $\phi \in \Aut B$, we define \begin{equation}\label{eq:def pl phi} \pl (\phi ):=\delta (B)\cap B^{\phi }, \quad\text{where}\quad \delta :=\phi -{\rm id} . \end{equation} By (1), $\pl (\phi )$ is an ideal of $B^{\phi }$, which we call the {\it plinth ideal} of $\phi $. \begin{lem}\label{lem:pl principal} In the notation above, the following assertions hold. \noindent {\rm (i)} If $a\in \pl (\phi )$ is not a zero-divisor of $B$ and $\delta (B)\subset aB$, then $\pl (\phi )=aB^{\phi }$. \noindent {\rm (ii)} Assume that $B$ is a UFD, and $\pl (\phi )$ is a principal ideal of $B^{\phi }$. If $a,b\in \pl (\phi )\setminus \{ 0\} $ satisfy $\gcd (a,b)\in B^{\phi }$, then $\gcd (a,b)$ belongs to $\pl (\phi )$. \end{lem} \begin{proof} (i) Note that $aB^{\phi }\subset \pl (\phi ) =\delta (B)\cap B^{\phi }\subset aB\cap B^{\phi }$, since $a\in \pl (\phi )$ and $\delta (B)\subset aB$ by assumption. Hence, it suffices to show that $aB\cap B^{\phi }\subset aB^{\phi }$, i.e., $b\in B$ and $ab\in B^{\phi }$ imply $b\in B^{\phi }$. Since $ab\in B^{\phi }$ and $a\in \pl (\phi )\subset B^{\phi }$, we have $ab=\phi (ab)=a\phi (b)$. Since $a$ is not a zero-divisor, it follows that $b=\phi (b)$. (ii) Choose $c\in \pl (\phi )$ with $\pl (\phi )=cB^{\phi }$. Since $a,b\in \pl (\phi )\setminus \{ 0\} $, we have $c\ne 0$, and $a=ca'$ and $b=cb'$ for some $a',b'\in B^{\phi }$. Then, we get $\gcd (a,b)=c\gcd (a',b')$. Since $\gcd (a,b)\in B^{\phi }$ by assumption, and $c\in \pl (\phi )$, this implies $\gcd (a',b')\in B^{\phi }$ as in the proof of (i). Hence, $\gcd (a,b)=c\gcd (a',b')$ belongs to $cB^{\phi }=\pl (\phi )$.
\end{proof} \begin{example}\rm In the situation of Lemma~\ref{lem:1var}, we have $\pl (\phi ) =aR[x^p-a^{p-1}x]$ by Lemma~\ref{lem:pl principal} (i), since $\delta (x)=\phi (x)-x=a\in \pl (\phi )$, and $\delta (R[x])\subset aR[x]$ by (5). \end{example} \begin{rem}\label{rem:pl for ch order autom}\rm Let $B$ be a domain with $\ch B=p>0$ and $\phi \in \Aut B$ of order $p$. \noindent (i) It is well known that $B=\bigoplus _{i=0}^{p-1}B^{\phi }s^i$ holds for every $s\in B$ with $\phi (s)=s+1$ (cf.~e.g., \cite[Lemma 2.4]{Tani} for a proof using a pseudo-derivation). Here is another proof: $\phi $ extends to an automorphism of $Q(B)$ of order $p$. Since $[Q(B):Q(B)^{\phi }]=p$ and $s\not\in Q(B)^{\phi }$, we get $Q(B)=Q(B)^{\phi }(s)=\bigoplus _{i=0}^{p-1}Q(B)^{\phi }s^i$. Now, suppose that $B\ne \bigoplus _{i=0}^{p-1}B^{\phi }s^i$, and pick $b\in B\setminus \bigoplus _{i=0}^{p-1}B^{\phi }s^i$. Then, since $b\in Q(B)$, we can write $b=\sum _{i=0}^{p-1}b_is^i$, where $b_i\in Q(B)^{\phi }$. Subtracting $b_is^i\in B$ from $b$ if $b_i\in B^{\phi }$, we may assume that $b=\sum _{i=0}^lb_is^i$ and $b_l\not\in B^{\phi}$ for some $1\le l<p$. Choose $b$ with least $l$. Then, noting $\delta (s^i)=(s+1)^i-s^i=is^{i-1}+\cdots $, we can write $\delta (b)=\sum _{i=0}^lb_i\delta (s^i)=lb_ls^{l-1}+ \sum _{i=0}^{l-2}b_i's^i$, where $b_i'\in Q(B)^{\phi }$. Since $\delta (b)\in \delta (B)\subset B$ and $lb_l\in Q(B)^{\phi }\setminus B^{\phi }$, this contradicts the minimality of $l$. \noindent (ii) By (i), $B$ is a free $B^{\phi }$-module of rank $p$ if there exists $s\in B$ with $\phi (s)=s+1$. Even if such $s$ does not exist, the $B^{\phi }[1/u]$-module $B[1/u]$ is free of rank $p$ for all $u\in \pl (\phi )\setminus \{ 0\} $. In fact, $\phi $ extends to an automorphism of $B[1/u]$ with $B[1/u]^{\phi }=B^{\phi }[1/u]$ and $\phi (t/u)=t/u+1$, where $t\in B$ is such that $u=\delta (t)$, i.e., $\phi (t)=t+u$. \end{rem} \section{A rank three ${\bf G}_a $-action yields counterexamples to Question~\ref{question} }\label{sect:key} \setcounter{equation}{0} Assume that $\ch k=p>0$. Let $\epsilon $ be a ${\bf G}_a $-action on $k[\x] =k[x_1,x_2,x_3]$. For $h\in k[\x] ^{\epsilon }$, we define $\epsilon _h\in \Ex _3(k)$. Recall that $\pl (\epsilon _h)=\delta _h(k[\x] )\cap k[\x] ^{\epsilon _h}$, where $\delta _h:=\epsilon _h-{\rm id} $. The goal of this section is to prove the following theorem. \begin{thm}\label{thm:main} Assume that $\epsilon $ is of rank three, i.e., $\gamma (k[\x] ^{\epsilon })=0$. If $h\in k[\x] ^{\epsilon }$ satisfies the following condition $\clubsuit$, then we have $\gamma (k[\x] ^{\epsilon _h})=0$. \noindent $\clubsuit$ {\rm There exist $f_1,f_2\in k[\x] ^{\epsilon }$ such that $\pl (\epsilon _h)\subset f_1f_2k[\x] $ and $\trd_kk[f_1,f_2]=2$.} \end{thm} Since $\trd _kk[\x] ^{\epsilon }=2$ by Remark~\ref{rem:Miyanishi} (iii), there always exist $f_1,f_2\in k[\x] ^{\epsilon }$ such that $\trd_kk[f_1,f_2]=2$. Then, $h:=f_1f_2$ satisfies $\clubsuit$, since $\pl (\epsilon _h)\subset \delta _h(k[\x] )\subset hk[\x] $ by \S \ref{sect:plinth} (6). Hence, the existence of a rank three ${\bf G}_a $-action on $k[\x] $ implies the existence of $\phi \in \Ex _3(k)$ with $\gamma (k[\x] ^{\phi })=0$ (cf.~Corollary~\ref{cor:rank3}). \begin{lem}\label{thm:criterion} Let $R$ be a domain with $\ch R=p>0$, and $\phi \in \Aut _RR[x_1,x_2]$ of order $p$. 
Then, for each $q\in R[x_1,x_2]$ with $\pl (\phi )\subset qR[x_1,x_2]$, there exists an $R$-subalgebra $B$ of $R[x_1,x_2]^{\phi }$ such that $q\in B$ and the following $(*)$ holds: \noindent $(*)$ $\trd _RB=1$, and $B$ is factorially closed and algebraically closed in $R[x_1,x_2]$. \end{lem} \begin{proof} Put $K:=Q(R)$. Let $\tilde{\phi }\in \Aut _KK[x_1,x_2]$ be the extension of $\phi $. By Theorem~\ref{thm:Osaka}, there exist $X_1,X_2\in K[x_1,x_2]$ and $g\in K[X_1]\setminus \{ 0\} $ such that $K[X_1,X_2]=K[x_1,x_2]$, $\tilde{\phi }(X_1)=X_1$ and $\tilde{\phi }(X_2)=X_2+g$. Multiplying $X_2$ by an element of $K^*$, we may assume that $X_2$ lies in $R[x_1,x_2]$. Now, set $B:=K[X_1]\cap R[x_1,x_2]$. Then, we have $\trd _RB=1$; $B\subset R[x_1,x_2]^{\phi }$, since $\tilde{\phi }(X_1)=X_1$; and $B$ is factorially closed and algebraically closed in $R[x_1,x_2]$, since so is $K[X_1]$ in $K[x_1,x_2]$. From $g\in K[X_1]$, $X_2\in R[x_1,x_2]$ and $\phi (X_2)=X_2+g$, we see that $g=\phi (X_2)-X_2$ is in $B$. Since $B\subset R[x_1,x_2]^{\phi }$, this also shows $g\in \pl (\phi )$. Hence, we have $g=qh$ for some $h\in R[x_1,x_2]$, since $\pl (\phi )\subset qR[x_1,x_2]$ by assumption. Note that $qh=g$ is in $B$. Since $B$ is factorially closed in $R[x_1,x_2]$, it follows that $q$ lies in $B$. \end{proof} The following lemma is true for any field $k$, and is readily verified. \begin{lem}\label{lem:acfc} Let $A$ be a $k$-domain, and let $B$ and $B'$ be $k$-subalgebras of $A$. Assume that $r:=\trd _kB=\trd _kB'<\infty $, and $B$ and $B'$ are factorially closed and algebraically closed in $A$. If there exist $a_1,\ldots ,a_r\in A$ such that the product $a_1\cdots a_r$ belongs to $B$ and $B'$, and $\trd _kk[a_1,\ldots ,a_r]=r$, then we have $B=B'$. \end{lem} \begin{proof} Both $B$ and $B'$ are equal to the algebraic closure of $k[a_1,\ldots ,a_r]$ in $A$. \end{proof} \begin{proof}[Proof of Theorem~{\rm \ref{thm:main}}] Suppose that $\gamma (k[\x] ^{\epsilon _h})\ge 1$, and let $\sigma \in \Aut _kk[\x] $ be such that $R:=\sigma (k[x_1])\subset k[\x] ^{\epsilon _h}$. Then, $\epsilon _h$ is viewed as an element of $\Aut _RR[y_2,y_3]$, where $y_i:=\sigma (x_i)$. Since $\epsilon _h$ is exponential, $\epsilon _h$ is of order $p$. Moreover, we have $\pl (\epsilon _h)\subset f_1f_2k[\x] $ by the assumption $\clubsuit$. Hence, by Lemma~\ref{thm:criterion}, there exists an $R$-subalgebra $B$ of $k[\x] ^{\epsilon _h}$ such that $f_1f_2\in B$; $\trd _RB=1$, i.e., $\trd _kB=2$; and $B$ is factorially closed and algebraically closed in $k[\x] $. By Remark~\ref{rem:Miyanishi}, $k[\x] ^{\epsilon }$ is also factorially closed and algebraically closed in $k[\x] $, and $\trd _kk[\x] ^{\epsilon }=2$. We also have $f_1f_2\in k[\x] ^{\epsilon }$, and $\trd _kk[f_1,f_2]=2$ by $\clubsuit$. Thus, we get $B=k[\x] ^{\epsilon }$ by Lemma~\ref{lem:acfc}. Since $\sigma (k[x_1])=R\subset B$, this contradicts $\gamma (k[\x] ^{\epsilon })=0$. \end{proof} \section{A family of ${\bf G}_a $-actions}\label{sect:rank3 family} \setcounter{equation}{0} In Sections~\ref{sect:rank3 family} and \ref{sect:invariant ring}, we construct a family of ${\bf G}_a $-actions $\epsilon $ on $k[\x] =k[x_1,x_2,x_3]$ of rank three, and study $\epsilon _h\in \Ex _3(k)$ for $h\in k[\x] ^{\epsilon }$. \subsection{Construction of the ${\bf G}_a $-actions}\label{sect:rank3 construction} For the moment, let $k$ be any field with $\ch k=p\ge 0$. We fix $l,m\ge 1$ and $t\ge 2$ with $mt\ge 3$.
We define $f:=x_1x_3-x_2^t$, $r:=f^lx_2+x_1^m$ and \begin{align}\label{eq:rank3 g} g&:=x_1^{-1}(f^{lt+1}+r^t) =x_1^{-1} (f^{lt}(x_1x_3-x_2^t)+(f^lx_2+x_1^m)^t) \\ &=f^{lt}x_3+g^*+x_1^{mt-1}, \text{ where } g^*:=x_1^{-1}((f^lx_2+x_1^m)^t-f^{lt}x_2^t-x_1^{mt}). \notag \end{align} We note the following: \noindent {\bf 1}$^\circ $ $x_1=g^{-1}(f^{lt+1}+r^t)$, $x_2=f^{-l}(r-x_1^m)$ and $x_3=f^{-lt}(g-g^*-x_1^{mt-1})$. \noindent {\bf 2}$^\circ $ $g^*$ lies in $f^lx_2k[f^lx_2,x_1]$. If $p\mid t$, then $g^*$ lies in $(f^lx_2)^px_1k[(f^lx_2)^p,x_1]$. If $t$ is a power of $p$, then $g^*=0$. Now, we set $C:=k[f^{\pm 1},g^{\pm 1}]$. Here, $h^{\pm 1}$ stands for $h,h^{-1}$ for $h\in k[\x] \setminus \{ 0\} $. Then, from 1$^\circ $ and 2$^\circ $, we see that $k[\x] \subset C[r]$. This implies that \noindent {\bf 3}$^\circ $ $f$, $g$ and $r$ are algebraically independent over $k$. \noindent Hence, $C[r]$ is the polynomial ring in $r$ over $C$. Therefore, by Example~\ref{example:action on A[x]}, $$ \widetilde{\epsilon }:C[r]\ni u(r)\mapsto u(r+f^lgT)\in C[r][T] $$ is a ${\bf G}_a $-action on $C[r]$ with $C[r]^{\widetilde{\epsilon }}=C$. Moreover, we have $\widetilde{\epsilon }(k[\x] )\subset k[\x] [T]$ by Proposition~\ref{prop:restricts} (i) below. Hence, $\widetilde{\epsilon }$ restricts to a ${\bf G}_a $-action on $k[\x] $, which we denote by $\epsilon $. As shown in \S \ref{sect:intersection lemma}, $\epsilon $ is of rank three and $k[\x] ^{\epsilon }=C\cap k[\x] =k[f,g]$. When $p=0$, $t=2$ and $m=2l+1$, this ${\bf G}_a $-action is the same as that of Freudenburg~\cite{Frank}. \begin{prop}\label{prop:restricts} Set $\delta :=\widetilde{\epsilon }-\iota $ {\rm (cf.~\S \ref{sect:plinth})}. Then, the following assertions hold. \noindent {\rm (i)} For the ideal $J:=(x_1,x_2^{lt}x_2,x_2^{lt}x_3)$ of $k[\x] [T]$, we have $\delta (k[\x] )\subset TJ$. \noindent {\rm (ii)} If $p>0$ and $p\mid t$, then we have $\delta (k[\x] )\subset gTk[\x] [T]$. \end{prop} Note that $\delta (k[\x] )\subset \sum _{i=1}^3\delta (x_i)k[\x] [\delta (x_1),\delta (x_2),\delta (x_3)] \cap TC[r][T]$ by \S \ref{sect:plinth} (5) and (6). Hence, Proposition~\ref{prop:restricts} follows from (i) and (ii) of the following lemma. \begin{lem}\label{lem:rank3 J} {\rm (i)} $\delta (x_i)\in J$ holds for $i=1,2,3$. \noindent {\rm (ii)} If $p>0$ and $p\mid t$, then $\delta (x_i)\in gk[\x] [T]$ holds for $i=1,2,3$. \noindent {\rm (iii)} If $p>0$, and $m$ and $t$ are powers of $p$, then we have $\delta (x_1)=g^{-1}(f^lgT)^t$, $\delta (x_2)=gT-f^{-l}\delta (x_1)^m$, and $\delta (x_3)=-f^{-lt}\delta (x_1^{mt-1})$. \end{lem} The rest of \S \ref{sect:rank3 construction} is devoted to the proof of this lemma. First, note the following: \noindent {\bf 4}$^\circ $ Since $g\in x_1^{mt-1}+fk[\x] $ by (\ref{eq:rank3 g}) and 2$^\circ $, we have $\gcd (f,g)=\gcd (f,x_1^{mt-1})=1$. \noindent {\bf 5}$^\circ $ Since $f\in (x_1,x_2^t)$, we have $f^lx_i\in (x_1,x_2^{lt}x_i)\subset J$ for $i=1,2,3$. Hence, $f^l\fn \subset J$ holds for $\fn :=(x_1,x_2,x_3)$. \noindent {\bf 6}$^\circ $ $g^*$ is in $(f^lx_2{\cdot }x_1)$ if $m\ge 2$, and in $((f^lx_2)^2,f^lx_2{\cdot }x_1)$ if $m=1$ and $t\ge 3$. Hence, $g^*$ lies in $J^2$ by 5$^\circ $. Thus, we have $g\in (f^l{\cdot }f^lx_3,g^*,x_1^2)\subset f^lJ+J^2$. \noindent {\bf 7}$^\circ $ $r\in (f^lx_2,x_1)\subset J$ by 5$^\circ $, and so $\widetilde{\epsilon }(r)=r+f^lgT\in (r,g)\subset J$ by 6$^\circ $.
Hence, we know by \S \ref{sect:plinth} (2) that $$ \delta (r^u) =\delta (r)\sum _{i=0}^{u-1}\widetilde{\epsilon }(r)^ir^{u-1-i} =f^lgT\sum _{i=0}^{u-1}\widetilde{\epsilon }(r)^ir^{u-1-i}\in f^lgJ^{u-1} \text{ for all }u\ge 1. $$ \begin{proof}[Proof of Lemma~$\ref{lem:rank3 J}$] (i) $\delta $ is a linear map over $\ker \delta =C[r]^{\widetilde{\epsilon }}=C$ (cf.~\S \ref{sect:plinth}). Hence, \noindent {\bf 8}$^\circ $ we have $\delta (x_1)=\delta (g^{-1}(f^{lt+1}+r^t))= g^{-1}\delta (r^t)\in f^lJ^{t-1}$ by 1$^\circ $ and 7$^{\circ }$. \noindent Since $t\ge 2$, this proves $\delta (x_1)\in J$. Hence, $\widetilde{\epsilon }(x_1)=x_1+\delta (x_1)$ is in $J$. Moreover, \noindent {\bf 9}$^\circ $ $\delta (x_1)\fn \subset f^l\fn \cdot J^{t-1}\subset J^2$ holds by 5$^\circ $ and 8$^\circ $. Similarly, by 1$^\circ $ and \S \ref{sect:plinth} (2), we have \begin{equation}\label{eq:delta(x_2)} \delta (x_2) =f^{-l}(\delta (r)-\delta (x_1^m)) =gT-f^{-l}\delta (x_1)\sum _{i=0}^{m-1}\widetilde{\epsilon }(x_1)^ix_1^{m-1-i}, \end{equation} in which $g\in f^lJ+J^2$ by 6$^\circ $, $f^{-l}\delta (x_1)\in J^{t-1}$ by 8$^\circ $, and $\widetilde{\epsilon }(x_1),x_1\in J$. Hence, $\delta (x_2)$ is in $f^lJ+J^2+J^{t+m-2}$. Since $mt\ge 3$, we have $t+m\ge 4$. Thus, we get $\delta (x_2)\in f^lJ+J^2\subset J$. This implies $\widetilde{\epsilon }(x_2)=x_2+\delta (x_2)\in \fn $, and so \noindent {\bf 10}$^\circ $ $\delta (x_2^t)=\delta (x_2)\sum _{i=0}^{t-1}\widetilde{\epsilon }(x_2)^ix_2^{t-1-i} \in (f^lJ+J^2)\mathfrak{n}\subset J^2$ by 5$^\circ $, since $t\ge 2$. For $\delta (x_3)$, first note that $\delta (k[x_1,x_2,f])\subset \sum _{i=1}^2\delta (x_i)k[x_1,x_2,f,\delta (x_1),\delta (x_2)]$ by \S \ref{sect:plinth} (5), since $\delta (f)=0$. By 2$^\circ $, $g^*+x_1^{mt-1}$ is in $k[x_1,x_2,f]$. Hence, we get \noindent {\bf 11}$^\circ $ $\delta (x_3)=-f^{-lt}\delta (g^*+x_1^{mt-1}) \in f^{-lt}\sum _{i=1}^2\delta (x_i)k[x_1,x_2,f,\delta (x_1),\delta (x_2)]$ by 1$^\circ $. \noindent We have already proved that $\delta (x_1),\delta (x_2)\in J\subset k[\x] [T]$. Thus, $\delta (x_3)$ belongs to $f^{-lt}k[\x] [T]$ by 11$^\circ $. Since $0=\delta (f) =\delta (x_1)x_3+\widetilde{\epsilon }(x_1)\delta (x_3)-\delta (x_2^t)$ by \S \ref{sect:plinth} (4), we have $\widetilde{\epsilon }(x_1)\delta (x_3) =\delta (x_2^t)-\delta (x_1)x_3\in k[\x] [T]$. This implies $\delta (x_3)\in k[\x] [T]$, since $\widetilde{\epsilon }(x_1)=x_1+\delta (x_1)\in x_1+fJ^{t-1}$ by 8$^\circ $, and hence $\gcd (f,\widetilde{\epsilon }(x_1))=\gcd (f,x_1)=1$. Suppose that $\delta (x_3)\not\in J$. Then, there appears in $\delta (x_3)$ a monomial $h=x_2^{i_2}x_3^{i_3}T^j$ not in $J$. For such $h$, we have $x_1h\not\in J^2$. Since $J^2$ is a monomial ideal, this means that the monomial $x_1h$ does not appear in any polynomial belonging to $J^2$. We choose $h$ so that $i_2+i_3$ is minimal. Now, observe that \begin{equation}\label{eq:lem J delta(x_3)} \delta (x_2^t)-\delta (x_1)x_3 =\widetilde{\epsilon }(x_1)\delta (x_3)=x_1\delta (x_3)+\delta (x_1)\delta (x_3). \end{equation} By 9$^\circ $ and 10$^\circ $, $\delta (x_2^t)-\delta (x_1)x_3$ lies in $J^2$. Hence, $x_1h$ does not appear in (\ref{eq:lem J delta(x_3)}). Clearly, $x_1h$ appears in $x_1\delta (x_3)$. Thus, $x_1h$ must appear in $\delta (x_1)\delta (x_3)$. This implies $i_2+i_3\ge 1$, since $\delta (x_1)\delta (x_3)\in (f)\subset \fn ^2$ by 8$^\circ $. Moreover, $\delta (x_3)$ is not in $\fn $, for otherwise $\delta (x_1)\delta (x_3)\in J^2$ by 9$^\circ $. 
Hence, the monomial $T^{j'}$ appears in $\delta (x_3)$ for some $j'\ge 0$. This contradicts the minimality of $i_2+i_3$. (ii) We have $\delta (x_1)=g^{-1}\delta (r^t)=g^{-1}\delta (r^{t/p})^p \in g^{-1}(f^lg)^pk[\x] [T]\subset f^lgk[\x] [T]$ by 8$^\circ $, \S \ref{sect:plinth} (3) and 7$^\circ $. This implies $\delta (x_1)\in gk[\x] [T]$, and also $\delta (x_2)\in gk[\x] [T]$ by (\ref{eq:delta(x_2)}). Then, 11$^\circ $ yields that $\delta (x_3)\in f^{-lt}gk[\x] [T]$. Since $\delta (x_3)\in k[\x] [T]$ by (i), and $\gcd (f,g)=1$ by 4$^\circ $, it follows that $\delta (x_3)\in gk[\x] [T]$. (iii) For $\delta (x_1)$ and $\delta (x_2)$, apply \S \ref{sect:plinth} (3) to 8$^\circ $ and (\ref{eq:delta(x_2)}). Since $g^*=0$ by 2$^\circ $, we have $\delta (x_3)=-f^{-lt}\delta (g^*+x_1^{mt-1}) =-f^{-lt}\delta (x_1^{mt-1})$ by 1$^\circ $. \end{proof} \subsection{Plinth ideals}\label{sect:rank3 exp} Assume that $p>0$. Let $\epsilon $ be the ${\bf G}_a $-action on $k[\x] $ as in \S \ref{sect:rank3 construction}. For $h\in k[\x] ^{\epsilon }\setminus \{ 0\} $, we define $\epsilon _h\in \Ex _3(k)$, and a $k[\x] ^{\epsilon _h}$-linear map $\delta _h:=\epsilon _h-{\rm id} $. In \S \ref{sect:rank3 exp}, we study the plinth ideal $\pl (\epsilon _h)=\delta _h(k[\x] )\cap k[\x] ^{\epsilon _h}$. Our goal is to prove the following theorem. \begin{thm}\label{thm:plinth rank 3} \noindent{\rm (i)} If $p\nmid t$, then $\pl (\epsilon _h)$ is not a principal ideal of $k[\x] ^{\epsilon _h}$. \noindent{\rm (ii)} If $p\mid t$, then we have $\pl (\epsilon _h)=ghk[\x] ^{\epsilon _h}$. \end{thm} By definition, we have $\epsilon _h(r)=\epsilon (r)|_{T=h}=r+f^lgh$, and $\delta _h(r)=\epsilon _h(r)-r=f^lgh$. Since $f,g,h\in k[\x] ^{\epsilon }\subset k[\x] ^{\epsilon _h}$, it follows that \noindent {\bf 12}$^\circ $ $f^lgh$ belongs to $\pl (\epsilon _h)$. Consequently, we have $f^lghk[\x] ^{\epsilon _h}\subset \pl (\epsilon _h)$. \begin{lem}\label{lem:rank3 pl} If $p\nmid t$, then $f^{l'}h$ belongs to $\pl (\epsilon _h)$ for some $l'\ge l$. \end{lem} \begin{proof} By Euler's theorem, $p^v\equiv 1\pmod{t}$ holds for $v:=|({\bf Z} /t{\bf Z} )^*|$, since $p\nmid t$ by assumption. Set $u:=(p^v-1)/t$. Then, since $r^t+f^{lt+1}=x_1g$ by (\ref{eq:rank3 g}), we have $$ s:=r^{p^v}-(-f^{lt+1})^ur=(r^{tu}-(-f^{lt+1})^u)r \in (r^t-(-f^{lt+1}))k[r,f]\subset gk[\x] . $$ Hence, $\delta _h(g^{-1}s)$ is in $\delta _h(k[\x] )$. On the other hand, by \S \ref{sect:plinth} (3), we have $g\delta _h(g^{-1}s)=\delta _h(s) =\delta _h(r)^{p^v}-(-f^{lt+1})^u\delta _h(r) =(f^lgh)^{p^v}-(-f^{lt+1})^uf^lgh$. This gives that \begin{align}\label{eq:(Z/tZ)^*} \delta _h(g^{-1}s)=g^{p^v-1}(f^lh)^{p^v}-(-f^{lt+1})^uf^lh \in k[f,g,h]\subset k[\x] ^{\epsilon _h}. \end{align} Thus, (\ref{eq:(Z/tZ)^*}) belongs to $\pl (\epsilon _h)$. In the right-hand side of (\ref{eq:(Z/tZ)^*}), $g^{p^v-1}(f^lh)^{p^v}$ belongs to $\pl (\epsilon _h)$ by 12$^\circ $. Therefore, $(f^{lt+1})^uf^lh$ belongs to $\pl (\epsilon _h)$. \end{proof} Now, we set $I:=x_1k[\x] +x_2^{lt}x_2k[\x] +x_2^{lt}x_3k[\x] $. Then, by Proposition~\ref{prop:restricts}, \noindent {\bf 13}$^\circ $ we have $\delta _h(k[\x] )\subset hI$. Moreover, if $p\mid t$, then $\delta _h(k[\x] )\subset ghk[\x] $. \begin{proof}[Proof of Theorem {\rm \ref{thm:plinth rank 3} (i)}] By Lemma~\ref{lem:rank3 pl} and 12$^\circ $, $f^{l'}h$ and $f^lgh$ belong to $\pl (\epsilon _h)$, where $l'\ge l$. Moreover, we have $\gcd (f^{l'}h,f^lgh)=f^lh$ by 4$^\circ $, and $f^lh$ is in $k[\x] ^{\epsilon _h}$. 
Hence, if $\pl (\epsilon _h)$ is principal, then $f^lh$ must lie in $\pl (\epsilon _h)$ by Lemma~\ref{lem:pl principal} (ii). Since $\pl (\epsilon _h)\subset \delta _h(k[\x] )\subset hI$ by 13$^\circ $, it follows that $f^lh\in hI$, and so $f^l\in I$. Since $x_1\in I$, this implies that $x_2^{lt}\in I$, a contradiction. \end{proof} To prove (ii), we need to construct some elements of $k[\x] ^{\epsilon _h}$. The following remark is also used in Section~\ref{sect:Nagata}. \begin{notation}\label{notation:taylor}\rm Let $R$ be a ring, and $P(T)\in R[T]$. For each $i\ge 0$, we define $P_i(T)\in R[T]$ by $P(T+U)=\sum _{i\ge 0}P_i(T)U^i$. We note that $P_0(T)=P(T)$ and $P_1(T)=P'(T):=dP(T)/dT$ regardless of the characteristic of $R$. \end{notation} \begin{rem}\label{rem:q_1}\rm Let $B$ be a domain with $\ch B=p>0$. Assume that $\phi \in \Aut B$, $a,b\in B^{\phi }\setminus \{ 0\} $ and $c\in B$ satisfy $\phi (c)=c+ab$. Then, $B^{\phi }$ contains $q:=c^p-(ab)^{p-1}c$ (cf.~(\ref{eq:x^p-ax})). Now, let $S$ be a subring of $B^{\phi }$ and let $\xi (T)=\sum _{i\ge 0}r_iT^i\in S[T]$, where $r_i\in S$. If $\xi (c)=aw$ for some $w\in B$, then the following (i) and (ii) hold. \noindent (i) Set $\xi ^p(T):=\sum _{i\ge 0}r_i^pT^i$. Then, we have $\xi ^p(c^p)=\xi (c)^p=(aw)^p$ and $(\xi ^p)_1(c^p) =\sum _{i\ge 0}ir_i^pc^{(i-1)p} =\xi '(c)^p$. Hence, we get \begin{align}\label{eq:q_1} \widetilde{q}_1:=a^{1-p}\xi ^p(q) &=a^{1-p}\Bigl( \xi ^p(c^p) -(\xi ^p)_1(c^p){\cdot }(ab)^{p-1}c +\sum _{i\ge 2}(\xi ^p)_i(c^p){\cdot } (-(ab)^{p-1}c)^i \Bigr) \notag \\ &\quad \in aw^p-\xi '(c)^p{\cdot } b^{p-1}c+a^{p-1}b^{2(p-1)}c^2S[ab,c] \subset B. \end{align} Since $a,\xi ^p(q)\in B^{\phi }$, it follows that $\widetilde{q}_1\in B^{\phi }$. \noindent (ii) Assume that $\xi (T)=\xi ^*(T^p)-a\widehat{\xi }(T)$ for some $\xi ^*(T),\widehat{\xi }(T)\in S[T]$. Then, we have $\xi ^*(c^p)=\xi (c)+a\widehat{\xi }(c)=a(w+\widehat{\xi }(c))$. Hence, we get \begin{equation}\label{eq:q_1^*} \begin{aligned} q_1&:=a^{-1}\xi ^*(q) =a^{-1}\Bigl( \xi ^*(c^p)+\sum _{i\ge 1}(\xi ^*)_i(c^p){\cdot }(-(ab)^{p-1}c)^i \Bigr) \\ &\quad \in w+\widehat{\xi }(c)+a^{p-2}b^{p-1}cS[ab,c]\subset B. \end{aligned} \end{equation} Thus, $q_1$ belongs to $B^{\phi }$ as in (i). \end{rem} Now, observe that $\epsilon _h(r)=r+g{\cdot }f^lh$ and $f^{lt+1}+r^t=g{\cdot }x_1$. Hence, we can use Remark~\ref{rem:q_1} for $(a,b,c,S,\xi (T),w) =(g,f^lh,r,k[f],f^{lt+1}+T^t,x_1)$. Thus, \begin{equation}\label{eq:rank3 q} q:=r^p-(f^lgh)^{p-1}r =(f^{lp}x_2^p+x_1^{mp})-(f^lgh)^{p-1}(f^lx_2+x_1^m) \end{equation} belongs to $k[\x] ^{\epsilon _h}$. Moreover, if $p\mid t$, then we can write $f^{lt+1}+T^t=\xi ^*(T^p)-g\widehat{\xi }(T)$, where $\xi ^*(T):=f^{lt+1}+T^{t/p}$ and $\widehat{\xi }(T):=0$. Hence, by (\ref{eq:q_1^*}), $k[\x] ^{\epsilon _h}$ contains \begin{equation}\label{eq:rank3 q_1} q_1:=g^{-1}(f^{lt+1}+q^{t/p}) \in x_1+g^{p-2}(f^lh)^{p-1}rk[f,f^lgh,r]. \end{equation} \begin{proof}[Proof of Theorem {\rm \ref{thm:plinth rank 3} (ii)}] Since $p\mid t$ by assumption, we have $\delta _h(k[\x] )\subset ghk[\x] $ by 13$^\circ $. Hence, by Lemma~\ref{lem:pl principal} (i), it suffices to show that $gh$ belongs to $\pl (\epsilon _h)$. Since $r\equiv x_1^m$, $q_1\equiv x_1\pmod{f^lk[\x] }$, we have $s:=r-q_1^m\in f^lk[\x] $. Hence, $\delta _h(f^{-l}s)$ is in $\delta _h(k[\x] )$. On the other hand, we have $f^l\delta _h(f^{-l}s)=\delta _h(s)=\delta _h(r)=f^lgh$, since $q_1\in k[\x] ^{\epsilon _h}=\ker \delta _h$. Thus, $\delta _h(f^{-l}s)=gh$ is in $k[\x] ^{\epsilon _h}$. 
This proves $gh\in \pl (\epsilon _h)$. \end{proof} \section{Invariant ring}\label{sect:invariant ring} \setcounter{equation}{0} Let $\epsilon $ be the ${\bf G}_a $-action on $k[\x] =k[x_1,x_2,x_3]$ defined in \S \ref{sect:rank3 construction}. In this section, we prove the following three theorems. Theorem~\ref{thm:k[f,g]} holds for any $k$. \begin{thm}\label{thm:k[f,g]} We have $k[\x] ^{\epsilon }=k[f,g]$, and $\epsilon $ is of rank three, i.e., $\gamma (k[\x] ^{\epsilon })=0$. \end{thm} Now, assume that $p>0$. For $0\ne h\in k[\x] ^{\epsilon }=k[f,g]$, we define $\epsilon _h\in \Ex _3(k)$. The following corollary is a consequence of Theorems~\ref{thm:main} and \ref{thm:k[f,g]}. The case (1) is the same as the remark after Theorem~\ref{thm:main}, but we use 13$^\circ $ for the case (2). \begin{cor}\label{cor:rank3} We have $\gamma (k[\x] ^{\epsilon _h})=0$ if $p>0$ and one of the following holds. \noindent{\rm (1)} $h=f_1f_2$ for some $f_1,f_2\in k[f,g]$ with $\trd _kk[f_1,f_2]=2$. \noindent{\rm (2)} $p\mid t$ and $h\in k[f,g]\setminus k[g]$. \end{cor} Note that $\epsilon _h$ is the restriction of $\widetilde{\epsilon }_h:C[r]\ni u(r)\mapsto u(r+f^lgh)\in C[r]$ to $k[\x] $. By Lemma~\ref{lem:1var}, we have $C[r]^{\widetilde{\epsilon }_h}=C[q]$, where $q$ is as in (\ref{eq:rank3 q}). Hence, we get \begin{equation}\label{eq:C[r]^{ep_h}} \begin{aligned} &k[\x] ^{\epsilon _h}=C[r]^{\widetilde{\epsilon }_h}\cap k[\x] =C[q]\cap k[\x] =k[f^{\pm 1},g^{\pm 1},q]\cap k[\x] . \end{aligned} \end{equation} To describe $k[\x] ^{\epsilon _h}$, we use the isomorphism \begin{equation}\label{eq:psi} \psi :C[r^p]\ni u(r^p)\mapsto u(r^p-(f^lgh)^{p-1}r)=u(q)\in C[q]=C[r]^{\widetilde{\epsilon }_h}. \end{equation} By 1$^\circ $ and 2$^\circ $, it is easy to check that $x_i^p\in C[r^p]$ for $i=1,2,3$. If $p\mid t$, then we also have $x_1,x_3\in C[r^p]$. Moreover, $\psi (x_1)=\psi (g^{-1}(f^{lt+1}+r^t))=g^{-1}(f^{lt+1}+q^{t/p})$ is equal to $q_1$ in (\ref{eq:rank3 q_1}). With this notation, the following theorems hold. \begin{thm}\label{thm:rank3 invariant ring} Assume that $p\mid t$. \noindent {\rm (i)} There exist $q_2,q_3\in k[\x] ^{\epsilon _h}$ such that $q_1q_3-q_2^{t/p}=f$ and $k[\x] ^{\epsilon _h}=k[q_1,q_2,q_3]$. \noindent {\rm (ii)} We have $\psi (k[x_1,x_2^p,x_3])=k[\x] ^{\epsilon _h}$ if and only if $h^{p-1}\in f^lk[f,g]$. \end{thm} Let $k[{\boldsymbol x},y,z]=k[\x] [y,z]$ be the polynomial ring in five variables over $k$. \begin{thm}\label{thm:rank3 invariant ring2} Assume that $p\nmid t$, $p\nmid mt-1$ and $h^{p-1}\in f^{l+1}g^2k[f,g]\setminus \{ 0\} $. \noindent {\rm (i)} We have $k[\x] ^{\epsilon _h}=\psi (k[x_1^p,x_2^p,x_3^p,f,g]) =k[\psi (x_1^p),\psi (x_2^p),\psi (x_3^p),f,g]$. \noindent {\rm (ii)} The $k$-algebra $k[\x] ^{\epsilon _h}$ is isomorphic to $k[{\boldsymbol x},y,z]/(y^p-f,z^p-g)$, and is not isomorphic to $k[\x] $. \end{thm} \subsection{Intersection lemmas}\label{sect:intersection lemma} First, we prove some lemmas. If $I$ is an ideal of a ring $B$, then $\pi :B\to B/I$ denotes the natural surjection. Now, let $R\subset B$ be domains, $a_1,\ldots ,a_s,b\in B\setminus \{ 0\} $ with $b\not\in B^*$, and $R[\y] =R[y_1,\ldots ,y_s]$ the polynomial ring in $s$ variables over $R$. For $\sigma :R[\y] \ni u({\boldsymbol y} )\mapsto u(a_1,\ldots ,a_s)\in B$ and $\pi :B\to B/bB$, we set $\overline{\sigma }:=\pi \circ \sigma $ and $A:=\sigma (R[\y] )=R[a_1,\ldots ,a_s]$. Then, the following lemma holds.
\begin{lem}\label{lem:intersection} If there exists $\mathcal{S}\subset R[\y] $ such that $\ker \overline{\sigma }=(\mathcal{S})$ and $\sigma (\mathcal{S})\subset bA[b]$, then we have $A[b^{\pm 1}]\cap B=A[b]$. If moreover $b$ is in $A$, then $A[b^{-1}]\cap B=A$. \end{lem} \begin{proof} First, we show that $A\cap bB\subset bA[b]$. Pick any $a\in A\cap bB$. Since $a\in A$, we can write $a=\sigma (w)$, where $w\in R[\y] $. Then, we have $w\in \ker \overline{\sigma }=(\mathcal{S})$, since $\sigma (w)=a\in bB$. Write $w=\sum _iw_iq_i$, where $w_i\in \mathcal{S}$ and $q_i\in R[\y] $. Since $\sigma (w_i)\in bA[b]$ by assumption, and $\sigma (q_i)\in A$, it follows that $a=\sigma (w)=\sum _i\sigma (w_i)\sigma (q_i)\in bA[b]$. Now, we prove that $A[b^{\pm 1}]\cap B\subset A[b]$ by contradiction. Suppose that there exists $c\in A[b^{\pm 1}]\cap B\setminus A[b]$. Choose the least $u\ge 1$ with $cb^u\in A[b]$, and write $cb^u=\sum _{i\ge 0}c_ib^i$, where $c_i\in A$. Then, $c_0$ is not in $bA[b]$ by the minimality of $u$. On the other hand, $c_0=cb^u-\sum _{i\ge 1}c_ib^i$ belongs to $A\cap bB\subset bA[b]$ by the discussion above. This is a contradiction. The inclusion $A[b^{\pm 1}]\cap B\supset A[b]$ is clear. \end{proof} \begin{rem}\label{rem:indep}\rm Regard $B/bB$ as an $R$-algebra. If $\overline{a}_1,\ldots ,\overline{a}_s\in B/bB$ are algebraically independent over $R$, then the assumption of Lemma~\ref{lem:intersection} holds with $\mathcal{S}=\{ 0\} $. \end{rem} Next, we consider the case where $b$ is in $R$. Since $bR\subset R\cap bB$, there exists a natural homomorphism $\overline{R}:=R/bR\to R/(R\cap bB)\to B/bB$. Hence, we can regard $B/bB$ as an $\overline{R}$-algebra. We define a substitution map $\widehat{\sigma }:\overline{R}[{\boldsymbol y} ]\to B/bB$ by $\widehat{\sigma }(y_i)=\overline{a}_i$ for $i=1,\ldots ,s$. Then, the following lemma holds. \begin{lem}\label{lem:intersection 2} Assume that $b$ is in $R$. If there exists $\mathcal{S}\subset R[\y] $ such that the image of $\mathcal{S}$ in $\overline{R}[{\boldsymbol y} ]$ generates $\ker \widehat{\sigma }$, and $\sigma (\mathcal{S})\subset bA[b]$, then we have $A[b^{-1}]\cap B=A$. \end{lem} \begin{proof} Note that $\overline{\sigma }$ equals $R[\y] \twoheadrightarrow \overline{R}[{\boldsymbol y} ] \stackrel{\widehat{\sigma }}{\to }B/bB$. Since $\ker \widehat{\sigma }$ is generated by the image of $\mathcal{S}$ in $\overline{R}[{\boldsymbol y} ]$, we see that $\ker \overline{\sigma }=(\{ b\} \cup \mathcal{S})$. Since $\sigma (b)=b\in bA[b]$, and $\sigma (\mathcal{S})\subset bA[b]$ by assumption, we get $A[b^{-1}]\cap B=A$ by Lemma~\ref{lem:intersection}. \end{proof} \begin{rem}\label{rem:indep2}\rm Assume that $b$ is in $R$. If $\overline{a}_1,\ldots ,\overline{a}_s\in B/bB$ are algebraically independent over $R/bR$, then the assumption of Lemma~\ref{lem:intersection 2} holds with $\mathcal{S}=\{ 0\} $. \end{rem} \begin{rem}\label{rem:intersection}\rm Let $S$ be a subring of $B$, and $a,b\in B\setminus \{ 0\} $. If $S[a,b^{\pm 1}]\cap B=S[a,b]$, then we have $S[a^{\pm 1},b^{\pm 1}]\cap B=S[a^{\pm 1},b]\cap B$. Actually, if $c\in S[a^{\pm 1},b^{\pm 1}]\cap B$, there exists $u\ge 0$ such that $a^uc\in S[a,b^{\pm 1}]\cap B=S[a,b]$. This implies $c\in S[a^{\pm 1},b]$. \end{rem} \begin{proof}[Proof of Theorem~{\rm \ref{thm:k[f,g]}}] Since $k[\x] ^{\epsilon }= C\cap k[\x] =k[f^{\pm 1},g^{\pm 1}]\cap k[\x] $, we show that (i) $k[f^{\pm 1},g^{\pm 1}]\cap k[\x] =k[f^{\pm 1},g]\cap k[\x] $ and (ii) $k[ f^{\pm 1},g]\cap k[\x] =k[f,g]$.
For (i), it suffices to check that (i$'$) $k[f,g^{\pm 1}] \cap k[\x] =k[f,g]$ by Remark~\ref{rem:intersection}. (i$'$) By Remark~\ref{rem:indep} with $(R,a_1,b)=(k,f,g)$, it suffices to show that $\overline{f}\in k[\x] /gk[\x] $ is transcendental over $k$. Supposing the contrary, there exist $\nu (T)\in k[T]\setminus \{ 0\} $ and $H\in k[\x] $ such that $\nu (f)=gH$. Then, we get $\nu (-x_2^t)=g|_{x_3=0}{\cdot }H|_{x_3=0}$ by the substitution $x_3\mapsto 0$. This implies that $g|_{x_3=0}\in k[x_2]$, which is absurd (cf.~(\ref{eq:rank3 g})). (ii) We repeat the same argument with $f$ and $g$ interchanged. If $\nu (g)=fH$ for some $\nu (T)\in k[T]\setminus \{ 0\} $ and $H\in k[\x] $, then we get $\nu (x_1^{mt-1})=0$ by the substitution $x_2,x_3\mapsto 0$, since $f\mapsto 0$ and $g\mapsto x_1^{mt-1}$ by 2$^\circ $. This is a contradiction. Observe that $f$ and $g$ have no linear part, since $t,mt-1\ge 2$. Hence, no element of $k[f,g]$ has a linear part. This implies that $\gamma (k[f,g])=0$ (cf.~\cite{Frank}). In fact, $\sigma (x_i)$ has a linear part for all $\sigma \in \Aut _kk[\x] $ and $i$, since the Jacobian of $\sigma $ lies in $k^*$. \end{proof} \subsection{Proof of Theorem~\ref{thm:rank3 invariant ring}} \label{sect:rank3 invariant proof} The goal of \S \ref{sect:rank3 invariant proof} is to prove Theorem~\ref{thm:rank3 invariant ring}. For the moment, let $\overline{u}$ denote the image of $u\in k[\x] $ in $k[\x] /gk[\x] $. Then, from (\ref{eq:rank3 q}) and (\ref{eq:rank3 q_1}), we see that \noindent {\bf 14}$^\circ $ $\overline{q}=\overline{r^p-(f^lgh)^{p-1}r}=\overline{r^p}$ and $\overline{q}_1\in \overline{x}_1 +\overline{g^{p-2}(f^lh)^{p-1}r}k[\overline{f},\overline{r}] \subset \overline{x}_1+k[\overline{f},\overline{r}]$. \begin{lem}\label{lem:K_g} $g$ is irreducible in $k[\x] $. If $p\mid t$, then we have $\trd _kk[\overline{f},\overline{q},\overline{q}_1]=2$. \end{lem} \begin{proof} For the first part, suppose that $g=p_1p_2$ for some $p_1,p_2\in k[\x] \setminus k$. Then, $p_1$ and $p_2$ are in $k[f,g]$, since $k[f,g]=k[\x] ^{\epsilon }$ is factorially closed in $k[\x] $ by Remark~\ref{rem:Miyanishi} (i). Hence, we can write $g=\gamma _1(f,g)\gamma _2(f,g)$, where $\gamma _i(x,y)\in k[x,y]\setminus k$. This contradicts 3$^\circ $. For the last part, note that $\trd _kk[\x] /gk[\x] =2$, and $k[\overline{f},\overline{q},\overline{q}_1]=k[\overline{f},\overline{r}^p,\overline{q}_1]$ by 14$^\circ $. Hence, it suffices to show that $Q(k[\x] /gk[\x] )$ is algebraic over $k(\overline{f},\overline{r}^p,\overline{q}_1)$. We have $x_2=f^{-l}(r-x_1^m)$ by 1$^\circ $, and $x_3=x_1^{-1}(f+x_2^t)$ since $f=x_1x_3-x_2^t$. Hence, $x_2$ and $x_3$ are in $k[f^{\pm 1},r,x_1^{\pm 1}]$. Clearly, $\overline{f}$ and $\overline{x}_1$ are nonzero. Thus, $Q(k[\x] /gk[\x] )=k(\overline{x}_1,\overline{x}_2,\overline{x}_3)$ is equal to $k(\overline{f},\overline{r},\overline{x}_1)$. Since $\overline{q}_1\in \overline{x}_1+k[\overline{f},\overline{r}]$ by 14$^\circ $, $k(\overline{f},\overline{r},\overline{x}_1)$ is equal to $k(\overline{f},\overline{r},\overline{q}_1)$, which is algebraic over $k(\overline{f},\overline{r}^p,\overline{q}_1)$. \end{proof} Let $k[\y ] =k[y_1,y_2,y_3]$ be the polynomial ring in three variables over $k$. \begin{prop}\label{prop:invariant ring first step} If $p\mid t$, then we have $k[\x] ^{\epsilon _h}=k[f^{\pm 1},g,q,q_1]\cap k[\x] $.
\end{prop} \begin{proof} Since $q_1$ is in $k[\x] ^{\epsilon _h}$, (\ref{eq:C[r]^{ep_h}}) implies that $k[\x] ^{\epsilon _h}=k[f^{\pm 1},g^{\pm 1},q,q_1]\cap k[\x] $. We show that this is equal to $k[f^{\pm 1},g,q,q_1]\cap k[\x] $. By Remark~\ref{rem:intersection}, it suffices to verify that $k[f,g^{\pm 1},q,q_1]\cap k[\x] =k[f,g,q,q_1]$, i.e., $A[g^{\pm 1}]\cap k[\x] =A[g]$, where $A:=k[f,q,q_1]$. For $\sigma :k[\y ] \ni \nu ({\boldsymbol y} )\mapsto \nu (f,q,q_1)\in k[\x] $ and $\pi :k[\x] \to k[\x] /gk[\x] $, we set $\overline{\sigma }=\pi \circ \sigma $. Then, we have $\trd _k\overline{\sigma }(k[\y ] )= \trd _kk[\overline{f},\overline{q},\overline{q}_1]=2$ by Lemma~\ref{lem:K_g}. Hence, $\ker \overline{\sigma }$ is a prime ideal of $k[\y ] $ of height one, and thus principal. Therefore, $\eta \in \ker \overline{\sigma }$ satisfies $\ker \overline{\sigma }=(\eta )$ whenever $\eta $ is irreducible in $k[\y ] $. Now, observe that (1) $\sigma (y_1^{lt+1}+y_2^{t/p})=f^{lt+1}+q^{t/p}=gq_1\in gA[g]$ by (\ref{eq:rank3 q_1}), and (2) $y_1^{lt+1}+y_2^{t/p}$ is irreducible in $k[\y ] $, since $\gcd (lt+1,t/p)=1$. (1) implies that $\overline{\sigma }(y_1^{lt+1}+y_2^{t/p})=0$. Hence, $\ker \overline{\sigma }=(y_1^{lt+1}+y_2^{t/p})$ holds by (2). Then, $A[g^{\pm 1}]\cap k[\x] =A[g]$ follows from (1) and Lemma~\ref{lem:intersection}. \end{proof} Since $f=x_1x_3-x_2^t$ is irreducible, $k[\x] /fk[\x] $ is a $k$-domain of transcendence degree two. In what follows, $\overline{u}$ denotes the image of $u\in k[\x] $ in $k[\x] /fk[\x] $. Then, \noindent {\bf 15}$^\circ $ we have $\overline{q}_1=\overline{x}_1$ by (\ref{eq:rank3 q_1}), and $\overline{g}=\overline{x}_1^{mt-1}$ by (\ref{eq:rank3 g}) and 2$^\circ $. \begin{lem}\label{lem:technical} If $p\mid t$, then there exists $\xi \in q_1k[f,q_1]$ such that $q_2:=f^{-lp}(q-\xi )$ belongs to $k[\x] $, and $\overline{q}_1=\overline{x}_1$ and $\overline{q}_2$ are algebraically independent over $k$. \end{lem} First, we prove Theorem~\ref{thm:rank3 invariant ring} (i) by assuming this lemma. \begin{proof}[Proof of Theorem~{\rm \ref{thm:rank3 invariant ring} (i)}] First, we construct $q_3$. Let $\xi $ and $q_2$ be as in Lemma~\ref{lem:technical}. Since $f^{lp}q_2=q-\xi $ and $\xi \in q_1k[f,q_1]$, we see that \begin{equation}\label{eq:rank3 lambda} \lambda := q_1^{-1}(q^{t/p}-(f^{lp}q_2)^{t/p})= q_1^{-1}(q^{t/p}-(q-\xi )^{t/p})\in k[f,q_1,q]. \end{equation} Now, set $q_3:=f^{-lt}(g-\lambda )\in k[f^{\pm 1},g,q_1,q]$. Then, we have $g=f^{lt}q_3+\lambda $, and \begin{align*} f^{lt+1}+q^{t/p} \stackrel{\text{(\ref{eq:rank3 q_1})}}{=} q_1g =q_1(f^{lt}q_3+\lambda ) \stackrel{\text{(\ref{eq:rank3 lambda})}}{=} f^{lt}q_1q_3+(q^{t/p} -(f^{lp}q_2)^{t/p}). \end{align*} This gives that $f=q_1q_3-q_2^{t/p}$. We show that $q_3$ is in $k[\x] $. By definition, $q_3$ is in $k[\x] [f^{-1}]$. Moreover, $q_1q_3$ is in $k[\x] $, since $q_1q_3=f+q_2^{t/p}$ and $f,q_2\in k[\x] $. By (\ref{eq:rank3 q_1}), $q_1$ is in $x_1+fk[\x] $, and so $\gcd (q_1,f)=1$. Hence, $q_3$ must lie in $k[\x] $. Note that $A:=k[q_1,q_2,q_3] \subset k[f^{\pm 1},g,q,q_1]$ by the definition of $q_2$ and $q_3$, and that $k[f,g,q,q_1]\subset k[f,q,q_1,q_3] \subset A[f]=A$, since $g=f^{lt}q_3+\lambda \in k[f,q,q_1,q_3]$ by (\ref{eq:rank3 lambda}), $q=f^{lp}q_2+\xi \in k[f,q_1,q_2]$, and $f=q_1q_3-q_2^{t/p}\in A$. Hence, $k[f^{\pm 1},g,q,q_1]$ is equal to $A[f^{-1}]$. Thus, we get $k[\x] ^{\epsilon _h}=A[f^{-1}]\cap k[\x] $ by Proposition~\ref{prop:invariant ring first step}. 
We show that $A[f^{-1}]\cap k[\x] =A$. For $\sigma :k[\y ] \ni \nu ({\boldsymbol y} )\mapsto \nu (q_1,q_2,q_3)\in k[\x] $ and $\pi :k[\x] \to k[\x] /fk[\x] $, we set $\overline{\sigma }=\pi \circ \sigma $. Since $\overline{q}_1$ and $\overline{q}_2$ are algebraically independent over $k$ by Lemma~\ref{lem:technical}, we have $\trd _k\overline{\sigma }(k[\y ] )=2$. Then, as in the proof of Proposition~\ref{prop:invariant ring first step}, the assertion holds by Lemma~\ref{lem:intersection}, because (1) $\sigma (y_1y_3-y_2^{t/p}) =q_1q_3-q_2^{t/p}=f\in fA[f]$ and (2) $y_1y_3-y_2^{t/p}$ is irreducible in $k[\y ] $. \end{proof} Next, we prove Lemma~\ref{lem:technical}. Set $R:=k[x_1,x_2,fx_3,f]$ and $M:=f^lx_1R+f^{2l}k[\x] $. Observe that $x_1,f,g,h,r,q,q_1\in R$ and $RM\subset M$. Hence, by (\ref{eq:rank3 q_1}), we see that $q_1\in x_1+f^lrR=x_1+f^l(x_1^m+f^lx_2)R\subset x_1+M$. Since $x_1M\subset M$, it follows that \noindent {\bf 16}$^\circ $ $q_1^u\in x_1^u+M$ for all $u\ge 1$. Since $h\in k[f,g]$, we can write $(gh)^{p-1}=\eta (g)$, where $\eta (T)\in k[f][T]$. \begin{claim}\label{claim:1} If $p\mid t$, then there exist $\lambda _2,\lambda _3\in x_1R$ such that $$ x_1^{mp}-q_1^{mp}\equiv f^{lp}\lambda _2,\ f^{l(p-1)}((gh)^{p-1} x_1^m-\eta (q_1^{mt-1})q_1^m) \equiv f^{lp}\lambda _3 \pmod{f^{l(p+1)}k[\x] }. $$ \end{claim} \begin{proof} Since $x_1^m-q_1^m\in M$ by 16$^\circ $, we have $x_1^{mp}-q_1^{mp}=(x_1^m-q_1^m)^p\in (f^lx_1)^pR+f^{2lp}k[\x] $. Hence, there exists $\lambda _2$ as claimed. Since $p\mid t$, we have $g\in x_1^{mt-1}+M$ by 2$^\circ $. Hence, $g-q_1^{mt-1}$ lies in $M$ by 16$^\circ $. Since $\eta (T)$ is in $k[f][T]$, it follows that $\eta (g)-\eta (q_1^{mt-1})\in (g-q_1^{mt-1})k[f,g,q_1]\subset MR\subset M$. Thus, we get $$ \eta (g)x_1^m-\eta (q_1^{mt-1})q_1^m =\eta (g)(x_1^m-q_1^m) +(\eta (g)-\eta (q_1^{mt-1}))q_1^m\in M. $$ Since $\eta (g)=(gh)^{p-1}$, this shows that $f^{l(p-1)}((gh)^{p-1}x_1^m-\eta (q_1^{mt-1})q_1^m)$ belongs to $f^{l(p-1)}M=f^{lp}x_1R+f^{l(p+1)}k[\x] $. Therefore, there exists $\lambda _3$ as claimed. \end{proof} \begin{proof}[Proof of Lemma~$\ref{lem:technical}$] We show that the assertion holds for \begin{equation}\label{eq:rank3 xi} \xi :=\left\{ \begin{array}{ll} q_1^{mp}&\text{ (1) if }h^{p-1}\in f^lk[f,g] \\ q_1^{mp}-f^{l(p-1)}\eta (q_1^{mt-1})q_1^m&\text{ (2) otherwise. } \end{array} \right. \end{equation} First, observe that $q=f^{lp}(x_2^p-(gh)^{p-1}x_2)+x_1^{mp}-f^{l(p-1)}(gh)^{p-1}x_1^m$ by (\ref{eq:rank3 q}). (1) We can write $f^{l(p-1)}(gh)^{p-1}x_1^m=f^{lp}\lambda _1$, where $\lambda _1\in x_1^mk[f,g]\subset x_1R$. Let $\lambda _2\in x_1R$ be as in Claim~\ref{claim:1}. Then, $x_1^{mp}-q_1^{mp}$ is in $f^{lp}\lambda _2+f^{l(p+1)}k[\x] $. Hence, $q_2=f^{-lp}(q-\xi )=f^{-lp}(q-q_1^{mp})$ is in $x_2^p-(gh)^{p-1}x_2+\lambda _2-\lambda _1+f^lk[\x] \subset k[\x] $. This also shows that $\overline{q}_2 =\overline{x_2^p+\lambda }$ for some $\lambda \in x_1k[x_1,x_2]$, since $\overline{g}=\overline{x}_1^{mt-1}$ by 15$^\circ $, $h\in R$, $\lambda _1,\lambda _2\in x_1R$, and the image of $R$ in $k[\x] /fk[\x] $ is $k[\overline{x}_1,\overline{x}_2]$. Since $x_2^p+\lambda $ is in $k[x_1,x_2]\setminus k[x_1]$, this implies that $\overline{x}_1$ and $\overline{q}_2$ are algebraically independent over $k$, for otherwise $\nu (x_1,x_2^p+\lambda )\in fk[\x] $ for some $\nu \in k[x_1,x_2]\setminus \{ 0\} $, which is absurd. 
(2) We have $q_2=f^{-lp}(q-\xi )\in x_2^p-(gh)^{p-1}x_2+\lambda _2-\lambda _3+f^lk[\x] $, where $\lambda _2,\lambda _3\in x_1R$ are as in Claim~\ref{claim:1}. Then, the assertion is verified as in (1). \end{proof} \begin{proof}[Proof of Theorem~{\rm \ref{thm:rank3 invariant ring} (ii)}] First, we remark that $\psi (x_2^p)=\psi (f^{-lp}(r^p-x_1^{mp})) =f^{-lp}(q-q_1^{mp})$ by 1$^\circ $, since $\psi (x_1)=q_1$ as mentioned. If $h^{p-1}\in f^lk[f,g]$, then by (\ref{eq:rank3 xi}), we may take $\xi =q_1^{mp}$ in the proof of (i). Then, we have $q_2=f^{-lp}(q-q_1^{mp})$, which equals $\psi (x_2^p)$ as remarked. Hence, we get $f=\psi (f)=\psi (x_1x_3-x_2^{(t/p)p}) =q_1\psi (x_3)-q_2^{t/p}$. Since $q_1q_3-q_2^{t/p}=f$, this gives that $\psi (x_3)=q_3$. Thus, we get $\psi (k[x_1,x_2^p,x_3])=k[q_1,q_2,q_3]=k[\x] ^{\epsilon _h}$ by (i). For the converse, assume that $h^{p-1}\not\in f^lk[f,g]$. It suffices to prove $\psi (x_2^p)\not\in k[\x] $. Set $\lambda _1:=f^{-l}(gh)^{p-1}x_1^m$, and let $\lambda _2$ be as in Claim~\ref{claim:1}. Then, $f^{-lp}(q-q_1^{mp})$ is in $x_2^p-(gh)^{p-1}x_2+\lambda _2-\lambda _1+f^lk[\x] $ as shown in (1) of the proof of Lemma~\ref{lem:technical}. Moreover, $x_2^p-(gh)^{p-1}x_2+\lambda _2$ is in $k[\x] $. Hence, we get $\psi (x_2^p)\in -\lambda _1+k[\x] $ by the remark. Now, we claim that $f^{-l}h^{p-1}\not\in k[\x] $, for otherwise $f^{-l}h^{p-1}\in k[f^{\pm 1},g]\cap k[\x] =k[f,g]$ by (ii) of the proof of Theorem~\ref{thm:k[f,g]}, and so $h^{p-1}\in f^lk[f,g]$, a contradiction. Since $\gcd (f,g)=1$ by 4$^\circ $, it follows that $\lambda _1\not\in k[\x] $. This proves that $\psi (x_2^p)\not\in k[\x] $. \end{proof} \begin{rem}\label{rem:t=p, f q_2}\rm If $t=p$, then the following statements hold. \noindent {\rm (i)} Since $q_1q_3-f=q_2^{t/p}=q_2$, we have $k[\x] ^{\epsilon _h}=k[q_1,q_2,q_3]=k[f,q_1,q_3]$. \noindent {\rm (ii)} $\lambda $ in (\ref{eq:rank3 lambda}) is equal to $\xi q_1^{-1}$. Hence, we have $q_3=f^{-lt}(g-\xi q_1^{-1})$. \end{rem} \begin{example}\label{example:simple example}\rm Let $(l,t)=(1,p)$, and $m=2$ if $p=2$, and $m=1$ if $p\ge 3$. Put $s:=mt-1$, i.e., $s=3$ if $p=2$, and $s=p-1$ if $p\ge 3$. For $h:=f$, we set $\phi :=\epsilon _h$. Then, we have $f=x_1x_3-x_2^p$, $g=f^px_3+x_1^s$ by 2$^\circ $, \begin{equation}\label{eq:simple example} \begin{gathered} \phi (x_1)=x_1+f^{2p}g^{p-1},\quad \phi (x_2)=x_2+fg- \left\{ \begin{array}{cc} f^7g^2 & \text{if}\ p=2 \\ f^{2p-1}g^{p-1}&\text{if}\ p\ge 3 \end{array}\right. \\ \phi (x_3)=x_3-f^{-p}\bigl( (x_1+f^{2p}g^{p-1})^{s}-x_1^{s} \bigr) \end{gathered} \end{equation} by Lemma~\ref{lem:rank3 J} (iii), $\gamma (k[\x] ^{\phi })=0$ by Corollary~\ref{cor:rank3} (2), and $\pl (\phi )=fgk[\x] ^{\phi }$ by Theorem~\ref{thm:plinth rank 3} (ii). Since $h^{p-1}=f^{p-1}$ is in $fk[f,g]$, we may take $\xi =q_1^{s+1}$ by (\ref{eq:rank3 xi}), where $q_1:=g^{-1}(f^{p+1}+q)$. Then, we have $q_3:=f^{-p}(g-q_1^{s})$ and $k[\x] ^{\phi }=k[f,q_1,q_3]$ by Remark~\ref{rem:t=p, f q_2}. We also have $\psi (k[x_1,x_2^p,x_3])=k[\x] ^{\phi }$ by Theorem~\ref{thm:rank3 invariant ring} (ii). \end{example} \subsection{Proof of Theorem~\ref{thm:rank3 invariant ring2}} \label{sect:rank3 invariant proof2} The goal of \S \ref{sect:rank3 invariant proof2} is to prove Theorem~\ref{thm:rank3 invariant ring2}. Throughout, we assume that $p\nmid t$, $p\nmid mt-1$ and $h^{p-1}\in f^{l+1}g^2k[f,g]\setminus \{ 0\} $.
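We first remark that these assumptions are non-vacuous: for instance, $(p,l,m,t)=(3,1,3,2)$ and $h=f^{l+1}g^2$ satisfy them. We also note that concrete instances of the preceding results, such as Example~\ref{example:simple example} above, are amenable to machine verification. The following sketch (in Python with SymPy; illustrative only, and no part of the formal development) checks, for the case $p=2$ of (\ref{eq:simple example}), that $\phi $ fixes $f$ and $g$ over $\bF _2$; here $\phi (x_3)$ is entered in the expanded form $x_3-3x_1^2f^2g-3x_1f^6g^2-f^{10}g^3$, which equals $x_3-f^{-2}((x_1+f^4g)^3-x_1^3)$.
\begin{verbatim}
# Illustrative sketch (Python/SymPy), no part of the formal development:
# the automorphism phi of (eq:simple example) with p = 2 fixes f and g.
from sympy import symbols, expand, Poly

p = 2
x1, x2, x3 = symbols('x1 x2 x3')
f = x1*x3 - x2**2                 # l = 1, t = p = 2, m = 2, s = 3
g = f**2*x3 + x1**3
im = {x1: x1 + f**4*g,            # phi(x1) = x1 + f^{2p} g^{p-1}
      x2: x2 + f*g - f**7*g**2,   # phi(x2) for p = 2
      x3: x3 - 3*x1**2*f**2*g - 3*x1*f**6*g**2 - f**10*g**3}

# sanity check over Z: f^2 (phi(x3) - x3) = -((x1 + f^4 g)^3 - x1^3)
assert expand(f**2*(im[x3] - x3) + (x1 + f**4*g)**3 - x1**3) == 0

def phi(u):
    return expand(u.xreplace(im))

for u in (f, g):                  # f, g lie in k[x]^phi over F_2
    assert Poly(phi(u) - u, x1, x2, x3, modulus=p).is_zero
\end{verbatim}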
First, we note the following: \noindent {\bf 17}$^\circ $ $q=r^p-(f^lgh)^{p-1}r$ belongs to $r^p+f^{lp+1}g^{p+1}k[\x] \subset r^p+f^{lp+1}gk[\x] $. \noindent {\bf 18}$^\circ $ By 1$^\circ $ and 2$^\circ $, we can write $x_1^p=g^{-p}\eta _1(r^p)$ and $x_3^p=f^{-ltp}\eta _3(x_1^p,x_2^p)$, where $\eta _1(T)\in k[f,T]$ and $\eta _3(T,U)\in k[f,g,T,U]$. We also have $x_2^p=f^{-lp}(r^p-x_1^{mp})$. \begin{lem}\label{lem:rank3 invariant ring2} \noindent{\rm (i)} $p_i:=\psi (x_i^p)$ belongs to $x_i^p+fgk[\x] $ for $i=1,2,3$. \noindent{\rm (ii)} We have $k[\x] ^{\epsilon _h}=C[{\boldsymbol p} ]\cap k[\x] $, where ${\boldsymbol p} :=\{ p_1,p_2,p_3\} $. \end{lem} \begin{proof} (i) We have $\psi (x_1^p)=g^{-p}\eta _1(q)$ by 18$^\circ $, and $\eta _1(q)\in \eta _1(r^p)+f^{lp+1}g^{p+1}k[\x] $ by 17$^\circ $. Hence, we get $\psi (x_1^p)\in g^{-p}\eta _1(r^p)+f^{lp+1}gk[\x] =x_1^p+f^{lp+1}gk[\x] \subset x_1^p+fgk[\x] $ by 18$^\circ $. From this and 17$^\circ $, we have $q-\psi (x_1^p)^m\in r^p-x_1^{mp}+f^{lp+1}gk[\x] $. Hence, we get $\psi (x_2^p)=f^{-lp}(q-\psi (x_1^p)^m) \in f^{-lp}(r^p-x_1^{mp})+fgk[\x] =x_2^p+fgk[\x] $ by 18$^\circ $. Since $\psi (x_1^p),\psi (x_2^p)\in k[\x] $, we have $\psi (x_3^p)=f^{-ltp}\eta _3(\psi (x_1^p),\psi (x_2^p))\in f^{-ltp}k[\x] $ by 18$^\circ $, and $\psi (x_1^p)\psi (x_3^p) =\psi (x_1^px_3^p) =\psi (x_2^{tp}+f^p) =\psi (x_2^p)^t+f^p\in k[\x] $. We also have $\gcd (f,\psi (x_1^p))=1$, since $\psi (x_1^p)\in x_1^p+fgk[\x] $. Thus, $\psi (x_3^p)$ belongs to $k[\x] $. In $k[\x] /gk[\x] $, the equation $\psi (x_1^p)\psi (x_3^p)-\psi (x_2^p)^t =\psi (f^p)=f^p=x_1^px_3^p-x_2^{tp}$ yields $\overline{x_1^p(\psi (x_3^p)-x_3^p)}=\overline{0}$, since $\overline{\psi (x_i^p)}=\overline{x_i^p}$ for $i=1,2$ as shown above. Since $\overline{x}_1\ne \overline{0}$ and $k[\x] /gk[\x] $ is a domain by Lemma~\ref{lem:K_g}, it follows that $\overline{\psi (x_3^p)-x_3^p}=\overline{0}$, i.e., $\psi (x_3^p)-x_3^p\in gk[\x] $. We can prove $\psi (x_3^p)-x_3^p\in fk[\x] $ similarly. Therefore, $\psi (x_3^p)-x_3^p$ belongs to $fk[\x] \cap gk[\x] =fgk[\x] $ by 4$^\circ $. (ii) From $p_2=f^{-lp}(q-p_1^m)$, we see that $q$ is in $C[{\boldsymbol p} ]$. Since $C[{\boldsymbol p} ]\subset C[q]$, we get $C[{\boldsymbol p} ]=C[q]$. Then, the assertion follows from (\ref{eq:C[r]^{ep_h}}). \end{proof} For $f_1,f_2\in k[\x] $, we define the {\it Jacobian derivation} $D_{(f_1,f_2)}:k[\x] \to k[\x] $ by $D_{(f_1,f_2)}(f_3):=\det (\partial f_i/\partial x_j)_{i,j}$ for each $f_3\in k[\x] $. \begin{lem}\label{lem:derivation} We have $D_{(x_2,f)}(g)\not\in fk[\x] $ and $D_{(x_1,g)}(f)\not\in gk[\x] $. \end{lem} \begin{proof} Set $D:=D_{(x_2,f)}$. Then, $\ker D$ contains $k[x_2,f]$. We also have $D(x_1)=D_{(x_2,f)}(x_1)=-D_{(x_2,x_1)}(f)=D_{(x_1,x_2)}(f)=\partial f/\partial x_3=x_1$. Hence, we get $D(r)=D(f^lx_2+x_1^m)=mx_1^{m-1}D(x_1)=mx_1^m$. Since $x_1g=f^{lt+1}+r^t$ by (\ref{eq:rank3 g}), it follows that $x_1D(g)+x_1g= x_1D(g)+D(x_1)g=D(x_1g)=D(f^{lt+1}+r^t)=tr^{t-1}D(r)=mtr^{t-1}x_1^m$. This gives that $D(g)=mtr^{t-1}x_1^{m-1}-g$. Now, recall that $r\in x_1^m+fk[\x] $, and $g\in x_1^{mt-1}+fk[\x] $ by 2$^\circ $. Hence, $D(g)$ is in $(mt-1)x_1^{mt-1}+fk[\x] $. Since $p\nmid mt-1$ by assumption, we know that $D(g)\not\in fk[\x] $. Next, set $E:=D_{(x_1,f)}$. Since $D_{(x_1,g)}(f)=-E(g)$, we show that $E(g)\not\in gk[\x] $. Since $k[x_1,f]\subset \ker E$ and $E(x_2)=-D_{(x_1,x_2)}(f)=-x_1$, we have $E(r)=E(f^lx_2+x_1^m)=f^lE(x_2)=-f^lx_1$, and $x_1E(g)=E(x_1g)= E(f^{lt+1}+r^t)=tr^{t-1}E(r) =-tr^{t-1}f^lx_1$.
Since $p\nmid t$ by assumption, the last equation implies $E(g)\not\in gk[\x] $. \end{proof} Let $\overline{k}$ be an algebraic closure of $k$, and $F:\overline{k}\ni a\mapsto a^p\in \overline{k}$. Then, $\varphi :=F\otimes {\rm id} _{\bF _p[{\boldsymbol x} ]}$ is an automorphism of $\overline{k}\otimes _{\bF _p}\bF _p[{\boldsymbol x} ] =\overline{k}[{\boldsymbol x} ]$ over $\bF _p[{\boldsymbol x} ]$. We define $$ \tau :k[{\boldsymbol x} ,y,z] \ni u(x_1,x_2,x_3,y,z)\mapsto u(x_1^p,x_2^p,x_3^p,f,g)\in k[\x] . $$ \begin{rem}\label{rem:Frob}\rm (i) $\tau (u)=u^p$ holds for all $u\in \bF _p[{\boldsymbol x} ]$, and hence for $u=f,g$. \noindent (ii) For each $\lambda \in k[\x] $, we have $\tau (\lambda )=\varphi ^{-1}(\lambda )^p$. Hence, if $u\in \bF _p[{\boldsymbol x} ]$ is irreducible in $\overline{k}[{\boldsymbol x} ]$, then the following implication holds: $\tau (\lambda )\in uk[\x] \Rightarrow \varphi ^{-1}(\lambda )\in u\overline{k}[{\boldsymbol x} ] \Rightarrow \lambda =\varphi (\varphi ^{-1}(\lambda ))\in \varphi (u\overline{k}[{\boldsymbol x} ])=u\overline{k}[{\boldsymbol x} ] \Rightarrow \lambda /u\in \overline{k}[{\boldsymbol x} ]\cap k(\x) =k[\x] \Rightarrow \lambda \in uk[\x] $. \end{rem} \begin{lem}\label{lem:derivation2} \noindent{\rm (i)} If $\lambda \in k[{\boldsymbol x} ,y]$ satisfies $\deg _y\lambda <p$ and $\tau (\lambda )\in gk[\x] $, then $\lambda $ belongs to $ gk[{\boldsymbol x} ,y]$. \noindent{\rm (ii)} If $\lambda \in k[{\boldsymbol x} ,z]$ satisfies $\deg _z\lambda <p$ and $\tau (\lambda )\in fk[\x] $, then $\lambda $ belongs to $fk[{\boldsymbol x} ,z]$. \end{lem} \begin{proof} (i) Recall that $g$ is in $\bF _p[{\boldsymbol x}]$, and is irreducible in $\overline{k}[{\boldsymbol x} ]$, since Lemma~\ref{lem:K_g} holds for any $k$. Now, suppose that there exists $\lambda \in k[{\boldsymbol x} ,y]\setminus gk[{\boldsymbol x} ,y]$ with $\deg _y\lambda <p$ and $\tau (\lambda )\in gk[\x] $. Write $\lambda =\sum _{i=0}^{p-1}\lambda _iy^i$, where $\lambda _i\in k[\x] $. For $i$ with $\lambda _i\in gk[\x] $, we have $\tau (\lambda _i)\in \tau (gk[\x] )\subset g^pk[\x] $ by Remark~\ref{rem:Frob} (i). Hence, by subtracting $\lambda _iy^i$ from $\lambda $ for such $i$, we may assume that $\lambda =\sum _{i=0}^s\lambda _iy^i$ and $\lambda _s\not\in gk[\x] $ for some $0\le s<p$. Choose $\lambda $ with least $s$. Then, we claim that $s\ge 1$, for otherwise $\tau (\lambda _0)=\tau (\lambda )\in gk[\x] $ and $\lambda _0\not\in gk[\x] $, contradicting Remark~\ref{rem:Frob} (ii). Set $\lambda ':=\sum _{i=0}^si\lambda _iy^{i-1}$. Since $1\le s<p$, we have $\deg _y\lambda '=s-1$ and $s\lambda _s\not\in gk[\x] $. Hence, $\tau (\lambda ')$ is not in $gk[\x] $ by the minimality of $s$. Now, set $D:=D_{(x_1,g)}$. Then, since $\tau (\lambda )\in gk[\x] $ and $D(g)=0$, we have $D(\tau (\lambda ))\in D(gk[\x] )\subset gk[\x] $. On the other hand, since $D$ kills $\tau (\lambda _i) =\lambda _i(x_1^p,x_2^p,x_3^p)$, we have $D(\tau (\lambda )) =D(\sum _{i=0}^s\tau (\lambda _i)f^i) =\sum _{i=0}^s\tau (\lambda _i)D(f^i) =\sum _{i=0}^si\tau (\lambda _i)f^{i-1}D(f) =\tau (\lambda ')D(f)$. Since $D(f)\not\in gk[\x] $ by Lemma~\ref{lem:derivation}, it follows that $\tau (\lambda ')\in gk[\x] $, a contradiction. We can prove (ii) as in (i) using $D_{(x_2,f)}$ instead of $D_{(x_1,g)}$.
\end{proof} \begin{proof}[Proof of Theorem~{\rm \ref{thm:rank3 invariant ring2}}] (i) By Lemma~\ref{lem:rank3 invariant ring2} (ii) and Remark~\ref{rem:intersection}, it suffices to show that (a) $k[{\boldsymbol p} ,f][g^{\pm 1}]\cap k[\x] =k[{\boldsymbol p} ,f][g]$, and (b) $k[{\boldsymbol p} ,g][f^{\pm 1}]\cap k[\x] =k[{\boldsymbol p} ,g][f]$. We only prove (a), since the proof of (b) is similar. For $\sigma :k[{\boldsymbol x} ,y]\ni u({\boldsymbol x} ,y)\mapsto u({\boldsymbol p} ,f)\in k[\x] $ and $\pi :k[\x] \to k[\x] /gk[\x] $, we set $\overline{\sigma }:=\pi \circ \sigma $. By Lemma~\ref{lem:rank3 invariant ring2} (i), $\overline{\sigma }(x_i)=\pi (p_i)$ equals $\pi (x_i^p)=\pi (\tau (x_i))$ for each $i$. Moreover, we have $\sigma (y)=f=\tau (y)$. Hence, we get \noindent {\bf 19}$^\circ $ $\overline{\sigma }=\pi \circ \tau |_{k[{\boldsymbol x} ,y]}$. \noindent By definition, $\sigma (x_i)=p_i=\psi (x_i^p)=\psi (\tau (x_i))$ holds for each $i$. Hence, for $u=f,g$, we have $\sigma (u)=\psi (\tau (u))=\psi (u^p)=u^p$ by Remark~\ref{rem:Frob} (i). Thus, we know that \noindent {\bf 20}$^\circ $ $\sigma (g)=g^p$ and $\sigma (y^p-f)=f^p-f^p=0$ belong to $gk[{\boldsymbol p} ,f][g]$. Now, we show that $\ker \overline{\sigma }=(g,y^p-f)$. Then, (a) follows by Lemma~\ref{lem:intersection} and 20$^\circ $. By 20$^\circ $, we have $g,y^p-f\in \ker \overline{\sigma }$. To show $\ker \overline{\sigma }\subset (g,y^p-f)$, pick any $\eta \in \ker \overline{\sigma }$. Write $\eta =(y^p-f)\eta _0+\eta _1$, where $\eta _0,\eta _1\in k[{\boldsymbol x} ,y]$ with $\deg _y\eta _1<p$. Then, $\eta _1$ is in $\ker \overline{\sigma }$, since $\eta ,y^p-f\in \ker \overline{\sigma }$. By 19$^\circ $, this implies that $\tau (\eta _1)\in \ker \pi =gk[\x] $. Hence, we get $\eta _1\in gk[{\boldsymbol x} ,y]$ by Lemma~\ref{lem:derivation2} (i). Therefore, $\eta $ belongs to $(g,y^p-f)$. (ii) By (i), we have $k[\x] ^{\epsilon _h}\simeq k[{\boldsymbol x} ^p,f,g] =\tau (k[{\boldsymbol x} ,y,z])\simeq k[{\boldsymbol x} ,y,z]/\ker \tau $, where ${\boldsymbol x} ^p:=\{ x_1^p,x_2^p,x_3^p\} $. So, we show that $\ker \tau = (y^p-f,z^p-g)$. Since ``$\supset $" is clear, we only check ``$\subset $." First, we claim that $(f^ig^jr^n)_{0\le i,j,n<p}$ is a $k({\boldsymbol x} ^p)$-basis of $k(\x) $, since $f^p,g^p,r^p\in k({\boldsymbol x} ^p)$, $[k(\x) :k({\boldsymbol x} ^p)]=p^3$ and $k(\x) =k(f,g,r)=k({\boldsymbol x} ^p,f,g,r)$ by 1$^\circ $. Now, pick any $\eta \in \ker \tau $. Write $\eta =(y^p-f)\eta _1+(z^p-g)\eta _2+\sum _{i,j=0}^{p-1}\eta _{i,j}y^iz^j$, where $\eta _1,\eta _2\in k[{\boldsymbol x} ,y,z]$ and $\eta _{i,j}\in k[\x] $. Then, we have $\sum _{i,j=0}^{p-1}\tau (\eta _{i,j})f^ig^j=\tau (\eta )=0$. Hence, by the claim, $\tau (\eta _{i,j})=\eta _{i,j}(x_1^p,x_2^p,x_3^p)$ must be zero for all $i$, $j$. This implies that $\eta _{i,j}=0$ for all $i$, $j$. Therefore, $\eta $ belongs to $(y^p-f,z^p-g)$. Set $(f_1,f_2,x_4,x_5):=(y^p-f,z^p-g,y,z)$. If $k[{\boldsymbol x} ,x_4,x_5]/(f_1,f_2)\simeq k[\x] $, then the affine variety $f_1=f_2=0$ in ${\bf A} _k^5$ is isomorphic to ${\bf A} _k^3$, and hence smooth. However, this affine variety has a singular point at the origin, because $((\partial f_i/\partial x_j)(0))_{i,j}$ is the zero matrix and thus has rank less than two. Hence, $k[\x] ^{\epsilon _h}$ is not isomorphic to $k[\x] $. \end{proof} \section{Nagata type automorphisms}\label{sect:Nagata} \setcounter{equation}{0} In this section, we study the Nagata type automorphisms.
In \S \ref{sect:Nagata construction}, we construct the automorphism. In \S \ref{sect:Nagata pl}, we study the plinth ideals. In \S \ref{sect:Dedekind}, we prove a theorem used to construct a generator of the invariant ring. The invariant rings are studied in \S \ref{sect:Nagata invariant ring1} and \S \ref{sect:Nagata invariant ring2}. Theorem~\ref{thm:Nagata Main} easily follows from Theorems~\ref{thm:pl princ Nagata}, \ref{thm:Nagata2}, \ref{thm:Nagata relation1} and \ref{thm:Nagata relation2}. \subsection{Construction}\label{sect:Nagata construction} Let $R$ be a UFD with $p:=\ch R>0$, and $R[x,y]$ the polynomial ring in two variables over $R$. For $g,h\in R[x,y]$ and $c\in R$, we write $g\equiv _ch$ if $g-h\in cR[x,y]$. For $c\in R\setminus \{ 0\} $, we write $R_c:=R[c^{-1}]$. We remark that \begin{equation}\label{eq:R_c} R_{c_1^{l_1}\cdots c_t^{l_t}}= R_{c_1\cdots c_t}=R[c_1^{-1},\ldots ,c_t^{-1}] \quad (\forall c_1,\ldots ,c_t\in R\setminus \{ 0\} ,\ l_1,\ldots ,l_t\ge 1). \end{equation} Now, we fix $a\in R\setminus \{ 0\} $ and $\theta (y)\in yR[y]\setminus \{ 0\} $, and define $f:=ax+\theta (y)$. Then, $R_a[x,y]=R_a[f][y]$ is the polynomial ring in $y$ over $R_a[f]$. Hence, by Example~\ref{example:action on A[x]}, $\widetilde{\epsilon }:R_a[f][y]\ni u(y)\mapsto u(y+aT)\in R_a[f][y][T]$ is a ${\bf G}_a $-action on $R_a[x,y]$ with $R_a[x,y]^{\widetilde{\epsilon }}=R_a[f]$. Since $ax+\theta (y)=f=\widetilde{\epsilon }(f) =a\widetilde{\epsilon }(x)+\theta (y+aT)$, we have \begin{equation}\label{eq:Nagata action} \widetilde{\epsilon }(x) =x+a^{-1}(\theta (y)-\theta (y+aT)) =x-\theta '(y)T -aT^2\sum _{i\ge 2}\theta _i(y)(aT)^{i-2} \end{equation} (cf.~Notation~\ref{notation:taylor}). Hence, $\widetilde{\epsilon }(x)$ is in $R[x,y][T]$. Since $\widetilde{\epsilon }(y)=y+aT$ is in $R[x,y][T]$ by definition, $\widetilde{\epsilon }$ restricts to a ${\bf G}_a $-action $\epsilon $ on $R[x,y]$ with $R[x,y]^{\epsilon }=R_a[f]\cap R[x,y]$. We fix $0\ne F\in R[f]\subset R[x,y]^{\epsilon }$, and define \begin{equation}\label{eq:Nagata q} \phi :=\epsilon _F\in \Aut _RR[x,y] \quad\text{and}\quad q:=y^p-(aF)^{p-1}y. \end{equation} This $\phi $ is the same as $\psi $ in Theorem~\ref{thm:Nagata Main}, namely, the Nagata type automorphism. The purpose of Section~\ref{sect:Nagata} is to study the structures of $R[x,y]^{\phi }$ and $\pl (\phi )$. Note that $\phi $ is the restriction of $\widetilde{\phi }:R_a[f][y]\ni u(y)\mapsto u(y+aF)\in R_a[f][y]$ to $R[x,y]$. Hence, by Lemma~\ref{lem:1var}, we have \begin{equation}\label{eq:Nagata invariant} R[x,y]^{\phi }=R_a[f][y]^{\widetilde{\phi }}\cap R[x,y]=R_a[f,q]\cap R[x,y]. \end{equation} \begin{example}\label{example:Nagata}\rm Let $R=k[z]$ be the polynomial ring in one variable over a field $k$. If $a=z$, $\theta (y)=y^2$ and $F=f$, then we have $\phi (x)=x-2yf-zf^2$ and $\phi (y)=y+zf$. This $\phi $ is the famous automorphism of Nagata~\cite{Nagata}. \end{example} Write $\theta (y)=\sum _{i\ge 1}s_iy^i$, where $s_i\in R$. Then, we have $\theta '(y)=\sum _{p\nmid i}is_iy^{i-1}$. We define \begin{equation}\label{eq:Nagata notation} I:=aR[x,y]+\theta '(y)R[x,y], \ d:=\gcd (a,\theta '(y)), \ b:=ad^{-1},\ \rho (y):=\sum _{p\nmid i}t_iy^i, \end{equation} where $t_i:=s_id^{-1}$ for each $i$ with $p\nmid i$. \begin{rem}\label{rem:NGT}\rm $\theta '(y)=d\rho '(y)$ and $\gcd (b,\rho (y))=\gcd (b,\rho '(y))=1$. \end{rem} \begin{rem}\label{rem:principality of I}\rm We have $I\subset dR[x,y]$. 
Moreover, the following A through F are equivalent (see \cite[\S 1, Exercise 2]{AM} for D $\Leftrightarrow $ E, and \cite[\S 0.3]{Nagata} for E $\Leftrightarrow $ F). \noindent {\bf A}. $I$ is a principal ideal of $R[x,y]$. \noindent {\bf B}. $d$ belongs to $I$, that is, $I=dR[x,y]$. \noindent {\bf C}. There exist $\zeta _1,\zeta _2\in R[y]$ such that $a\zeta _1+\theta '(y)\zeta _2=d$, i.e., $b\zeta _1+d^{-1}\theta '(y)\zeta _2=1$. \noindent {\bf D}. The image of $d^{-1}\theta '(y)=\sum _{p\nmid i}it_iy^{i-1}$ in $(R/bR)[y]$ is a unit of $(R/bR)[y]$. \noindent {\bf E}. We have $\overline{t}_1\in (R/bR)^*$, i.e., $1\in t_1R+bR$. For each $i\ge 2$ with $p\nmid i$, we have $\overline{t}_i\in \nil (R/bR)$, i.e., $t_i\in \sqrt{bR}$. Here, $\nil (S)$ denotes the nilradical of a ring $S$. \noindent {\bf F}. $(R/bR)[\overline{\rho }(y)]=(R/bR)[y]$, i.e., $y\equiv _b\nu (\rho (y))$ for some $\nu (y)\in R[y]$. We note that, if $b=1$, i.e., $R/bR=\{ 0\} $, then D, E and F are trivial. \end{rem} Now, observe that $\phi (y)=y+dbF$ and $f-\theta (y)=dbx$. Set $\theta ^*(y):=\sum _{i\ge 1}s_{pi}y^i$. Then, we have $\theta (y)=\theta ^*(y^p)+d\rho (y)$. Hence, we can apply Remark~\ref{rem:q_1} (ii) with $$ (a,b,c,S,\xi (T),w,\xi ^*(T),\widehat{\xi }(T)) =(d,bF,y,R[f],f-\theta (T),bx,f-\theta ^*(T),\rho (T)). $$ Thus, by (\ref{eq:q_1^*}), we know that $R[x,y]^{\phi }$ contains \begin{equation}\label{eq:q_1 Nagata} q_1:=d^{-1}(f-\theta ^*(q)) =d^{-1}\xi ^*(q) \in bx+\rho (y)+d^{p-2}(bF)^{p-1}yR[f,aF,y]. \end{equation} Note that $R_a[q_1,q]=R_a[f,q]$, since $d^{-1}=b/a\in R_a$. Hence, it follows from (\ref{eq:Nagata invariant}) that $R[x,y]^{\phi }=R_a[q_1,q]\cap R[x,y]$. In fact, the following theorem holds. \begin{thm}\label{thm:Nagata1} We have $R[x,y]^{\phi }=R_b[q_1,q]\cap R[x,y]$. \end{thm} \begin{proof} It suffices to verify $R_a[q_1,q]\cap R[x,y]=R_b[q_1,q]\cap R[x,y]$. Let $d_0$ be the product of all prime factors of $d$ not dividing $b$. Since $a=bd$, we have $R_a=R_{bd_0}$ by (\ref{eq:R_c}). Hence, by Remark~\ref{rem:intersection}, we are reduced to proving that $R_{d_0}[q_1,q]\cap R[x,y]=R[q_1,q]$. Due to Remark~\ref{rem:indep2}, it suffices to show that $\mu (\overline{q}_1,\overline{q})\ne 0$ for all $\mu (x,y)\in (R/d_0R)[x,y]\setminus \{ 0\} $. Here, $\overline{h}$ denotes the image of $h$ in $(R/d_0R)[x,y]$ for $h\in R[x,y]$. Note that $\gcd (b,d_0)=1$ by the definition of $d_0$. Hence, $\overline{b}$ is not a zero-divisor of $R/d_0R$. Since $d_0\mid a$, we have $\overline{f}=\overline{\theta }(y)$. Hence, $\overline{q}_1=\overline{b}x+\overline{\eta }(y)$ for some $\eta (y)\in R[y]$ by (\ref{eq:q_1 Nagata}). We also have $\overline{q}=y^p$ by (\ref{eq:Nagata q}). Now, pick any $\mu (x,y)=\sum _{i=0}^l\mu _i(y)x^i\in (R/d_0R)[x,y]\setminus \{ 0\} $, where $\mu _i(y)\in (R/d_0R)[y]$, $l\ge 0$ and $\mu _l(y)\ne 0$. Then, since $\mu _l(y^p)\overline{b^l}\ne 0$, we get $\mu (\overline{q}_1,\overline{q})=\mu (\overline{b}x+\overline{\eta }(y),y^p) =\mu _l(y^p)\overline{b^l}x^l+\cdots \ne 0$. \end{proof} \begin{rem}\label{rem:Nagata psi}\rm (i) If $\theta '(y)\in aR[y]$, i.e., $b=1$, then we have $R[x,y]^{\phi }=R[q_1,q]$ by Theorem~\ref{thm:Nagata1}. \noindent (ii) We define an isomorphism $\psi :R_a[f][y^p]\ni u(y^p) \mapsto u(q)\in R_a[f,q]=R_a[x,y]^{\widetilde{\phi }}$. If $\theta '(y)=0$, i.e., $\theta (y)\in R[y^p]$, then $x=a^{-1}(f-\theta ^*(y^p))$ lies in $R_a[f][y^p]$, and $\psi (x)=q_1$ by (\ref{eq:q_1 Nagata}). Hence, we have $\psi (R[x,y^p])=R[q_1,q]=R[x,y]^{\phi }$ by (i). 
This can be considered as an analogue of Theorems~\ref{thm:rank3 invariant ring} (ii) and \ref{thm:rank3 invariant ring2}. \end{rem} \begin{example} If $p=2$ in Example~\ref{example:Nagata}, then we have $\theta '(y)=2y=0$ and $\theta ^*(y)=y$. Hence, we get $R[x,y]^{\phi }=R[q_1,q]=\psi (R[x,y^p])$ by Remark~\ref{rem:Nagata psi} (ii), where $q=y^2-zfy$ and $q_1=z^{-1}(f-q)=x+fy$. \end{example} \subsection{Plinth ideals}\label{sect:Nagata pl} We set $\delta :=\phi -{\rm id} $. Recall that $I=aR[x,y]+\theta '(y)R[x,y]$. From (\ref{eq:Nagata action}), we see that $\delta (x)=\widetilde{\epsilon }(x)|_{T=F}-x\in FI$. Since $\delta (y)=aF$ is in $FI$, we know by \S \ref{sect:plinth} (5) that \begin{equation}\label{eq:delta ideal} \pl (\phi ) =\delta (R[x,y])\cap R[x,y]^{\phi } \subset \delta (R[x,y])\subset FI. \end{equation} The goal of \S \ref{sect:Nagata pl} is to prove the following theorem. \begin{thm}\label{thm:pl princ Nagata} The following are equivalent: \noindent {\rm a.} $I$ is a principal ideal of $R[x,y]$. \noindent {\rm b.} $I=dR[x,y]$. \noindent {\rm c.} $\pl (\phi )=dFR[x,y]^{\phi }$. \noindent {\rm d.} $\pl (\phi )$ is a principal ideal of $R[x,y]^{\phi }$. \end{thm} We have ``a $\Leftrightarrow $ b" by Remark~\ref{rem:principality of I} B, and ``c $\Rightarrow $ d" is clear. In the rest of \S \ref{sect:Nagata pl}, we prove ``b $\Rightarrow $ c" and ``d $\Rightarrow $ b". Our tools are Lemma~\ref{lem:pl principal} and the following lemma. \begin{lem}\label{lem:gy-h} Assume that $g,h\in R[x,y]^{\phi }$ and $c\in R\setminus \{ 0\} $ satisfy $yg-h\in cR[x,y]$, i.e., $yg\equiv _ch$. Then, $c^{-1}aFg$ belongs to $\pl (\phi )$. \end{lem} \begin{proof} We remark that $\pl (\phi )=\delta (R[x,y])\cap R[x,y]^{\phi }[c^{-1}]$, since $\delta (R[x,y])\subset R[x,y]$ and $R[x,y]\cap R[x,y]^{\phi }[c^{-1}]=R[x,y]^{\phi }$. By assumption, $s:=c^{-1}(yg-h)$ lies in $R[x,y]$. Hence, $\delta (s)$ is in $\delta (R[x,y])$. Since $\delta $ is a linear map over $R[x,y]^{\phi }=\ker \delta $, and $g,h\in R[x,y]^{\phi }$, we have $c\delta (s)=\delta (cs)=\delta (yg-h)=\delta (y)g=aFg$. Hence, we get $\delta (s)=c^{-1}aFg$, which belongs to $\delta (R[x,y])\cap R[x,y]^{\phi }[c^{-1}]=\pl (\phi )$. \end{proof} In the following discussions, we frequently use the following remark ($\dag $): \noindent ($\dag $) If $c\in R\setminus \{ 0\} $ divides $b$, then we have $q\equiv _cy^p$ and $q_1\equiv _c\rho (y)$ (cf.~(\ref{eq:Nagata q}), (\ref{eq:q_1 Nagata})). \begin{proof}[Proof of {\rm ``b $\Rightarrow $ c"}] By b and (\ref{eq:delta ideal}), we have $\pl (\phi )\subset dFR[x,y]$. Hence, it suffices to show that $dF\in \pl (\phi )$ by Lemma~\ref{lem:pl principal} (i). By Remark~\ref{rem:principality of I}~F, there exists $\nu (y)\in R[y]$ such that $y\equiv _b\nu (\rho (y))$. Then, we have $y\equiv _b\nu (\rho (y))\equiv _b\nu (q_1)$ by ($\dag $). Since $\nu (q_1)$ is in $R[x,y]^{\phi }$, this implies that $b^{-1}aF=dF$ belongs to $\pl (\phi )$ by Lemma~\ref{lem:gy-h}. \end{proof} Now, we write $b=b_1^{e_1}\cdots b_r^{e_r}$, where $b_i$ is a prime element of $R$ and $e_i\ge 1$, and $b_iR\ne b_jR$ if $i\ne j$. The following lemma holds regardless of the principality of $I$. \begin{lem}\label{lem:Nagata pl key} The following assertions hold for each $i$. \noindent{\rm (i)} For any $s\in R[y]$, there exist $h_1,h_2\in R[q_1,q]$ such that $sh_2\equiv _{b_i}h_1$ and $h_2\not\equiv _{b_i}0$. \noindent{\rm (ii)} There exists $f_i\in R[q_1,q]\setminus b_iR[x,y]$ such that $b_i^{-e_i}aFf_i$ belongs to $\pl (\phi )$.
\end{lem} \begin{proof} (i) Let $i=1$ for simplicity. Set $\overline{R}:=R/b_1R$ and $K:=Q(\overline{R})$. We denote the image of $h\in R[x,y]$ in $\overline{R}[x,y]$ by $\overline{h}$. Then, a surjection $R[q_1,q]\ni h\mapsto \overline{h}\in \overline{R}[\overline{\rho }(y),y^p]$ is defined thanks to ($\dag $). Recall that $\gcd (b,\rho (y))=1$ (cf.~Remark~\ref{rem:NGT}). Hence, $\overline{\rho }(y)$ is nonzero. Since $\rho (y)=\sum _{p\nmid i}t_iy^i$, it follows that $\overline{\rho }(y)\not\in K(y^p)$. Thus, we get $K(y)=K(\overline{\rho }(y),y^p)=Q(\overline{R}[\overline{\rho }(y),y^p])$. Since $\overline{s}\in K(y)$, we can write $\overline{s}=\widehat{h}_1/\widehat{h}_2$, where $\widehat{h}_1,\widehat{h}_2\in \overline{R}[\overline{\rho }(y),y^p]$ with $\widehat{h}_2\ne 0$. For $i=1,2$, choose $h_i\in R[q_1,q]$ with $\overline{h}_i=\widehat{h}_i$. Then, we have $sh_2\equiv _{b_1}h_1$ and $h_2\not\equiv _{b_1}0$, since $\overline{s}\overline{h}_2=\overline{h}_1$ and $\overline{h}_2\ne 0$. (ii) Let $E$ be the set of $e\ge 0$ for which there exist $g_1,g_2\in R[q_1,q]$ such that $yg_2\equiv _{b_1^e}g_1$ and $g_2\not\equiv _{b_1}0$. For such $g_1$ and $g_2$, we have $b_1^{-e}aFg_2\in \pl (\phi )$ by Lemma~$\ref{lem:gy-h}$, and $g_2\not\in b_1R[x,y]$. Hence, it suffices to show that there exists $e\in E$ with $e\ge e_1$. Clearly, $E$ is not empty. Suppose that $e:=\max E<e_1$. Choose $g_i=\gamma _i(q_1,q)\in R[q_1,q]$ for $i=1,2$ with $yg_2\equiv _{b_1^e}g_1$ and $g_2\not\equiv _{b_1}0$, where $\gamma _i(x,y)\in R[x,y]$. Then, by ($\dag $), $t:=y\gamma _2(\rho (y),y^p)-\gamma _1(\rho (y),y^p) \equiv _{b_1^l}yg_2-g_1$ holds for all $0\le l\le e_1$, and hence for $l=e,e+1$ by supposition. Since $yg_2\equiv _{b_1^e}g_1$, we get $t\equiv _{b_1^e}yg_2-g_1\equiv _{b_1^e}0$, i.e., $b_1^{-e}t\in R[y]$. Thus, by (i), there exist $h_1,h_2\in R[q_1,q]$ such that $b_1^{-e}th_2\equiv _{b_1}h_1$ and $h_2\not\equiv _{b_1}0$. Then, we have \begin{align*} (yg_2-g_1)h_2-b_1^eh_1 \equiv _{b_1^{e+1}}th_2-b_1^eh_1 =b_1^e(b_1^{-e}th_2-h_1) \equiv _{b_1^{e+1}}0. \end{align*} This gives that $yg_2h_2\equiv _{b_1^{e+1}}g_1h_2+b_1^eh_1$. Moreover, we have $g_2h_2\not\equiv _{b_1}0$, since $g_2\not\equiv _{b_1}0$ and $h_2\not\equiv _{b_1}0$. This contradicts the maximality of $e$. \end{proof} \begin{proof}[Proof of {\rm ``d $\Rightarrow $ b"}] $\pl (\phi )$ contains $\delta (y)=aF$ and $b_i^{-e_i}aFf_i$ for each $i$, where $f_i$ is as in Lemma~\ref{lem:Nagata pl key} (ii). Since $f_i\not\in b_iR[x,y]$, we have $\gcd (aF,b_1^{-e_1}aFf_1,\ldots ,b_r^{-e_r}aFf_r) =(b_1^{e_1}\cdots b_r^{e_r})^{-1}aF=dF$, which is in $R[x,y]^{\phi }$. Hence, by d and Lemma~\ref{lem:pl principal} (ii), $dF$ lies in $\pl (\phi )$. Thus, $dF$ is in $FI$ by (\ref{eq:delta ideal}), and so $d\in I$. This implies b. \end{proof} \subsection{Conductor}\label{sect:Dedekind} Let $p$ be a prime number, $S$ a ring with $\ch S=p$, and $S[y]$ the polynomial ring in one variable over $S$. The purpose of \S \ref{sect:Dedekind} is to prove the following theorem. \begin{thm}\label{thm:Dedekind} For every $f\in S[y]$, we have $(f')^pS[y]\subset S[y^p,f]$. \end{thm} We make use of the following well-known fact (cf.~\cite[Theorem 12.1.1]{HS}). \begin{lem}\label{lem:conductor} Let $R$ be an integrally closed domain, $L$ an algebraic extension of $Q(R)$, $z\in L$, and $\overline{R[z]}$ the integral closure of $R[z]$ in $Q(R[z])$.
If $z$ is separable over $Q(R)$, and the minimal polynomial $\Phi (T)$ of $z$ over $Q(R)$ lies in $R[T]$, then we have $\Phi '(z)\overline{R[z]}\subset R[z]$. Namely, $\Phi '(z)$ is in the conductor of $R[z]$. \end{lem} When $f$ is monic, it is not difficult to derive Theorem~\ref{thm:Dedekind} from Lemma~\ref{lem:conductor}. However, for the general case, an additional argument is needed. \begin{lem}\label{lem:Dedek1} Let $S:=\bF _p[\xi _1,\ldots ,\xi _{d-1}]$ be the polynomial ring in $d-1$ variables over $\bF _p$ with $p\nmid d$, and let $g:=y^d+\sum _{i=1}^{d-1}\xi _iy^i\in S[y]$. Then, for each $l\ge 0$, there exist $f_1,\ldots ,f_p\in S[y^p]$ such that {\rm (i)} $(g')^py^l=\sum _{i=0}^{p-1}f_{p-i}g^i$, and {\rm (ii)} the total degree of $f_{p-i}$ in $\xi _1,\ldots ,\xi _{d-1}$ is at most $p-i$ for $i=0,\ldots ,p-1$. \end{lem} \begin{proof} (i) Since $g^p\in S[y^p]$, we have $S[y^p,g]=\sum _{i=0}^{p-1}S[y^p]g^i$. Hence, it suffices to show that $(g')^py^l\in S[y^p,g]$. We prove that $(g')^pS[y]\subset S[y^p,g]$ using Lemma~\ref{lem:conductor} with $R:=S[g]$, $z:=y^p$ and $\Phi (T):=T^d+\sum _{i=1}^{d-1}\xi _i^pT^i-g^p\in S[g][T]$. Note that $\Phi (y^p)=0$ and $\Phi '(y^p)=(g')^p$. Hence, it suffices to check the following: \noindent (1) $S[g]$ is an integrally closed domain. \noindent (2) $S[y]\subset Q(S[g,y^p])$, and $S[y]$ is integral over $S[g,y^p]$. \noindent (3) $\Phi (T)$ is an irreducible polynomial in $T$ over $Q(S[g])$. Actually, (3) implies that $\Phi (T)$ is the minimal polynomial of $y^p$ over $Q(S[g])$. Since $p\nmid d$ by assumption, it follows that $y^p$ is separable over $Q(S[g])$. Since $S[g]$ is a polynomial ring in $d$ variables over $\bF _p$, it is an integrally closed domain, hence (1). Since $[Q(S[y]):Q(S[y^p])]=p$ and $g\in Q(S[y])\setminus Q(S[y^p])$, we have $Q(S[y^p,g])=Q(S[y])\supset S[y]$. Since $y$ is integral over $S[y^p,g]$, we get (2). Since $\gcd (d,p)=1$, we see that $\Phi (T)$ is an irreducible element of $Q(S)[g,T]\simeq Q(S)[x_1,x_2]$. Hence, the polynomial $\Phi (T)$ in $T$ is irreducible over $Q(S)[g]$, and thus over $Q(S[g])$ by Gauss's lemma. This proves (3). (ii) We define a monomial order on $S[y]=\bF _p[\xi _1,\ldots ,\xi _{d-1},y]$ by $\xi _1^{i_1}\cdots \xi _{d-1}^{i_{d-1}}y^{i_d} \succeq \xi _1^{j_1}\cdots \xi _{d-1}^{j_{d-1}}y^{j_d}$ if $\sum _{l=1}^{d-1}i_l>\sum _{l=1}^{d-1}j_l$, or $\sum _{l=1}^{d-1}i_l=\sum _{l=1}^{d-1}j_l$ and $(i_1,\ldots ,i_d)\ge _{\rm lex}(j_1,\ldots ,j_d)$, where $\ge _{\rm lex}$ is the lexicographic order. We denote by $\lt (h)$ the leading term of $h\in S[y]$ for this order. Then, we have $\lt (g)=\xi _1y$ and $\lt (g')=\xi _1$. Let $m:=\xi _1^{k_1}\cdots \xi _{d-1}^{k_{d-1}}y^{k_d}$ be the maximum among $\lt (f_{p-i}g^i)=\lt (f_{p-i})\xi _1^iy^i$ for $0\le i<p$, and $J$ the set of $0\le i<p$ with $\lt (f_{p-i}g^i)=m$. Now, suppose that the total degree of $f_{p-i}$ in $\xi _1,\ldots ,\xi _{d-1}$ is greater than $p-i$ for some $i$. Then, that of $\lt (f_{p-i})\xi _1^iy^i$ is greater than $p$. Hence, $\sum _{l=1}^{d-1}k_l>p$ holds by maximality. Thus, $\lt ((g')^py^l)=\xi _1^py^l$ cannot be $m$. Since $(g')^py^l=\sum _{i=0}^{p-1}f_{p-i}g^i$, this implies that $|J|\ge 2$, for otherwise $\lt (\sum _{i=0}^{p-1}f_{p-i}g^i)=m$. Choose $i_1,i_2\in J$ with $i_1<i_2$. Then, we have $\lt (f_{p-i_1})\xi _1^{i_1}y^{i_1}= \xi _1^{k_1}\cdots \xi _{d-1}^{k_{d-1}}y^{k_d} =\lt (f_{p-i_2})\xi _1^{i_2}y^{i_2}$. It follows that $\deg _y\lt (f_{p-i_l})=k_d-i_l$ for $l=1,2$.
Since $f_{p-i_l}$ is in $S[y^p]$, we have $k_d-i_l\in p{\bf Z} $. Thus, $i_2-i_1$ is in $p{\bf Z} $. This contradicts that $0\le i_1<i_2<p$. \end{proof} \begin{proof}[Proof of Theorem~$\ref{thm:Dedekind}$] Observe that $(f+h)'=f'$ and $S[y^p,f+h]=S[y^p,f]$ for any $h\in S[y^p]$. Hence, replacing $f$ with $f+h$ for some $h\in S[y^p]$, we may assume that $f(0)=0$ and $d:=\deg f\not\in p{\bf Z} $. We may also assume that $f\ne 0$. Now, write $f=\sum _{i=1}^du_iy^i$, where $u_i\in S$. To show $(f')^pS[y]\subset S[y^p,f]$, it suffices to verify that $(f')^py^l\in \bF _p[u_1,\ldots ,u_d,y^p,f]$ for each $l\ge 0$. For this purpose, we may assume that $u_1,\ldots ,u_d$ are algebraically independent over $\bF _p$. Set $\xi _i=u_i/u_d$ for $1\le i<d$ and $g:=u_d^{-1}f=y^d+\sum _{i=1}^{d-1}\xi _iy^i$. Choose $f_1,\ldots ,f_p\in \bF _p[\xi _1,\ldots ,\xi _{d-1},y^p]$ as in Lemma~\ref{lem:Dedek1}. Then, since $f=u_dg$ and $f'=u_dg'$, we have $(f')^py^l=u_d^p(g')^py^l =u_d^{p}\sum _{i=0}^{p-1}f_{p-i}g^i =\sum _{i=0}^{p-1}u_d^{p-i}f_{p-i}f^i$ by (i). By (ii), $u_d^{p-i}f_{p-i}$ is in $\bF _p[u_1,\ldots ,u_d,y^p]$ for each $i$. Therefore, $(f')^py^l$ belongs to $\bF _p[u_1,\ldots ,u_d,y^p,f]$. \end{proof} \subsection{Invariant ring: generators} \label{sect:Nagata invariant ring1} In \S \ref{sect:Nagata invariant ring1}, we determine the generators of $R[x,y]^{\phi }$. The main task is to construct a new element $q_2\in R[x,y]^{\phi }$. For this purpose, we need Theorem~\ref{thm:Dedekind}. Throughout \S \ref{sect:Nagata invariant ring1}, let $\overline{R}:=R/bR$, and let $\overline{h}$ denote the image of $h\in R[x,y]$ in $\overline{R}[x,y]$. Recall that $\overline{q}=y^p$ and $\overline{q}_1=\overline{\rho }(y)$ by ($\dag $) after Lemma~\ref{lem:gy-h}. \begin{rem}\label{rem:R[bx,y]}\rm (i) For $h\in R[x,y]$, we have $\deg _x\overline{h}\le 0$ if $h\in R[bx,y]$, and $\deg _x\overline{h}\le 1$ if $h\in b^{-1}R[bx,y]\cap R[x,y]$. \noindent (ii) Since $f \in R[bx,y]$, we have $F,q,q_1\in R[bx,y]$ (cf.~(\ref{eq:Nagata q}) and (\ref{eq:q_1 Nagata})). \end{rem} Set $\xi (T):=q_1-\rho (T)\in R[q_1][T]$. Then, by (\ref{eq:q_1 Nagata}), we have $\xi (y)=bw$ for some \begin{equation}\label{eq:w Nagata} w\in x+(db)^{p-2}F^{p-1}yR[f,aF,y] \subset x+R[bx,y]. \end{equation} Since $\phi (y)=y+bdF$, we can use Remark~\ref{rem:q_1} with $(a,b,c,S)=(b,dF,y,R[q_1])$. Set $\xi ^p(T):=q_1^p-\rho ^p(T)$, where $\rho ^p(T):=\sum _{p\nmid i}t_i^pT^i$. Then, by (\ref{eq:q_1}), $R[x,y]^{\phi }$ contains \begin{equation}\label{eq:tilde q_1 Nagata} \widetilde{q}_1:=b^{1-p}(q_1^p-\rho ^p(q)) =b^{1-p}\xi ^p(q) \in bw^p+\rho '(y)^p{\cdot }(dF)^{p-1}y+b^{p-1}R[bx,y], \end{equation} where we use $F,q_1\in R[bx,y]$. By Theorem~\ref{thm:Dedekind}, there exists $\lambda (x,y)\in R[x,y]$ such that $\rho '(y)^py=\lambda (y^p,\rho (y))$. Then, $v:=(\rho '(y)^py-\lambda (q,q_1))(dF)^{p-1}$ belongs to $bR[x,y]$ by ($\dag $) after Lemma~\ref{lem:gy-h}, and to $R[bx,y]$ by Remark~\ref{rem:R[bx,y]} (ii). Hence, $b^{-1}v$ belongs to $R[x,y]\cap b^{-1}R[bx,y]$. Thus, by (\ref{eq:tilde q_1 Nagata}) and (\ref{eq:w Nagata}), we see that \begin{equation}\label{eq:q_2 Nagata} \begin{aligned} q_2:=b^{-1}(\widetilde{q}_1-\lambda (q,q_1)(dF)^{p-1}) &\in w^p+b^{-1}v+b^{p-2}R[bx,y] \\ &\subset x^p+R[bx,y]+R[x,y]\cap b^{-1}R[bx,y]. \end{aligned} \end{equation} Since $\widetilde{q}_1$ and $\lambda (q,q_1)(dF)^{p-1}$ are in $R[x,y]^{\phi }$, so is $q_2$.
Therefore, we can define \begin{equation}\label{eq:Nagata sigma} \sigma :R[\y] \ni h(y_0,y_1,y_2)\mapsto h(q,q_1,q_2)\in R[x,y]^{\phi },\ \text{ where }\ {\boldsymbol y} :=\{ y_0,y_1,y_2\} . \end{equation} \begin{example} If $p\ge 3$ in Example~\ref{example:Nagata}, then $d=\gcd (z,2y)=1$, $b=a=z$, $\theta (y)=\rho (y)=\rho ^p(y)=y^2$ and $\theta ^*(y)=0$. Hence, we have $q_1=f$ and $\widetilde{q}_1=z^{1-p}(f^p-q^2)$. Since $\rho '(y)^py=2y^{p+1}$, we may take $\lambda (x,y)=2y^{(p+1)/2}$. Then, we get $q_2=z^{-1}\bigl(\widetilde{q}_1-2f^{(p+1)/2}f^{p-1}\bigr) $. \end{example} \begin{rem}\label{rem:q_1 etc}\rm (i) We have $f\in R[q,q_1]$ by (\ref{eq:q_1 Nagata}), and so $\widetilde{q}_1\in R[q,q_1,q_2]$ by (\ref{eq:q_2 Nagata}). \noindent (ii) We have $\widetilde{q}_1,q_2\in R_b[q,q_1]$ by (\ref{eq:tilde q_1 Nagata}) and (\ref{eq:q_2 Nagata}), since $f\in R[q,q_1]$. \noindent (iii) $\overline{q}_2$ is a monic polynomial in $x$ of degree $p$ by (\ref{eq:q_2 Nagata}) and Remark~\ref{rem:R[bx,y]} (i). \noindent (iv) $q$ and $q_1$ are algebraically independent over $R_a$, because $R_a[x,y]$ is algebraic over $R_a[x,y]^{\widetilde{\phi }}=R_a[f,q]=R_a[q_1,q]$ (cf.~\S \ref{sect:Nagata construction}). \end{rem} The following is the main result of \S \ref{sect:Nagata invariant ring1}. \begin{thm}\label{thm:Nagata2} In the notation above, we have $R[x,y]^{\phi }=R[q,q_1,q_2]$. \end{thm} \begin{proof} Since $q_2\in R_b[q,q_1]$ by Remark~\ref{rem:q_1 etc} (ii), we have $R_b[q,q_1]=R_b[q,q_1,q_2]$. Hence, by Theorem~\ref{thm:Nagata1}, it suffices to show that $R_b[q,q_1,q_2]\cap R[x,y]=R[q,q_1,q_2]$. Let $\widehat{\sigma }:\overline{R}[{\boldsymbol y} ]\to \overline{R}[x,y]$ be the substitution map induced by $\sigma $ in (\ref{eq:Nagata sigma}). We show that $\ker \widehat{\sigma }=(y_1^p-\overline{\rho ^p}(y_0))$. Then, the assertion follows by Lemma~\ref{lem:intersection 2}, since $\sigma (y_1^p-\rho ^p(y_0)) =q_1^p-\rho ^p(q)=b^{p-1}\widetilde{q}_1$ by (\ref{eq:tilde q_1 Nagata}), and $\widetilde{q}_1\in R[q,q_1,q_2]$ by Remark~\ref{rem:q_1 etc}~(i). Since $\overline{q}=y^p$ and $\overline{q}_1=\overline{\rho }(y)$ by ($\dag $), we have $\widehat{\sigma }(y_1^p-\overline{\rho ^p}(y_0))= \overline{\rho }(y)^p-\overline{\rho ^p}(y^p)=0$. To show $\ker \widehat{\sigma }\subset (y_1^p-\overline{\rho ^p}(y_0))$, pick any $\eta =\sum _i\eta _iy_2^i\in \ker \widehat{\sigma }$, where $\eta _i\in \overline{R}[y_0,y_1]$. We claim that $\widehat{\sigma }(\eta _i)=\eta _i(y^p,\overline{\rho }(y))=0$ for all $i$. In fact, if $\{ i\mid \widehat{\sigma }(\eta _i)\ne 0\} \ne \emptyset $, and $j:=\max \{ i\mid \widehat{\sigma }(\eta _i)\ne 0\} $, then $\widehat{\sigma }(\eta )= \sum _i\widehat{\sigma }(\eta _i)\overline{q}_2^i =\eta _j(y^p,\overline{\rho }(y))x^{jp}+\cdots \ne 0$ by Remark~\ref{rem:q_1 etc} (iii), a contradiction. Hence, we may assume that $\eta \in \overline{R}[y_0,y_1]$. Let $\nu \in \overline{R}[y_0,y_1]$ be the remainder of $\eta $ divided by $y_1^p-\overline{\rho ^p}(y_0)$ as a polynomial in $y_1$. Then, $\nu $ lies in $\ker \widehat{\sigma }$, since $\eta ,y_1^p-\overline{\rho ^p}(y_0)\in \ker \widehat{\sigma }$. By Lemma~\ref{lem:remainder Nagata} below, this implies that $\nu =0$, i.e., $\eta \in (y_1^p-\overline{\rho ^p}(y_0))$. \end{proof} \begin{lem}\label{lem:remainder Nagata} $\nu (y^p,\overline{\rho }(y))\ne 0$ holds for every $\nu \in \overline{R}[y_0,y_1]\setminus \{ 0\} $ with $\deg _{y_1}\nu <p$. 
\end{lem} \begin{proof} Suppose that there exists $\nu \in \overline{R}[y_0,y_1]\setminus \{ 0\} $ with $n:=\deg _{y_1}\nu <p$ and $\nu (y^p,\overline{\rho }(y))=0$. Then, $\nu $ is not in $\overline{R}[y_0]$, i.e., $n\ge 1$. Hence, $\nu _{y_1}:=\partial \nu /\partial y_1$ is of $y_1$-degree $n-1\ge 0$. Choose $\nu $ with least $n$. By the chain rule, we have $$ 0=(\nu (y^p,\overline{\rho }(y)))' =py^{p-1}\nu _{y_0}(y^p,\overline{\rho }(y))+ \overline{\rho }'(y)\nu _{y_1}(y^p,\overline{\rho }(y)) =\overline{\rho }'(y)\nu _{y_1}(y^p,\overline{\rho }(y)). $$ Recall that $\gcd (b,\rho '(y))=1$ (cf.~Remark~\ref{rem:NGT}). Hence, $\overline{\rho }'(y)$ is not a zero-divisor of $\overline{R}[y]$. Thus, we get $\nu _{y_1}(y^p,\overline{\rho }(y))=0$. This contradicts the minimality of $n$. \end{proof} \subsection{Invariant ring: relation} \label{sect:Nagata invariant ring2} By Theorem~\ref{thm:Nagata2}, $\sigma $ in (\ref{eq:Nagata sigma}) is surjective. Since $f$ is in $R[q,q_1]$ by Remark~\ref{rem:q_1 etc}~(i), we can write $(dF)^{p-1}=\nu (q,q_1)$, where $\nu \in R[y_0,y_1]$. We define \begin{equation}\label{eq:Nagata relation} \Lambda :=b^py_2+\rho ^p(y_0)-y_1^p+b^{p-1}\lambda (y_0,y_1)\nu (y_0,y_1). \end{equation} Then, from (\ref{eq:tilde q_1 Nagata}) and (\ref{eq:q_2 Nagata}), we see that $\sigma (\Lambda )=0$. Hence, $\ker \sigma $ contains $(\Lambda )$. \begin{thm}\label{thm:Nagata relation} We have $\ker \sigma =(\Lambda )$. Hence, $R[x,y]^{\phi }$ is isomorphic to $R[\y] /(\Lambda )$ as an $R$-algebra, where ${\boldsymbol y} =\{ y_0,y_1,y_2\} $. \end{thm} \begin{proof} Pick any $h\in \ker \sigma $. Noting $b^p\in R_b^*$, we can write $h=\Lambda h_1+h_0$, where $h_1\in R_b[{\boldsymbol y} ]$ and $h_0\in R_b[y_0,y_1]$. Then, we have $h_0(q,q_1)=0$, since $h,\Lambda \in \ker \sigma $. By Remark~\ref{rem:q_1 etc} (iv), this implies that $h_0=0$, i.e., $h=\Lambda h_1$. Since $h,\Lambda \in R[\y] $, $h_1\in R_b[{\boldsymbol y} ]$ and $\gcd (\Lambda ,b)=\gcd (\rho ^p(y_0)-y_1^p,b)=1$, it follows that $h_1\in R[\y] $. This proves $h\in (\Lambda )$. \end{proof} In the rest of \S \ref{sect:Nagata invariant ring2}, we study the structure of $R[x,y]^{\phi }\simeq R[\y] /(\Lambda )$. \begin{thm}\label{thm:Nagata relation1} If $I$ is a principal ideal of $R[x,y]$, then $\Lambda $ is a coordinate of $R[y_1][y_0,y_2]$, i.e., $R[\y] =R[y_1,\Lambda ,L]$ for some $L\in R[\y] $. Hence, $R[x,y]^{\phi }$ is isomorphic to $R[x,y]$ as an $R$-algebra. \end{thm} \begin{proof} Set $S:=R[y_1]$. Write $\Lambda =b^py_2+\sum _{i\ge 0}u_iy_0^i$, where $u_i\in S$. Due to a well-known result of Russell~\cite{Russell} and Sathaye~\cite{Sathaye}, it suffices to verify that $\overline{u}_1\in (S/b^pS)^*$ and $\overline{u}_i\in \nil (S/b^pS)$ if $i\ge 2$. Here, $\overline{u}$ denotes the image of $u$ in $S/b^pS$ for $u\in S$. Since $\rho ^p(y_0)=\sum _{p\nmid i}t_i^py_0^i$, we see from (\ref{eq:Nagata relation}) that $u_i\in bS$ if $i>0$ and $p\mid i$, and $u_i\in t_i^p+bS$ if $p\nmid i$. Hence, $\overline{u}_i$ is in $\nil (S/b^pS)$ if $i>0$ and $p\mid i$. If $p\nmid i$, then $\overline{u}_i$ is in $\overline{t}_i^p+\nil (S/b^pS)$. Since $I$ is principal by assumption, we know by Remark~\ref{rem:principality of I}~E that $t_1r_1+br_2=1$ for some $r_1,r_2\in R$, and $t_i\in \sqrt{bR}$ if $p\nmid i$ and $i\ge 2$. Since $t_1^pr_1^p+b^pr_2^p=1$, we have $\overline{t}_1^p\in (S/b^pS)^*$. Hence, we get $\overline{u}_1\in \overline{t}_1^p+\nil (S/b^pS)\subset (S/b^pS)^*$. 
When $t_i\in \sqrt{bR}$, we have $\overline{u}_i\in \overline{t}_i^p+\nil (S/b^pS)\subset \nil (S/b^pS)$. \end{proof} To prove the non-polynomiality of $R[\y] /(\Lambda )$, we use the following lemma. \begin{lem}\label{lem:non-sing} Let $S$ be a domain, $\kappa $ an algebraic closure of $Q(S)$, and $h\in S[\y] $, where ${\boldsymbol y} :=\{ y_0,\ldots ,y_r\} $ is a set of variables. If $S[\y] /(h)\simeq S[y_1,\ldots ,y_r]$ and $h$ is irreducible in $\kappa [{\boldsymbol y} ]$, then we have $\kappa [{\boldsymbol y} ]/(h)\simeq \kappa [y_1,\ldots ,y_r]$. Hence, the hypersurface $h=0$ in ${\bf A} _{\kappa }^{r+1}$ is isomorphic to ${\bf A} _{\kappa }^r$, and thus smooth. Consequently, the system of equations $h=0$ and $\partial h/\partial y_i=0$ for $i=0,\ldots ,r$ has no solution in $\kappa ^{r+1}$. \end{lem} \begin{proof} By assumption, the $S$-algebra $S[\y] /(h)$ is generated by $r$ elements. Hence, there exist $\xi _1,\ldots ,\xi _r\in S[\y] $ such that $y_0,\ldots ,y_r\in S[\xi _1,\ldots ,\xi _r]+hS[\y] $. Since $S[\xi _1,\ldots ,\xi _r]+hS[\y] \subset \kappa [\xi _1,\ldots ,\xi _r]+h\kappa [{\boldsymbol y} ]$, we see that the $\kappa $-algebra $\kappa [{\boldsymbol y} ]/(h)$ is generated by the images $\overline{\xi }_1,\ldots ,\overline{\xi }_r$ of $\xi _1,\ldots ,\xi _r$. Since $h$ is irreducible in $\kappa [{\boldsymbol y} ]$ by assumption, $\kappa [{\boldsymbol y} ]/(h)$ is a $\kappa $-domain with $\trd _{\kappa }\kappa [{\boldsymbol y} ]/(h)=r$. Therefore, $\overline{\xi }_1,\ldots ,\overline{\xi }_r$ must be algebraically independent over $\kappa $. The last part is well known. \end{proof} \begin{thm}\label{thm:Nagata relation2} Assume that $I$ is not a principal ideal of $R[x,y]$. \noindent{\rm (i)} $R[x,y]^{\phi }$ is not isomorphic to $R[x,y]$ as an $R$-algebra. \noindent{\rm (ii)} If $R=k[z_1,\ldots ,z_n]$ is the polynomial ring in $n$ variables over a field $k$ with $\ch k>0$, where $n\ge 1$, then $R[x,y]^{\phi }$ is not isomorphic to $R[x,y]$ as a $k$-algebra. \end{thm} \begin{proof} (i) By Remark~\ref{rem:principality of I} E, we have $1\not\in (t_1,b)$, or $t_i\not\in \sqrt{bR}$ for some $i\ge 2$ with $p\nmid i$. There exists $\fp \in \Spec R$ such that $t_1,b\in \fp $ in the former case, and $t_{i}\not\in \fp $ and $b\in \fp $ in the latter case. In both cases, $\overline{\rho }'(y_0)=\sum _{p\nmid i}i\overline{t}_iy_0^{i-1}$ is not a nonzero constant, and $\overline{\Lambda }=\overline{\rho ^p}(y_0)-y_1^p$. Here, $\overline{h}$ denotes the image of $h\in R[{\boldsymbol y} ]$ in $(R/\fp )[{\boldsymbol y} ]$. Let $\kappa $ be an algebraic closure of $Q(R/\fp )$. Then, there exists $\alpha \in \kappa $ with $\overline{\rho }'(\alpha )=0$. Now, suppose that $R[x,y]^{\phi }\simeq R[\y] /(\Lambda )=:A$ is isomorphic to $R[x,y]$ as an $R$-algebra. Then, we have $A/\fp A\simeq R[x,y]/\fp R[x,y]\simeq (R/\fp )[x,y]$. Since $A/\fp A\simeq R[\y] /(\Lambda R[\y] +\fp R[\y] )\simeq (R/\fp )[{\boldsymbol y} ]/(\overline{\Lambda })$, we get $(R/\fp )[{\boldsymbol y} ]/(\overline{\Lambda })\simeq (R/\fp )[x,y]$. This implies $\overline{\Lambda }\ne -y_1^p$, that is, $\overline{\rho ^p}(y_0)\ne 0$. Then, $\overline{\Lambda }=\overline{\rho ^p}(y_0)-y_1^p$ is irreducible in $\kappa [{\boldsymbol y} ]$, since the degree of $\overline{\rho ^p}(y_0)=\sum _{p\nmid i}\overline{t_i^p}y_0^i$ is coprime to $p$. Thus, the assumption of Lemma~\ref{lem:non-sing} holds for $S=R/\fp $ and $h=\overline{\Lambda }$. 
However, $(y_0,y_1,y_2)=(\alpha ^p,\overline{\rho }(\alpha ),0)$ is a solution of $\overline{\Lambda }=\partial \overline{\Lambda }/\partial y_i=0$ for $i=0,1,2$, since $\overline{\rho ^p}(\alpha ^p)=\overline{\rho }(\alpha )^p$ and $(\overline{\rho ^p})'(\alpha ^p)=\overline{\rho }'(\alpha )^p=0$. This is a contradiction. (ii) Let $\kappa $ be an algebraic closure of $k$, and ${\boldsymbol z} :=\{ z_1,\ldots ,z_n\} $. Then, $\Lambda $ is irreducible in $\kappa [{\boldsymbol y} ,{\boldsymbol z} ]$, since $\Lambda $ is a linear, primitive polynomial in $y_2$ over $\kappa [y_0,y_1,{\boldsymbol z} ]$. Now, suppose that $R[x,y]^{\phi }\simeq k[{\boldsymbol y} ,{\boldsymbol z} ]/(\Lambda )$ is isomorphic to $R[x,y]=k[x,y,{\boldsymbol z} ]$ as a $k$-algebra. Then, the assumption of Lemma~\ref{lem:non-sing} holds for $S=k$ and $h=\Lambda $. Since $R=k[{\boldsymbol z} ]$, we write $\lambda (y_0,y_1)=\lambda (y_0,y_1,{\boldsymbol z} )$ and $\rho (y_0)=\rho (y_0,{\boldsymbol z} )$. As in (i), we have $1\not\in (t_1,b)$, or $t_i\not\in \sqrt{bR}$ for some $i\ge 2$ with $p\nmid i$. By Hilbert's Nullstellensatz, there exists $\gamma \in \kappa ^n$ such that $t_1(\gamma )=b(\gamma )=0$ in the former case, and $t_i(\gamma )\ne 0$ and $b(\gamma )=0$ in the latter case. In both cases, $\rho _{y_0}(y_0,\gamma )$ is not a nonzero constant, and $b(\gamma )=0$. Choose $\alpha \in \kappa $ with $\rho _{y_0}(\alpha ,\gamma )=0$. Then, we have $\lambda (\alpha ^p,\rho (\alpha ,\gamma ),\gamma ) =\rho _{y_0}(\alpha ,\gamma )^p\alpha =0$, since $\lambda (y^p,\rho (y))=\rho '(y)^py$ by the choice of $\lambda $ (cf.~\S \ref{sect:Nagata invariant ring1}). Noting this and $\partial b^p/\partial z_i=\partial \rho ^p(y_0)/\partial z_i=0$ for all $i$, we can check that $(y_0,y_1,y_2,{\boldsymbol z} )=(\alpha ^p,\rho (\alpha ,\gamma ),0,\gamma )$ is a solution of $\Lambda =\partial \Lambda /\partial y_i =\partial \Lambda /\partial z_j=0$ for $i=0,1,2$ and $j=1,\ldots ,n$. This contradicts Lemma~\ref{lem:non-sing}. \end{proof} \section{Question and Conjecture}\label{sect:remark} \setcounter{equation}{0} \noindent (1) The {\it Stable Tameness Conjecture} asserts that every $\phi \in \Aut _kk[\x] $ is {\it stably tame}, i.e., there exists $l>0$ such that $\phi _l\in \T _{n+l}(k)$, where $\phi _l$ is the extension of $\phi $ defined by $\phi _l(x_i)=x_i$ for all $i>n$ (cf.~\cite[Conjecture 6.1.8]{Essen}). When $n=3$, every element of $T:=\langle \T _3(k)\cup \Aut _{k[x_3]}k[\x] \rangle $ is stably tame due to Berson-van den Essen-Wright~\cite{BEW}. However, we do not know the answer to the following question. \begin{q}\label{q:STC}\rm Is $\phi $ in (\ref{eq:simple example}) (or more generally $\epsilon _h$ in \S \ref{sect:rank3 exp}) stably tame? \end{q} \noindent (2) In the case $p=0$, the author \cite{wild3} studied in detail when $\phi \in \Ex _3(k)$ belongs to $\T _3(k)$ or not, using the Shestakov-Umirbaev theory~\cite{SU} and its generalization~\cite{tame3}. There, he arrived at the following conjecture for $p=0$ (cf.~\cite[Conjecture 17.3]{Sugaku}). Here, we say that $\tau \in \Aut _kk[\x] $ is {\it triangular} if $\tau (x_i)\in k[x_1,\ldots ,x_i]$ for each $i$. \begin{conj}\label{conj:tame3}\rm For every $\phi \in \Ex _3(k)\cap \T _3(k)$, there exists $\sigma \in \T _3(k)$ such that $\sigma \circ \phi \circ \sigma ^{-1}$ is triangular. \end{conj} It seems reasonable to expect that Conjecture~\ref{conj:tame3} also holds for $p>0$. In fact, we have the following conjecture (see also Question~\ref{q:C=E?} below). 
\begin{conj}\label{conj:ch triangular}\rm Assume that $p>0$. Then, for every $\phi \in \Ch _3(k)\cap \T _3(k)$, there exists $\sigma \in \T _3(k)$ such that $\sigma \circ \phi \circ \sigma ^{-1}$ is triangular. \end{conj} We claim that Conjecture~\ref{conj:ch triangular} implies the following conjecture. \begin{conj}\label{conj:variable}\rm Assume that $p>0$. Then, for every $\phi \in \Ch _3(k)\cap \T _3(k)$, there exists $\sigma \in \T _3(k)$ such that $\sigma (x_1)\in k[\x] ^{\phi }$. Hence, we have $\gamma (k[\x] ^{\phi })\ge 1$. \end{conj} In fact, if $\tau \in \Ch _3(k)$ is triangular, then $\tau $ restricts to an element of $\Ch _2(k)$, to which we can apply Theorem~\ref{thm:Osaka}. \noindent (3) We note that the Laurent polynomial ring $B=k[x_1^{\pm 1},\ldots ,x_p^{\pm 1}]$ admits no non-identity exponential automorphism. In fact, every ${\bf G}_a $-action on $B$ fixes $x_i$ and $x_i^{-1}$ for all $i$, since $x_ix_i^{-1}=1\in B^{{\bf G}_a }$, and $B^{{\bf G}_a }$ is factorially closed in $B$ (cf.~Remark~\ref{rem:Miyanishi}). Clearly, the automorphism of $B$ defined by $x_1\mapsto x_2\mapsto \cdots \mapsto x_p\mapsto x_1$ is of order $p$. Hence, $B$ admits a non-exponential automorphism of order $p$. However, there exists no such automorphism of $k[x_1,x_2]$ because of Theorem~\ref{thm:Osaka}. \begin{q}\label{q:C=E?}\rm Assume that $p>0$. Does $\Ch _n(k)=\Ex _n(k)$ hold for $n\ge 3$? \end{q} \noindent (4) Theorems~\ref{thm:Nagata Main}, \ref{thm:plinth rank 3}, \ref{thm:rank3 invariant ring} and \ref{thm:rank3 invariant ring2} support the following conjecture. \begin{conj}\label{q:pl k[x]}\rm Assume that $p>0$ and $n=3$. Then, for $\phi \in \Ex _3(k)$, we have $k[\x] ^{\phi }\simeq k[\x] $ if and only if $\pl (\phi )$ is a principal ideal of $k[\x] ^{\phi }$. \end{conj}
{ "timestamp": "2022-02-02T02:12:55", "yymm": "2202", "arxiv_id": "2202.00262", "language": "en", "url": "https://arxiv.org/abs/2202.00262" }
\section{Introduction} \label{sec:introduction} Atomic and molecular line emissions are powerful diagnostic tracers of the evolution of star formation over cosmic time. For instance, line emission studies may help reveal the causes of the significant decline in star formation rate after its peak at $z {\sim} 2$, inferred from observations of optical and infrared continuum radiation \citep{Madau2014}. One of the most useful lines for understanding the context of star formation is the [C{\sc\,II}]\ $^2P_{3/2}\rightarrow$ $^2P_{1/2}$ fine structure line at $1901$\,GHz, which is the brightest cooling line in the far-infrared (FIR) spectrum, typically accounting for 0.1$\%$ to 1$\%$ of FIR energy \citep{1991ApJ...373..423S, malhotra1997infrared, diaz-santos:2017}. The bulk of this [C{\sc\,II}]\ emission is expected to come from photodissociation regions (PDRs) on the edges of molecular gas clouds \citep{2010ApJ...724..957S}. This association suggests that [C{\sc\,II}]\ emission can trace the molecular gas available to fuel star formation. A study of $z\sim2$ galaxies detected by ALMA bore this out, finding that [C{\sc\,II}]\ emission is linearly related to molecular gas content \citep{2018MNRAS.481.1976Z}. Measurements of [C{\sc\,II}]\ emission and other lines (such as [C{\sc\,I}]\ and the rotational transitions of CO) can be used in concert with galaxy formation models to constrain the physical properties of star-forming regions within galaxies \citep{2019MNRAS.482.4906P, 2021ApJ...911..132Y}. Studies of line emission from individual galaxies (e.g., \citet{2013ARA&A..51..105C, 2017ApJ...834...36H, 2018MNRAS.481.1976Z}) provide important insights into the [C{\sc\,II}]\ luminosity function. However, they are subject to sample variance in small survey regions and are limited to galaxies above a brightness threshold, so they may not capture the cosmic average of the conditions of star formation. Complementary to individual galaxy studies is the technique of intensity mapping, which aims to map large-scale structure by detecting the aggregate redshifted line emission without cataloging individual sources \citep{1979MNRAS.188..791H, 1990MNRAS.247..510S, 1997ApJ...475..429M, 1999ApJ...512..547S, 2008MNRAS.383.1195W, 2008PhRvL.100i1303C, 2011JCAP...08..010V, kovetz2017line, 2019BAAS...51c.101K}. One advantage of intensity mapping is that it captures all sources of emission rather than a biased sample of only the brightest sources. It also puts significantly lower requirements on telescope size, as the angular resolution need not be sufficient for individual source detection. Similarly, because individual objects do not need to be detected at a high signal-to-noise ratio, intensity mapping surveys can quickly cover large cosmic volumes, providing a complete census of emitting gas. Intensity mapping has developed rapidly in the past fifteen years. Pathfinding intensity mapping surveys have used pre-existing telescopes to detect aggregate emission from the 21-cm line of neutral hydrogen (HI) \citep{2009MNRAS.394L...6P, 2010Natur.466..463C, 2013ApJ...763L..20M, 2013MNRAS.434L..46S, 2017MNRAS.464.4938W, 2018MNRAS.476.3382A, 2021arXiv210204946W} via cross-correlation with optical galaxy surveys.
Several dedicated 21-cm intensity mapping experiments are now underway, targeting both the epoch of reionization (e.g., LOFAR, \cite{2013A&A...556A...2V}, and SKA precursors HERA and MWA, \cite{DeBoer_2017,2018MNRAS.481.5034M}) as well as the era of dark energy dominance (e.g., CHIME, \cite{2014SPIE.9145E..22B}, Tianlai, \cite{2012IJMPS..12..256C}, HIRAX, \cite{2021arXiv210913755C}, SKA precursor MeerKAT, \cite{2021MNRAS.505.3698W}, and BINGO, \cite{2021arXiv210701633A}). The initial focus on the 21-cm line has expanded to include [C{\sc\,II}], the rotational lines of CO, Lyman $\alpha$, H$\alpha$, H$\beta$, and more \citep{2019BAAS...51c.101K}. The COPPS survey has utilized the Sunyaev-Zel'dovich Array (SZA) to make tentative detections of CO emission at $z\sim2.6$ in cross-correlation with spectroscopic galaxy surveys \citep{2021arXiv211002239K} and auto-correlation in the shot-noise regime \citep{Keating_2016}. A similar CO shot-noise detection was made with the Millimeter-wave Intensity Mapping Experiment (mmIME), using ALMA and ACA facilities \citep{Keating_2020}. Among a new generation of dedicated ground-based intensity mapping instruments targeting CO and [C{\sc\,II}]\ are COMAP \citep{2021arXiv211105927C}, designed to measure CO(1-0) from $2.4<z<3.4$ and CO(2-1) at $z=6-8$, TIME \citep{2014SPIE.9153E..1WC}, targeting [C{\sc\,II}]\ emission from the epoch of reionization and CO from $0<z<2$, CONCERTO \citep{2020arXiv200714246T}, focusing on [C{\sc\,II}]\ emission from the epoch of reionization, and the CCAT-prime receiver for the FYST \citep{2021arXiv210710364C}, which targets [C{\sc\,II}]\ at $3.5<z<8$ and [OIII] at $z>7$. Two NASA balloon experiments designed to measure line emission in the FIR are TIM \citep{2020arXiv200914340V}, focusing on [C{\sc\,II}]\ at $0.52<z<1.67$, and EXCLAIM \citep{2021arXiv210111734C}, focusing on CO and [C{\sc\,II}]\ in windows from $0<z<3.5$. The NASA MIDEX-class satellite mission, SPHEREx, will produce intensity maps of multiple lines, including H$\alpha$, H$\beta$, [OII], and [OIII] at $z<5$ \citep{2017ApJ...835..273G}. The brightness of the [C{\sc\,II}]\ line makes it an excellent candidate for the technique of line intensity mapping. \cite{pullen2018search} recently demonstrated the promise of [C{\sc\,II}]\ intensity mapping through an analysis of the cross-correlation between the angular distribution of high-redshift ($z {\sim} 2.6$) quasars in the BOSS survey and the $353$, $545$, and $857$\,GHz Planck maps. They found a cross-correlation exceeding the expected thermal continuum at $545$\,GHz, consistent with [C{\sc\,II}]\ emission correlated with BOSS quasars. A refinement of the analysis \citep{yang2019evidence} increased the result's significance. Still, the authors caution that greater spectral resolution is required to verify that the excess cross-correlation is explained by [C{\sc\,II}]\ emission rather than the redshift evolution of the correlated continuum emission. \citet{2017ApJ...838...82S} describes how future instruments for measuring spectral distortions in the cosmic microwave background (CMB) can be employed for intensity mapping. To anticipate the capabilities of future measurements, we use data from the COBE-FIRAS instrument \citep{fixsen1994calibration}, in cross-correlation with the BOSS CMASS and LOWZ galaxy catalogs, to make the first tomographic intensity mapping constraint on [C{\sc\,II}]\ emission.
The FIRAS instrument \citep{fixsen1994calibration} was designed to precisely measure the spectrum of the CMB, dust, and line emission from the Milky Way. It covers a broad frequency range from $30$\,GHz to $2910$\,GHz\ with $13.6$\,GHz spectral channels. This, and the fact that FIRAS' frequency range conveniently overlaps the well-sampled LOWZ and CMASS galaxy catalogs, make FIRAS${\times}$BOSS\, a natural candidate for a tomographic [C{\sc\,II}]\, cross-correlation analysis. Compared to the Planck data set used by \citet{pullen2018search}, the FIRAS data set has much higher thermal noise per pixel and much lower angular resolution (the FIRAS beam has nearly a 7-degree full-width-half-maximum). At first glance, it seems that these limitations give FIRAS${\times}$BOSS\ no hope of approaching the sensitivity of Planck${\times}$QSO, but we achieve error bars that are only about two times larger. One reason is that the effective number of independent modes scales with the number of redshift bins. We use 14 redshift bins for FIRAS${\times}$CMASS\ and 16 for FIRAS${\times}$LOWZ, whereas Planck${\times}$QSO\ only has one channel with correlated [C{\sc\,II}]\ signal. However, the number of modes also depends on the range of measurable angular scales, and Planck${\times}$QSO\ more than makes up for its lack of redshift resolution with its much larger range of observable angular modes. Counteracting this, FIRAS${\times}$BOSS\ sees a larger cosmological signal due to structure growth at lower redshifts, and, critically, the CMASS and LOWZ galaxy catalogs are well-sampled enough that shot noise is subdominant at the redshifts and angular scales we consider. The BOSS quasar sample used in Planck${\times}$QSO, on the other hand, is dominated by shot noise due to the sparser sampling. \subsection{Spherical harmonic tomography} We cross-correlate FIRAS data with cosmological overdensity inferred from the BOSS spectroscopic galaxy redshift survey to search for extragalactic [C{\sc\,II}]\ emission. Two competing effects make the BOSS CMASS\ ($0.41<z<0.75$) and LOWZ\ ($0.06<z<0.49$) samples especially well-tuned for this goal. The growth of large-scale structure and geometric factors yield increasing density contrast on large angular scales where FIRAS is most sensitive. However, at $z<0.25$ ($\nu > 1500$\,GHz), the FIRAS noise rises dramatically. For our analysis, we bin the CMASS\ and LOWZ\ galaxy maps into redshift slices that correspond to the FIRAS frequency channels for the [C{\sc\,II}]\ line. We then use the technique of spherical harmonic tomography (SHT) (see, for example, \cite{asorey2012recovering, nicola2014three, 2015A&A...578A..10L}) to compute the angular cross-power spectrum, $C_{\ell}^{\times}(z,z')$, between the FIRAS maps and BOSS galaxy over-densities at each pair of redshifts. With a fine enough binning in redshift, SHT captures the complete information available in the power spectrum \citep{asorey2012recovering}, and it is well-matched to large area surveys such as BOSS, where a flat-sky approximation would introduce significant distortions. To fit our data to a model, we must compute the expected angular clustering signal, $C_{\ell}^{\delta}(z,z')$, by integrating a 3D power spectrum model over highly oscillatory Bessel functions (details included in Section\,\ref{subsec:dark_matter_model}).
Standard anisotropy codes, such as CAMB \citep{2011ascl.soft02026L} and CLASS \citep{2013JCAP...11..044D, 2014JCAP...01..042D}, include methods for performing this integration (with or without the Limber approximation). We describe the analysis over several sections. Section\,\ref{sec:C_ell_Analysis} motivates the $C_{\ell}(z,z')$ statistic, explains how to use the pseudo-$C_{\ell}$\ technique to compute $C_{\ell}(z,z')$ for surveys with partial sky coverage, and describes the likelihood function we use for parameter estimation with $C_{\ell}(z,z')$. Section\,\ref{sec:FIRAS_and_BOSS_data} describes the FIRAS dataset, viewed as a [C{\sc\,II}]\ intensity map, and the BOSS CMASS\ and LOWZ\ data sets, viewed as binned galaxy over-density maps. Section \ref{sec:signal model} describes our dark matter model and its use in fitting the [C{\sc\,II}]\ and CIB amplitudes to our FIRAS${\times}$BOSS\ data. It also describes how we model the FIRAS${\times}$BOSS\ covariance by fitting parametric models to the BOSS${\times}$BOSS\ and FIRAS${\times}$FIRAS\ data. Section\,\ref{sec:Discussion} discusses our [C{\sc\,II}]\ constraints and how they relate to other measurements and astrophysical models. We conclude in Section\,\ref{sec:Conclusion}. \section{Parameter Estimation with the \texorpdfstring{$C_{\ell}(z,z')$}{Cl(z,z')} Statistic} \label{sec:C_ell_Analysis} \subsection{Motivating the estimator} \label{subsec:motivating_estimator} We perform our analysis with Spherical Harmonic Tomography (SHT), a two-point statistic wherein each redshift slice of data is decomposed into spherical harmonics, and the angular power spectrum, $C_{\ell}(z,z')$, is calculated between each pair of redshift slices. SHT captures the complete information available in the more typically used three-dimensional power spectrum statistic, $P(k)$ \citep{asorey2012recovering}. A recent analysis of BOSS CMASS and LOWZ clustering using Spherical Harmonic Tomography \citep{loureiro2019cosmological} found equivalent or better constraints on cosmological parameters compared to standard power spectrum analysis techniques. As the set of two-point cross-correlations, the SHT contains all of the information available in the data cubes for statistically isotropic, Gaussian random fields. SHT has some inherent geometrical advantages, especially for large-angle and deep surveys. A significant advantage is that the spherical coordinates apply to wide-angle surveys like BOSS without any flat-sky approximation. A traditional $P(k)$ analysis relies on the flat-sky approximation to distinguish between transverse and line-of-sight modes, which is critical for accurately representing redshift space distortions (RSDs) and distinguishing continuum foregrounds from line signal. By contrast, in the SHT formalism, both foregrounds and linear redshift space distortions take exact, simple forms. Another important feature of SHT is its ability to capture redshift-dependent change over cosmological time in deep surveys. For deep surveys, structure growth and changes in star formation rate break the assumption of translational invariance in the line-of-sight direction, rendering the $P(k)$ statistic insufficient. However, since $C_{\ell}(z,z')$ does not compress data along the line-of-sight direction, it describes redshift evolution. A final geometric advantage is that $C_{\ell}(z,z')$ describes the data in observing coordinates of angle and frequency (or, equivalently, redshift) rather than re-gridding the data onto cosmological distances in an assumed cosmological model.
An MCMC likelihood analysis that constrains cosmological parameters would therefore not need to recompute the data statistic at each step, which in principle would be needed for a $P(k)$ analysis. There are several practical advantages to parameter estimation with SHT. Due to the scan strategy, intensity mapping data generally have inhomogeneous noise and partial sky coverage in the angular direction. Multiplication by noise weights in real space couples transverse modes in Fourier space, which can be easily accounted for in SHT analysis, thanks to pre-existing work \citep{hivon2002master, tristram2005xspect} on the pseudo-$C_{\ell}$\ technique from CMB analysis. Intensity maps will also often have significant variation in the noise level in the frequency (line-of-sight) direction because of variations in spectrometer noise or chromatic contamination such as terrestrial radio frequency interference. Inhomogeneous noise in real space produces coupling of line-of-sight Fourier modes in analyses of $P(k)$. In contrast, because $C_{\ell}(z,z')$ does not perform any Fourier or Bessel transform in the redshift direction, no coupling occurs, and the noise can be expressed as a simple function of the redshift slice. Additionally, chromatic beam effects can be modeled per redshift slice without introducing any flat-sky approximation. In addition, SHT provides an avenue for self-consistently handling foregrounds and other continuum signals. In contrast, other approaches that remove foregrounds in map space before the two-point analysis must contend with signal loss \citep{Switzer:2015ria, 2018ApJ...868...26C}. With SHT, the covariance of $C_{\ell}(z,z')$ can be constructed to include information about foregrounds. The inverse covariance weights the data in the likelihood analysis for the line brightness, suppressing modes contaminated by foregrounds self-consistently with the parameter estimation. Additionally, the $C_\ell(z,z')$ model can include signal terms such as the cross-correlation of a galaxy redshift sample with the correlated continuum emission of the galaxies in the IM volume \citep{Serra:2014pva, pullen2018search, Switzer:2018tel}. SHT also introduces several new challenges to the analysis. First, it places significant requirements on memory for the computation of the likelihood. This is because the size of the covariance scales as $N_b^2 N_z^4$, where $N_z$ is the number of redshift bins and $N_b$ is the number of angular bins. For FIRAS, this is fairly manageable because we analyze only three angular bins and about 15 redshift slices for each of FIRAS${\times}$CMASS\ and FIRAS${\times}$LOWZ. The size of the covariance will be more challenging for future instruments with higher spectral and angular resolution. The visualization of $C_{\ell}(z,z')$ and its errors presents an additional challenge. The extra dimensionality of $C_{\ell}(z,z')$, which allows it to measure redshift evolution, also means that angular and redshift information cannot be shown in a single plot. We develop several approaches for displaying the data and describing the goodness of fit to high-dimensional data with complex covariance. Lastly, the evaluation of the $C_{\ell}(z,z')$ model is computationally expensive. Since the current generation of IM observations focuses on detection and measurements of line brightness \citep{2019BAAS...51c.101K}, this is not an issue. 
In this regime, the $C_{\ell}(z,z')$ model can be constructed from linear combinations of cosmological clustering and shot noise templates that need only be calculated once. In contrast, studies of large-scale structure acquire information from the nonlinear dependence of $C_{\ell}(z,z')$ on cosmological parameters, and so can require expensive recalculation of the anisotropy. Recent work has accelerated the integrals that convert $P(k,z)$ to $C_{\ell}(z,z')$ without using the Limber approximation \citep{2017A&A...602A..72C, Schoneberg:2018fis}, making cosmological parameter estimation with $C_{\ell}(z,z')$ more practical. \subsection{Computing \texorpdfstring{$C_{\ell}(z,z')$}{Cl(z,z')} with Incomplete Sky Coverage} \label{subsec:Partial_sky} Extensive work in CMB data analysis has developed an approach \citep{hivon2002master} for dealing with incomplete sky coverage and an approximate formula for the covariance it induces between angular scales \citep{tristram2005xspect}. The code package NaMaster \citep{Alonso:2018jzx,NaMaster} includes this full functionality, along with contaminant removal for polarized and unpolarized maps, on large curved-sky maps and, via a flat-sky approximation, on small maps. For observed maps $A$ and $B$ with known, possibly different, inverse noise weights and beams, the angular power spectrum of the inverse noise weighted maps is the pseudo-$C_{\ell}$\ spectrum \citep{hivon2002master}, which we label $D_{\ell}$, following the notation of \cite{tristram2005xspect}. The pseudo-$C_{\ell}$\ spectrum is related to the true full-sky angular power spectrum $C_{\ell}$ by \begin{equation}\label{eq:pseudocl} D_{\ell} = \sum_{\ell'} M^{AB}_{\ell\ell'}\omega_{\ell'}^A\omega_{\ell'}^BC_{\ell'}, \end{equation} where $\omega^A_{\ell'}$ is the product of the beam and pixel window function of map $A$, $\omega^B_{\ell'}$ is the product of the beam and pixel window function of map $B$, and $M^{AB}_{\ell \ell'}$ is the mixing matrix, computed via \begin{equation}\label{eq:mixing_matrix} M^{AB}_{\ell\ell'} = \frac{2\ell' + 1}{4 \pi} \sum_{\ell''}(2\ell''+1)\mathcal{W}^{AB}_{\ell''}\begin{pmatrix} \ell&\ell'&\ell'' \\ 0&0&0\end{pmatrix}^2, \end{equation} where $\mathcal{W}^{AB}_{\ell''}$ is the angular power spectrum of the inverse noise spatial weights of maps $A$ and $B$, and $\begin{pmatrix} \ell&\ell'&\ell'' \\ 0&0&0\end{pmatrix}$ represents a Wigner 3-j symbol. These formulas apply to both auto- and cross-correlations. The mixing matrix is not invertible for partial sky coverage, so it is convenient to define a binned pseudo-$C_{\ell}$\ spectrum and a binned mixing matrix. Here, ranges of multipoles $\ell$ are combined into bandpowers $b$, chosen such that the mixing matrix between bandpowers $b$ and $b'$ is invertible. Define $B_{b\ell}$ as a binning matrix that averages blocks of consecutive $\ell$s into bins indexed by $b$, and define $B^u_{\ell b}$ as an equivalent unbinning matrix that assigns the average value of a bin $b$ to each $\ell$ within that bin. Then, define the binned quantities \begin{equation} \begin{split}\label{eq:binning_equation} D_b &\equiv \sum_{\ell} B_{b\ell} D_{\ell}\\ M^{AB}_{b b'} &\equiv \sum_{\ell, \ell'} B_{b\ell}M^{AB}_{\ell\ell'}\omega^A_{\ell'}\omega^B_{\ell'}B^u_{\ell' b'}. \end{split} \end{equation} If the bin size is wide enough, $M^{AB}_{b b'}$ is invertible even for partial sky coverage.
One can then form an estimate, $\hat{C}_b$, of the binned angular power spectrum from the observed binned pseudo-$C_{\ell}$\ spectrum, $\hat{D}_b$, via \begin{equation}\label{eq:binned_mixing} \hat{C}_b = \sum_{b'} \left( M^{AB}_{b b'}\right)^{-1} \hat{D}_{b'}. \end{equation} This procedure recovers the true angular power spectrum only if $C_{\ell}$ is constant within each bin $b$. Since this is not generally the case, to compare $\hat{C}_{b}$ to a model $C_{\ell}$, as we do in Section\,\ref{sec:signal model}, we always compute or simulate the expected binned angular power spectrum, $\tilde{C}_b$, which is related to the model via \begin{equation}\label{eq:model_mixing_binning} \tilde{C}_b \equiv \sum_{b'} \left(M^{AB}_{b b'}\right)^{-1} \sum_{\ell, \ell'} B_{b' \ell} M^{AB}_{\ell \ell'} \omega^A_{\ell'}\omega^B_{\ell'} C_{\ell'}. \end{equation} If one assumes that the true $C_{\ell}(z,z')$ distribution is Gaussian, then the $b = b'$ terms of the covariance can be roughly approximated as \begin{equation} \begin{split}\label{eq:general_gaussian_variance} &\langle \Delta C_b^{A\times B}(z_1,z_2) \Delta C_b^{A\times B}(z_3,z_4) \rangle \approx \\ &\frac{1}{f_{\rm sky} \Delta \ell (2\ell + 1)} [ C_b^{A\times A}(z_1,z_3) C_b^{B\times B}(z_2,z_4) + \\ &C_b^{A\times B}(z_1,z_4) C_b^{B\times A}(z_2,z_3)], \end{split} \end{equation} where $f_{\rm sky}$ is the fraction of the sky seen by the survey and $\Delta \ell$ is the width of the bin centered at $\ell$. For intensity mapping tomography, the first term is the product of the auto-powers of the IM data, $C_b^{\rm IM}$, and the galaxy survey, $C_b^{g}$, and the second term is the product of cross-powers $C_b^\times$ between the IM data and the galaxy survey. For the FIRAS${\times}$BOSS\ analysis, the first term dominates the covariance due to the high thermal noise and large (at low $\ell$) Milky Way foreground signal present in $C_b^{\rm IM}(z,z')$. So, as a rough rule of thumb, the $b = b'$ terms of the cross-power covariance are \begin{equation}\label{eq:cross_power_cov_approx} \begin{split} &\langle \Delta C_b^{\times}(z_1,z_2) \Delta C_b^{\times}(z_3,z_4) \rangle \approx \\ &\frac{1}{f_{\rm sky} \Delta \ell (2\ell + 1)} [ C_b^{\rm IM}(z_1,z_3) C_b^{g}(z_2,z_4)]. \end{split} \end{equation} Appendix\,\ref{sec:appendix_sensitivity_forecast} applies this equation to derive an approximate formula for the expected sensitivity that a cross-power survey could achieve on the line intensity. The covariance can be more accurately approximated under the assumption of large sky coverage (\cite{tristram2005xspect}, formula included in Appendix\,\ref{sec:Appendix_coupling_approximation}) or computed via simulated draws from an assumed $C_{\ell}(z,z')$ model and repeated pseudo-$C_{\ell}$\ computation of the resulting $C_b(z,z')$. For our BOSS${\times}$BOSS\ analysis, we use the approximate covariance of \cite{tristram2005xspect}, as it matches our simulations well. For the FIRAS${\times}$FIRAS\ and FIRAS${\times}$BOSS\ analysis, the approximation fails to match simulations, so we instead use a fully simulated covariance. We suspect that the inconsistency between the simulations and the approximate formula is due to the combination of small sky coverage and the steep angular index of the Milky Way foregrounds. Further details on the covariance used for each analysis are included in Section\,\ref{subsec:FB_cov}.
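To make the pseudo-$C_{\ell}$\ pipeline concrete, the following minimal Python sketch computes decoupled bandpowers for two weighted maps with the public NaMaster package. The maps, weights, and bin edges are hypothetical placeholders (the $\Delta \ell = 9$ binning mirrors the choice used later in this paper); NaMaster internally forms $D_\ell$, the binned mixing matrix, and its inverse, as in Equations\,\ref{eq:pseudocl}--\ref{eq:binned_mixing}.
\begin{verbatim}
# Sketch: decoupled bandpowers of two inverse-noise-weighted maps via the
# pseudo-C_ell technique with NaMaster. Maps and weights are hypothetical
# placeholders; the Delta_ell = 9 bins from ell = 2 to 46 mirror this analysis.
import numpy as np
import healpy as hp
import pymaster as nmt

nside = 128
npix = hp.nside2npix(nside)
rng = np.random.default_rng(0)
map_A, map_B = rng.standard_normal((2, npix))   # stand-ins for data maps
weights_A = weights_B = np.ones(npix)           # stand-ins for inverse-noise weights

# Spin-0 fields couple each map to its weights (beams could be passed here too).
field_A = nmt.NmtField(weights_A, [map_A])
field_B = nmt.NmtField(weights_B, [map_B])

# Five bandpowers of width Delta_ell = 9: [2-10], [11-19], ..., [38-46].
edges = np.arange(2, 48, 9)
bins = nmt.NmtBin.from_edges(edges[:-1], edges[1:])

# compute_full_master forms D_ell, builds and inverts the binned mixing matrix
# M_{bb'}, and returns the decoupled bandpowers C_b.
c_b = nmt.compute_full_master(field_A, field_B, bins)[0]
print(c_b)
\end{verbatim}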
\subsection{Implementing the estimator} \label{subsec:implementing_estimator} For a range of binned multipoles indexed by bandpower $b$, the computed $C_{b}(z,z')$ is a rank-3 tensor, indexed by $b$, $z$, and $z'$. To perform a likelihood analysis, we define $\mathbf{x}$ as a flattened vector of all the unique elements of $C_{b}(z,z')$. \begin{equation} \mathbf{x} \equiv \mathrm{vec}\left[C_{b}(z,z')\right]. \end{equation} Note that for auto-powers, there are only $N_{b} \times N_z \times (N_z+1)/2$ unique elements, since $C_{b}^{A \times A}(z,z') = C_{b}^{A \times A}(z',z)$. For the cross-power, all elements are unique, and $\mathbf{x}$ has a length of $N_{b} \times N_z^2$. The likelihood formed from $C_{b}(z,z')$ is well-studied in the CMB literature (see e.g., \cite{Hamimeche:2008ai} for a survey of likelihood forms depending on assumptions). If the amplitudes of the spherical harmonics $a_{\ell m}(z)$ of the maps are drawn from an underlying Gaussian $C_{\ell}(z,z')$ distribution, then the resulting measured vector of anisotropies, $\mathbf{\hat{x}}$, is Wishart-distributed. With our bin size of $\Delta \ell =9$, and for the $\ell$-range with signal sensitivity (see Figure\,\ref{fig:variance_vs_ell}), the number of modes contributing to each bin is high enough that the Wishart distribution is reasonably well-approximated by a Gaussian distribution \begin{equation} \begin{split}\label{eq:gaussian_dist} -2\ln\mathcal{L} = &[ \mathbf{x}(\Theta) - \hat{\mathbf{x}} ]^T \mathbf{\Sigma}(\Theta)^{-1} [\mathbf{x}(\Theta) - \hat{\mathbf{x}} ]\\ &+ \ln |\mathbf{\Sigma}(\Theta)| + k\ln(2\pi), \end{split} \end{equation} where $\mathbf{\Sigma}(\Theta)$ is the bandpower covariance matrix of the flattened data vector and includes binned angular $b{-}b'$ coupling induced by incomplete sky coverage, $\hat{\mathbf{x}}$ is a flattened vector of the $\hat{C}_{b}(z,z')$ estimate computed from the data, $\mathbf{x}(\Theta)$ is a flattened vector of the $C_{b}(z,z')$ model, which depends on the science parameters of interest, $\Theta$, and $k$ is the length of the data vector. When analyzing multiple datasets, such as galaxy surveys and intensity maps, the likelihood can be expanded to apply to a data vector that concatenates the auto- and cross-powers of all the datasets. So, for the FIRAS and BOSS data, the data vector $\hat{\mathbf{x}}$ could contain flattened FIRAS${\times}$FIRAS, BOSS${\times}$BOSS, and FIRAS${\times}$BOSS\ $C_{b}(z,z')$ estimates. If the thermal noise and galaxy shot noise were small, then the high correlation between the galaxy data and [C{\sc\,II}]\ data, which trace the same matter fluctuations, could be exploited to remove cosmic variance from the measurement of the [C{\sc\,II}]\ line intensity \citep{McDonald:2008sh, Bull:2014rha, switzer2017tracing, Switzer:2018tel, 2021PhRvD.104h3501O}. For this FIRAS${\times}$BOSS\ analysis, thermal noise and foregrounds, rather than cosmic variance, are the dominant sources of error, so the benefits of such an approach are negligible. However, future intensity mapping experiments may benefit from this approach. For simplicity, we focus on [C{\sc\,II}]\ constraints from the cross-power, FIRAS${\times}$BOSS, and we only use FIRAS${\times}$FIRAS\ and BOSS${\times}$BOSS\ to validate the cross-power covariance model.
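As a concrete illustration of Equation\,\ref{eq:gaussian_dist}, the sketch below evaluates the Gaussian bandpower likelihood for a flattened data vector. The arrays are hypothetical placeholders with the dimensions quoted above; a real analysis would fill them with the measured $\hat{C}_b(z,z')$, the model, and the simulated covariance.
\begin{verbatim}
# Sketch: evaluating the Gaussian bandpower likelihood for a flattened
# C_b(z,z') vector. All arrays are hypothetical placeholders with the
# dimensions used in this analysis (3 bandpowers, ~15 redshift slices).
import numpy as np

def neg2_log_like(x_model, x_hat, cov):
    """Return -2 ln L, including the k*ln(2*pi) constant."""
    r = x_model - x_hat
    chi2 = r @ np.linalg.solve(cov, r)       # r^T Sigma^-1 r, no explicit inverse
    _, logdet = np.linalg.slogdet(cov)       # numerically stable log-determinant
    return chi2 + logdet + len(r) * np.log(2.0 * np.pi)

n_b, n_z = 3, 15
k = n_b * n_z**2                             # cross-power: all elements unique
rng = np.random.default_rng(1)
x_hat = rng.standard_normal(k)               # stand-in for the measured vector
x_model = np.zeros(k)                        # stand-in for the model vector
cov = np.eye(k)                              # stand-in for the simulated covariance
print(neg2_log_like(x_model, x_hat, cov))
\end{verbatim}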
\section{The FIRAS and BOSS data sets} \label{sec:FIRAS_and_BOSS_data} \subsection{FIRAS Instrument and Data Set} \label{sec:FIRAS} \begin{figure} \includegraphics[width=\columnwidth]{FIRAS_freq_window_func.pdf} \caption{ \label{fig:FIRAS_freq_window_func}Each interferogram measured by FIRAS was multiplied by an apodization function before Fourier transforming. The result is that the FIRAS maps show the true sky spectrum convolved by the Fourier transform of that apodization function. We call this Fourier transformed apodization function the FIRAS frequency response function, $A(\Delta \nu)$, and plot it here. } \end{figure} FIRAS is a rapid-scan polarizing Michelson interferometer \citep{mather1993design} on the COBE satellite that mapped the frequency spectrum of the full infrared sky at a coarse angular resolution. The frequency spectrum for each pointing was obtained via an inverse Fourier transform of the interferogram of measured powers over a discrete range of instrument path length differences. The resulting measurements of the sky spectrum are equal to the true spectrum convolved by the inverse Fourier transform of an apodization function \citep{fixsen1994calibration}. Figure\,\ref{fig:FIRAS_freq_window_func} plots this frequency response function, which we shall denote $A(\Delta \nu)$. The published FIRAS maps are binned in the HEALPix \footnote{http://healpix.sourceforge.net} \citep{2005ApJ...622..759G, Zonca2019} format with resolution parameter $N_{\rm side}{=}16$, corresponding to 3072 angular pixels, sufficient to sample the 7-degree beam. In addition to the sky maps, inverse-noise weight maps were produced based on fluctuations of the different interferograms contributing to each pixel. We upgrade the map binning to $N_{\rm side}{=}128$ for analysis. This regridding does not gain any angular information from the FIRAS maps, but it allows finer, more accurate noise weights for the galaxy over-density maps. Figure\,\ref{fig:FIRAS_weights} shows the inverse-noise weights at $N_{\rm side}=128$ for [C{\sc\,II}]\ emission from $z\sim0.52$. \begin{figure} \includegraphics[width=\columnwidth]{FIRAS_weight_plot.pdf} \caption{ \label{fig:FIRAS_weights} Mollweide projection of the FIRAS inverse-noise weights for [C{\sc\,II}]\ emission at $z\sim0.52$. We zero the weights at all regions that do not overlap with the CMASS galaxy survey. Although this plot only shows the weights for the single frequency bin centered at $z\sim0.52$, the angular distribution of the weights is identical for all frequencies. The frequency dependence of the weights can be inferred from the yellow curve of Figure\,\ref{fig:Gal_counts_noise}.} \end{figure} The FIRAS beam is formed by a non-imaging parabolic concentrator, which creates a near-tophat beam response with 7-degree FWHM. This beam was measured by in-flight scans of the Moon (Figure\,\ref{fig:FIRAS_beam}). In addition to this intrinsic beam convolution, the finite time required to complete an interferogram combined with the FIRAS scan motion causes the maps to be further smoothed in the ecliptic scan direction by a 2.4-degree tophat function. We account for all beam, scan, and pixelization smoothing effects on the power spectrum via simulation: realizations of a model power spectrum are drawn at $N_{\rm side}=128$, convolved by the FIRAS beam model, convolved by a 2.4-degree tophat function in the ecliptic direction, degraded to $N_{\rm side}=16$, and re-gridded onto $N_{\rm side}=128$.
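As an illustration, this simulation pipeline might be sketched in a few lines of healpy. This is a minimal sketch, not the analysis code: the measured tophat-like beam and the anisotropic 2.4-degree ecliptic smearing are replaced here by symmetric Gaussian stand-ins, and the model spectrum is a placeholder.
\begin{verbatim}
# Sketch: one realization of the beam/scan/pixelization simulation described
# above. The measured tophat-like beam and the 2.4-degree ecliptic-direction
# tophat smearing are replaced by symmetric Gaussian stand-ins, and the model
# spectrum is a placeholder.
import numpy as np
import healpy as hp

nside_hi, nside_lo = 128, 16
lmax = 3 * nside_lo                                  # = 48, complete at N_side=16
cl_model = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2   # placeholder model spectrum

m = hp.synfast(cl_model, nside_hi, lmax=lmax)        # draw a realization
alm = hp.map2alm(m, lmax=lmax)
beam = hp.gauss_beam(np.radians(7.0), lmax=lmax)     # stand-in for the FIRAS beam
scan = hp.gauss_beam(np.radians(2.4), lmax=lmax)     # stand-in for the scan smear
m_mod = hp.alm2map(hp.almxfl(alm, beam * scan), nside_hi)
m_mod = hp.ud_grade(hp.ud_grade(m_mod, nside_lo), nside_hi)  # degrade, re-grid

cl_out = hp.anafast(m_mod, lmax=lmax)                # input to the ratio below
\end{verbatim}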
We then compute an angular transfer function in $\ell$ by calculating the ratio of the angular power spectrum calculated from these modified maps to the original angular power spectrum. The square root of this transfer function, denoted $\omega_{\rm FIR}(\ell)$ (Figure\,\ref{fig:FIRAS_win}), represents the combined beam, scan, and pixel window function for the FIRAS maps. \begin{figure} \includegraphics[width=\columnwidth]{FIRAS_measured_beam.pdf} \caption{ \label{fig:FIRAS_beam}The measured FIRAS beam (solid green) and our interpolation (dashed blue). The beam was measured at a frequency of $1.5$\,THz via observations of the Moon.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{FIRAS_beam_pix_win_func.pdf} \caption{ \label{fig:FIRAS_win}The simulated FIRAS angular window function, $\omega_{\rm FIR}(\ell)$, which accounts for beam, scan, and pixelization effects.} \end{figure} \subsection{BOSS Data Set} \label{sec:BOSS} The BOSS survey \citep{dawson2012baryon} extends the Sloan Digital Sky Survey (SDSS). It was designed to measure the Baryon Acoustic Oscillation feature in the galaxy matter power spectrum. Precise spectroscopic redshifts were obtained for approximately 1.5\,million galaxies in the redshift range $0{<}z{<}0.8$, selected to have approximately constant stellar mass. Details about the telescope and instruments of SDSS can be found in \cite{1996AJ....111.1748F}, \cite{1998AJ....116.3040G}, \cite{gunn20062}, \cite{2010AJ....139.1628D}, and \cite{smee2013multi}. \begin{figure} \includegraphics[width=\columnwidth]{Gal_Counts_FIRAS_Noise.pdf} \caption{ \label{fig:Gal_counts_noise} Galaxy counts as a function of redshift for the BOSS LOWZ (blue) and CMASS (red) populations. Plotted in yellow is the expected FIRAS noise angular power spectrum, $C_{\ell}$, at $\ell=0$ for the corresponding frequencies of the [C{\sc\,II}]\ fine-structure line. For most of the angular modes accessible to FIRAS, the noise power spectrum will be higher, growing roughly as the inverse square of the FIRAS beam and scan window function. For $z<0.2$, the FIRAS noise grows rapidly. The shaded blue and red rectangles indicate the redshift ranges used for the cross-power analysis of the FIRAS data with the LOWZ and CMASS samples. These regions were selected because both the FIRAS thermal noise and galaxy shot noise are low.} \end{figure} BOSS data release 12 \citep{alam2015eleventh} includes 100 mock unclustered realizations of CMASS and LOWZ galaxies. We bin both the real catalogs and the unclustered mocks onto HEALPix maps with $N_{\rm side}=128$, in redshift bins corresponding precisely to the FIRAS frequency bins under the assumption that the FIRAS signal is redshifted [C{\sc\,II}]\ emission. We construct the CMASS and LOWZ galaxy selection functions, denoted $\bar n(z, \theta)$, by averaging their mock catalogs, assuming a selection function that is separable in angle and redshift. The separability assumption reduces shot noise in the selection function and should be sufficiently accurate for the cross-correlation, which is limited by FIRAS noise. We define the boundary of the survey by zeroing pixel weights where the selection function is more than $1.3$ standard deviations below the angular average of the sample, which additionally de-weights some regions with lower coverage.
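As a concrete illustration of this binning and selection-function construction, a minimal sketch might read as follows. The catalogs are hypothetical stand-ins, and a single uniform mock plays the role of the average over the 100 unclustered mocks.
\begin{verbatim}
# Sketch: binning a spectroscopic catalog into per-slice HEALPix maps and
# building a separable selection function from unclustered mocks. The catalogs
# are hypothetical stand-ins; one uniform mock plays the role of the average
# over the 100 unclustered mock realizations.
import numpy as np
import healpy as hp

nside = 128
z_edges = np.linspace(0.41, 0.75, 15)        # illustrative CMASS-like slicing

def bin_catalog(ra_deg, dec_deg, z, z_edges, nside):
    """Galaxy counts n(z, pixel): one HEALPix map per redshift slice."""
    pix = hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)
    maps = np.zeros((len(z_edges) - 1, hp.nside2npix(nside)))
    for i in range(len(z_edges) - 1):
        in_slice = (z >= z_edges[i]) & (z < z_edges[i + 1])
        np.add.at(maps[i], pix[in_slice], 1.0)
    return maps

rng = np.random.default_rng(2)
def fake_catalog(n_gal):                     # uniform stand-in catalog
    ra = rng.uniform(0.0, 360.0, n_gal)
    dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n_gal)))
    return ra, dec, rng.uniform(z_edges[0], z_edges[-1], n_gal)

n = bin_catalog(*fake_catalog(100_000), z_edges, nside)         # "data"
nbar_raw = bin_catalog(*fake_catalog(100_000), z_edges, nside)  # "mock average"

# Separable selection function nbar(z, theta) ~ f(z) g(theta), normalized to
# preserve total counts; separability reduces shot noise in the selection.
g_theta = nbar_raw.sum(axis=0)
f_z = nbar_raw.sum(axis=1)
nbar = np.outer(f_z, g_theta) / nbar_raw.sum()

# Survey boundary: zero the weight where the angular selection is more than
# 1.3 standard deviations below its average.
mask = g_theta > g_theta.mean() - 1.3 * g_theta.std()
\end{verbatim}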
The galaxy over-density field is then formed via \begin{equation} \delta^g(z, \theta) = \frac{n(z,\theta)}{\bar n(z, \theta)} - 1, \end{equation} where $n(z,\theta)$ denotes the binned galaxy maps and $\bar n(z, \theta)$ denotes the selection function. Figure\,\ref{fig:CMASS_sel} displays the galaxy over-density maps and selection function for a single redshift slice at $z\sim0.52$. \begin{figure} \includegraphics[width=\columnwidth]{CMASS_map_sel.pdf} \caption{ \label{fig:CMASS_sel} Mollweide projection of the CMASS over-density map and selection function at $z\sim0.52$, near the redshift peak of CMASS, in Galactic coordinates. There are several points in the over-density map well above 4, but we choose to saturate the scale at 4 to show the broad clustering features.} \end{figure} Previous analysis of the BOSS data has found evidence of systematic contamination from stellar populations at low $\ell$ \citep{loureiro2019cosmological}. We also find evidence of systematic low $\ell$ contamination, with spurious $z{-}z'$ correlations visible at the lowest $\ell$-bin of our analysis ($2\leq \ell \leq 10$), indicative of a common systematic component across redshifts. $\chi^2$ tests of the BOSS data against our fitted model are high when this bin is included but drop to expected values when it is excluded. We cut this lowest angular bin and the next bin ($11\leq \ell \leq 19$) from our analysis (see Figure\,\ref{fig:variance_vs_ell}). \section{Signal Model and Parameter Fit} \label{sec:signal model} The low spatial resolution of the FIRAS maps limits the range of scales to $2 < \ell < 47$, or five bandpowers with $\Delta \ell = 9$. Figure\,\ref{fig:variance_vs_ell} shows the expected variance on the cross-power signal for each $\ell$ and the $\ell$-bins we use in this analysis. Since the first of the five available $\ell$-bins shows signs of stellar contamination in the BOSS sample, we drop it from our analysis. We also eliminate the second $\ell$-bin because, in that bin, the mode mixing caused by partial sky coverage combines with the steep angular index of Milky Way emission to mix the FIRAS auto-power to negative values. This makes the empirical FIRAS model non-positive definite in that bin and therefore unusable as verification for our cross-power fits. Consequently, we use only the last three bins, as indicated in Figure\,\ref{fig:variance_vs_ell}, which contain most of the sensitivity. In principle, extra information could be obtained at higher $\ell$ by re-making the FIRAS maps from the raw data at higher $N_{\rm side}$, but in practice, the 7-degree beam of the FIRAS instrument diminishes the signal-to-noise ratio at higher $\ell$ (see the upward trend in noise-to-signal ratio for $C_{\ell}^{\times}(z,z')$ at $\ell>35$ in Figure\,\ref{fig:variance_vs_ell}). To model the errors, we also restrict the FIRAS${\times}$FIRAS\ and BOSS${\times}$BOSS\ analysis to these same three $\ell$-bins, even though much higher $\ell$ information is available from the BOSS catalog. Similarly, we must restrict the FIRAS${\times}$FIRAS\ analysis to the sky fraction covered by the BOSS galaxy survey to avoid over-estimating the error bars by including the bright Galactic plane, which has no overlap with the BOSS North or South fields.
\begin{figure} \includegraphics[scale=0.45]{variance_vs_ell_zpoint4.pdf} \caption{ \label{fig:variance_vs_ell} The approximate expected variance in the cross-power diagonal at $z {\sim} 0.4$, according to the Gaussian error formula $\langle \Delta C^{x}_\ell(z,z') \Delta C^{x}_\ell(z,z') \rangle \approx (f_{\rm sky}(2\ell + 1))^{-1} C^{\rm IM}_\ell(z,z')C_{\ell}^g(z,z')$. Although the magnitude varies somewhat with redshift, the shape is representative of all redshifts studied. The small number of modes and large Galactic foregrounds drive high variance at low $\ell$. For $\ell>20$, the foregrounds have mostly subsided, and thermal noise dominates. For $\ell > 30$, the increase in the noise caused by beam and scan convolution starts to overtake the advantage of extra modes at higher $\ell$. Since the FIRAS data were originally mapped at $N_{\rm side}=16$, spherical harmonics below $\ell=3N_{\rm side}=48$ form a complete basis, and no further information can be extracted by considering higher $\ell$. Dashed lines indicate the bounds of the five $\ell$-bins, and the shaded regions show the three bins used in the cross-power analysis.} \end{figure} \subsection{Dark Matter Model}\label{subsec:dark_matter_model} Both our BOSS${\times}$BOSS\ and FIRAS${\times}$BOSS\ models require the dark matter angular power spectrum of the overdensity field, $C_{\ell}^{\delta}(z,z')$. We calculate this angular power spectrum with the Boltzmann code CLASS \citep{2013JCAP...11..044D, 2014JCAP...01..042D}, using cosmological parameters inferred from the Planck 2015 \citep{Ade:2015xua} temperature and low-$\ell$ polarization maps (TT+LowP). The Halofit routine \citep{Smith:2002dz} provides nonlinear corrections to the power spectrum. However, on the several-degree scales of this analysis, the fluctuations are well-described by linear perturbation theory ($k_{\rm max} \sim \frac{\ell_{\rm max}}{\chi(z_{\rm min})} \sim \frac{50}{880}$ h/Mpc $\sim 0.06$ h/Mpc), and nonlinear corrections are small. CLASS computes the angular power spectrum from the 3D power spectrum, $P(k)$, according to the equation \begin{equation} C_{\ell}^{A\times B}(z,z') = \frac{2}{\pi} \int k^2 P^{\delta}(k,z{=}0) W_A^{\rm tot}(k,z)W_B^{\rm tot}(k,z') dk, \end{equation} where $P^{\delta}(k,z{=}0)$ is the dark matter power spectrum at the current epoch, and, if there are no redshift space distortions (RSDs), \begin{equation} W_A(k, z) = b_A \int \phi_z(z'') G(z'',k) j_{\ell}[k \chi (z'')] dz'', \end{equation} where $b_A$ is the bias for dark matter tracer $A$, $\phi_z(z'')$ is a tophat redshift selection function that is non-zero only over the range of the redshift slice centered at redshift $z$ and normalized to integrate to 1, $ j_{\ell}$ is a spherical Bessel function of the first kind of order $\ell$, $G(z'',k)$ is the growth factor, and $\chi(z'')$ is the radial comoving distance to the shell at redshift $z''$.
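Before turning to redshift space distortions, the following minimal sketch shows how such a Limber-free $C_{\ell}(z,z')$ computation for a pair of tophat slices might be set up with the classy wrapper of CLASS. The slice centers and widths here are hypothetical, and the parameter names should be checked against the CLASS documentation.
\begin{verbatim}
# Sketch: Limber-free C_ell(z,z') for two tophat redshift slices using the
# classy wrapper of CLASS. Slice centers/widths are hypothetical placeholders;
# only the density term is included here (RSD terms can be added through
# CLASS's "number count contributions" setting, yielding the bias-1/bias-0
# combinations described in the text).
from classy import Class

cosmo = Class()
cosmo.set({
    'output': 'nCl',                 # galaxy number-count angular spectra
    'selection': 'tophat',           # tophat phi_z(z'') window, as in the text
    'selection_mean': '0.45, 0.55',  # two hypothetical slice centers
    'selection_width': '0.01, 0.01', # tophat (half-)widths, roughly channel-sized
    'non_diagonal': 1,               # also compute the z != z' cross spectrum
    'l_max_lss': 50,
})
cosmo.compute()

cls = cosmo.density_cl(50)   # dict of arrays; cls['dd'] holds the (z,z') pairs
print(len(cls['dd']))        # 3 spectra: two autos and one cross
cosmo.struct_cleanup()
\end{verbatim}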
Linear redshift space distortions (RSDs) can be included \citep{Fisher:1993pz, padmanabhan2007clustering} by replacing $W_A(k, z)$ with $W_A^{\rm tot}(k,z)$, where \begin{equation} \begin{split}\label{eq:RSD_boss} W_A^{\rm tot}(k, z) = &W_A(k, z) + W_A^{\rm RSD}(k,z)\\ W_A^{\rm RSD}(k,z) =&b_A \int \beta_A(z'') \phi_z(z'') \times \\ \biggl [ &\frac{2\ell^2+2\ell -1}{(2\ell+3)(2\ell-1)}j_{\ell}(k \chi (z''))\\ &- \frac{\ell(\ell-1)}{(2\ell-1)(2\ell+1)}j_{\ell-2}(k \chi(z''))\\ &- \frac{(\ell+1)(\ell+2)}{(2\ell+1)(2\ell+3)}j_{\ell+2 }(k \chi(z'')) \biggr ] dz'',\\ \end{split} \end{equation} where $\beta_A(z) = f(z)/b_A$, with $f(z)$ being the logarithmic growth rate of linear perturbations in the matter power spectrum. The $C_{\ell}^{A\times B}(z,z')$ that results from using $W_A^{\rm tot}(k, z)$ and $W_B^{\rm tot}(k, z)$ contains cosmological terms proportional to $b_A b_B$, proportional to $(b_A {+} b_B)/2$, and independent of both $b_A$ and $b_B$. We label these terms $C_{\ell}^{(2)}(z,z')$, $C_{\ell}^{(1)}(z,z')$, and $C_{\ell}^{(0)}(z,z')$ respectively. They are calculated from CLASS via linear combinations of $C_{\ell}(z,z')$ computations without RSD and bias 1, with RSD and bias 1, and with RSD and bias 0. With this formalism, the cross-power spectrum of two biased matter tracers is given by \begin{equation} C_{\ell}^{A\times B}(z,z') = b_A b_B C_{\ell}^{(2)}(z,z') + \frac{b_A {+} b_B}{2}C_{\ell}^{(1)}(z,z') + C_{\ell}^{(0)}(z,z'). \end{equation} The bias dependence of these terms is reminiscent of the Kaiser correction in power spectrum space. Indeed, these equations can be derived by including the Kaiser enhancement term in a plane-wave expansion of the power spectrum and integrating along the line-of-sight \citep{padmanabhan2007clustering}. We do not model finger-of-God effects or redshift smearing due to spectroscopic survey errors since our redshift bin size of $\Delta z \sim 0.02$ is more than 10 times larger than the 400\,km/s satellite-galaxy velocity dispersion and the 30\,km/s spectroscopic error found in the analysis of CMASS galaxies by \cite{Guo:2014iga}. \subsection{\texorpdfstring{FIRAS${\times}$BOSS}{FIRASxBOSS}}\label{subsec:cross-power} The cross-correlation signal model consists solely of correlated continuum emission (dust) and [C{\sc\,II}]\ emission from the BOSS galaxies. Because they are uncorrelated with the cosmological overdensity field, the thermal noise and foregrounds contribute zero average cross-power and thus will not factor into the mean signal model (although the variance caused by their spurious correlation with the galaxy survey will be included through the FIRAS auto-power in the covariance). In Appendix\,\ref{sec:Appendix_A}, we derive the functional form of our cross-power model. The [C{\sc\,II}]\ part of the model is \begin{equation}\label{eq:cii_cl_full_orig} \begin{split} &C_{\ell}^{\rm [C{\sc\,II}] \times g}(z,z') = \\ &I_{\rm [C{\sc\,II}]}(z) \cdot \Big[ b_gb_{\rm [C{\sc\,II}]}C^{(2)}_{\ell}(z,z') + \\ &\frac{b_g + b_{\rm [C{\sc\,II}]}}{2}C^{(1)}_{\ell}(z,z') + C^{(0)}_{\ell}(z,z') \Big]. \end{split} \end{equation} The redshift dependence of $I_{\rm [C{\sc\,II}]}(z)$ is shown in Equation\,\ref{eq:I_CII}. An intensity mapping experiment with sufficient sensitivity can fit the parameters that control the redshift evolution of $I_{\rm [C{\sc\,II}]}(z)$.
However, due to the high noise and large beam of FIRAS, we fix all of those evolution parameters to reasonable values from \cite{pullen2018search} (see details in Appendix\,\ref{sec:Appendix_A}). Those values lead to a modest evolution in which the brightness increases ${\approx}20\%$ toward higher redshift over each of the CMASS and LOWZ redshift ranges. Our MCMC analysis assumes this redshift shape and fits for the overall amplitude of $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}(z{=}z_{\rm center})$ at the central redshift of each region (CMASS and LOWZ, respectively). The CIB portion of the cross-power model is \begin{equation}\label{eq:cib_cl_full_orig} \begin{split} &C_{\ell}^{c \times g}(z,z') = \\ & \sum_{z''} \frac{dI_{\rm CIB}(\nu_{\rm [C{\sc\,II}]}^z,z'')}{dz''}\Delta z'' \times \left[ b_gb_{\rm [C{\sc\,II}]}C^{(2)}_{\ell}(z'',z') + \right.\\ &\left. \frac{b_g + b_{\rm [C{\sc\,II}]}}{2}C^{(1)}_{\ell}(z'',z') + C^{(0)}_{\ell}(z'',z') \right], \end{split} \end{equation} where $\frac{dI_{\rm CIB}(\nu_{\rm [C{\sc\,II}]}^z,z'')}{dz''}\Delta z''$ is the intensity of the CIB that is emitted from sources in a redshift bin of size $\Delta z''$, centered at redshift $z''$, and measured at a frequency of $\nu_{\rm [C{\sc\,II}]}/(1+z)$. The sum over $z''$ in Equation\,\ref{eq:cib_cl_full_orig} should, in principle, be carried out over all redshifts, even those outside of the galaxy survey. In practice, for the redshift bins and $\ell$ bins considered in this analysis, the bracketed $C_{\ell}(z'',z')$ kernel is dominated by the $z''= z'$ term, with the neighboring off-diagonal terms around 20\% of the diagonal term, and the remaining terms negligible. As with our [C{\sc\,II}]\ analysis, because of the limited sensitivity of the FIRAS data, we do not attempt to constrain the redshift evolution of the CIB brightness or the spectral shape of the CIB emission. Instead, these are fixed by the assumed values of $\beta = 1.5$ \citep{Planck2014}, $T_d=26$\,K \citep{Serra:2014pva}, $\Phi(z) = (1+z)^{2.3}$ \citep{pullen2018search}, $\log_{10}(M_{\rm eff}/M_{\odot}) = 12.6$ \citep{Planck2014, serra2016dissecting}, and $\sigma^2_{L/M}=0.5$ \citep{10.1111/j.1365-2966.2012.20510.x, Planck2014, serra2016dissecting}. Because of this assumed spectral and redshift evolution, the parameter we constrain is $\frac{dI_{\rm CIB}(\nu_{\rm [C{\sc\,II}]}^z = \nu_{\rm center} ,z'' = z_{\rm center})}{dz''}$, where $z_{\rm center}$ is the central redshift of the analysis region, and $\nu_{\rm center}$ is the corresponding central frequency of the analysis region. Finally, our signal model must account for the FIRAS data being convolved by the FIRAS frequency response function, $A(\nu)$. To do this, we convert the frequency response function $A(\nu)$ to a function of redshift, $A(z)$, and convolve $C_{\ell}^{\times}(z,z')$ by $A(z)$, resulting in the final signal model \begin{equation} C_{\ell}^{\times}(z,z') = A(z'') \circledast \left[ C_{\ell}^{\rm [C{\sc\,II}] \times g}(z'',z') + C_{\ell}^{c \times g}(z'',z') \right], \end{equation} where $\circledast$ denotes convolution over the dummy redshift variable $z''$, evaluated at $z$. In our MCMC analysis, we fix the value of the galaxy bias, $b_g$, to the best-fit value from our BOSS${\times}$BOSS\ analysis (Section\,\ref{subsec:BOSS_auto_model}) and also fix the [C{\sc\,II}]\ and CIB bias to be identical to the galaxy bias ($b_{\rm [C{\sc\,II}]}=b_g$).
This fixing of the [C{\sc\,II}]/CIB bias has almost no effect on our fits since only the small RSD terms ($C^{(1)}_{\ell}(z,z')$ and $C^{(0)}_{\ell}(z,z')$) can break the degeneracy between $b_{\rm [C{\sc\,II}]}$ and $I_{\rm [C{\sc\,II}]}$, and FIRAS${\times}$BOSS\ lacks the precision to break that degeneracy. The next section, Section\,\ref{subsec:FB_cov}, develops the covariance model employed by the MCMC. Figure\,\ref{fig:cross_power_fits} shows the MCMC contours for our parameter fits to the FIRAS${\times}$BOSS\ data with the CMASS and LOWZ galaxies. Also shown, on the bottom row, are the $\chi^2$ of the CMASS and LOWZ maximum likelihood fits, compared to a simulated distribution of best-fit $\chi^2$ values. For the MCMC analysis, we apply a simple flat prior that restricts both $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ and $b_{\rm [C{\sc\,II}]}dI_{\rm CIB}/dz$ to positive values. This prior has almost no effect on our best-fit $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ values, but since we do not have enough sensitivity for detection, it prevents unphysical negative values from counting towards our quoted upper limit constraints. We find, at 95\% confidence, that $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}<0.31$\,MJy/sr at $z\sim0.35$ and $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}<0.28$\,MJy/sr at $z\sim0.57$. \begin{figure} \includegraphics[width=\columnwidth]{FIRASxBOSSv3.pdf} \caption{Cross-power fits for the line and continuum amplitude from FIRAS${\times}$BOSS, showing the CMASS analysis on the left and the LOWZ analysis on the right. The top row shows the MCMC contours of our fit to the data. Dark blue and light blue regions represent the 68\% and 95\% contours, respectively. The MCMC analysis uses a simple flat prior that restricts both $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ and $b_{\rm [C{\sc\,II}]}dI_{\rm CIB}/dz$ to positive values. This prior has a minimal effect on our best-fit $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ values, but it serves to prevent nonphysical negative values from counting towards our upper limit constraints. The bottom row shows red histograms of the distribution of best-fit $\chi^2$ for 10,000 simulations in which we draw $a_{\ell m}$ amplitudes from Gaussian distributions for the cosmological signal, galaxy shot noise, and FIRAS auto-power (for more details of the simulation, refer to the parametric simulations described in Appendix\,\ref{sec:Appendix_cov_sims}). The $\chi^2$ of the maximum likelihood fits to the data are plotted as vertical dashed blue lines.} \label{fig:cross_power_fits} \end{figure} \subsection{Modeling the \texorpdfstring{FIRAS${\times}$BOSS}{FIRASxBOSS} Covariance} \label{subsec:FB_cov} The required pieces for the covariance used in the MCMC parameter estimation are models for: 1) the [C{\sc\,II}]\ and CIB signal associated with cosmological clustering, 2) the BOSS signal associated with cosmological clustering, plus shot noise, and 3) the FIRAS thermal noise and foregrounds. In the following three subsections, we describe each of these models in turn. Appendix\,\ref{sec:Appendix_B} describes our method of simulating the covariance from these three models. \subsubsection{\texorpdfstring{[C{\sc\,II}]}{[CII]} and CIB signal} We simulate the [C{\sc\,II}]\ and CIB variance models through map-space simulations that include FIRAS instrumental effects. The [C{\sc\,II}]\ and CIB signals are painted onto maps drawn from the cosmological clustering signal with linear bias.
For the [C{\sc\,II}]\ signal, this is accomplished by multiplying the drawn maps by $I_{\rm [C{\sc\,II}]}(z)$. For the CIB signal, the maps are matrix-multiplied by $\frac{dI_{\rm CIB}(\nu_{\rm [C{\sc\,II}]}^z,z'')}{dz''}\Delta z''$, in a map-space analogy to Equation\,\ref{eq:cib_cl_full_orig}. These maps are then convolved by the FIRAS redshift response function, $A(z)$. The magnitude of the portion of the covariance that comes from this [C{\sc\,II}]\ and CIB signal is a function of the [C{\sc\,II}]\ and CIB amplitudes. In order to account for this, our covariance is constructed from a linear combination of four separate simulations (accounting for each cross-term). Appendix\,\ref{sec:Appendix_B} describes this process. \subsubsection{Clustering signal and shot noise from \texorpdfstring{BOSS${\times}$BOSS}{BOSSxBOSS}}\label{subsec:BOSS_auto_model} We model the variance in the BOSS survey as a tracer of the dark matter with constant bias and linear RSD plus shot noise: \begin{equation} \begin{split}\label{eq:boss_full_model} C_{\ell}^g(z,z') &= b_g^2C_{\ell}^{(2)}(z,z') + b_gC_{\ell}^{(1)}(z,z')\\ &+ C_{\ell}^{(0)}(z,z') + A_{SN}\frac{\Omega_{\rm pixel}}{\bar n(z)}\delta(z,z'), \end{split} \end{equation} where $\bar n(z)$ is the average number of BOSS galaxies per pixel in each redshift slice, $\Omega_{\rm pixel}$ is the angular size of a pixel in steradians, and $\delta(z,z')$ is a Kronecker delta. While the complete BOSS${\times}$BOSS\ power spectrum requires additional modeling to describe all scales \citep{loureiro2019cosmological}, this simple model with free parameters for a constant bias and shot-noise amplitude is sufficient to describe the BOSS variance relevant to the angular scales analyzed in FIRAS${\times}$BOSS. Due to the large number of galaxies in the CMASS and LOWZ samples and the limited $\ell$-range of our analysis, the shot noise is considerably smaller than the clustering signal, so $A_{SN}$ is weakly constrained. The bias parameter is fit to values of 1.81 and 1.82 for the LOWZ and CMASS samples, respectively, consistent with previous work \citep{salazar2017clustering}. We find reasonable $\chi^2$ values for both the CMASS ($\chi^2$ per degree of freedom of 1.02, PTE of 0.37) and LOWZ ($\chi^2$ per degree of freedom of 1.08, PTE of 0.12) fits. Figure\,\ref{fig:CMASS_gal_model_vs_data} shows the measured $C_b (z,z')$ from CMASS versus the best-fit model for the three $\ell$-bins we consider. The covariance we use for the BOSS${\times}$BOSS\ parameter estimation is computed using the approximate formula of \cite{tristram2005xspect} described in Appendix\,\ref{sec:Appendix_coupling_approximation}. Since the covariance is a function of the model, an MCMC analysis must, in principle, recalculate the covariance for each different estimate of the underlying model parameters. For BOSS${\times}$BOSS, we instead employ an iterative approach. We make a best guess of the parameters, compute a covariance for that guess, find a new maximum likelihood solution using that covariance, and then repeat the procedure. After several iterations, this procedure converges: the parameters assumed in the model for the covariance equal the parameters at the maximum likelihood peak of our MCMC fit to within a fractional tolerance of $3 \times 10^{-3}$. \begin{figure} \includegraphics[width=\linewidth]{CMASS_auto_data_vs_model.pdf} \caption{\label{fig:CMASS_gal_model_vs_data} The BOSS${\times}$BOSS\ binned angular power spectrum, $C_b(z,z')$, for the CMASS galaxies.
The left and right columns show the data and the best-fit model, respectively. The rows show the three angular bins used in this analysis, each of size $\Delta \ell=9$ and centered at $\ell=24$, 33, and 42. At the resolution of FIRAS, most of the cosmological clustering signal occurs on the diagonal, where $z=z'$, though there is also a small correlation between neighboring redshift bins, visible just off of the diagonal. The variation in amplitude along the diagonal is due to the redshift evolution of the growth factor, the change in physical scales being probed as a function of redshift, and changes in the CMASS galaxy density.} \end{figure} \subsubsection{Thermal noise and foregrounds from \texorpdfstring{FIRAS${\times}$FIRAS}{FIRASxFIRAS}} \label{subsec:FIRAS_auto} We model the FIRAS thermal noise from the inverse noise variance weights provided by the FIRAS collaboration \citep{FIRASexplanatory}. Let $W(\theta, z)$ represent the FIRAS inverse noise variance maps at $N_{\rm side}{=}128$. Thermal noise contributes constant variance in $\ell$-space, given by $N(z){=}\Omega_{128} \frac{\langle W(\theta,z)\rangle_{\theta}}{\langle W^2(\theta,z)\rangle_{\theta}}$, where $\Omega_{128}$ is the pixel size in steradians, and the angular average is taken only over pixels that overlap the BOSS galaxy survey. We then model the FIRAS thermal noise as \begin{equation}\label{eq:thermal_noise_model} N(\ell, z, z') \equiv N(z)^{1/2}N(z')^{1/2}A\left(|\nu(z) - \nu(z')|\right), \end{equation} where $A\left(|\nu(z) - \nu(z')|\right)$ accounts for the convolution of the FIRAS spectrum by the Fourier Transform Spectrometer's frequency response function. The measured frequency correlations of the thermal noise agree with the covariance model of the FIRAS collaboration \citep{FIRASexplanatory}. For this thermal noise component, we compute the expected binned angular power spectrum (Equation\,\ref{eq:model_mixing_binning}) by applying the $N_{\rm side}=16$ pixel window function to $N(\ell, z, z')$, binning into bandpowers, and then unmixing with the binned mixing matrix that uses the full simulated beam, scan, and pixel window function. The result is that, after the binning and unmixing operator is applied, the initially flat angular spectrum of the noise now rises roughly as the inverse square of the FIRAS beam and scan window function. We model the Milky Way foreground angular power spectrum as a simple power-law with a free angular index $\gamma$ and amplitude $A_{\rm MW}^2$ that we fit to the FIRAS auto-power spectrum through the form \begin{equation} C^{\rm MW}_{\ell}(z,z') = A_{\rm MW}^2 \ell^{-\gamma}D(z,T_d)D(z',T_d). \end{equation} The spectrum of the Galactic emission, $D(z,T_d)$, is modeled as semi-thermal dust emission, given by \begin{equation} D(z,T_d) \propto \nu^{\beta} B_{\nu}(T_d), \end{equation} where $\nu$ is converted to redshift assuming the [C{\sc\,II}]\ line, $\beta = 1.5$ \citep{2014A&A...566A..55P}, $B_{\nu}$ is the Planck function, and the dust temperature $T_d$ is a free parameter. In principle, the Milky Way emission is also convolved in the frequency direction by the frequency response function of the FIRAS spectrometer, but the effect is negligible for smooth spectral emission, so we do not include it. The full model we fit to the data is \begin{equation} \label{eq:FIRAS_auto_model} C^{\rm IM}_\ell(z,z') = A_{\rm MW}^2 \ell^{-\gamma}D(z,T_d)D(z',T_d) + A_N N(\ell, z, z'). 
\end{equation} There are four free parameters: the Milky Way amplitude $A_{\rm MW}$ at $\ell\sim1$ (units of MJy/sr), the dust temperature $T_d$ (units of K), the unitless angular power-law index $\gamma$, and a unitless factor $A_N$ multiplying the expected noise signal ($A_N$ is expected to be near 1). Figures \ref{fig:FIRAS_auto_LOWZ} and \ref{fig:FIRAS_auto_CMASS} show color plots of the best-fit models and the data for LOWZ and CMASS, respectively, over the full redshift and angular range of our analysis. Figure\,\ref{fig:FIRAS_auto_diag} shows the redshift diagonal of the data and best-fit models for both the LOWZ and CMASS redshift ranges, along with error bars estimated from Monte Carlo simulations drawn from the best-fit model, fully including the effects of FIRAS beam convolution, ecliptic scan convolution, pixelization, and partial sky coverage. The covariance we use to fit FIRAS${\times}$FIRAS\ to our parametric model is simulated. We draw $a_{\ell m}$ amplitudes from a Gaussian distribution whose variance is given by our model (Equation\,\ref{eq:FIRAS_auto_model}). We use this to produce 10,000 full-sky maps, to which we apply our FIRAS window function. We then use the FIRAS inverse-noise weights and window function to compute a simulated observed binned partial-sky $C_b(z,z')$ for each of these 10,000 draws. The covariance computed from these simulated $C_b(z,z')$ amplitudes accounts for the effects of beam convolution, ecliptic scan smearing, and partial sky coverage. Since the covariance is a function of the assumed parameters for the model, it should, in principle, be recalculated for each different estimate of the underlying model parameters. For a simulated covariance, this procedure is computationally expensive. As with BOSS${\times}$BOSS, we instead employ an iterative approach for FIRAS${\times}$FIRAS. We make a best guess of the parameters, simulate a covariance for that guess, find a new maximum likelihood solution using that covariance, and then repeat the procedure. We repeat this iteration until the $\chi^2$ per degree of freedom changes by less than $1\%$ between iterations. Figures \ref{fig:FIRAS_auto_LOWZ}, \ref{fig:FIRAS_auto_CMASS}, and \ref{fig:FIRAS_auto_diag} show the parametric model compared to the measured FIRAS${\times}$FIRAS\ power spectrum. The fit converges to a $1\%$ constraint on the thermal noise amplitude centered on 0.97 for LOWZ and 0.90 for CMASS. The three foreground parameters are correlated and weakly constrained but yield dust temperatures consistent with the $15-25$\,K measured over the region of the sky we observe \citep{PhysRevD.95.103517}. Milky Way emission contributes approximately half of the variance to the $\ell=24$ bandpower and is negligible at the other two bandpowers. Because there are relatively few spatial modes in the BOSS regions at $\ell=24$, and because the BOSS region is relatively clear of Milky Way emission, the constraint on the Milky Way contribution to $\ell=24$ is uncertain to $30\%$, but this uncertainty has a small impact on the final bounds on line brightness from FIRAS${\times}$BOSS. Since the foregrounds account for half of the redshift-diagonal variance at $\ell=24$, a $30\%$ increase would cause a $15\%$ increase in the total $\ell=24$ bin variance. The $\ell=24$ bin makes up roughly half of the total [C{\sc\,II}]\ signal-to-noise ratio, so this could at most increase the [C{\sc\,II}]\ variance by $7.5\%$. In amplitude, this corresponds to a ${\sim}3.75\%$ increase in the final [C{\sc\,II}]\ constraint, or about 0.01\,MJy/sr.
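Written out explicitly, this error budget chains together as
\begin{equation}
\begin{split}
\Delta {\rm Var}_{\ell=24} &= 0.5 \times 30\% = 15\%,\\
\Delta {\rm Var}_{\rm [C{\sc\,II}]} &\leq 0.5 \times 15\% = 7.5\%,\\
\Delta {\rm Amp}_{\rm [C{\sc\,II}]} &\approx \tfrac{1}{2} \times 7.5\% = 3.75\%,\\
0.0375 \times 0.28\,{\rm MJy/sr} &\approx 0.01\,{\rm MJy/sr},
\end{split}
\end{equation}
where the factor of $1/2$ in the third line converts a fractional change in variance into a fractional change in amplitude, since error bars scale as the square root of the variance.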
In reality, this is conservative because the foregrounds have long frequency correlations that the [C{\sc\,II}]\ signal does not have, so most of the impact would instead be on the CIB constraint. The best-fit parametric models have low $\chi^2$ per d.o.f. (0.74 for LOWZ and 0.86 for CMASS), suggesting that the error model may modestly overestimate the true errors. An alternative to the parametric model approach is to use the measured auto-power spectrum of the FIRAS data, $\hat{C}_b^{\rm IM}$, as the model of variance in the FIRAS data cube. The attraction of this approach is that it captures features that could be missing in any parametric model fit. Unfortunately, because the FIRAS auto-power signal also contains the [C{\sc\,II}]\ and CIB signal, this approach introduces a bias wherein modes that scatter high, and thus contain more [C{\sc\,II}]\ and CIB signal, are artificially down-weighted in the cross-power analysis. This could bias both the measured [C{\sc\,II}]\ signal and its error bars low, an effect we have measured in simulations. Although the measured FIRAS auto-power cannot be used in a covariance model acting on the actual cross-power data, we use it on simulated cross-power signal to verify that our parametric model produces similar error bars on the FIRAS${\times}$BOSS\ parameters. Appendix\,\ref{sec:Appendix_cov_sims} shows the results of two sets of simulations, one with a covariance that uses the measured FIRAS auto-power and one with a covariance that uses our best-fit parametric model. The two models yield nearly identical error contours on the cross-power parameters. We note that the complete FIRAS covariance has additional terms with complex structure, described in \cite{FIRASexplanatory}, which include correlations jointly between frequencies $\nu, \nu'$ and position vectors $\hat n, \hat n'$ on the sky. In the frequency range studied here and the survey region outside the Galactic plane, this covariance is dominated by thermal detector noise, which is diagonal in $\hat n, \hat n'$, and by the correlated structure across frequencies produced by Milky Way emission at low multipoles. In measurements with future instruments that achieve high significance, the absolute calibration error must also be included. The FIRAS auto-power also contains the [C{\sc\,II}]\, signal and the continuum CIB, but at a low level that negligibly affects our parametric thermal noise and foreground fits. In addition, there are several prominent Galactic spectral lines \citep{fixsen1999cobe}. The only line that falls within the frequency range of our analysis is the 205.3\,$\mu{\rm m}$ [N{\sc\,II}]\ line. Although this line is visible in the full-sky FIRAS auto-power spectrum, it is not detectable when we restrict the data to the BOSS survey regions, which are out of the Galactic plane, so we do not include it in our model. \begin{figure} \includegraphics[width=0.45\textwidth]{FIRAS_LOWZ_auto_data_vs_model.pdf} \caption{ \label{fig:FIRAS_auto_LOWZ} The FIRAS${\times}$FIRAS\ binned angular power spectrum, $C_{b}(z,z')$, in the LOWZ redshift range for three angular bins of width $\Delta \ell = 9$, centered at $\ell=24$, 33, and 42. The left column shows the data, and the right column shows the best-fit model, of the form of Equation\,\ref{eq:FIRAS_auto_model}. The structure on the diagonal of the plots is due to thermal noise, with small just-off-diagonal correlations due to the FIRAS frequency window function, $A(\nu)$.
The thermal noise increases with higher $\ell$, roughly as the inverse of the FIRAS beam and scan window function squared. The foregrounds are visible in the $\ell=24$ bin as a roughly constant offset to all $z, z'$ combinations. The foreground amplitude is comparable to the thermal noise at $\ell=24$, but it drops at higher $\ell$ roughly as $\ell^{-\gamma}$ with $\gamma \approx 2.3$. The foregrounds are negligible compared to the thermal noise at $\ell=33$ and $\ell=42$. } \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{FIRAS_CMASS_auto_data_vs_model.pdf} \caption{ \label{fig:FIRAS_auto_CMASS} The FIRAS${\times}$FIRAS\ binned angular power spectrum, $C_{b}(z,z')$, in the CMASS redshift range with layout and general properties similar to the LOWZ region described in Figure\,\ref{fig:FIRAS_auto_LOWZ}. The vertical and horizontal lines in the $\ell=24$ data at $z\sim 0.67$ are due to a spurious correlation between thermal noise and Galactic foregrounds. } \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{FIRAS_Auto_Diag_3ells.pdf} \caption{ \label{fig:FIRAS_auto_diag} The redshift diagonal of the FIRAS${\times}$FIRAS\ binned angular power spectrum, $C_b(z=z')$, for both the CMASS and LOWZ redshift regions. The three angular bins, centered at $\ell=24$, $\ell=33$, and $\ell=42$ are plotted in red, green, and blue, respectively. The best-fit models are plotted as solid lines, and the data are plotted as triangles or circles, with error bars computed from the best-fit model. The $\ell=24$ data are artificially shifted backward by half a redshift bin for visual clarity. Dashed lines show the thermal noise portion of the best-fit model only. From the dashed lines, the effect of the FIRAS beam and scan convolution can be seen, as the thermal noise increases from low to high $\ell$. Only the $\ell=24$ bin has a significant foreground contribution.} \end{figure*} \section{Discussion}\label{sec:Discussion} Figure\,\ref{fig:Ib_models} compares FIRAS${\times}$BOSS\ upper limits on $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ to several representative physical models of [C{\sc\,II}]\ brightness as a function of redshift. It also shows the Planck${\times}$QSO\ intensity mapping constraint from \cite{yang2019evidence}, assuming that the excess power detected is [C{\sc\,II}]\ emission. Models that do not scale [C{\sc\,II}]\ emission with star formation rate evolution are disfavored because their low-redshift [C{\sc\,II}]\ emission is too bright. Models that scale [C{\sc\,II}]\ emission with star formation rate can be calibrated with physical parameters to be consistent with both FIRAS${\times}$BOSS\ and Planck${\times}$QSO. The yellow dotted region in Figure\,\ref{fig:Ib_models} shows the range of [C{\sc\,II}]\ amplitudes predicted by the collisional excitation model from \citet{Gong2012}, hereafter G12. In this model, the mean [C{\sc\,II}]\ intensity is computed through a simple radiative transfer model whose free parameters are the number density $n_e$ and kinetic temperature $T_e^K$ of electrons within the emitting galaxies. The yellow dotted region spans the range of $T_e^K$ and $n_e$ values considered by G12. Because this model predicts comparatively bright [C{\sc\,II}]\ emission, \citet{yang2019evidence} argue that it provides the best fit to their measurement and use their result to place constraints on the two free parameters. FIRAS${\times}$BOSS\ upper limits at $z {\sim} 0.5$ rule out the brightest range of the G12 predictions.
Since it is this bright end that is consistent with the \citet{yang2019evidence} measurement at $z\sim2.6$, the G12 model is disfavored to explain the FIRAS${\times}$BOSS\ and Planck${\times}$QSO\ [C{\sc\,II}]\ measurements. The G12 model was originally created to forecast emission at much higher redshifts, during the epoch of reionization. Since [C{\sc\,II}]\ luminosity is expected to be correlated with star formation, we qualitatively expect the [C{\sc\,II}]\ amplitude to follow the cosmic star formation history and come to a peak around $z\sim2-3$, declining to the present day \citep{Madau2014}. This is not seen in the G12 model, which monotonically increases as we move to lower redshift. Thus the bright G12 predictions, which are disfavored by the combination of FIRAS${\times}$BOSS\ and Planck${\times}$QSO, may not be a physically accurate estimate of the true [C{\sc\,II}]\ evolution during more recent epochs since $z{\sim} 2$. Next, we consider the scaling models from \citet{Silva2015} (hereafter S15), shown as the hatched purple region in Figure\,\ref{fig:Ib_models}. \cite{Silva2015} consider four different empirically-calibrated power-law scaling relations between [C{\sc\,II}]\ luminosity and star formation rate (SFR). The hatched purple region shows the full range of [C{\sc\,II}]\ luminosity predictions from the four values of the slope and amplitude of this power-law scaling from their Table 1. Although these models were originally constructed to predict [C{\sc\,II}]\ intensity at reionization redshifts, they are correlated with the cosmic star formation history and thus show the qualitative redshift evolution we expect, with a peak at $z\sim2$ and a decline at lower redshifts. These predictions fall a factor of 100 or more below our FIRAS${\times}$BOSS\ upper limits at $z\sim0.5$, putting them well below our ability to constrain. However, these models also fall well below the Planck${\times}$QSO\, estimate, meaning they may also be pessimistic predictions. Finally, we examine the ``semi-empirical" model from \citet{Sun2019} (hereafter S19), plotted as a dashed green curve. This calculation falls somewhat between the other two. It uses a physically motivated scaling between the [C{\sc\,II}]\ and infrared luminosity of a halo. This scaling is parameterized in terms of the photoelectric heating efficiency from dust grains, $\epsilon_{\rm{PE}}$. The distribution of galaxy infrared luminosities is calibrated to empirical measurements of the cosmic infrared background \citep{Planck2014}. This model also predicts a fiducial [C{\sc\,II}]\ amplitude well below our limits and the \citet{yang2019evidence} measurement. However, as shown in Figure\,10 of S19, the [C{\sc\,II}]\ intensity can be brought into agreement with the Planck${\times}$QSO\, measurement, if one increases $\epsilon_{\rm{PE}}$ by a factor of ${\sim} 6$ from their fiducial number. The authors, however, note that this higher value may lead to tension with the observed relation between [C{\sc\,II}]\ luminosity and star formation rate in low-redshift galaxies. The redshift evolution of this rescaled S19 model is plotted as the dashed blue curve in Figure\,\ref{fig:Ib_models}. In contrast with the G12 model, we find that our FIRAS measurements are fully consistent with the redshift evolution seen here, even scaling to the brighter emission at $z{\sim}2.6$ suggested by \citet{yang2019evidence}. 
Figure \ref{fig:Ib_models} also includes a point at $z {\sim} 0$ from luminosity function measurements \citep{2017ApJ...834...36H}, which integrates to $4.1\pm 2.7$\,kJy/sr. The measured galaxies cover the $10^{6.5}$ to $10^{9.5}$ solar luminosity range, via a mix of direct detection of [C{\sc\,II}]\ emission, at the bright end, and inference of [C{\sc\,II}]\ emission via FIR brightness and color measurements at the dim end. An integral of this luminosity function yields the expected [C{\sc\,II}]\ brightness, $I_{\rm [C{\sc\,II}]}$, but in order to plot $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$, a model for the [C{\sc\,II}]\ bias must be assumed. We estimate a bias of 1, which corresponds to the value from the halo model used in appendix \ref{sec:Appendix_A}. This roughly matches the bias evolution of all the models plotted, which predict decreasing bias at low redshift. This convergence of bias toward unity at low redshift occurs because, as halo masses increase at low redshift, the mass range for [C{\sc\,II}]-hosting galaxies becomes less biased towards the higher mass end of the halo distribution. Even the models that scale with star formation rate struggle to decrease quickly enough at low redshift to be consistent with both Planck${\times}$QSO\ and \cite{2017ApJ...834...36H}. There are several possible explanations. The estimated $z {\sim} 0$ [C{\sc\,II}]\ bias of 1 could be too low. Or there could be an undetected population of low-luminosity [C{\sc\,II}]\ galaxies. In fact, \cite{2017ApJ...834...36H} suggest the possibility of unmeasured faint low-metallicity dwarf-galaxies, which would have a different [C{\sc\,II}]-FIR calibration, contributing to the faint end of the [C{\sc\,II}]\ luminosity function. This possibility highlights a trend where early intensity mapping results suggest unexpectedly bright cumulative emission compared to direct detection of individual objects. For instance, \cite{2021arXiv210614904B} suggests that potential bright CO emission detected by mmIME \citep{Keating_2020} may be due to a large population of dim CO galaxies. Another possibility is that the assumption of these models, that [C{\sc\,II}]\ emission is directly tied to the star formation rate, is too simple. Conclusions here are limited by thermal noise in FIRAS. To meaningfully constrain the space of models, qualitatively new improvements in sensitivity are required. To put FIRAS in the context of what could be achieved by a modern mission, we compute the potential sensitivity achievable by cross-correlating maps from the proposed PIXIE satellite with the CMASS galaxy maps used in this paper. The Primordial Inflation Explorer (PIXIE) is a proposed NASA Explorer-class mission to measure the polarized imprint of primordial inflation on the CMB \citep{kogut2011primordial}, along with several other spectral distortions of the CMB \citep{chluba2021new}. PIXIE will conduct an all-sky survey with angular and spectral resolution and spectral coverage similar to FIRAS. However, the beam will be slightly smaller, and the noise level will be three orders of magnitude lower than FIRAS. These properties make PIXIE well-placed to study star-formation and scale-dependent bias as a test of non-Gaussianity via intensity mapping \citep{dizgah2019probing, switzer2017tracing}. To assess PIXIE's constraining power, we conduct a suite of 10,000 simulations with no [C{\sc\,II}]\ signal. 
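For illustration, the percentile logic of this null-simulation procedure can be sketched in a few lines (this is our sketch, not the analysis code; \texttt{simulate\_null\_amplitude} is a placeholder for the full thermal-noise and cross-power estimation pipeline described above, and its noise scale is arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_amplitude(rng):
    # Placeholder: in the real analysis this would draw a thermal-noise
    # realization, cross-correlate it with the galaxy map, and fit the
    # b * I_[CII] amplitude. The 50 Jy/sr scale is illustrative only.
    return rng.normal(loc=0.0, scale=50.0)

amps = np.array([simulate_null_amplitude(rng) for _ in range(10_000)])

# With no signal injected, the 95% confidence upper limit is the 95th
# percentile of the fitted amplitudes.
print(np.percentile(amps, 95.0), "Jy/sr")
\end{verbatim}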
These simulations project that a cross-correlation of PIXIE maps with CMASS galaxies could place an upper limit of $100$ Jy/sr on $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ at $95\%$ confidence (see the red transparent region in Figure\,\ref{fig:Ib_models}). This result means that PIXIE${\times}$CMASS\, is easily sensitive enough to detect even the most pessimistic [C{\sc\,II}]\ models at high significance. \begin{figure} \includegraphics[width=\columnwidth]{Planck_FIRAS_Ib.pdf} \caption{Comparison of measured values of $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$ to various models from the literature. The gray boxes show the values allowed by the FIRAS${\times}$LOWZ\, and FIRAS${\times}$CMASS\, 95\% upper limits, with their widths representing the redshift ranges used. The black circle shows the Planck${\times}$QSO\, measurement from \citet{yang2019evidence}. The horizontal error bars show the full redshift range of that measurement, and the vertical error bars show the 95\% confidence intervals on their measurement of $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$. The black square shows the value inferred from the $z\sim0$ [C{\sc\,II}]\ luminosity function \citep{2017ApJ...834...36H}, assuming $b_{\rm [C{\sc\,II}]}=1$. The error bars come from their $1\sigma$ errors on luminosity function parameters. The purple hatched region shows the range of brightnesses allowed by the scaling-relation models from \citet{Silva2015}, the yellow dotted region shows the range of brightnesses of the collisional excitation models from \citet{Gong2012}, and the green dashed curve shows the semi-empirical model from \citet{Sun2019}. The blue dashed curve shows the \citet{Sun2019} model linearly rescaled to match the Planck${\times}$QSO\, measurement at the appropriate redshift. The red region below 100\,Jy/sr in the FIRAS${\times}$CMASS\, redshift region shows the projected sensitivity achievable with a cross-correlation between PIXIE and CMASS galaxies in the same redshift range used for the FIRAS${\times}$CMASS\, analysis. This would easily detect even the most pessimistic of the plotted [C{\sc\,II}]\ models at high significance.} \label{fig:Ib_models} \end{figure} \section{Conclusion} \label{sec:Conclusion} Our analysis constrains the product of the bias and specific intensity of [C{\sc\,II}], $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]}$, to be ${<}0.31$\,MJy/sr at $z{\sim}0.35$ and ${<}0.28$\,MJy/sr at $z{\sim}0.57$ at $95\%$ confidence. Through both FIRAS' unique capability for tomographic measurement and the depth of BOSS data at these redshifts, bounds on [C{\sc\,II}]\ developed here are competitive with those achieved in the recent analysis of Planck data \citep{yang2019evidence, pullen2018search}. We rule out a swath of collisional excitation models consistent with Planck${\times}$QSO, but note that those models were not designed to predict low redshift [C{\sc\,II}]. [C{\sc\,II}]\ emission is expected to correlate with the star formation rate, which appears to have peaked at $z \sim 2 - 3$ and then declined at later redshifts \citep{Madau2014}. Our constraints are consistent with all models that include this expected correlation of [C{\sc\,II}]\ emission with star formation rate. If the high [C{\sc\,II}]\ intensity measured from Planck${\times}$QSO\ is accurate, then [C{\sc\,II}]\ models with realistic redshift evolution tied to the star formation rate predict $b_{\rm [C{\sc\,II}]}I_{\rm [C{\sc\,II}]} \approx 0.1$\ MJy/sr at $z\sim0.5$. This expected value is nearly within reach of the FIRAS data. 
Measurements described here are limited by FIRAS thermal noise and angular resolution rather than the depth of the galaxy redshift survey, so substantial further progress will require new intensity mapping measurements. We show that the proposed PIXIE mission could detect even pessimistic models for [C{\sc\,II}]\ emission. The SHT approach developed here also applies well to wide-field surveys such as HIRAX, CHIME, TianLai, HERA, and SPHEREx, both in auto-power and cross-power. The SHT is broadly well-matched to intensity mapping analysis through its ability to capture all Gaussian information directly in observed map space and to easily account for survey effects such as the curved sky and inhomogeneous noise. \section*{Acknowledgements} \label{sec:Acknowledgements} We would like to thank Dale Fixsen, Nathan Miller, and Nils Odegard for their useful input regarding the FIRAS noise and beam model. Some of the results in this paper have been derived using the healpy and HEALPix packages. PCB was supported by the James Arthur Postdoctoral Fellowship. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard \& Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \section*{Data Availability} The FIRAS data underlying these results are available on NASA's LAMBDA archive at \url{https://lambda.gsfc.nasa.gov/product/cobe/firas_prod_table.cfm}. The BOSS data underlying these results are on the BOSS data release 12 archive at \url{https://www.sdss.org/dr12/}.
{ "timestamp": "2022-02-02T02:10:06", "yymm": "2202", "arxiv_id": "2202.00203", "language": "en", "url": "https://arxiv.org/abs/2202.00203" }
\section{Introduction} Electronic vote, or e-vote, is a voting process in which ballot casting and counting are computer-aided. Since the late 1990s and early 2000s, e-vote has received increasing interest and has been widely applied in various decision-making situations. Many voting protocols based on classical cryptography have been developed and successfully applied in the last two decades \cite{Neff01,Chaum04}. However, the security of protocols based on classical cryptography rests on the unproven hardness of certain computational problems, such as the factoring of large numbers. Research in quantum computation shows that quantum computers are able to factor large numbers in a short time, which means that classical protocols based on such problems are insecure. To react to the risk posed by forthcoming quantum computers, a number of quantum voting protocols have been developed in the last decade \cite{Hillery06,Vaccaro07,Li08,Horoshko11,Li12,Jiang12,Tian16,Wang16,Rad17,Thapliyal7,Sun19vote,LiJZL21}. In these works, the ballots are still classical but they are secured by quantum methods. We call this type of vote the quantum secured vote. While all quantum secured voting protocols have focused on the security problems of voting from the cryptographic perspective, Bao and Halpern \cite{Bao17} and Sun et al. \cite{SunHSG21} studied quantum vote from a social choice theoretic perspective. They designed voting rules in which ballots are in quantum states and the result of voting is calculated by using quantum operators. We call this type of vote quantum computed vote. An interesting advantage of quantum computed vote is that the quantum analogue of Arrow's Impossibility Theorem \cite{Arrow51} fails to hold. Arrow's Impossibility Theorem is one of the most influential results in social choice theory. According to the theorem, every voting rule satisfying unanimity and independence of irrelevant alternatives must be a dictatorship, which implies that a fair and democratic voting rule cannot exist. The work of Bao and Halpern \cite{Bao17} and Sun et al. \cite{SunHSG21} disproved Arrow's theorem in the quantum setting. It therefore provides a theoretical demonstration of the advantage of quantum computed vote: quantum vote is better than classical vote in the sense that it enables the existence of fair and democratic voting rules. In addition to fairness and democracy, another advantage of quantum computed vote is that it has better expressive power than classical vote. Classical ballots can only be in a definite state like $0$ or $1$, while quantum ballots can be in a superposition of $|0\rangle$ and $|1\rangle$ or a mixed state of $|0\rangle\langle 0 | $ and $|1 \rangle\langle 1 | $. In real life, voters usually have a mixed preference on the proposal under vote, such as "60\% agree and 40\% disagree". When voters are only allowed to use 0 or 1 to express their preference, it can happen that the result of voting does not truly reveal the aggregation of the preferences of the voters. For example, suppose the preferences of Alice, Bob and Charlie on a proposal are 0.6 (which means "60\% agree and 40\% disagree", or that Alice will vote for ``agree'' with probability 0.6), 0.6 and 0 respectively. Then they will cast their ballots into the classical states ``agree", ``agree" and ``disagree", and the result of voting is ``agree" according to classical majority vote. 
But intuitively, the overall probability of agree, which is the probability that the majority of the voters vote for ``agree", should be $0.6 \times 0.6= 0.36$. This is because Charlie will never vote for ``agree". In order for the majority of the voters to vote for ``agree", both Alice and Bob must vote for ``agree", which happens with probability $0.6 \times 0.6= 0.36$. Therefore, the result of classical vote does not truly reveal the aggregation of the preferences of the voters. This inconsistency is caused by the limited expressive power of classical ballots. On the other hand, a mixed preference like "60\% agree and 40\% disagree" can be described by the quantum state $\sqrt{0.6}|1\rangle+\sqrt{0.4}|0\rangle$ or $ 0.4 |0\rangle\langle 0 | + 0.6 |1 \rangle\langle 1 | $. Moreover, quantum ballots can also express entangled preferences of voters. For example, Alice and Bob together can cast their ballots into the state $\frac{|00\rangle + |11 \rangle }{\sqrt{2}}$, which has no analogue in classical voting. Since the main advantages of quantum computed vote are neither the speed of computing nor the security of communication, we believe quantum computed vote opens a new battlefield in the second quantum revolution. From a practical perspective, the quantum voting rules proposed by Bao and Halpern \cite{Bao17} and Sun et al. \cite{SunHSG21} to disprove Arrow's Impossibility Theorem are too complicated to be realized with current technology. The number of qubits that need to be manipulated in those voting rules grows exponentially with the number of voters. Therefore, simpler voting rules are needed. In this paper, we propose two quantum voting rules that have better scalability: quantum logical veto (QLV) and quantum logical nomination (QLN). The number of qubits needed in the two rules is a constant, namely 3. The only quantum operation used in QLV/QLN is quantum logical conjunction/disjunction. Both of these operators are relatively simple and have been studied in depth in the literature of quantum computational logic \cite{Gudder03,Cattaneo04,CattaneoCGL04,LeddaKPG06}. Moreover, various voting rules can be constructed by combining QLV and QLN without loss of scalability. The structure of this paper is the following. We introduce elements of background knowledge in Section \ref{Preliminaries}. Then in Section \ref{Quantum voting rules} we introduce our voting rules in detail. We conclude this paper with plans for future work in Section \ref{Conclusion and future work}. \section{Preliminaries}\label{Preliminaries} Given a Hilbert space $\mathcal{H}=\mathbb{C}^2$, we denote the set of all density operators on $\mathcal{H}$ by $D(\mathcal{H})$. Our quantum voting rules use two quantum logical operators, quantum AND and quantum OR, for ballot aggregation. The construction of the quantum AND is based on the quantum Toffoli gate \cite{HolikSFGP17}. \begin{Definition}[Quantum Toffoli gate] The quantum Toffoli gate is a unitary operator on $\mathbb{C}^{2^3}$: $$T\ket{x_1, x_2, x_3}=\ket{x_1, x_2, x_1 x_2 \oplus x_3}$$ where $x_i \in \{0,1\}$. \end{Definition} \begin{Definition}[Quantum AND operator] For $\rho,\sigma \in D(\mathbb{C}^2)$, $$AND(\rho \otimes \sigma) = \Tr^{1,2}(T(\rho \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger}),$$ where $\Tr^{1,2}$ is the partial trace over the first and second qubits. 
\end{Definition} \noindent The quantum AND operator is naturally generalized to multiple qubits: $AND(\rho_1 \otimes \rho_2 \ldots \otimes \rho_n):= AND(\ldots AND (AND(\rho_1 \otimes \rho_2 ) \otimes \rho_3) \otimes \ldots \otimes \rho_n) $. Just like in quantum computational logic \cite{Gudder03,Cattaneo04,CattaneoCGL04,LeddaKPG06}, we define the quantum NOT operator by using the Pauli X gate. \begin{Definition}[Quantum NOT operator] For $\rho \in D(\mathbb{C}^2)$, $NOT(\rho)=X \rho X^{\dagger}$, where $X$ is the Pauli X operator on a single qubit: $X=\begin{bmatrix} 0 & 1\\1 & 0 \end{bmatrix}$. \end{Definition} Now we define the quantum OR operator based on quantum AND and quantum NOT. \begin{Definition}[Quantum OR operator] For $\rho,\sigma \in D(\mathbb{C}^2)$, $$OR(\rho \otimes \sigma) = NOT(AND(NOT(\rho) \otimes NOT(\sigma)))$$ \end{Definition} The quantum OR operator is naturally generalized to multiple qubits: $OR(\rho_1 \otimes \rho_2 \ldots \otimes \rho_n):= OR(\ldots OR(OR(\rho_1 \otimes \rho_2 ) \otimes \rho_3 ) \otimes \ldots \otimes \rho_n)$. \section{Quantum voting rules}\label{Quantum voting rules} We design two quantum voting rules: QLV and QLN. In both of them, we assume there is one proposal to be voted on, $m$ voters $\{v_1,\ldots, v_m\}$ and $n$ quantum voting machines $\{M_1,\ldots, M_n\}$. Every voter's ballot is represented by a density operator of a single qubit. A ballot in state $|0\rangle \langle 0|$ represents ``disagree" and one in state $|1\rangle \langle 1|$ represents ``agree". Every quantum voting machine is a small-scale quantum information processor. In QLV (resp. QLN), we assume that the quantum voting machine is able to execute the quantum AND (resp. OR) operator. \subsection{Quantum veto} The one-vote veto is a special type of vote, in which the proposal is rejected as long as at least one voter votes for ``disagree". It has been widely used by many political and economic organizations, among which the most famous is the UN Security Council's permanent member states group. There is some work on quantum-secured veto \cite{WuSWCHDX21,WangLYSZ21,mishra21}, in which the ballots are still classical, but they are encrypted by methods of quantum cryptography in order to ensure certain security properties. To the best of our knowledge, quantum veto in which ballots are in quantum states has never been studied before. Our QLV is performed in the following steps: \begin{enumerate} \item Every voter $v_i$ sends her/his ballot $\rho_i \in D(\mathbb{C}^2)$ to every quantum voting machine. \item Every quantum voting machine $M_j$ calculates $AND(\rho_1 \otimes \dots \otimes \rho_m) = \rho^{j}$. \item Every quantum voting machine $M_j$ measures $\rho^{j}$ by the projector $P_1 = |1\rangle \langle 1 |$. It records 1 if the result of the measurement is ``yes". It records 0 if the result of the measurement is ``no". \item Every quantum voting machine sends its record to every other quantum voting machine. \item Every quantum voting machine reads all the records it has received and outputs ``Agree'' if at least half of the records are 1; otherwise it outputs ``Disagree''. \end{enumerate} In order to show that QLV indeed satisfies some desirable properties of veto-like voting, we first define the winning probability of a ballot as follows. \begin{Definition}[winning probability] For $\rho \in D(\mathbb{C}^2)$, the winning probability of $\rho$ is $\mathsf{WP}(\rho):= \Tr(P_1 \rho )$. 
\end{Definition} \begin{Lemma} For all $\rho,\sigma \in D(\mathbb{C}^2)$, $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(\rho \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger})= \Tr(P_1 \rho) \cdot \Tr(P_1 \sigma) $. \end{Lemma} \begin{proof} We first consider the cases where $\rho,\sigma$ range over the computational basis. Then we have the following cases: \begin{enumerate} \item $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(|0\rangle \langle 0| \otimes |0\rangle \langle 0| \otimes |0\rangle \langle 0| )T^{\dagger})= (\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (|0\rangle \langle 0| \otimes |0\rangle \langle 0| \otimes |0\rangle \langle 0| )= \Tr(|0\rangle \langle 0|) \cdot \Tr(|0\rangle \langle 0|) \cdot \Tr (P_1|0\rangle \langle 0|)=1\cdot 1\cdot 0=0= \Tr(P_1 |0\rangle \langle 0|) \cdot \Tr(P_1 |0\rangle \langle 0|)$. \item $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(|0\rangle \langle 0| \otimes |1\rangle \langle 1| \otimes |0\rangle \langle 0| )T^{\dagger})= (\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (|0\rangle \langle 0| \otimes |1\rangle \langle 1| \otimes |0\rangle \langle 0| )= \Tr(|0\rangle \langle 0|) \cdot \Tr(|1\rangle \langle 1|) \cdot \Tr (P_1|0\rangle \langle 0|)=1\cdot 1\cdot 0=0= \Tr(P_1 |0\rangle \langle 0|) \cdot \Tr(P_1 |1\rangle \langle 1|)$. \item $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(|1\rangle \langle 1| \otimes |0\rangle \langle 0| \otimes |0\rangle \langle 0| )T^{\dagger})= (\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (|1\rangle \langle 1| \otimes |0\rangle \langle 0| \otimes |0\rangle \langle 0| )= \Tr(|1\rangle \langle 1|) \cdot \Tr(|0\rangle \langle 0|) \cdot \Tr (P_1|0\rangle \langle 0|)=1\cdot 1\cdot 0=0= \Tr(P_1 |1\rangle \langle 1|) \cdot \Tr(P_1 |0\rangle \langle 0|)$. \item $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(|1\rangle \langle 1| \otimes |1\rangle \langle 1| \otimes |0\rangle \langle 0| )T^{\dagger})= (\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (|1\rangle \langle 1| \otimes |1\rangle \langle 1| \otimes |1\rangle \langle 1| )= \Tr(|1\rangle \langle 1|) \cdot \Tr(|1\rangle \langle 1|) \cdot \Tr (P_1|1\rangle \langle 1|)=1\cdot 1\cdot 1=1= \Tr(P_1 |1\rangle \langle 1|) \cdot \Tr(P_1 |1\rangle \langle 1|)$. \end{enumerate} We further note that for all $a,b \in \{0,1\}$ with $a\neq b$, it holds that $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(|a\rangle \langle b| \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger}) = 0 $. This, plus the fact that $T$ is a linear operator, implies that $(\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3) (T(\rho \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger})= \Tr(P_1 \rho) \cdot \Tr(P_1 \sigma) $ for all $\rho,\sigma \in D(\mathbb{C}^2)$. \end{proof} \begin{Lemma} For $\rho,\sigma \in D(\mathbb{C}^2)$, $\mathsf{WP}(AND(\rho \otimes \sigma) ) = \mathsf{WP}(\rho) \cdot \mathsf{WP}(\sigma)$. \end{Lemma} \begin{proof} $\mathsf{WP}(AND(\rho \otimes \sigma) ) = \Tr ( P_1 AND(\rho \otimes \sigma) ) = \Tr ( P_1 \Tr^{1,2}(T(\rho \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger})) = \Tr ( P_1 (\Tr^1 \otimes \Tr^2 \otimes I^3 )(T(\rho \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger})) = (\Tr^1 \otimes \Tr^2 \otimes \Tr^3 P_1^3 )(T(\rho \otimes \sigma \otimes |0\rangle \langle 0| )T^{\dagger}) = \Tr(P_1 \rho) \cdot \Tr(P_1 \sigma) = \mathsf{WP}(\rho) \cdot \mathsf{WP}(\sigma)$. \end{proof} The following theorem states that QLV is indeed a veto-like voting rule. \begin{Theorem} For every quantum voting machine, it records 0 with probability 1 iff at least one voter's ballot is in state $|0\rangle \langle 0|$. 
\end{Theorem} \begin{proof} A quantum voting machine records 0 with probability 1 iff it records 1 with probability 0 iff $\mathsf{WP}(AND(\rho_1 \otimes \dots \otimes \rho_m) ) = 0$ iff $\mathsf{WP}(\rho_1 ) \cdot \ldots \cdot \mathsf{WP}(\rho_m )= 0$ iff $\mathsf{WP}(\rho_i )= 0$ for some voter $v_i$. \end{proof} \begin{Remark} After Step 4 in QLV, every quantum voting machine obtains the records of all quantum voting machines. Those records form a collection of 0s and 1s, distributed on $\{0,1\}$ according to the winning probability of $AND(\rho_1 \otimes \dots \otimes \rho_m)$. Every quantum voting machine holds the same collection. \end{Remark} \subsection{Quantum nomination} Quantum nomination is dual to quantum veto: the proposal is agreed as long as at least one voter votes for ``agree". Intuitively, this type of voting can be understood as follows: a candidate is nominated as long as there is at least one voter who nominates her/him. Classical nomination has been widely used in many political and economic elections. Nomination-like vote has also been used in some TV programs. For example, in \textit{The Voice of China}, a Chinese reality television singing competition, a contestant gets elected in the blind audition phase as long as there is at least one coach who votes for her/him. To the best of our knowledge, quantum nomination in which ballots are in quantum states has never been studied before. Our QLN operates in the following steps: \begin{enumerate} \item Every voter $v_i$ sends her/his ballot $\rho_i$ to every quantum voting machine. \item Every quantum voting machine $M_j$ calculates $OR(\rho_1 \otimes \dots \otimes \rho_m) = \rho^{j}$. \item Every quantum voting machine measures $\rho^{j}$ using the projector $P_1 = |1\rangle \langle 1 |$. It records 1 if the result of the measurement is ``yes". It records 0 if the result of the measurement is ``no". \item Every quantum voting machine sends its record to every other voting machine. \item Every quantum voting machine reads all the records it has received and outputs ``Agree'' if at least half of the records are 1. Otherwise it outputs ``Disagree''. \end{enumerate} The following lemmas and theorem demonstrate that quantum nomination is indeed a voting rule for nomination-like vote. \begin{Lemma} For $\rho \in D(\mathbb{C}^2)$, $\mathsf{WP}(NOT(\rho) ) = 1- \mathsf{WP}(\rho) $. \end{Lemma} \begin{proof} $\mathsf{WP}(NOT(\rho) ) = \mathsf{WP}(X \rho X^{\dagger} )= \mathsf{WP}(X \rho X ) = \Tr (P_1 X \rho X ) = \Tr (XP_1 X \rho )$. By simple calculation we have $P_0:= |0\rangle \langle 0| = XP_1 X $ and $P_0+P_1 =I$. Therefore, $\Tr (XP_1 X \rho )= \Tr (P_0 \rho ) = \Tr ( ( I-P_1 ) \rho ) = \Tr ( \rho - P_1 \rho )= \Tr(\rho) - \Tr(P_1 \rho) = 1- \mathsf{WP}(\rho)$. \end{proof} \begin{Lemma} For $\rho,\sigma \in D(\mathbb{C}^2)$, $\mathsf{WP}(OR(\rho \otimes \sigma) ) = \mathsf{WP}(\rho) + \mathsf{WP}(\sigma) - \mathsf{WP}(\rho) \cdot \mathsf{WP}(\sigma)$. \end{Lemma} \begin{proof} $\mathsf{WP}(OR(\rho \otimes \sigma) ) = \mathsf{WP}(NOT(AND(NOT(\rho) \otimes NOT(\sigma))) ) = 1- \mathsf{WP}(AND(NOT(\rho) \otimes NOT(\sigma)) )= 1- \mathsf{WP}(NOT(\rho)) \cdot \mathsf{WP}(NOT(\sigma))= 1- (1- \mathsf{WP}(\rho))\cdot (1- \mathsf{WP}(\sigma) )= 1- (1- \mathsf{WP}(\rho) -\mathsf{WP}(\sigma ) + \mathsf{WP}(\rho) \cdot \mathsf{WP}(\sigma ) ) =\mathsf{WP}(\rho) + \mathsf{WP}(\sigma) - \mathsf{WP}(\rho) \cdot \mathsf{WP}(\sigma)$. 
\end{proof} \begin{Theorem} For every quantum voting machine, it records 1 with probability 1 iff at least one voter's ballot is in state $|1\rangle \langle 1|$. \end{Theorem} \begin{proof} We only prove the case of two voters; the case of more voters generalizes straightforwardly.\\ $(\Leftarrow)$ $\mathsf{WP}(OR(|1\rangle \langle 1| \otimes \sigma) )= \mathsf{WP}(|1\rangle \langle 1|) + \mathsf{WP}(\sigma) - \mathsf{WP}(|1\rangle \langle 1|) \cdot \mathsf{WP}(\sigma)=1+ \mathsf{WP}(\sigma) -\mathsf{WP}(\sigma)=1$. The case in which $\sigma = |1\rangle \langle 1|$ is similar.\\ $(\Rightarrow )$ Assume that neither $\rho$ nor $\sigma$ is the state $ |1\rangle \langle 1|$. Then $\mathsf{WP} (\sigma )<1$ and $\mathsf{WP}(\rho)<1$, and so $\mathsf{WP}(OR(\rho \otimes \sigma) ) = \mathsf{WP}(\rho) + \mathsf{WP}(\sigma) - \mathsf{WP}(\rho) \cdot \mathsf{WP}(\sigma) = \mathsf{WP}(\rho) (1- \mathsf{WP}(\sigma)) + \mathsf{WP}(\sigma) < 1- \mathsf{WP}(\sigma) + \mathsf{WP}(\sigma) =1$. \end{proof} \subsection{Extension and Application} In this subsection we use AND and OR to build other voting rules and study some mathematical properties of quantum computed vote. \subsubsection{Logical formulas and voting rules} The essential feature of QLV and QLN is determined by the logical operators they use. It turns out that different quantum voting rules can be defined by different combinations of logical operators. We illustrate some of them in the following examples. \begin{Example}[role-weighted vote] Suppose $v_1$ is a professor of quantum information, and $v_2$ and $v_3$ are two associate professors of quantum information. Then the following formula determines a voting rule based on the voters' roles: \begin{center} $OR(\rho_1 \otimes (AND(\rho_2 \otimes \rho_3)))$. \end{center} \noindent According to the above formula, as long as the professor votes for ``agree'', the proposal will be agreed. Otherwise, both associate professors need to vote for ``agree'' in order for the proposal to be agreed. \end{Example} \begin{Example}[majority vote] Majority vote is probably the most popular voting rule in our society. The quantum majority vote for three voters can be determined by the following formula: \begin{center} $OR((AND(\rho_1 \otimes \rho_2)) \otimes (AND(\rho_2 \otimes \rho_3)) \otimes (AND(\rho_1 \otimes \rho_3)) )$. \end{center} \noindent Indeed, if at least two voters vote for ``agree'', then the proposal will be agreed according to the above formula. On the other hand, if the preferences of the voters are 0.6, 0.6 and 0 respectively, then they can set $\rho_1 = \rho_2 = 0.4 |0\rangle\langle 0 | + 0.6 |1 \rangle\langle 1 | $ and $\rho_3= |0\rangle\langle 0 | $. Then $\mathsf{WP}(OR((AND(\rho_1 \otimes \rho_2)) \otimes (AND(\rho_2 \otimes \rho_3)) \otimes (AND(\rho_1 \otimes \rho_3)))) = 0.36$, as verified numerically below. \end{Example} \subsubsection{Embedding probabilistic ballot into quantum ballot} Let $r\in [0,1]$, $| \theta_r \rangle = \sqrt{1-r} |0\rangle +\sqrt{r} |1\rangle$ and $\Theta_r:= | \theta_r \rangle \langle \theta_r | $. Then $\mathsf{WP}(\Theta_r) = r$ and we call the quantum ballot $\Theta_r$ a canonical representation of the probabilistic ballot $r$. Therefore, if a voter's preference is $r$, then she/he can set her/his ballot to the pure state $\Theta_r$ to represent that preference. In this way every probabilistic ballot $r$ can be represented by a quantum ballot $\Theta_r$. 
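The definitions above are small enough to check numerically. The following NumPy sketch (ours, purely for illustration; it is not part of the protocol) builds the Toffoli-based AND, the Pauli-X NOT, and the derived OR, and reproduces both $\mathsf{WP}(\Theta_r)=r$ and the value $0.36$ from the majority-vote example:
\begin{verbatim}
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
P1 = ket1 @ ket1.T                         # projector |1><1|

# Toffoli gate: flips the target (third) qubit iff both controls are 1,
# i.e. it swaps the basis states |110> and |111>.
T = np.eye(8)
T[[6, 7]] = T[[7, 6]]

def AND(rho, sigma):
    # AND(rho (x) sigma) = Tr^{1,2}( T (rho (x) sigma (x) |0><0|) T^dag )
    state = np.kron(np.kron(rho, sigma), ket0 @ ket0.T)
    out = T @ state @ T.conj().T
    return np.einsum('abcabd->cd', out.reshape(2, 2, 2, 2, 2, 2))

X = np.array([[0.0, 1.0], [1.0, 0.0]])     # Pauli X
def NOT(rho):
    return X @ rho @ X.conj().T

def OR(rho, sigma):
    return NOT(AND(NOT(rho), NOT(sigma)))

def WP(rho):                               # winning probability Tr(P1 rho)
    return float(np.real(np.trace(P1 @ rho)))

def Theta(r):                              # canonical ballot for preference r
    psi = np.sqrt(1 - r) * ket0 + np.sqrt(r) * ket1
    return psi @ psi.T

rho1 = rho2 = 0.4 * (ket0 @ ket0.T) + 0.6 * (ket1 @ ket1.T)
rho3 = ket0 @ ket0.T
print(WP(Theta(0.6)))                      # -> 0.6, i.e. WP(Theta_r) = r
print(WP(AND(rho1, rho2)))                 # -> 0.36 = WP(rho1) * WP(rho2)
maj = OR(OR(AND(rho1, rho2), AND(rho2, rho3)), AND(rho1, rho3))
print(WP(maj))                             # -> 0.36 (majority-vote example)
\end{verbatim}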
Nevertheless, not all quantum ballots can be represented by probabilistic ballots. For example, ballots in the entangled state $\frac{|00\rangle + |11 \rangle }{\sqrt{2}}$ cannot be represented by probabilistic ballots. We demonstrate this fact by the following observations. \begin{Observation} If two quantum ballots in state $\frac{|00\rangle + |11 \rangle }{\sqrt{2}}$ are submitted to QLV, then the proposal will be agreed with probability $\frac{1}{2}$. The same probability appears when they are submitted to QLN. \end{Observation} \begin{Observation} There are no probabilistic ballots $x,y \in [0,1]$ such that when they are submitted to veto/nomination, the proposal will be agreed with probability $\frac{1}{2}$. \end{Observation} \begin{proof} Suppose $x,y \in [0,1]$ are two probabilistic ballots that produce probability $\frac{1}{2}$ in both veto and nomination. Then $xy =\frac{1}{2}$ and $x +y - xy = \frac{1}{2}$. Hence $x+y=1$, and so $x(1-x)=\frac{1}{2}$, i.e., $x^2 -x +\frac{1}{2}=0$. But no real $x$ satisfies $x^2 -x +\frac{1}{2}=0$. \end{proof} In fact, two quantum ballots in the pure states $|\psi_1\rangle = \frac{1+i}{2} |0\rangle + \frac{1-i}{2} |1\rangle$ and $|\psi_2\rangle = \frac{1-i}{2} |0\rangle + \frac{1+i}{2} |1\rangle$ also produce the same probability in QLV and QLN as the quantum ballots in state $\frac{|00\rangle + |11 \rangle }{\sqrt{2}}$. Therefore, there even exist non-entangled quantum ballots that cannot be represented by probabilistic ballots. \section{Conclusion and future work}\label{Conclusion and future work} We have designed two rules of binary quantum computed vote: QLV and QLN. In both of them, ballots are cast as quantum states. The conjunction and disjunction from quantum computational logic are used to define quantum veto and quantum nomination, respectively. Compared to other rules of quantum computed vote, QLV and QLN have advantages in scalability. Both of them can be physically realized with current technology, and the difficulty of physical realization does not grow with the number of voters. They can also be combined to define other interesting and useful quantum voting rules without loss of scalability. In the future, we are interested in the physical realization of quantum veto and nomination. For example, an ion trap quantum computer is a good candidate, because the quantum Toffoli gate was already realized with trapped ions in 2009 \cite{Monz09}. We also plan to study quantum veto and nomination in situations where some quantum voting machines suffer from faulty behaviour such as crash failure or Byzantine failure. In these situations we will use quantum blockchain \cite{Sun19blockchain,Sun19vote} as a platform to execute quantum veto and nomination. \section*{Funding} The project is funded by the Minister of Education and Science within the program under the name ``Regional Initiative of Excellence'' in 2019-2022, project number: 028/RID/2018/19, to the amount: 11,742,500 PLN, and by the Polish Agency for Enterprise Development in 2021-2022, project number POPW.01.01.02-06-0031/21.
{ "timestamp": "2022-02-02T02:07:33", "yymm": "2202", "arxiv_id": "2202.00147", "language": "en", "url": "https://arxiv.org/abs/2202.00147" }
\section{Acknowledgements} This project is funded by the Ministry of Science and Technology of Taiwan (MOST 109-2634-F-007-016) and supported by a NOVATEK Fellowship. \section{Conclusion} \label{sec:conclusion} We propose a framework that takes only a few instances of an articulated object from different viewpoints as references and then infers the corresponding deformable neural radiance field to predict the image and part segmentation for a specified camera pose. With the trained framework, the articulated pose of an object can be estimated by inversely optimizing the deformation condition. In the experiments, we evaluate the framework on both synthetic objects collected from SAPIEN and our manually collected real-world data. In all cases, our method shows realistic deformation results and accurate articulated pose estimation. \section{Experiments} \label{sec:experiment} We evaluate CLA-NeRF on three different tasks: view synthesis, part segmentation, and articulated pose estimation. \subsection{Dataset} \paragraph{Synthetic data} We consider the ``laptop'', ``scissors'', ``eyeglasses'', ``stapler'' and ``pliers'' classes of SAPIEN \cite{CVPR20_SAPIEN,arxiv15_shapenet,CVPR19_partnet} with 46, 54, 65, 24 and 23 instances respectively. We split these instances into two sets: training and held-out. We train on 200 observations of each training instance at a resolution of $200 \times 200$ pixels. Camera poses are randomly generated on a sphere with the object at the origin. Transparencies and specularities are disabled. We further render the held-out instances to construct a dataset for performance evaluation. \paragraph{Real-world data} To further test our method, we manually collect real-world images and the corresponding camera poses for the “laptop” and “scissors” classes with articulated poses at $[0^{\circ}, 30^{\circ}, 60^{\circ}, 90^{\circ}]$. \subsection{View Synthesis and Part Segmentation} \label{sec:view} \begin{table}[h] \setlength{\tabcolsep}{3.2pt} \caption{Quantitative results for novel view synthesis and part segmentation of our framework, evaluated on the dataset generated from SAPIEN\cite{CVPR20_SAPIEN}.} \centering \begin{tabular}{ |c|c c c c| c c| } \hline \multicolumn{1}{|c|}{} & \multicolumn{4}{|c|}{Novel View Synthesis} & \multicolumn{2}{|c|}{Segmentation} \\ \hline & MSE$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & Pixel Acc$\uparrow$ & mIoU$\uparrow$ \\ \thickhline Laptop & 0.0811 & 23.89 & 0.94 & 0.1323 & 0.981 & 0.971 \\ Scissors & 0.0722 & 24.01 & 0.92 & 0.1456 & 0.989 & 0.969 \\ Eyeglasses & 0.0991 & 23.72 & 0.89 & 0.1755 & 0.973 & 0.941 \\ Stapler & 0.0771 & 26.91 & 0.96 & 0.1022 & 0.969 & 0.940 \\ Pliers & 0.0413 & 25.90 & 0.96 & 0.0711 & 0.971 & 0.940 \\ \hline \end{tabular} \label{table:simu_quantitative} \vspace{-0.5\baselineskip} \end{table} \begin{table}[h] \caption{Quantitative results for our real-world data.} \centering \begin{tabular}{ |c|c c c c| } \hline category & MSE$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\ \hline Laptop & 0.1021 & 22.12 & 0.93 & 0.1600 \\ Scissors & 0.1281 & 23.39 & 0.92 & 0.1492 \\ \hline \end{tabular} \label{table:real_quantitive} \vspace{-0.8\baselineskip} \end{table} We show the qualitative results in Fig.~\ref{fig:simu_qualitative} and quantitative results in Table~\ref{table:simu_quantitative} for the synthetic data. 
We find that CLA-NeRF successfully renders the held-out objects at different articulated poses. For the real data, we report the quantitative results in Table \ref{table:real_quantitive} and qualitative results in Fig. \ref{fig:real_qualitative}. The metrics are PSNR and SSIM (higher is better) and MSE and LPIPS \cite{zhang2018perceptual} (lower is better). We find that the network trained on synthetic data effectively infers the shape and texture of the real objects, suggesting our model can transfer beyond the synthetic domain. \subsection{Articulated Pose Estimation} \label{sec:exp_ape} \begin{table*}[ht] \caption{Quantitative results for articulated pose estimation and joint localization. We show the pose error $\mathbf{a}_{\text{error}}$, angle error $\mathbf{u}_{\text{error}}$, and distance error $\mathbf{v}_{\text{error}}$. For the real-world results, please see Sec. \ref{sec:exp_ape}.} \centering \begin{tabular}{c|c c c | c c c | c c c | c c c} \hline \multicolumn{1}{c|}{Dataset} & \multicolumn{9}{|c|}{Synthetic} & \multicolumn{3}{|c}{Real-World} \\ \hline \multicolumn{1}{c|}{Approach} & \multicolumn{3}{|c}{Ours} & \multicolumn{3}{|c}{ScrewNet~\cite{arxiv20_screwnet}} & \multicolumn{3}{|c}{Abbatematteo et al. \cite{corl20_learn}} & \multicolumn{3}{|c}{Ours} \\ \hline & $\mathbf{a}_{\text{error}}$ & $\mathbf{u}_{\text{error}}$ & $\mathbf{v}_{\text{error}}$ & $\mathbf{a}_{\text{error}}$ & $\mathbf{u}_{\text{error}}$ & $\mathbf{v}_{\text{error}}$ & $\mathbf{a}_{\text{error}}$ & $\mathbf{u}_{\text{error}}$ & $\mathbf{v}_{\text{error}}$ & $\mathbf{a}_{\text{error}}$ (sim2real) & $\mathbf{a}_{\text{error}}$ (generalize) & $\mathbf{a}_{\text{error}}$ (overfit) \\ \thickhline Laptop & 0.138 & 0.010 & 0.091 & 0.129 & 0.019 & 0.062 & 0.137 & 0.012 & 0.041 & 0.179 & 0.174 & 0.179 \\ Scissors & 0.130 & 0.016 & 0.120 & 0.116 & 0.149 & 0.136 & 0.131 & 0.037 & 0.041 & 0.179 & 0.170 & 0.170 \\ Eyeglasses & 0.151 & 0.109 & 0.071 & 0.141 & 0.140 & 0.136 & 0.149 & 0.108 & 0.082 & - & - & - \\ Stapler & 0.182 & 0.021 & 0.010 & 0.119 & 0.146 & 0.101 & 0.172 & 0.031 & 0.008 & - & - & - \\ Pliers & 0.171 & 0.010 & 0.010 & 0.121 & 0.132 & 0.102 & 0.183 & 0.009 & 0.009 & - & - & - \\ \hline \end{tabular} \label{table:real_articulate} \vspace{-1\baselineskip} \end{table*} \begin{table}[ht] \caption{We also evaluate our approach with pose error $\mathbf{a}_{\text{error}}$, angle error $\mathbf{u}_{\text{error}}$, and distance error $\mathbf{v}_{\text{error}}$ on the Shape2Motion validation set. Note that ANCSH~\cite{cvpr20_ANCSH} requires depth to estimate pose.} \label{table:ancsh} \centering \begin{tabular}{|c|c c c | c c c|} \hline \multicolumn{1}{|c|}{Approach} & \multicolumn{3}{|c|}{Ours} & \multicolumn{3}{|c|}{ANCSH~\cite{cvpr20_ANCSH}} \\ \hline & $\mathbf{a}_{\text{error}}$ & $\mathbf{u}_{\text{error}}$ & $\mathbf{v}_{\text{error}}$ & $\mathbf{a}_{\text{error}}$ & $\mathbf{u}_{\text{error}}$ & $\mathbf{v}_{\text{error}}$ \\ \thickhline Laptop & 0.179 & 0.011 & 0.110 & 0.169 & 0.009 & 0.017 \\ Eyeglasses & 0.169 & 0.109 & 0.091 & 0.076 & 0.039 & 0.016 \\ \hline \end{tabular} \vspace{-1\baselineskip} \end{table} The results on synthetic data and real-world data are presented in Table \ref{table:real_articulate}. Fig. \ref{fig:articulate_matrix} shows the L1 error for different articulated poses of the source and target images on the real-world dataset. We first find that the error is lowest when no deformation is required. 
Second, if the articulated poses are too different, the estimation becomes less accurate. This is because we only optimize the articulated pose with $\mathcal{L}_{\text{color}}$, and local minima occur during the optimization process. To test the limits of our method, we compare our method with ScrewNet~\cite{arxiv20_screwnet} and Abbatematteo et al. \cite{corl20_learn} on our dataset (Table \ref{table:real_articulate}). Besides, we also evaluate CLA-NeRF on the Shape2Motion dataset without fine-tuning. The results, compared against ANCSH~\cite{cvpr20_ANCSH}, are shown in Table~\ref{table:ancsh}. Despite not using depth images as inputs and not being finetuned, we find that our model performs only slightly worse than ANCSH~\cite{cvpr20_ANCSH}, ScrewNet~\cite{arxiv20_screwnet}, and \cite{corl20_learn}. This shows that the proposed representation is a promising direction for category-level articulated pose estimation. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/heatmap_final.png} \end{center} \vspace{-0.3\baselineskip} \caption{Error heatmap of articulated pose estimation.} \label{fig:articulate_matrix} \vspace{-0.7\baselineskip} \end{figure} To understand whether the articulated pose estimation can be improved, we finetune the model in two manners. First, we finetune the framework with a set of real-world objects and then test it on unseen real-world objects. Second, we directly finetune the framework on a specific object and test on the same object. These fine-tuning approaches are labeled as \textbf{generalize} and \textbf{overfit} in Table \ref{table:real_articulate}, respectively. Only minor improvement is observed. \subsection{Failure Cases} \label{sec:failure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{figures/failure_v2.png} \end{center} \vspace{-.2\baselineskip} \caption{Failure cases. (a) Incorrect keyboard appearance due to missing observations in the source images. (b) Incorrect geometry due to lack of texture.} \label{fig:failure} \vspace{-.2\baselineskip} \end{figure} Despite the promising results shown in Sec. \ref{sec:view} and Sec. \ref{sec:exp_ape}, some failure cases need to be discussed. First, since our framework only takes a few instances as conditions, if the query camera poses are highly distinct from the source camera poses, the appearance may exhibit defects. In Fig. \ref{fig:failure} (a), the predicted screen color is not the same as the ground truth. However, the geometry of the object and its part segmentation are reconstructed correctly, so the articulated pose estimation isn't affected by this issue. Moreover, although the color of the screen isn't accurate, the appearance still matches the typical appearance of a laptop screen. Second, for CAD models without texture, it is hard for our framework to correctly infer the geometry. In Fig. \ref{fig:failure} (b), we find that the shape of the screen is distorted. In addition, our joint localization is inaccurate if the source images show a closed laptop. This is because the NeRF models the two parts of the laptop as attached to each other, so the intersection points form a surface rather than a line and the joint axis can't be estimated correctly. \section{Introduction} Our living environment is full of articulated objects: objects composed of more than one rigid part (link) connected by joints allowing rotational or translational motion, such as doors, refrigerators, scissors, and laptops. 
Endowing robots with the ability to perceive and interact with these objects requires a detailed understanding of the objects' part-level poses, 3D shape, and materials. Prior works~\cite{desingh2019factored,BMVC2015_181} on estimating these properties of articulated objects often assume the object's CAD model and thus cannot generalize to objects unseen during training. To address this limitation, several recent works have explored category-level representations for articulated objects. These representations do not assume CAD models during testing and therefore can achieve intra-category generalization. For instance, ANCSH~\cite{cvpr20_ANCSH}, designed specifically for articulated object pose estimation, uses the 3D coordinates in the canonical frame as the representation, where the canonical frame is determined by the authors, who manually align the center and orientation of different CAD models. A-SDF~\cite{arxiv21_asdf}, focusing on articulated object shape reconstruction, uses a deep implicit signed distance function~\cite{cvpr19_deepsdf} as the representation and factors the latent space into shape codes and articulation angles. Although these representations have shown impressive results, both of them are limited by the requirement of access to ground truth 3D geometry during training, which is costly to scale up for articulated objects~\cite{3DIndoorObjects}. Furthermore, during testing, both works require depth images as inputs. This poses additional requirements on hardware and may not work on articulated objects that are thin or highly reflective, e.g., scissors. In this work, we seek to relax these requirements and build a category-level representation for articulated objects that requires neither 3D CAD models nor depth sensing during training or testing --- only using RGB images with camera poses and part segmentation labels for training, and RGB images alone for testing. To this end, we introduce CLA-NeRF, a Category-Level Articulated NeRF representation that supports multiple downstream tasks including novel view synthesis, part segmentation, and articulated pose estimation. Our representation is based on Neural Radiance Fields (NeRF~\cite{eccv20_nerf}), a method that has shown impressive performance on novel view synthesis of a specific scene by encoding volumetric density and color through a neural network. As NeRF typically requires a lengthy optimization process for each scene independently, we follow recent works~\cite{cvpr21_pixelnerf,wang2021ibrnet} to directly predict NeRFs from one or several RGB images in a feed-forward manner. However, simply doing so cannot capture articulated objects' part attributes (e.g., part poses and segmentation) and joint attributes (e.g., joint axis). We therefore propose to explicitly model the object articulation by predicting a part segmentation field in addition to the volumetric density and color. Joint attributes can then be inferred by performing line-fitting on the part segmentation field. In the experiments, we focus on modeling objects with revolute joints that cause 1D rotational motion (e.g., eyeglasses). We show that CLA-NeRF can render the object and its part segmentation map at unseen articulated poses by performing articulation-aware volume rendering. Additionally, it can perform category-level articulated pose estimation with RGB inputs by minimizing the residual between the rendered and observed pixels. 
We note that these tasks are not possible with existing NeRF formulations~\cite{eccv20_nerf,cvpr21_pixelnerf,wang2021ibrnet}, which explicitly model the camera poses but don't consider the object articulation. To the best of our knowledge, our work is the first to model general articulated objects with neural radiance fields. We summarize our primary contributions as follows, and more information is provided on our project website\footnote{https://weichengtseng.github.io/project\_website/icra22/index.html}: \begin{itemize} \item We propose CLA-NeRF, a differentiable representation for articulated objects that explicitly models the part and joint attributes. The proposed representation disentangles camera pose, part pose, part segmentation, and joint attributes, allowing us to independently control each property during rendering. \item We show that the proposed representation can perform category-level articulated pose estimation through analysis-by-synthesis with only RGB inputs. To the best of our knowledge, existing works for this task all require depth inputs~\cite{cvpr20_ANCSH,weng2021captra,arxiv20_screwnet,corl20_learn}. \end{itemize} \section{Method} Although NeRF has shown impressive results on modeling the appearance of static objects, its formulation only allows control over the camera poses during rendering. Therefore, it cannot render a deformable articulated object (e.g., laptop) at different articulated poses (e.g., closing vs. opening) because such an object has more than six degrees of freedom (DoF). CLA-NeRF is designed to tackle these issues. Instead of simply predicting colors $\mathbf{c}$ and densities $\sigma$ for each 3D location, we propose to additionally estimate part segmentation $\mathbf{s}$. Instead of only controlling the camera poses during rendering, our formulation allows the user to input articulated poses. Finally, instead of casting rays solely based on camera poses, we also transform camera rays based on query articulated poses, predicted part segmentation, and inferred joint attributes during volume rendering. These modifications together allow CLA-NeRF to render articulated objects at unseen articulated poses. \vspace{-0.5\baselineskip} \subsection{Category-Level Semantic NeRF} \label{sec:part_segmentation} Here we first describe how we extend NeRF to predict part segmentation. For each 3D location $\mathbf{x}$ and viewing direction $\mathbf{d}$, we add another linear layer on top of NeRF's MLP backbone to predict part segmentation: \begin{equation} (\sigma, \textbf{c}, \textbf{s}) = F_{\Theta}\Big(\gamma(\textbf{x}), \textbf{d}\Big) \end{equation} where $\mathbf{s}$ is the segmentation logit vector with $P+1$ dimensions ($P$ parts and background). With the volumetric field of predicted part segmentation, we can predict which part a pixel belongs to following the procedure we used to approximate the volume rendering of RGB: \begin{equation} \begin{split} \hat{\textbf{S}}(\mathbf{r}) = \sum_{k=1}^{K} \hat{T}_k (1- \exp(-\sigma_k (t_{k+1} - t_k))) \textbf{s}_k, \\ \text{with} \quad \hat{T}_k = \text{exp} (-\sum_{k' < k} \sigma_{k'} (t_{k'+1} - t_{k'})) \end{split} \end{equation} where $\textbf{s}_k$ is the predicted part segmentation of sampled point $\mathbf{r}(t_k)$. 
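For concreteness, this segmentation quadrature can be written in a few lines. The following NumPy sketch is ours, for illustration only (the shapes and names are not from the paper, and the toy inputs are random):
\begin{verbatim}
import numpy as np

def composite_segmentation(sigma, s, t):
    """sigma: (K,) densities; s: (K, P+1) class logits; t: (K+1,) depths."""
    delta = t[1:] - t[:-1]                  # interval lengths t_{k+1} - t_k
    alpha = 1.0 - np.exp(-sigma * delta)    # per-sample opacity
    # Accumulated transmittance T_k = exp(-sum_{k'<k} sigma_{k'} delta_{k'})
    T = np.exp(-np.concatenate(([0.0], np.cumsum(sigma * delta)[:-1])))
    return (T * alpha) @ s                  # (P+1,) rendered logits S_hat(r)

rng = np.random.default_rng(0)
K, P = 64, 3
t = np.linspace(2.0, 6.0, K + 1)            # sample depths along one ray
sigma = rng.uniform(0.0, 2.0, size=K)       # toy densities
s = rng.normal(size=(K, P + 1))             # toy segmentation logits
print(composite_segmentation(sigma, s, t).shape)   # (4,)
\end{verbatim}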
The new model can then be trained with both color loss $\mathcal{L}_{\text{color}}$ and segmentation loss $\mathcal{L}_{\text{seg}}$: \begin{equation} \mathcal{L}_{\text{color}} = \sum_{\mathbf{r} \in \mathbf{R}} \left[ || \hat{\mathbf{C}}_c(\mathbf{r}) - \mathbf{C}(\mathbf{r}) ||_2^2 + || \hat{\mathbf{C}}_f(\mathbf{r}) - \mathbf{C}(\mathbf{r}) ||_2^2 \right] \end{equation} \begin{equation} \mathcal{L}_{\text{seg}} = -\sum_{\mathbf{r} \in \mathbf{R}} \left[ \sum_{i=1}^{P+1} p^i(\mathbf{r}) \text{log}\, \hat{p}_{c}^{i}(\mathbf{r}) + p^{i}(\mathbf{r}) \text{log}\, \hat{p}_{f}^{i}(\mathbf{r}) \right] \end{equation} where $\hat{p}^i(\mathbf{r}) = \frac{\text{exp}(\hat{\mathbf{S}}^i(\mathbf{r}))}{\sum_{j=1}^{P+1}\text{exp}(\hat{\mathbf{S}}^j(\mathbf{r}))}$. Here, $\mathbf{C}(\mathbf{r})$, $\hat{\mathbf{C}}_c (\mathbf{r})$ and $\hat{\mathbf{C}}_f(\mathbf{r})$ are the ground truth color, color predicted by the coarse network, and color predicted by the fine network for ray $\mathbf{r}$, respectively. In the segmentation loss $\mathcal{L}_{\text{seg}}$, $p^i$ is the ground truth probability of part $i$, while $\hat{p}_c^i$ and $\hat{p}_f^i$ represent the probabilities predicted by the coarse and fine networks for ray $\mathbf{r}$. In summary, the color loss $\mathcal{L}_{\text{color}}$ is the L2 distance between the ground truth color and the color predicted by both coarse and fine networks, and the segmentation loss $\mathcal{L}_{\text{seg}}$ is a multi-class cross-entropy loss that encourages the rendered semantic labels to be consistent with the provided labels. A coefficient $\lambda$ is used to modulate these two losses during training: $\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{color}} + \lambda \cdot \mathcal{L}_{\text{seg}}$. We note that the current formulation still requires lengthy optimization for each articulated object and does not share knowledge between different objects. To make our method generalize to objects within the same category, we customize the framework of previous works~\cite{cvpr21_pixelnerf,wang2021ibrnet} to directly predict the proposed semantic NeRF given one or a few input images of the articulated object. For brevity, we explain the framework with a single input image $I$. First, we extract the image feature with an image encoder $E$ to form a feature map $W = E(\mathbf{I})$. Then, we project each sampled 3D point $\mathbf{x}$ to the input image plane and get the projected coordinate $\pi(\mathbf{x})$. Finally, we augment the input to NeRF $F_{\Theta}$ with the associated feature $W(\pi(\textbf{x}))$, resulting in the following formulation: \begin{equation} (\sigma, \textbf{c}, \textbf{s}) = F_{\Theta}\Big(\gamma(\textbf{x}), \textbf{d}, W(\pi(\textbf{x}))\Big) \end{equation} The model is jointly trained on a collection of articulated objects belonging to the same category. Relative camera poses between multi-view images and the corresponding part segmentation labels are used for supervision. \vspace{-0.5\baselineskip} \subsection{Joint Attributes Estimation} In this work, we consider 1D revolute joints. The joint attributes consist of the direction of the rotation axis $\mathbf{u}$ as well as a pivot point $\mathbf{v}$ on the rotation axis. Given an input image of the articulated object, we propose to infer the joint attributes from the predicted segmentation field through ray marching. 
For each pixel on the image plane, we cast a ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ starting from the camera center $\mathbf{o}$ along the direction $\mathbf{d}$ passing through that pixel. We then sample $K$ points $\{\mathbf{x}_k = \mathbf{r}(t_k)\}_{k=1}^K$ along the ray and feed them into the semantic NeRF to get their predicted density and part segmentation $\{\sigma_k, \mathbf{s}_k\}_{k=1}^K$. Since the 1D revolute joint lies at the intersection of two parts, we filter the sampled points to collect points that are close to the intersection: \begin{equation} \mathbf{X}_{\text{intersection}} = \{\mathbf{x}_k \,|\, \argmax(\mathbf{s}_k) \neq \argmax(\mathbf{s}_{k+1}) \wedge \sigma_k \geq H\} \end{equation} where $H$ is a predefined threshold to remove points with low density. After collecting $\mathbf{X}_{\text{intersection}}$ from all the pixels, we perform linear regression on these 3D points to estimate both the rotation axis $\mathbf{u}$ and the pivot point $\mathbf{v}$. \vspace{-0.3\baselineskip} \subsection{Articulation-aware Volume Rendering} After predicting the part segmentation field and joint attributes $\mathbf{J}$, we discuss a modified volume rendering procedure that allows us to perform view synthesis at unseen articulated poses. Given an input articulated pose $\mathbf{a}$ specified by the user, we construct deformation matrices $\{\mathbf{D}^i(\mathbf{a}, \mathbf{J})\}_{i=1}^{P+1}$ that describe the rigid transformation between part $i$ and the root part. During volume rendering, we deform the rays with each part's deformation matrix $\mathbf{D}^i$ and collect the outputs for all ${P+1}$ parts: \begin{equation} \{(\sigma^i, \mathbf{c}^i, \mathbf{s}^i) = F_\Theta\Big(\gamma(\mathbf{D}^i\mathbf{x}), \mathbf{d}, W(\pi(\mathbf{D}^i\mathbf{x}))\Big)\}_{i=1}^{P+1} \end{equation} To merge these outputs for articulation-aware volumetric rendering, we weight all the fields with the predicted part segmentation $\hat{p}$, where $\hat{p}^i$ indicates the estimated probability of being classified as part $i$. The predicted color $\hat{\mathbf{C}}(\mathbf{r})$ and segmentation $\hat{\mathbf{S}}(\mathbf{r})$ are therefore the weighted sums over parts: \begin{equation} \hat{\mathbf{C}}(\mathbf{r}) = \sum_{k=1}^{K} \hat{T}(t_k) \sum_{i=1}^{P+1} \hat{p}^i(t_k) (1-\exp(-\sigma^i_k(t_{k+1}-t_k)))\mathbf{c}^i(t_k) \end{equation} \begin{equation} \hat{\mathbf{S}}(\mathbf{r}) = \sum_{k=1}^{K} \hat{T}(t_k) \sum_{i=1}^{P+1} \hat{p}^i(t_k) (1-\exp(-\sigma^i_k(t_{k+1}-t_k)))\mathbf{s}^i(t_k) \end{equation} where the accumulated transmittance is \begin{equation} \hat{T}_k = \text{exp} \Big(-\sum_{k' < k} \sum_{i=1}^{P+1} \hat{p}^i(t_{k'}) \sigma^i_{k'} (t_{k'+1} - t_{k'})\Big) \end{equation} \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{figures/simple_qual.png} \vspace{-.5\baselineskip} \caption{Typical results on (a) synthetic data and (b) real-world data. The parts of the object deform consistently with the joint parameter. } \label{fig:simu_qualitative} \label{fig:real_qualitative} \vspace{-0.5\baselineskip} \end{figure*} \subsection{Articulated Pose Estimation} \label{sec:ape} Here we explain how we perform category-level articulated object pose estimation with CLA-NeRF. We assume the semantic NeRF $F_\Theta$ of an articulated object has already been predicted from source images and that both the camera intrinsics and extrinsics are known. The goal is to estimate the articulated pose $\mathbf{a}$ of a given input image $I$. 
Unlike CLA-NeRF's training procedure, which optimizes $\Theta$ using image observations and part segmentations, we instead solve the inverse problem~\cite{arxiv20_inerf} of recovering the articulated pose $\mathbf{a}$ given the weights $\Theta$ and the image $I$: \begin{equation} \hat{\mathbf{a}} = \text{argmin}_{\mathbf{a} \in \mathbf{A}}\,\mathcal{L}_{\text{color}}(\mathbf{a}\,|\,\Theta, I, \mathbf{J}) \end{equation} To solve this optimization problem, we iteratively perform gradient-based optimization to minimize the residuals between the rendered image and the observed image. \section{Related Works} \vspace{-0.3\baselineskip} \subsection{Articulated 3D Shape Representations} Meshes and rigging techniques~\cite{skinningcourse:2014} are widely used to model the shape and deformation of articulated objects. Leveraging the abundant prior knowledge of human bodies, efficient techniques~\cite{SMPL2015,Zheng2019DeepHuman,bhatnagar2019mgn,peng2021neural,kocabas2019vibe,hmrKanazawa17,zhang2020phosa,omran2018nbf,aaai21_tseng,meta-cpr} have been developed to model the deformation of a wide variety of body shapes. However, creating watertight meshes and rigs remains a labor-intensive process for general articulated objects whose part and joint attributes are less constrained. For the robotics community, it is very costly, if not impossible, to hire specially trained experts to model all sorts of articulated objects that exist in our daily life. Recently, NASA~\cite{deng2019neural} proposed to represent articulated shapes with a neural indicator function that successfully circumvents the complexity of meshes and the issue of water-tightness. A-SDF~\cite{arxiv21_asdf} uses neural networks to encode a signed distance function for articulated shape modeling. It is trained on multiple instances of the same category and learns a disentangled latent space that allows it to synthesize novel shapes at unseen articulated poses. However, both of them require ground truth 3D models for training and thus still suffer from the scalability issue. Concurrently with our work, NARF~\cite{2021narf} also proposes to explicitly consider articulation within NeRF and shows impressive results on view synthesis of human bodies. Compared to NARF, our method differs in two aspects. First, our method focuses on general articulated objects and thus does not assume known joint attributes (e.g., the root joint's pose, bone length) at test time. Instead, we infer them from the predicted segmentation field and further show results on articulated pose estimation. Second, our method uses RGB images and part segmentation labels as supervision, while NARF uses RGB images and joint attributes. We believe both works complement each other and further support the possibility that explicitly considering articulation within NeRF can lead to better generalization. \vspace{-0.3\baselineskip} \subsection{Articulated Object Pose Estimation} Most existing approaches for articulated object pose estimation require instance-level information. They either assume the articulated object's exact CAD model~\cite{desingh2019factored,BMVC2015_181} or need to generate the object's motion through deliberate interaction before inference~\cite{katz2008manipulating,katz2013interactive,martin2014online,martin2016integrated,hausman2015active,arxiv15_visual}. Both directions require the robot to learn about each object from scratch, no matter how similar the object is to those it has previously experienced.
To address this issue, recent works have proposed to predict canonicalized object coordinates~\cite{wang2019normalized} for category-level articulated object pose estimation~\cite{cvpr20_ANCSH,weng2021captra}. However, such a representation is designed specifically for articulated pose estimation and cannot perform other tasks such as shape reconstruction or view synthesis. Additionally, it requires the articulated objects' ground truth 3D geometries for training and depth images for testing. As for inferring the articulated pose from visual data, \cite{corl20_learn} proposed a mixture density network that consumes an RGB-D image to predict the probability of the joint attributes and the articulated pose. ScrewNet~\cite{arxiv20_screwnet} takes multiple depth images with different articulated poses and the same camera pose as input to predict the joint attributes and articulated pose. \cite{iros20_learn} extended \cite{arxiv11_prob} by reasoning about the applied actions along with the observed motion of the object while estimating its kinematic structure. Different from these works, we focus on building a category-level representation that only requires 2D supervision. We also demonstrate results on view synthesis in addition to articulated pose estimation. \begin{figure*}[t] \centering \includegraphics[width=0.94\textwidth]{figures/overview_final.png} \vspace{-1\baselineskip} \caption{The overview of our framework. (a) Our framework retrieves features from two instances as the condition of the NeRF model and predicts color $\mathbf{c}$, density $\sigma$ and segmentation $\mathbf{s}$. Volume rendering is applied to produce the rendered results. (b) We calculate the deformation matrix based on the articulated pose. Then, we deform the sampled rays with the deformation matrix. Finally, the image under the deformed articulation is rendered using our learned framework. (c) The articulated pose is estimated by inversely minimizing $\mathcal{L}_{\text{color}}$.} \label{fig:overview} \vspace{-1.5\baselineskip} \end{figure*} \vspace{-0.3\baselineskip} \subsection{Preliminaries: NeRF}\label{sec:background} NeRF learns to synthesize novel views associated with unseen camera poses given a collection of RGB images with known camera poses. Specifically, NeRF represents a scene as a volumetric field of density $\sigma$ and RGB color $\mathbf{c}$. The density models the shape of the scene and the color models the view-dependent appearance of occupied regions of the scene, both of which lie within a bounded 3D volume. A multilayer perceptron (MLP) parameterized by the weights $\Theta$ is used to predict the density $\sigma$ and RGB color $\mathbf{c}$ of each point from its 3D position $\mathbf{x} = (x, y, z)$ and unit-norm viewing direction $\mathbf{d}$, where $(\sigma, \mathbf{c}) \leftarrow F_{\Theta}(\gamma(\mathbf{x}), \mathbf{d})$ and $\gamma(\cdot)$ is a high-frequency positional encoding \cite{neurips17_attention}. To render a pixel, NeRF emits a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ from the camera center $\mathbf{o}$ along the direction $\mathbf{d}$ passing through that pixel on the image plane. Along the ray, $K$ points $\{\mathbf{x}_k = \mathbf{r}(t_k)\}_{k=1}^K$ are sampled for use as input to the MLP, which outputs a set of densities and colors $\{\sigma_k, \mathbf{c}_k\}_{k=1}^K$.
These values are then used to estimate the color $\hat{\mathbf{C}}(\mathbf{r})$ of that pixel following volume rendering~\cite{kajiya84} approximated with numerical quadrature~\cite{max95}: \begin{equation} \label{equ:volume_rendering} \begin{split} \hat{\textbf{C}}(\mathbf{r}) = \sum_{k=1}^{K} \hat{T}_k (1- \exp(-\sigma_k (t_{k+1} - t_k)))\, \textbf{c}_k, \\ \text{with} \quad \hat{T}_k = \exp\Big(-\sum_{k' < k} \sigma_{k'} (t_{k'+1} - t_{k'})\Big) \end{split} \end{equation} where $\hat{T}_k$ can be interpreted as the probability that the ray transmits successfully to point $\mathbf{r}(t_k)$. NeRF is then trained to minimize the photometric loss $\mathcal{L} = \sum_{\mathbf{r} \in \mathbf{R}} || \hat{\mathbf{C}}(\mathbf{r}) - \mathbf{C}(\mathbf{r})||_2^2$ over a sampled set of rays $\mathbf{r} \in \mathbf{R}$, where $\mathbf{C}(\mathbf{r})$ is the observed RGB value of the pixel corresponding to ray $\mathbf{r}$ in some image. To improve rendering efficiency, one may train two MLPs, one ``coarse'' and one ``fine'', where the coarse model serves to bias the samples that are used for the fine model. For more details, we refer readers to Mildenhall \etal~\cite{eccv20_nerf}. While NeRF originally needs to optimize the representation for every scene independently, several recent works~\cite{cvpr21_pixelnerf,wang2021ibrnet,icml21_sharf} on category-level NeRF have been proposed to directly predict a NeRF conditioned on one or a few input images.
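As an editorial illustration (not part of the original paper), the quadrature in Eq.~(\ref{equ:volume_rendering}) amounts to a short loop once per-sample densities, colors, and depths are available; the sketch below, written in Scala for concreteness, assumes $K$ samples and $K+1$ depth values:
\begin{lstlisting}
// Sketch: alpha-compositing quadrature for one ray.
// sigma(k): density at sample k; color(k): RGB at sample k;
// t: K + 1 depth values along the ray.
def renderRay(sigma: Array[Double],
              color: Array[Array[Double]],
              t: Array[Double]): Array[Double] =
  val K = sigma.length
  val out = Array(0.0, 0.0, 0.0)
  var transmittance = 1.0            // T_1 = 1: empty prefix
  for k <- 0 until K do
    val delta = t(k + 1) - t(k)
    val alpha = 1.0 - math.exp(-sigma(k) * delta)
    for ch <- 0 to 2 do
      out(ch) += transmittance * alpha * color(k)(ch)
    transmittance *= (1.0 - alpha)   // T_{k+1} = T_k (1 - alpha_k)
  out
\end{lstlisting}
Note that $\hat{T}_{k+1} = \hat{T}_k\, e^{-\sigma_k (t_{k+1}-t_k)} = \hat{T}_k (1-\alpha_k)$, so the transmittance can be updated incrementally as above instead of re-evaluating the inner sum for every $k$.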
{ "timestamp": "2022-03-07T02:06:19", "yymm": "2202", "arxiv_id": "2202.00181", "language": "en", "url": "https://arxiv.org/abs/2202.00181" }
\section{ Introduction } Evidence-based data about industry adoption is hard to come by. Subjective observations suggest that one of the barriers to industrial adoption of the Scala language is an unnecessarily high learning curve. The tradition of using embedded DSLs ({\it Hudak} \cite{10.1145/242224.242477}) instead of the base language leads to a situation where the cognitive load of relatively simple development tasks, such as querying an external resource, is higher than in mainstream languages. A programmer cannot use the control-flow constructs of the base language but must instead learn a specific DSL and use a suboptimal embedding of this DSL, usually within monadic for comprehensions. Therefore, developers proficient in Java or TypeScript cannot be immediately proficient in Scala without additional training. Can we provide a development environment that gives the programmer an experience comparable to state-of-the-art mainstream back-end programming, such as Kotlin structured concurrency, Swift async functions, or \verb|F#| computation expressions? Dotty-cps-async intends to be an element of a possible answer. It provides a way to embed monadic expressions into the base Scala language using the well-known async/await constructs that exist in nearly all mainstream programming languages. Although the main idea is not new, dotty-cps-async provides behind these well-known interfaces a set of novel features, such as support for generic monads, transformation of higher-order function applications, generation of call-chain proxies, and automatic coloring. The package is open-source and can be downloaded from the GitHub repository {\small \verb|https://github.com/rssh/dotty-cps-async|}. The paper is organized as follows: In Section~{\ref{EmbeddingGeneric}} we briefly describe the async/await interface and the process of monadification; Section~\ref{MonadsParametrization} adds monad parametrization and provides some examples of applying the async/await transformation with non-standard monads; Section~\ref{HO} describes the support for using await inside the arguments of higher-order functions; Section~\ref{AutomaticColoring} introduces an optional automatic coloring facility, illustrated by the example of non-blocking file copying in \ref{AACopyFile} on page~\pageref{AACopyFile} and its colored variant on page~\pageref{AACopyFileAC}. A short overview of related work is given in Section~\ref{RelatedWork}. \section{ Embedding generic monadic cps transform into Scala } \label{EmbeddingGeneric} Dotty-cps-async implements an interface similar to scala-async ({\it Haller} \cite{hallerScalaAsync}), based on an optimized monadic CPS transform. It is implemented as Scala macros and provides a simple generic interface with the well-known async/await signature, slightly changed to support monad parametrization. In simplified form, it is a combination of two generic pseudofunctions: \begin{lstlisting} def async[F[_],T](f: T): F[T] \end{lstlisting} \begin{lstlisting} def await[F[_],T]( x: F[T]): T \end{lstlisting} where \lstinline|await| can be used inside the \lstinline|async| block. The complete definitions are more complex, although most of this complexity is hidden from the application programmer.
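For illustration, a minimal usage sketch with the \lstinline|Future| monad may look as follows (a hedged sketch: \lstinline|fetchGreeting| is a hypothetical asynchronous function, and the exact import paths may vary between library versions):
\begin{lstlisting}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import cps.*                 // async, await
import cps.monads.{given, *} // CpsMonad[Future] instance

def fetchGreeting(name: String): Future[String] =
  Future.successful(s"Hello, $name")

def program: Future[String] = async[Future] {
  val a = await(fetchGreeting("Alice")) // direct style:
  val b = await(fetchGreeting("Bob"))   // no visible flatMap
  s"$a and $b"
}
\end{lstlisting}
The block reads like ordinary sequential code, while the macro rewrites it into \lstinline|flatMap| calls over the target monad.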
Let us look at it in detail and briefly describe the basics of the Scala 3 language features used in those definitions. The full definition of \lstinline|async| is: \begin{lstlisting}
transparent inline def async[F[_]](using monad:CpsMonad[F]) =
   InferAsyncArg(monad)

class InferAsyncArg[F[_]](am:CpsMonad[F]) {
   transparent inline def apply[T](f: am.Context ?=> T): F[T]
}
\end{lstlisting} \begin{itemize} \item \lstinline|F[_]| is a type parameter of async. The compiler deduces type parameters automatically from context where possible. The underscore inside the square brackets means that \lstinline|F| is a higher-kinded type with one type parameter. \item \lstinline|transparent inline| marks a macro that is expanded during the typing phase of the compiler. The compiler passes the typed tree to the macro and refines the output type of the result of the macro application. \item A \lstinline|using| clause at the beginning of an argument list means that the compiler will substitute the appropriate parameters with given instances of the parameter types, if such instances are defined in the current scope. A given instance can be introduced into scope via a \lstinline|given| clause. For example, the following definition introduces \lstinline|myMonad| as a given instance of \lstinline|CpsMonad[Future]|: \begin{lstlisting}
given CpsMonad[Future] = myMonad
\end{lstlisting} The predefined function \lstinline|summon[A]| returns the given instance of \lstinline|A| if one is defined. \item Our \lstinline{async} returns an object of the auxiliary class \lstinline{InferAsyncArg}, which defines an \lstinline{apply} method. In Scala, any object with an \lstinline{apply} method can be applied as if it were a function. The auxiliary class is a work-around for the lack of multiple type-parameter lists in function definitions. The ideal definition, without auxiliaries, would look like: \begin{lstlisting}
transparent inline def async[F[_]](using am:CpsMonad[F])[T](am.Context ?=>T)
\end{lstlisting} The optimizer should erase the constructor of the intermediate class. \item \lstinline| f: am.Context ?=> T | is a context function. Here \lstinline|f| is an expression of type \lstinline|T| in context \lstinline|am.Context|. Inside the context function we can use \lstinline|summon[am.Context]| to access the context parameter. The context provides an API that can be used only inside the \lstinline|async| block. \item \lstinline| am.Context | is a path-dependent type. The type \lstinline|Context| should be defined in the scope of \lstinline|am|. \end{itemize} When the developer uses the async pseudofunction in application code, the compiler passes to the macro transformation a typed tree with expanded implicit parameters and context arguments. An example of the original code: \begin{lstlisting}
async[Future] {
  val ec = summon[FutureContext].executionContext
  val x = doSomething()
  ....
}
\end{lstlisting} Expansion passed to the macro: \begin{lstlisting}
async[Future](using FutureAsyncMonad()).apply{
  (fc:FutureContext) ?=>
    val ec = fc.executionContext
    val x = doSomething()
    .....
}
\end{lstlisting} Here we assume that \lstinline|FutureAsyncMonad| is a given monadic interface for \lstinline|Future| where the type \lstinline|Context| is defined as an alias of \lstinline|FutureContext|.
Monadic operations are defined in the given \lstinline|CpsMonad[F]| parameter, which should implement the following typeclass: \begin{figure} \begin{lstlisting}
trait CpsMonad[F[_]] {
  type Context <: CpsMonadContext[F]
  def pure[A](v:A): F[A]
  def map[A,B](fa:F[A])(f: A=>B): F[B]
  def flatMap[A,B](fa:F[A])(f: A=>F[B]): F[B]
}
\end{lstlisting} \end{figure} Optionally, it is extended by error generation and handling operations: \begin{figure} \begin{lstlisting}
trait CpsTryMonad[F[_]] extends CpsMonad[F] {
  def error[A](e: Throwable): F[A]
  def flatMapTry[A,B](fa:F[A])(f: Try[A] => F[B]):F[B]
}
\end{lstlisting} \end{figure} The \lstinline|Cps| prefix refers to continuation-passing style, to which the transformation from direct to monadic style is closely related. The prefix identifies the library rather than a canonical interface: \lstinline|dotty-cps-async| should coexist with generic functional programming frameworks like \lstinline|cats| or \lstinline|scalaz|, where other forms of \lstinline|Monad| are defined, so we need a distinct name to prevent name-resolution conflicts in application code. Now let us look at the full definition of \lstinline|await|: \begin{lstlisting}
def await[G[_],T,F[_]](x:G[T])
    (using CpsMonadConversion[G,F], CpsMonadContext[F]):T
\end{lstlisting} Here \lstinline|G| is the type constructor we are awaiting, and \lstinline|F| is the monad used in the enclosing \lstinline|async|. Note that $F$ and $G$ can be different; if a given instance of the conversion morphism from $G[\_]$ to $F[\_]$ is defined in the current scope, then \lstinline|await| on a \lstinline|G[T]| value can be used inside \lstinline|async[F]|. This morphism is represented by the \lstinline|CpsMonadConversion| interface: \begin{lstlisting}
trait CpsMonadConversion[G[_],F[_]] {
  def apply[T](gt:G[T]): F[T]
}
\end{lstlisting} \lstinline|CpsMonadContext[F]| is an upper bound of \lstinline|CpsMonad[F].Context| with one operation defined: \begin{lstlisting}
trait CpsMonadContext[F[_]] {
  def adoptAwait[T](v:F[T]):F[T]
}
\end{lstlisting} The job of \lstinline|adoptAwait| is to pass information from the current monad context into the awaited monad. For example, this can be a cancellation event in the implementation of a structured concurrency framework. The underlying source transformation is an optimized version of monadification ({\it Erwig and Ren} \cite{10.1016/j.scico.2004.03.004}), similar to translating terms into the continuation monad \cite{10.1145/174675.178053}. This translation is limited to the code block inside an async argument. We use the notation $F.op$ as a shortcut for the appropriate operation of the monad typeclass for $F$, and $C_F \llbracket code \rrbracket$ denotes the translation of $code$ in the context of $F$.
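Before recapping the rules, a small worked example may help (an editorial sketch with a hypothetical \lstinline|fetchCount: Future[Int]|; the real macro output differs in details such as context handling):
\begin{lstlisting}
// Source, as written by the programmer:
//   async[Future] {
//     val n = await(fetchCount)
//     n + 1
//   }
// Conceptual result of the C_F translation:
def expanded(fetchCount: Future[Int])
            (using F: CpsMonad[Future]): Future[Int] =
  F.flatMap(fetchCount)(n => F.pure(n + 1))
\end{lstlisting}
The awaited expression becomes the argument of \lstinline|flatMap|, and the trivial tail \lstinline|n + 1| is lifted with \lstinline|pure|, exactly as the rules below prescribe.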
Let us recap the basic monadification transformations adapted to Scala control-flow constructions: $$ \begin{array}{ l l }\\ \text{trivial} & \frac{ C_F \llbracket t \rrbracket \textrm{ where $t$ is a constant or an identifier }} { F.pure(t) } \\ \\ \text{sequential} & \frac{ C_F \llbracket \{a;b\} \rrbracket } { F.flatMap(C_F \llbracket a \rrbracket )(\_ \Rightarrow C_F \llbracket b \rrbracket ) } \\ \\ \text{val definition} & \frac{ C_F \llbracket {val\,a = b;\, c} \rrbracket} { F.flatMap(C_F \llbracket b \rrbracket )(a' \Rightarrow C_F \llbracket c_{[a \leftarrow a']} \rrbracket) } \\ \\ \text{condition} & \frac{ C_F \llbracket if\,a\,then\,b\,else\,c \rrbracket } { F.flatMap(C_F \llbracket a \rrbracket)(a' \Rightarrow if\,(a')\,then\,C_F \llbracket b \rrbracket \,else\,C_F \llbracket c \rrbracket ) } \\ \\ \text{match} & \frac{ C_F \llbracket a\,match\{\,case\,\, r_1 \Rightarrow v_1 \dots r_n \Rightarrow v_n\} \rrbracket } {F.flatMap(C_F \llbracket a \rrbracket)\{a' \Rightarrow a'\,match\{ case\, r_1 \Rightarrow C_F \llbracket v_1 \rrbracket \dots r_n \Rightarrow C_F \llbracket v_n \rrbracket \} \} } \\ \\ \text{while} & \frac{C_F \llbracket while(a)\{ b \} \rrbracket } {whileHelper(C_F \llbracket a \rrbracket ,C_F \llbracket b \rrbracket )} \\ \\ \end{array} $$ where $whileHelper$ is a helper function, defined as \begin{lstlisting}
def whileHelper(cond: F[Boolean], body: F[Unit]):F[Unit] =
  F.flatMap(cond){ c =>
    if (c) {
      F.flatMap(body){ _ => whileHelper(cond, body) }
    } else {
      F.pure(())
    }
  }
\end{lstlisting} $$ \begin{array}{l l} \\ \text{try/catch} & \frac{C_F \llbracket try\{a\}\{ catch\, e \Rightarrow b \}\{ finally\, c \} \rrbracket } { \mbox{\tiny{\ensuremath{ \begin{array}{l} F.flatMap( \\ \,\, F.flatMapTry(C_F \llbracket a \rrbracket)\{ \\ \,\,\,\,\,\,case\,Success(v) \Rightarrow F.pure(v) \\ \,\,\,\,\,\,case\,Failure(e) \Rightarrow C_F \llbracket b \rrbracket \\ \,\,\} \\ )\{ x \Rightarrow F.map(C_F \llbracket c \rrbracket )(\_ \Rightarrow x) \} \\ \end{array} } } } } \\ \\ \\ \text{throw} & \frac{ C_F \llbracket throw\,ex \rrbracket } { F.error(ex) } \\ \\ \text{lambda} & \frac{C_F \llbracket a \Rightarrow b \rrbracket } {a \Rightarrow C_F \llbracket b \rrbracket } \\ \\ \text{application} & \frac { C_F \llbracket f(a) \rrbracket \textrm{ where $a$ has a non-functional type} } { F.map(C_F \llbracket a \rrbracket )(x \Rightarrow f(x)) } \\ \\ & \frac { C_F \llbracket f(a) \rrbracket \textrm{ where $f$ is a lambda function } } {F.flatMap(C_F \llbracket a \rrbracket )(x \Rightarrow C_F \llbracket f \rrbracket (x)) } \\ \\ & \frac { C_F \llbracket f(a) \rrbracket \textrm{ where there exists an externally provided shifted variant $f'$ of $f$ } } { F.flatMap(C_F \llbracket a \rrbracket )(x \Rightarrow f'(x))} \\ \\ \text{await} & \frac { C_F \llbracket await_G(a)(m,c) \rrbracket \textrm{ where } F = G }{ c.adoptAwait(a) } \\ \\ & \frac{ C_F \llbracket await_G(a)(m,c) \rrbracket \textrm{ where } F \ne G }{ c.adoptAwait(CpsMonadConversion[G,F](a)) } \\ \\ \end{array} $$ The mechanism for the definition and substitution of shifted functions is described in \ref{HO}. The implementation differs from the basic transformation by a few optimizations, which are direct applications of the monad unit law: \begin{itemize} \item several sequential blocks with trivial CPS transformations are merged into one: $$ {F.flatMap(F.pure(a))(x \Rightarrow F.pure(b(x)))} \over { F.pure(b(a)) } $$ \item translations of control-flow operators are specialized for the cases when the transformations of some subterms are trivial.
In the best case, the control-flow construction is lifted inside the monad barriers. For example, the transformation rules for \lstinline|if-then-else|, taking the optimizations into account, look like: $$ \begin{array}{ l l }\\ \,\,\,\,\,\, & \frac { C_F \llbracket if\,a\,then\,b\,else\,c \rrbracket ,\; C_F \llbracket a \rrbracket \ne F.pure(a) \land C_F \llbracket b \rrbracket \ne F.pure(b) \land C_F \llbracket c \rrbracket \ne F.pure(c) } { F.flatMap(C_F \llbracket a \rrbracket )(v \Rightarrow if\,(v)\,then\, C_F \llbracket b\rrbracket \, else\, C_F \llbracket c \rrbracket ) } \\ \\ \,\,\,\,\,\, & \frac { C_F \llbracket if\,a\,then\,b\,else\,c \rrbracket ,\; C_F \llbracket a \rrbracket = F.pure(a) \land (C_F \llbracket b \rrbracket \ne F.pure(b) \lor C_F \llbracket c \rrbracket \ne F.pure(c)) } { if\,a\,then\,C_F \llbracket b \rrbracket \,else\,C_F \llbracket c \rrbracket } \\ \\ \,\,\,\,\,\, & \frac { C_F \llbracket if\,a\,then\,b\,else\,c \rrbracket ,\; C_F \llbracket a \rrbracket = F.pure(a) \land C_F \llbracket b \rrbracket = F.pure(b) \land C_F \llbracket c \rrbracket = F.pure(c) } { F.pure(if\, a\, then\, b\, else\, c) } \\ \\ \end{array} $$ Such specializations are defined for each control-flow construction. \end{itemize} In the resulting code, the number of monadic binds is usually the same as the number of awaits in the program, which makes the performance characteristics of code written in direct style and then transformed to monadic style the same as those of code manually written in monadic style. \subsection{ Monads parametrization } \label{MonadsParametrization} Async expressions are parameterized by monads, which allows the CPS macro to support, beyond the standard case of asynchronous processing, other more exotic applications, such as processing effects ({\it Syme} \cite{10.1145/601775.601776}; {\it Brachth{\"{a}}user} \cite{DBLP:journals/jfp/BrachthauserSO20}), logical search ({\it Kiselyov and Shan} \cite{10.1145/1090189.1086390}), or probabilistic programming ({\it \'{S}cibior, Ghahramani and others} \cite{10.1145/2887747.2804317}). Potentially, the list of problem domains from the \verb|F#| computation expression zoo ({\it Petricek and Syme} \cite{computation-zoo-padl14}) is directly applicable to dotty-cps-async. In practice, the most used monads are constructed over \verb|Future|, effect wrappers like IO, and constructions over effect classes extended by additional custom logic. Let us look at the following example: \label{AACopyFile} \begin{lstlisting}
val prg = async[[X] =>> Resource[IO,X]] {
  val input = await(open(Paths.get(inputName),READ))
  val output = await(open(outputName,WRITE, CREATE, TRUNCATE_EXISTING))
  var nBytes = 0
  while
    val buffer = await(read(input, BUF_SIZE))
    val cBytes = buffer.position()
    await(write(output, buffer))
    nBytes += cBytes
    cBytes == BUF_SIZE
  do ()
  nBytes
}
\end{lstlisting} Here \lstinline|[X] =>> Resource[IO,X]| is a type lambda, which represents the computational-effect monad IO extended by the acquisition and release of resources of the argument type \lstinline|X|. Inside \lstinline|async[[X] =>> Resource[IO,X]]|, the \lstinline|input| and \lstinline|output| resources will be automatically closed at the end of the appropriate scope.
Without async/await, one would have to write the following: \begin{lstlisting}[basicstyle=\small]
( for{
    input <- open(Paths.get(inputName),READ)
    output <- open(outputName,WRITE, CREATE, TRUNCATE_EXISTING)
  } yield (input, output)
).evalTap{ case (input, output) =>
  var nBytes = 0
  def step(): IO[Unit] = {
    read(input, BUF_SIZE).flatMap{ buffer =>
      val cBytes = buffer.position()
      write(output, buffer).flatMap{ _ =>
        nBytes += cBytes
        if (cBytes == BUF_SIZE) step() else IO.pure(())
      }
    }
  }
  step().map{ _ => nBytes }
}
\end{lstlisting} \label{CombSearch} The next example illustrates a monadic representation of combinatorial search. The monad \lstinline|[X] =>> ReadChannel[Future,X]| represents a CSP (Communicating Sequential Processes)-like channel ({\it Hoare} \cite{hoare1985communicating}), where monadic combinators apply functions over the stream of possible states. We want to solve the classical N-Queens puzzle: placing N queens on a chessboard so that no two figures threaten each other. Let us represent the chessboard state as a vector $queens$ of the queens' second coordinates, with an additional helper method \lstinline|isUnderAttack| with obvious semantics. The $i$-th queen is situated at the $(i,queens[i])$ location. \begin{lstlisting}
type State = Vector[Int]

extension(queens:State) {
  def isUnderAttack(i:Int, j:Int): Boolean =
    queens.zipWithIndex.exists{ (qj,qi) =>
      qi == i || qj == j || i-j == qi-qj || i+j == qi+qj
    }
  def asPairs: Vector[(Int,Int)] =
    queens.zipWithIndex.map(_.swap)
}
\end{lstlisting} The function \lstinline|putQueen| generates from one starting state a channel of possible next states. \lstinline|async[Future]| in \lstinline|putQueen| spawns a concurrent process enumerating the next possible steps of the N-Queens solution. \begin{lstlisting}
def putQueen(state:State): ReadChannel[Future,State] =
  val ch = makeChannel[State]()
  async[Future] {
    val i = state.length
    if i < N then
      for{ j <- 0 until N  if !state.isUnderAttack(i,j) }
        ch.write(state appended j)
    ch.close()
  }
  ch
\end{lstlisting} We can then recursively explore all possible steps with the help of the \verb|solutions| function, which returns the stream of finish states: \begin{lstlisting}
def solutions(state: State): ReadChannel[Future,State] =
  async[[X] =>> ReadChannel[Future,X]] {
    if(state.length < N) then
      val nextState = await(putQueen(state))
      await(solutions(nextState))
    else
      state
  }
\end{lstlisting} (a runnable version is available at \url{https://github.com/rssh/scala-gopher/blob/master/shared/src/test/scala/gopher/monads/Queens.scala}) The computation is directed by reading from the stream of solutions. In \lstinline|putQueen|, the computation inside the loop will suspend after each write to the output channel until the descendant states are explored. The suspension point \lstinline|await| is hidden inside \lstinline|ch.write| in the for loop; \lstinline|ch.write| is defined in \lstinline|ReadChannel[F,A]| as \begin{lstlisting}
transparent inline def write(inline a:A): Unit =
  await(awrite(a))(using CpsMonad[F])
\end{lstlisting} Transparent inline macros in Scala are expanded at the same compiler phase, before the enclosing macro, so the async code transformer processes this expanded expression in the for loop instead of \lstinline|ch.write|. In this way, \lstinline|solutions(State.empty).take(2)| will return the first two solutions without performing a full breadth-first search.
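The examples above use fairly rich monads. To illustrate how little is required to plug a custom monad into the machinery, here is an editorial sketch of a \lstinline|CpsMonad| instance for \lstinline|Option| (the library already ships instances for standard types; real instances involve additional context wiring):
\begin{lstlisting}
class OptionContext extends CpsMonadContext[Option]:
  def adoptAwait[T](v: Option[T]): Option[T] = v  // nothing to adopt

given CpsMonad[Option] with
  type Context = OptionContext
  def pure[A](a: A): Option[A] = Some(a)
  def map[A,B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  def flatMap[A,B](fa: Option[A])(f: A => Option[B]): Option[B] =
    fa.flatMap(f)
\end{lstlisting}
With such an instance in scope, \lstinline|async[Option]{ ... }| becomes available, and a \lstinline|None| produced by any \lstinline|await| short-circuits the rest of the block.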
\subsection{ Translation of higher-order functions } \label{HO} Supporting the cps transformation of higher-order functions is important for a functional language because it allows \lstinline|await| expressions inside loops and inside the arguments of common collection operators. As an example, in the previous section \lstinline|await| inside a \lstinline|for| loop was used for asynchronous channel writes. Using await inside a higher-order function enables an idiomatic functional style, such as \begin{lstlisting}
val v = cache.getOrElse(url, await(fetch(url)))
\end{lstlisting} The local cps transform changes the type of a lambda function. If the runtime platform supports continuations, we can keep the shape of the arguments in the application unchanged by defining `monad-escape' function transformers, which can restore the view of $cps(f): A \Rightarrow F[B]$ back to $A \Rightarrow B$. However, on platforms without continuation support, higher-order functions from other modules are a barrier for local async transformations. At the time of writing, none of the available Scala runtimes (i.e., JVM, Js, or Native) has continuation support. For the JVM there is a plan to implement continuation support via Project Loom \cite{ProjectLoom}, but it is not yet available for production use. The JavaScript runtime is significantly smaller than the JVM, and its runtime semantics is precisely defined as asynchronous. For those runtimes, and for cases when the semantics of the monad does not allow us to build such an escape function, dotty-cps-async implements limited support for higher-order functions. The macro performs a set of transformations that allow developers to describe the substitution of the original higher-order function in their code. Let us have a first-order function $f: A \Rightarrow B$ of the form $\lambda x.\, code_{f}(x)$ and a higher-order method $o.m: (A \Rightarrow B) \Rightarrow C$. For simplicity, let us assume that $o$ is a reference to an external symbol and does not need cps transformation itself, since we want to show only function-call translations here. The async transformation transforms $code:X$ into $C_F \llbracket code \rrbracket : F[X]$, where $F$ is our monad.
Let us informally describe the set of transformations used to translate a function call: $$ \begin{array}{l l} \text{ unchanged\,\,\,\,\,} & \frac{C_F \llbracket o.m(f) \rrbracket ,\; C_F \llbracket code_f \rrbracket = F.pure(code_f) } { F.pure(o.m(f)) } \\ \\ \text{ monadic } & \frac {C_F \llbracket o.m(f) \rrbracket ,\; B = F[B'] } { F.pure(o.m(\lambda x.\, F.flatMap(C_F \llbracket code_f(x) \rrbracket )(identity))) } \\ \\ \text{ asyncShift-fo} & \frac { \begin{array}{l l} \scriptstyle{ C_F \llbracket o.m(f) \rrbracket }, & \scriptstyle{ \exists\, asyncShift_O = summon[AsyncShift[O]] : } \\ & \scriptstyle{ asyncShift_O.m: O \times CpsMonad[F] \Rightarrow (A \Rightarrow F[B]) \Rightarrow F[C] } \\ \end{array} } {asyncShift_O.m(o, monad)(x \Rightarrow C_F \llbracket code_f(x) \rrbracket )} \\ \\ \text{ asyncShift-o} & \frac { \begin{array}{l l} C_F \llbracket o.m(f) \rrbracket , & \exists\, asyncShift_O = summon[AsyncShift[O]] : \\ & asyncShift_O.m: O \Rightarrow (A \Rightarrow F[B]) \Rightarrow F[C] \\ \end{array} } {asyncShift_O.m(o)(x \Rightarrow C_F \llbracket code_f(x) \rrbracket )} \\ \\ \text{ inplace-f} & {\frac {C_F \llbracket o.m(f) \rrbracket ,\; \exists\, m_{asyncShift} \in methods(O) : CpsMonad[F] \Rightarrow (A \Rightarrow F[B]) \Rightarrow F[C] } { o.m_{asyncShift}(monad)(x \Rightarrow C_F \llbracket code_f(x) \rrbracket ) } } \\ \\ \text{ inplace} & {\frac {C_F \llbracket o.m(f) \rrbracket ,\; \exists\, m_{asyncShift} \in methods(O) : (A \Rightarrow F[B]) \Rightarrow F[C] } { o.m_{asyncShift}(x \Rightarrow C_F \llbracket code_f(x) \rrbracket ) } } \\ \\ \end{array} $$ Explanation: \begin{itemize} \item In the \textbf{unchanged} case, we can leave the call as-is because no cps transformation was needed. Note that this also handles the case when we have no access to the source of the $f$ argument: if $f$ is defined externally, it cannot contain \lstinline|await|. \item In the \textbf{monadic} case, it is possible to reshape the function argument while keeping the signature expected by the receiver. \item Case \textbf{asyncShift-fo} defines a call substitution: if we have an instance $asyncShift_O$ of the marker typeclass \lstinline|AsyncShift[O]|, it provides a substitution method $m_{shift}$ with an additional parameter list in which we pass the original object and the target monad. I.e., suppose we have a class with a higher-order function, for example: \begin{lstlisting}
class Cache[K,V] {
  def getOrUpdate(k: K, whenAbsent: =>V): V
}
\end{lstlisting} and want to use this class in an asynchronous environment, as in the next code fragment: \begin{lstlisting}
async[Future] {
  ...
  cache.getOrUpdate(k, await(fetchValue(k)))
}
\end{lstlisting} where \lstinline|fetchValue| returns \lstinline|Future[V]|. To define an async substitution for the \lstinline|getOrUpdate| method, we should define a given instance of the marker typeclass $asyncShift_{Cache}$ in which the shifted method is defined: \begin{lstlisting}
class CacheAsyncShift[K,V] extends AsyncShift[Cache[K,V]]{
  def getOrUpdate[F[_]](o:Cache[K,V],m:CpsMonad[F])
                       (k:K, whenAbsent: ()=> F[V]):F[V] =
    ....
}

given [K,V]: CacheAsyncShift[K,V]()
\end{lstlisting} Here the substitution method has one additional list of arguments, \lstinline|(o:Cache[K,V],m:CpsMonad[F])|, in which we pass the original object itself and our target monad. Since the monad parameter is generic, we also have an additional type parameter \lstinline|F|.
The function call will then be transformed to \begin{lstlisting}
summon[AsyncShift[Cache[K,V]]]
  .getOrUpdate[Future](cache,monad)(k, ()=>fetchValue(k))
\end{lstlisting} and \lstinline|summon[AsyncShift[Cache[K,V]]]| will be resolved to \lstinline|CacheAsyncShift[K,V]| by the implicit resolution rules, so the resulting expression will be \begin{lstlisting}
CacheAsyncShift[K,V]()
  .getOrUpdate[Future](cache,monad)(k, ()=>fetchValue(k))
\end{lstlisting} \item Case \textbf{asyncShift-o} is a modification of the previous rule for the situation when the substitution class is already parameterized by our target monad, so we do not need an extra type parameter and a monad instance in the additional parameter list. \item Cases \textbf{inplace-f} and \textbf{inplace} describe the situation when the author of a class is aware of the existence of dotty-cps-async and defines a shifted method in the same scope as the original method. By convention, such shifted methods carry an \lstinline|Async| suffix. Example: \begin{lstlisting}
class Cache[K,V] {
  def getOrUpdate(k: K, whenAbsent: =>V): V
  def getOrUpdateAsync[F[_]](m: CpsMonad[F])
                            (k: K, whenAbsent: () => F[V]): F[V]
}
\end{lstlisting} \end{itemize} Such substitutions for most higher-order functions from the Scala standard library are supplied with the dotty-cps-async runtime. Developers can also provide their own substitutions for third-party libraries. The return type of a substituted function can be: \begin{itemize} \item $C$, the same as the original; \item $F[C]$, the original return type wrapped in the monad; \item \lstinline|CallChainAsyncShiftSubst[F,C,F[C]]|, a special marker interface for call-chain substitution, which is described below. \end{itemize} In addition, a method in \lstinline|O| named \lstinline|m_async| or \lstinline|mAsync| that accepts the shifted argument $f: A \Rightarrow F[B]$ is also recognized; the conventions for the return type are the same as in the previous case. This variant is helpful for the fluent development of APIs that are accessible in both synchronous and asynchronous forms. If none of the above is satisfied, the macro generates a compile-time error. These rules extend to multiple parameters and multiple parameter lists, assuming that if we have one higher-order async parameter, then all other parameters should also be transformed, so that there is only one representation of the asynchronous method. \subsection{ Call-chain substitutions } As shown in the previous section, one possible return type of a substituted higher-order function is \\ \lstinline|CallChainAsyncShiftSubst[F[_],B,F[B]]|. The developer can use this variant to delay applying $F[\_]$ until the end of the call chain. For example, consider the next block of code: \begin{lstlisting}
for {
  url <- urls  if await(score(url)) > limit
} yield await(fetchData(url))
\end{lstlisting} which is desugared as \begin{lstlisting}
urls.withFilter( url =>
  await(score(url)) > limit
).map(url => await(fetchData(url)))
\end{lstlisting} The programmer expects the behavior of the code to be the same regardless of whether \lstinline|await| is used inside the loop, so the list of URLs should be iterated once. However, if the result of \lstinline|withFilter| had the form \lstinline|F[List.WithFilter]|, two iterations would be performed: one filtering the list of URLs and another over the filtered list to fetch the data. User objects for call-chain substitution can accumulate the sequence of higher-order functions in one batch and perform the iteration once.
This block of code is transformed as follows: \begin{lstlisting}
summon[AsyncShift[List[String]]]
  .withFilter[F](urls,m)(
     url => m.map(score(url))(x => x > limit)
  ) // CallChainAsyncShiftSubst[F,WithFilter,F[A]]
  .mapAsync(url => fetchData(url)) // function added to builder
  ._finishChain() // finally evaluate all
\end{lstlisting} \subsection{ Automatic coloring } \label{AutomaticColoring} Automatic coloring is a way to free the developer from writing boilerplate await statements. Since most industrial code is built with some asynchronous framework, await expressions often appear on literally every line. Those expressions do not carry business logic; when writing code, we should not have to care whether a value arrives synchronously or asynchronously, just as we do not care how memory for our objects is allocated and deallocated. Apart from performance-critical applications (e.g., developing a web server for which the developer requires complete control of low-level concurrency details), it is preferable to use higher-level constructs that hide low-level details. When writing business logic on top of some low-level system framework, we expect that the framework provides a reasonable generic concurrency model and abstracts away manual coloring. We can provide an implicit conversion from $F[T]$ to $T$. Can we make such a conversion safe and preserve semantics with automatic coloring? It is safe when $F[\_]$ is a \lstinline|Future|, because multiple calls of \lstinline|await| on a \lstinline|Future| produce the same effect as one call: once the result value is available after the first await, it is returned immediately by any further awaits of the same Future. I.e., we can say that \lstinline|Future| is cached. For other types of monads, where each \lstinline|await| can perform a new computation, such an implicit conversion is unsafe; the behavior of the following code snippets will differ: \begin{lstlisting}
val a = x
f1(await(a))
f2(await(a))
\end{lstlisting} and \begin{lstlisting}
val a = await(x)
f1(a)
f2(a)
\end{lstlisting} To overcome this, we can memoize execution by embedding memoization into the transformation of val definitions. Let us have a block of code \lstinline[basicstyle=\small]|{ val v = expr; | $tail_v$ \lstinline| }|, where $expr$ returns a value of type $F[T]$ and there exists a \lstinline|CpsMemoization[F]| with method \lstinline|apply[T](F[T]):F[F[T]]|. The cps transformer can check the variable type and rewrite this to $$ \texttt{summon[CpsMonad[F]]}.\texttt{flatMap}( \texttt{CpsMemoization[F]}(expr))( v_1 \Rightarrow C_F \llbracket tail_{[v \leftarrow v_1]} \rrbracket ) $$ Implicit conversions are often criticized as an unsafe technique that can be a source of bugs and maintainability problems. In our case, uncontrolled usage of implicit conversions can break the semantics of building complex effects, where some building parts can be automatically memoized. Dotty-cps-async implements a preliminary analysis of the automatically generated conversions, which emits errors when potentially unsafe usage is detected. To make the transformation safe, we should check that the developer cannot pass a memoized value to an API that expects a delayed effect. The preliminary analysis ensures that all usages of memoized values are in a synchronous context by enforcing the following rules: \begin{itemize} \item If some variable is used only in a synchronous context (i.e., via await), the macro will color it as synchronous (i.e., cached if used more than once).
\item If some variable is passed to other functions as an effect, it is colored as asynchronous (i.e., uncached). \item If a variable is used simultaneously in synchronous and asynchronous contexts, we cannot deduce the programmer's intention, and the coloring macro will report an error. \item If a variable defined outside of the async block is used in a synchronous context more than once, the macro will also report an error. \end{itemize} Besides providing the implicit conversion, automatic coloring should also take care of value discarding: expressions evaluated only for their side effects are not assigned to a value but discarded. With automatic coloring, the monad carrying the side effect becomes the value of such an expression, so we should also transform statements with value discards to insert awaits there. The dotty-cps-async interface has a \lstinline|ValueDiscard[T]| typeclass. A statement inside an async block can discard a value of type \lstinline[basicstyle=\small]|T| only if an implementation of the \lstinline|ValueDiscard[T]| interface exists; in that case, the macro transforms the value discard into \lstinline|summon[ValueDiscard[T]].discard(t)|. A special marker typeclass \lstinline|AwaitValueDiscard[F[T]]| is used when the value discard should be a call to await. If we apply automatic coloring to our file-copying example, we see that the difference between synchronous and asynchronous code becomes invisible. \begin{figure} \label{AACopyFileAC} \begin{lstlisting}
val prg = async[[X] =>> Resource[IO,X]] {
  val input = open(Paths.get(inputName),READ)
  val output = open(outputName,WRITE, CREATE, TRUNCATE_EXISTING)
  var nBytes = 0
  while
    val buffer = read(input, BUF_SIZE)
    val cBytes = buffer.position()
    write(output, buffer)
    nBytes += cBytes
    cBytes == BUF_SIZE
  do ()
  nBytes
}
\end{lstlisting} \end{figure} \section{ Related work } \label{RelatedWork} The idea of a `virtual' program flow encapsulated in a monad can be traced back to ({\it Claessen} \cite{claessen_1999}), which became a foundation for the Haskell concurrency library. Later, \verb|F#| computation expressions were implemented as a further development of do-notation. Furthermore, \verb|C#| moved async/await from virtual monadic control flow to normal control flow, which became a pattern for other languages ({\it Syme} \cite{10.1145/3386325}). ({\it Petricek and Syme} \cite{computation-zoo-padl14}) provide an overview of computation expression usage in different areas. Generic monadic operation pairs (reify/reflect) and the links between monadic and cps transformations are described in ({\it Filinski} \cite{10.1145/174675.178047}). In Scala land, the first cps transformer was implemented as a compiler plugin ({\it Rompf, Maier, Odersky} \cite{DBLP:conf/icfp/RompfMO09}). It provides a quite powerful but complex interface based on delimited continuations. Scala-Async ({\it Haller} \cite{hallerScalaAsync}) provides a more familiar interface for organizing asynchronous processing by compiling async control flow to state machines. Its main limitation is the absence of exception handling. Recently, a Lightbend team moved the implementation of scala-async from a macro to a compiler plugin and extended it to support external `Future systems' such as IO or Monix. Although dotty-cps-async is internally based on another type of transformation, it can be viewed as an extension of the scala-async interface for the next language version, with a similar role in the Scala ecosystem.
The new facilities are a generic monad interface, support for try/catch, and limited support for higher-order functions. In ({\it Haller and Miller} \cite{DBLP:journals/corr/HallerM15}), the scala-async model is extended to handle reactive streams. Scala coroutines ({\it Prokopec and Liu} \cite{prokopec_et_al:LIPIcs:2018:9208}) provide a model that allows building an async/await interface on top of coroutines. Scala Virtualized ({\it Rompf, Amin, Moors, Haller} \cite{ScalaVirtualized}) is devoted to solving a more general problem: providing deep embedding not only for monadic constructions but for an arbitrary language. Scala Effekt ({\it Brachth{\"{a}}user and others} \cite{DBLP:journals/jfp/BrachthauserSO20}) allows the interpretation of effect handlers inside a control monad with delimited continuations. The same authors released a monadic reflection library for Scala 3 ({\it Brachth{\"{a}}user and others} \cite{brachthaeuser21representing}), using the capabilities of the not-yet-released support for continuations in the JVM ({\it Project Loom} \cite{ProjectLoom}). This approach can be a convenient way to implement async/await-like functionality for future versions of the JVM, for monads that can be implemented on top of one-shot continuations. Note that the combinatorial search example from Section~\ref{MonadsParametrization} on page~\pageref{CombSearch} cannot be implemented with runtime monadic reflection, because combinatorial search, like other applications of non-determinism, requires multi-shot continuations, where a captured continuation can be invoked more than once. \section{ Conclusion and further work } \label{Conclusion} Prerelease versions of dotty-cps-async have been available as open source for more than a year, and we have some information based on actual usage in application projects. The macro library is used in an open-source chatbot server with a \lstinline|Future|-based stack. An experimental proofspace.id internal microservice for connecting a PostgreSQL database to a VoIP server is built with the help of dotty-cps-async on a \lstinline|cats-effect| stack. The most frequently used monad there is \lstinline|[X] =>> Resource[IO,X]| (the common pattern is to acquire a database connection from the pool for each request). We can also point to the port of scala-gopher to Scala 3, which provides an implementation of communicating-sequential-process primitives on top of the generic monadic API, and to an experimental Typelevel project that brings the direct style into the cats-effect ecosystem. Overall feedback is mostly positive. Reported issues usually take the form of an inability to compile some specific tree and are often tracked down to issues in the compiler. Note that the Scala 3 compiler was also highly experimental at this time. The number of reports about ergonomics and diagnostics is relatively low, in particular because Scala 3 macros are applied after typing, so type errors are usually caught before macro application. In some cases, an error occurs during retyping of the transformed tree; for this case, a patch for better error diagnostics was submitted and merged into the Scala 3 compiler. The patch extends the error message by showing the tree transformed by the current macro, in addition to the code position, under the -explain compiler option. Performance issues have not been reported at all. Dotty-cps-async does not provide its own asynchronous runtime but is used with some existing runtime.
Furthermore, if synchronous code remains unchanged and asynchronous code does not add extra runtime operations, then any benchmark will show the performance of the underlying runtime. The ability to use direct control flow on top of some library is one half of the programming experience; the other half is the library itself. Currently, we have a set of asynchronous Scala runtimes with different sets of capabilities, and it would be interesting to build uniform facilities for concurrent programming. One open question is how to extend the eager Future runtime to support structured concurrency. A problem from the other side: users of effect stacks, such as IO, need to wrap impure APIs into effects; can we automate this process? It would also be interesting to adopt runtime continuations instead of compile-time transformations for some types of monads, which would eliminate the need to manually write substitutions for higher-order functions on continuation-enabled platforms. Another direction is the expressivity of the internal language, which can be extended by building an appropriate wrapper control monad.
{ "timestamp": "2022-09-23T02:13:14", "yymm": "2209", "arxiv_id": "2209.10941", "language": "en", "url": "https://arxiv.org/abs/2209.10941" }
\section{Introduction} \label{sec:Introduction} Enabled by recent technologies and the large demand for fast delivery and transport, the density of aerial vehicles in the airspace will increase drastically. Especially when operating unmanned aerial vehicles (UAVs), safety is the first prerequisite. The improvement of safety has always been a major target in the research and development of new radio-based systems for aerial applications. These systems include applications for communication, navigation, and collision avoidance~\cite{NLALO13,PTSTJ18,BDOACS21,ED275}. Several radio-based applications are in use in general aviation, some of which cooperatively exchange information with other airspace participants, while others non-cooperatively collect information about the environment only. In recent developments, the visible spectrum is taken into account as well, e.g., by including cameras. However, the focus of this article is on the option of combining different radio applications in a single system. A simple example of the multiple use of signals in airborne applications is the advanced collision avoidance system (ACAS). Initially, transponders were developed for military applications to allow the identification of friend or foe (IFF), introducing the secondary radar. Today, transponders provide flight controllers with improved situational awareness by supplying flight altitude and further information~\cite{ED73C}, while enabling collision avoidance based on the mutual exchange and interrogation of transponder data~\cite{ED275}. The combination of communication and sensor technology is of great interest, as it opens up further degrees of freedom in the processing of the obtained information. Joint communication, sensing and localization (JCSL) can provide improved situational awareness and reliability in the development of small, agile, and customized transportation solutions. Data rates can be increased, and the number of independent radio-frequency (RF) systems that need to be individually coordinated for simultaneous use can be reduced. The ongoing Master360 project (``Multisensor System for Helicopter Automation and Safe Integration of UAVs into Air Traffic with 360$^\circ$ Coverage'') is dedicated to safety issues of UAVs, with particular interest in unmanned autonomous flying taxis. The goal of the project is the safe integration of UAVs into the segregated airspace. A wide variety of on-board communication and sensing systems must be coordinated to enable reliable operation of the aerial platforms. Therefore, a new chipset for radar signal processing is being developed that allows the miniaturization of the radar system. The novelties presented in this paper are exemplified by the developments in the Master360 project, illustrating the possible benefits of JCSL developments. Novel aspects of the paper include the following: \begin{itemize} \item Scenarios for implementing JCSL in urban air mobility (UAM) settings and aerial applications are presented and illustrated with practical examples, \item a waveform design for JCSL applications is proposed, and \item multi-mode multi-port antennas are shown to provide good (beamforming) performance to achieve the desired goals. \end{itemize} The remainder is organized as follows: In Section~\ref{sec:ClassificationCAU}, different air-based communication applications and their origins are discussed, and their potential regarding JCSL is outlined.
In Section~\ref{sec:SensingHAW}, the increasing demand for sensing and localization applications in aviation is pointed out. The challenges arising from the application example in the project motivate a joint waveform design for JCSL, introduced in Section~\ref{sec:WaveformDesignHAW}. In Section~\ref{sec:AntennasLUH}, multi-mode multi-port antennas (M$^3$PAs) are proposed as a key feature for solving demanding tasks. Finally, conclusions are drawn in Section~\ref{sec:Conclusion}. \section{Classification of Radio Communication Applications in Aviation} \label{sec:ClassificationCAU} Since the early days of aviation, communication applications have been of special interest to both flight crews and ground personnel. In addition to the early-stage transponders for IFF mentioned in the introduction, British aircraft were equipped with wireless telegraphy as early as World War I to allow the rapid transmission of information from tracking pilots to command~\cite{Mar20}. Both communication systems worked in different manners, serving different tasks: communication in terms of a human-to-human (H2H) connection (radio application) and human-to-system / machine (H2M). Due to the different nature of the communication (H2H vs. H2M), different techniques were used. This has not changed even for the latest versions of radio systems aboard airplanes. In general, most systems have been developed to solve a single task. Some of them have since been used for additional tasks beyond what they were initially developed for. ACAS II, for example, which provides advice on how to avoid hazardous situations in air traffic, was derived from the use of transponder signals, which were originally introduced to provide controllers with better situational awareness. When trying to improve JCSL, the different kinds of communication services aboard an aerial vehicle need to be distinguished and compared. Several options for comparison are available: Some systems are based on the ground, whereas others are employed airborne. Some transmit voice using analog modulation techniques, while others transmit data using digital schemes. In the context of this paper, the communication services are sorted by their use case: \begin{itemize} \item Telephony, \item collision avoidance, and \item navigation. \end{itemize} The use case of the communication technology determines the required bandwidth and signal design, including modulation and transmit power. The signal design is particularly important for the implementation of JCSL, as the applications must not have a negative impact on the coexisting systems. \begin{figure} \centering \input{tikz/Piper} \caption{Piper with antennas for different purposes. The numbers indicate functions and typical frequencies as follows: 1: Transponder (1030/1090 MHz), 2: GNSS / SatCom (GPS for aerial applications: 1176.45 MHz), 3: ELT (406 MHz), 4: Radio communication (117.975 - 137 MHz), 5: VOR navigation (108 - 117.95 MHz).} \label{fig: PiperAntennas} \end{figure} In Fig.~\ref{fig: PiperAntennas}, a Piper plane with its antennas is shown. The antennas can be related to the applications aboard the plane. Probably the best-known application is radio telephony. Using analog amplitude modulation in the frequency band between 117.975~\si{MHz} and 137~\si{MHz} allows for the understanding of speech even under poor signal-to-noise ratio (SNR) conditions that would lead to a connection loss with alternative transmission schemes. The channels for radio telephony are separated by 8.33~\si{kHz}.
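As a back-of-the-envelope illustration (an editorial aside, not a figure from the cited standards), this raster supports on the order of $(137 - 117.975)\,\si{MHz} \,/\, 8.33\,\si{kHz} \approx 2280$ channels in the band, roughly three times as many as the older 25~\si{kHz} spacing would allow.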
Flight transponders usually have to be seen as part of the secondary radar, providing flight controllers with the information required to allow safe travel. Initially, in the simple modes supported by early transponders, only a given code, named ``squawk'', was transmitted. Nowadays, however, these transponders are capable of providing a 24-bit address of the aircraft, altitude and velocity, and additional information, which may include the rate of climb or descent, and they allow use for collision avoidance. This is based on interrogating the transponders of other aircraft. Transponders capable of interacting with other transponders in such a way are called interrogators. Apart from the ACAS~II system, various additional collision avoidance systems are in widespread use, depending on the application. In terms of unmanned aircraft integration, ACAS~X is a new system rather than an improvement on the established ACAS~II. It includes communication with ACAS~II-equipped aerial vehicles, but offers new options. The new ACAS~X standard family includes ACAS~Xu for unmanned and ACAS~Xa for large aircraft~\cite{MaJe16}. For small aircraft, including gliders, the FLARM system (868--915~MHz) has been developed to assist the pilot in preventing collisions, e.g., in small lifts and at rather short distances. This system does not provide any resolution advisory, but provides information about the most probable conflict. If a collision of the aircraft happens, the emergency locator transmitter (ELT) on motorized airplanes starts its transmission, triggered by acceleration, contact with water, or manual activation. The device transmits a locator signal at 121.5~MHz and 406~MHz. The locator signal contains identification data and allows localization based on, e.g., Doppler-shift algorithms. Employing the same frequency band as radio telephony, but using a spacing of 50~\si{kHz}, very high frequency (VHF) omni-directional radio range (VOR) systems allow navigation without Global Navigation Satellite Systems (GNSS) like the Global Positioning System (GPS). The signal of the ground stations contains the angle of the aerial vehicle measured w.r.t. magnetic north, modulated as a Doppler frequency shift. Improved versions also provide the option of distance measuring, using distance measurement equipment (DME). These systems work similarly to a secondary radar: The vehicle transmits a number of pulses, which are then answered by the ground station. Since the signal processing delay in the ground station is defined, the processor in the receiver aboard can calculate the time of flight of the signals, the so-called round-trip time; the slant range then follows as $d = c\,(t_{\mathrm{RTT}} - t_{\mathrm{proc}})/2$, with $c$ the speed of light and $t_{\mathrm{proc}}$ the defined ground-station delay. This may serve as an example of the narrow line between localization and communication in aviation applications. The DME system utilizes a kind of code division multiple access (CDMA) scheme: The pulses are transmitted in a randomly chosen manner to distinguish the pattern of replies to the own call from those to others. An extensive literature survey on navigation for aerial vehicles is given in \cite{CaPeAl18}. Even older than VOR-DME is the usage of non-directional beacons (NDBs), which can be seen as an equivalent of naval lighthouses for guiding planes. These old navigation aids provide a low-frequency identification signal only, whose direction-of-arrival (DoA) needs to be measured or estimated using a combination of directional and non-directional antennas, denoted automatic direction finder (ADF).
Nowadays, most navigation is done or assisted by GPS. However, as reported in \cite{Har21}, relying on a single navigation aid like GNSS is not sufficient due to possible system failure (single point of failure). In summary, the above-mentioned communication systems have traditionally been treated separately. They employ their own antennas as well as individual installation space and signal processing units, and they occupy different frequency bands. \section{Radar Sensing and Localization in Aviation} \label{sec:SensingHAW} The first radar application dates back to 1904, when Christian Hülsmeyer filed the first radar patent for detecting ships in all weather conditions, especially fog. During the Second World War, the focus shifted towards ground-based applications for the detection (sensing) and localization of aerial vehicles, for example for early-warning and fire-control tasks. After the war, the compiled radar knowledge migrated into various civil applications, including ground-based radars for air-traffic control and airborne radar systems for pilot assistance and situational awareness. Nowadays, weather radar and altimetry radar are mandatory systems to be installed in large airplanes. Compared to optical sensors (e.g. cameras), a radar system offers all-weather capability and day-and-night operation. Recent trends towards UAM and new flight maneuvers in aviation, especially in the vertical dimension during automatic / vertical take-off and landing (ATOL / VTOL) as well as en-route, demand fast, secure, light-weight, cost-effective radar systems of small installation size with essentially 360$^\circ$ coverage in azimuth and elevation. Due to the densification of traffic patterns expected especially for UAM scenarios, aerial vehicles will be required to be equipped with multiple systems capable of handling radio-based communication and localization / sensing tasks both automatically and reliably. If these are realized as separate systems, the confined installation space is likely to entail increased levels of interference, leading to basic coexistence issues. Joint system design may alleviate these problems by utilizing either the time or frequency domain via advanced signal processing (Section~\ref{sec:WaveformDesignHAW}), the spatial domain via intelligent hardware design (Section~\ref{sec:AntennasLUH}), or a combination thereof. \section{Waveform Design for JCSL in Aviation} \label{sec:WaveformDesignHAW} \begin{figure} \centering \input{tikz/trad_block.tex} \input{tikz/block_legend.tex} \input{tikz/jcsl_block.tex} \caption{Block diagram of the proposed JCSL system as part of the communication and localization system for UAVs. The inter-system interference and coupling between the different antenna systems occurring traditionally (top) is avoided by employing the JCSL unit (bottom). Improved data fusion can enhance obstacle avoidance (OA) and automatic take-off and landing (ATOL) processing. } \label{fig:jcsl block diagram} \end{figure} Regarding the design of JCSL systems in the time and frequency domain, three different approaches with increasing complexity and performance can be differentiated. In the simplest form, the radio signals of the communication and sensing systems can, on the one hand, be separated in the time domain via a time-multiplexed waveform design, which reduces the time budget per task and potentially compromises the performance of each.
On the other hand, a separation in the frequency domain via a frequency-multiplexed waveform design can be accomplished by utilizing separate frequency bands for each task. This can be costly, as the frequency spectrum is a valuable and scarce resource and international regulations restrict its use. A more complex approach relies on using a single waveform for both tasks in a modified way. When a radar signal is simultaneously used for (secondary) communication applications, this can be accomplished by embedding information into the existing radar signal, for instance via additional phase coding. Such an approach is often referred to as \emph{RadCom}, as the radar system defines the base for the joint system. Preferably, the embedded information should not deteriorate the radar performance in this setup. Vice versa, communication waveforms that are characterized by favorable radar properties, e.g., regarding their autocorrelation properties in time and/or frequency, can also be used for target detection and localization by means of range/Doppler processing. In accordance with the prior definition, such approaches are referred to as \emph{ComRad} systems. In this setup, the communication task is the primary one. Fig.~\ref{fig:jcsl block diagram} illustrates the difference between traditional systems and a JCSL system. In traditional system designs, each part of the system requires its own system components, including processing, antennas, and RF components. In contrast, a JCSL system is designed to fit all requirements and shares a major part of the processing, RF components, and antennas. If the same number of antennas is used, a diversity gain is achievable; alternatively, the required hardware can be reduced, which lowers the occupied space and weight. Hence, the main challenge of the JCSL design is the JCSL unit. In this unit, the data and protocol information of the communication transceiver need to be matched with the requirements of the radar system, which are defined by the flight controller and its needs in terms of sensing performance. Finally, when JCSL systems are designed bottom-up, the waveform can be tailored to meet the requirements of specific tasks. In this context, orthogonal frequency-division multiplexing (OFDM) has been shown to be a good candidate to serve as a modulation scheme for both communication and radar tasks, see for instance \cite{sturm2011waveform} and references therein. In conjunction with an appropriate digital modulation scheme on the sub-carriers -- such as M-ary phase-shift keying (PSK) or quadrature amplitude modulation (QAM) -- the orthogonal subcarriers are well suited for signal separation at the receiver and are even robust against Doppler shifts when the subcarrier spacing is chosen appropriately. Although well suited for JCSL applications, OFDM comes with the disadvantage of relatively high peak-to-average power ratios (PAPRs). If no special countermeasures are implemented, this requires the transmitter to employ a corresponding power backoff in order to avoid clipping and non-linear behavior within the power amplifier. Yet, reducing the transmit power entails an undesired decrease in the maximum achievable radar and communication range. For a pure radar application without simultaneous wireless communication functionality, classic PAPR reduction techniques (e.g., Newman phases) may be employed, which are typically tailored to real-valued transmit signals \cite{MPVZ19}.
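The PAPR issue itself is easy to reproduce numerically. The following Monte-Carlo sketch is illustrative only and is not the simulation behind Fig.~\ref{fig:papr compared dft ofdm}; it estimates the PAPR of conventional OFDM and of full-band DFT-spread OFDM symbols carrying random 8-PSK data:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_SC, N_SYM, OSF = 256, 10_000, 4     # subcarriers, symbols, oversampling

def papr_db(x):
    """Per-symbol PAPR in dB (rows = time-domain symbols)."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max(axis=1) / p.mean(axis=1))

# Random 8-PSK data on all subcarriers.
data = np.exp(2j * np.pi * rng.integers(0, 8, (N_SYM, N_SC)) / 8)

# Conventional OFDM: oversampled IFFT of the data symbols.
ofdm = np.fft.ifft(data, n=OSF * N_SC, axis=1)

# DFT-spread OFDM (full-band, localized mapping): DFT-precode, then IFFT.
dft_s = np.fft.ifft(np.fft.fft(data, axis=1), n=OSF * N_SC, axis=1)

for name, x in (("OFDM", ofdm), ("DFT-s-OFDM", dft_s)):
    v = papr_db(x)
    print(f"{name:>10}: median {np.median(v):4.1f} dB, "
          f"99.9th percentile {np.percentile(v, 99.9):4.1f} dB")
\end{verbatim}
Even such a toy comparison shows DFT-spread OFDM several decibels below conventional OFDM, in line with the CCDF comparison discussed in the following.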
For such real-valued multi-carrier signals, a fixed complex-conjugate symmetry between the OFDM sub-carriers can be established in the complex baseband domain, as no random data symbols need to be included at this point. For RadCom applications, however, more advanced PAPR reduction techniques are required, which are able to handle random data symbols mapped onto the individual OFDM sub-carriers (see, e.g., Reference [9] in \cite{mietzner2019dftspread} for an overview). An alternative approach for moderate-data-rate communication links lies in modulation schemes such as discrete Fourier transform (DFT)-spread OFDM~\cite{mietzner2019dftspread}. As for OFDM, a time-domain matched filter can be employed for DFT-spread OFDM in order to maximize the SNR in the range domain. Alternatively, frequency-domain signal processing may be used. A corresponding analysis for DFT-spread OFDM showed a significant PAPR reduction of several decibels compared to conventional OFDM. An example is illustrated in Fig.~\ref{fig:papr compared dft ofdm} in the form of the complementary cumulative distribution functions (CCDFs) of the resulting PAPR values. The simulations were performed for a single-input single-output (SISO) and a multiple-input multiple-output (MIMO) configuration with two transmit antennas and interleaved subcarriers. Specifically, 256 sub-carriers with a spacing of $\Delta f=\SI{15}{\kilo\hertz}$ have been used in the simulations (the sub-carrier spacing might be further enlarged to improve Doppler resilience). The cyclic prefix length has been chosen to allow for maximum target ranges of $\SI{2}{\kilo\meter}$, and an 8-ary PSK modulation was employed in order to include random data symbols. As can be seen, with DFT-spread OFDM the resulting PAPR value is below $8\,\mathrm{dB}$ with a probability exceeding $1-10^{-5}$ (corresponding to $99.999\,\%$), while conventional OFDM is associated with significantly larger PAPR values. Overall, PAPR-reduced OFDM or DFT-spread OFDM waveforms thus seem to be promising candidates for JCSL applications in the context of future UAM scenarios. \begin{figure} \centering \setlength{\figH}{0.5\linewidth} \setlength{\figW}{0.8\linewidth} \input{tikz/papr.tex} \caption{CCDFs of the resulting PAPR values simulated for both a SISO and a MIMO scenario using OFDM and DFT-spread OFDM.} \label{fig:papr compared dft ofdm} \end{figure} \section{Miniaturization of Antennas and Application of Multi-Port Antennas for JCSL in Aviation} \label{sec:AntennasLUH} The demand for efficient use of the spectrum is increasing, and the miniaturization and automation of aerial vehicles calls for light-weight yet capable technological solutions. As shown in Fig.~\ref{fig: PiperAntennas}, a large number of antennas is currently employed on airplanes to fulfill the different needs of the communication and radar systems aboard. By introducing antennas that allow broadband transmission and the use of different antenna patterns at the same time, the weight and number of antennas could be reduced. This especially holds for the co-design of antennas and JCSL systems and can allow integration on even smaller, light-weight aerial vehicles. The aerial use-case changes some of the requirements in the antenna design process. First, the antenna system is subject to space and weight constraints: additional weight to be lifted increases fuel and power consumption, and, for the same reason, space aboard aircraft is usually limited.
The chassis of an aerial vehicle is typically designed to fulfill the requirements of its main task, which is lifting a certain payload. Therefore, introducing antennas and radomes can lead to positioning conflicts that must be resolved. For example, if communication systems are employed next to sensing systems, as is typically done, the antennas of one system shadow the antennas and sensors of the other or cause inter-system interference. This encourages the design of JCSL especially in aviation, including the design of the antennas. Here, M$^3$PAs are of special interest, as they can be designed for wideband and ultra-wideband applications, as shown in \cite{MaMa16,Johannsen2020}, and particularly for direction-of-arrival estimation in \cite{GrMa22}. The latter antenna has been designed for transponder-based air-to-air communication based on the ACAS~II standard. Using the theory of characteristic modes, it has been shown that the symmetry group of a given antenna structure determines the maximum number of uncorrelated ports on the structure. Therefore, a sufficiently symmetric structure is chosen. \begin{figure} \centering \includegraphics[width=\columnwidth]{pic/cuboid_onGND_withModalCurrents.pdf} \caption{Example shape of a cuboid-shaped antenna prototype as discussed in \cite{GrMa22} with modal current densities.} \label{fig:CubicAntenna} \end{figure} The antenna depicted in Fig.~\ref{fig:CubicAntenna} supports three independent ports, including the omni-directional monopole mode~1. Note that the limitation to three ports is a design decision rather than being imposed by the shape of the antenna; it is selected to allow a simpler connection to the signal processing. Given the discussed JCSL scenario, the omni-directional monopole mode could be used for communication, e.g., for broadcasting information. The two additional ports provide radiation patterns based on orthogonal sets of modes pointing backward and forward as well as left and right, respectively, as shown by the curves $G_4$ and $G_5$ in Fig.~\ref{fig:Beamforming}. By applying suitable weighting coefficients to all three ports, an improved and steerable directivity can be achieved, as shown by the curves $G_{\textrm{max}}$ and $G_{30^{\circ}}$ in Fig.~\ref{fig:Beamforming}. In an exemplary, simple ACAS~II-based JCSL scenario, this could be used to mask a certain angular region when interrogating the transponders of other aerial vehicles. During reception, both the DoA and the distance can be estimated based on the received signal and its round-trip time. The estimation can be used to improve the situational awareness of the system and to possibly avoid collisions. \begin{figure}[h] \centering \input{tikz/PlotGainOptim30Deg.tex} \caption{Beamforming performance of a set of three orthogonal modes (1, 4 and 5) of the structure depicted in Fig.~\ref{fig:CubicAntenna}. The gain can be optimized by combining the available radiation characteristics over the full angular range (dashed line). However, when optimizing for certain angles, the maximum of the mainlobe may point in a different direction (black curve, optimized for 30$^\circ$).} \label{fig:Beamforming} \end{figure} \begin{figure}[h] \centering \input{tikz/ArrayFactor3Elements.tex} \caption{ Beamforming performance of a uniformly fed three-element array, using a spacing of half a wavelength.
The curves show the antenna-and-array factor ($G_{\textrm{AAF}}$, array factor plus element gain), the gain of an exemplary monopole antenna ($G_{\textrm{Ant}}$, assumed to be 5.2~dBi), and the array factor ($G_{\textrm{AF}}$). These analytical results show that an array of monopole antennas can achieve a higher gain than the proposed M$^3$PA, but at the cost of an increased sidelobe level.} \label{fig:ThreeElementArray} \end{figure} In Fig.~\ref{fig:ThreeElementArray}, the beamforming performance of a three-element monopole array is shown in terms of the antenna-and-array factor (AAF). The AAF combines the antenna gain of the employed array elements with the processing gain achieved by the structure of the antenna array. The assumed ideal quarter-wavelength monopole antenna gain is 5.2~dBi and the antenna spacing is half a wavelength. The space occupied by the antenna array is similar to the space occupied by the M$^3$PA shown in Fig.~\ref{fig:CubicAntenna}. While a slightly larger AAF can be achieved by the array, its sidelobe level is $-5$~dB, compared to $-14$~dB for the M$^3$PA. These results could find their initial application in an improved version or new generation of interrogators. In this scenario, an interrogator equipped with an M$^3$PA is capable of performing a directive interrogation of other vehicles due to the improved sidelobe level. Since M$^3$PAs can also be used for localization, transponder-based collision avoidance could be improved by means of this additional information. \section{Conclusion} \label{sec:Conclusion} In this paper, JCSL has been discussed as an emerging technology addressing the challenges and goals of the upcoming trends in urban air mobility. It was shown that multi-mode multi-port antennas allow improved performance compared to traditional antenna-array configurations and increase the available number of ports. Hence, this advanced antenna type is a promising candidate for the design of JCSL systems. Furthermore, joint waveform design for JCSL is regarded as an enabling technology for developing highly integrated, multi-functional, and light-weight on-board RF systems meeting the requirements of confined installation spaces within future UAVs. Therefore, the key technologies addressed in this article are expected to offer a significant contribution when elevating autonomous driving to the third dimension. \section*{Acknowledgment} This project is part of the Master360 program under research grant 20D1905, funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK). Thanks to Nils Segebrecht for providing a picture of his Piper. The cooperative work with f.u.n.k.e. GmbH, Ulm, Germany, and Dr. Askold Meusling (project leader ``Master360''), Airbus Defence and Space GmbH, Taufkirchen, Germany, is highly appreciated. \bibliographystyle{IEEEtran}
{ "timestamp": "2022-09-23T02:15:08", "yymm": "2209", "arxiv_id": "2209.10991", "language": "en", "url": "https://arxiv.org/abs/2209.10991" }
\section{Introduction} \label{sec:intro} Since the first detection of a binary neutron star (NS) merger, GW170817 \citep{LIGOScientific:2017vwq}, which was accompanied by the observation of electromagnetic signals originating from the same source, GRB170817A and AT2017gfo \citep{LIGOScientific:2017ync}, we have been witnessing exciting breakthroughs in our understanding of compact stars and their merger dynamics. In fact, gravitational wave (GW) astronomy and multi-messenger astrophysics have become new tools to extract information about the internal structure of NSs from GW and electromagnetic observations \citep{Bauswein:2017vtn, Annala:2017llu,Hinderer:2018pei}. Thus, from the combined analysis of the GW170817 signal measured by the Advanced LIGO and Advanced Virgo detectors, the constraint $\Lambda_{1.4} \leq 800$ on the tidal deformability parameter of NS matter was extracted \citep{Abbott_2018}. The second binary NS merger event, GW190425 \citep{LIGOScientific:2020aai}, provided constraints consistent with GW170817 but, due to its lower signal-to-noise ratio, did not deepen our knowledge of the NS equation of state (EoS). In addition to GW observations, X-ray observations by NICER \citep{Miller_2019,Riley:2019yda,Raaijmakers:2019dks,Miller:2021qha,Riley:2021pdl}, radio measurements of the heaviest pulsars, e.g. PSR J0348+0432 of mass $2.01\pm 0.04$ M$_{\odot}$ \citep{PSRj03480432Article} and PSR J0740+6620 of $2.08^{+0.07}_{-0.07}$ M$_{\odot}$ \citep{Fonseca:2021wxt}, and optical observations of the ``black widow'' pulsars, e.g. PSR J1810+1744 of $2.13\pm 0.04$ M$_{\odot}$ \citep{Romani:2021xmb} and PSR J0952-0607 of $2.35\pm 0.17$ M$_{\odot}$ \citep{Romani:2022jhd}, constrain the properties of NSs. While all the mentioned analyses and models assume that NSs are embedded in pure vacuum and do not contain dark matter (DM), NSs could in fact accumulate a sizable amount of DM in their interiors and surroundings. Due to their high compactness, NSs can effectively trap DM particles, which rapidly thermalize and accumulate inside the stars, altering their properties. The presence of DM affects the internal structure and compactness of compact stars. As shown in, e.g., \citet{Ciarcelluti:2010ji,2018PhRvD..97l3007E,2019JCAP...07..012N,PhysRevD.102.063028,2020MNRAS.495.4893D,Sagun:20224z}, DM may form either an extended halo or a dense core inside a NS. Depending on the mass of the DM particles, their self-interaction strength, and their relative abundance inside the star, one of these scenarios is realized. Since DM halos are invisible to typical astrophysical observations, we would see only the baryonic matter (BM) radius, even though the outermost radius can extend beyond the BM component \citep{Rafiei_Karkevandi_2022}. In contrast, the formation of a DM core leads to a reduction of the visible NS radius. Moreover, DM affects the tidal deformability parameters and the merger dynamics \citep{Ellis:2017jgp,Bezares:2019jcb,Bauswein:2020kor,Leung:2022wcf}. Nowadays, while there are studies investigating possible alternative scenarios beyond `standard' compact binary mergers described by general relativity in pure vacuum \citep{LIGOScientific:2018dkp}, the models used to analyse GW signals do not directly account for DM. Thus, to understand the effect of DM on the coalescence of NSs, numerical-relativity simulations for different DM fractions, particle masses, and interaction strengths are required.
As a step in this direction, first two-fluid 3D simulations of coalescing binary NSs admixed with DM have been performed, together with studies of the GW emission of the merger remnant, e.g., \citet{Bauswein:2020kor} and \citet{Emma:2022xjs}. By considering different binary masses and EoSs, \citet{Bauswein:2020kor} showed that the GW frequency of the orbiting DM components scales with the compactness of the NSs. Moreover, relations between the DM GW frequency and the dominant post-merger GW frequency of the stellar fluid or the tidal deformability were found, which opens the possibility of probing EoS effects during the binary inspiral. \citet{Emma:2022xjs} studied the effect of mirror DM concentrated inside the core on the deceleration of the inspiral phase, as well as on modifications to the ejecta and debris-disk formation. Depending on whether DM exhibits a particle-antiparticle asymmetry, we refer to it as asymmetric or symmetric DM. Symmetric DM particles can self-annihilate, leaving a possibility of detection via X-ray, $\gamma$-ray, or neutrino telescopes \citep{KouvarisThermalEvolution}. Moreover, as studied in \citet{2012PhLB..711....6P}, self-annihilating DM in the inner regions of NSs may have a significant impact on their kinematic properties, namely velocity kicks and rotation patterns. Another possible effect of DM particle annihilation inside the NS core is late-time heating, which could be detected through observations of the surface temperature of the old part of the NS population \citep{2010PhRvD..81l3521D,Hamaguchi:2019oev}. Unfortunately, the present sample of observed old NSs is still quite limited. In contrast to annihilating DM, asymmetric DM accumulates inside a star. Models considering such a scenario must allow old NSs to exist. This is especially important for bosonic DM particles, which at zero temperature can form a Bose-Einstein condensate (BEC), potentially leading to the gravitational collapse of the bosonic DM to a black hole \citep{Kouvaris}. Light DM particles, such as axions, could contribute an additional cooling channel in compact stars. Thus, in the NS core axions can be produced either in nucleon bremsstrahlung or in Cooper pair breaking and formation processes \citep{PhysRevD.93.065044,Sedrakian:2018kdm,Buschmann:2021juv}, causing an alteration of the surface temperature and thermal evolution of a star. In addition, most of the existing models are constrained by the results on neutrino emission from the supernova SN1987A \citep{Chang:2018rso} and by existing NS cooling data. The results of NS merger simulations \citep{Dietrich:2019shr} show that axions produced in nucleon-nucleon bremsstrahlung do not lead to a measurable change of the emitted GW signal, the ejecta mass, or the temperature profile of the merger remnant. The amount of DM accrued by ordinary accretion throughout stellar evolution depends on the position of the NS in the Galaxy \citep{2010PhRvD..82f3531K}. As the DM density in the Galactic center is many orders of magnitude greater than in its arms, we may expect a higher DM fraction in compact stars towards the Milky Way center \citep{2020Univ....6..222D}. Moreover, local inhomogeneities of the DM distribution may contribute to an increased DM fraction, possibly even leading to dark compact objects \citep{Dengler:2021qcq} or dark stars \citep{2015PhRvD..92f3526K,Maselli:2017vfi}.
Since the DM properties are still unknown, different models have been employed, considering its fermionic \citep{Goldman:2013qla,Gresham:2017zqi,PhysRevD.102.063028} and bosonic \citep{Colpi:1986ye,Petraki:2013wwa,Rafiei_Karkevandi_2022} nature. As discussed by \citet{Bramante:2013hn}, to be consistent with the observations of old NSs, bosonic DM has to be self-interacting, decaying, or self-annihilating. For asymmetric bosonic DM, a repulsive self-interaction is required due to the absence of degeneracy pressure. Once the accumulated bosonic asymmetric DM exceeds its Chandrasekhar mass, nothing can prevent its gravitational collapse and the formation of a black hole inside the NS, which could potentially disrupt the star \citep{Kouvaris, Zurek:2013wia}. In analogy with visible matter and the Standard Model particles, all interactions have an exchange character: an interaction between particles occurs via the exchange of a mediator; e.g., the interaction between nucleons is mediated by pions. In the present article we extend this approach to the dark sector by formulating a model of self-interacting asymmetric bosonic DM, which includes a vector interaction mediated by a real $\omega$-field coupled to the scalar one. We model DM-admixed compact stars by considering a mixed system of two fluids with different relative fractions. The proposed EoS is applied and tested against astrophysical and GW observations. The paper is organized as follows. In Section \ref{sec:EoS}, we present the models for the BM and DM components, with a detailed derivation provided in Appendix \ref{appA:FullLagrangian}. Section \ref{sec:MIXEoS} is dedicated to the equilibrium configurations of DM-admixed compact stars. In Section \ref{sec:TID}, we discuss how the speed of sound and the tidal deformability are affected by the presence of DM. In Section \ref{sec:Results} the main results are presented, including the constraints on the mass and interaction scale of DM particles. In Section \ref{sec:Discussions}, we discuss the smoking-gun evidence of the presence of DM that could be tested in the near future, before concluding in Section \ref{sec:Concl}. In Appendix \ref{appA:FullLagrangian}, we show the full derivation of the DM EoS, with a focus on the effective speed of sound for a DM-admixed NS in Appendix \ref{appB:SpeedOfSound}. In Appendix \ref{appC:Scan}, we show the scan over the model parameters and the obtained constraints. Throughout the article, we use units in which $\hbar=c=G=1$. \section{Models of dark and baryonic matter} \label{sec:EoS} \subsection{Dark matter EoS} \label{subsec:DMEoS} We consider a model of massive spinless DM particles carrying a conserved charge. Such particles are described by a complex scalar field and have mass $m_\chi$ and chemical potential $\mu_\chi$. At sufficiently low temperatures bosonic DM exists in the form of a BEC. In the absence of interactions such a BEC has zero pressure and is mechanically unstable against gravitational compression. We stabilize the BEC of DM by introducing a repulsive interaction mediated by a real vector field coupled to the scalar one. The minimal Lagrangian representing this model is given in Appendix \ref{appA:FullLagrangian}. This Lagrangian implies a Noether current corresponding to the invariance of the action under global U(1) transformations.
If the vector field were not a Yukawa field but a gauge one, local U(1) symmetry would also be respected and another Noether current could be introduced \citep{Brading:2000hc}. In a fully quantum treatment, the expectation values of these two currents produce the same conserved charge, which is not the case within the mean-field approximation used here, corresponding to a classical treatment of the vector field (see Appendix \ref{appA:FullLagrangian} for details). We use the Noether current resulting from global U(1) transformations, which leave the action invariant even at the mean-field level. In this work we assume a vanishing temperature of the DM, which is then totally converted to the BEC. In this case thermal fluctuations are suppressed and the mean-field approximation can be applied in order to derive the corresponding EoS. The chemical potentials of the BM and DM components of a NS scale proportionally to each other (for more details see Section \ref{sec:MIXEoS}). This significantly simplifies the solution of the two coupled TOV-like equations for the BM and DM components, as shown by \citet{PhysRevD.102.063028}. Therefore, it is convenient to formulate the DM EoS in the Grand Canonical Ensemble (GCE), where $\mu_\chi$ is an independent variable. Appendix \ref{appA:FullLagrangian} includes details of the corresponding derivation for the interval of physical values $\mu_\chi\in[0,\sqrt{2}m_\chi]$, performed in locally flat space-time, which is justified by small metric gradients and the absence of anisotropy issues (see \citet{Rafiei_Karkevandi_2022} for details). The corresponding pressure and energy density are \begin{eqnarray} \label{EqI} p_\chi&=&\frac{m_I^2}{4} \left(m_\chi^2-\mu_\chi\sqrt{2m_\chi^2-\mu_\chi^2}\right),\\ \label{EqII} \varepsilon_\chi&=&\frac{m_I^2}{4} \left(\frac{\mu_\chi^3}{\sqrt{2m_\chi^2-\mu_\chi^2}}-m_\chi^2\right), \end{eqnarray} for $\mu_\chi\in[m_\chi,\sqrt{2}m_\chi]$ and $p_\chi=\varepsilon_\chi=0$ for $\mu_\chi\in[0,m_\chi]$. The parameter $m_I$ has units of mass and controls the interaction strength. It is proportional to the vector meson mass and inversely proportional to its coupling. Thus, a large $m_I$ corresponds to a weak interaction and vice versa. At first glance, the present EoS in the weak coupling regime paradoxically leads to an infinite pressure due to $m_I\rightarrow\infty$. This, however, is not the case, since in this regime the chemical potential of the DM BEC, $\mu_\chi$, coincides with its mass $m_\chi$, leading to a vanishing of the brackets in Eqs.~\eqref{EqI} and~\eqref{EqII}. In the case of $p_\chi$ the bracket vanishes faster than $m_I^{-2}$, yielding a vanishing pressure $\sim m_I^{-2}$, while for $\varepsilon_\chi$ the bracket behaves as $\sim m_I^{-2}$, providing the finite energy density of the DM BEC, $m_\chi n_\chi$. In the strong coupling regime $m_I\rightarrow0$ the chemical potential of DM converges to $\sqrt{2}m_\chi$. As a result, the bracket in Eq.~\eqref{EqI} approaches $m_\chi^2$ and the pressure vanishes as $m_I^2m_\chi^2/4$. The corresponding bracket in Eq.~\eqref{EqII} diverges as $\sim m_I^{-2}$, leading to the finite energy density $\sqrt{2}m_\chi n_\chi$. Remarkably, the weak and strong coupling limits of the present EoS are similar, since the DM pressure vanishes in both cases. At $m_I\rightarrow\infty$ this is due to the absence of repulsion. The limit $m_I\rightarrow0$ is equivalent to the case of a massless vector field, which has no non-trivial mean-field solution needed to stiffen the EoS.
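As a quick numerical cross-check (an illustrative sketch, evaluated in units where $m_\chi=1$), Eqs.~\eqref{EqI} and \eqref{EqII} can be evaluated directly; the saturation of the pressure towards $p_\infty$ and the bound $c_{s,\chi}^2\leq 1/9$ at $\mu_\chi=\sqrt{3/2}\,m_\chi$ discussed in the following follow immediately:
\begin{verbatim}
import numpy as np

m_chi, m_I = 1.0, 1.0                # units where m_chi = 1; m_I arbitrary
mu = np.linspace(1.0 + 1e-6, np.sqrt(2.0) - 1e-6, 200_000)
root = np.sqrt(2.0 * m_chi**2 - mu**2)

p   = m_I**2 / 4.0 * (m_chi**2 - mu * root)        # Eq. (1)
eps = m_I**2 / 4.0 * (mu**3 / root - m_chi**2)     # Eq. (2)
cs2 = np.gradient(p, mu) / np.gradient(eps, mu)    # c_s^2 = dp/d(eps)

i = np.argmax(cs2)
print(f"max c_s^2 = {cs2[i]:.4f} at mu = {mu[i]:.4f} m_chi")  # 0.1111 at 1.2247
print(f"p saturates towards p_inf = {m_I**2 * m_chi**2 / 4.0:.4f}: p -> {p[-1]:.4f}")
\end{verbatim}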
A detailed analysis of the weak and strong coupling limits of the present EoS is performed in Appendix \ref{appA:FullLagrangian}. \begin{figure} \includegraphics[width=1.15\columnwidth]{combined_mu_e} \caption{{\bf Left panel:} Scaled pressure $p_\chi/p_\infty$ (black solid curve), scaled energy density $\varepsilon_\chi/p_\infty$ (black dashed curve) and squared speed of sound $c_{s,\chi}^2$ (red dotted curve) of DM as functions of its chemical potential $\mu_\chi$ given in units of $m_\chi$. {\bf Right panel:} Scaled pressure $p_\chi/p_\infty$ (black solid curve) and squared speed of sound $c_{s,\chi}^2$ (red dotted curve) of DM as functions of the scaled energy density $\varepsilon_\chi$ given in units of $p_\infty$. } \label{fig1} \end{figure} A remarkable feature of the present EoS is that at infinite density its pressure is limited by the value $p_\infty=m_I^2m_\chi^2/4$. This regime is reached at $\mu_\chi=\sqrt{2} m_\chi$. Thus, the compressibility of DM vanishes at asymptotically high densities regardless of $m_\chi$ and $m_I$. The same conclusion holds for the speed of sound $c_{s,\chi}^2=dp_\chi/d\varepsilon_\chi$. In other words, high-density configurations of bosonic DM are gravitationally unstable at any strength of the repulsive interaction. The left panel of Fig.~\ref{fig1} shows the pressure, energy density and speed of sound of the considered DM EoS as functions of the corresponding chemical potential. It is worth mentioning that the square of the speed of sound is limited from above by the value $1/9$, which is reached at $\mu_\chi=\sqrt{3/2}~m_\chi$ and does not depend on $m_\chi$ and $m_I$. Thus, $c_{s,\chi}^2$ is bounded by quite small values, corresponding to a soft DM EoS. The right panel of Fig.~\ref{fig1} shows this EoS as a function of energy density. \vspace*{1.5cm} \subsection{Baryon matter EoS} \label{subsec:BMEoS} In order to thoroughly study the impact of DM on compact stars made mostly of BM, we consider two EoSs of different stiffness. One of them is the Induced Surface Tension (IST) EoS, formulated on the basis of the hard-core approach. Nucleons are characterized by an effective hard-core radius that provides a short-range repulsion between the particles of different species. This part of the model was fixed from a fit of heavy-ion collision data \citep{Sagun:2017eye}, while the IST contribution was implemented by accounting for the interparticle interaction at high density. The corresponding parameters were fitted to reproduce the nuclear matter ground-state properties, the correct behaviour of the nuclear liquid-gas phase transition \citep{Sagun:2016nlv}, and the proton flow constraint \citep{Ivanytskyi:2017pkt}. Furthermore, in \citet{Sagun2019IST} the model was generalized to describe NSs, demonstrating the wide application range of the unified IST approach. In the present work, we consider Set B, described in detail in \citet{NSOscillationsEoS}, while the crust is modeled in a simplified way by a polytropic EoS with adiabatic index $\gamma=4/3$. In addition, we consider the DD2 EoS \citep{Typel2009,Typel1999} with and without $\Lambda$ hyperons. DD2 is a relativistic mean-field nuclear model with density-dependent couplings, whose parameters were fitted to the ground-state properties of nuclei. Hyperons have been included in several works. In the present study, the density dependence of the hyperon couplings to the $\sigma$, $\omega$ and $\rho$ mesons is considered to be the same as that of the nucleons.
For the $\phi$ coupling, the same density dependence as for the $\omega$ meson is adopted. The couplings of the $\sigma$ meson to the $\Lambda$ and $\Xi$ have been taken from \cite{Fortin:2017cvt} and \cite{Fortin:2020qin}, respectively, and have been fitted to the binding energies of $\Lambda$ and $\Xi$ hypernuclei. The coupling to the $\Sigma$ hyperon was chosen so that the $\Sigma$ potential in symmetric nuclear matter is $+30$ MeV, see \cite{Gal:2016boi} for a discussion. For the vector mesons, the quark model predictions are used, \begin{align*} &g_{\omega\Lambda}=g_{\omega\Sigma}=\frac{2}{3}g_{\omega N},\quad g_{\omega\Xi}=\frac{1}{3}g_{\omega N},\\ &g_{\phi\Lambda}=g_{\phi\Sigma}=-\frac{\sqrt{2}}{3}g_{\omega N},\quad g_{\phi\Xi}=-\frac{2\sqrt{2}}{3}g_{\omega N}. \end{align*} Finally, the effective $\rho$-meson coupling is determined by the product of the hyperon isospin with the $\rho$ meson-nucleon coupling. In the following, the DD2 EoS with $\Lambda$ hyperons will be referred to as DD2$\Lambda$. The complete NS EoS contains, besides the core EoS, the BPS EoS \cite{bps} for the outer crust, while the inner crust was obtained from a Thomas-Fermi calculation taking DD2 as the underlying model and allowing for the appearance of several geometries, as discussed in \cite{grill14}. The inner crust EoS has been published in \cite{Fortin2016}. \section{Mixed system of two components} \label{sec:MIXEoS} We assume no interaction between DM and BM except through gravity. This assumption is fully justified by the latest constraints coming from DM direct detection experiments and the Bullet Cluster \citep{Clowe_2006}, showing that the DM-BM cross section is many orders of magnitude lower than the typical nuclear one, $\sigma_\chi\sim 10^{-45}\ \mathrm{cm}^2\ll \sigma_N\sim10^{-24}\ \mathrm{cm}^2$. Therefore, the stress-energy tensors of the two components are conserved separately, leading to a system of Tolman-Oppenheimer-Volkoff (TOV) equations with split components \citep{PhysRev.55.364,PhysRev.55.374} \begin{equation}\label{TOV} \frac{dp_i}{dr}=-\frac{(\varepsilon_i +p_i)(M_\mathrm{tot}+4\pi r^3p_\mathrm{tot})}{r^2\left(1-{2M_\mathrm{tot}}/{r}\right)}, \end{equation} which describes the relativistic hydrostatic equilibrium of a DM-admixed NS. In Eq.~\eqref{TOV}, the subscript refers to either BM or DM, i.e., $i=B,D$, while $M_i(r)$ is the gravitational mass of component $i$ enclosed inside a sphere of radius $r$, \begin{equation}\label{M_i} M_i(r) = 4\pi\int^r_0 \varepsilon_i (r^\prime)r^{\prime 2}dr^\prime. \end{equation} Using Eq.~\eqref{M_i}, we define the total gravitational mass as the sum of the two components, $M_\mathrm{tot} = M_B(R_B)+M_D(R_D)$, where the radii $R_i$ are evaluated using the zero-pressure condition at the surface, \begin{equation} p_i(R_i)=0. \end{equation} Given the total mass of the system, it is convenient to define the fraction of accumulated DM as \begin{equation}\label{DMFRAC} f_\chi = \frac{M_D}{M_\mathrm{tot}}. \end{equation} It is worth noting that we label the microscopic/thermodynamic DM quantities with the index $\chi$, while the macroscopic ones carry the index $D$. The relation between the chemical potentials of the BM and DM follows directly from Eq.~\eqref{TOV}.
In fact, \cite{PhysRevD.102.063028} showed that \begin{equation} \frac{d \ln \mu_B}{dr}=\frac{d \ln \mu_\chi}{dr} = -\frac{M_\mathrm{tot}+4\pi r^3 p_\mathrm{tot}}{r^2(1-2M_\mathrm{tot}/r)}, \end{equation} which yields the conclusion that the two chemical potentials are proportional to each other. The value their ratio attains in the center of the star is the proportionality constant, which can be used to simplify the model: \begin{equation}\label{DMBM} \mu_\chi = \left(\frac{\mu_\chi}{\mu_B} \right)_{r=0} \mu_B. \end{equation} \begin{figure*} \centering \setkeys{Gin}{width=1.15\linewidth} \begin{tabularx}{\linewidth}{XXX} \includegraphics{MRCurves100MeV_250_Constraints_title.pdf} & \includegraphics{Profiles_DD2_zoomed_region.pdf} & \includegraphics{Profiles_IST_2.pdf} \end{tabularx} \begin{tabularx}{\linewidth}{XXX} \includegraphics{MRCurves1GeV_1GeV} & \includegraphics{Profiles_DD2.pdf} & \includegraphics{Profiles_IST.pdf} \end{tabularx} \caption{ {\bf Upper row:} Total gravitational mass of the DM-admixed NS as a function of its visible radius $R$ (left panel). Black solid, dash-dotted and dotted curves correspond to pure BM stars described by the IST EoS, DD2 EoS and DD2 EoS with hyperons. Red, blue, and green colours depict relative DM fractions equal to 1\%, 3\%, and 5\%, respectively. Green, gray, and teal bands represent 1$\sigma$ constraints on the masses of PSR J0348+0432 \citep{PSRj03480432Article}, PSR J1810+1744 \citep{Romani:2021xmb}, and PSR J0952-0607 \citep{Romani:2022jhd}. Pink and beige contours show the NICER measurements of PSR J0030+0451 \citep{Riley:2019yda,Miller_2019}, while orange and blue contours depict the PSR J0740+6620 measurements \citep{Miller:2021qha,Riley:2021pdl}. LIGO-Virgo observations of the GW170817 \citep{Abbott_2018} and GW190425 \citep{LIGOScientific:2020aai} binary NS mergers are shown in blue and magenta. Energy density profiles for the BM (dotted curves) and DM (dashed curves) components are shown for the DD2 EoS (middle panel) and IST EoS (right panel). The solid black curve represents the profile of a pure BM $1.4$ M$_\odot$ NS, while the other profiles were sampled to have the same total gravitational mass. Both panels were obtained for $m_{\chi}$=100 MeV, $m_{I}$=250 MeV. {\bf Lower row:} The same as in the upper row, but calculated for $m_{\chi}$=1 GeV, $m_{I}$=1 GeV. } \label{fig:MRprofiles} \end{figure*} By solving the TOV Eqs.~\eqref{TOV} with the boundary conditions and accounting for the relation between the two components from Eq.~\eqref{DMBM}, we calculate the M-R relations of DM-admixed NSs for different values of the DM fraction $f_\chi$, particle mass $m_{\chi}$, and interaction scale $m_{I}$. To better understand the impact of each parameter, we consider light and heavy DM particles with $m_{\chi}$=100 MeV and $m_{\chi}$=1 GeV (see the left column of Fig.~\ref{fig:MRprofiles}). Moreover, to address our ignorance of the EoS of the baryonic component, we study the effect of DM on the soft IST EoS, depicted as a solid black curve in the left panels of Fig.~\ref{fig:MRprofiles}, as well as on the stiffer DD2 EoS (dash-dotted black curve) and DD2$\Lambda$ EoS (dotted black curve). The chosen EoSs represent different sides of the mass-radius region allowed by recent astrophysical, GW and nuclear physics constraints and therefore provide good coverage of the BM parameter range. As can be seen, the DD2$\Lambda$ EoS (dotted black curve) and the DD2 EoS coincide up to $\sim 1.4~M_{\odot}$, the point where the onset of hyperons occurs. (A minimal numerical sketch of the two-fluid integration is given below.)
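The sketch below illustrates the structure of the two-fluid integration of Eqs.~\eqref{TOV}--\eqref{DMFRAC}. It uses a toy polytrope in place of the tabulated IST/DD2 EoSs and arbitrary units, so all numbers are purely illustrative; the point is only the coupling of the two fluids through $M_\mathrm{tot}$ and $p_\mathrm{tot}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy polytrope standing in for the baryonic IST/DD2 tables (arbitrary units).
K, GAMMA = 100.0, 2.0
def eps_B(p):
    return (p / K) ** (1.0 / GAMMA) + p / (GAMMA - 1.0) if p > 0.0 else 0.0

# DM EoS of Eqs. (1)-(2); p(mu) is inverted numerically to obtain eps(p).
m_chi, m_I = 1.0, 1.0
A = m_I**2 / 4.0
p_of_mu = lambda mu: A * (m_chi**2 - mu * np.sqrt(2.0 * m_chi**2 - mu**2))
e_of_mu = lambda mu: A * (mu**3 / np.sqrt(2.0 * m_chi**2 - mu**2) - m_chi**2)

def eps_D(p):
    if p <= 0.0:
        return 0.0
    mu = brentq(lambda m: p_of_mu(m) - p,
                m_chi, np.sqrt(2.0) * m_chi * (1.0 - 1e-12))
    return e_of_mu(mu)

def rhs(r, y):                        # y = (p_B, p_D, M_B, M_D)
    pB, pD = max(y[0], 0.0), max(y[1], 0.0)
    eB, eD = eps_B(pB), eps_D(pD)
    M, ptot = y[2] + y[3], pB + pD
    grav = (M + 4.0 * np.pi * r**3 * ptot) / (r**2 * (1.0 - 2.0 * M / r))
    return [-(eB + pB) * grav if pB > 0.0 else 0.0,   # Eq. (TOV), i = B
            -(eD + pD) * grav if pD > 0.0 else 0.0,   # Eq. (TOV), i = D
            4.0 * np.pi * r**2 * eB,                  # Eq. (M_i), i = B
            4.0 * np.pi * r**2 * eD]                  # Eq. (M_i), i = D

def surface(r, y):                    # stop once both pressures are depleted
    return max(y[0], 0.0) + max(y[1], 0.0) - 1e-12
surface.terminal = True

sol = solve_ivp(rhs, (1e-6, 100.0), [1e-3, 5e-4, 0.0, 0.0],
                events=surface, max_step=0.05, rtol=1e-8)
MB, MD = sol.y[2, -1], sol.y[3, -1]
print(f"M_tot = {MB + MD:.4f}, f_chi = {MD / (MB + MD):.3f}  (arbitrary units)")
\end{verbatim}
The central pressures set the DM fraction; in practice they are tuned so that $f_\chi$ from Eq.~\eqref{DMFRAC} attains the desired value.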
Beyond the hyperon onset, the EoS softens, leading to a smaller total maximum mass and stellar radius. The left panels of Fig.~\ref{fig:MRprofiles} show the effect of DM with different relative fractions on the stellar mass and radius. We see a reduction of $M_{\rm max}$ and of the stellar radius for larger DM fractions, caused by the formation of a DM core. In fact, to an outside observer the formation of more compact objects would look like a softening of the BM EoS. This degeneracy between the effect of DM and a possible change of the properties of strongly interacting matter at high density will be discussed in Section \ref{sec:Discussions}. Since in the considered model the energy density diverges at finite pressure for $\mu_\chi \rightarrow \sqrt{2}m_\chi$, the DM can fall within its Schwarzschild radius and form a black hole. This takes place for high-mass stars, for which the DM chemical potential in the center reaches this limit (see the upper left panel of Fig.~\ref{fig:MRprofiles}). The panels in the middle and right columns of Fig.~\ref{fig:MRprofiles} show the split energy density profiles of DM (dashed curves) and BM (dotted curves). The solid black curve depicts the energy density profile of the $1.4$ M$_\odot$ star. The profiles of DM-admixed NSs are shown for stars with the same total gravitational mass as the pure BM NS. As the onset of hyperons occurs above $1.4$ M$_\odot$, the two formulations of the DD2 EoS give the same prediction for the matter distribution inside these stars. Therefore, in Fig.~\ref{fig:MRprofiles} we show the profiles only for the DD2 EoS. For heavy bosons a compact DM core is formed, which is seen from the high values of $\varepsilon_D$, an order of magnitude above $\varepsilon_B$ (see the middle and right panels of the lower row of Fig.~\ref{fig:MRprofiles}). Furthermore, $\varepsilon_D$ drops to zero at a radius of $\sim$2 km, corresponding to the size of the DM core. For DM fractions of 3\% and 5\% at $m_{\chi}$=100 MeV and $m_{I}$=250 MeV, a halo forms with radius $R_{D}$=25.6 km and 13.0 km, respectively. \section{Tidal deformability of DM-admixed NSs} \label{sec:TID} The tidal deformability parameter $\lambda$ quantifies the response of an object to a static external quadrupolar tidal field $\mathcal{E}_{ij}$ by relating it to the induced quadrupole moment $\mathcal{Q}_{ij} = -\lambda\mathcal{E}_{ij}$. For a given stellar configuration of total mass $M_{\mathrm{tot}}$ and radius $R$, the tidal deformability can be expressed through the Love number $k_2$ as $\lambda=2k_2 R^{5}/3$ and is commonly mapped to the dimensionless $\Lambda = \lambda/M_{\mathrm{tot}}^5$ \citep{Hinderer_2008}. In the two-component case $R$ should be understood as the outermost radius, i.e., $R=R_{B}$ in the DM core scenario and $R=R_{D}$ in the DM halo one. The Love number is defined through the solution of an ordinary differential equation (ODE) appearing as a leading-order expansion of the Einstein equations with a metric perturbed by the external gravitational field \citep{1957PhRv..108.1063R}. The microscopic properties of matter are encoded in this ODE through the change of the total pressure $p_{\mathrm{tot}}\equiv p_B+p_\chi$ caused by a perturbation of the total energy density $\varepsilon_\mathrm{tot}\equiv \varepsilon_B+\varepsilon_\chi$. This change is quantified by the derivative $dp_{\mathrm{tot}}/d\varepsilon_{\mathrm{tot}}$. In the barotropic one-fluid case this derivative represents the corresponding speed of sound.
In the two-fluid case the derivative $dp_\mathrm{tot}/d\varepsilon_\mathrm{tot}$ is mathematically identical to the expression obtained by \citet{Das:2020ecp}. Therefore, in what follows, we refer to it as the effective speed of sound of the two-fluid system. It can be expressed through the speeds of sound of the baryonic, $c_{s,B}^2$, and dark, $c_{s,\chi}^2$, components as \begin{eqnarray} \label{IX} c_{s,\mathrm{eff}}^2=\eta c_{s,B}^2+(1-\eta)c_{s,\chi}^2 \end{eqnarray} with $\eta\in[0,1]$. The lower and upper edges of this interval correspond to the cases of pure DM and BM, respectively. Appendix \ref{appB:SpeedOfSound} contains the derivation of Eq.~\eqref{IX} and of the parameter $\eta$. This expression demonstrates that the effective speed of sound lies between those of the pure components. In Fig.~\ref{fig:sound} we show the effective speed of sound for different values of $\xi=\frac{\mu_\chi}{\mu_B}$, as well as the speed of sound of the pure BM and DM components. The relation between the parameters $\xi$ and $\eta$ is given in Eq.~\eqref{B3} of the Appendix. The upper panel of Fig.~\ref{fig:sound} indicates how the effective speed of sound behaves with DM accumulated in the core of a compact star. Note that it lies between the speed of sound values of the pure components. In the lower panel of Fig.~\ref{fig:sound} we see that the effective speed of sound follows that of the BM, and only in the outer crust does the DM component start to dominate, which corresponds to a halo configuration. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{CS2_eps_Core.pdf} \includegraphics[width=0.95\columnwidth]{CS2_eps_Halo.pdf} \caption{The effective speed of sound for a mixture of BM and DM as a function of total energy density. {\bf Upper panel:} The curves were obtained for $m_{\chi}$=0.75 GeV and $m_{I}$=0.25 GeV, representing a DM core configuration. {\bf Lower panel:} The same as in the upper panel, but for $m_{\chi}$=0.20 GeV and $m_{I}$=0.1 GeV, illustrating a DM halo configuration. The horizontal line at low densities corresponds to the polytropic crust EoS.} \label{fig:sound} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{LambdaM_1000_1000.pdf} \caption{Tidal deformability as a function of total gravitational mass calculated for pure BM stars (black curves) and DM-admixed NSs with relative DM fractions of 1\%, 3\%, and 5\%, in red, blue, and green, respectively. Solid, dash-dotted and dotted curves represent the IST EoS, DD2 EoS and DD2$\Lambda$ EoS. The colours and symbols coincide with the ones used in Fig.~\ref{fig:MRprofiles} for easier comparison. The figure is obtained for $m_{\chi}$=1 GeV, $m_{I}$=1 GeV. Green, gray, and teal bands represent 1$\sigma$ constraints on the masses of PSR J0348+0432 \citep{PSRj03480432Article}, PSR J1810+1744 \citep{Romani:2021xmb}, and PSR J0952-0607 \citep{Romani:2022jhd}. The magenta area visualizes the constraints obtained from GW170817 \citep{Abbott_2018}.} \label{fig:tides} \end{figure} As can be seen in Fig.~\ref{fig:tides}, DM condensed in a core leads to a decrease of the total gravitational mass, radius and, consequently, the tidal deformability parameter compared to a purely baryonic star, which a distant observer would perceive as an effective softening of the EoS. On the other hand, the presence of a DM halo leads to a significant increase of the outermost radius, which extends beyond the BM component, an increase of the tidal deformability parameter, and a consequent effective stiffening of the EoS.
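For orientation, the mapping from the Love number and compactness to the dimensionless tidal deformability defined above is a one-liner; the numbers below are merely illustrative, and computing $k_2$ itself requires solving the perturbation ODE:
\begin{verbatim}
def dimensionless_lambda(k2: float, compactness: float) -> float:
    """Lambda = (2/3) k2 C^-5, with C = M_tot / R in units G = c = 1."""
    return 2.0 / 3.0 * k2 / compactness**5

# E.g., k2 ~ 0.1 and C ~ 0.17 (roughly a 1.4 M_sun NS with R ~ 12 km)
# give Lambda ~ 470.
print(f"{dimensionless_lambda(0.1, 0.17):.0f}")
\end{verbatim}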
From the considered IST, DD2, and DD2$\Lambda$ EoSs we conclude that a soft EoS, lying at the lower limit of the GW170817 90\% CL region (magenta area in Fig.~\ref{fig:tides}), provides a stringent constraint on the DM core scenario, while a stiff EoS, lying at its upper border, allows much higher DM fractions but disfavours an extended halo configuration. This degeneracy between the effect of DM and the properties of strongly interacting matter at high densities imposes limitations on DM detection, apart from several smoking-gun signatures that are discussed in Section \ref{sec:Discussions}. Nevertheless, we have to be aware that observational data on compact stars could be affected by accumulated DM and, consequently, so could the constraints we put on strongly interacting matter at high densities. \section{Results} \label{sec:Results} To study the interplay between the boson mass and the interaction scale, as well as to put constraints on the DM fraction, we perform a scan over these parameters for the IST EoS (upper row), DD2 EoS (middle row), and DD2$\Lambda$ EoS (bottom row) for fixed DM fractions of 1\%, 3\%, and 5\% (see Fig.~\ref{fig:contours} in Appendix \ref{appC:Scan}). The color maps represent the total maximum gravitational mass of DM-admixed NSs. The white curve in each panel corresponds to $M_{\rm max}=1.4~M_{\odot}$, whereas the red curve represents $M_{\rm max}=2.0~M_{\odot}$. In case $2.0~M_{\odot}$ configurations are not reachable, we indicate $1.9~M_{\odot}$ stars with a green curve. As one can see from the upper row of Fig.~\ref{fig:contours}, an increase of the DM fraction narrows the range of values of the interaction scale $m_{I}$ consistent with the masses of the heaviest known pulsars. On the other hand, the existence of high-mass stars with a significant amount of heavy DM requires low values of the interaction scale. The same dependence is seen between the $m_{\chi}$ and $m_{I}$ values. In fact, lower $m_I\equiv m_\omega/g$ values correspond to a higher coupling constant $g$ or, equivalently, a stronger repulsion between the DM particles. The IST EoS for any DM fraction is always in agreement with the tidal deformability constraint, independent of $m_{\chi}$ and $m_{I}$ (see the upper row of Fig.~\ref{fig:contours}). At the same time, only for 1\% and 3\% of DM can the total maximum mass of DM-admixed NSs reach $2.0~M_{\odot}$. Thus, to simultaneously fulfil the $2.0~M_{\odot}$ and GW170817 tidal deformability constraints, the boson mass and the interaction scale are restricted to the values shown in yellow. The shaded areas correspond to the non-allowed regions of parameters that cannot simultaneously satisfy the heaviest-pulsar and GW constraints. For 3\% and 5\% of DM the DD2 EoS reproduces both constraints in a wide range of parameters, disfavouring the MeV mass range of bosonic DM with low values of the interaction strength. The black curve in the middle and bottom rows of Fig.~\ref{fig:contours} depicts the GW170817 tidal deformability constraint $\tilde{\Lambda}_{1.36} = 720$ \citep{LIGOScientific:2018hze}, above which the model is consistent with the GW170817 merger. The dashed area corresponds to the non-allowed range of parameters, which includes the case of 1\% of DM for the DD2 EoS. For the DD2$\Lambda$ EoS there are no $m_{I}$ and $m_{\chi}$ values that simultaneously reproduce the heaviest-pulsar and GW constraints; only one of these criteria could be fulfilled for the considered DM fractions.
This is directly related to the fact that at the onset of $\Lambda$ hyperons the EoS becomes softer, in addition to the DM softening effect in a core configuration. From this analysis we conclude that, contrary to a stiff BM EoS (e.g., the DD2 EoS), a soft BM EoS (e.g., the IST EoS) provides a weaker limit on the DM particle mass and interaction strength. This is related to the fact that the purely baryonic DD2 and DD2$\Lambda$ EoSs lie at the upper border of the $\Lambda_{1.4}$ constraint from GW170817. Any decrease of $\Lambda_{1.4}$ due to a DM core will not violate this condition, whereas even a small DM halo configuration will. As can be seen in Fig.~\ref{fig:tides}, the IST EoS is located at the lower limit of the magenta area, favouring a halo formation. It is worth noting that this result is obtained under the assumption of a similar DM fraction in all galaxies. As a matter of fact, the application of the GW170817 tidal deformability result and multi-messenger data as a universal constraint on the amount of DM is questionable. Each galaxy could be characterised by a different DM profile, as well as by local DM inhomogeneities. Strictly speaking, GW170817 probes the amount of DM only in a part of NGC 4993, the host galaxy of this particular merger. Therefore, a larger sample of NS-NS and NS-BH mergers is required to constrain the DM properties. Due to the current uncertainties of the BM EoS at high density we cannot discriminate between the effect of DM and the properties of BM. As will be discussed in Section \ref{sec:Discussions}, we expect a higher DM fraction inside compact stars towards the Galactic center. If so, the compact star population would follow the scenarios presented from the left to the right panels of Fig.~\ref{fig:contours}, i.e., from low to high DM fraction. \section{Discussions} \label{sec:Discussions} As described above, there are various effects of DM on compact stars. A natural question arises: how can we narrow down the proposed DM models and constrain the DM properties using NSs? Can compact stars provide smoking-gun evidence for the presence of DM? There are several different approaches: (i) by measuring the mass, radius, and moment of inertia of NSs with few-percent accuracy. At present NICER \citep{Miller_2019,Raaijmakers:2019dks,Miller:2021qha,Raaijmakers:2021uju}, and in the near future ATHENA \citep{Cassano:2018zwm}, eXTP \citep{eXTP:2018kws}, and STROBE-X \citep{STROBE-XScienceWorkingGroup:2019cyd}, are expected to measure $M$ and $R$ of NSs with high accuracy. Using synthetic data for the STROBE-X telescope, and assuming two NSs of the same mass and BM EoS, \citet{Rutherford:2022xeb} concluded that a measurement of radii with 2\% accuracy would be enough to draw a conclusion about the presence of DM in the stellar interior. However, the existence of a deconfinement phase transition in the core would manifest itself in the same way, leading to a degeneracy between the effect of DM and the phase transition. The main drawback of this approach is that the effect of DM could mimic the softening/stiffening of BM at high density and vice versa. Current uncertainties of the baryonic EoS do not allow a discrimination between the two effects. In addition, radio telescopes, e.g., MeerKAT \citep{Bailes:2018azh}, SKA \citep{Watts:2014tja} and ngVLA \citep{Bower:2018mta}, plan to improve radio pulsar timing and to discover Galactic-center pulsars.
A mass reduction of NSs towards the Galactic center, or a variation of mass, radius, and moment of inertia in different parts of the Galaxy, could shed light on the amount of DM accumulated in compact stars. In fact, we could see a paucity of old millisecond pulsars in the Galactic center either due to light extinction by dust or due to the collapse of DM-admixed NSs into black holes after exceeding the Schwarzschild limit \citep{Bramante:2014zca}. (ii) by performing numerical-relativity simulations of binaries and kilonova ejecta for DM-admixed compact stars for different DM candidates, particle masses, interaction strengths, and fractions, with subsequent comparison to GW and electromagnetic signals. A smoking gun of the presence of DM could be a supplementary peak in the characteristic GW spectrum of NS mergers \citep{Ellis:2017jgp}, exotic waveforms \citep{Giudice:2016zpa}, or the presence of a strong oscillation mode in the waveforms during the post-merger stage \citep{Bezares:2019jcb}. The next generation of GW detectors, i.e., the Cosmic Explorer (CE) \citep{Mills:2017urp} and the Einstein Telescope (ET) \citep{Punturo:2010zz}, will open new perspectives for detecting post-merger regimes and probing the internal composition of compact stars. (iii) by detecting objects that contradict our present understanding. A potential candidate for a DM-admixed NS could be the secondary component of GW190814 \citep{LIGOScientific:2020zkf}. While likely a black hole \citep{Tews:2020ylw,Essick:2020ghc}, this compact object with a mass of $\sim 2.6~M_{\odot}$ raised debates about its nature \citep{Tsokaros:2020hli}, as a purely baryonic EoS would not be able to explain a compact star of $\sim 2.6~M_{\odot}$. Hence, if it is not a black hole, the compact object would have to be supplemented either with exotic degrees of freedom, such as hyperons and/or quarks \citep{Tan:2020ics,Dexheimer:2020rlp}, an early deconfinement phase transition \citep{Ivanytskyi:2022oxv}, very fast rotation \citep{Zhang:2020zsc}, or an extra stiffening of the EoS at high densities \citep{Fattoyev:2020cws}. An alternative explanation of this puzzle would be a DM-admixed NS \citep{DiGiovanni:2021ejn}, which could also explain the formation of a black hole of such low mass as a collapsed DM-admixed NS \citep{Bramante:2014zca}. (iv) by observing a modification of the pulsar pulse profile due to extra light bending \citep{Miao:2022rqj} and/or gravitational microlensing in the case of an existing dark halo. (v) by observing a modification of the cooling rate of compact stars \citep{2010PhRvD..81l3521D,Hamaguchi:2019oev,Buschmann:2021juv, AngelesPerez-Garcia:2022qzs}. We note that this effect is the most uncertain among the ones mentioned above: NSs need to have a well-measured surface luminosity and age. In addition, uncertainties related to the particle composition, the EoS, the magnetic field, superfluidity/superconductivity, the NS masses, the chemical composition of the atmosphere, etc., could wash out the effect of DM. Old NSs are less affected by the mentioned uncertainties, as the photon cooling stage starts to dominate over the neutrino cooling stage, which is very sensitive to the particle composition and superfluidity/superconductivity \citep{Page:2004fy}. The magnetic field is also expected to be unimportant for old isolated NSs. Therefore, a possible heating mechanism of NSs due to DM annihilation could be probed by increasing the statistics of observational data on old NSs.
\section{Conclusions} \label{sec:Concl} We proposed a model of bosonic DM represented by a complex scalar field coupled to the vector one through the covariant derivative. The model describes DM existing in the form of BEC with repulsive interaction. Pressure of the present EoS saturates at asymptotically high densities leading to the vanishing speed of sound and compressibility at this regime. From the thermodynamic requirements, chemical potential of DM existing as such BEC is limited to the interval $\mu_\chi\in[m_\chi,\sqrt{2}m_\chi]$, with $m_\chi$ being the DM particle mass. In the weak and strong coupling limits this interval shrinks to its lower and upper bounds, respectively, while pressure vanishes even at any density. This spectacular feature of the present model makes its weak and strong coupling limits qualitatively similar and requires a further clarification. DM-admixed compact stars were modelled by considering the mixed system of two fluids with different relative fractions. The performed derivation of the effective speed of sound for two-fluid system allowed us to calculate the tidal deformability parameter for compact stars admixed with different amount of DM. We argue that one-fluid approach cannot be applied to a mixed system of several components with different proper speed of sound values. To account for a discrepancy related to the baryonic component the soft IST EoS and stiffer DD2 EoS with and without hyperons were considered. For different DM particle's mass, its relative fraction and interaction scale we found the conditions of DM core formation. We argue that in the framework of the considered model only a small DM halo is possible, with the outermost radius around twice the baryonic one. We performed a thorough analysis of the effect of DM particle mass in MeV-GeV mass-range and self-interacting scale on maximum total gravitational mass and tidal deformabilities of NSs for several fixed DM fractions. We found that for 1\%, 3\% of DM for the IST EoS and 3\%, 5\% of DM for the DD2 EoS the model can simultaneously reproduce heaviest pulsars and GW170817 tidal deformability constraint. The obtained allowed region of boson mass $m_{\chi}$ and interaction scale $m_{I}$ for a fixed DM fraction shows an anti-correlated dependence between these parameters, i.e. an high $m_{\chi}$ value favours a low $m_{I}$ value. For the DD2$\Lambda$ EoS no allowed region of parameters was found due to inability to simultaneously reproduce both constraints. In Section \ref{sec:Discussions}, we discussed the possible smoking gun signatures of DM in compact stars that could be probed in the nearest future, e.g., alteration of maximum total gravitational mass and radius of compact stars as a function of a distance from the Galactic center; modification of the surface temperature (additional heating or cooling mechanism) of NSs towards the Galactic center; lack of old millisecond pulsars in the Galactic center; presence of supplementary peak(s) in the GW signal from NS-NS and/or NS-BH mergers, exotic waveform or modification of the kilonova ejecta; gravitational-lensing effect or alteration of the pulsar pulse profile due to the extra light-bending in a dark halo. Moreover, such objects as a secondary component of the GW190814 event challenges the existing models of compact stars and black holes, giving a possibility of this object to be a DM-admixed NS. We argue that compact stars and their mergers provide a novel sensitive indirect method of detection and constraining the DM properties. 
Based on the performed analysis it is clear that the present data analysis of X-ray, radio and GW observations without accounting for an accumulated DM could miss a valuable piece of information as well as to give a wrong prediction about the strongly interacting matter properties at high density. \begin{acknowledgments} The work of E.G., C.P. and V.S. was supported by national funds from FCT – Fundação para a Ciência e a Tecnologia, I.P., within the Projects No. UIDB/04564/2020, UIDP/04564/2020, EXPL/FIS-AST/0735/2021. E.G. also acknowledges the support from the Project No. PRT/BD/152267/2021. C.P. is supported by the Project No. PTDC/FIS-AST/28920/2017. V.S. also acknowledges the PHAROS COST Action CA16214. The work of O.I. was supported by the Polish National Science Center under the grant No. 2019/33/BST/03059. \end{acknowledgments} \subsubsection*{#1}} \pagestyle{headings} \markright{Reference sheet: \texttt{natbib}} \usepackage{shortvrb} \MakeShortVerb{\|} \begin{document} \thispagestyle{plain} \newcommand{\textsc{Bib}\TeX}{\textsc{Bib}\TeX} \newcommand{\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}}{\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}} \begin{center}{\bfseries\Large Reference sheet for \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package, \LaTeX\ the source file \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\texttt{.dtx}. \end{quote} \head{Overview} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. \head{Loading} Load with |\usepackage[|\emph{options}|]{|\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}|}|. See list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. 
\begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. 
\head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ is also loaded; instead, add the option to \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. 
\section{Introduction} \label{sec:intro} Since the first detection of the binary neutron star (NS) merger, GW170817 \citep{LIGOScientific:2017vwq}, which was accompanied by the observation of electromagnetic signals originating from the same source, GRB170817A and AT2017gfo \citep{LIGOScientific:2017ync}, we have been witnessing exciting breakthroughs in our understanding of compact stars and their merger dynamics. In fact, gravitational wave (GW) astronomy and multi-messenger astrophysics have become new tools to extract information about the internal structure of NSs from GW and electromagnetic observations \citep{Bauswein:2017vtn, Annala:2017llu,Hinderer:2018pei}. Thus, from the combined analysis of the GW170817 signal measured by the advanced LIGO and advanced Virgo detectors, the constraint on the tidal deformability parameter of NS matter, $\Lambda_{1.4} \leq 800$, was extracted \citep{Abbott_2018}. The second binary NS merger event, GW190425 \citep{LIGOScientific:2020aai}, provided constraints consistent with GW170817, but due to its lower signal-to-noise ratio did not deepen our knowledge of the NS equation of state (EoS). In addition to GW observations, X-ray observations by NICER \citep{Miller_2019,Riley:2019yda,Raaijmakers:2019dks,Miller:2021qha,Riley:2021pdl}, radio measurements of the heaviest pulsars, e.g.
PSR J0348+0432 of mass $2.01\pm 0.04$ M$_{\odot}$ \citep{PSRj03480432Article} and PSR J0740+6620 of $2.08^{+0.07}_{-0.07}$ M$_{\odot}$ \citep{Fonseca:2021wxt}, and optical observations of the ``black widow" pulsars, e.g. PSR J1810+1744 of $2.13\pm 0.04$ M$_{\odot}$ \citep{Romani:2021xmb} and PSR J0952-0607 of $2.35\pm 0.17$ M$_{\odot}$ \citep{Romani:2022jhd}, constrain the properties of NSs. While all the mentioned analyses and models assume that NSs are embedded in pure vacuum and do not contain dark matter (DM), these stars could, in fact, accumulate a sizable amount of DM in their interior and surroundings. Due to their high compactness, NSs could effectively trap DM particles, which rapidly thermalize and accumulate inside the stars, altering their properties. The presence of DM affects the internal structure and compactness of compact stars. As shown, e.g., by \citet{Ciarcelluti:2010ji,2018PhRvD..97l3007E,2019JCAP...07..012N,PhysRevD.102.063028,2020MNRAS.495.4893D,Sagun:20224z}, DM may either form an extended halo around a NS or condense into a dense core inside it. Which of these scenarios takes place depends on the mass of the DM particles, their self-interaction strength, and their relative abundance inside the star. Since DM halos are invisible to typical astrophysical observations, we would see only the baryonic matter (BM) radius, even though the outermost radius can extend beyond the BM component \citep{Rafiei_Karkevandi_2022}. By contrast, the formation of a DM core leads to a reduction of the NS radius. Moreover, DM affects the tidal deformability parameters and the merger dynamics \citep{Ellis:2017jgp,Bezares:2019jcb,Bauswein:2020kor,Leung:2022wcf}. Nowadays, while there are studies investigating possible alternative scenarios beyond `standard' compact binary mergers described by general relativity in pure vacuum \citep{LIGOScientific:2018dkp}, the models used to analyse the GW signal do not directly account for DM. Thus, to understand the effect of DM on the coalescence of NSs, numerical-relativity simulations for different DM fractions, particle masses, and interaction strengths are required. As a step in this direction, the first two-fluid 3D simulations of coalescing binary NSs admixed with DM have been performed, with subsequent studies of the GW emission of the merger remnant, e.g. \citet{Bauswein:2020kor} and \citet{Emma:2022xjs}. By considering different binary masses and EoSs, \citet{Bauswein:2020kor} showed that the GW frequency of the orbiting DM components scales with the compactness of NSs. Moreover, relations between the DM GW frequency and the dominant post-merger GW frequency of the stellar fluid or the tidal deformability were found, which opens a possibility to probe EoS effects during the binary inspiral. \citet{Emma:2022xjs} studied the effect of mirror DM concentrated inside the core on the deceleration of the inspiral phase, as well as on the modification of the ejecta and debris disk formation. Depending on whether DM has a particle-antiparticle asymmetry, we will refer to it as asymmetric or symmetric DM. Symmetric DM particles can self-annihilate, leaving a possibility of detection via X-ray, $\gamma$-ray, or neutrino telescopes \citep{KouvarisThermalEvolution}. Moreover, as studied in \citet{2012PhLB..711....6P}, self-annihilating DM in the inner regions of NSs may have a significant impact on the kinematic properties, namely velocity kicks and rotation patterns.
Another possible effect of DM particle annihilation inside the NS core is late-time heating, which could be detected through observations of the surface temperature of the old part of the NS population \citep{2010PhRvD..81l3521D,Hamaguchi:2019oev}. Unfortunately, our present database of old NSs is still quite limited. In contrast to annihilating DM, asymmetric DM accumulates inside a star. Models that consider such a scenario should allow old NSs to exist. This is especially important for bosonic DM particles, which at zero temperature could form a Bose-Einstein condensate (BEC), leading to gravitational collapse of the bosonic DM to a black hole \citep{Kouvaris}. Light DM particles, such as axions, could contribute an additional cooling channel in compact stars. Thus, in the NS core axions can be produced either in nucleon bremsstrahlung or in Cooper pair breaking and formation processes \citep{PhysRevD.93.065044,Sedrakian:2018kdm,Buschmann:2021juv}, causing an alteration of the surface temperature and thermal evolution of a star. In addition, most of the existing models are constrained by the results on neutrino emission coming from the supernova observation SN1987A \citep{Chang:2018rso} and by existing NS cooling data. The results of NS merger simulations \citep{Dietrich:2019shr} show that axions produced in nucleon-nucleon bremsstrahlung do not lead to a measurable change of the emitted GW signal, the ejecta mass, or the temperature profile of the merger remnant. The amount of DM accrued by ordinary accretion throughout stellar evolution depends on the position of the considered NS in the Galaxy \citep{2010PhRvD..82f3531K}. As the DM density in the Galactic center is many orders of magnitude greater than in its arms, we may expect a higher DM fraction in compact stars towards the Milky Way center \citep{2020Univ....6..222D}. Moreover, local non-homogeneities of the DM distribution may contribute to an increase of the DM fraction, leading even to dark compact objects \citep{Dengler:2021qcq} and dark stars \citep{2015PhRvD..92f3526K,Maselli:2017vfi}. Since DM properties are still unknown, different models have been employed, considering its fermionic \citep{Goldman:2013qla,Gresham:2017zqi,PhysRevD.102.063028} and bosonic \citep{Colpi:1986ye,Petraki:2013wwa,Rafiei_Karkevandi_2022} nature. As discussed by \citet{Bramante:2013hn}, to be consistent with the observations of old NSs, bosonic DM has to be either self-interacting, decaying, or self-annihilating. For asymmetric bosonic DM, a repulsive self-interaction is required due to the vanishing degeneracy pressure. At the moment when the accumulated asymmetric bosonic DM exceeds the Chandrasekhar mass, nothing can prevent its gravitational collapse and the formation of a black hole inside the NS, which could potentially disrupt the star \citep{Kouvaris, Zurek:2013wia}. By analogy with visible matter and the Standard Model particles, all interactions have an exchange character: an interaction between particles occurs due to the exchange of a mediator, e.g., the interaction between nucleons is mediated by pions. In the present article we extend this approach to the dark sector by formulating a model of self-interacting asymmetric bosonic DM, which includes a vector interaction mediated by a real $\omega$-field coupled to the scalar one. We model DM-admixed compact stars by considering a mixed system of two fluids with different relative fractions.
Implications of the proposed EoS are studied and tested against astrophysical and GW observations. The paper is organized as follows. In Section \ref{sec:EoS}, we present the models for the BM and DM components, with a detailed derivation provided in Appendix \ref{appA:FullLagrangian}. Section \ref{sec:MIXEoS} is dedicated to the equilibrium configurations of DM-admixed compact stars. In Section \ref{sec:TID}, we discuss how the speed of sound and the tidal deformability are affected by the presence of DM. In Section \ref{sec:Results} the main results are presented, including the constraints on the mass and interaction scale of DM particles. In Section \ref{sec:Discussions}, we discuss the smoking-gun evidence of the presence of DM that could be tested in the near future, before concluding in Section \ref{sec:Concl}. In Appendix \ref{appA:FullLagrangian}, we show the full derivation of the DM EoS, with a focus on the effective speed of sound for a DM-admixed NS in Appendix \ref{appB:SpeedOfSound}. In Appendix \ref{appC:Scan}, we show the scan over the model parameters and the obtained constraints. Throughout the article, we utilize a unit system in which $\hbar=c=G=1$. \section{Models of dark and baryonic matter} \label{sec:EoS} \subsection{Dark matter EoS} \label{subsec:DMEoS} We consider a model of massive spinless DM particles carrying a conserved charge. Such particles are described by a complex scalar field and have mass $m_\chi$ and chemical potential $\mu_\chi$. At sufficiently low temperatures bosonic DM exists in the form of a BEC. In the absence of interactions such a BEC has zero pressure and is mechanically unstable against gravitational compression. We stabilize the BEC of DM by introducing a repulsive interaction mediated by a real vector field coupled to the scalar one. The minimal Lagrangian representing this model is given in Appendix \ref{appA:FullLagrangian}. This Lagrangian implies a Noether current corresponding to the invariance of the action with respect to global U(1) transformations. If the vector field were not a Yukawa but a gauge field, local U(1) symmetry would also be respected and another Noether current could be introduced \citep{Brading:2000hc}. In a quantum treatment, the expectation values of these two currents produce the same conserved charge, which is not the case within the mean-field approximation used here, corresponding to a classical treatment of the vector field (see Appendix \ref{appA:FullLagrangian} for details). We use the Noether current resulting from global U(1) transformations, which leave the action invariant even at the mean-field level. In this work we assume a vanishing temperature of the DM, which is then totally converted to the BEC. In this case thermal fluctuations are suppressed and the mean-field approximation can be applied in order to derive the corresponding EoS. The chemical potentials of the BM and DM components of a NS scale proportionally (for more details see Section \ref{sec:MIXEoS}). This significantly simplifies solving the two coupled TOV-like equations for the BM and DM components, as shown by \citet{PhysRevD.102.063028}. Therefore, it is convenient to formulate the DM EoS in the Grand Canonical Ensemble (GCE), where $\mu_\chi$ is an independent variable.
Appendix \ref{appA:FullLagrangian} includes details of the corresponding derivation for the interval of physical values $\mu_\chi\in[0,\sqrt{2}m_\chi]$, performed in locally flat space-time, justified by the small gradients of the metric and the absence of anisotropy issues (see \citet{Rafiei_Karkevandi_2022} for details). The corresponding pressure and energy density are \begin{eqnarray} \label{EqI} p_\chi&=&\frac{m_I^2}{4} \left(m_\chi^2-\mu_\chi\sqrt{2m_\chi^2-\mu_\chi^2}\right),\\ \label{EqII} \varepsilon_\chi&=&\frac{m_I^2}{4} \left(\frac{\mu_\chi^3}{\sqrt{2m_\chi^2-\mu_\chi^2}}-m_\chi^2\right), \end{eqnarray} for $\mu_\chi\in[m_\chi,\sqrt{2}m_\chi]$ and $p_\chi=\varepsilon_\chi=0$ for $\mu_\chi\in[0,m_\chi]$. The parameter $m_I$ has units of mass and controls the interaction strength. It is proportional to the vector meson mass and inversely proportional to its coupling. Thus, large $m_I$ corresponds to weak interaction and vice versa. At first glance, the present EoS in the weak coupling regime paradoxically leads to an infinite pressure due to $m_I\rightarrow\infty$. This, however, is not the case, since in this regime the chemical potential of the DM BEC $\mu_\chi$ coincides with its mass $m_\chi$, leading to the vanishing of the brackets in Eqs.~\eqref{EqI} and~\eqref{EqII}. In the case of $p_\chi$ the bracket vanishes faster than $m_I^{-2}$, yielding a pressure that vanishes as $\sim m_I^{-2}$, while for $\varepsilon_\chi$ the bracket behaves as $\sim m_I^{-2}$, providing the finite energy density of the DM BEC, $m_\chi n_\chi$. In the strong coupling regime $m_I\rightarrow0$ the chemical potential of DM converges to $\sqrt{2}m_\chi$. As a result, the bracket in Eq.~\eqref{EqI} becomes equal to $m_\chi^2$ and the pressure vanishes as $m_I^2m_\chi^2/4$. The corresponding bracket in Eq.~\eqref{EqII} diverges as $\sim m_I^{-2}$, leading to the finite energy density $\sqrt{2}m_\chi n_\chi$. Remarkably, the weak and strong coupling limits of the present EoS are similar, since the DM pressure vanishes in both cases. At $m_I\rightarrow\infty$ this is due to the absence of repulsion. The limit $m_I\rightarrow0$ is equivalent to the case of a massless vector field, which does not have the non-trivial mean-field solution needed to stiffen the EoS. A detailed analysis of the weak and strong coupling limits of the present EoS is performed in Appendix \ref{appA:FullLagrangian}. \begin{figure} \includegraphics[width=1.15\columnwidth]{combined_mu_e} \caption{{\bf Left panel:} Scaled pressure $p_\chi/p_\infty$ (black solid curve), energy density $\varepsilon_\chi/p_\infty$ (black dashed curve) and speed of sound squared $c_{s,\chi}^2$ (red dotted curve) of DM as functions of its chemical potential $\mu_\chi$ given in units of $m_\chi$. {\bf Right panel:} Scaled pressure $p_\chi/p_\infty$ (black solid curve) and speed of sound $c_{s,\chi}^2$ (red dotted curve) of DM as functions of scaled energy density $\varepsilon_\chi$ given in units of $p_\infty$. } \label{fig1} \end{figure} A remarkable feature of the present EoS is that at infinite density its pressure is limited by the value $p_\infty=m_I^2m_\chi^2/4$. This regime is reached at $\mu_\chi=\sqrt{2} m_\chi$. Thus, the compressibility of DM vanishes at asymptotically high densities regardless of $m_\chi$ and $m_I$. The same conclusion holds for the speed of sound $c_{s,\chi}^2=dp_\chi/d\varepsilon_\chi$. In other words, high-density configurations of bosonic DM are gravitationally unstable at any strength of the repulsive interaction.
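To make the limiting behaviour concrete, the following minimal sketch (our own Python illustration, not code from this work) evaluates Eqs.~\eqref{EqI} and \eqref{EqII} over the physical interval of $\mu_\chi$; the closed-form expression for $c_{s,\chi}^2$ used below is our own differentiation of these two equations with respect to $\mu_\chi$, and the parameter values match the $m_{\chi}=100$ MeV, $m_{I}=250$ MeV case used later in Fig.~\ref{fig:MRprofiles}.
\begin{verbatim}
import numpy as np

def dm_eos(mu, m_chi, m_I):
    """p, eps and c_s^2 of the DM BEC from Eqs. (EqI)-(EqII),
    for mu in [m_chi, sqrt(2)*m_chi]; natural units (MeV)."""
    mu = np.asarray(mu, dtype=float)
    root = np.sqrt(2.0 * m_chi**2 - mu**2)
    p = 0.25 * m_I**2 * (m_chi**2 - mu * root)
    eps = 0.25 * m_I**2 * (mu**3 / root - m_chi**2)
    # c_s^2 = dp/deps in closed form (our differentiation of the EoS)
    cs2 = ((mu**2 - m_chi**2) * (2.0 * m_chi**2 - mu**2)
           / (mu**2 * (3.0 * m_chi**2 - mu**2)))
    return p, eps, cs2

m_chi, m_I = 100.0, 250.0
mu = np.linspace(1.0 + 1e-6, np.sqrt(2.0) - 1e-6, 2000) * m_chi
p, eps, cs2 = dm_eos(mu, m_chi, m_I)
p_inf = 0.25 * m_I**2 * m_chi**2
print(f"max c_s^2 = {cs2.max():.4f} (text: 1/9 = {1/9:.4f})")
print(f"p/p_inf near mu = sqrt(2) m_chi: {p[-1]/p_inf:.4f}")
\end{verbatim}
The two printed checks reproduce the saturation of $p_\chi$ at $p_\infty$ and the upper bound $c_{s,\chi}^2\leq1/9$ discussed next.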
The left panel of Fig.~\ref{fig1} shows the pressure, energy density and speed of sound of the considered DM EoS as functions of the corresponding chemical potential. It is worth mentioning that the square of the speed of sound is limited from above by the value $1/9$, which is reached at $\mu_\chi=\sqrt{3/2}~m_\chi$ and does not depend on $m_\chi$ and $m_I$. Thus, $c_{s,\chi}^2$ is bounded by quite small values, corresponding to a soft DM EoS. The right panel of Fig.~\ref{fig1} shows this EoS as a function of energy density. \vspace*{1.5cm} \subsection{Baryon matter EoS} \label{subsec:BMEoS} In order to thoroughly study the impact of DM on compact stars made mostly of BM, we consider two EoSs of different stiffness. One of them is the Induced Surface Tension (IST) EoS, formulated on the basis of the hard-core approach. Thus, nucleons are characterized by an effective hard-core radius that provides a short-range repulsion between the particles of different species. This part of the model was fixed by fitting heavy-ion collision data \citep{Sagun:2017eye}, while the IST contribution was implemented by accounting for the interparticle interaction at high density. The corresponding parameters were fitted to reproduce the nuclear matter ground state properties, the correct behaviour of the nuclear liquid-gas phase transition \citep{Sagun:2016nlv} and the proton flow constraint \citep{Ivanytskyi:2017pkt}. Furthermore, in \citet{Sagun2019IST} the model was generalized to describe NSs, demonstrating the wide application range of the unified IST approach. In the present work, we consider Set B, described in detail in \citet{NSOscillationsEoS}, while the crust is modeled in a simplified way by a polytropic EoS with adiabatic index $\gamma=4/3$. In addition, we consider the DD2 EoS \citep{Typel2009,Typel1999} with and without $\Lambda$ hyperons. DD2 is a relativistic mean-field nuclear model with density-dependent couplings, whose parameters were fitted to the ground-state properties of nuclei. Hyperons have been included in several works. In the present study, the density dependence of the hyperon couplings to the $\sigma$, $\omega$ and $\rho$ mesons is taken to be the same as that of the nucleons. For the $\phi$ coupling, the density dependence of the $\omega$ meson coupling is used. The couplings of the $\sigma$ meson to the $\Lambda$ and $\Xi$ have been taken from \cite{Fortin:2017cvt} and \cite{Fortin:2020qin}, respectively, and have been fitted to the binding energies of $\Lambda$ and $\Xi$ hypernuclei. The coupling to the $\Sigma$ hyperon was chosen so that the $\Sigma$ potential in symmetric nuclear matter is $+30$ MeV, see \cite{Gal:2016boi} for a discussion. For the vector mesons, the quark model predictions are used, \begin{align*} &g_{\omega\Lambda}=g_{\omega\Sigma}=\frac{2}{3}g_{\omega N},\quad g_{\omega\Xi}=\frac{1}{3}g_{\omega N},\\ &g_{\phi\Lambda}=g_{\phi\Sigma}=-\frac{\sqrt{2}}{3}g_{\omega N},\quad g_{\phi\Xi}=-\frac{2\sqrt{2}}{3}g_{\omega N}. \end{align*} Finally, the effective $\rho$-meson coupling is determined by the product of the hyperon isospin with the $\rho$ meson-nucleon coupling. Further on, the DD2 EoS with $\Lambda$ hyperons will be referred to as DD2$\Lambda$. The complete NS EoS contains, besides the core EoS, the BPS EoS \cite{bps} for the outer crust, while the inner crust was calculated within a Thomas-Fermi approach taking DD2 as the underlying model and allowing for the appearance of several geometries, as discussed in \cite{grill14}.
The inner crust EoS has been published in \cite{Fortin2016}. \section{Mixed system of two components} \label{sec:MIXEoS} We assume no interaction between DM and BM, except through gravity. This assumption is fully justified by the latest constraints coming from DM direct detection experiments and the Bullet Cluster \citep{Clowe_2006}, showing that the DM-BM cross section is many orders of magnitude lower than the typical nuclear one, $\sigma_\chi\sim 10^{-45}\ \mathrm{cm}^2\ll \sigma_N\sim10^{-24}\ \mathrm{cm}^2$. Therefore, the stress-energy tensors of the two components are conserved separately, leading to a system of Tolman-Oppenheimer-Volkoff (TOV) equations with split components \citep{PhysRev.55.364,PhysRev.55.374} \begin{equation}\label{TOV} \frac{dp_i}{dr}=-\frac{(\varepsilon_i +p_i)(M_\mathrm{tot}+4\pi r^3p_\mathrm{tot})}{r^2\left(1-{2M_\mathrm{tot}}/{r}\right)}, \end{equation} which describes the relativistic hydrostatic equilibrium of a DM-admixed NS. In Eq.~\eqref{TOV}, the subscript refers to BM or DM, i.e., $i=B,D$, while $M_i(r)$ is the gravitational mass of component $i$ enclosed inside a sphere of radius $r$, \begin{equation}\label{M_i} M_i(r) = 4\pi\int^r_0 \varepsilon_i (r^\prime)r^{\prime 2}dr^\prime. \end{equation} Using Eq.~\eqref{M_i}, we define the total gravitational mass as the sum of the two components, $M_\mathrm{tot} = M_B(R_B)+M_D(R_D)$, where the radii $R_i$ are evaluated using the zero-pressure condition at the surface \begin{equation} p_i(R_i)=0. \end{equation} Given the total mass of the system, it is convenient to define the fraction of accumulated DM as \begin{equation}\label{DMFRAC} f_\chi = \frac{M_D}{M_\mathrm{tot}}. \end{equation} It is worth noting that microscopic/thermodynamic DM quantities carry the index $\chi$, while macroscopic ones carry the index $D$. It is easy to obtain directly from Eq.~\eqref{TOV} the relation between the chemical potentials of BM and DM. In fact, \cite{PhysRevD.102.063028} showed that \begin{equation} \frac{d \ln \mu_B}{dr}=\frac{d \ln \mu_\chi}{dr} = -\frac{M_\mathrm{tot}+4\pi r^3 p_\mathrm{tot}}{r^2(1-2M_\mathrm{tot}/r)}, \end{equation} which leads to the conclusion that the two chemical potentials are proportional to each other. The value their ratio attains in the center of the star is the proportionality constant, which can be used to simplify the model: \begin{equation}\label{DMBM} \mu_\chi = \left(\frac{\mu_\chi}{\mu_B} \right)_{r=0} \mu_B. \end{equation} \begin{figure*} \centering \setkeys{Gin}{width=1.15\linewidth} \begin{tabularx}{\linewidth}{XXX} \includegraphics{MRCurves100MeV_250_Constraints_title.pdf} & \includegraphics{Profiles_DD2_zoomed_region.pdf} & \includegraphics{Profiles_IST_2.pdf} \end{tabularx} \begin{tabularx}{\linewidth}{XXX} \includegraphics{MRCurves1GeV_1GeV} & \includegraphics{Profiles_DD2.pdf} & \includegraphics{Profiles_IST.pdf} \end{tabularx} \caption{ {\bf Upper row:} Total gravitational mass of the DM-admixed NS as a function of its visible radius $R$ (left panel). Black solid, dash-dotted and dotted curves correspond to pure BM stars described by the IST EoS, DD2 EoS and DD2 EoS with hyperons. Red, blue, and green colours depict relative DM fractions equal to 1\%, 3\%, and 5\%, respectively. Green, gray, and teal bands represent 1$\sigma$ constraints on the masses of PSR J0348+0432 \citep{PSRj03480432Article}, PSR J1810+1744 \citep{Romani:2021xmb}, and PSR J0952-0607 \citep{Romani:2022jhd}.
Pink and beige contours show the NICER measurements of PSR J0030+0451 \citep{Riley:2019yda,Miller_2019}, while orange and blue contours depict the PSR J0740+6620 measurements \citep{Miller:2021qha,Riley:2021pdl}. LIGO-Virgo observations of the GW170817 \citep{Abbott_2018} and GW190425 \citep{LIGOScientific:2020aai} binary NS mergers are shown in blue and magenta. Energy density profiles for the BM (dotted curves) and DM (dashed curves) components are shown for the DD2 EoS (middle panel) and IST EoS (right panel). The solid black curve represents the profile of a pure BM $1.4$ M$_\odot$ NS, while the other profiles were sampled to have the same total gravitational mass. Both panels were obtained for $m_{\chi}$=100 MeV, $m_{I}$=250 MeV. {\bf Lower row:} The same as in the upper row, but calculated for $m_{\chi}$=1 GeV, $m_{I}$=1 GeV. } \label{fig:MRprofiles} \end{figure*} By solving the TOV Eqs.~\eqref{TOV} with the boundary conditions and accounting for the relation between the two components from Eq.~\eqref{DMBM}, we calculate the M-R relations of DM-admixed NSs for different values of the DM fraction $f_\chi$, particle mass $m_{\chi}$, and interaction scale $m_{I}$. To better understand the impact of each parameter, we consider light and heavy DM particles with $m_{\chi}$=100 MeV and $m_{\chi}$=1 GeV (see the left column of Fig.~\ref{fig:MRprofiles}). Moreover, to address our ignorance of the EoS of the baryonic component, we studied the effect of DM on the soft IST EoS, depicted as a solid black curve on the left panels of Fig.~\ref{fig:MRprofiles}, as well as on the stiff DD2$\Lambda$ EoS (dotted black curve) and DD2 EoS (dash-dotted black curve). The chosen EoSs represent different sides of the mass and radius region allowed by the recent astrophysical, GW and nuclear physics constraints and, therefore, provide a good coverage of the BM parameters. As can be seen, the DD2$\Lambda$ EoS (dotted black curve) and DD2 EoS coincide up to $\sim 1.4~M_{\odot}$, the point where the onset of hyperons occurs. Beyond it, hyperon production softens the EoS, leading to a smaller maximum total mass and stellar radius. The left panels of Fig.~\ref{fig:MRprofiles} show the effect of DM with different relative fractions inside a star on its mass and radius. Thus, we see a reduction of $M_{\rm max}$ and of the stellar radius for larger DM fractions, caused by the formation of a DM core. In fact, the formation of more compact objects would, for an outside observer, look like a softening of the BM EoS. This degeneracy between the effect of DM and a possible change of the strongly interacting matter properties at high density will be discussed in Section \ref{sec:Discussions}. Due to the fact that in the considered model the energy density diverges at finite pressure as $\mu_\chi \rightarrow \sqrt{2}m_\chi$, DM falls under the Schwarzschild radius, forming a black hole. This takes place for the high-mass stars for which the DM chemical potential in the center reaches the limit (see the upper left panel of Fig.~\ref{fig:MRprofiles}). The middle and right columns of Fig.~\ref{fig:MRprofiles} demonstrate the split energy density profiles of DM (dashed curves) and BM (dotted curves). The solid black curve depicts the energy density profile of the $1.4$ M$_\odot$ star. The profiles for DM-admixed NSs are shown for stars with the same total gravitational mass as the pure BM NS. As the onset of hyperons occurs above $1.4$ M$_\odot$, the two formulations of the DD2 EoS give the same prediction for the matter distribution inside these stars.
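As an illustration of how the split system of Eqs.~\eqref{TOV}--\eqref{DMFRAC} is integrated in practice, the sketch below (our own simplified Python illustration, not the code used for this work) advances both pressures with a common gravitational source until each vanishes at its own surface. Toy polytropes stand in for the realistic BM and BEC DM EoSs, and the central pressures and step sizes are chosen purely for illustration; geometric units ($G=c=1$) with lengths in km are assumed.
\begin{verbatim}
import numpy as np

def polytrope(K, gamma):
    """Toy barotropic EoS eps(p) for p = K * eps**gamma (stand-in only)."""
    return lambda p: (max(p, 0.0) / K) ** (1.0 / gamma)

def solve_two_fluid_tov(p_c, eps_funcs, dr=1e-3, r_max=50.0):
    """Euler integration of the split TOV equations (Eq. TOV).
    p_c: central pressures (p_B, p_D); eps_funcs: callables eps_i(p_i).
    Returns component masses [M_B, M_D] and surface radii [R_B, R_D]."""
    p, M, R = list(p_c), [0.0, 0.0], [None, None]
    r = dr
    while r < r_max and max(p) > 0.0:
        eps = [f(pi) if pi > 0.0 else 0.0 for f, pi in zip(eps_funcs, p)]
        M_tot, p_tot = sum(M), sum(p)
        denom = r**2 * (1.0 - 2.0 * M_tot / r)
        for i in range(2):
            if p[i] > 0.0:
                # common source (M_tot, p_tot), separate pressure gradients
                dpdr = -(eps[i] + p[i]) * (M_tot + 4.0*np.pi*r**3*p_tot) / denom
                p[i] = max(p[i] + dr * dpdr, 0.0)
                M[i] += dr * 4.0 * np.pi * r**2 * eps[i]   # Eq. (M_i)
                if p[i] == 0.0:
                    R[i] = r                               # p_i(R_i) = 0
        r += dr
    return M, [Ri if Ri is not None else r for Ri in R]

eosB = polytrope(100.0, 2.0)   # toy stand-in for the BM EoS
eosD = polytrope(50.0, 2.0)    # toy stand-in for the DM EoS
M, (R_B, R_D) = solve_two_fluid_tov((5e-4, 5e-5), (eosB, eosD))
print(f"M_tot = {sum(M):.3f} km, f_chi = {M[1]/sum(M):.3f}, "
      f"R_B = {R_B:.2f} km, R_D = {R_D:.2f} km")
\end{verbatim}
In the full analysis the proportionality \eqref{DMBM} between the chemical potentials fixes the central conditions for a given DM fraction; in this sketch the two central pressures are simply set by hand.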
Since the two DD2 formulations coincide for these configurations, Fig.~\ref{fig:MRprofiles} shows profiles only for the DD2 EoS. For heavy bosons a compact DM core is formed, which is seen from the high values of $\epsilon_D$, an order of magnitude above $\epsilon_B$ (see the middle and right panels of the lower row of Fig.~\ref{fig:MRprofiles}). Furthermore, $\epsilon_D$ drops to zero at a radius of $\sim$2 km, corresponding to the size of the DM core. For DM fractions of 3\% and 5\% at $m_{\chi}$=100 MeV and $m_{I}$=250 MeV there is a halo with radius $R_{D}$=25.6 km and 13.0 km, respectively. \section{Tidal deformability of DM-admixed NSs} \label{sec:TID} The tidal deformability parameter $\lambda$ quantifies the response of an object to a static external quadrupolar tidal field $\mathcal{E}_{ij}$ by relating it to the induced quadrupolar moment $\mathcal{Q}_{ij} = -\lambda\mathcal{E}_{ij}$. For a given stellar configuration of total mass $M_{\mathrm{tot}}$ and radius $R$, this tidal deformability can be expressed through the Love number $k_2$ as $\lambda=2k_2 R^{5}/3$ and is commonly mapped to the dimensionless $\Lambda = \lambda/M_{\mathrm{tot}}^5$ \citep{Hinderer_2008}. In the two-component case $R$ should be understood as the outermost radius, i.e., $R=R_{B}$ in the DM core scenario and $R=R_{D}$ in the DM halo one. The Love number is defined through the solution of an ordinary differential equation (ODE) appearing as a leading-order expansion of the Einstein equations with a metric perturbed by the external gravitational field \citep{1957PhRv..108.1063R}. The microscopic properties of matter are encoded in this ODE through the change of the total pressure $p_{\mathrm{tot}}\equiv p_B+p_\chi$ caused by a perturbation of the total energy density $\varepsilon_\mathrm{tot}\equiv \varepsilon_B+\varepsilon_\chi$. This change is quantified by the derivative $dp_{\mathrm{tot}}/d\varepsilon_{\mathrm{tot}}$. In the barotropic one-fluid case this derivative represents the corresponding speed of sound. In the two-fluid case the derivative $dp_{\mathrm{tot}}/d\varepsilon_{\mathrm{tot}}$ is mathematically identical to the expression obtained by \citet{Das:2020ecp}. Therefore, in what follows, we refer to it as the effective speed of sound of the two-fluid system. It can be expressed through the speeds of sound of the baryonic, $c_{s,B}^2$, and dark, $c_{s,\chi}^2$, components as \begin{eqnarray} \label{IX} c_{s,\mathrm{eff}}^2=\eta c_\mathrm{s,B}^2+(1-\eta)c_{s,\chi}^2 \end{eqnarray} with $\eta\in[0,1]$. The lower and upper edges of this interval correspond to the cases of pure DM and BM, respectively. Appendix \ref{appB:SpeedOfSound} contains the derivation of Eq.~\eqref{IX} and of the parameter $\eta$. This expression demonstrates that the effective speed of sound lies between those of the pure components. In Fig.~\ref{fig:sound} we show the effective speed of sound for different values of $\xi=\frac{\mu_\chi}{\mu_B}$, as well as the speeds of sound of the pure BM and DM components. The relation between the parameters $\xi$ and $\eta$ is given in Eq.~\eqref{B3} in the Appendix. The upper panel of Fig.~\ref{fig:sound} indicates how the effective speed of sound behaves with DM accumulated in the core of a compact star. Note that it lies between the speed of sound values of the pure components. On the lower panel of Fig.~\ref{fig:sound} we see that the effective speed of sound follows the BM one, and only in the outer crust does the DM component start to dominate, which corresponds to a halo configuration.
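For orientation, the two definitions just introduced are restated below as small Python helpers (an illustration under stated assumptions: the weight $\eta$ of Eq.~\eqref{IX} comes from Appendix~\ref{appB:SpeedOfSound} and the Love number $k_2$ from the perturbation ODE, so both enter here as given numbers).
\begin{verbatim}
def cs2_effective(cs2_B, cs2_chi, eta):
    """Eq. (IX): for eta in [0, 1] the result lies between the
    pure-DM (eta = 0) and pure-BM (eta = 1) speeds of sound."""
    return eta * cs2_B + (1.0 - eta) * cs2_chi

def dimensionless_lambda(k2, R, M_tot):
    """Lambda = lambda / M_tot^5 with lambda = (2/3) k2 R^5;
    geometric units, R is the outermost radius (R_B or R_D)."""
    return (2.0 / 3.0) * k2 * (R / M_tot) ** 5

# Illustrative numbers only: a 1.4 Msun star (M ~ 2.07 km in
# geometric units), outermost radius 12 km, an assumed k2 ~ 0.09
print(dimensionless_lambda(0.09, R=12.0, M_tot=2.07))  # ~ 3.9e2
print(cs2_effective(0.3, 1.0 / 9.0, eta=0.7))          # in between
\end{verbatim}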
\begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{CS2_eps_Core.pdf} \includegraphics[width=0.95\columnwidth]{CS2_eps_Halo.pdf} \caption{The effective speed of sound for a mixture of BM and DM as a function of total energy density. {\bf Upper panel:} The curves were obtained for $m_{\chi}$=0.75 GeV and $m_{I}$=0.25 GeV, which represents a DM core configuration. {\bf Lower panel:} The same as in the upper panel, but for $m_{\chi}$=0.20 GeV and $m_{I}$=0.1 GeV, illustrating a DM halo configuration. The horizontal line at low densities corresponds to the polytropic crust EoS.} \label{fig:sound} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{LambdaM_1000_1000.pdf} \caption{Tidal deformability as a function of total gravitational mass calculated for pure BM stars (black curves) and DM-admixed NSs with relative DM fractions 1\%, 3\%, and 5\%, in red, blue, and green, respectively. Solid, dash-dotted and dotted curves represent the IST EoS, DD2 EoS and DD2$\Lambda$ EoS. The colours and symbols coincide with the ones used in Fig.~\ref{fig:MRprofiles} for easier comparison. The figure is obtained for $m_{\chi}$=1 GeV, $m_{I}$=1 GeV. Green, gray, and teal bands represent 1$\sigma$ constraints on the masses of PSR J0348+0432 \citep{PSRj03480432Article}, PSR J1810+1744 \citep{Romani:2021xmb}, and PSR J0952-0607 \citep{Romani:2022jhd}. The magenta area visualizes the constraints obtained from GW170817 \citep{Abbott_2018}.} \label{fig:tides} \end{figure} As can be seen in Fig.~\ref{fig:tides}, DM condensed in a core leads to a decrease of the total gravitational mass, radius and, consequently, tidal deformability parameter compared to a pure baryonic star, which a distant observer would perceive as an effective softening of the EoS. On the other hand, the presence of a DM halo leads to a significant increase of the outermost radius, which extends beyond the BM component, an increase of the tidal deformability parameter, and a consequent effective stiffening of the EoS. The considered IST, DD2, and DD2$\Lambda$ EoSs lead us to conclude that a soft EoS, being on the lower limit of the GW170817 90\% CL region (see the magenta area in Fig.~\ref{fig:tides}), provides a stringent constraint on the DM core scenario, while a stiff EoS, being on its upper border, allows much higher DM fractions and disfavours an extended halo configuration. This degeneracy between the effect of DM and the properties of strongly interacting matter at high densities places limitations on DM detection, apart from the several DM smoking guns discussed in Section \ref{sec:Discussions}. Nevertheless, we have to be aware that observational data on compact stars could be affected by accumulated DM and, consequently, so could the constraints we put on strongly interacting matter at high densities. \section{Results} \label{sec:Results} To study the interplay between the boson mass and the interaction scale, as well as to put constraints on the DM fraction, we perform a scan over these parameters for the IST EoS (upper row), DD2 EoS (middle row), and DD2$\Lambda$ EoS (bottom row) for fixed DM fractions of 1\%, 3\%, and 5\% (see Fig.~\ref{fig:contours} in Appendix \ref{appC:Scan}). The color maps represent the maximum total gravitational mass of DM-admixed NSs. The white curve on each panel corresponds to $M_{max}=1.4~M_{\odot}$, whereas the red curve represents $M_{max}=2.0~M_{\odot}$.
In case $2.0~M_{\odot}$ configurations are not reachable, we indicate $1.9~M_{\odot}$ stars with a green curve. As one can see from the upper row of Fig.~\ref{fig:contours}, an increase of the DM fraction narrows the range of values of the interaction scale $m_{I}$ consistent with the masses of the heaviest known pulsars. On the other hand, the existence of high-mass stars with a significant amount of heavy DM requires low values of the interaction scale. We see the same dependence between the $m_{\chi}$ and $m_{I}$ values. In fact, lower $m_I\equiv m_\omega/g$ values correspond to a higher coupling constant $g$ or, equivalently, stronger repulsion between the DM particles. The IST EoS for any DM fraction is always in agreement with the tidal deformability constraint, independent of $m_{\chi}$ and $m_{I}$ (see the upper row of Fig.~\ref{fig:contours}). At the same time, only for 1\% and 3\% of DM can the maximum total mass of DM-admixed NSs reach $2.0~M_{\odot}$. Thus, to simultaneously reproduce the $2.0~M_{\odot}$ and GW170817 tidal deformability constraints, the boson mass and the interaction scale are restricted to the values shown in yellow. The shaded areas correspond to the non-allowed regions of parameters that cannot simultaneously satisfy the heaviest-pulsar and GW constraints. For 3\% and 5\% of DM the DD2 EoS reproduces both constraints in a wide range of parameters, disfavouring the MeV mass range of bosonic DM with low values of the interaction strength. The black curve on the middle and bottom rows of Fig.~\ref{fig:contours} depicts the GW170817 tidal deformability constraint $\tilde{\Lambda}_{1.36} = 720$ \citep{LIGOScientific:2018hze}, above which the model is consistent with the GW170817 merger. The dashed area corresponds to the non-allowed range of parameters, including 1\% of DM for the DD2 EoS. For the DD2$\Lambda$ EoS there are no $m_{I}$ and $m_{\chi}$ values that simultaneously reproduce the heaviest pulsars and the GW constraints. In fact, only one of these criteria could be reproduced for the considered values of the DM fraction. This is directly related to the fact that at the onset of $\Lambda$ hyperons the EoS becomes softer, in addition to the DM softening effect in a core configuration. From this analysis we can conclude that, contrary to a stiff BM EoS (the DD2 EoS, as an example), a soft BM EoS (the IST EoS, as an example) provides a weaker limit on the DM particle mass and interaction strength. This is related to the fact that the pure baryonic DD2 and DD2$\Lambda$ EoSs are on the upper border of the $\Lambda_{1.4}$ constraint from GW170817. Any decrease of $\Lambda_{1.4}$ due to a DM core will not violate this condition, whereas a small DM halo configuration will. As can be seen in Fig.~\ref{fig:tides}, the IST EoS is located on the lower limit of the magenta area, favouring a halo formation. It is worth noting that this result is obtained under the assumption of a similar DM fraction in all galaxies. As a matter of fact, the application of the GW170817 tidal deformability result and multi-messenger data as a universal constraint on the amount of DM is questionable. Each galaxy could be characterised by a different DM profile, as well as by local DM inhomogeneities. Strictly speaking, GW170817 probes the amount of DM only in a part of NGC 4993, the host galaxy of this particular merger. Therefore, a larger sample of NS-NS and NS-BH mergers is required to constrain the DM properties.
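To sketch how such a scan can be set up, the snippet below reuses \texttt{solve\_two\_fluid\_tov} and \texttt{polytrope} from the earlier illustration and inverts the DM EoS for $\varepsilon_\chi(p_\chi)$ (our own algebra: Eq.~\eqref{EqI} gives $\mu_\chi^2=m_\chi^2+\sqrt{q(2m_\chi^2-q)}$ with $q=4p_\chi/m_I^2$, valid up to $p_\infty$). The toy BM polytrope, the pressure grid, the fixed ratio of central pressures, and the approximate unit conversion are illustrative assumptions, not the procedure behind Fig.~\ref{fig:contours}.
\begin{verbatim}
import numpy as np

MEV4_TO_KM2 = 1.72e-13   # approx. 1 MeV^4 in km^-2 (geometric units)
KM_PER_MSUN = 1.477      # solar mass in km

def make_eps_dm(m_chi, m_I):
    """eps_chi(p_chi) from inverting Eq. (EqI); MeV in, km^-2 out."""
    p_inf = 0.25 * m_I**2 * m_chi**2
    def eps_dm(p_geom):
        p = min(p_geom / MEV4_TO_KM2, (1.0 - 1e-6) * p_inf)  # MeV^4
        q = 4.0 * p / m_I**2
        mu2 = m_chi**2 + np.sqrt(q * (2.0 * m_chi**2 - q))
        eps = 0.25 * m_I**2 * (mu2**1.5 / np.sqrt(2.0*m_chi**2 - mu2)
                               - m_chi**2)
        return eps * MEV4_TO_KM2
    return eps_dm

eosB = polytrope(100.0, 2.0)   # toy BM stand-in, as before
for m_chi, m_I in [(100.0, 250.0), (1000.0, 1000.0)]:   # MeV
    eps_dm = make_eps_dm(m_chi, m_I)
    M_max = max(sum(solve_two_fluid_tov((pB, 0.05*pB), (eosB, eps_dm))[0])
                for pB in np.geomspace(2e-4, 2e-3, 6))
    print(f"m_chi = {m_chi:g} MeV, m_I = {m_I:g} MeV: "
          f"M_max ~ {M_max / KM_PER_MSUN:.2f} Msun (toy numbers)")
\end{verbatim}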
Due to current uncertainties of the BM EoS at high density we cannot discriminate between the effect of DM and the properties of BM. As will be discussed in Section \ref{sec:Discussions}, we expect a higher DM fraction inside compact stars towards the Galactic center. If so, the compact star population would follow the scenarios presented from the left to the right panels of Fig.~\ref{fig:contours}, i.e., from low to high DM fraction. \section{Discussions} \label{sec:Discussions} As described above, there are various effects of DM on compact stars. A natural question arises: how can we narrow down the proposed DM models and constrain the DM properties using NSs? Can compact stars provide smoking-gun evidence for the presence of DM? There are several different approaches: (i) measuring the mass, radius, and moment of inertia of NSs with few-percent accuracy. Nowadays, NICER \citep{Miller_2019,Raaijmakers:2019dks,Miller:2021qha,Raaijmakers:2021uju} and, in the near future, ATHENA \citep{Cassano:2018zwm}, eXTP \citep{eXTP:2018kws}, and STROBE-X \citep{STROBE-XScienceWorkingGroup:2019cyd} are expected to measure $M$ and $R$ of NSs with high accuracy. Using synthetic data for the STROBE-X telescope, and assuming two NSs of the same mass and BM EoS, \citet{Rutherford:2022xeb} concluded that a measurement of radii with 2\% accuracy would be enough to draw a conclusion about the presence of DM in a star's interior. However, the existence of a deconfinement phase transition in the core would manifest itself in the same way, leading to a degeneracy between the effect of DM and the phase transition. The main drawback of this approach is that the effect of DM could mimic the softening/stiffening of BM at high density and vice versa. Current uncertainties of the baryonic EoS do not allow a discrimination between the two effects. In addition, radio telescopes, e.g., MeerKAT \citep{Bailes:2018azh}, SKA \citep{Watts:2014tja} and ngVLA \citep{Bower:2018mta}, plan to improve radio pulsar timing and to discover Galactic-center pulsars.
A mass reduction of NSs towards the Galactic center, or a variation of the mass, radius, and moment of inertia in different parts of the Galaxy, could shed light on the amount of DM accumulated in compact stars. In fact, we could see a paucity of old millisecond pulsars in the Galactic center either due to light extinction on dust, or due to the collapse of DM-admixed NSs into black holes after exceeding the Schwarzschild limit \citep{Bramante:2014zca}. (ii) performing binary numerical-relativity and kilonova-ejecta simulations for DM-admixed compact stars for different DM candidates, particle masses, interaction strengths and fractions, with a subsequent comparison to GW and electromagnetic signals. The smoking gun of the presence of DM could be a supplementary peak in the characteristic GW spectrum of NS mergers \citep{Ellis:2017jgp}, exotic waveforms \citep{Giudice:2016zpa} or the presence of a strong oscillation mode in the waveforms during the post-merger stage \citep{Bezares:2019jcb}. The next generation of GW detectors, i.e., the Cosmic Explorer (CE) \citep{Mills:2017urp} and the Einstein Telescope (ET) \citep{Punturo:2010zz}, will open another perspective for detecting post-merger regimes and probing the internal composition of compact stars. (iii) detecting objects that contradict our current understanding. A potential candidate for a DM-admixed NS could be the secondary component of GW190814 \citep{LIGOScientific:2020zkf}. While likely being a black hole \citep{Tews:2020ylw,Essick:2020ghc}, this compact object with a mass of $\sim 2.6~M_{\odot}$ raised debates about its nature \citep{Tsokaros:2020hli}, as a pure baryonic matter EoS would not be able to explain a compact star of $\sim 2.6~M_{\odot}$. Hence, if not a black hole, the compact object would have to be supplemented either with exotic degrees of freedom, such as hyperons and/or quarks \citep{Tan:2020ics,Dexheimer:2020rlp}, an early deconfinement phase transition \citep{Ivanytskyi:2022oxv}, very fast rotation \citep{Zhang:2020zsc}, or extra stiffening of the EoS at high densities \citep{Fattoyev:2020cws}. An alternative explanation of this puzzle would be a DM-admixed NS \citep{DiGiovanni:2021ejn}, which could also explain the formation of a black hole of such low mass as a collapsed DM-admixed NS \citep{Bramante:2014zca}. (iv) modification of the pulsar pulse profile due to extra light-bending \citep{Miao:2022rqj} and/or gravitational microlensing in the case of a dark halo. (v) modification of the cooling rate of compact stars \citep{2010PhRvD..81l3521D,Hamaguchi:2019oev,Buschmann:2021juv, AngelesPerez-Garcia:2022qzs}. We note that this effect is the most uncertain among the ones mentioned above. Thus, NSs need to have a well-measured surface luminosity and age. In addition, uncertainties related to the particle composition, EoS, magnetic field, superfluidity/superconductivity, NS masses, chemical composition of the atmosphere, etc., could wash out the effect of DM. Old NSs are less affected by these effects, as the photon cooling stage starts to dominate over the neutrino cooling stage, which is very sensitive to the particle composition and superfluidity/superconductivity \citep{Page:2004fy}. The magnetic field is also expected to be unimportant for old isolated NSs. Therefore, a possible heating mechanism of NSs due to DM annihilation could be probed by increasing the statistics of observational data on old NSs. \section{Conclusions} \label{sec:Concl} We proposed a model of bosonic DM represented by a complex scalar field coupled to a vector field through the covariant derivative. The model describes DM existing in the form of a BEC with repulsive interaction. The pressure of the present EoS saturates at asymptotically high densities, leading to vanishing speed of sound and compressibility in this regime. From the thermodynamic requirements, the chemical potential of DM existing as such a BEC is limited to the interval $\mu_\chi\in[m_\chi,\sqrt{2}m_\chi]$, with $m_\chi$ being the DM particle mass. In the weak and strong coupling limits this interval shrinks to its lower and upper bounds, respectively, while the pressure vanishes at any density. This spectacular feature of the present model makes its weak and strong coupling limits qualitatively similar and requires further clarification. DM-admixed compact stars were modelled by considering a mixed system of two fluids with different relative fractions. The performed derivation of the effective speed of sound for the two-fluid system allowed us to calculate the tidal deformability parameter for compact stars admixed with different amounts of DM. We argue that a one-fluid approach cannot be applied to a mixed system of several components with different proper speed of sound values. To account for the uncertainty of the baryonic component, the soft IST EoS and the stiffer DD2 EoS with and without hyperons were considered.
For different DM particle masses, relative fractions, and interaction scales, we found the conditions for DM core formation. We argue that in the framework of the considered model only a small DM halo is possible, with an outermost radius around twice the baryonic one. We performed a thorough analysis of the effect of the DM particle mass in the MeV--GeV mass range and of the self-interaction scale on the maximum total gravitational mass and tidal deformabilities of NSs for several fixed DM fractions. We found that for 1\% and 3\% of DM for the IST EoS, and 3\% and 5\% of DM for the DD2 EoS, the model can simultaneously reproduce the heaviest pulsars and the GW170817 tidal deformability constraint. The obtained allowed region of boson mass $m_{\chi}$ and interaction scale $m_{I}$ for a fixed DM fraction shows an anti-correlation between these parameters, i.e., a high $m_{\chi}$ value favours a low $m_{I}$ value. For the DD2$\Lambda$ EoS no allowed region of parameters was found, due to the inability to simultaneously reproduce both constraints. In Section \ref{sec:Discussions}, we discussed possible smoking-gun signatures of DM in compact stars that could be probed in the near future, e.g., alteration of the maximum total gravitational mass and radius of compact stars as a function of the distance from the Galactic center; modification of the surface temperature (an additional heating or cooling mechanism) of NSs towards the Galactic center; a lack of old millisecond pulsars in the Galactic center; the presence of supplementary peak(s) in the GW signal from NS--NS and/or NS--BH mergers, exotic waveforms, or modification of the kilonova ejecta; gravitational-lensing effects or alteration of the pulsar pulse profile due to extra light bending in a dark halo. Moreover, such objects as the secondary component of the GW190814 event challenge the existing models of compact stars and black holes, leaving open the possibility that this object is a DM-admixed NS. We argue that compact stars and their mergers provide a novel, sensitive, indirect method of detecting and constraining DM properties. Based on the performed analysis it is clear that present data analyses of X-ray, radio, and GW observations that do not account for accumulated DM could miss a valuable piece of information, as well as give wrong predictions about the properties of strongly interacting matter at high density. \begin{acknowledgments} The work of E.G., C.P. and V.S. was supported by national funds from FCT -- Fundação para a Ciência e a Tecnologia, I.P., within the Projects No. UIDB/04564/2020, UIDP/04564/2020, EXPL/FIS-AST/0735/2021. E.G. also acknowledges the support from the Project No. PRT/BD/152267/2021. C.P. is supported by the Project No. PTDC/FIS-AST/28920/2017. V.S. also acknowledges the PHAROS COST Action CA16214. The work of O.I. was supported by the Polish National Science Center under grant No. 2019/33/BST/03059.
\end{acknowledgments}
{ "timestamp": "2022-09-23T02:12:22", "yymm": "2209", "arxiv_id": "2209.10905", "language": "en", "url": "https://arxiv.org/abs/2209.10905" }
\section{Introduction \& Related Work} Large Language Models (LLMs) have recently gained tremendous popularity in the NLP community \citep{devlin2019bert, Liu2019RoBERTaAR, unilmv2, brown2020language}. The ever-increasing size of both models and training data renders many traditional learning methods impractical or intractable. As a result, prompt-based learning has emerged as a new paradigm tailored specifically towards leveraging the power of LLMs \citep{radford2019language, petroni2019language, raffel2020exploring, brown2020language, schick2021just, gao2021making, liu2021pre}. In the zero-shot setting (such as in this study), a data sample is first ``verbalized'' into an input prompt and a ground-truth response --- both often in natural language form. The prompt is then issued to a pre-trained LLM to obtain a predicted response, which can then be compared to the ground-truth for evaluation. This new technique has been successfully applied to many applications including text classification \citep{yin2019benchmarking, schick2021exploiting}, QA \citep{jiang2021know}, natural language generation \citep{li2021prefix}, and NLG evaluation \citep{yuan2021bartscore}. Despite the impressive results on popular NLP benchmarks, however, the back-end LLMs are usually pre-trained with general-domain data, leading to sub-optimal performance in new domains for prompt-based learning. There are two major challenges in successful domain adaptation. Firstly, aside from the many known issues of LLMs \citep{webson2021prompt,min2022rethinking,zhao2021calibrate, lampinen2022can}, their sheer size and/or accessibility (e.g., served via API over the internet) makes domain adaptation prohibitively expensive and impractical. These limitations have inspired a recent line of work known as prompt editing/tuning \citep{gao2021making,li2021prefix,madaan2022memory}. The general idea is to systematically study the correlation between prompt construction and the performance on a specific task. Prompt construction comes in a wide variety of flavours, ranging from adapting real-valued \emph{prompt embeddings} to the order/wording/etc.\ of few-shot in-context learning examples. Meanwhile, this introduces a second challenge: prompt-tuning often relies on the availability of ground-truth labels for the data, which imposes much uncertainty in applications where labeled data are scarce. \rev{ Given the ubiquity of the aforementioned challenges, we focus our study on alleviating the constraints on both annotation availability and access to model parameters, consequently making LLMs more accessible for deployment and use in real-world applications. We take a mainstream NLG task, namely question generation, as a case study \citep{du2017learning,yuan2017machine,du2018harvesting,pan2019recent,liu2020asking,pyatkin2021asking}. In this task, a model is trained to generate a natural language question conditioned on a context and an answer, such that the generated question can be answered by the provided answer using the context as supporting evidence. Question generation is the cornerstone of many NLP applications including education \cite{kurdi2020systematic,abdelghani2022conversational}, automatic FAQ generation \citep{mass2020unsupervised}, information seeking \citep{qi2020stay}, etc. In an educational setting, for example, a question generation system can generate demonstrations that inspire students' curiosity and thinking (teaching), or help assess students' proficiency on certain knowledge or skills (examining).
These use cases would benefit greatly from reduced dependency on computing resources, data availability, and the expertise required for fine-tuning an LM. } \rev{ To align with these real-world scenarios, our goal is to obtain better outputs from an inference-only LLM (i.e., as a ``black box'', which is relatively more accessible, e.g., through online APIs). In particular, given the common practice of sampling multiple outputs for improved generation diversity, we propose a method that aims at selecting the best candidate based on multiple aspects of question quality in a zero-shot manner --- notably without model adaptation or human annotations. Our method can be seen as a post-hoc selection process within a larger NLG pipeline, and thus is orthogonal and applicable to zero-shot and in-context learning methods \citep{rubin2021learning, lu2022fantastically, liu2022makes}. } \section{Problem Setting} \label{sec:problem} \paragraph{Notations} Formally, we consider a dataset of context-answer pairs $(c, a)$, both as strings. The task of question generation is to generate a question $q$ that can be answered by $a$ using $c$ as supporting evidence. We use an off-the-shelf pre-trained LLM-based question generator in a zero-shot setting (prompt construction detailed in Appendix~\ref{sec:app:prompt}). To simulate the black-box generator scenario, we refrain from any form of model tuning. We do, however, assume access to a set of output sequences stochastically sampled from the question generator. We thus ground our study to this application scenario by sampling $k$ questions $Q = \{q_i:i=1,\dots,k\}$. For comparison as a baseline, we also denote by $q_g$ the question generated with a greedy algorithm (i.e., generating the most probable token at each time step). Our goal is to devise an algorithm $S$ which selects the best candidate $q_{i^*}$ that maximizes some evaluation metric $M:Q\mapsto\mathbb{R}$, i.e., $S(Q)=i^*=\argmax_iM(q_i)$. We use $M_s$\xspace, $M_{\overline{s}}$\xspace, and $M_{\underline{s}}$\xspace to denote the mean, max, and min of $\{M(q):q\in Q\}$, resp., and $M_g$\xspace for the greedy output $M(q_g)$. By construction, $M_{\underline{s}}$\xspace$\leq$$M_s$\xspace$\leq$$M_{\overline{s}}$\xspace holds tautologically, and a positive result on the design of $S$ would translate to $M(q_{S(Q)})$ outperforming both $M_s$\xspace and $M_g$\xspace. \paragraph{Datasets} In this work, we adopt two question generation datasets with distinctive characteristics, namely SQuAD\xspace \citep{rajpurkar2016squad} and Fairytale QA\xspace \citep{xu2022fantastic}. SQuAD\xspace was originally proposed as an extractive question answering (QA) dataset. In the question generation literature \citep{du2018harvesting,yuan2017machine,unilmv2}, it has been used as a \textit{sentence-level} question generation task, i.e., a context $c$ is a single sentence that contains the corresponding answer $a$ as a sub-string. Fairytale QA\xspace has also been used for both question answering and question generation. It features \textit{paragraph-level question generation} (with $c$ being one or more paragraphs), and the answer $a$ is not necessarily a sub-string of $c$. Since we do not perform any form of model/prompt tuning, we use the test splits of both datasets, which consist of 11,877 data points for SQuAD\xspace and 1,007 for Fairytale QA\xspace. \paragraph{Model} We leverage a pre-trained GPT-3\xspace model \citep{brown2020language} for both question generation and selection (detailed in \S\ref{sec:method}). In all our experiments, we prompt the GPT-3\xspace model in a zero-shot manner. Details on all our prompts are provided in Appendix~\ref{sec:app:prompt}.
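To make the setting concrete, the selection step $S(Q)=\argmax_i M(q_i)$ can be sketched in a few lines of Python. Here `sample_questions` and `my_score` are hypothetical placeholders for the black-box sampler and one of the scoring methods of \S\ref{sec:method}; neither name comes from this paper.

```python
from typing import Callable, List

def select_question(questions: List[str],
                    metric: Callable[[str], float]) -> str:
    """S(Q): return the candidate q_{i*} maximizing the metric M."""
    return max(questions, key=metric)

# Hypothetical usage (`sample_questions` and `my_score` are placeholders):
# questions = sample_questions(context, answer, k=5)   # Q = {q_1, ..., q_k}
# best = select_question(questions, metric=my_score)   # q_{i*}
```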
\paragraph{Evaluation Metrics \label{sec:problem:Metrics}} We use two quantitative methods to evaluate the selected question $q'=q_{S(Q)}$:\\ $\bullet$ Reference-based evaluation: Following prior work, we use BLEU-4\xspace for SQuAD\xspace \cite{du2018harvesting,unilmv2} and ROUGE-L\xspace for Fairytale QA\xspace \citep{xu2022fantastic}. These metrics compare $q'$ against the reference question $\hat{q}$ (a.k.a. the ``ground-truth'' question in the existing literature).\\ $\bullet$ Human evaluation: we solicit human annotations on a subset of the data. \rev{We postulate that an overall score given holistically to rate a question would be highly subjective and thus less likely to induce agreement among annotators.} Accordingly, we decompose the quality of a question into seven dimensions\footnote{Namely, grammatical correctness, offensiveness, clarity, relevance, importance, specificity, and answerability.}, and ask human annotators to rate a question on each dimension, followed by an overall rating of the question. We collect three annotations from different annotators for each data point. We provide details of the human study in Appendix~\ref{sec:app:human_study}. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{figures/prompt_score.pdf} \caption{Prompting GPT-3\xspace to rate a question's relevance. GPT-3\xspace output is highlighted in green.} \label{fig:prompt_score} \vspace{-0.3em} \end{figure} \section{Method} \label{sec:method} In this section we propose three question selection methods. As described in \S\ref{sec:problem}, each method is used to score the $k$ sampled questions in $Q$, and the candidate with the highest score is proposed as the final output. \paragraph{$n$-gram\xspace similarity} We use the $n$-gram\xspace similarity between a question and its corresponding context to measure their relevance. This method reflects the intuitive assumption that a favorable question should be closely related to the information provided by the context. Specifically, we extract all unique $n$-grams\footnote{In all our experiments $n$ ranges from 1 to 5.} $s^n(c)$ from a given context $c$ and $s^n(q)$ from a question $q$. The $n$-gram\xspace similarity score is then defined as: \begin{equation} \small \text{sim}^n = \frac{|s^n(c) \cap s^n(q)|}{|s^n(q)|}, \end{equation} where $|s|$ indicates the size of set $s$.
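A minimal implementation of this score, assuming whitespace tokenization (the paper does not specify its tokenizer), could look as follows:

```python
def ngrams(text: str, n: int) -> set:
    """Unique n-grams of a whitespace-tokenized string."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def ngram_similarity(context: str, question: str, n: int) -> float:
    """sim^n = |s^n(c) ∩ s^n(q)| / |s^n(q)|, as in the equation above."""
    s_q = ngrams(question, n)
    if not s_q:
        return 0.0
    return len(ngrams(context, n) & s_q) / len(s_q)
```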
\begin{table}[t!] \centering \scriptsize \begin{tabular}{l|cc} \toprule & SQuAD\xspace & Fairytale QA\xspace \\ & (BLEU-4\xspace) & (ROUGE-L\xspace) \\ \midrule \multicolumn{3}{c}{prior works (models trained/fine-tuned on these datasets)}\\ \midrule \cite{du2018harvesting} & 0.152 & -- \\ \cite{zhang2019addressing} & 0.184 & -- \\ UniLM Large \cite{unilmv2} & 0.228 & -- \\ UniLM v2 Base \cite{unilmv2} & 0.244 & -- \\ ERNIE-GEN Large \cite{xiao2021ernie_gen} & 0.254 & -- \\ BART \cite{xu2022fantastic} & -- & 0.527 \\ \midrule \multicolumn{3}{c}{baselines (notations defined in \S\ref{sec:problem})}\\ \midrule $M_g$\xspace (greedy) & 0.372 & 0.424\\ $M_s$\xspace (sample avg) & 0.359 & 0.399\\ $M_{\underline{s}}$\xspace (lowerbound) & 0.225 & 0.259 \\ $M_{\overline{s}}$\xspace (upperbound) & 0.496 & 0.548 \\ \midrule \multicolumn{3}{c}{question selection}\\ \midrule bi-gram & 0.382 & 0.403 \\ tri-gram & 0.380 & 0.403 \\ round-trip & 0.392 & 0.434 \\ overall prompt score (OPS) & 0.373 & 0.399 \\ averaged prompt score (APS) & 0.380 & 0.406 \\ \midrule \multicolumn{3}{c}{ensemble multiple methods}\\ \midrule APS + round-trip& 0.397 & \textbf{0.439} \\ bi-gram + round-trip & \underline{0.400} & 0.429\\ tri-gram + round-trip & 0.398 & 0.430\\ bi-gram + APS & 0.384 & 0.406\\ tri-gram + APS & 0.383 & 0.409\\ bi-gram + APS + round-trip & \textbf{0.401} & 0.431 \\ tri-gram + APS + round-trip & \underline{0.400} & \underline{0.435} \\ \bottomrule \end{tabular} \caption{Reference-based evaluation scores. Best and second best numbers (excluding baselines) are highlighted with \textbf{boldface} and \underline{underline}, respectively.} \label{tab:ref_based_eval} \vspace{-2em} \end{table} \paragraph{Round-trip} Intuitively, the answer to a generated question should be semantically equivalent to the answer that was used to generate the question. Formally, a question generation model $\text{QG}$ and a $\text{QA}$ model (both with reasonable performance) should satisfy the following: \begin{equation} \small q' = \text{QG}(c, a); \quad a' = \text{QA}(c, q'); \quad a' = a. \end{equation} This idea is closely related to \textit{cycle consistency} in the existing literature on image generation \citep{zhu2017unpaired}, machine translation \citep{artetxe2018unsupervised}, and QA \citep{alberti2019synthetic,shah2019cycle}. Here, we use GPT-3\xspace as an off-the-shelf QA model to obtain $a'$ for each pair of $c$ and $q'$, resulting in $k$ answers $A=\{a'_1,\dots,a'_k\}$ for the $k$ sampled questions in $Q$. We then measure the similarity between each $a'_i$ and the ground-truth answer $a$ using the $F_1$ score for SQuAD\xspace and ROUGE-L\xspace for Fairytale QA\xspace (in accordance with the evaluation setup from the original papers for the two datasets). Finally, we select the question corresponding to the generated answer $a'_{i^*}$ that overlaps the most with $a$ (i.e., the question that can be best answered by GPT-3\xspace). Prompts used in these experiments are detailed in Appendix~\ref{sec:app:prompt}.
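A sketch of this selection rule is given below; `qa_model` stands for the zero-shot GPT-3\xspace QA call and `overlap` for the dataset-appropriate similarity ($F_1$ or ROUGE-L\xspace). Both are placeholders rather than the paper's actual implementation.

```python
def round_trip_select(context, answer, questions, qa_model, overlap):
    """Pick the candidate whose QA answer a' = QA(c, q') best matches a."""
    def score(q):
        predicted = qa_model(context, q)   # zero-shot QA call, e.g. GPT-3
        return overlap(predicted, answer)  # token F1 (SQuAD) / ROUGE-L
    return max(questions, key=score)
```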
\begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{figures/tab2n3.png} \caption{Human evaluation results, averaged over three annotators' scores, normalized per column. Left: SQuAD\xspace; right: Fairytale QA\xspace. Abbreviations on the x-axis denote \textbf{G}rammatical correctness, \textbf{O}ffensiveness, \textbf{C}larity, \textbf{R}elevance, \textbf{I}mportance, \textbf{S}pecificity, \textbf{A}nswerability, \textbf{A}veraged \textbf{H}uman \textbf{R}ating (over all dimensions to the left), and \textbf{O}verall \textbf{H}uman \textbf{R}ating (an overall score given by annotators). Exact scores are provided in Appendix~\ref{sec:app:additional_res}.} \label{fig:human_eval} \vspace{-0.3em} \end{figure*} \paragraph{Prompt-based Score} We propose a two-step procedure (Figure~\ref{fig:prompt_score}) for prompting GPT-3\xspace to answer the same set of meta-questions (i.e., questions about the quality of a given question) used for human evaluation (\S\ref{sec:problem}). In step 1, given a context-question pair, GPT-3\xspace is prompted to answer a meta-question as an open question (as opposed to choosing among a list of options) as well as to verbalize a reason for its answer. In step 2, GPT-3\xspace is prompted to choose from a list of options representing the rating scale of the meta-question. We empirically observe that without the first step, GPT-3\xspace output tends to have a low-entropy distribution, i.e., it often chooses the same option for a given meta-question regardless of the context-question pair. In contrast, the model appears to be better primed with respect to output diversity with the additional first step, which is in line with observations made in some existing studies \citep{nye2021show,wei2022chain}. Similar to human evaluation, we also prompt GPT-3\xspace to assign an overall score to a question. We use \textit{overall prompt-based score (OPS)} to denote this GPT-3\xspace-labeled score, and \textit{averaged prompt-based score (APS)} to denote the average score over all individual meta-questions.
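The two-step procedure can be sketched as follows. The prompt strings are schematic stand-ins (the actual prompts are given in Appendix~\ref{sec:app:prompt}), `llm` is a placeholder for a text-in/text-out GPT-3\xspace call, and the option parsing is deliberately simplified:

```python
def prompt_score(llm, context, question, meta_question, options):
    """Two-step rating: free-form answer + reason first, then a choice."""
    # Step 1: answer the meta-question openly and verbalize a reason.
    step1 = llm(f"Context: {context}\nQuestion: {question}\n"
                f"{meta_question} Explain your reasoning.")
    # Step 2: condition on step 1 and pick a point on the rating scale.
    step2 = llm(f"{step1}\nNow choose one of: {', '.join(options)}.")
    reply = step2.lower()
    for i, opt in enumerate(options):  # simplified parsing of the choice
        if opt.lower() in reply:
            return i
    return 0  # fall back to the lowest rating if nothing matches

def averaged_prompt_score(llm, context, question, meta_questions, options):
    """APS: mean rating over all meta-questions (OPS asks a single one)."""
    return sum(prompt_score(llm, context, question, m, options)
               for m in meta_questions) / len(meta_questions)
```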
\section{Results and Discussion} \label{sec:results} To measure the performance of a selection method (\S\ref{sec:method}), we use it to select one out of $k$ questions stochastically sampled from GPT-3\xspace, and score the selection with the evaluation metrics outlined in \S\ref{sec:problem:Metrics}. We set $k=5$ for all our experiments. \rev{Additionally, we test the performance of ensembles of multiple methods. To ensure comparability, we normalize the scores obtained from each selection method into the range between 0 and 1, and use their average score to perform question selection.} \subsection{Reference-based evaluation} Reference-based evaluations are automatic metrics applied to the entire test sets of SQuAD\xspace and Fairytale QA\xspace. We observe in Table~\ref{tab:ref_based_eval} that on both datasets, all question selection methods outperform $M_s$\xspace, \rev{the average score over all five sampled questions,} validating the effectiveness of the proposed methods. While all individual methods outperform \rev{the greedy generation baseline} $M_g$\xspace on SQuAD\xspace, round-trip is the best-performing one, outperforming $M_g$\xspace on both datasets. It can be further improved via ensembling with $n$-gram\xspace and/or prompt-based scores (using uniform weights). Note that prior studies require a large amount of labeled data for model training/fine-tuning, while GPT-3\xspace performs zero-shot inference. Despite this major difference in learning paradigm, most GPT-3\xspace-based methods proposed here outperform previous results by significant margins on the SQuAD\xspace dataset --- even the least performant samples $M_{\underline{s}}$\xspace \rev{(lowerbound)} achieve competitive results. For Fairytale QA\xspace, however, only the best samples $M_{\overline{s}}$\xspace \rev{(upperbound)} outperform previous results \citep{xu2022fantastic}, indicating room for improvement in question selection strategies for future work. \subsection{Human Evaluation} Human evaluation consists of $16,800$ annotations (collected from 87 annotators) evenly split across the two datasets (details in Appendix~\ref{sec:app:human_study}). For question generation (among many language generation tasks), model outputs may exhibit linguistic diversity while maintaining semantic equivalence. It is thus highly problematic to evaluate such outputs against a single reference (i.e., a ``ground-truth'' question). Figure~\ref{fig:human_eval} empirically shows that the ground-truth (GT) questions provided in the datasets often fail to receive the highest human ratings, on many occasions scoring lower than stochastic samples from GPT-3\xspace ($M_s$\xspace). Consequently, we strongly advocate for human evaluation, which we believe is highly effective in improving the generalizability of our results to real-world applications. Another prominent observation is that $n$-gram\xspace and APS perform quite differently on the two datasets. On SQuAD\xspace, $n$-gram\xspace similarity outperforms the other individual methods, with further noticeable improvements via ensembling with round-trip. APS, on the other hand, does not work nearly as well, performing the worst for almost all meta-questions. In contrast, $n$-gram\xspace (particularly tri-gram) similarity shows the worst performance on Fairytale QA\xspace, while APS outperforms all other methods by a noticeable margin. We posit that this reversed trend between $n$-gram\xspace and APS can be explained by the distinct natures of the two datasets. For SQuAD\xspace, the sentence-level contexts are relatively short and simple, with strictly extractive answers (i.e., the answers being sub-strings of the corresponding contexts). As a result, paraphrasing the context can be a rather effective question generation strategy, hence the stronger correlation between question quality and the $c$--$q$ $n$-gram\xspace similarity. On the other hand, with multi-paragraph contexts and abstractive, open-ended answers, questions are more likely posed about abstract ideas rather than simple context paraphrases. Consequently, $n$-gram\xspace similarity, which favors local context paraphrasing, can no longer serve as a good question selection strategy. \subsection{Limitations and Future Work} We acknowledge that our system has some limitations that warrant further investigation. For example, one needs to be mindful of the specific downstream applications of the proposed methods, both in terms of potentially large variance in out-of-distribution performance (e.g., \textit{divergent} question generation, \citealt{abdelghani2022conversational}) and in terms of mitigating harmful/toxic content in educational applications \citep{bender2021dangers}. We also acknowledge the prohibitively restrictive access to the GPT-3 model at the time of writing.
We do believe that this constraint will relax over time, and we hope that our proposal can shed light on research and applications with more accessible LLMs such as GPT-J \citep{gpt-j} and BLOOM \citep{bloom} in future work. \rev{ \section{Conclusion} In this study, we investigate the practical problem of selecting the best output from multiple samples generated by an LLM. Using question generation as a case study, we propose two prompt-based approaches that select high-quality questions according to multiple aspects of question quality. To alleviate real-world constraints on using large LMs, such as computational resources and data availability, the proposed methods rely neither on model fine-tuning nor on human annotation. Extensive experiments with both automatic and human evaluations evince the effectiveness of our approach in selecting high-quality questions from stochastic samples. }
{ "timestamp": "2022-09-23T02:15:18", "yymm": "2209", "arxiv_id": "2209.11000", "language": "en", "url": "https://arxiv.org/abs/2209.11000" }
\section{Introduction} For a simple graph $G$ of order $n$, denote the eigenvalues of its adjacency matrix by $\lambda_1(G)\ge\lambda_2(G)\ge \dots\ge\lambda_n(G)$. The degree of a vertex $u$ in $G$, denoted by $d_G(u)$, is defined as the number of edges incident to $u$, and we write $d(u)$ when $G$ is clear. The well-known Alon--Boppana bound may be stated as follows: \begin{thm}\cite{alon1986, Nilli1991} For any $d$-regular graph $G$ containing two edges at distance at least $2k+2$, \begin{equation}\label{eq-Alon} \lambda_2(G)\ge2\sqrt{d-1}\left(1-\frac{1}{k+1}\right)+\frac{1}{k+1}. \end{equation} \end{thm} It is natural to generalize the Alon--Boppana bound to graphs that are not necessarily regular. One plausible extension is to ask whether the inequality $\liminf_{i\rightarrow\infty}\lambda_2(G_i)\ge 2\sqrt{d-1}$ still holds for any sequence of graphs $\{G_i\}$ with average degree at least $d$ and growing diameter. However, Hoory \cite{Hoory2005} constructed a counterexample disproving this statement. Furthermore, he proposed the notion of $r$-robustness and provided an Alon--Boppana-type bound for a class of irregular graphs, as follows. For a graph $G$, the \emph{ball} of radius $r$ centered at $v$, denoted by $G(v,r)$, is the induced subgraph of $G$ on the vertices at distance at most $r$ from $v$. A graph $G$ \emph{has $r$-robust average degree $\ge d$} if the induced subgraph obtained by deleting any ball of radius $r$ has average degree at least $d$. \begin{thm}\cite{Hoory2005}\label{thm-Hoory} Given a real number $d\ge 2$ and a natural number $r\ge 2$, if a graph $G$ has $r$-robust average degree $\ge d$, then \begin{equation}\label{eq-Hoory} \max\{\lambda_2(G),|\lambda_{n}(G)|\}\ge 2\sqrt{d-1}\left(1-c\cdot \frac{\log r}{r}\right), \end{equation} where $c$ is an absolute constant. \end{thm} Recently, Jiang \cite{Jiang2019} presented a method of unraveled balls to improve the bound above. \begin{thm}\cite{Jiang2019}\label{thm-Jiang} Given a real number $d\ge 1$ and a natural number $r\ge 1$, if a graph $G$ has $r$-robust average degree $\ge d$, then \begin{equation}\label{eq-Jiang} \lambda_2(G)\ge 2\sqrt{d-1}\cos\left(\frac{\pi}{r+1}\right). \end{equation} \end{thm} It is beneficial to consider normalized Laplacian eigenvalues, since they reveal many fundamental properties of a graph (see \cite{chung1997}). In particular, the second smallest one is closely related to the expansion and algorithmic properties of a graph (see \cite[Chapter 2]{chung1997}). The normalized Laplacian matrix $\mathcal {L}(G)$ of a graph $G$ is defined to be $I-D^{-\frac 12}AD^{-\frac 12}$, where $D$ is the diagonal degree matrix with diagonal entries $D(v,v)=d(v)$ for $v\in V(G)$, and $A$ is the adjacency matrix of $G$. Denote the eigenvalues of $\mathcal {L}(G)$ by $0=\mu_1(G)\le\mu_2(G)\le \dots\le\mu_n(G)\le 2$. In terms of the normalized Laplacian, the well-known Alon--Boppana bound says that for any $d$-regular graph $G$, \begin{equation}\label{eq-Alon-nor} \mu_2(G)\le1-2\frac{\sqrt{d-1}}{d}+o(1), \end{equation} as the diameter of $G$ goes to infinity. One may ask whether the assertion that $\limsup_{i\rightarrow \infty}\mu_2(G_i)\le1-2\frac{\sqrt{d-1}}{d}+o(1)$ still holds for any sequence of graphs $G_i$ with average degree at least $d$ and growing diameter.
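As a concrete illustration of the notation above, the following sketch (a minimal numerical check, assuming \texttt{numpy} and \texttt{networkx} are available; it is not taken from any of the cited works) computes $\mu_2(G)$ for a random $d$-regular graph and compares it with the value $1-2\frac{\sqrt{d-1}}{d}$:

```python
import numpy as np
import networkx as nx

# mu_2 of a random d-regular graph versus the Alon--Boppana value
# 1 - 2*sqrt(d-1)/d that it approaches as the diameter grows.
d, n = 3, 200
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
mu = np.sort(np.linalg.eigvalsh(L))
print("mu_2 =", mu[1])
print("Alon-Boppana value:", 1 - 2 * np.sqrt(d - 1) / d)
```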
Indeed, Young \cite{Young2022} proved that there exist some fixed $\varepsilon> 0$ and a sequence of graphs $\{G_i\}$ with common average degree $d$ and common maximum degree (and hence growing diameter) such that for every $i$, \begin{equation} \mu_2(G_i)\ge 1-2\frac{\sqrt{d-1}}{d}+\varepsilon. \end{equation} Furthermore, Young \cite{Young2022} generalized the Alon--Boppana bound on the second smallest normalized Laplacian eigenvalue to graphs that may be irregular, by adapting the idea of Hoory \cite{Hoory2005}. Young indicated that \cite[Theorem 7]{Young2022} can be phrased in the $r$-robust average degree framework of Hoory. The \emph{second order average degree} of a graph $G$ is defined to be $$\widetilde{d}_{G}=\frac{\sum_{u\in V(G)}d(u)^2}{\sum_{u\in V(G)}d(u)}.$$ A graph $G$ is \emph{$(r,d,\widetilde{d})$-robust} if the induced subgraph of $G$ obtained by deleting any ball of radius $r$ has average degree at least $d$ and second order average degree at most $\widetilde{d}$. \begin{thm}\cite{Young2022}\label{thm-Young} Given real numbers $\widetilde{d}\ge d\ge 2$ and a natural number $r\ge 2$, if a graph $G$ is $(r,d,\widetilde{d})$-robust, then \begin{equation}\label{eq-Young} \mu_2(G)\le 1- \frac{2\sqrt{d-1}}{\widetilde{d}} \left(1-c\cdot\frac {\log r}{r}\right). \end{equation} \end{thm} In addition, Chung \cite{Chung2016} used a different approach to obtain, under some technical assumptions on the graph, the analogous upper bound $\mu_2(G)\le 1-\sigma\left(1-\frac ck\right)$, where $\sigma=\frac{2\sum_{u\in V (G)} d(u)\sqrt{d(u)-1}}{\sum_{u\in V(G)}d(u)^2}$, $k$ is the diameter of $G$, and $c$ is a constant. Let $G$ be a simple graph. The matrix $D^{-\frac 12}AD^{-\frac 12}$ associated with $G$ may be regarded as the adjacency matrix of the weighted graph $(G,w_0)$ with edge weights $w_0(uv)=(d(u) d(v))^{-\frac 12}$. Moreover, if we denote the second largest eigenvalue of $D^{-\frac 12}AD^{-\frac 12}$ by $\lambda_2(G,w_0)$, then the second smallest normalized Laplacian eigenvalue $\mu_2(G)$ is equal to $1-\lambda_2(G,w_0)$. Based on the observations above, it is plausible that considering weighted graphs could yield a tighter upper bound on $\mu_2(G)$. For related results, see \cite{angel2015, chung1997, Srivastava2018, Mohar2010}. The motivation of this paper is to address the problem above by combining the weighting idea of Young \cite{Young2022} with the unraveled-ball idea of Jiang \cite{Jiang2019}. Indeed, we present an upper bound on $\mu_s(G)$ for $s\ge 2$. Hereafter, denote the sets of real numbers and positive real numbers by $\mathcal{R}$ and $\mathcal{R}^+$, respectively. Recall that the degree of a vertex $u$ in $G$, denoted by $d(u)$, is the number of edges incident to $u$. One of the main results of this paper is as follows: \begin{thm}\label{thm1} If a connected (positively) weighted graph $(G,w)$ has minimum degree at least 2, then for any natural number $r$ with $r\ge 1$ and any function $g\colon V(G)\rightarrow \mathcal{R}^+$, there exists a vertex $v$ of $G$ such that the weighted spectral radius of the unraveled ball $\widetilde{G}(v,r)$ of $G$ satisfies \begin{equation}\label{eq-thm1} \lambda_1(\widetilde{G}(v,r),w) \ge \frac{2\sum_{v_1\in V(G)}\sqrt{d(v_1)-1}\sum_{v_2\in N(v_1)}~w(v_1v_2)~\sqrt{g(v_1)g(v_2)}}{\sum_{v\in V(G)} g(v)d(v)}\cos\left(\frac{\pi}{r+2}\right), \end{equation} where the neighborhood $N(v_1)$ of $v_1$ is $\{v\in V(G)\colon v_1v\in E(G)\}$. \end{thm}
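For readers who want to experiment with the bound, the right-hand side of \eqref{eq-thm1} is straightforward to evaluate. The sketch below is our own illustration (not the authors' code), with $w$ and $g$ passed as callables; the usage example instantiates $w_0(uv)=(d(u)d(v))^{-1/2}$ and $g=d$, for which the expression reduces to $\frac{2\sum_u d(u)\sqrt{d(u)-1}}{\sum_u d(u)^2}\cos\left(\frac{\pi}{r+2}\right)$, the bound of Corollary \ref{coro1} below:

```python
import numpy as np
import networkx as nx

def thm1_rhs(G, w, g, r):
    """Right-hand side of the bound in Theorem 1 (w, g as callables)."""
    num = sum(np.sqrt(G.degree(v1) - 1) *
              sum(w(v1, v2) * np.sqrt(g(v1) * g(v2)) for v2 in G[v1])
              for v1 in G)
    den = sum(g(v) * G.degree(v) for v in G)
    return 2.0 * num / den * np.cos(np.pi / (r + 2))

# Usage with w0 and g = d on a 3-regular graph (minimum degree >= 2):
G = nx.random_regular_graph(3, 50, seed=1)
w0 = lambda u, v: 1.0 / np.sqrt(G.degree(u) * G.degree(v))
print(thm1_rhs(G, w0, g=lambda v: G.degree(v), r=10))
```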
\begin{rem} \begin{description} \item[(1)] The function $g$ in Theorem~\ref{thm1} is just a technical device, which may be chosen appropriately to obtain simpler bounds for different weighted graphs. For instance, if $w\equiv1$, then taking $g\equiv 1$ in Theorem~\ref{thm1} recovers \cite[Theorem 1]{Jiang2019}. As a result, Theorem~\ref{thm1} extends the result of Jiang to weighted graphs. \item[(2)] A graph is positively weighted if it has a positive weight on every edge. \end{description} \end{rem} The other main result of this paper is an upper bound on the $s$th smallest normalized Laplacian eigenvalue. A graph $G$ is \emph{$(r,d,\widetilde{d},s)$-robust} if the induced subgraph of $G$ obtained by sequentially deleting any $s$ balls of radius $r$ has average degree at least $d$ and second order average degree at most $\widetilde{d}$. When $s=1$, $(r,d,\widetilde{d},1)$-robustness is just $(r,d,\widetilde{d})$-robustness. We prove the following: \begin{thm}\label{thm2} Given real numbers $\widetilde{d}\ge d\ge 2$ and natural numbers $r\ge 1$ and $s\ge 2$, if a graph $G$ is $(r,d,\widetilde{d},s-1)$-robust, then the $s$-th smallest normalized Laplacian eigenvalue satisfies \begin{equation}\label{eq-m2} \mu_s(G)\le 1- \frac{2\sqrt{d-1}}{\widetilde{d}}\cos\left(\frac{\pi}{r+1}\right). \end{equation} \end{thm} \begin{rem} Note that $$\cos\left(\frac{\pi}{r+1}\right) = 1-\frac{\pi^2}{2(r+1)^2}+o\left(\frac{1}{r^3}\right).$$ It is easy to see that \eqref{eq-m2} is a slight improvement of \eqref{eq-Young} for large $r$ and $s=2$. \end{rem} The rest of the paper is organized as follows. In Section 2, some related concepts and notation are introduced. In Section 3, we prove Theorem \ref{thm1} and derive some corollaries. In Section 4, we present a lower bound on the weighted spectral radius of a ball, which is used to prove Theorem \ref{thm2} in Section 5. \section{Preliminary} \begin{defi} A graph $G^{\prime}$ (possibly infinite) is a covering of another graph $G$ via a covering map $\varphi\colon V(G^{\prime})\rightarrow V(G)$ if $\varphi$ is a surjective map and a local isomorphism: for every vertex $v$ of $G^{\prime}$, the map $\varphi$ induces a bijection from the edges incident to $v$ in $G^{\prime}$ to the edges incident to $\varphi(v)$ in $G$. \end{defi} \begin{defi} The universal cover $\widetilde{G}$ of a connected graph $G$ is a covering of $G$ that is a (possibly infinite) tree. \end{defi} The universal cover of a connected graph is unique up to isomorphism \cite{Leighton1982,Bordenave2019}. If $G$ is a finite tree, then the universal cover of $G$ is $G$ itself. Otherwise, the universal cover of $G$ is an infinite graph. For instance, the universal cover of a $d$-regular graph is the infinite $d$-regular tree. A \emph{non-backtracking walk} of $G$ is defined as a walk $(v_0,v_1,\dots)$ on $G$ satisfying $v_i\neq v_{i+2}$ for every $i$ with $i\ge 0$. In particular, any walk of length at most 1 is non-backtracking. From the viewpoint of random walks, the universal cover can be defined in an equivalent way: \begin{defi}\cite{Leighton1982} The universal cover $\widetilde{G}$ of a connected graph $G$ is defined as follows: the vertex set consists of all non-backtracking walks on $G$ starting at a fixed vertex $v_0$, and two vertices are adjacent if and only if one is a simple extension of the other. The covering map $\varphi\colon V(\widetilde{G})\rightarrow V(G)$ is defined by $\varphi((v_0,\dots,v_i))=v_i$ for $(v_0,\dots,v_i)\in V(\widetilde{G})$.
\end{defi} In fact, the universal cover is independent of the choice of the fixed vertex $v_0$. Given a graph $G$, the unraveled ball of $G$ is the ball of radius $r$ centered at $v$ in the universal cover $\widetilde{G}$ of $G$; thus it has an equivalent definition: \begin{defi} Given a graph $G$ and a vertex $v$ of $G$, the unraveled ball of $G$, denoted by $\widetilde{G}(v,r)$, is defined as follows: the vertex set consists of all non-backtracking walks on $G$ of length at most $r$ starting at $v$, and two vertices are adjacent if and only if one is a simple extension of the other. \end{defi} Next we introduce weighted graphs. \begin{defi} A weighted graph $(G,w)$ is a graph $G$ along with a weight function on edges, $w\colon E(G)\rightarrow \mathcal{R}^+$. A weighted graph $(G_1,w_1)$ is called a weighted subgraph of $(G,w)$ if $G_1$ is a subgraph of $G$ and $w_1=w|_{E(G_1)}$. For simplicity of notation, we denote the weighted subgraph by $(G_1,w)$ instead of $(G_1,w_1)$. \end{defi} Note that unweighted graphs are just the special case where all edge weights are equal to 1. Let $(G,w)$ be a weighted graph, and $\widetilde{G}$ the universal cover of $G$. Via the covering map $\varphi\colon V(\widetilde{G})\rightarrow V(G);~\varphi((v_0,\dots,v_i))=v_i$, the weight function $w$ lifts in a natural way to a weight function $\widetilde{w}\colon E(\widetilde{G})\rightarrow \mathcal{R}^+$, defined by $\widetilde{w}((v_0,\dots,v_{i-1})(v_0,\dots,v_{i-1},v_i))=w(v_{i-1}v_i)$. Thus we naturally obtain a weighted universal cover $(\widetilde{G},\widetilde{w})$ from the weighted graph $(G,w)$. For simplicity of notation, we write $(\widetilde{G},w)$ instead of $(\widetilde{G},\widetilde{w})$. \begin{defi} For a weighted graph $(G,w)$ of order $n$, the adjacency matrix $A(G,w)$ of $(G,w)$ is defined by $$(A(G,w))_{u,v}=\left\{ \begin{array}{ll} w(uv),& \text{if}~ uv\in E(G); \\ 0, & \text{otherwise}. \\ \end{array} \right.$$ The weighted spectral radius of $G$ is the spectral radius of $A(G,w)$, denoted by $\lambda_1(G,w)$. Order the eigenvalues of $A(G,w)$ as $\lambda_1(G,w)\ge \lambda_2(G,w)\ge \dots \ge\lambda_n(G,w)$. \end{defi} \section{Proof of Theorem \ref{thm1} and Corollaries} The proof of Theorem \ref{thm1} uses the old idea of constructing a weighted test function via non-backtracking walks (see e.g. \cite{Jiang2019,Chung2016,Srivastava2018}), combined with the eigenvector of a path (see e.g. \cite{Jiang2019}). ~ \begin{Proof of 1} Let $W_i$ denote the set of all non-backtracking walks of length $i$ on $G$. For $e=(v_0,v_1)\in W_1$, let $T_e$ be the component of $\widetilde{G}(v_0,r+1)-(v_0)$ containing the vertex $e$; it is a tree. Let $T$ be the disjoint union of the graphs $\{T_e\}_{e\in W_1}$; thus $T$ is a forest with vertex set $\bigcup_{i=1}^{r+1}W_i$. By regarding every vertex $(v_0,v_1,\dots,v_i)$ of $T_e$, where $1\le i\le r+1$, as the vertex $(v_1,\dots,v_i)$ of $\widetilde{G}(v_1,r)$, we observe that $(T_e,w)$ is a weighted subgraph of $(\widetilde{G}(v_1,r),w)$. By the monotonicity of the weighted spectral radius, $\lambda_1(\widetilde{G}(v_1,r),w)\ge\lambda_1(T_e,w)$ for $e=(v_0,v_1)\in W_1$. Since $\lambda_1(T,w)=\max\{\lambda_1(T_e,w)\colon e\in W_1\}$, there exists a vertex $e^{\ast}$ such that $\lambda_1(T,w)=\lambda_1(T_{e^{\ast}},w)$. It follows that there exists a vertex $v_1^{\ast}$, the terminal vertex of $e^{\ast}$, such that $\lambda_1(\widetilde{G}(v_1^{\ast},r),w)\ge\lambda_1(T,w)$.
It suffices to prove that for any function $g\colon V(G)\rightarrow \mathcal{R}^+$, $$\lambda_1(T,w)\ge 2\cos\left(\frac{\pi}{r+2}\right)\cdot\frac{\sum_{(v_1,v_2)\in W_1}\sqrt{d(v_1)-1}~w(v_1v_2)~\sqrt{g(v_1)g(v_2)}} {\sum_{v\in V(G)} g(v)d(v)}.$$ Now we consider a Markov chain on $W_1$ as follows: the initial state $E_1$ is chosen from $W_1$ uniformly at random, and if the current state $E_i=(v_{i-1},v_i)$ is given, the next state $E_{i+1}$ is chosen from $\{(v_i,v_{i+1})\in W_1\colon v_{i+1}\neq v_{i-1}\}$ uniformly at random. The transition matrix $P$ is $$ P_{(u,v),(w,z)}=\left\{ \begin{array}{lll} \frac{1}{d(v)-1}, & \text{if }v=w \text{ and } z\neq u;\hfill& \\ 0,& \text{otherwise}.\hfill& \end{array} \right. $$ We attach $E_1,\dots, E_i$ one by one to form a non-backtracking walk on $G$ of length $i$, denoted by the random variable $Y_i=(X_0,X_1,\dots,X_i)$. It is known that $\lambda=2\cos\left(\frac{\pi}{r+2}\right)$ is the spectral radius of the path $P_{r+1}$ on $r+1$ vertices. Let $(x_1,\dots,x_{r+1})\in \mathcal{R}^{r+1}$ be a positive eigenvector of $P_{r+1}$ associated with $\lambda$. By the Rayleigh principle, it follows that \begin{equation}\label{eq1} \sum_{i=2}^{r+1}2x_{i-1}x_i=\lambda\cdot\sum_{i=1}^{r+1}x_i^{2}. \end{equation} Define the vector \begin{equation} f\colon \bigcup_{i=1}^{r+1}W_i\rightarrow \mathcal{R}; ~ f(\omega)=x_i\sqrt{g(v_i) {\rm Pr}(Y_i=\omega)} \end{equation} for $\omega=(v_0,v_1,\dots,v_i)\in W_i$, where $g\colon V(G)\rightarrow\mathcal{R}^+$ is a fixed vertex weight function. Let $A(w)$ be the adjacency matrix of the weighted forest $(T,w)$. For $\omega=(v_0,\dots,v_{i-1},v_i)$, let $\omega^-= (v_0,\dots,v_{i-1})$. By simple calculations, we have \begin{align} \langle f,f \rangle&=\sum_{i=1}^{r+1}\sum_{\omega\in W_i} f(\omega)^2=\sum_{i=1}^{r+1}x_i^2\sum_{\omega\in W_i} g(v_i) {\rm Pr}(Y_i=\omega),\label{eq2}\\ \langle f,A(w)f \rangle&=\sum_{i=2}^{r+1}\sum_{\omega\in W_i} 2f(\omega^-)f(\omega)\cdot w(\omega^-\omega)\nonumber\\ &=\sum_{i=2}^{r+1}2x_{i-1}x_i\sum_{\omega\in W_i}w(\omega^-\omega)\sqrt{g(v_{i-1})g(v_i)}\sqrt{{\rm Pr}(Y_{i-1}=\omega^-)\cdot {\rm Pr}(Y_i=\omega)}.\label{eq3} \end{align} For simplicity of notation, let \begin{align*} I_i & =\sum_{\omega\in W_i} g(v_i) {\rm Pr}(Y_i=\omega), \\ J_i & =\sum_{\omega\in W_i}w(\omega^-\omega)\sqrt{g(v_{i-1})g(v_i)}\sqrt{{\rm Pr}(Y_{i-1}=\omega^-)\cdot {\rm Pr}(Y_i=\omega)}. \end{align*} In order to complete the proof, it suffices to simplify $I_i$ and $J_i$ for every $i$. Firstly, for $i\ge 1$ we have \begin{align}\label{eq4} I_i={\rm E}\left[ g(X_i)\right]=\sum_{v\in V(G)}g(v)\cdot{\rm Pr}(X_{i}=v). \end{align} Secondly, by the Markov property, for $i\ge 2$ and $\omega=(v_0,v_1,\dots,v_i)\in W_i$, \begin{align*} {\rm Pr}(Y_{i-1}=\omega^-) =\frac{{\rm Pr}(Y_i=\omega)}{{\rm Pr}(E_i=(v_{i-1},v_i)|E_{i-1}=(v_{i-2},v_{i-1}))} =(d(v_{i-1})-1){\rm Pr}(Y_i=\omega). \end{align*} Note that $w(\omega^-\omega)$ is defined to be $w(v_{i-1}v_i)$. Thus it follows that for $i\ge 2$, \begin{align}\label{eq5} J_i&=\sum_{\omega=(v_0,v_1,\dots,v_i)\in W_i}\sqrt{d(v_{i-1})-1}~w(v_{i-1}v_i)\sqrt{g(v_{i-1})g(v_i)}~{\rm Pr}(Y_{i}=\omega)\nonumber\\ &={\rm E}\left[\sqrt{d(X_{i-1})-1}~w(X_{i-1}X_i)\sqrt{g(X_{i-1})g(X_i)}\right]\nonumber\\ &=\sum_{(v_1,v_2)\in W_1}\sqrt{d(v_1)-1}~w(v_1v_2)~\sqrt{g(v_1)g(v_2)}~{\rm Pr}(X_{i-1}=v_1,X_{i}=v_2). \end{align} Now we focus on the probabilities in \eqref{eq4} and \eqref{eq5}. Since the minimum degree of $G$ is at least $2$, the Markov chain has no absorbing states.
It is easy to see that the uniform distribution $x=(\frac{1}{|W_1|},\dots,\frac{1}{|W_1|})$ on $W_1$ is a stationary distribution of the Markov chain, that is, $x=xP^{i-1}$ for $i\ge 1$, where $P$ is the transition matrix. Thus we have ${\rm Pr}(E_i=e)=1/|W_1|$ for $i\ge 1$ and $e\in W_1$, which implies that for $i\ge 1$ and $v\in V(G)$, $${\rm Pr}(X_i=v)=\sum_{e\sim v}{\rm Pr}(E_i=e)=\frac{d(v)}{|W_1|},$$ where $e\sim v$ means that $e$ ranges over all the edges incident to $v$. Hence for $i\ge 2$ and $(v_1,v_2)\in W_1$, \begin{align*} {\rm Pr}(X_{i-1}=v_1,X_i=v_2)&=\sum_{\{u\colon (u,v_1)\in W_1,u\neq v_2\}}{\rm Pr}(E_{i-1}=(u,v_1),E_i=(v_1,v_2))\\ &=\sum_{\{u\colon (u,v_1)\in W_1,u\neq v_2\}}{\rm Pr}(E_i=(v_1,v_2)|E_{i-1}=(u,v_1)){\rm Pr}(E_{i-1}=(u,v_1))\\ &=\sum_{\{u\colon (u,v_1)\in W_1,u\neq v_2\}}\frac{1}{(d(v_1)-1)|W_1|}=\frac{1}{|W_1|}. \end{align*} By rewriting ${\rm Pr}(X_i=v)$ in \eqref{eq4} and ${\rm Pr}(X_{i-1}=v_1,X_i=v_2)$ in \eqref{eq5}, we can simplify $I_i$ and $J_i$, and substitute them into \eqref{eq2} and \eqref{eq3} to obtain \begin{align*} \langle f,f \rangle&=\sum_{i=1}^{r+1}x_i^2\sum_{v\in V(G)} g(v)\frac{d(v)}{|W_1|},\\ \langle f,A(w)f \rangle&=\sum_{i=2}^{r+1}2x_{i-1}x_i\sum_{(v_1,v_2)\in W_1}\sqrt{d(v_1)-1}~w(v_1v_2)~\sqrt{g(v_1)g(v_2)}\frac{1}{|W_1|}. \end{align*} Finally, combining \eqref{eq1}, the equalities above, and the Rayleigh principle, we obtain $$\lambda_1(T,w)\ge\frac{\langle f,A(w)f \rangle}{\langle f,f \rangle}= 2\cos\left(\frac{\pi}{r+2}\right)\frac{\sum_{(v_1,v_2)\in W_1}\sqrt{d(v_1)-1}~w(v_1v_2)~\sqrt{g(v_1)g(v_2)}} {\sum_{v\in V(G)} g(v)d(v)}$$ for any vertex weight function $g$, which completes the proof. \end{Proof of 1} If the weight function $w_0$ is defined by $w_0(uv)=(d(u)d(v))^{-\frac 12}$ for every edge $uv$ of a graph $G$, then setting $g(v)=d(v)$ for every vertex $v$ of $G$ in Theorem \ref{thm1} yields the following corollary. \begin{coro}\label{coro1} If a connected weighted graph $(G,w_0)$ with edge weights $w_0(uv)=(d(u)d(v))^{-\frac 12}$ has minimum degree at least $2$, then for any natural number $r$ with $r\ge 1$, there exists a vertex $v$ of $G$ such that $$\lambda_1(\widetilde{G}(v,r),w_0)\ge \frac{2\sum_{u\in V(G)}d(u)\sqrt{d(u)-1}}{\sum_{u\in V(G)} d(u)^2}\cos\left(\frac{\pi}{r+2}\right).$$ \end{coro} Since the weighted unraveled ball $(\widetilde{G}(v,r),w_0)$ is a weighted induced subgraph of the weighted universal cover $(\widetilde{G},w_0)$, it follows from the monotonicity of the weighted spectral radius that $\lambda_1(\widetilde{G},w_0)\ge\lambda_1(\widetilde{G}(v,r),w_0)$. Thus we obtain a lower bound on $\lambda_1(\widetilde{G},w_0)$ by letting $r$ go to infinity in Corollary \ref{coro1}. \begin{coro}\label{coro11} If a connected weighted graph $(G,w_0)$ with edge weights $w_0(uv)=(d(u)d(v))^{-\frac 12}$ has minimum degree at least $2$, then the weighted spectral radius of its universal cover satisfies $$\lambda_1(\widetilde{G},w_0)\ge \frac{2\sum_{u\in V(G)}d(u)\sqrt{d(u)-1}}{\sum_{u\in V(G)} d(u)^2}.$$ \end{coro} \section{Weighted spectral radius of a ball} For a weighted graph, \emph{the weight of a closed walk} is the product of the weights of all edges on the closed walk. The following result is well known.
\begin{lem}\cite{Mohar1989}\label{lem2} For any connected weighted graph $(G,w)$ (possibly infinite) and every vertex $v$ of $G$, the weighted spectral radius of $(G,w)$ is $$\lambda_1(G,w)=\limsup_{k\rightarrow\infty}\sqrt[2k]{t_{2k}^{(w)}(v)},$$ where $t_{2k}^{(w)}(v)$ is the total weight of all closed walks of length $2k$ from $v$ to itself in $G$. \end{lem} The following lemma establishes a connection between the weighted spectral radius of a ball and that of the corresponding unraveled ball, and is an extension of \cite[Theorem 2.2]{Mohar2010}. \begin{lem}\label{lem1} For every vertex $v$ of a weighted graph $(G,w)$ and any natural number $r$ with $r\ge 1$, $$\lambda_1(G(v,r),w)\ge \lambda_1(\widetilde{G}(v,r),w).$$ \end{lem} \begin{Proof} Recall that the vertex set of $\widetilde{G}(v,r)$ consists of all non-backtracking walks of length at most $r$ starting at $v$. Via the covering map $\varphi\colon V(\widetilde{G})\rightarrow V(G);~\varphi((v_0,\dots,v_i))=v_i$, we can naturally construct a map $\sigma$ sending a closed walk $(\omega=\omega_0,\dots,\omega_{2k}=\omega)$ of length $2k$ in $\widetilde{G}(v,r)$ to a closed walk $(v=v_0,\dots,v_{2k}=v)$ of length $2k$ in $G(v,r)$ for $k\ge 0$, where $v_j$ is the terminal vertex of $\omega_j$ for every $j$. The map $\sigma$ is injective: since the covering map $\varphi$ is a local isomorphism, there exists an inverse map $\tau$ such that $\tau\sigma=\text{id}$. In addition, lifting the weight function $w$ naturally to a weight function of $\widetilde{G}$, the weights of walks are invariant under the map $\sigma$. Hence, the sum of the weights of closed walks of length $2k$ in $\widetilde{G}(v,r)$ is no more than the sum of the weights of closed walks of length $2k$ in $G(v,r)$. Therefore, Lemma \ref{lem2} gives $\lambda_1(G(v,r),w)\ge \lambda_1(\widetilde{G}(v,r),w)$. \end{Proof} Combining Theorem \ref{thm1} and Lemma \ref{lem1}, we prove a lower bound on the weighted spectral radius of a ball of a graph with edge weights $w_0(uv)=(d(u)d(v))^{-\frac 12}$, which is also of independent interest. \begin{thm}\label{thm3} Let $(G,w_0)$ be a weighted graph with edge weights $w_0(uv)=(d(u)d(v))^{-\frac 12}$. If the graph $G$ has average degree $d$ with $d\ge 2$ and second order average degree $\widetilde{d}$, then for any natural number $r$ with $r\ge 1$, there exists a vertex $v$ of $G$ such that \begin{equation} \lambda_1(G(v,r),w_0) \ge \frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+2}\right). \end{equation} \end{thm} \begin{Proof} Since $G$ may have vertices of degree 1, we cannot apply Corollary \ref{coro1} directly. Instead, we consider the 2-core $H$ of $G$, the largest induced subgraph of $G$ with minimum degree at least $2$. Observe that the 2-core $H$ can be obtained from $G$ by sequentially deleting vertices of degree 1. Since removing a vertex of degree 1 from a graph of average degree at least $2$ cannot decrease its average degree, the 2-core $H$ is non-empty. We decompose the proof into two parts. \textbf{Case 1:} The 2-core $H$ is connected.
As in Corollary \ref{coro1}, by setting $w_0(uv)=(d_{G}(u)d_{G}(v))^{-\frac 12}$ and $g(v)=d_{G}(v)$ in Theorem \ref{thm1}, we can derive that there exists a vertex $v$ of $H$ such that $$\lambda_1(\widetilde{H}(v,r),w_0) \ge \frac{2\sum_{u\in V(H)}d_{H}(u)\sqrt{d_{H}(u)-1}}{\sum_{u\in V(H)}d_{H}(u)d_G(u)} \cos\left(\frac{\pi}{r+2}\right).$$ Since $(H(v,r),w_0)$ is a weighted subgraph of $(G(v,r),w_0)$, the monotonicity of the weighted spectral radius and Lemma \ref{lem1} give $$\lambda_1(G(v,r),w_0) \ge\lambda_1(H(v,r),w_0)\ge\lambda_1(\widetilde{H}(v,r),w_0).$$ Combining the inequalities above, we only need to prove that \begin{equation}\label{eq6} \frac{\sum_{u\in V(H)}d_{H}(u)\sqrt{d_{H}(u)-1}}{\sum_{u\in V(H)}d_{H}(u)d_G(u)} \ge \frac{\sqrt{d-1}}{\widetilde{d}}. \end{equation} Recall that the average degree of $H$ is at least $d$. Note that $h(x)=x\sqrt{x-1}$ is a convex function for $x\ge 2$. It follows from Jensen's inequality that \begin{equation}\label{eq6.5} \sum_{u\in V(H)}d_{H}(u)\sqrt{d_{H}(u)-1} \ge\sum_{u\in V(H)}d_{H}(u)\sqrt{d-1}. \end{equation} Then it suffices to prove \begin{equation}\label{eq7} \frac{\sum_{u\in V(H)}d_{H}(u)d_G(u)} {\sum_{u\in V(H)}d_{H}(u)} \le\widetilde{d}=\frac{\sum_{u\in V(G)}d_G(u)^2}{\sum_{u\in V(G)}d_G(u)}, \end{equation} since \eqref{eq6.5} and \eqref{eq7} imply \eqref{eq6}. Let $H^{\prime}$ be the spanning subgraph of $G$ composed of $H$ together with the vertices of $V(G)\setminus V(H)$, taken as isolated vertices. It is obvious that \begin{equation}\label{eq8} \frac{\sum_{u\in V(H)}d_{H}(u)d_G(u)} {\sum_{u\in V(H)}d_{H}(u)} =\frac{\sum_{u\in V(G)}d_{H^{\prime}}(u)d_G(u)} {\sum_{u\in V(G)}d_{H^{\prime}}(u)}. \end{equation} Recall that the 2-core $H$ can be obtained from $G$ by sequentially deleting vertices of degree 1. Thus we can recover $G$ from $H^{\prime}$ by sequentially adding back edges of $G$ in the opposite order; each added edge joins a non-isolated vertex to an isolated vertex of the current graph. Assume that there are $m$ edges to be added. For simplicity of notation, let $G^{(0)}=H^{\prime}$ and $G^{(m)}=G$. In the $i$th step, assume that some edge $v_1v_2$ of $G$ is added to $G^{(i-1)}$, and that $G^{(i)}$ is the resulting graph. Note that $\frac xy\le\frac{x+1}{y+2}$ for $y\ge 2x$. It follows that for all $i$, $$\frac{d_{G^{(i)}}(v_j)}{\sum_{u\in V(G)}d_{G^{(i)}}(u)}\le\frac{d_{G^{(i)}}(v_j)+1}{\sum_{u\in V(G)}d_{G^{(i)}}(u)+2} =\frac{d_{G^{(i+1)}}(v_j)}{\sum_{u\in V(G)}d_{G^{(i+1)}}(u)},~\forall ~j=1,2,$$ $$\frac{d_{G^{(i)}}(z)}{\sum_{u\in V(G)}d_{G^{(i)}}(u)}\le\frac{d_{G^{(i)}}(z)}{\sum_{u\in V(G)}d_{G^{(i)}}(u)+2} =\frac{d_{G^{(i+1)}}(z)}{\sum_{u\in V(G)}d_{G^{(i+1)}}(u)},~\forall ~z\in V(G)\setminus\{v_1,v_2\}.$$ Consequently, we have \begin{equation}\label{eq9} \frac{d_{H^{\prime}}(z)}{\sum_{u\in V(G)}d_{H^{\prime}}(u)}\le \frac{d_{G^{(1)}}(z)}{\sum_{u\in V(G)}d_{G^{(1)}}(u)}\le\dots\le \frac{d_{G}(z)}{\sum_{u\in V(G)}d_{G}(u)},~\forall~ z\in V(G). \end{equation} Therefore, \eqref{eq7} follows from \eqref{eq8} and \eqref{eq9}. \textbf{Case 2:} The 2-core $H$ is disconnected. Then $G$ is also disconnected, since deleting a vertex of degree 1 cannot disconnect a graph. Assume that the disconnected graph $G$ is composed of $G_1,\dots,G_t$. Let $H_i$ be the connected 2-core of $G_i$ with vertex set $V_i$ for $1\le i\le t$.
By the same argument as in Case 1, there exists a vertex $u_i$ of $H_i$ such that $$\lambda_1(G(u_i,r),w_0) \ge \frac{2\sum_{u\in V_i}d_{H}(u)\sqrt{d_{H}(u)-1}}{\sum_{u\in V_i}d_{H}(u)d_G(u)} \cos\left(\frac{\pi}{r+2}\right)=\colon \frac{M_i}{N_i} \cos\left(\frac{\pi}{r+2}\right).$$ One can easily verify that there exists an $i_0\in\{1,\dots,t\}$ such that $$\frac{M_1+\dots+M_t}{N_1+\dots+N_t}\le \max\left\{\frac{M_1}{N_1},\dots,\frac{M_t}{N_t}\right\}=\frac{M_{i_0}}{N_{i_0}}.$$ Thus there exists a vertex $u_{i_0}$ of $H_{i_0}$ such that $$\lambda_1(G(u_{i_0},r),w_0) \ge \frac{2\sum_{u\in V(H)}d_{H}(u)\sqrt{d_{H}(u)-1}}{\sum_{u\in V(H)}d_{H}(u)d_G(u)} \cos\left(\frac{\pi}{r+2}\right).$$ Using the same argument as in Case 1, we can then prove that there exists a vertex $u_{i_0}$ of $G$ such that $$ \lambda_1(G(u_{i_0},r),w_0)\ge \frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+2}\right). $$ \end{Proof} \section{Proof of Theorem \ref{thm2}} Recall that a graph $G$ is $(r,d,\widetilde{d},s)$-robust if every induced subgraph of $G$ obtained by sequentially deleting any $s$ balls of radius $r$ has average degree at least $d$ and second order average degree at most $\widetilde{d}$. Before proving Theorem \ref{thm2}, we provide a lower bound on the $s$-th largest weighted eigenvalue of a graph. \begin{lem}\label{lem4} Let $(G,w_0)$ be a weighted graph with edge weight $w_0(uv)=(d(u)d(v))^{-\frac 12}$. Given real numbers $\widetilde{d}\ge d\ge 2$ and natural numbers $r\ge 1$ and $s\ge 2$, if $G$ is $(r,d,\widetilde{d},s-1)$-robust, then $$\lambda_s(G,w_0)\ge \frac{2\sqrt{d-1}}{\widetilde{d}}\cos\left(\frac{\pi}{r+1}\right).$$ \end{lem} \begin{Proof} We show how to sequentially construct $G_1,\dots,G_s$, a collection of pairwise disjoint weighted induced subgraphs of $G$ such that no vertex of $V(G_i)$ is adjacent to a vertex of $V(G_j)$ for $i\neq j$, and $\lambda_1(G_{i},w_0)\ge\frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+1}\right)$ holds for $1\le i\le s$. For simplicity of notation, let $G^{(0)}=G$. For $1\le i\le s$, assume that we have obtained $G^{(i-1)}$, an induced subgraph of $G$, by sequentially deleting $i-1$ balls of radius $r$ during the previous steps. In the $i$th step, let $H_i$ be a graph obtained by sequentially deleting any $s-i$ balls of radius $r$ from $G^{(i-1)}$. Since $G$ is $(r,d,\widetilde{d},s-1)$-robust, the average degree of $H_i$ is at least $d$, and the second order average degree of $H_i$ is at most $\widetilde{d}$. By Theorem \ref{thm3}, there exists a vertex $v_i$ of $H_i$ such that $$\lambda_1(H_i(v_i,r-1),w_0) \ge\frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+1}\right).$$ Since $H_i$ is a subgraph of $G^{(i-1)}$, $H_i(v_i,r-1)$ is a subgraph of $G^{(i-1)}(v_i,r-1)=\colon G_{i}$. Thus, as desired, we obtain $$\lambda_1(G_{i},w_0)\ge\lambda_1(H_i(v_{i},r-1),w_0)\ge\frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+1}\right).$$ Let $G^{(i)}$ be the induced subgraph of $G^{(i-1)}$ obtained by deleting the ball $G^{(i-1)}(v_i,r)$. We then turn to the next step, until we obtain $G_1,\dots,G_s$. For $1\le i\le s$, let $A_i(w_0)$ be the adjacency matrix of $(G_i,w_0)$, and let $A(w_0)$ be the adjacency matrix of $(G,w_0)$. Note that $A_i(w_0)$ is just the principal sub-matrix of $A(w_0)$ corresponding to $G_i$, and is not equal to $I-\mathcal{L}(G_i)$. Additionally, let $f_i$ be a positive unit eigenvector of $A_i(w_0)$ associated with $\lambda_1(G_i,w_0)$.
We define a vector $g_i\colon V(G)\rightarrow\mathbb{R}$ by $$g_i(u)= \left\{ \begin{array}{cc} f_i(u),\hfill & \text{if } u\in V(G_i); \hfill\\ 0,\hfill&\text{otherwise},\hfill\\ \end{array} \right. $$ for every $i$ with $1\le i\le s$. Since $V(G_i)\cap V(G_j)=\emptyset$ for $i\neq j$, the family $\{g_1,\dots,g_s\}$ is a set of orthonormal vectors. Define $W_0$ as the $s$-dimensional vector space spanned by $\{g_1,\dots,g_s\}$. By the Rayleigh principle, it follows that $$\lambda_s(G,w_0)=\max_{\text{dim}W=s}\min_{f\in W}\frac{\langle f,A(w_0) f\rangle}{\langle f, f\rangle}\ge\min_{f\in W_0}\frac{\langle f,A(w_0) f\rangle}{\langle f, f\rangle}.$$ Recall that $uv\notin E$ for $u\in V(G_i)$ and $v\in V(G_j)$ with $i\neq j$. Taking any non-zero element $f=c_1g_1+\dots+c_sg_s$ in $W_0$, we have \begin{align*} \frac{\langle f,A(w_0) f\rangle}{\langle f, f\rangle} &=\frac{c_1^2\langle f_1,A_1(w_0) f_1\rangle+\dots+c_s^2\langle f_s,A_s(w_0) f_s\rangle}{c_1^2\langle f_1, f_1\rangle+\dots+c_s^2\langle f_s, f_s\rangle} \\ & =\frac{c_1^2\lambda_1(G_1,w_0)\langle f_1,f_1\rangle+\dots+c_s^2\lambda_1(G_s,w_0)\langle f_s, f_s\rangle}{c_1^2\langle f_1, f_1\rangle+\dots+c_s^2\langle f_s, f_s\rangle}\\ &\ge\frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+1}\right). \end{align*} Finally, we complete the proof by $$\lambda_s(G,w_0)\ge\min_{f\in W_0}\frac{\langle f,A(w_0) f\rangle}{\langle f, f\rangle}\ge\frac{2\sqrt{d-1}}{\widetilde{d}} \cos\left(\frac{\pi}{r+1}\right).$$ \end{Proof} \begin{Proof of 2} Recall that the normalized Laplacian matrix $\mathcal {L}$ of $G$ is defined to be $I-D^{-\frac 12}AD^{-\frac 12}$, where $D$ is the diagonal degree matrix of $G$ and $A$ is the adjacency matrix of $G$. If we consider a weighted graph $(G,w_0)$ with edge weight $w_0(uv)=(d(u) d(v))^{-\frac 12}$, then we have $\mathcal {L}=I-A(G,w_0)$. By applying Lemma \ref{lem4}, we obtain the desired upper bound on $\mu_s(G)=1-\lambda_s(G,w_0)$. \end{Proof of 2}
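The relation $\mathcal{L}=I-A(G,w_0)$, and hence the correspondence $\mu_s(G)=1-\lambda_s(G,w_0)$ between the normalized Laplacian eigenvalues and the weighted adjacency eigenvalues, can be checked directly; a minimal Python sketch on a small graph of our own choosing (an illustration only, not used in any proof):
\begin{verbatim}
import numpy as np

# An 8-cycle plus one chord: a small connected graph with all degrees >= 2.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A[0, 4] = A[4, 0] = 1.0

d = A.sum(axis=1)
Aw = A / np.sqrt(np.outer(d, d))     # adjacency of (G, w0), w0(uv) = (d(u)d(v))^(-1/2)
L = np.eye(n) - Aw                   # normalized Laplacian I - D^(-1/2) A D^(-1/2)

lam = np.linalg.eigvalsh(Aw)[::-1]   # lambda_1 >= lambda_2 >= ...
mu = np.linalg.eigvalsh(L)           # mu_1 <= mu_2 <= ...
print(np.allclose(mu, 1.0 - lam))    # True: mu_s(G) = 1 - lambda_s(G, w0)
\end{verbatim}
\subsection*{Acknowledgements} The authors are grateful to the referees for their valuable suggestions and comments, which greatly improved the manuscript.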
{ "timestamp": "2022-09-23T02:11:41", "yymm": "2209", "arxiv_id": "2209.10883", "language": "en", "url": "https://arxiv.org/abs/2209.10883" }
\section{Introduction} The dynamics of granular matter has been an emerging field for several decades now \cite{JNB96,dG99}. This is partly due to the many industrial and engineering applications of this kind of material \cite{AT06}, and partly due to the fact that granular set-ups can be used as prototype non-equilibrium systems for experiments \cite{OU98} while, from a theoretical viewpoint, they allow for the deployment of the theory of non-equilibrium statistical mechanics \cite{G03}, fluid mechanics \cite{VU09} and materials science \cite{LAYKA14,GRLV2020}. Moreover, advances in granular dynamics theory have clearly put in evidence that, both at the mesoscopic and macroscopic levels, the dynamics of granular matter has features analogous to those of molecular matter, while at the same time presenting a much more complex behavior \cite{VU09}. Thus granular dynamics can be regarded, from the theoretical point of view, as an extension or generalization of the dynamics of molecular matter \cite{G03,MS16}. For instance, stratification \cite{AT06}, phase transitions \cite{PMEU04,Melby2005,VU08}, ordering \cite{OU98}, pattern formation, segregation \cite{HKGMO99}, mixing \cite{Melby2005}, fluid convection \cite{RLV20}, hydrodynamic instabilities \cite{EWMBL07}, turbulence \cite{I12a,I03}, etc. are all phenomena that are known to be present in granular matter in significantly more varied forms and behaviors. Just as an illustration, the set of steady base flows that can be observed in a plane Fourier/Couette configuration (a fluid confined within two infinite parallel walls) includes those that are present in molecular gases plus new steady flows that are specific to granular fluids \cite{VU09}. In particular, the Fourier configuration (two static parallel walls) for a molecular gas yields steady flows with constant heat flux; in a granular gas, however, such constant heat flux states are only possible if the confining parallel walls are moving (Couette configuration) \cite{VSG10}.

In this work, we focus instead on phase transitions and order/disorder phenomenology in a two-dimensional system. As is well known, an equilibrium fluid in two dimensions (2D) crystallizes via a continuous transition that is mediated by an intermediate phase, the hexatic phase, which is specific to two dimensions. This process is well described by the KTHNY scenario (after its main authors Kosterlitz, Thouless, Halperin, Nelson, Young; see their independent works \cite{Kosterlitz1972,KT73,NH79,Y79}). The hexatic phase appears exclusively in 2D and is characterized by quasi-long-ranged orientational correlations (with power-law decay) and short-ranged translational correlations (with exponential decay). This liquid-hexatic-crystal scenario has also been observed in a monolayer of vertically vibrated macroscopic spheres \cite{OU05,KT15} and in 2D particle simulations of systems of active particles \cite{DLSCGP18}. In both cases, however, important departures with respect to the equilibrium phase transition are reported. Indeed, the KTHNY scenario appears in both granular and active matter only as a special case within a more complex framework of different combinations of continuous and discontinuous transitions. Furthermore, it has been reported recently that in a configuration of ping-pong balls rolling over a flat surface the KTHNY scenario is apparently absent \cite{Abate2005,LGRV20,LGRAYV21,KMN21}; i.e., no hexatic phase could be detected.
Instead, two consecutive phase transitions were detected, both with two coexisting phases. In the first one, at low granular temperature, a glass is observed in coexistence with the arrest phase (with this term, we refer to particles that remain static at very low energy input due to friction \cite{OU98,NRTMS14}). The second phase coexistence occurs at higher packing fraction, where we can observe, in certain ranges of driving intensity, a coexistence between a liquid and a hexagonal crystal \cite{LGRAYV21}. In summary, the phase behavior of a monolayer of rolling spheres suggests that the dynamics of each of the observed phases has very peculiar properties with respect to the 2D (either equilibrium or non-equilibrium) analogs discussed above. Thus, a detailed analysis of the dynamics of the phases of rolling spheres is needed. Moreover, in order to complete the description of non-equilibrium 2D phase transitions, whose relevance we have discussed above, this analysis should be undertaken for each (coexisting) phase separately. Therefore, we study in this work the specific features and properties of the dynamics of each of these two coexisting pairs of phases. We will show that strong energy non-equipartition occurs between the coexisting phases in all cases. We will also focus specifically on the diffusive properties and velocity autocorrelations of each of the observed phases. We show that the glass and crystal phases are clearly subdiffusive. Surprisingly, the liquid phase can display either normal diffusion or weakly subdiffusive or superdiffusive behavior. As we will see, these transitions in the diffusive behavior of the system occur in a continuous way. Furthermore, in the glassy phase, particle velocities are strongly anticorrelated at early times, whereas the crystal anticorrelations are weak. The paper is structured as follows: Section~\ref{sec:experiments} is devoted to the description of the experimental set-up and methods, and also to a qualitative description of the observed phase behavior. In Section~\ref{sec:results}, the results for particle diffusion and velocity correlations of each of the observed phases are analyzed separately and in detail. Finally, in Section~\ref{sec:discussion} the results and final conclusions are discussed. \section{Description of the experiments} \label{sec:experiments} \subsection{Setup} \label{subsec:setup} The experimental configuration we use in this work was designed in our lab. It consists of an air-table set-up \cite{MBF81}. In our case, it is composed of two essential parts: a) the driving unit, which produces a stable quasi-laminar air upflow and consists of a high-power fan (SODECA HCT-71-6T); and b) the arena, which consists of a flat metallic plate with a hexagonal lattice of perforated circular holes (of 3~mm diameter), surrounded by circular walls (PLA plastic) of $4.5~\mathrm{cm}$ height. The metallic plate is carefully levelled to be horizontal (so that gravity plays no role in the in-plane dynamics). Both parts are connected by a pair of perpendicular channels that conduct the air released by the fan upwards to the metallic grid. See Figure~\ref{fig:sketch} for a schematic representation of this configuration. The metallic grid is square-shaped ($80\times80~\mathrm{cm^2}$), but a circular plastic wall is placed inside it, centered, so that the particles are enclosed within a circular region of radius $R = 36.25~\mathrm{cm}$.
In the middle of the conducting channels there is a foam that homogenizes the upflow, which impinges on a set of spherical particles (ping-pong balls) placed on the metallic grid. The spherical particles are all identical, with a diameter of $\sigma=4~\mathrm{cm}$ and a mass density $\rho=0.08~\mathrm{g\,cm^{-3}}$ (ABS plastic material). The ping-pong ball configuration is inspired by a previous work by Ojha et al., where the equation of motion of a Brownian particle, consisting of a ping-pong ball on an air table, was solved \cite{OLDLD04}. Fan power is carefully adjusted so that the particles never lose physical contact with the plate. Thus, within the appropriate ranges of fan power, the air upflow past the spheres produces turbulent vortexes \cite{vD82} that yield stochastic horizontal movement of the spheres, and the particle dynamics (if sphere rolling is excluded) is therefore strictly two-dimensional. As fan power is increased, the system passes through a series of different physical configurations, which are accessed through phase transitions. We have observed phase coexistence during these transitions in experiments over a range of particle densities. We characterize particle density by means of the packing fraction, defined here as $\phi\equiv N\sigma^2/R^2$, where $N$ is the number of particles present in the system. \begin{figure}[t!] \centering \includegraphics[height=0.3\textheight]{./fig1.pdf} \caption{Sketch of the experimental set up. \label{fig:sketch} } \end{figure} Everything is recorded by means of a high-speed camera (Phantom VEO 410L) at 250~fps. The experiment movies are processed by a particle tracking code that we developed specifically for this configuration. This code is composed of a series of OpenCV \cite{opencv} and TrackPy \cite{dan_allan_2019_3492186} functions, which allow us to obtain all particle positions in the acquired images and tag each particle so that it is tracked through the entire movie. Experiments are 100~s long, much longer than the typical transient time towards the steady state. In this way, a set of $\approx 2\times10^4$ steady-state statistical replicas (corresponding to the frames recorded after the transient) is available to process. From these data sets (please refer to the Data Statement at the end of this document), a statistically meaningful description of the magnitudes and configurations described here is obtained for each experiment. \subsection{Phase behavior} \label{subsec:phases} Figure~\ref{fig:snapshots} presents a series of movie snapshots displaying the different phase states that we have detected in our experiments, for two different packing fraction values. For each packing fraction, snapshots are placed in ascending order of fan power. At the lowest fan power ($\phi=0.18$), several particles are still static, since the intensity of the turbulent vortexes is not strong enough to overcome static friction. We call this static phase the \textit{arrest} phase. (Eventually, the arrest phase can develop into a quasi-static ordered state that has traditionally been called the \textit{collapse} phase in analogous configurations \cite{OU98}.) At sufficiently high air current intensity, the vortexes become strong enough that all particles undergo stochastic movement. However, we can observe caging effects for these particles. Here, we have detected coexistence between the arrest phase and a glassy phase of the caged moving particles (Figure~\ref{fig:snapshots} a).
The arrest phase eventually disappears, giving rise to glass-liquid phase coexistence (Figure~\ref{fig:snapshots} b). At higher density ($\phi=0.55$), we observe consecutively: a liquid phase (Figure~\ref{fig:snapshots} c), a crystal phase (Figure~\ref{fig:snapshots} d), and crystal-liquid coexistence (Figure~\ref{fig:snapshots} e). At this point, if fan power is increased further, the crystal shrinks (it \textit{melts}), disappearing completely at high enough air current intensity. This last stage is not shown, since it looks much like the snapshots in Figure~\ref{fig:snapshots} (c),(d). In any case, grasping the phase configuration from these snapshots alone is not straightforward, and for this reason we analyze the particle trajectory structure in more detail in the next section. \begin{figure}[h] \includegraphics[height=0.33\textheight]{./fig2.pdf} \caption{Snapshots of the different phase configurations observed in experiments. Packing fraction is $\phi=0.18$ for (a)-(b) and $\phi=0.55$ for (c)-(e). Granular temperatures are, in order: $T/m = [0.16,~0.74,~0.38,~0.47,~0.70]~\sigma\mathrm{^2 s^{-2}}$ for the configurations (in the same order): glass-arrest, glass-liquid, liquid, crystal, crystal-liquid.\label{fig:snapshots} } \end{figure} \section{Results} \label{sec:results} \subsection{Trajectories and granular temperature field} \label{subsec:track_T} In order to analyze the dynamic properties in more detail (except for the static arrest phase), we compute separately the trajectory shape, temperature field, diffusion coefficient, and velocity autocorrelations of each of the observed phases. Figure~\ref{fig:track_T} shows particle trajectories (left column) and granular temperature fields for a representative set of experiments. The experiments correspond to two different densities (low and high), with each subset in ascending order of upflow current, so that we can see the phases that consecutively appear as more energy is input into the system. Figure~\ref{fig:track_T} (a) shows two qualitatively different types of arrangements of particle trajectories: a disordered lattice of particle trajectories (moving particles that remain close to a disordered set of fixed points), and a disordered lattice of static particles (the arrest phase). Indeed, the former set of trajectories can be identified as a glassy phase since, although particles undergo continuous stochastic movement, caging effects are predominant \cite{Desmond2009,Rodriguez-Rivas2019} and a disordered but permanent particle trajectory structure (lattice) can be observed. To our knowledge, it is not very common to find glass transitions at such low densities. With respect to the latter, it is apparent that the particles remain static during the complete 100~s experiment. From this qualitative difference between the two phases, a strong energy non-equipartition emerges. Indeed, as we can see in Figure~\ref{fig:track_T} (b), the region corresponding to the arrest phase has vanishing granular temperature $T$, whereas for the glassy phase $T$ is clearly non-null. Note that, contrary to what has been observed in thin layers, the static phase does not necessarily appear as a hexagonally ordered collapse phase, as it does in a vertically vibrated monolayer of spheres.
This peculiar arrest phase, which is also present in a vibrated granular monolayer \cite{NRTMS14}, disappears gradually here as the upflow current is increased, to a point where we can observe two-phase coexistence between glass-like and liquid-like phases, as in Figure~\ref{fig:track_T} (c): the liquid phase occupies the region where all trajectories mix and cross each other during the experiment, in contrast with the disordered pattern of localized trajectories that is visible in the upper left corner. As we can see in Figure~\ref{fig:track_T} (d), energy non-equipartition is strong here again, with the glass phase being noticeably cooler. At higher density (packing fraction $\phi=0.55$), we observe, consecutively: a monophasic liquid-like system (Figure~\ref{fig:track_T}~e-f); a hexagonal crystal phase (Figure~\ref{fig:track_T}~g-h); and a two-phase system, with a liquid coexisting with a hexagonal lattice (Figure~\ref{fig:track_T}~i-j). Notice that non-equipartition is also present in the case of the liquid-crystal two-phase system, with the crystal colder than the liquid (Figure~\ref{fig:track_T} h). \begin{figure}[H] \centering \includegraphics[height=0.80\textheight]{./fig3.pdf} \caption{Phase behavior of our system, in a central region of interest. The left column represents particle trajectories; the right column shows the corresponding granular temperature 2D fields ($T$). Packing fraction is $\phi=0.18$ for (a)-(d) and $\phi=0.55$ for (e)-(j). The granular temperatures for each pair of panels are, respectively: $T/m = [0.16,~0.74,~0.38,~0.47,~0.70]~\sigma\mathrm{^2 s^{-2}}$. In (a)-(b) we can see phase coexistence between a glassy phase and the arrest phase at low density. In (c)-(d), at low density but higher $T$, there is glass-liquid phase coexistence. (e)-(f) shows that the system is completely disordered (there is only a liquid phase), a state that can be observed at intermediate temperatures for all densities. At higher densities, if the liquid is further heated (the air upflow is increased), a cooler crystallite develops in coexistence with the liquid, as in (g)-(h). At stronger driving, the liquid tends to disappear and the crystal occupies the whole system. \label{fig:track_T} } \end{figure} \begin{figure}[t!] \includegraphics[width=0.95\textwidth]{./fig4.pdf} \caption{Mean squared displacement for three representative experiments at $\phi = 0.18$; the central panel corresponds to the case presented in Figure~\ref{fig:track_T}(c)-(d).} \label{fig:msd} \end{figure} \begin{figure}[t!] \includegraphics[width=0.95\textwidth]{./fig5.pdf} \caption{Mean squared displacement for three representative experiments at $\phi = 0.55$. The central panel corresponds to the case presented in Figure~\ref{fig:track_T}(i)-(j).} \label{fig:msd0} \end{figure} Overall, the fact that non-equipartition is noticeably present in all two-phase configurations indicates that each phase has its own peculiar dynamics. In particular, this can be an indication that the diffusion process in each phase has different scales and behavior \cite{MJCB14}, especially because the structure of the Brownian trajectories in each phase is very different, as the trajectories in the left column of Figure~\ref{fig:track_T} show. For this reason, after first identifying which trajectories belong to each phase in all experiments, we have computed the diffusion coefficient of each phase.
We have done so by tracking the mean squared displacement (MSD) of each phase. Figures~\ref{fig:msd} and \ref{fig:msd0} show the evolution of ensemble mean squared displacements, which in 2D can be defined as \cite{LVR22} \begin{equation} \label{eq:msd} \langle\Delta r(t)^2\rangle\equiv\langle\Delta x(t)^2 + \Delta y(t)^2\rangle, \end{equation} where $\Delta x(t)^2\equiv (1/\mathcal{N}(t))\sum_{\{t_0\}}[x(t+t_0)-x(t_0)]^2$ (and analogously for $\Delta y(t)^2$). For each lag time under steady-state conditions, the squared displacements $\Delta x(t)^2, \Delta y(t)^2$ are obtained by averaging over the $\mathcal{N}(t)$ available initial times $t_0$. As a guide to the eye, the ballistic ($\alpha=2$) and normal ($\alpha=1$) diffusion slopes are indicated inside each panel of Figures~\ref{fig:msd} and \ref{fig:msd0}. More specifically, in Figure~\ref{fig:msd} we can see the following cases: glass (a), glass-liquid (b), and liquid (c); whereas in Figure~\ref{fig:msd0} we can see: only crystal (a), crystal-liquid (b), and only liquid (c). It is very apparent that the behavior of the MSD differs strongly between phases. In particular, the monophasic glassy configuration (Figure~\ref{fig:msd} a) presents an MSD with a local maximum at the end of the ballistic regime, after which it shows a characteristic curvature in the diffusive part of the curve, which is, moreover, strongly subdiffusive. The MSD behavior of the glass-like phase is thus characterized by a short plateau followed by an increase (when particles escape their current ``cage'' and move to a new location). At higher $T$, in Figure~\ref{fig:msd} (b), we can see the glass-liquid coexistence. In this case, the emerging liquid phase is still weakly subdiffusive (although with a clearly faster MSD compared to the companion glass). In Figure~\ref{fig:msd} (c), the system with only a liquid phase already shows a normal diffusion scenario. By contrast, Figure~\ref{fig:msd0} (a) shows a single-crystal configuration, with the diffusive part of the MSD close to stagnation (zero growth of the MSD with time); i.e., the dynamics is very strongly subdiffusive. In the case of crystal-liquid coexistence, Figure~\ref{fig:msd0} (b), the less disordered phase (the crystal) clearly undergoes subdiffusion, whereas the liquid shows normal diffusion. Normal diffusion can also be seen in Figure~\ref{fig:msd0} (c), where the single liquid phase is recovered. An important and surprising result is the observation of glassy transitions with clear caging processes at low densities, when in general these processes are observed (to the best of our knowledge) in dense granular fluids \cite{KSZ10}. This result may be the outcome of an effective potential arising from the interaction between the spherical balls mediated by the intervening air flow. \subsection{Diffusion coefficient} \label{subsec:diff} Thus, from the results in Figures~\ref{fig:msd} and \ref{fig:msd0}, we may conclude that the evolution of the MSD of each phase is qualitatively very different. For this reason, we compute the diffusion coefficient separately for each phase, according to the relation \cite{MJCB14} \begin{equation} \label{eq:diff} \langle\Delta r(t)^2\rangle=(4D)t^\alpha, \end{equation} where $\alpha$ is the diffusive exponent, obtained beforehand from a linear fit to the diffusive part (after the ballistic regime) of MSD curves of the type displayed in Figure~\ref{fig:msd}.
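As an illustration, the following minimal Python sketch (not the authors' analysis code; the trajectory array \texttt{traj} is a hypothetical name for the output of the tracking stage) computes the ensemble MSD of eq. (\ref{eq:msd}) and extracts $\alpha$ and $D$ of eq. (\ref{eq:diff}) from a log-log fit:
\begin{verbatim}
import numpy as np

# traj: tracked positions, shape (n_frames, n_particles, 2), at fps = 250
# (hypothetical array name; in practice it comes from the TrackPy output).
def ensemble_msd(traj):
    n = traj.shape[0]
    out = np.empty(n - 1)
    for lag in range(1, n):                  # average over initial times t0
        d = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(d**2, axis=-1))
    return out

fps = 250
curve = ensemble_msd(traj)
t = np.arange(1, len(curve) + 1) / fps
sel = (t >= 3.0) & (t <= 6.0)                # fit window in the diffusive part
alpha, intercept = np.polyfit(np.log(t[sel]), np.log(curve[sel]), 1)
D = np.exp(intercept) / 4.0                  # from <dr^2> = 4 D t^alpha
print(alpha, D)
\end{verbatim}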
As is known, the diffusive exponent is $\alpha=2$ in the ballistic regime and $\alpha=1$ in the diffusive regime if diffusion is \textit{normal} (with $\alpha<1$ for subdiffusion and $2>\alpha>1$ for superdiffusion \cite{MJCB14}). We present our measurements of the diffusion coefficient in two figures. In Figure~\ref{fig:D_vs_phi}, $D$ is represented vs. packing fraction, for series of experiments in different ranges of $T$: $T/m<0.6\; \sigma^2/\mathrm{s}^2;\; 0.6~\sigma^2/\mathrm{s}^2< T/m < 0.8~\sigma^2/\mathrm{s}^2;\; 0.8~\sigma^2/\mathrm{s}^2< T/m <1.2~\sigma^2/\mathrm{s}^2$ and $T/m> 1.2~\sigma^2/\mathrm{s}^2$, whereas in Figure~\ref{fig:D_vs_T} we plot $D$ vs. $T$ for three representative packing fraction values ($\phi=0.18; 0.46; 0.55$). Figure~\ref{fig:D_vs_phi} highlights the diffusive stages of the different phase configurations, including those with phase coexistence (coexisting phases are joined by dashed vertical lines). As we can see, at $T/m<0.6~\sigma^2/\mathrm{s}^2$ (top left panel) the diffusion coefficient tends to decrease with increasing $\phi$ in general. Moreover, at very low $T$ and low packing fractions only glass or liquid phases are visible, with the liquid coexisting with the glass; at intermediate packing fractions we find crystal-liquid coexistence, and at larger $\phi$ only the crystal is detected, in this case with the lowest $D$ values. At higher intermediate temperatures ($0.6~\sigma^2/\mathrm{s}^2< T/m < 0.8~\sigma^2/\mathrm{s}^2$, top right panel; and $0.8~\sigma^2/\mathrm{s}^2< T/m <1.2~\sigma^2/\mathrm{s}^2$, bottom left) we can see the glass-liquid coexistence at low density again; the effects of larger $T$ cause the withdrawal of the crystal-liquid coexistence at intermediate $\phi$, leaving the liquid (red symbols) alone. Again, at higher $\phi$, crystal-liquid coexistence and pure crystal are detected. Finally, in the largest range of $T$ values, it is apparent that only the liquid is observed (except for one configuration with the densest system we used) and that in this regime the diffusion coefficient is nearly constant with respect to packing fraction, except for a steep decay at large $\phi$ (where the only two cases of coexistence with a crystal are observed). It is also interesting to note that an extrapolation of the curve averaged over the crystalline states extends to the low-density glass transition zones. In summary, $D$ tends to decrease for denser systems, except at very high $T$, where it remains approximately constant. Figure~\ref{fig:D_vs_T}, which represents $D$ vs. $T$, summarizes well the quantitative differences in the diffusion coefficient for the three phases (glass, liquid, crystal), together with the ranges of coexistence of the glass and the crystal with the liquid phase. Overall, the liquid predominates at low and moderate density (left and center panels), whereas the glass and the crystal predominate at very low and high density, respectively. It can also be observed that both the glass and the crystal are less diffusive than the liquid, as was to be expected, with the crystal systematically having the lowest values of the diffusion coefficient. \begin{figure}[t!] \includegraphics[width=0.471\textwidth]{./fig6a.pdf} \includegraphics[width=0.471\textwidth]{./fig6b.pdf} \includegraphics[width=0.471\textwidth]{./fig6c.pdf} \includegraphics[width=0.471\textwidth]{./fig6d.pdf} \caption{Diffusion coefficient $D$ vs.
packing fraction $\phi$, divided into four panels according to the overall granular temperature of each experiment.\label{fig:D_vs_phi}} \end{figure} \begin{figure}[t] \includegraphics[width=0.33\textwidth]{./fig7a.pdf} \includegraphics[width=0.33\textwidth]{./fig7b.pdf} \includegraphics[width=0.33\textwidth]{./fig7c.pdf} \caption{Average diffusion coefficients represented against granular temperature for three different packing fractions. Each point corresponds to an experiment; where coexistence is visible, we have split $D$ into two different points, for the fluid (red) and the crystal/glass phase (blue/green). \label{fig:D_vs_T}} \end{figure} As mentioned before, prior to computing the diffusion coefficient we determine the diffusive exponent $\alpha$ defined by eq. (\ref{eq:diff}), whose value determines whether the system undergoes superdiffusion ($\alpha>1$), subdiffusion ($\alpha<1$) or normal diffusion ($\alpha=1$) \cite{LVR22,MJCB14}. We plot in Figure~\ref{fig:alpha} the measurements of $\alpha$ for all the performed experiments together, represented as a function of the system granular temperature $T$ for all particle densities combined (represented in the form of the packing fraction $\phi$). Red points signal the liquid-phase diffusive exponents, green stands for the glass phase and blue for the crystal. As we can see, diffusion is weakest in the crystal phase, with very low values of $\alpha$. The glass phase is also very subdiffusive, but in general with stronger diffusion compared to the crystal. The liquid, however, can be either weakly subdiffusive or weakly superdiffusive, oscillating around the normal diffusion value for all experiments, except in the range of very low $T$, where the distinction between glass and liquid is not entirely clear. \begin{figure}[ht!] \centering\includegraphics[width=0.45\textwidth]{./fig8.pdf} \caption{Diffusive exponent represented against granular temperature for all experiments. It has been calculated by averaging the logarithmic slope of the MSD in the [3-6]~s range. Each point corresponds to an experiment; where coexistence is visible, we have split $\alpha$ into two different points, for the fluid (red) and the crystal/glass phase (blue/green). \label{fig:alpha}} \end{figure} \subsection{Velocity autocorrelations} \label{subsec:autocorrelations} We have also computed the velocity autocorrelation function for the glass, liquid and crystal phases. Velocity autocorrelations provide information on the dynamics of particle collisions, in particular on the statistical relation between pre-collisional and post-collisional velocities. We define the velocity autocorrelation function at lag time $\tau$ as usual \cite{Melby2005} \begin{equation} A_{\mathbf{v}}(\tau) = \frac{\langle\mathbf{v}(t) \cdot\mathbf{v}(t+\tau)\rangle}{\langle\mathbf{v}(t)\cdot\mathbf{v}(t)\rangle}, \label{eq:autocorrelation} \end{equation} where $\langle\dots\rangle$ stands for ensemble averaging over all steady states at initial times $t_0$.
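For reference, a minimal Python sketch of this estimator (again assuming a velocity array \texttt{v} obtained from the tracked positions; a hypothetical name, not the authors' code):
\begin{verbatim}
import numpy as np

# v: particle velocities of shape (n_frames, n_particles, 2), e.g. obtained
# from finite differences of the tracked positions (hypothetical array name).
def velocity_autocorrelation(v, max_lag):
    norm = np.mean(np.sum(v * v, axis=-1))            # <v(t).v(t)>
    acf = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        dots = np.sum(v[:len(v) - lag] * v[lag:], axis=-1)
        acf[lag] = np.mean(dots) / norm               # eq. (3), averaged over t
    return acf                                        # acf[0] = 1 by construction
\end{verbatim}
Figure~\ref{fig:A_dilute} shows the velocity correlations $A_{\mathbf{v}}(\tau)$ across the glass-liquid transition, and Figure~\ref{fig:A_dense} shows $A_{\mathbf{v}}(\tau)$ across the crystal-liquid transition. It is to be noted that particles in the glass phase (left panel in Figure~\ref{fig:A_dilute}) show strong velocity anticorrelations at early times ($A_{\mathbf{v}}(\tau<1)<0$), and that these anticorrelations are transmitted to the coexisting liquid (center panel of Figure~\ref{fig:A_dilute}).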
Surprisingly as well, the depth of the anticorrelation well increases in the glass-liquid two-phase system with respect to the pure glass (left panel). Furthermore, the liquid remains anticorrelated at $\tau<1$ even when the glass has disappeared at high $T$. By contrast, in the crystal-liquid transition the pure liquid phase does not display anticorrelations (right panel in Figure~\ref{fig:A_dense}), together with an increase in the time required for the autocorrelation function to vanish for the first time. However, there are weaker anticorrelations in the pure crystal (left panel in Figure~\ref{fig:A_dense}) and in the crystal-liquid two-phase state, along with a very short time for the first cancellation of the autocorrelation function, typical of the crystal phase. Let us remark here that the right panels of Figures~\ref{fig:A_dilute} and \ref{fig:A_dense} combined reveal that the liquid phase has a variety of internal behaviors. This variety of behavior is closely related to the occurrence of the glass transition at low densities because, as mentioned before, at low densities there is a repulsive interaction between the particles mediated by the upward air flow (as if they had a soft core with a diameter greater than that of the balls), and when the density is increased this effective potential does not prevent direct collisions between the spherical balls. \begin{figure}[ht] \includegraphics[width=0.99\textwidth]{./fig9.pdf} \caption{Normalized velocity autocorrelation for three different temperatures at $\phi = 0.18$. They correspond to the cases presented for the MSD in Figure~\ref{fig:msd}. \label{fig:A_dilute}} \end{figure} \begin{figure}[ht] \includegraphics[width=0.99\textwidth]{./fig10.pdf} \caption{Normalized velocity autocorrelation for three representative experiments at $\phi = 0.55$. They correspond to the cases presented for the MSD in Figure~\ref{fig:msd0}. \label{fig:A_dense}} \end{figure} \section{Discussion} \label{sec:discussion} We have studied the nearly-2D dynamics of a system of rolling (inelastic) spheres. The dynamics of the set of spheres is activated by means of the turbulent vortexes that originate from an air upflow past the spheres. As we have seen, the phase behavior of the system is very complex: we have been able to detect an arrest phase (particles that remain static at low energy input), a glass phase (a disordered lattice of Brownian particles with sporadic jumps to other lattice positions), a liquid (a completely disordered phase) and a hexagonal crystal. In particular, the glass phase appears at very low densities, which to our knowledge is a very rare situation \cite{Rodriguez-Rivas2019}. Moreover, the glass and the hexagonal crystal can coexist with the liquid. Additionally, the glass can also coexist with the arrest phase, at low air current (dynamics in the process of activation). In fact, the dynamics of the system is so complex that we have been able to detect important qualitative differences in the behavior of a single phase. For instance, as we mentioned above, the velocities of particles in the liquid phase can be either strongly anticorrelated at early times or not anticorrelated at all, depending on the configuration of the system. As another example, the crystal can display a vanishing diffusive exponent (and, in any case, the crystal is always very subdiffusive, see Figure~\ref{fig:alpha}). In general, the diffusive properties of the observed phases are rather different.
Both the glass and the crystal are always subdiffusive and present anticorrelated velocities. By contrast, the liquid always presents nearly normal diffusion (departures from normal diffusion can be attributed here either to measurement error or to limitations of our set-up). Measurement of the granular temperature field confirms that in all cases of phase coexistence there is significant energy non-equipartition. This reinforces the idea that these transitions occur in states very far from equilibrium. It may be because of this (i.e., that our experimental configurations seem to be very far from equilibrium) that the phase transition scenario described here differs completely from the KTHNY scenario described for 2D equilibrium systems \cite{S88} and also, under certain conditions, for non-equilibrium systems such as a vibrated granular monolayer \cite{OU05,KT15} and 2D active Brownian disks \cite{DLSCGP18}. Here, the KTHNY scenario is absent and the hexatic phase has not been observed. Furthermore, all transitions we observed in this work occur through phase coexistence, contrary to the continuous liquid-hexatic-crystal phase transition scenario described by the KTHNY theory. Therefore, our results differ fundamentally in this aspect from previous results in both equilibrium and non-equilibrium systems (where the hexatic phase has always been observed, at least under certain conditions). It remains for future work to study in more detail the structure of this intriguing phase behavior. \authorcontributions{J.F. G.-S. and M. A. L.-C. performed all the experiments. M. A. L.-C. and F. V. R. developed the particle tracking and processing codes. M. A. L.-C. prepared all figures. All authors participated in the formal analysis of the experimental data. A. R.-R. and F. V. R. reviewed and edited the manuscript. Design of the experiment, conceptualization, original draft preparation, and supervision were performed by F. V. R. All authors have read and agreed to the published version of the manuscript.} \funding{We acknowledge funding from the Government of Spain through Agencia Estatal de Investigaci\'on (AEI) project No. PID2020-116567GB-C22. A.R.-R. also acknowledges financial support from Consejer\'ia de Transformaci\'on Econ\'omica, Industria, Conocimiento y Universidades de la Junta de Andaluc\'ia/FEDER through project P20-00816, and FSE through post-doctoral grant No. DC00316 (PAIDI 2020). F. V. R. is supported by Junta de Extremadura grant No. GR21091, partially funded by the ERDF. The APC was funded by MDPI.} \dataavailability{Experimental data tables and trajectories are available in a public repository, at:\\ \texttt{https://doi.org/10.5281/zenodo.7097642}.} \acknowledgments{The authors are indebted to the Taller de Mec\'anica de la Escuela de Ingenier\'ias Industriales for the construction of the air table.} \conflictsofinterest{The authors declare no conflict of interest.}
{ "timestamp": "2022-09-23T02:12:50", "yymm": "2209", "arxiv_id": "2209.10924", "language": "en", "url": "https://arxiv.org/abs/2209.10924" }
\section{Introduction} \subsection{Main result} The Gamma function and the Riemann zeta function are defined as usual as: \bean \Gamma(s) & = & \int_0^\infty e^{-x}x^{s-1}dx, \quad \mathfrak{R}(s)>0,\\ \zeta(s) & = &\sum_{n=1}^\infty \frac{1}{n^s}, \quad \mathfrak{R}(s)>1, \eean and by analytic continuation. We often write $\Gamma\zeta(s)=\Gamma(s)\zeta(s)$ for a complex number $s$ with real part $\mathfrak{R}(s)$. Throughout the paper, we use the following notations and conventions: \begin{itemize} \item $N,n,k,j$ are natural integers, possibly $0$ or $-1$; the convention $\sum_{n=2}^0=\sum_{n=2}^1=0$ holds; \item $i^2=-1$, and $\gamma=0.577215665\cdots$ is the Euler constant; \item $B_j$ denotes the $j$-th Bernoulli number (see Section \ref{bernoulli} for more details); \item $S(n,k)$ are the Stirling numbers of the second kind ($S(n,k)=0$ if $k>n$, see Section \ref{stirling}); \item The notation for binomial coefficients is standard. \end{itemize} The main result of this paper is the following. \begin{theorem}\label{main} For all $N\ge0$, \bean \frac{(-4)^N}{2\pi}\int_{-\infty}^\infty t^{2N} \left|\Gamma\zeta\left(\frac{1}{2}+it\right)\right|^2 dt & = & \log(2\pi)-\gamma -4N + \left(\frac{4^N}{2}-1\right) B_{2N}+ \sum_{j=2}^{2N} T_{2N,j}\frac{\zeta(j)B_{j}}{j}, \eean where for all $j\ge2$, \bean T_{N,j} & = & (j-1)!\sum_{n=2}^{N} \binom{N}{n} 2^{n} \left[(-1)^n S(n+1,j) + (-1)^j S(n,j-1)\right]. \eean \end{theorem} Let us make four comments. (i) Notice that the $T_{2N,j}$'s are integers and the numbers $\zeta(j)B_j/j$ only involve $\pi$, $j$ and $B_{2j}$, since $2(2j)!\ \zeta(2j)=(-1)^{j+1}B_{2j}(2\pi)^{2j}$ and $B_{2j+1}=0$, $j\ge1$. We keep this elegant notation in reference to the connection with the fundamental works of Bettin and Conrey \cite{BC13a, BC13b} (see below for more details). \\ (ii) The quantities under study \bean M^{\Gamma\zeta}_{k} & = & \int_{-\infty}^\infty t^{k} \left|\Gamma\zeta\left(\frac{1}{2}+it\right)\right|^2 dt, \quad k\ge0, \eean are moments related to a measure whose density involves $|\zeta|^2$. These moments are not to be confused with the moments of the Riemann zeta function \bean \int_0^T\left|\zeta\left(\frac{1}{2}+it\right)\right|^{2k}dt, \quad T>0, \quad k\ge1, \eean which constitute a very important topic in analytic number theory and have been the subject of a huge research effort; see recently e.g. \cite{CK19, HRS19, Naj20} and the fundamental references and connections therein.\\ (iii) The sequence $M^{\Gamma\zeta}$ entirely characterizes $|\zeta|$ on the critical line. Indeed, on the one hand, $|\Gamma(s)|^2$ is entirely known due to Euler's reflection formula (where we set $s=1/2+it$ for real $t$): \bean |\Gamma(s)|^2 = \Gamma(s)\Gamma(1-s) = \frac{\pi}{\sin(\pi s)} = \frac{2\pi}{e^{\pi t}+e^{-\pi t}}. \eean We could have written $1/\cosh (\pi t)$ within the measure instead of $\Gamma$, but we keep this notation in reference to its basic relationship with the Mellin-Plancherel isometry (see Lemma \ref{moment-ram}). On the other hand, using $|\Gamma(s)|^2=O(e^{-\pi t})$ and the crude bound $\zeta(s)=O(t)$ as $t\to+\infty$ (see e.g. \cite[Corollary 3.7 p.234]{Ten95}), one has \bean \int_{-\infty}^\infty |t|^{k} \left|\Gamma\zeta\left(s\right)\right|^2 dt = O\left(\int_{0}^\infty t^k e^{-t}dt\right) = O(k!)\ , \eean so the Hamburger moment problem on $(-\infty,\infty)$ is determinate (\cite[Prop.1.5 p.88]{Sim98}), i.e. $|\Gamma\zeta(s)|^2dt$ is the only measure verifying Theorem \ref{main}.
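In practice, both sides of Theorem \ref{main} can also be compared numerically for small $N$; a Python sketch using mpmath (a sanity check with helper names of our own, not part of any proof; see also Section~\ref{sec:numerical}):
\begin{verbatim}
from mpmath import (mp, mpf, mpc, quad, inf, gamma, zeta, pi,
                    euler, log, bernoulli, binomial, factorial)

mp.dps = 25

def stirling2(n, k):                      # Stirling numbers of the second kind
    if k == n: return mpf(1)
    if k < 1 or k > n: return mpf(0)
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def T(N, j):
    return factorial(j - 1) * sum(binomial(N, n) * 2**n *
        ((-1)**n * stirling2(n + 1, j) + (-1)**j * stirling2(n, j - 1))
        for n in range(2, N + 1))

def lhs(N):                               # the moment integral (even integrand)
    f = lambda t: t**(2 * N) * abs(gamma(mpc(0.5, t)) * zeta(mpc(0.5, t)))**2
    return (-4)**N / (2 * pi) * 2 * quad(f, [0, inf])

def rhs(N):
    out = log(2 * pi) - euler - 4 * N + (mpf(4)**N / 2 - 1) * bernoulli(2 * N)
    return out + sum(T(2 * N, j) * zeta(j) * bernoulli(j) / j
                     for j in range(2, 2 * N + 1))

for N in (0, 1, 2):
    print(N, lhs(N), rhs(N))              # the two columns agree
\end{verbatim}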
Notice that $M^{\Gamma\zeta}_{2k+1}=0$ since $t\mapsto |\Gamma\zeta(s)|$ is even.\\ (iv) The right hand side of the main formula is the $2N$-th derivative at $0$ of a special function unveiled by Ramanujan in 1915 \cite{Ram15}. The latter is related to the ``exponential auto-correlation'' function $A$ introduced in \cite{DH21a}: \bean A(v) & = & \int_0^\infty \left(\frac{1}{xv}-\frac{1}{e^{xv}-1}\right)\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)dx, \qquad v>0. \eean \subsection{Secondary result} We can now state our secondary result, on which Theorem \ref{main} heavily relies. As usual, $\delta_{k,j}$ denotes the Kronecker symbol, and $H_k$ is the harmonic number: \bean H_k & = & \sum_{j=1}^k \frac{1}{j}, \qquad k\ge 1, \eean with the convention $H_{-1}=H_0=0$. Set \bea C=\frac{\log(2\pi)-\gamma}{2}=0.6303307\cdots \eea \begin{theorem}\label{second} For all $k\ge0$, \bean A^{(k)}(1) & = & (-1)^k k! \left( (1+\delta_{k,0})C -\frac{1}{2(k+1)} - \frac{H_{k-1}}{2} + \sum_{j=2}^{k} {k \choose j-1} \frac{\zeta(j)B_j}{j} \right). \eean \end{theorem} The last sum in this equality is exactly the same term as in Theorem 1-Lemma 1 of \cite{BC13b}, since $A(x)$ is related to their ``period'' function $\psi(x)$ up to some $1/x$ and $\log(x)$ factors, as shown in \cite{DH21a}. These facts allow for a short proof of Theorem \ref{second}; see Section \ref{bettin-conrey}. We also provide a self-contained and elementary proof, whose various techniques might be of independent interest, showing e.g. how the Bettin--Conrey numbers arise from a combinatorial and real-analysis setting. \subsection{Previous works and motivation} The so-called second moment of zeta $\int_0^T|\zeta(s)|^2dt$ appears in a variety of contexts and has been well understood since Hardy and Littlewood in 1916; see \cite[Chap. VII]{Tit86}. Integrating on $(0,\infty)$ requires a weight, and one encounters the denomination ``weighted moment'' in the literature. For instance, an asymptotic expansion of $\int_0^\infty|\zeta(s)|^2 e^{-\delta t}dt$, $\delta>0$, can be obtained, see e.g. \cite[Theorem 7.15 p.164]{Tit86}, and \cite{BC13a} for new remarkable formulas with convergent asymptotic series (see also \cite{BC13c}). Second moments are also used and studied for other Dirichlet series, see e.g. \cite{BISS18, ABBRS19} for various applications combining interesting tools. Higher weighted moments of zeta, especially the fourth one, have also been studied, see e.g. \cite[Chap. VII]{Tit86}, \cite{IM06}, \cite{BC13a}, and references therein. The motivation for studying $M^{\Gamma\zeta}$ stems from a generalization of the Nyman-Beurling criterion (NB) for the Riemann hypothesis (RH). NB is an approximation problem of the indicator function of $(0,1)$ in $L^2(0,\infty)$ by linear combinations of functions $t\mapsto\{\theta_k/t\}$, where $\theta_k\in(0,1]$ and $\{\cdot\}$ is the fractional part function. B\'aez-Duarte \cite{BD03} showed one can take $\theta_k = 1/k$ in NB. We refer to \cite{dR07,DFMR13}, where the authors consider generalizations of the zeta function. RH then reads as a geometric problem studying a square distance in the Hilbert space $L^2(0,\infty)$; one then has to consider the scalar products \bea G_{k,j} & = & \int_0^{\infty} \left\{ \frac{1}{kt}\right\}\left\{ \frac{1}{jt}\right\} dt,\qquad 1\le k,j\le n, \eea where $n$ is the size of the corresponding Gram matrix. See \cite{BDBLS05} for a fine study of the corresponding auto-correlation function.
Vasyunin \cite{Vas95} proved that $G_{k,j}$ is a finite cotangent sum, which is connected to a variety of important objects and topics: the Estermann function, reciprocity formulas, modular forms, Lewis--Zagier theory; see \cite{BCH85, LZ01, BC13a, MR16, ABB17}. Replacing the $\theta_k$ in NB by random variables (r.v.) $X_{k,n}$, $1\le k\le n$, produced new characterizations and structures, for instance using the functions $t\mapsto \pE\{X_{k,n}/t\}$ ($\pE Z$ denotes the expectation of a r.v. $Z$). See \cite{DH21b} and an important generalization in \cite{ADH22}. Two main frameworks arise: \begin{itemize} \item[(d)] {\em Dilation} --- Take e.g. exponential r.v. $X_k=X_1/k\sim \cal E(k)$; the corresponding square distance, written with the Mellin isometry, involves Dirichlet polynomials, as in the original criterion, but a smoothing effect appears: see the auto-correlation function in \cite{DH21a}; \item[(c)] {\em Concentration} --- Take e.g. Gamma-distributed r.v. $X_{k,n}$ concentrated around $1/k$ as $n$ grows; the square distance involves polynomials, and the structure of zeta is then contained within the measure $|\Gamma\zeta(s)|^2dt$. The moments directly appear in the Gram matrix (see \cite{ADH22}), a sort of Hankel matrix with a symbol having power-like singularities (see e.g. \cite{Kra07}). \end{itemize} Surprisingly, the auto-correlation function $A$ plays a role in both scalar products: through its values at rational numbers $A(k/j)$ in (d), and through its derivatives $A^{(k)}(1)$ in (c). \subsection{Outline} In Section \ref{reminder}, we first gather useful information and properties of Bernoulli and Stirling numbers that will be used throughout the paper. Second, we relate the polynomial moments to the derivatives of the remarkable function unveiled by Ramanujan in 1915. Third, we set up the tools needed to differentiate this function. In Section \ref{sec:proofmain}, we prove Theorem \ref{main}, including the short proof of Theorem \ref{second} based on \cite{BC13b} and \cite{DH21a}. Section \ref{sec:elementary} is devoted to the elementary proof of Theorem \ref{second}, based on several tools developed in Section \ref{reminder}. Finally, Section~\ref{sec:numerical} presents numerical results. \medskip \section{Reminders and Preliminary results}\label{reminder} \subsection{Bernoulli numbers}\label{bernoulli} The sequence of Bernoulli numbers $(B_n)_{n \ge 0}$ can be defined by its exponential generating function: \bea \frac{t}{e^t-1} & = & \sum_{n=0}^\infty B_n \frac{t^n}{n!}, \qquad |t|<2\pi. \eea The generating function for the Bernoulli polynomials $B_n(\cdot)$ reads \bea \frac{te^{tx}}{e^t-1} & = & \sum_{n=0}^\infty B_n(x) \frac{t^n}{n!}, \qquad |t|<2\pi. \eea The $N$--th Bernoulli polynomial reads \bean B_N(x) & = & \sum_{n=0}^N \binom{N}{n} B_n x^{N-n}. \eean The first Bernoulli numbers are \bean B_0 = 1, \ \ B_1 = -\frac{1}{2}, \ \ B_2 = \frac{1}{6}, \ \ B_3 = 0, \ \ B_4 = -\frac{1}{30}, \ \ B_5=0, \ \ B_6=\frac{1}{42}, \ \cdots \eean For $0\le j \le k$, we set: \bea K_{k,j} & = & \frac{1}{k} \binom{k}{j} B_{k-j} +\delta_{j,k-1}. \eea \subsection{Stirling numbers}\label{stirling} \subsubsection{Change of basis} We use the notation $(x)_n$ for the falling factorial polynomial: \bean (x)_n = x(x-1)\cdots(x-n+1), \quad n\ge1, \eean with $(x)_0=1$. Both families $(1,x,x^2,\ldots,x^n)$ and $((x)_0,(x)_1,\ldots,(x)_n)$ are bases of the linear space of polynomials of degree at most $n$. The change of basis is governed by Stirling numbers. The Stirling numbers of the first kind (signed), resp.
second kind, are the families of coefficients $(s(n,k))_{n \ge 1, 1 \le k \le n}$, resp. $(S(n,k))_{n \ge 1, 0\le k \le n}$, defined by \bean (x)_n & = & \sum_{k=0}^n s(n,k) x^k, \\ x^n & = & \sum_{k=0}^n S(n,k) (x)_k. \eean If $k < 0$ or $k > n$, we set $s(n,k)=S(n,k)=0$. Stirling numbers satisfy the recurrence relations: \bean s(n+1,k) & = & s(n,k-1)-n\ s(n,k) \\ S(n+1,k) & = & S(n,k-1)+k\ S(n,k). \eean The interplay between Stirling numbers of the first and second kind can be expressed in matrix form. Let $S_1 = (s(n,k))_{0 \le n,k \le N}$ and $S_2 = (S(n,k))_{0 \le n,k \le N}$. As an example, for $N=4$, \bean S_2 = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&1&1&0&0 \\ 0&1&3&1&0 \\ 0&1&7&6&1 \end{pmatrix},\quad S_1 = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&-1&1&0&0 \\ 0&2&-3&1&0 \\ 0&-6&11&-6&1 \end{pmatrix}. \eean One then has the fundamental relation $S_2 = S_1^{-1}$. By computing each coefficient of $S_2S_1 = \rm{Id}_N$, one obtains that for any pair of indices $j \le k$: \bean \sum_{p=j}^k S_2(k,p) s(p,j) & = & \delta_{j,k}. \eean The following identity, see \cite[(6.100)]{GKP94}, will be used in the elementary proof of Theorem \ref{second}: \bea \sum_{p=j}^k \frac{S(k,p) s(p,j)}{p} & = & K_{k,j}. \eea This formula is obtained by writing a finite sum of powers $\sum_n n^k$ in two ways, using the change of basis by means of Stirling numbers, and then by identification via the Faulhaber formula. \subsubsection{Operator and generating function} Let $(u_k)_{k \ge 0}$ be a real sequence such that the power series \bean F_u(x) & = & \sum_{p \ge 0} u_p x^p \eean has radius of convergence $r>0$. Denote by $\cal E(u)$ the sequence defined by: \bean \cal E(u)_n & = & \sum_{k=0}^n S(n,k) (-1)^k k!\ u_{k}, \eean and define the sequence $U$ by \bean U_n & = & \frac{(-1)^n}{n!} \cal E(u)_n. \eean The following lemma uses standard techniques of exponential generating functions and will be very useful in the sequel. \begin{lemma} \label{prop:petitesomme} For all $t\in [0,\log(1+r)[$, \begin{equation*} F_U(t) = F_u(1-e^{-t}). \end{equation*} \end{lemma} \begin{proof} Let $t\in [0,\log(1+r)[$, i.e. $0\le e^t-1<r$ and $0\le 1-e^{-t}<r$. We have, since $S(n,k)\ge0$, \bean \sum_{n \ge 0}\sum_{k=0}^n \frac{1}{n!}S(n,k) k!\ |u_{k}| t^n= \sum_{k \ge 0} |u_k| \left(k! \sum_{n \ge k} S(n,k) \frac{t^n}{n!} \right)=\sum_{k=0}^\infty |u_k| (e^{t}-1)^k <\infty. \eean Therefore, by the Fubini theorem, \bean \sum_{n \ge 0} U_n t^n &=& \sum_{k \ge 0} (-1)^k u_k \left(k! \sum_{n \ge k} S(n,k) (-1)^n \frac{t^n}{n!} \right) \\ &=& \sum_{k\ge0} (-1)^k u_k (e^{-t}-1)^k \\ &=& \sum_{k\ge0} u_k (1-e^{-t})^k, \eean as desired. \end{proof} \subsection{Ramanujan's identity and the polynomial moments} In \cite{Ram15}, Ramanujan obtained remarkable identities concerning the $\Xi$ and $\xi$ functions; see \cite{D11,Kim16} for many investigations. The last of Ramanujan's identities in \cite{Ram15} was a fundamental starting point for our work: \bea \label{ram} \int_0^\infty \left|\Gamma\left(-\frac{1}{4}+i\frac{t}{2}\right)\right|^2 \Xi\left(\frac{t}{2}\right)^2 \frac{\cos(vt)}{1+t^2}dt & = & \pi\sqrt{\pi}\ G(v), \eea where $\Xi(t)=\xi(\frac{1}{2}+it)$ with $\xi(s)=\frac{1}{2}s(s-1) \pi^{-s/2}\Gamma(s/2)\zeta(s)$, and, for real $v$, \bea G(v) & = & \int_0^\infty\left(\frac{1}{e^{xe^v}-1}-\frac{1}{xe^v}\right)\left(\frac{1}{e^{xe^{-v}}-1}-\frac{1}{xe^{-v}}\right)dx. \eea Fourier-type transforms involving the $\Xi$-function (and not its square) are a vast subject, see e.g. \cite[2.6 p.35]{Tit86}, and we refer to \cite{DRRZ15} and references therein for recent applications.
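As a quick numerical illustration, identity (\ref{ram}) can be checked with mpmath (a rough sketch with helper names of our own; a fairly high working precision mitigates the cancellation in the integrand of $G$ near $x=0$):
\begin{verbatim}
from mpmath import mp, mpf, mpc, quad, inf, gamma, zeta, pi, cos, exp, sqrt, re

mp.dps = 25

def xi(s):                      # xi(s) = s(s-1)/2 * pi^(-s/2) Gamma(s/2) zeta(s)
    return 0.5 * s * (s - 1) * pi**(-s / 2) * gamma(s / 2) * zeta(s)

def Xi(t):                      # Xi(t) = xi(1/2 + it), real for real t
    return re(xi(mpc(0.5, t)))

v = mpf('0.3')
lhs = quad(lambda t: abs(gamma(mpc(-0.25, t / 2)))**2 * Xi(t / 2)**2
                     * cos(v * t) / (1 + t**2), [0, inf])

g = lambda x, a: 1 / (exp(x * a) - 1) - 1 / (x * a)
G = quad(lambda x: g(x, exp(v)) * g(x, exp(-v)), [0, inf])
print(lhs, pi * sqrt(pi) * G)   # the two values agree
\end{verbatim}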
The identity (\ref{ram}) is today interpreted as a consequence of Mellin-Plancherel isometry, see \cite{Kim16} for generalizations and the full treatment of Ramanujan's identities contained in \cite{Ram15}. The following lemma, which only gathers the arguments in our case of interest, relates the moments $M^{\Gamma\zeta}_{2N}$ to the derivatives of Ramanujan's function $G$: \begin{lemma} \label{moment-ram} For all $N\ge0$, \bean \int_{-\infty}^\infty t^{2N} \left|\Gamma\zeta\left(\frac{1}{2}+it\right)\right|^2 dt & = & 2\pi(-1)^N 2^{-2N}G^{(2N)}(0). \eean \end{lemma} \begin{proof} Let us recall the fundamental formula \bea \label{mellin-gamma-zeta} \int_0^\infty \left(\frac{1}{e^{x}-1}-\frac{1}{x}\right)x^{w-1}dx & = & \Gamma(w)\zeta(w), \quad 0<\mathfrak{R}(w)<1. \eea Set for real $v$, \bea f_v(x) & = & \frac{1}{e^{xe^v}-1}-\frac{1}{xe^v}, \qquad x>0. \eea Therefore, we have in particular for $s=\frac{1}{2}+it$: \bean \widehat{f_v}(s) = \int_0^\infty \left(\frac{1}{e^{xe^v}-1}-\frac{1}{xe^v}\right)x^{s-1}dx \ =\ e^{-vs} \widehat{f_0}(s) \ = \ e^{-vs} \Gamma(s)\zeta(s). \eean Then, by Mellin-Plancherel isometry: \bean G(v)=\int_0^\infty f_v(x)f_{-v}(x)dx & = & \frac{1}{2\pi}\int_{-\infty}^\infty \widehat{f_v}(s)\widehat{f_{-v}}(\b s)dt \\ & = & \frac{1}{2\pi}\int_{-\infty}^\infty e^{-vs+v\b s}\ |\Gamma(s)\zeta(s)|^2 dt \\ & = & \frac{1}{2\pi}\int_{-\infty}^\infty \cos(2vt) |\Gamma(s)\zeta(s)|^2 dt, \eean since $-vs+v\b s=-v(\frac{1}{2}+it)+v(\frac{1}{2}-it)=-2ivt$, and $t\mapsto \sin(2vt)|\Gamma(s)\zeta(s)|^2$ is odd. Therefore \bean G^{(2N)}(v) & = & \frac{(-1)^N}{2\pi}2^{2N}\int_{-\infty}^\infty \cos(2vt)t^{2N} |\Gamma(s)\zeta(s)|^2 dt, \eean which yields the desired conclusion taking $v=0$. \end{proof} Using the change of variable $y=xe^{-v}$, we directly obtain \begin{lemma} For all real $v$, \bea G(v) & = & e^v A(e^{2v}). \eea \end{lemma} To differentiate the composed function $v\mapsto A(e^{2v})$, we will need the two lemmas of the following subsection. \medskip \subsection{Differentiating functions composed with the exponential} Let us consider the differential operator $\cal D:C^\infty(\R)\to C^\infty(\R)$: \bean (\cal D \varphi)(x) & = & x\varphi'(x), \eean which is a basic example of a Cauchy-Euler operator. We set $\cal D^{n+1}=\cal D\circ \cal D^n$ for all $n\ge0$ with $\cal D^{0}={\rm Id}$. The following lemma is the first question of Exercise 13 p. 300 in \cite{GKP94}: \begin{lemma}\label{Dn} For all $\varphi\in C^\infty$ and $n\ge0$, we have \bean \cal D^{n} \varphi(x) & = & \sum_{k=0}^n S(n,k) x^k \varphi^{(k)}(x). \eean \end{lemma} \begin{proof} We check that $\cal D^0\varphi(x)=S(0,0)\varphi^{(0)}(x)=\varphi(x)$ and $\cal D^1\varphi(x)=x\varphi'(x)=S(1,1)x^1\varphi^{(1)}(x)$. Assume the equality holds for some $n\ge1$. Since $\cal D$ is linear, we have \begin{eqnarray*} \cal D^{n+1} \varphi(x) & = & \sum_{k=0}^n S(n,k) x(kx^{k-1} \varphi^{(k)}(x)+x^k \varphi^{(k+1)}(x)) \\ & = & S(n,1) x \varphi'(x)+ \sum_{k=2}^n [k S(n,k)+ S(n,k-1)]x^k \varphi^{(k)}(x) + S(n,n) x^{n+1} \varphi^{(n+1)}(x) \\ & = & S(n,1) x \varphi'(x)+ \sum_{k=2}^n S(n+1,k) x^k \varphi^{(k)}(x) + S(n,n) x^{n+1} \varphi^{(n+1)}(x) \\ & = & \sum_{k=0}^{n+1} S(n+1,k) x^k \varphi^{(k)}(x), \end{eqnarray*} where we used $S(n,n)=1=S(n+1,n+1)$, $S(n,0)=0=S(n+1,0)$ (since $n\ge1$) and $S(n,1)=1=S(n+1,1)$. \end{proof} \begin{lemma}\label{compo-exp} Let $\phi(x)=\varphi(e^x)$ for any real number $x$. 
Then, for all $n\ge 1$, \bean \phi^{(n)}(x) & = & (\cal D^n \varphi)(e^x) \\ & = & \sum_{k=0}^n S(n,k) e^{kx}\varphi^{(k)}(e^x). \eean \end{lemma} \begin{proof} We have $\phi'(x)=e^x \varphi'(e^x)=(\cal D \varphi)(e^x)$. By induction, assume that for some $n\ge1$, $\phi^{(n)}(x) = (\cal D^n \varphi)(e^x)$. Then \bean \phi^{(n+1)}(x) = e^x(\cal D^n \varphi)'(e^x) = (\cal D (\cal D^n \varphi))(e^x), \eean and the conclusion follows. \end{proof} \bigskip \section{Proof of the main result} \label{sec:proofmain} \subsection{Short proof of Theorem \ref{second} based on the Bettin--Conrey power series identity} \label{bettin-conrey} Following \cite{BC13a,BC13b}, consider, for $\mathfrak{Im}(z)>0$, \bean E_1(z) & = & 1-4\sum_{n\ge1}d(n)e^{2in\pi z} \\ \psi(z) & = & E_1(z)-\frac{1}{z} E_1(-1/z), \eean and their analytic continuation to $\C\setminus (-\infty,0]$ ($d(n)$ is the number of divisors of $n$). For $|z|<1$, Bettin and Conrey \cite[Lemma 1]{BC13b} obtain \bea \psi(1+z) & = & \frac{2i}{\pi}\sum_{k\ge0} \psi_k z^k, \eea with \bea \psi_k & = & \frac{(-1)^k}{k+1} + 2 \sum_{j=1}^{k-1}(-1)^{k-j}{k \choose j} \frac{\zeta(j+1)B_{j+1}}{j+1}, \quad k\ge2, \eea and $\psi_0=1$, $\psi_1=-1/2$. Moreover, the theorems \cite[Theorem 1]{BC13b} and \cite[Theorem 1]{DH21a} allow us to connect $\psi$ and $A$: \begin{lemma} For all $x>0$ \bean A(x) & = & \frac{i\pi}{4} \psi(x) + R(x), \eean where \bean R(x) & = & C \left(1+\frac{1}{x}\right) - \frac{1}{2}\left(1-\frac{1}{x}\right) \log(x). \eean \end{lemma} \begin{proof} Following the notations of \cite{BC13b}, we set \bean c(x)=-\sum_{a=1}^{k-1} \frac{a}{k} \cot\left(\frac{\pi a h}{k} \right), \eean where $x=h/k$, $k>0$ and ${\rm gcd}(h,k)=1$. The reciprocity formula \cite[Theorem 1]{BC13b} states that \bea xc(x)+c\left(\frac{1}{x}\right)-\frac{1}{\pi k} & = & \frac{ix}{2}\psi(x). \eea Theorem 1 in \cite{DH21a} can be rewritten as \bea xc(x)+c\left(\frac{1}{x}\right)-\frac{1}{\pi k} & = & \frac{1}{\pi} \left(2 x A(x)-2(1+x) C+(x-1) \log(x) \right). \eea Therefore \bean A(x) & = & \frac{i\pi}{4} \psi(x) + C \left(1+\frac{1}{x}\right) - \frac{1}{2}\left(1-\frac{1}{x}\right) \log(x), \eean as claimed. \end{proof} We can now differentiate $A$. On the one hand, \bean R^{(k)}(x) & = & \frac{(-1)^k k!}{x^{k+1}}C-\frac{1}{2}\sum_{j=0}^k{k \choose j}(1-1/x)^{(j)}\log(x)^{(k-j)} \\ & = & \frac{(-1)^k k!}{x^{k+1}}C + \frac{1}{2}\sum_{j=1}^{k-1}{k \choose j}\frac{(-1)^j j!}{x^{j+1}}\frac{(-1)^{k-j-1} (k-j-1)!}{x^{k-j}} \\ & & \qquad \qquad -\frac{1}{2}(1-1/x)\frac{(-1)^{k-1} (k-1)!}{x^{k}} + \frac{1}{2} \frac{(-1)^k k!}{x^{k+1}}\log(x). \eean Therefore \bean R^{(k)}(1) & = & (-1)^k k!C - \frac{(-1)^k k!}{2}\sum_{j=1}^{k-1}\frac{1}{k-j}\\ & = & (-1)^k k! \left(C-\frac{H_{k-1}}{2}\right), \eean with the convention $H_0=0$. On the other hand, by the Taylor formula, \bean \frac{i\pi}{4}\psi^{(k)}(1) & = & \frac{i\pi}{4}\cdot \frac{2i}{\pi} \psi_k \ k!\\ & = & -\frac{1}{2}\frac{(-1)^k k!}{k+1}- (-1)^k k!\sum_{j=2}^{k}(-1)^{j-1}{k \choose j-1} \frac{\zeta(j)B_{j}}{j}, \eean which yields the desired expression in Theorem \ref{second}, noting that $(-1)^jB_j=B_j$ for $j\ge2$. \bigskip \subsection{Computation of the derivatives of $G$} Let $u$ be a real sequence; recall the definition of $\cal E(u)$ and define the sequence $\cal L(u)$: \bean \cal E(u)_n & = & \sum_{k=0}^n S(n,k) (-1)^k k!\ u_{k}\\ \cal L(u)_N & = & \sum_{n=0}^N {N\choose n} 2^{n} u_{n}. \eean The notation $\cal E$ suggests the connection with differentiating a function composed with the exponential.
We use the notation $\cal L$ to refer to the Leibniz rule to differentiate a product. \begin{lemma} For all $N\ge 1$, \bean G^{(N)}(0) & = & \left(\cal L\circ \cal E\left(c-\frac{1}{2}(\iota+\eta)+\beta\right)\right)_N, \eean where we define for all $k\ge0$, \bean c_k & = & C(1+\delta_{k,0}) \\ \iota_k & = & \frac{1}{k+1} \\ \eta_k & = & H_{k-1} \\ \beta_k & = & \sum_{j=2}^{k} {k \choose j-1} \frac{B_{j}\zeta(j)}{j}. \eean \end{lemma} \begin{proof} Recall that \bean G(v) & = & e^v A(e^{2v}). \eean Using the Leibniz rule and Lemma \ref{compo-exp} together with the scaling $x\mapsto 2x$, we obtain \bean G^{(N)}(x) & = & e^x\sum_{n=0}^N {N\choose n} 2^{n}\sum_{k=0}^n S(n,k) e^{2kx}A^{(k)}(e^{2x}), \eean and then \bean G^{(N)}(0) & = & \sum_{n=0}^N {N\choose n} 2^{n}\sum_{k=0}^n S(n,k) A^{(k)}(1). \eean Writing \bean A^{(k)}(1) & = & (-1)^k k! \left( c_k-\frac{1}{2}(\iota_k+\eta_k)+\beta_k \right), \eean and using these notations yields the desired expression. \end{proof} \begin{lemma} We have \bean \cal L\circ\cal E(c)_N = C\sum_{n=0}^N {N\choose n} 2^{n} \sum_{k=0}^n S(n,k) (-1)^k k!(1+\delta_{k,0}) = (1+(-1)^N)C. \eean \end{lemma} \begin{proof} First, due to the definition of $S(n,k)$, we have \bean \sum_{k=0}^n S(n,k)(-1)^k k! = \sum_{k=0}^n S(n,k) (-1)_k = (-1)^n, \eean and \bean \sum_{n=0}^N {N\choose n} 2^{n} (-1)^n & = & (-1)^N. \eean Second, notice that \bean \sum_{n=0}^N {N\choose n} 2^{n} \sum_{k=0}^n S(n,k) (-1)^k k!\ \delta_{k,0} = \sum_{n=0}^N {N\choose n} 2^{n} S(n,0) = {N\choose 0} S(0,0)=1. \eean Adding the two contributions gives the result. \end{proof} \begin{lemma} We have \bean \cal E(\iota)_n & = & B_n \\ \cal L\circ\cal E(\iota)_N & = & (2-2^N) B_N. \eean \end{lemma} \begin{proof} First: \bean F_\iota(x) = \sum_{k=0}^\infty \frac{x^k}{k+1} = - \frac{\ln(1-x)}{x}, \quad 0<x<1. \eean Therefore, noting that $B_1+1=(-1)^1B_1$ and $B_n=(-1)^n B_n$ for $n\ge2$, \bean F_\iota(1-e^{-t}) = \frac{t}{1-e^{-t}} = \frac{t}{e^{t}-1} +t = \sum_{n=0}^\infty (-1)^n\frac{B_n}{n!} t^n. \eean But $F_\iota(1-e^{-t})=F_I(t)$ with $I_n=\frac{(-1)^n}{n!}\cal E(\iota)_n$. Therefore, by identification, \bean \cal E(\iota)_n & = & B_n. \eean Second: recall that the $N$-th Bernoulli polynomial is \bean B_N(x) & = & \sum_{n=0}^N \binom{N}{n} B_n x^{N-n}, \eean and $B_N(0)= B_N$. Thus \bean B_N\left(\frac{1}{2}\right) & = & \frac{1}{2^N}\sum_{n=0}^N \binom{N}{n} B_n 2^{n} \\ & = & \frac{1}{2^N}\cal L\circ\cal E(\iota)_N. \eean But it is known that \bea B_N\left(\frac{1}{2}\right) & = & \left(\frac{1}{2^{N-1}} - 1 \right) B_N. \eea Hence \bean \cal L\circ\cal E(\iota)_N & = & (2-2^N) B_N. \eean \end{proof} \begin{lemma} We have \bean \cal E(\eta)_n & = & (-1)^n n + \delta_{n,1}\\ \cal L\circ\cal E(\eta)_N & = & 2N (1+(-1)^N). \eean \end{lemma} \begin{proof} First, we compute $\cal E(\eta)_n$. Since $H_{-1}=H_0=0$, \bean F_\eta(x) = \sum_{k\ge0} H_{k-1} x^k= \sum_{k\ge2} H_{k-1} x^k=\sum_{k\ge1} H_{k} x^{k+1}, \quad 0<x<1. \eean Thus, expanding $H_k$ and using Fubini--Tonelli, \bean F_\eta(x) = x\sum_{k=1}^\infty \sum_{1\le p\le k} \frac{x^k}{p} = x\sum_{p\ge1} \frac{1}{p} \sum_{k \ge p} x^k = \frac{x}{1-x} \sum_{p\ge1} \frac{x^p}{p} = - \frac{x\ln(1-x)}{1-x}. \eean Then \bean F_\eta(1-e^{-t}) = -\frac{(1-e^{-t})(-t)}{e^{-t}} = t(e^t-1) = \sum_{n\ge1} \frac{t^{n+1}}{n!} = \sum_{n\ge2} \frac{t^{n}}{(n-1)!}. \eean But $F_\eta(1-e^{-t})=F_{\cal H}(t)$ with $\cal H_n=\frac{(-1)^n}{n!}\cal E(\eta)_n$. Therefore $\cal H_0=\cal H_1=0$ and \bean \frac{(-1)^n}{n!}\cal E(\eta)_n & = & \frac{1}{(n-1)!}, \quad n\ge2.
\eean Hence \bean \cal E(\eta)_n & = & (-1)^n n, \quad n\ge2, \eean and $\cal E(\eta)_0=\cal E(\eta)_1=0$, which we can write for all $n\ge0$, \bean \cal E(\eta)_n & = & (-1)^n n + \delta_{n,1}. \eean Second: \bean \cal L\circ\cal E(\eta)_N & = & \sum_{n=0}^N \binom{N}{n} 2^{n} \left((-1)^n n + \delta_{n,1}\right) \\ & = & 2N + \sum_{n=0}^N \binom{N}{n} (-2)^{n} n. \eean But \bean \sum_{n=0}^N \binom{N}{n} (-2)^{n} n & = & \sum_{n=1}^N n\binom{N}{n} (-2)^{n} \\ & = & N\sum_{n=1}^N \binom{N-1}{n-1} (-2)^{n} \\ & = & N\sum_{n=0}^{N-1} \binom{N-1}{n} (-2)^{n+1} \\ & = & -2N(-2+1)^{N-1} =2(-1)^N N. \eean Finally, \bean \cal L\circ\cal E(\eta)_N & = & 2N (1+(-1)^N). \eean \end{proof} \begin{lemma} We have \bean \cal L\circ\cal E(\beta)_N & = & \sum_{j=2}^N T_{N,j}\frac{\zeta(j)B_{j}}{j}, \eean where \bean T_{N,j} & = & (j-1)!\sum_{n=2}^N \binom{N}{n} 2^{n} \left[(-1)^n S(n+1,j) + (-1)^j S(n,j-1)\right]. \eean \end{lemma} \begin{proof} Let us expand and rewrite \bean \cal L\circ\cal E(\beta)_N & = & \sum_{n=0}^N \binom{N}{n} 2^{n} \sum_{k=0}^n S(n,k)(-1)^k k! \sum_{j=2}^{k} \binom{k}{j-1} \frac{\zeta(j)B_{j}}{j}\\ & = & \sum_{j=2}^N \sum_{n=2}^N \binom{N}{n} 2^{n} W_{n,j}\frac{\zeta(j)B_{j}}{j}, \eean where $W_{n,j} = 0$ if $n < j$, and for $n \ge j$, \bean W_{n,j} & = & \sum_{k=j}^n S(n,k)(-1)^k k! {k \choose j-1}\\ & = &\sum_{k=j-1}^n S(n,k)(-1)^k k! {k \choose j-1} - (-1)^{j-1}(j-1)! S(n,j-1). \eean Fix $j\ge2$. Then \bean \sum_{k=j-1}^n S(n,k)(-1)^k k! {k \choose j-1} & = & \cal E(u)_n, \eean where $u_k={k \choose j-1}\1_{k\ge j-1}$. We have \bean F_u(x)=\sum_{k=0}^\infty u_k x^k & = & \sum_{k=j-1}^\infty {k \choose j-1} x^{k} \\ & = & x^{j-1} \sum_{k=j-1}^\infty {k \choose j-1} x^{k-(j-1)} \\ & = & \frac{x^{j-1}}{(1-x)^{j}}. \eean Thus \bean F_u(1-e^{-t}) & = & e^{t j}(1-e^{-t})^{j-1} \\ & = &e^t (e^t-1)^{j-1}\\ & = & (e^t-1)^{j}+(e^t-1)^{j-1}. \eean Since \bean (e^t-1)^{j} & = & j ! \sum_{n \ge j} \frac{S(n,j)}{n!}t^n \\ (e^t-1)^{j-1} & = & (j-1) ! \sum_{n \ge j-1} \frac{S(n,j-1)}{n !}t^n, \eean and since $F_u(1-e^{-t})=F_U(t)$ with $U_n=\frac{(-1)^n}{n!}\cal E(u)_n$ by Lemma \ref{prop:petitesomme}, we deduce by identification of the coefficients that for $n \ge j$: \bean (-1)^n\, \cal E(u)_n & = & (j-1)! \left[ j S(n,j) + S(n,j-1) \right] \ = \ (j-1)!\, S(n+1,j). \eean Hence \bean W_{n,j} = (j-1)! \left[(-1)^n S(n+1,j) + (-1)^j S(n,j-1) \right], \eean as desired. \end{proof} \bigskip \section{Elementary Proof of Theorem \ref{second}} \label{sec:elementary} \subsection{Decomposition of $A^{(k)}(1)$} Set \bea h(x) & = & \frac{1}{e^x-1}, \qquad x>0. \eea We can differentiate $A$ under the integral sign: \bea A^{(k)}(1) & = & \int_0^\infty \left(\frac{(-1)^k k!}{x}-x^k h^{(k)}(x)\right)\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)dx, \eea by grouping the divergent terms as $x\to0$ inside the integral of $A^{(k)}(v)$ and using e.g. Lemma \ref{e-diff-h} with a uniform bound for $v\in [1-\eta,1+\eta]$ (small $\eta>0$). We can then expand and obtain: \begin{lemma} For all $k\ge1$, as $\e\to0$, \bean A^{(k)}(1) & = & D_{3,k}^\e + D_{2,k}^\e -D_{1,k}^\e + o(1), \eean where \bean D_{1,k}^\e & = & \int_\e^\infty x^{k-1}h^{(k)}(x)dx \\ D_{2,k}^\e & = & \int_\e^\infty \frac{x^k h^{(k)}(x)}{e^{x}-1}dx \\ D_{3,k}^\e & = & (-1)^k k!\int_\e^\infty \frac{1}{x}\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)dx. \eean \end{lemma} \subsection{The derivatives of $h$} Set \bea \alpha_{k,p} & = & (-1)^p S(k,p) p!\,. \eea The integrals $D_{1,k}^\e$ and $D_{2,k}^\e$ involve the derivatives of $h$. The paper \cite{GQ14} gives some expressions for these derivatives, together with interesting applications.
For our purpose we need a different formula, one with the exponentials $e^{px}$ in the numerator: \begin{lemma} For all $k\ge1$, \bea \label{h(k)} h^{(k)}(x) & = & \sum_{p=1}^k \alpha_{k,p} \frac{e^{px}}{(e^x-1)^{p+1}}. \eea \end{lemma} \begin{proof} Setting $\varphi(x) = \frac{1}{x-1}$, we have for all $p \ge 1$, \begin{equation*} \varphi^{(p)}(x) = \frac{(-1)^p p!}{(x-1)^{p+1}}. \end{equation*} Noting that $h(x)=\varphi(e^x)$, we apply Lemma \ref{compo-exp} to deduce \bean h^{(k)}(x) & = & \sum_{p=1}^k S(k,p) e^{px} \frac{(-1)^p p!}{(e^x-1)^{p+1}}, \eean as claimed. \end{proof} \begin{lemma} For $k\ge0$ and $p\ge1$, \bea \alpha_{k,p-1}-\alpha_{k,p} & = & -\frac{\alpha_{k+1,p}}{p}. \eea \end{lemma} \begin{proof} This only requires the recurrence relation for $S(k+1,p)$: \bean \alpha_{k,p-1}-\alpha_{k,p} & = & (-1)^{p-1} S(k,p-1) (p-1)!-(-1)^p S(k,p) p! \\ & = & (-1)^{p+1} (p-1)! \left(S(k,p-1)+pS(k,p)\right) \\ & = & (-1)^{p+1} (p-1)! S(k+1,p), \eean and the claim follows. \end{proof} \subsection{Asymptotic expansions involving $h$} We gather here several asymptotic expansions, useful for the sequel. \begin{lemma} \label{e-diff-h} For all $a\ge 0$, as $\e\to 0$, \bea \epsilon^a \frac{e^{a\epsilon}}{(e^\epsilon-1)^{a+1}} & = & \frac{1}{\epsilon} + \frac{a-1}{2} +o(1). \eea \end{lemma} \begin{proof} We have \bean \epsilon^a \frac{e^{a\epsilon}}{(e^\epsilon-1)^{a+1}} & = & \frac{1}{\epsilon}\frac{1+a\epsilon+o(\epsilon)}{(1+\epsilon/2+o(\epsilon))^{a+1}}. \eean But \bean (1+\epsilon/2+o(\epsilon))^{-(a+1)} & = & 1-\frac{a+1}{2}\epsilon +o(\epsilon), \eean and the conclusion follows. \end{proof} \begin{lemma} As $\e\to 0$, \bean I(\epsilon)=\int_\e^\infty \frac{1}{t}\left(\frac{1}{t}-\frac{1}{e^{t}-1}\right)dt & = & -\frac{\log \epsilon}{2} + C + o(1),\\ \int_\e^\infty \frac{1}{t(e^{t}-1)}dt & = & \frac{1}{\epsilon}+\frac{\log \epsilon}{2} - C + o(1),\\ \int_\e^\infty \frac{1}{(e^{t}-1)^2}dt & = & \log(\epsilon) + \frac{1}{\epsilon} - \frac{1}{2} +o(1). \eean \end{lemma} \begin{proof} In \cite{Bal18}, Balazard identified the constant $C=\frac 12(\log 2\pi -\gamma)$; see \cite{DH21a} for his proof. Rewriting $C$, we obtain \bean C & = & 1-\int_0^1 \left (\frac{1}{t(e^t-1)}-\frac 1{t^2}+\frac 1{2t}\right) dt-\int_1^{\infty} \frac{dt}{t(e^t-1)}\\ & = & \int_0^1 \left[\frac{1}{t}\left(\frac{1}{t}-\frac{1}{e^{t}-1}\right)-\frac 1{2t}\right] dt + \int_1^\infty \frac{1}{t}\left(\frac{1}{t}-\frac{1}{e^{t}-1}\right)dt \\ & = & I(\epsilon) - \int_\epsilon^1 \frac{dt}{2t}+o(1) \\ & = & I(\epsilon) +\frac{\log \epsilon}{2}+o(1). \eean The asymptotic expansions of $I(\epsilon)$ and of the second quantity then follow. Moreover \bean \int_\e^\infty \frac{-1}{(e^{t}-1)^2}dt & = & \int_\e^\infty \frac{e^t-1}{(e^{t}-1)^2}dt - \int_\e^\infty \frac{e^t}{(e^{t}-1)^2}dt \\ & = & [\log(1-e^{-t})]_\epsilon^\infty + \left[\frac{1}{e^{t}-1}\right]_\epsilon^\infty \\ & = & -\log(1-e^{-\epsilon})- \frac{1}{e^{\epsilon}-1}\\ & = & -\log(\epsilon) - \frac{1}{\epsilon} + \frac{1}{2} +o(1), \eean as desired. \end{proof} The following elementary quantities will be useful in the next section. \begin{lemma} \label{sumq} As $\e\to 0$, \bea \sum_{q\ge1}\frac{e^{-\epsilon q}}{q} & = & -\log \epsilon + o(1), \eea and for all $a\ge 1$, \bea \epsilon^a\sum_{q\ge1}q^{a-1}e^{-\epsilon q} & = & (a-1)! + o(1) \\ \epsilon^a\sum_{q\ge1}q^{a}e^{-\epsilon q} & = & \frac{a!}{\epsilon} + o(1).
\eea \end{lemma} \begin{proof} We have \bean \sum_{q\ge1}\frac{e^{-\epsilon q}}{q} = \int_\e^\infty \frac{dt}{e^{t}-1} = [\log(1-e^{-t})]_\epsilon^\infty = -\log(1-e^{-\epsilon}), \eean and we obtain the first expansion. For $a\ge2$, we have \bean \epsilon^a\sum_{q\ge1}q^{a-1}e^{-\epsilon q} & = & (-1)^{a-1}\epsilon^a h^{(a-1)}(\epsilon). \eean But \bean \epsilon^a h^{(a-1)}(\epsilon) & = & \epsilon^a\sum_{p=1}^{a-1} S(a-1,p) e^{p\epsilon} \frac{(-1)^p p!}{(e^\epsilon-1)^{p+1}} \\ & = & (-1)^{a-1}(a-1)! \frac{\epsilon^a e^{(a-1)\epsilon}}{(e^\epsilon-1)^{a}} +o(1)\\ & = & (-1)^{a-1}(a-1)! +o(1), \eean and the conclusion follows by noticing that the identity also holds for $a=1$. Finally, for $a\ge1$, \bean \epsilon^a\sum_{q\ge1}q^{a}e^{-\epsilon q} & = & (-1)^{a}\epsilon^a h^{(a)}(\epsilon). \eean Notice that for $a\ge2$, as $\e\to0$, \bean \epsilon^a h^{(a)}(\epsilon) & = & \epsilon^a\sum_{p=1}^{a} S(a,p) e^{p\epsilon} \frac{(-1)^p p!}{(e^\epsilon-1)^{p+1}} \\ & = & (-1)^{a-1}(a-1)! S(a,a-1) \frac{\epsilon^a e^{(a-1)\epsilon}}{(e^\epsilon-1)^{a}} + (-1)^{a}a! \frac{\epsilon^a e^{a\epsilon}}{(e^\epsilon-1)^{a+1}} + o(1)\\ & = & (-1)^{a-1}(a-1)! S(a,a-1) + (-1)^{a}a!\left(\frac{1}{\epsilon} + \frac{a-1}{2}\right)+o(1). \eean Since \bean (a-1)! S(a,a-1) = (a-1)! {a \choose 2} = \frac{a-1}{2} a!, \eean we deduce \bean \epsilon^a\sum_{q\ge1}q^{a}e^{-\epsilon q} & = & \frac{a!}{\epsilon} +o(1), \eean and the conclusion follows by noticing that the identity also holds for $a=1$. \end{proof} \subsection{Incomplete integrals related to $D_{1,k}^\e$ and $D_{2,k}^\e$} The following quantities will be involved in the computation of $D_{1,k}^\e$ and $D_{2,k}^\e$. Set \bean J_1^\e(k,p) & = & \int_\e^\infty x^{k}\frac{e^{px}}{(e^x-1)^{p+1}}dx \\ J_2^\e(k,p) & = & \int_\e^\infty x^{k}\frac{e^{px}}{(e^x-1)^{p+2}}dx, \eean and \bean Z_\e(k,j) & = & \sum_{q=1}^\infty q^j \int_\e^\infty e^{-qy}y^k dy. \eean \begin{lemma} For all $k,p\ge0$, \bean J_2^\e(k,p) & = & J_1^\e(k,p+1)-J_1^\e(k,p). \eean \end{lemma} \begin{proof} A simple manipulation gives \bean J_2^\e(k,p) = \int_\e^\infty x^{k}\frac{e^{px}}{(e^x-1)^{p+2}}dx & = & -\int_\e^\infty x^{k}\frac{e^{px}(e^x-1-e^x)}{(e^x-1)^{p+2}}dx \\ & = & -\int_\e^\infty x^{k}\frac{e^{px}}{(e^x-1)^{p+1}}dx + \int_\e^\infty x^{k}\frac{e^{(p+1)x}}{(e^x-1)^{p+2}}dx, \eean as claimed. \end{proof} \begin{lemma} For all $k,p\ge0$, \bean J_1^\e(k,p) = \int_\e^\infty x^{k}\frac{e^{px}}{(e^x-1)^{p+1}}dx & = & \frac{1}{p!} \sum_{j=0}^{p} (-1)^{p+j} s(p,j) Z_\e(k,j), \eean where \bean Z_\e(k,j) & = & k! \sum_{a=0}^k \frac{\e ^a}{a!}\sum_{q=1}^\infty \frac{e^{-\e q}}{q^{k-j+1-a}}. \eean If $j<k$, i.e. $k-j+1\ge2$, then \bean Z_\e(k,j) & = & k! \ \zeta(k-j+1) +o(1). \eean \end{lemma} \begin{proof} Using changes of variables, we obtain \bean J_1^\e(k,p) & = & \int_\e^\infty x^{k}\frac{e^{px}}{(e^x-1)^{p+1}}dx \qquad (u=e^x,\ x=\log u)\\ & = & \int_{e^\e}^\infty \frac{u^{p-1} \log^k u}{(u-1)^{p+1}}du \qquad (u=1/x)\\ & = & (-1)^{k}\int_0^{e^{-\e}} \frac{\log^k x}{(1-x)^{p+1}} dx. \eean We have the expansion \bean \frac{1}{(1-x)^{p+1}} & = & \frac{1}{p!} \sum_{q=1}^\infty \frac{(q+p-1)!}{(q-1)!} x^{q-1}. \eean On the other hand, \bean \frac{(q+p-1)!}{(q-1)!} & = & q(q+1)\cdots (q+p-1) \\ & = & (-1)^p (-q)(-q-1) \cdots (-q-(p-1))\\ & = & (-1)^p \sum_{j=0}^{p} s(p,j) (-1)^{j} q^j, \eean where we recall that $s(p,j)$ is a (signed) Stirling number of the first kind.
Using Fubini--Tonelli and the change of variables $y=-\log(x)$, we obtain \bean J_1^\e(k,p) & = & \frac{(-1)^{k}}{p!} \sum_{j=0}^{p} (-1)^{p+j} s(p,j) \sum_{q=1}^\infty q^j \int_0^{e^{-\e}} x^{q-1} \log^k x\ dx \\ & = & \frac{1}{p!} \sum_{j=0}^{p} (-1)^{p+j} s(p,j) \sum_{q=1}^\infty q^j \int_\e^\infty e^{-qy}y^k dy. \eean Let us now compute $Z_\e(k,j)$, using the closed form of the incomplete Gamma function $\int_{\e q}^\infty e^{-x}x^k dx$ when $k$ is an integer: \bean Z_\e(k,j) & = & \sum_{q=1}^\infty q^j \int_\e^\infty e^{-qy}y^k dy \\ & = & \sum_{q=1}^\infty q^j \frac{1}{q^{k+1}}\int_{\e q}^\infty e^{-x}x^k dx \\ & = & k! \sum_{q=1}^\infty \frac{e^{-\e q}}{q^{k-j+1}} \sum_{a=0}^k \frac{(\e q)^a}{a!}. \eean Finally notice that if $j<k$, the dominated convergence theorem yields \bean Z_\e(k,j) & = & k! \ \zeta(k-j+1) +o(1), \eean and the conclusion follows. \end{proof} \begin{lemma}\label{Zkk} The following expansions hold: \bea Z_\e(k,k) & = & -k!\log(\epsilon) + k!\ H_k + o(1), \qquad k\ge1; \\ Z_\e(k,k+1) & = & \frac{(k+1)!}{\epsilon}-\frac{k!}{2} + o(1) \qquad k\ge0. \label{Zkk+1} \eea \end{lemma} \begin{proof} First, we compute \bean Z_\e(k,k) & = & k! \sum_{a=0}^k \frac{\epsilon^a}{a!} \sum_{q\ge1}q^{a-1}e^{-\epsilon q}\\ & = & k!\left(\sum_{q\ge1}q^{-1}e^{-\epsilon q}+\sum_{a=1}^k \frac{1}{a!} \epsilon^a \sum_{q\ge1}q^{a-1}e^{-\epsilon q}\right)\\ & = & k!\left(-\log(\epsilon)+\sum_{a=1}^k \frac{1}{a} \right) + o(1). \eean Second, \bean Z_\e(k,k+1) & = & k! \sum_{a=0}^k \frac{\epsilon^a}{a!} \sum_{q\ge1}q^{a}e^{-\epsilon q}\\ & = & k!\left(\sum_{q\ge1} e^{-\epsilon q}+\sum_{a=1}^k \frac{1}{a!} \epsilon^a \sum_{q\ge1}q^{a}e^{-\epsilon q}\right)\\ & = & k!\left(\frac{1}{e^\epsilon-1}+\sum_{a=1}^k \frac{1}{a!}\frac{a!}{\epsilon} \right) + o(1)\\ & = & k!\left(\frac{1}{\epsilon}-\frac{1}{2}+ \frac{k}{\epsilon} \right) + o(1), \eean as claimed. \end{proof} \subsection{Asymptotic expansion of $D_{1,k}^\e$, $D_{2,k}^\e$, and conclusion} \begin{lemma} For all $k\ge1$, \bean D_{1,k}^\e & = & (-1)^k \left(\frac{k!}{\epsilon}-\frac{(k-1)!}{2}\right) + o(1). \eean \end{lemma} \begin{proof} We have for all $k\ge1$, \bean D_{1,k}^\e & = & \int_\e^\infty x^{k-1} h^{(k)}(x)dx \\ & = & \int_\e^\infty x^{k-1}\sum_{p=1}^k \alpha_{k,p} \frac{e^{px}}{(e^x-1)^{p+1}} dx \\ & = & \sum_{p=1}^k \alpha_{k,p} J_1^\e(k-1,p)\\ & = & \sum_{p=1}^{k} S(k,p) \sum_{j=1}^{p} (-1)^{j} s(p,j)\ Z_\e(k-1,j)\\ & = & \sum_{j=1}^{k} (-1)^{j}\sum_{p=j}^{k} S(k,p)\,s(p,j)\ Z_\e(k-1,j)\\ & = & \sum_{j=1}^{k}(-1)^{j} \delta_{k,j} Z_\epsilon(k-1,j)\\ & = & (-1)^{k} Z_\epsilon(k-1,k), \eean and we apply (\ref{Zkk+1}) with $k$ replaced by $k-1$ to conclude. \end{proof} \begin{lemma} We have \bean D_{2,k}^\e & = & \sum_{j=1}^{k+1}(-1)^{j+1} K_{k+1,j} Z_\epsilon(k,j).
\eean \end{lemma} \begin{proof} We have \bean D_{2,k}^\e & = & \int_\e^\infty \frac{x^k h^{(k)}(x)}{e^{x}-1}dx \\ & = & \int_\e^\infty \frac{x^k}{e^{x}-1}\sum_{p=1}^k \alpha_{k,p} \frac{e^{px}}{(e^x-1)^{p+1}} dx \\ & = & \sum_{p=1}^k \alpha_{k,p} J_2^\e(k,p)\\ & = & \sum_{p=1}^k \alpha_{k,p}\left(J_1^\e(k,p+1)-J_1^\e(k,p)\right)\\ & = & \sum_{p=2}^{k+1} \alpha_{k,p-1}J_1^\e(k,p) - \sum_{p=1}^k \alpha_{k,p}J_1^\e(k,p) \\ & = & \alpha_{k,k}J_1^\e(k,k+1)-\alpha_{k,1}J_1^\e(k,1) + \sum_{p=2}^{k} (\alpha_{k,p-1}-\alpha_{k,p})J_1^\e(k,p)\\ & = & \alpha_{k,k}J_1^\e(k,k+1)-\alpha_{k,1}J_1^\e(k,1) - \sum_{p=2}^{k} \frac{\alpha_{k+1,p}}{p} J_1^\e(k,p) \\ & = & - \sum_{p=1}^{k+1} \frac{\alpha_{k+1,p}}{p} J_1^\e(k,p), \eean since $\d \frac{\alpha_{k+1,k+1}}{k+1}=(-1)^{k+1}k!=-\alpha_{k,k}$ and $\alpha_{k+1,1}=\alpha_{k,1}$. Moreover \bean \sum_{p=1}^{k+1} \frac{\alpha_{k+1,p}}{p} J_1^\e(k,p) & = & \sum_{p=1}^{k+1} \frac{(-1)^p}{p}S(k+1,p) \sum_{j=1}^{p} (-1)^{p+j}s(p,j) Z_\e(k,j)\\ & = & \sum_{j=1}^{k+1} \sum_{p=j}^{k+1}\frac{(-1)^j}{p}S(k+1,p)\,s(p,j)\ Z_\e(k,j)\\ & = & \sum_{j=1}^{k+1}(-1)^{j} K_{k+1,j} Z_\epsilon(k,j), \eean which concludes the proof. \end{proof} For $k\ge2$, $D_{2,k}^\e$ contains convergent terms involving $\zeta$ (the terms with $j\le k-1$) and a divergent part $R_{2,k}^\e$ collecting the terms $j=k$ and $j=k+1$, which reads \bean R_{2,k}^\e & = & (-1)^{k+1} K_{k+1,k} Z_\epsilon(k,k)+ (-1)^{k} K_{k+1,k+1} Z_\epsilon(k,k+1). \eean But \bean K_{k+1,k} & = & \frac{1}{k+1} \binom{k+1}{k} B_{1} +1 \ =\ 1/2 \\ K_{k+1,k+1} & = &\frac{1}{k+1} \binom{k+1}{k+1} B_{0} \ =\ \frac{1}{k+1}. \eean Therefore, by Lemma \ref{Zkk}, \bean R_{2,k}^\e & = & (-1)^{k+1} \frac{-k!\log(\epsilon) + k!\ H_k}{2} + (-1)^{k} \frac{1}{k+1}\left(\frac{(k+1)!}{\epsilon}-\frac{k!}{2}\right) + o(1). \eean \begin{lemma} We have \bean D_{3,k}^\e & = & (-1)^{k+1} k! \frac{\log \epsilon}{2} + (-1)^k k! C + o(1). \eean \end{lemma} \begin{proof} This is immediate from $D_{3,k}^\e = (-1)^k k!\, I(\epsilon)$ and the expansion of $I(\epsilon)$ above. \end{proof} We can now complete the proof of Theorem \ref{second} using, as $\e\to0$, \bean A^{(k)}(1) & = & D_{3,k}^\e + D_{2,k}^\e -D_{1,k}^\e + o(1). \eean Therefore for all $k\ge1$, with the convention $\sum_{j=1}^{0}=0$, \bean A^{(k)}(1) & = & (-1)^k k! \left(C - \frac{H_{k+1}}{2} +\frac{1}{2k} - \sum_{j=1}^{k-1}(-1)^{j+k} K_{k+1,j} \zeta(k-j+1) \right). \eean Using, for all $j\le k-1$, \bea K_{k+1,j} & = & \frac{1}{k+1} \binom{k+1}{j} B_{k+1-j}, \eea the symmetry and absorption identities for binomial coefficients (the latter, $\binom{k+1}{j}=\frac{k+1}{j}\binom{k}{j-1}$, is the French ``formule du pion''), and $(-1)^jB_j=B_j$ for $j\ge2$, we can write: \bean \sum_{j=1}^{k-1}(-1)^{j+k} K_{k+1,j} \zeta(k-j+1) & = & \sum_{j=1}^{k-1}(-1)^{j+k} \frac{1}{k+1} \binom{k+1}{j} B_{k+1-j} \zeta(k-j+1) \\ & = & -\sum_{j=2}^{k}(-1)^{j} \frac{1}{k+1} \binom{k+1}{j} B_{j} \zeta(j) \\ & = & -\sum_{j=2}^{k} \frac{1}{j} \binom{k}{j-1} B_{j} \zeta(j). \eean Hence, for all $k\ge1$, with the convention $H_0=0$, \bean A^{(k)}(1) & = & (-1)^k k! \left(C -\frac{1}{2(k+1)} - \frac{H_{k-1}}{2} + \sum_{j=2}^{k} {k \choose j-1} \frac{\zeta(j)B_j}{j} \right). \eean To obtain the formula for $k=0$, let us finally compute $A(1)$. \begin{lemma} We have \bean A(1) & = & 2C -\frac{1}{2}. \eean \end{lemma} \begin{proof} We expand \bean A(1) & = & \int_\epsilon^\infty \left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)dx +o(1) \\ & = & I(\epsilon) - \int_\epsilon^\infty \frac{x^{-1}dx}{e^x-1} + \int_\epsilon^\infty \frac{dx}{(e^x-1)^2} +o(1)\\ & = & -\frac{\log \epsilon}{2} + C -\frac{1}{\epsilon} - \frac{\log \epsilon}{2} + C + \log(\epsilon) + \frac{1}{\epsilon} - \frac{1}{2} +o(1) \\ & = & 2C -\frac{1}{2} +o(1), \eean and take the limit.
\end{proof} \bigskip \section{Numerical results}\label{sec:numerical} \subsection{Expression of the first derivatives $A^{(k)}(1)$} Recall that \bean A(v) & = & \int_0^\infty \left(\frac{1}{xv}-\frac{1}{e^{xv}-1}\right)\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)dx, \qquad v>0. \eean Let us recall that \bean C=\frac{\log(2\pi)-\gamma}{2}=0.6303307\cdots \eean We have $\d A(1) = 2C -\frac{1}{2}$, \bean A'(1) = -\left(C-\frac{H_2}{2}+\frac{1}{2}\right) = -C+\frac{1}{4}, \eean and \bean A''(1) & = & 2\left(C-\frac{H_3}{2}+\frac{1}{4} + {2 \choose 1} \frac{B_{2}\zeta(2)}{2}\right)\\ & = & 2C-\frac{4}{3} + \frac{1}{3}\zeta(2). \eean To check these values, we directly differentiate $A$ and evaluate at $1$; for instance, \bean A'(1) & = & \int_0^\infty \left(-\frac{1}{x}+\frac{xe^x}{(e^{x}-1)^2}\right)\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)dx. \eean \subsection{Expression of the first two moments} The formula for $N=1$ reads \bean \frac{-2}{\pi}\int_{-\infty}^\infty t^{2} \left|\Gamma\zeta\left(\frac{1}{2}+it\right)\right|^2 dt & = & \log(2\pi)-\gamma - 4 + \left(\frac{2^2}{2}-1\right) B_{2}+ T_{2,2}\frac{\zeta(2)B_{2}}{2} \\ & = & \log(2\pi)-\gamma - \frac{23}{6} + \frac{4}{3}\zeta(2), \eean since \bean T_{2,2} & = & \binom{2}{2} 2^2 \left[ S(3,2) + S(2,1)\right] \\ & = & 4 (3+1)=16. \eean The formula for $N=2$ reads \bean \frac{(-4)^2}{2\pi}\int_{-\infty}^\infty t^{4} \left|\Gamma\zeta\left(\frac{1}{2}+it\right)\right|^2 dt & = & \log(2\pi)-\gamma - 8 + \left(\frac{2^4}{2}-1\right) B_{4}+ \sum_{j=2}^{4} T_{4,j}\frac{\zeta(j)B_{j}}{j}, \eean where \bean T_{4,j} & = & (j-1)!\sum_{n=2}^{4} \binom{4}{n} 2^{n} \left[(-1)^n S(n+1,j) + (-1)^j S(n,j-1)\right]. \eean Thus, we have \bean T_{4,2} & = & 24(3+1) +32(-7+1) +16(15+1) \\ & = & 160, \\ T_{4,4} & = & 6(0+0+16(10+6)) \\ & = & 1536. \eean Hence \bean \frac{8}{\pi}\int_{-\infty}^\infty t^{4} \left|\Gamma\zeta\left(\frac{1}{2}+it\right)\right|^2 dt & = & \log(2\pi)-\gamma - 8 - 7\cdot \frac{1}{30} + 160\frac{\zeta(2)/6}{2} - 1536\frac{\zeta(4)/30}{4} \\ & = & \log(2\pi)-\gamma - \frac{247}{30} + \frac{40}{3}\zeta(2) - \frac{64}{5}\zeta(4). \eean \bigskip \subsection{Numerical values} The following table gives the first values of the coefficients $T_{N,j}$: \begin{equation*} \begin{array}{| c | ccccccc |} \hline N \backslash j & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \hline 2 & 16 & & & & & & \\ 3 & 0 & -144 & & & & &\\ 4 & 160 & 0 & 1536 & & & &\\ 5 & 0 & -5280 & 0 & -19200 & & &\\ 6 & 1456 & 0 & 145920 & 0 & 276480 & & \\ 7 & 0 & -147504 & 0 & -3897600 & 0 & -4515840 & \\ 8 & 13120 & 0 & 9225216 & 0 & 105799680 & 0 & 82575360 \\ \hline \end{array} \end{equation*} \bigskip The last table gives the first moments $M^{\Gamma\zeta}_{k}$: \medskip \begin{center} \begin{tabular}{|c|l|} \hline $k$ & $\qquad M^{\Gamma\zeta}_{k} \rule{0pt}{2.6ex} \rule{2.6ex}{0pt}$ \tabularnewline \hline 0 \rule{0pt}{2.6ex}& $4.77937654\cdots$ \rule{0pt}{2.6ex} \\ 2 \rule{0pt}{2.6ex}& $0.59600176\cdots$ \rule{0pt}{2.6ex} \\ 4 \rule{0pt}{2.6ex}& $0.43434281\cdots $\rule{0pt}{2.6ex} \\ 6 \rule{0pt}{2.6ex}& $1.01613719\cdots $\rule{0pt}{2.6ex} \\ 8 \rule{0pt}{2.6ex}& $5.60532440\cdots$ \rule{0pt}{2.6ex} \\ 10 \rule{0pt}{2.6ex}& $57.6316873\cdots$ \rule{0pt}{2.6ex} \\ 12 \rule{0pt}{2.6ex}& $940.337401\cdots$ \rule{0pt}{2.6ex} \tabularnewline \hline \end{tabular} \end{center} \bigskip \section*{Acknowledgement} The authors thank Michel Balazard for noting the relation of Ramanujan's formulas with Mellin-Plancherel theorems and for communicating the reference \cite{Kim16}.
They thank Charles Bordenave for communicating \cite{Sim98}. They are also grateful to Ren\'e Adad for his kind support and corrections, and to Ren\'e Adad and Joseph Najnudel for numerical experiments. The first author warmly thanks Fran\c{c}ois Alouges, Michel Balazard, Paul Bourgade, Christophe Delaunay, Bryce Kerr, Oleksiy Klurman, Joseph Najnudel, Olivier Ramar\'e and Kristian Seip for very interesting and insightful discussions. \bigskip
{ "timestamp": "2022-09-23T02:15:07", "yymm": "2209", "arxiv_id": "2209.10990", "language": "en", "url": "https://arxiv.org/abs/2209.10990" }
\section{Introduction} Core-collapse supernovae (CCSNe) are one of the primary sources of heavy elements in the universe. They modify and disseminate the products of the nucleosynthesis of their massive stellar progenitors and freshly produce radioactive and trans-iron species through various processes such as explosive burning in the shock-heated ejecta, freeze-out from nuclear statistical equilibrium, neutrino-induced reactions, and neutron and proton capture chains \citep[e.g.,][]{2002RvMP...74.1015W,2016ApJ...821...38S,2019ApJ...870....2C,2020ApJ...888...91E,2021RvMP...93a5002C,2021PASA...38...62D}. Thus they play a crucial role as one of the main drivers of galactic chemical evolution \citep[e.g.,][]{1995ApJS...98..617T,2003Ap&SS.284..539M,2015ApJ...808..132H,2020ApJ...900..179K,2021MNRAS.506.4131W}. Large sets of progenitor models need to be surveyed with numerical simulations of CCSNe in order to account for a rich diversity of pre-collapse conditions, because the evolution of massive stars depends not only on the stellar mass and metallicity but also on the amount of rotation and the strength of internal magnetic fields, different prescriptions of mass loss rates through stellar winds as well as binary interactions and mergers. Moreover, uncertainties connected to nuclear rates and the treatment of multi-dimensional effects such as angular momentum transport, convection, overshooting, and boundary mixing introduce further variations. Systematic investigations of large model sets are therefore indispensable for characterising the electromagnetic signatures of CCSNe linked to different types of hydrogen-rich and stripped progenitors \citep[e.g.,][]{2016ApJ...821...38S,2021A&A...652A..64D,2021A&A...656A..61D,2021ApJ...921..143C,2022ApJ...934...67B}. The same effort is also necessary for predicting the mass distributions of neutron stars and black holes as the compact remnants of stellar core collapse events \citep[e.g.,][]{2012ApJ...757...69U,2015ApJ...801...90P,2016ApJ...821...38S,2016MNRAS.460..742M,2019ApJ...870....1E,2020ApJ...890...51E,2020ApJ...896...56W,2021A&A...645A...5S}, which are responsible for the growing repository of measured gravitational-wave signals when they are components in close binary systems \citep{2021ApJ...913L...7A,2021arXiv211103606T}. Although the mechanisms of CCSN explosions, either neutrino-driven or magneto-rotational, have been recognized to be generically multi-dimensional hydrodynamic phenomena \citep[see, e.g.,][for reviews of full-fledged state-of-the-art, multi-dimensional CCSN simulations]{2005NatPh...1..147W,2007PhR...442...38J,2012ARNPS..62..407J,2016ARNPS..66..341J,2016PASA...33...48M,2020LRCA....6....3M,2021Natur.589...29B}, three-dimensional (3D) simulations are still constrained by their prohibitive demands on computational resources. Therefore the enormous diversity of the progenitor conditions can currently be accounted for only by CCSN calculations in spherical symmetry (one dimension; 1D), which permit one to follow the long-time evolution in order to determine the explosion properties including nucleosynthesis and electromagnetic observables for large sets of stellar models.
Traditionally, this task has been undertaken by triggering the CCSN explosions artificially either by a so-called ``thermal bomb'' mechanism \citep[e.g.,][]{1988A&A...196..141S,1989A&A...210L...5H,1990ApJ...349..222T,1996ApJ...460..408T,2001ApJ...555..880N,2006NuPhA.777..424N,2008ApJ...673.1014U,2010ApJ...719.1445M}, in which an outgoing shock wave is initiated by dumping thermal energy into a chosen volume around a chosen initial mass cut. This initial mass cut is picked according to nucleosynthesis constraints based on the electron fraction ($Y_e$) of the pre-collapse progenitor or by an estimate of the mass of the compact remnant, and it is intended to define the borderline between this emerging compact object and the explosion ejecta before fallback happens later and possibly brings back matter that fails to become gravitationally unbound. Alternatively, the outgoing shock was generated by a piston-driven mechanism \citep[e.g.,][]{1988ApJ...330..218W,1995ApJS..101..181W,2002RvMP...74.1015W,2007PhR...442..269W,2008ApJ...679..639Z}, where kinetic energy is deposited by the outward motion of a piston, which is placed at a chosen Lagrangian mass shell corresponding to the initial mass cut to push the overlying shells. Refinements of these methods concern, for example, a contraction of the location of the piston or initial mass cut to mimic the collapse that precedes the subsequent expansion, and variations of the duration of the energy deposition by the thermal bomb instead of an instantaneous delivery of the energy. In yet another approach \citep[e.g.,][]{2003ApJ...592..404L,2006ApJ...647..483L,2012ApJS..199...38L,2013ApJ...764...21C,2018ApJS..237...13L} a ``kinetic bomb'' was applied in 1D Lagrangian hydrodynamic simulations of CCSN explosions such that the blast wave is started by imparting an initial expansion velocity at a mass coordinate around 1\,$M_\odot$, which is still well inside the iron core, and tuning the value of this velocity such that desired values of the ejected amount of $^{56}$Ni and/or of the final kinetic energy of the ejecta are obtained. Also multi-dimensional (2D, 3D) variants of the method of thermal (or kinetic) bombs exist to trigger highly asymmetric blast waves and jet-induced or jet-associated explosions \citep[see, e.g.,][for a few exemplary applications from a rich spectrum of publications]{1997ApJ...486.1026N,1999ApJ...524..262M,1999ApJ...524L.107K,2000ApJ...531L.119A,2003ApJ...596..401N,2003ApJ...598.1163M,2006ApJ...647.1255N,Ono+2020,Orlando+2020}. All of these methods of artificially exploding massive stars depend on numerous free parameters: for example, the location of the initial mass cut; the width of the energy-deposition region and the timescale of energy deposition for the thermal bomb; the duration and depth of the collapse-like contraction; the initial expansion velocity and coasting radius for the piston method; the initial velocity of the kinetic bomb; or the 2D/3D geometry of the energy input. These parameters are chosen suitably to produce defined values for the explosion energy and the expelled $^{56}$Ni mass or to reproduce multi-dimensional properties of observed supernovae and supernova remnants. Such degrees of freedom have an influence on the nucleosynthetic yields through the initial strength of the shock and the volume and extent of the heating achieved by the thermal energy injection, which determine the ejecta mass where sufficiently high peak temperatures for nuclear reactions are reached.
Moreover, the traditional explosion recipes do not enable one to track the conditions in the innermost ejecta, whose neutron-to-proton ratio gets reset by the exposure to the intense neutrino fluxes from the nascent neutron star or from an accretion torus around a new-born black hole \citep[see, e.g.,][]{2016ApJ...818..123B,2017MNRAS.472..491M,2019Natur.569..241S,2021ApJ...915...28B}. For these reasons more modern CCSN explosion treatments by means of ``neutrino engines'' have been introduced that attempt to capture essential effects of the neutrino-driven mechanism but replace the highly complex and computationally intense, energy and direction dependent neutrino transport used in full-fledged neutrino-hydrodynamical CCSN models by simpler treatments. This line of research has been pursued in 2D and 3D simulations either neglecting neutrino transport and replacing it by a so-called light-bulb approximation with chosen (time-dependent) neutrino luminosities and spectra \citep[e.g.,][]{1996A&A...306..167J,2000ApJ...531L.123K,2001ApJ...552..756S,2003A&A...408..621K,2006A&A...453..661K,2013ApJ...771...27Y} or by using an approximate, grey description of the neutrino transport with a boundary condition for the neutrino emission leaving the optically thick, high-density core of the proto-neutron star \citep[e.g.,][]{2006A&A...457..963S,2010ApJ...725L.106W,2013A&A...552A.126W,2015A&A...577A..48W,2017ApJ...842...13W}. Neutrino-engine treatments are also applied in 1D hydrodynamic CCSN simulations with neutrino transport schemes of different levels of refinement for determining the supernova and compact remnant properties as well as the associated nucleosynthetic outputs for large sets of stellar progenitor models. In these studies neutrino-driven explosions are obtained by parametrically increasing the neutrino-energy deposition behind the stalled bounce shock \citep{2011ApJ...730...70O}, by describing the neutrino emission of the newly formed neutron star via a model with parameters that are calibrated to reproduce basic properties of the well-observed CCSNe of SN~1987A and SN~1054 (Crab) \citep[P-HOTB;][]{2012ApJ...757...69U,2016ApJ...818..124E,2016ApJ...821...38S,2020ApJ...890...51E}, by parametrizing additional energy transfer to the CCSN shock via muon and tau neutrinos (also using observational constraints) \citep[PUSH;][]{2015ApJ...806..275P,2019ApJ...870....1E,2019ApJ...870....2C,2020ApJ...888...91E}, or by also including the effects of convection and turbulence through a modified mixing-length theory approach with free parameters adjusted to fit the results of 3D simulations \citep[STIR;][]{2020ApJ...890..127C}. Alternatively to these novel simulation approaches, semi-analytic descriptions have been applied, either by using spherical, quasi-static evolutionary sequences to determine the explosion threshold and energy input to the explosion via a neutrino-driven wind \citep{2015ApJ...801...90P} or by parametrically describing the elements of multi-dimensional processes that play a role in initiating and powering CCSNe via the neutrino-heating mechanism \citep{2016MNRAS.460..742M,2021A&A...645A...5S,2022arXiv220400025A}. Despite these more advanced modelling efforts, which generally reflect more of the physics of the CCSN explosion mechanism than thermal-bomb or piston models, the latter are still widely used.
In fact, thermal bombs have experienced an increase in popularity in 1D applications recently, because they are applied in the open-source codes MESA \citep{2015ApJS..220...15P} and SNEC \citep{2015ApJ...814...63M}. They have the advantage of simplicity and great flexibility in their usage, allowing one to control the dynamics of the explosion by choosing the value, timescale, mass layer or volume of the energy deposition, and the evolution of the inner boundary, i.e., if and how the collapse of the stellar core is taken into account. The sensitivities of the traditional thermal or kinetic bombs and piston mechanisms and of the associated nucleosynthesis to the involved parameterisations and the corresponding limitations of these methods have been investigated in previous works, though never comprehensively \citep{1991ApJ...370..630A,2007ApJ...664.1033Y}. In a seminal study \citet{1991ApJ...370..630A} discussed the parameters employed in the numerical recipes to artificially launch the explosion of a 20\,$M_\odot$ progenitor in 1D. They initiated explosions at different locations of enclosed mass, and compared the ejecta conditions (especially the peak temperatures reached behind the outgoing shocks) as well as the explosively created nuclear yields. In particular, they considered thermal bomb and piston calculations for two variations, namely whether or not the inner core was allowed to collapse prior to shock initiation. We will call such cases ``collapsed'' (C) versus ``uncollapsed'' (U) models. They concluded that the former are a better representation of the CCSN physics, which is governed by the iron-core collapse to a neutron star. However, in their study the C-cases also showed more differences between piston and bomb results. Their main concerns were the uncertainties in the choice of the mass-cut location and in the assumed duration of the initial collapse phase, and the differences in the peak temperature because of too much kinetic energy being connected to the piston and too much thermal energy to the bomb mechanism. Moreover, they expressed concerns that the instantaneous energy deposition assumed in their simulations might not be appropriate if the CCSN mechanism is delayed and the shock receives energy input by neutrino heating for several seconds \citep[as indeed seen in state-of-the-art self-consistent CCSN simulations, e.g.,][]{2021ApJ...915...28B}. In a subsequent study, \citet{2007ApJ...664.1033Y} arrived at similar conclusions and found not only a strong sensitivity of the elemental and isotopic yields of silicon and heavier elements to the assumed explosion energy, but also considerable differences of the abundances of these nuclei between piston-driven and thermal-bomb type explosions even for the same explosion energy. In particular, they considered a 23\,$M_\odot$ star, whose collapse, bounce-shock formation, and shock stagnation were followed by a 1D neutrino-hydrodynamics simulation. Their work was focused on triggering explosions of different energies by thermal energy injection over time intervals of 20\,ms, 200\,ms, and 700\,ms, starting at 130\,ms after bounce (corresponding to 380\,ms after the start of the collapse simulation) and leading to explosions at 150\,ms, 330\,ms, and 830\,ms after bounce, respectively. The authors reported a considerable increase of intermediate-mass and Fe-group yields with the longer delay times of the explosion (i.e., longer duration of the energy deposition) and, in particular, significantly more (orders of magnitude!)
$^{56}$Ni and several times more $^{44}$Ti production for models with $1.5\times 10^{51}$\,erg explosion energy and 200\,ms and 700\,ms delay time compared to a case with the same explosion energy but a short energy injection time of only 20\,ms. Recently, \citet{2019ApJ...886...47S} (in the following SM19) published a study where they came to exactly the opposite conclusion based on 1D hydrodynamic CCSN models with a thermal-bomb prescription to trigger the explosions of 15, 20, and 25\,$M_\odot$ progenitors. They found that the produced amount of $^{56}$Ni {\em decreases} with longer timescales of the energy deposition; observational constraints for nucleosynthesis products of CCSNe could be fulfilled only by rapid explosions when the final blast-wave energy was reached within $\lesssim$\,250\,ms, and best compatibility was obtained for nearly instantaneous explosions where the energy was transferred within $\lesssim$\,50\,ms. They interpreted their results as a serious challenge for the neutrino-heating mechanism, which delivers the explosion energy in progenitors as massive as those considered by SM19 only on timescales that are significantly longer than 1\,s \citep[see][]{2016ApJ...818..123B,2017MNRAS.472..491M,2021ApJ...915...28B,2021Natur.589...29B}. However, the opposite trends reported by \citet{2007ApJ...664.1033Y} and SM19 for the dependence of the $^{56}$Ni yields on the energy-deposition timescale need not contradict each other. In this context it is important to remember that the former study considered collapsed (C) models, whereas SM19 did not collapse their stars (using U models) before switching on the thermal energy deposition. This is likely to have important consequences for the hydrodynamic response of the stellar gas when the energy input happens on different timescales. With the expansion of the heated gas setting in, which is easier in an uncollapsed star, expansion cooling takes place. Therefore slow energy injection in a star that has not collapsed will not be able to achieve sufficiently high temperatures in sufficiently large amounts of ejecta to enable any abundant production of $^{56}$Ni. In our work we aim at investigating this question quantitatively by means of 1D hydrodynamical simulations within the framework of the thermal-bomb method. Two different aspects serve as our motivation. First, SM19 and also \citet{2019MNRAS.483.3607S} claimed that long energy transfer timescales or slow growth rates of the blast-wave energy (``slow explosions'') suppress the $^{56}$Ni production. The authors interpreted this proposition as a problem for current self-consistent neutrino-driven explosion models and the neutrino-driven mechanism itself. Second, our study is supposed to assist the design of suitable thermal-bomb treatments that can serve as easy-to-implement methods to conduct systematic CCSN simulations in 1D for large progenitor sets without the need for a detailed treatment of neutrinos. Naturally, such approaches can never capture all aspects of ``realistic'' multi-dimensional CCSN models, in particular not with regard to the innermost, neutrino-processed ejecta.
Nevertheless, such simplified explosion treatments can still be useful to answer many observationally relevant questions, in particular since the explosive nucleosynthesis past the outer edge of the silicon shell is mostly determined by the explosion energy and the progenitor structure, but is little sensitive to the initiation method of the explosion \citep{1991ApJ...370..630A}.\footnote{According to present-day understanding, this statement holds better for the outer edge of the oxygen layer than for the silicon shell.} Similarly, the explosive nucleosynthesis in these layers is also unlikely to depend strongly on the neutrino physics and the multi-dimensional hydrodynamic processes that play a crucial role in the CCSN mechanism and that determine the observable asymmetries of the explosions. In this paper we thus investigate the influence of the energy-deposition timescale for thermal bombs in collapsed as well as uncollapsed models. However, instead of conducting a complete survey of all free parameters needed to steer the thermal bombs, we will stick to simple and well-tested prescriptions already applied in previous publications. As a diagnostic property we will focus on the produced mass of $^{56}$Ni before any effects of fallback could modify the ejecta, because fallback will also depend on the radially outward mixing of metals and thus on multi-dimensional effects that can be accounted for in 1D models only with additional assumptions for parametric treatments. The amount of $^{56}$Ni produced by the CCSN ``engine'' is not only a crucial characteristic of the early dynamics of the explosion but also a primary observable that governs the light curve and the electromagnetic display of CCSNe from weeks to many years \citep[e.g.][]{1989ARA&A..27..629A,1994ApJ...437L.115I}. In a follow-up paper we plan to explore a wider range of thermal-bomb parameterisations and check them against piston-triggered and neutrino-driven CCSN explosion models. Moreover, in this subsequent work we will compare the results for a greater selection of products of explosive nucleosynthesis. Our paper is organised as follows. In Section~\ref{section:methods} we briefly describe the stellar evolution models considered in our study, the methodology of the hydrodynamic explosion modelling, the small nuclear reaction network used in the hydrodynamic simulations and the large network applied in a more detailed post-processing of the nucleosynthesis. In Section~\ref{section:setups} we describe our setup for reference models, guided by the calculations reported by SM19, i.e., uncollapsed models, as well as the variations investigated by us, i.e., collapsed models and different mass layers vs. radial volumes for the energy deposition. In Section~\ref{section:results} we present our results, followed by a summary and discussion in Section~\ref{section:conclusions}. \section{Methods and inputs} \label{section:methods} In this section we describe the three aspects of our calculations: the progenitors used as input models, the corresponding explosion simulations including the definition of the thermal bomb method, and the nucleosynthetic post-processing with an extended nuclear-reaction network.
Our progenitors were taken from the work of \citet{2014ApJ...783...10S}; the explosion modelling was performed using the hydrodynamic code \textsc{Prometheus-HOTB} \citep{1996A&A...306..167J,2003A&A...408..621K,2006A&A...457..963S,2007A&A...467.1227A,2012ApJ...757...69U,2016ApJ...818..124E}, but without making use of the neutrino-transport module associated with this code; and the detailed explosive nucleosynthesis was calculated with the SkyNet open-source nuclear network code \citep{2017ApJS..233...18L}. \begin{table} \centering \caption{Properties of the progenitors used in this work. $M_{\rm pre}$ is the total pre-collapse mass, $M_{\rm He}$ is the mass of the helium core, $M_{\rm CO}$ is the mass of the CO core, $M_{s=4}$ is the enclosed mass where the dimensionless entropy $s/k_\mathrm{B} = 4$, and $M_{Y_e=0.48}$ is the enclosed mass where the electron fraction is equal to 0.48. All the masses are in $M_\odot$. \label{tab:psn} \begin{tabularx}{\columnwidth}{lccccc} \hline \hline $M_{\rm ZAMS}$ & $M_{\rm pre}$ & $M_{\rm He}$ & $M_{\rm CO}$& $M_{s=4}$ & $M_{Y_e=0.48}$ \\ \hline $12.3$ & $11.0599$ & $3.29162$ & $2.22902$ & $1.59102$ & $1.23017$ \\ $19.7$ & $15.7490$ & $6.09592$ & $4.85410$ & $1.53298$ & $1.25635$ \\ $21.0$ & $16.1109$ & $6.62284$ & $5.37384$ & $1.48435$ & $1.27209$ \\ $26.6$ & $15.3093$ & $8.96794$ & $7.69495$ & $1.73833$ & $1.38264$ \\ \hline \end{tabularx} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{progenitors_encM_12c2-1} \caption{Density structure as a function of enclosed mass for the considered progenitors with $M_{\rm ZAMS}=12.3\,M_\odot$ (cyan line), $19.7\,M_\odot$ (black line), $21.0\,M_\odot$ (red line), and $26.6\,M_\odot$ (blue line). The color convention for the progenitors is kept the same throughout our paper.} \label{fig:psn} \end{figure} \begin{figure*} \includegraphics[width=\columnwidth]{progenitors_encM_closer_Mcuts8} \includegraphics[width=\columnwidth]{progenitors_encM_closer_ye_Mcuts7} \includegraphics[width=\columnwidth]{progenitors_encM_closer_stot_Mcuts8} \includegraphics[width=\columnwidth]{xni_12_new5} \caption{Pre-collapse structure of the progenitors used in this work, namely the density (top left), the dimensionless entropy per nucleon $s/k_\mathrm{B}$ (bottom left), and the electron fraction $Y_e$ (top right) versus enclosed mass. Vertical lines indicate the inner grid boundaries chosen in our explosion simulations, with the line colors corresponding to the colors chosen for the four stellar models: the pale solid lines mark the deeper locations where $Y_e=0.48$, which is also indicated by the horizontal black line in the $Y_e$ plot, and the short-dashed lines define the points where the dimensionless entropy per nucleon $s/k_\mathrm{B}$ equals 4, which can also be seen by the horizontal black line in the $s/k_\mathrm{B}$ plot. The lower right panel displays the mass fraction of $^{56}$Ni obtained as function of enclosed mass for our default setup of uncollapsed models with deep inner boundary; the energy-deposition (ED) timescale assumed for the displayed case is $t_{\rm inj}=0.01$\,s. The crosses on the stellar profiles in all panels mark the locations of the inner and outer edges of the main production region of $^{56}$Ni (see Section~\ref{section:psn} for the definition of this region).
Note that, due to the similarity of the profiles, the red and black crosses in the two left panels and the lower right panel partly overlap.} \label{fig:psn_closer} \end{figure*} \subsection{Presupernova models} \label{section:psn} The progenitor models for this work were computed with the 1D hydrodynamics code KEPLER \citep{1978ApJ...225.1021W} and are a subset of the large model set published by \citet{2014ApJ...783...10S}. They represent non-rotating stars with solar metallicity, which were evolved from the main sequence until the onset of the iron-core collapse. The physics of this set of progenitors was discussed in detail in the literature \citep[e.g.][]{2002RvMP...74.1015W, 2007PhR...442..269W}. In order to investigate basic features of the nickel production using different setups for the thermal bomb triggering the CCSN explosion, we selected four progenitors with zero-age-main-sequence (ZAMS) masses of $M_{\rm ZAMS} = 12.3$, $19.7$, $21.0$, and $26.6\,M_\odot$. Their characteristic properties are listed in Table~\ref{tab:psn}, where $M_{\rm pre}$ is the total pre-collapse mass, $M_{\rm He}$ is the helium-core mass defined by the mass coordinate where $X({\rm H})\le0.2$, $M_{\rm CO}$ is the mass of the carbon-oxygen core associated with the location where $X({\rm He})\le0.2$, $M_{s=4}$ is the mass enclosed by the radius where the value of the dimensionless entropy per nucleon is $s/k_\mathrm{B}=4$ (where $k_\mathrm{B}$ is the Boltzmann constant), and $M_{Y_e=0.48}$ is the enclosed mass where the electron fraction is $Y_e=0.48$. This selection of the progenitors is motivated by the aim of covering approximately the same range of progenitor masses as considered by SM19. For the lighter progenitors, we investigated two models with $M_{\rm ZAMS}=12.3\,M_\odot$ and $19.7\,M_\odot$, representing two extreme cases with respect to their density declines at mass coordinates $m \gtrsim 1.5\,M_\odot$ and differing from each other by the shape of their corresponding density profiles (see Figures~\ref{fig:psn} and \ref{fig:psn_closer}). Our simulations are intended to explore the uncertainties in the thermal-bomb modelling, and, owing to their structure, these progenitor models exhibit different behavior in the explosive nickel production in our calculations, as will be discussed in Section~\ref{section:results}. The upper two panels and the lower left one in Figure~\ref{fig:psn_closer} visualize the progenitor structures in more detail by showing density, electron fraction $Y_e$, and dimensionless entropy per nucleon as functions of enclosed mass. The crosses indicate the inner and outer edges of the regions where most of the $^{56}$Ni is produced, based on the results given in the lower right panel of Figure~\ref{fig:psn_closer}. This last panel displays, as an exemplary case, the nickel mass fractions for one of our setups (namely the uncollapsed models with deep inner boundary and an energy deposition timescale of 0.01\,s, see below). The main region of $^{56}$Ni production is defined by the requirement that the mass fraction of this isotope is greater than 0.1, and consequently at least 90\% of its total yield is produced between the limits marked by the two crosses. Nickel and other heavy elements are mainly produced in the close vicinity of the inner grid boundaries of the simulations (for the relevant models these are marked by vertical pale solid lines in Figure~\ref{fig:psn_closer}), i.e., close to the mass region that is assumed to end up in the newly formed neutron star.
Therefore, differences in the $^{56}$Ni production will be connected to differences in the progenitor structures between the inner grid boundary and roughly $2\,M_\odot$. \subsection{Hydrodynamic explosion modelling} \label{section:boom} The progenitor models were exploded by making use of the 1D hydrodynamics code \textsc{Prometheus-HOTB}, or P-HOTB for short, which solves the hydrodynamics of a stellar plasma, including evolution equations for the electron fraction and the nuclear species, in a conservative manner on an Eulerian radial grid, employing a higher-order Godunov scheme with an exact Riemann solver. The code employs a microphysical equation of state that combines non-relativistic Boltzmann gases for nucleons and nuclei, arbitrarily degenerate and arbitrarily relativistic electrons and positrons, and energy and pressure contributions from trapped photons. Although the hydrodynamics is treated in the Newtonian limit, the self-gravity of the stellar matter takes into account general relativistic corrections. Relevant details of the code and its upgrades over time can be found in the papers of \citet{1996A&A...306..167J,2003A&A...408..621K,2006A&A...457..963S,2007A&A...467.1227A,2012ApJ...757...69U,2016ApJ...818..124E,2020ApJ...890...51E}. The CCSN models discussed in this paper were computed with a radial mesh of 2000 zones, geometrically distributed from the inner grid boundary at radius $R_\mathrm{ib}$ to the stellar surface with a resolution of $\Delta r/R_\mathrm{ib} = 10^{-3}$ in the innermost grid cell and $\Delta r/r < 0.013$ everywhere on the grid. The central volume ($r < R_\mathrm{ib}$) was excluded from the computational mesh and replaced by an inner grid boundary at $R_\mathrm{ib}$ plus a gravitating point mass at the grid center. This introduces a first parameter into the artificial explosion modelling, namely the enclosed mass at the location of this inner boundary (sometimes called the (initial) mass cut), which is identified with the initial mass of the compact remnant. In our calculations we considered two cases for the choice of the position of the inner boundary. In a first case, following SM19, it was placed where $Y_e=0.48$ in the outer regions of the progenitor's iron core. This deep location, indicated by the letter ``D'' in the names of the corresponding explosion models, is extreme because the ejection of matter with $Y_e$ as low as 0.48 is severely constrained by observational bounds on the $^{58}$Ni production in CCSNe \citep[see, e.g., SM19 and][]{2015ApJ...807..110J}. In a second case we placed the inner grid boundary at the location where the dimensionless entropy per nucleon rises to $s/k_\mathrm{B}=4$, which corresponds to the base of the oxygen shell. This position is thus farther out in mass (see Table~\ref{tab:psn}) and is indicated by the letter ``O'' in the names of the corresponding explosion simulations. This location was also used in 1D piston-driven CCSN models by \citet{2007PhR...442..269W} and \citet{2008ApJ...679..639Z} and is more compatible with the initial mass cut developing in neutrino-driven explosions \citep[see, e.g.,][]{2016ApJ...818..124E}. In Figure~\ref{fig:psn_closer} these two choices of the inner boundary position are indicated by vertical lines for each progenitor. Realistically, the surface of the proto-neutron star is likely to be located somewhere between these two positions and will also be determined only after possible fallback has taken place.
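As Table~\ref{tab:psn} shows, the mass window between the two boundary choices is substantial: the difference $M_{s=4}-M_{Y_e=0.48}$ amounts to $\approx$\,$0.21\,M_\odot$ for the $21.0\,M_\odot$ star, $\approx$\,$0.28\,M_\odot$ for the $19.7\,M_\odot$ star, and $\approx$\,$0.36\,M_\odot$ for the $12.3\,M_\odot$ and $26.6\,M_\odot$ stars. The two choices therefore assign several tenths of a solar mass of the innermost, potentially $^{56}$Ni-producing material either to the ejecta or to the compact remnant.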
The mass of the proto-neutron star cannot be significantly larger than the mass enclosed by the base of the oxygen shell (``O'' location), because otherwise the typical neutron star masses would be too large to be compatible with observations \citep[][]{2007PhR...442..269W}. The temporal behavior of the inner boundary is likely to affect the dynamics of the explosion, because the effect of the deposition of energy by the thermal-bomb method will depend on the state of the matter the energy is transferred to. If the boundary radius is kept constant at its initial value, i.e., if the stellar core is not collapsed and the explosion is initiated right away, we speak of uncollapsed models, denoted by the initial letter ``U'' in the model names. Alternatively, if the boundary is first contracted to mimic the collapse of the progenitor's degenerate core, the matter just exterior to the inner boundary can move to higher densities and deeper into the gravitational potential of the central mass before the bomb is started. This approach defines our collapsed models and is indicated by the initial letter ``C'' in the names of the corresponding explosion models. In the thermal-bomb method, the CCSN explosion is triggered by thermal energy input into a chosen layer around the inner boundary, either instantaneously \citep[e.g.][]{1991ApJ...370..630A} or over a chosen interval in time \citep[e.g., SM19 and][]{2007ApJ...664.1033Y}. The injected energy $E_\mathrm{inj}$, the mass layer $\Delta M$ or volume $\Delta V$ where the energy is deposited, and the timescale of the energy injection $t_\mathrm{inj}$ are free parameters of such a procedure. These parameters define energy transfer rates per unit of mass or volume, respectively: \begin{eqnarray} \dot e_\mathrm{inj, M} &=& \frac{E_\mathrm{inj}}{\Delta M\,t_\mathrm{inj}} \,, \label{eq:edotm} \\ \dot e_\mathrm{inj, V} &=& \frac{E_\mathrm{inj}}{\Delta V\,t_\mathrm{inj}} \,. \label{eq:edotv} \end{eqnarray} The expressions of Equations~(\ref{eq:edotm}) and (\ref{eq:edotv}) assume that, for simplicity, the energy input rate is constant in time and thus the deposited energy grows linearly with time (illustrative numbers are given at the end of this subsection). The total injected energy $E_\mathrm{inj}$ was varied in order to obtain a chosen value for the terminal explosion energy $E_\mathrm{exp}$ at infinity. In our study we considered CCSN models with an explosion energy close to $E_\mathrm{exp} = 10^{51}$\,erg and determined this value at $t \ge 80$\,s, at which time it had saturated in each model. The layer of the energy deposition is characterized by two fixed Lagrangian mass coordinates in the case of $\Delta M$ and two fixed radii in the case of $\Delta V$. In our simulations the inner boundary of the energy-deposition layer (IBED) was set to be the inner boundary of the computational grid, and the outer boundary of the energy-deposition layer (OBED) depends on the choice of $\Delta M$ or $\Delta V$. The last parameter here is the timescale of the energy deposition $t_\mathrm{inj}$, which defines how fast the shock develops and which we varied in our study, following SM19. During the CCSN simulations carried out for our investigation, we employed a reflecting inner boundary condition in order to maintain the pressure support while the explosion was still developing. This setting is motivated by the continued push of the CCSN ``engine'' (either neutrino-driven or magneto-rotational) over the period of time when the blast-wave energy builds up.
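For orientation, Equation~(\ref{eq:edotm}) implies the following rough magnitudes of the specific heating rate (the numbers are illustrative only and assume, for simplicity, $E_\mathrm{inj}\sim E_\mathrm{exp}\sim 10^{51}$\,erg deposited into our standard mass layer of $\Delta M = 0.05\,M_\odot \approx 10^{32}$\,g):
\begin{equation*}
\dot e_\mathrm{inj, M} \sim \frac{10^{51}\,\mathrm{erg}}{10^{32}\,\mathrm{g}\;t_\mathrm{inj}} \approx
\begin{cases}
10^{21}\,\mathrm{erg\,g^{-1}\,s^{-1}} & \mathrm{for}\ t_\mathrm{inj}=0.01\,\mathrm{s}\,,\\
5\times 10^{18}\,\mathrm{erg\,g^{-1}\,s^{-1}} & \mathrm{for}\ t_\mathrm{inj}=2\,\mathrm{s}\,.
\end{cases}
\end{equation*}
The injection timescales considered in this work (see Section~\ref{section:setups}) thus span more than two orders of magnitude in the specific energy-deposition rate. Since in practice $E_\mathrm{inj}$ exceeds $E_\mathrm{exp}$ (part of the deposited energy goes into unbinding the overlying stellar layers), these numbers are lower limits to the actual rates.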
We note in passing that we do not intend to discuss any effects of fallback, which typically play a role only on timescales longer than those considered for nucleosynthesis in the present work. \begin{figure*} \centering \includegraphics[width=0.92\textwidth]{isotope_chart_new5} \caption{Nuclear chart visualizing the three sets of isotopes used in this work for testing the final nucleosynthetic outputs. The test calculations were done under extreme conditions of density, $Y_e$, and entropy, and were carried out until $t=10$\,s. Their results showed convergence in the final yields of the 50 most abundantly produced isotopes between the sets with $262$~isotopes and $878$~isotopes.} \label{fig:network} \end{figure*} \begin{table} \centering \caption{Nuclear species used for the nucleosynthetic post-processing of our thermal-bomb CCSN models with SkyNet.} \label{tab:list} \begin{tabularx}{\columnwidth}{c} \hline \hline Nuclei used in the 262-species network\\ \end{tabularx} \begin{tabularx}{\columnwidth}{ccccc} \hline n & $^{1-3}$H & $^{3-4,6,8}$He & $^{6-8}$Li & $^{7,9-12}$Be \\ $^{8,10-13}$B & $^{11-15}$C & $^{12-16}$N & $^{13-21}$O & $^{16-23}$F \\ $^{17-24}$Ne & $^{19-25}$Na & $^{22-27}$Mg & $^{25-28}$Al & $^{27-33}$Si \\ $^{29-34}$P & $^{31-37}$S & $^{33-38}$Cl & $^{35-41}$Ar & $^{37-44}$K \\ $^{39-49}$Ca & $^{43-51}$Sc & $^{43-54}$Ti & $^{46-56}$V & $^{47-58}$Cr \\ $^{50-59}$Mn & $^{51-66}$Fe & $^{53-67}$Co & $^{55-68}$Ni & $^{57-66}$Cu\\ $^{58-66}$Zn & $^{59-67}$Ga & $^{60-69}$Ge\\ \hline \end{tabularx} \end{table} \subsection{Reaction Networks} \label{section:reactions} A small $\alpha$-network is consistently coupled to the hydrodynamic modelling with \textsc{P-HOTB}. It is described in the relevant details by \citet{1986A&A...162..103M} and is capable of tracking the bulk nucleosynthesis, thus accounting for the contribution to the explosion energy provided by explosive nuclear burning. The network includes the $13$ isotopes of the $\alpha$-chain, $^{4}$He, $^{12}$C, $^{16}$O, $^{20}$Ne, $^{24}$Mg, $^{28}$Si, $^{32}$S, $^{36}$Ar, $^{40}$Ca, $^{44}$Ti, $^{48}$Cr, $^{52}$Fe, and $^{56}$Ni, plus a ``tracer nucleus'' $^{56}$Tr, which is connected to the network with the reaction rates of $^{56}$Ni and keeps track of the formation of neutron-rich species in matter with considerable neutron excess, i.e., when $Y_e < 0.49$ \citep{2000ApJ...531L.123K,2001AIPC..561...21K,2003A&A...408..621K}. The network calculations made use of the reaction rates of \citet{1996ApJ...460..408T}, which were applied for temperatures between $0.1$\,GK and $9$\,GK, whereas for higher temperatures nuclear statistical equilibrium (NSE) was assumed. In order to perform more detailed nucleosynthesis calculations of our models in a post-processing step, we made use of the modular nuclear reaction network library SkyNet \citep{2017ApJS..233...18L}. For this purpose we extracted the temperature and density evolution of selected mass-shell trajectories from our CCSN explosion simulations with \textsc{P-HOTB} and applied the SkyNet network to each of these shells, starting out with shells closest to the mass cut between ejecta and proto-neutron star and constraining the network calculations to the same regime in temperature as used for the small network in \textsc{P-HOTB}, namely to the interval between $0.1$\,GK and $9$\,GK. Adding up the nuclear abundances obtained for all mass shells that ended up being ejected (i.e.
that expanded outward continuously until the end of the hydrodynamic simulation) provided the integrated yields of chemical elements and isotopes. If mass shells reached a peak temperature above $T_{\rm NSE}=9$\,GK during their infall or explosive expansion, the network calculations were started only at the time when the temperature finally dropped below 9\,GK, using the local NSE composition as initial condition.\footnote{Note that any preceding nuclear composition is erased when NSE is established.} Otherwise, if mass shells did not reach temperatures as high as 9\,GK, the composition evolution of these mass shells was followed with SkyNet from the beginning of their infall through their shock heating and ejection, and the initial composition was taken from the progenitor data. The mass resolution for post-processing the nucleosynthesis was chosen to be $10^{-4}\,M_\odot$ for the innermost part of the ejecta below a stellar mass coordinate of $2\,M_\odot$, and $0.005\,M_\odot$ farther out. SkyNet allows the user to define any selection of isotopes of interest together with their relevant reactions. We took great care to employ a sufficiently large set of isotopes and to include all of their important reactions. To this end, we started with three different sets of isotopes, inspired by their use in the literature: a small network with 160 isotopes \citep{2021ApJ...921..113S}, a medium-sized network with 204 isotopes \citep{2015ApJS..220...15P}, and a large network with 822 isotopes \citep{1992ApJ...395..202W}. We modified the medium and large sets such that each bigger list included the previous one. On top of that we added more light isotopes; for the largest network, for example, we included all nuclear species available in SkyNet with $Z\le15$ and $N\le15$. After these modifications, we ended up with selections of 160, 262, and 878 isotopes (see Figure~\ref{fig:network}). With all three versions of the network we performed nucleosynthesis calculations for about 20 trajectories with the most extreme conditions (in density, $Y_e$, and temperature) picked from the set of our CCSN models. We found that the yields of the 25 most abundantly produced isotopes agreed to better than 1\% between the calculations with 262 species and those with 878 isotopes. Therefore we continued all further analyses with this medium-sized network, whose selection of nuclei is listed in Table~\ref{tab:list}. In our present work, we will only discuss the production of $^{56}$Ni based on our network calculations with the 262-isotope setup of SkyNet. We focus on this nickel isotope and aim at exploring the dependence of its production on the parameterisation of the thermal-bomb treatment, because the mass of $^{56}$Ni ejected in the explosion is an important diagnostic quantity for CCSN observations \citep[e.g.,][]{1989ARA&A..27..629A,2017ApJ...841..127M,2021A&A...655A..90Y,Valerin+2022}. Any implementation of a method to artificially trigger explosions in CCSN models should therefore be checked for its ability to provide reasonable predictions of the $^{56}$Ni yield and for the robustness of these predictions with respect to changes of the (mostly rather arbitrarily) chosen values of the parameters steering the trigger mechanism. The produced amount of $^{56}$Ni is particularly useful to assess these questions, because the isotope is made in the innermost CCSN ejecta.
Therefore it is potentially most immediately and most strongly affected by the artificial method (or by the physical mechanism) that is responsible for initiating the explosion. \begin{table*} \centering \caption{Properties of the thermal-bomb models computed in this work. $M_\mathrm{ZAMS}$ is the ZAMS mass of the progenitor star, ``Model'' is our name for the specific CCSN simulation (see text for our naming convention), ``Inner Grid Boundary'' specifies the criterion for placing the inner grid boundary, $M_\mathrm{ib}$ is the corresponding enclosed mass, $t_\mathrm{coll}$ is the collapse time, $r_\mathrm{min}$ is the minimum radius for the collapse phase, $\Delta M$ is the mass of the energy-injection layer or, respectively, the initial mass in the volume where the energy is injected, $t_\mathrm{inj}$ is the range of energy-deposition timescales considered, and $E_\mathrm{exp}$ is the range of final explosion energies to which the CCSN models for different energy-injection timescales were calibrated (see Section~\ref{section:setups} for details). Note that by construction all $26.6\,M_\odot$ models have identical values of $\Delta M$ in this listing (unless $\Delta M = 0.005\,M_\odot$). } \label{tab:explosions} \begin{tabular}{lcccccccc} \hline \hline $M_\mathrm{ZAMS}$ & Model & Inner Grid & $M_\mathrm{ib}$ & $t_\mathrm{coll}$ & $r_\mathrm{min}$ & $\Delta M$ & $t_\mathrm{inj}$ & $E_\mathrm{exp}$ \\ $[M_\odot]$ & & Boundary & [$M_\odot$] & [s] & [cm] & [$M_\odot$] & [s] & [$10^{51}$\,erg] \\ \hline \hline $12.3$ & U$12.3$D & $Y_e = 0.48$ & $1.230$ & no collapse & $-$ & $0.05$ & $0.01-2.0$ & $1.0099-1.0170$ \\ $12.3$ & U$12.3$DM\textquotesingle & $Y_e = 0.48$ & $1.230$ & no collapse & $-$ & $0.005$ & $0.01-2.0$ & $0.9834-1.0241$ \\ \hline $19.7$ & U$19.7$D & $Y_e = 0.48$ & $1.256$ & no collapse & $-$ & $0.05$ & $0.01-2.0$ & $1.0003-1.0178$ \\ $19.7$ & C$19.7$D & $Y_e = 0.48$ & $1.256$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0067-1.0125$ \\ $19.7$ & C$19.7$O & $s/k_\mathrm{B} = 4$ & $1.533$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0048-1.0160$ \\ $19.7$ & xC$19.7$O & $s/k_\mathrm{B} = 4$ & $1.533$ & $0.45$ & $1.5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $0.9977-1.0260$ \\ \hline $19.7$ & U$19.7$DM & $Y_e = 0.48$ & $1.256$ & no collapse & $-$ & $0.043$ & $0.01-2.0$ & $1.0018-1.0177$ \\ $19.7$ & C$19.7$DM & $Y_e = 0.48$ & $1.256$ & $0.45$ & $5\cdot10^7$ & $0.044$ & $0.01-2.0$ & $1.0016-1.0169$ \\ $19.7$ & C$19.7$OM & $s/k_\mathrm{B} = 4$ & $1.533$ & $0.45$ & $5\cdot10^7$ & $0.027$ & $0.01-2.0$ & $1.0000-1.0151$ \\ $19.7$ & U$19.7$DM\textquotesingle & $Y_e = 0.48$ & $1.256$ & no collapse & $-$ & $0.005$ & $0.01-2.0$ & $0.9889-1.0188$ \\ $19.7$ & C$19.7$OM\textquotesingle & $s/k_\mathrm{B} = 4$ & $1.533$ & $0.45$ & $5\cdot10^7$ & $0.005$ & $0.01-2.0$ & $1.0061-1.0394$ \\ \hline $19.7$ & C$19.7$OV & $s/k_\mathrm{B} = 4$ & $1.533$ & $0.45$ & $5\cdot10^7$ & $0.027$ & $0.01-0.5$ & $0.9982-1.0302$ \\ $19.7$ & xC$19.7$OV & $s/k_\mathrm{B} = 4$ & $1.533$ & $0.45$ & $1.5\cdot10^7$ & $0.027$ & $0.01-2.0$ & $1.0009-1.0400$ \\ \hline $21.0$ & U$21.0$D & $Y_e = 0.48$ & $1.272$ & no collapse & $-$ & $0.05$ & $0.01-2.0$ & $1.0185-1.0334$ \\ $21.0$ & C$21.0$D & $Y_e = 0.48$ & $1.272$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0161-1.0302$ \\ $21.0$ & C$21.0$O & $s/k_\mathrm{B} = 4$ & $1.484$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0160-1.0266$ \\ $21.0$ & xC$21.0$O & $s/k_\mathrm{B} = 4$ & $1.484$ & $0.45$ & $1.5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0210-1.0363$ \\ \hline
$21.0$ & U$21.0$DM & $Y_e = 0.48$ & $1.272$ & no collapse & $-$ & $0.042$ & $0.01-2.0$ & $1.0207-1.0334$ \\ $21.0$ & C$21.0$DM & $Y_e = 0.48$ & $1.272$ & $0.45$ & $5\cdot10^7$ & $0.041$ & $0.01-2.0$ & $1.0205-1.0319$ \\ $21.0$ & C$21.0$OM & $s/k_\mathrm{B} = 4$ & $1.484$ & $0.45$ & $5\cdot10^7$ & $0.068$ & $0.01-2.0$ & $1.0196-1.0247$ \\ $21.0$ & U$21.0$DM\textquotesingle & $Y_e = 0.48$ & $1.272$ & no collapse & $-$ & $0.005$ & $0.01-2.0$ & $1.0251-1.0545$ \\ $21.0$ & C$21.0$OM\textquotesingle & $s/k_\mathrm{B} = 4$ & $1.484$ & $0.45$ & $5\cdot10^7$ & $0.005$ & $0.01-2.0$ & $1.0067-1.0417$ \\ \hline $21.0$ & C$21.0$OV & $s/k_\mathrm{B} = 4$ & $1.484$ & $0.45$ & $5\cdot10^7$ & $0.068$ & $0.01-1.0$ & $1.0321-1.0503$ \\ $21.0$ & xC$21.0$OV & $s/k_\mathrm{B} = 4$ & $1.484$ & $0.45$ & $1.5\cdot10^7$ & $0.068$ & $0.01-2.0$ & $1.0101-1.0346$ \\ \hline $26.6$ & U$26.6$D & $Y_e = 0.48$ & $1.383$ & no collapse & $-$ & $0.05$ & $0.01-2.0$ & $1.0677-1.0811$ \\ $26.6$ & C$26.6$D & $Y_e = 0.48$ & $1.383$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0652-1.0784$ \\ $26.6$ & C$26.6$O & $s/k_\mathrm{B} = 4$ & $1.738$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0652-1.0775$ \\ $26.6$ & xC$26.6$O & $s/k_\mathrm{B} = 4$ & $1.738$ & $0.45$ & $1.5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0595-1.0904$ \\ \hline $26.6$ & U$26.6$DM & $Y_e = 0.48$ & $1.383$ & no collapse & $-$ & $0.05$ & $0.01-2.0$ & $1.0677-1.0811$ \\ $26.6$ & C$26.6$DM & $Y_e = 0.48$ & $1.383$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0652-1.0784$ \\ $26.6$ & C$26.6$OM & $s/k_\mathrm{B} = 4$ & $1.738$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0652-1.0775$ \\ $26.6$ & U$26.6$DM\textquotesingle & $Y_e = 0.48$ & $1.383$ & no collapse & $-$ & $0.005$ & $0.01-2.0$ & $1.0492-1.0992$ \\ $26.6$ & C$26.6$OM\textquotesingle & $s/k_\mathrm{B} = 4$ & $1.738$ & $0.45$ & $5\cdot10^7$ & $0.005$ & $0.01-2.0$ & $1.0562-1.1010$ \\ \hline $26.6$ & C$26.6$OV & $s/k_\mathrm{B} = 4$ & $1.738$ & $0.45$ & $5\cdot10^7$ & $0.05$ & $0.01-1.0$ & $1.0666-1.0855$ \\ $26.6$ & xC$26.6$OV & $s/k_\mathrm{B} = 4$ & $1.738$ & $0.45$ & $1.5\cdot10^7$ & $0.05$ & $0.01-2.0$ & $1.0738-1.0985$ \\ \hline \hline \end{tabular} \end{table*} \section{Thermal-bomb setups} \label{section:setups} In order to investigate the effects of the thermal-bomb parameterisation, we simulated models without a collapsing central core as well as models including the core collapse, varied the timescale $t_\mathrm{inj}$ of the energy deposition, changed the location of the inner grid boundary, and tested models with the volume $\Delta V$ for the energy deposition fixed in time instead of the mass layer $\Delta M$ being kept unchanged with time. Our naming convention for the CCSN models is the following: \begin{enumerate} \item U and C are used as first letters to discriminate between the uncollapsed and collapsed models. \item Numerical values refer to the ZAMS masses (in units of $M_\odot$) of the progenitor models. They are replaced by $M_*$ as a placeholder in generic model names. \item Letters D or O are appended to distinguish the CCSN models with deep inner grid boundary at the progenitor's location where $Y_e = 0.48$ from the models with the inner grid boundary farther out where $s/k_\mathrm{B} = 4$. 
\item Letters M or M\textquotesingle\ at the end of the model names denote two different types of test simulations in which the fixed mass value $\Delta M$ of the energy-injection layer is changed compared to the standard case with $\Delta M = 0.05\,M_\odot$ (see Section~\ref{sec:bombvariations}). \item Letters V instead of M at the end of the model names denote those simulations where the energy is injected into a fixed volume $\Delta V$ instead of a fixed mass shell $\Delta M$. \item Letters xC at the beginning of the model names indicate that the collapse of these models was prescribed to reach an ``extreme'' radius, smaller than in the C-models. \end{enumerate} A summary of all CCSN simulations studied for the four considered progenitor stars is given in Table~\ref{tab:explosions}. The explosion energy $E_\mathrm{exp}$ listed in this table is defined as the integral of the sum of the kinetic, internal, and gravitational energies for all unbound mass, i.e., for all mass shells for which this energy sum is positive at the end of our simulation runs. We exploded our progenitors with an explosion energy of approximately $E_{\rm exp}\approx 1\,\mathrm{B} = 10^{51}$\,erg, guided by the values of 1.01\,B for the $12.3\,M_\odot$ and $19.7\,M_\odot$ progenitors, 1.03\,B for the $21.0\,M_\odot$ star, and 1.07\,B for the $26.6\,M_\odot$ model.\footnote{These energies are slightly different in order to allow a comparison of the thermal-bomb models discussed here with existing neutrino-driven 1D explosion models from the study by \citet{2016ApJ...821...38S} in a follow-up project.} In all cases and setups, the energy was calibrated to the mentioned values with an accuracy of 3\%, which is a good compromise between the accuracy needed and the effort required by the iterative calibration procedure. The corresponding ranges of the explosion energies for each set of models with different energy-injection timescales are provided in the last column of Table~\ref{tab:explosions}. The slight differences in the explosion energies between the models of each set as well as between the different progenitors are of no relevance for the study reported here. In detail, the different setups and corresponding simulations are as follows. \begin{table*} \centering \caption{Parameters for our thermal-bomb models with fixed energy-deposition volume $\Delta V$ and models with variations of $\Delta M$ (except those with an extremely small value of $\Delta M = 0.005\,M_\odot$). $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$ are the inner and outer boundary radii of $\Delta V$, $\Delta M$ is the initial mass in this volume, and the ratio gives the value of $R_\mathrm{OBED}/R_\mathrm{IBED}$. Since for each setup the $26.6\,M_\odot$ model, uncollapsed or collapsed, was used to calculate the radius ratio, $\Delta M=0.05\,M_\odot$ in all of the cases for this progenitor.
} \label{tab:dR} \begin{tabular}{l|ccc|ccc|ccc|ccc} \hline \hline \multirow{2}{*}{\hspace{-0.2cm}$M_{\rm ZAMS}$} & \multicolumn{3}{c|}{U$M_*$DM} & \multicolumn{3}{c|}{C$M_*$DM} & \multicolumn{3}{c|}{C$M_*$OM, C$M_*$OV} & \multicolumn{3}{c}{xC$M_*$OV}\\ & $\Delta M$ & $R_\mathrm{IBED}$ & $R_\mathrm{OBED}$ & $\Delta M$ & $R_\mathrm{IBED}$ & $R_\mathrm{OBED}$ & $\Delta M$ & $R_\mathrm{IBED}$ & $R_\mathrm{OBED}$ & $\Delta M$ & $R_\mathrm{IBED}$ & $R_\mathrm{OBED}$\\ $[M_\odot]$ & $[M_\odot]$ & {[}cm{]} & {[}cm{]} & $[M_\odot]$ & {[}cm{]} & {[}cm{]} & $[M_\odot]$ & {[}cm{]} & {[}cm{]} & $[M_\odot]$ & {[}cm{]} & {[}cm{]} \\ \hline $19.7$ & $0.043$ & $1.066\cdot 10^8$ & $1.15\cdot 10^8$ & $0.044$ & $5\cdot 10^7$ & $5.4\cdot 10^7$ & $0.027$ & $5\cdot 10^7$ & $17.6\cdot 10^7$ & $0.027$ & $1.5\cdot 10^7$ & $15.88\cdot 10^7$ \\ $21.0$ & $0.042$ & $1.058\cdot 10^8$ & $1.14\cdot 10^8$ & $0.041$ & $5\cdot 10^7$ & $5.4\cdot 10^7$ & $0.068$ & $5\cdot 10^7$ & $17.6\cdot 10^7$ & $0.068$ & $1.5\cdot 10^7$ & $15.88\cdot 10^7$ \\ $26.6$ & $0.050$ & $1.278 \cdot 10^8$ & $1.38\cdot 10^8$ & $0.050$ & $5\cdot 10^7$ & $5.4\cdot 10^7$ & $0.050$ & $5\cdot 10^7$ & $17.6\cdot 10^7$ & $0.050$ & $1.5\cdot 10^7$ & $15.88\cdot 10^7$ \\ \hline ratio & \multicolumn{3}{c|}{$1.080$} & \multicolumn{3}{c|}{$1.081$} & \multicolumn{3}{c|}{$3.519$} & \multicolumn{3}{c}{$10.587$} \\ \hline \end{tabular} \end{table*} \subsection{Models for comparison with SM19} \label{sec:compSM19models} We started our investigation with a setup that was guided by models discussed in SM19, i.e., the CCSN simulations did not include any collapse of the central core of the progenitors. These U-models were intended to permit a comparison with the results presented by SM19. In all of the discussed U-models the inner boundary was placed at the location where $Y_e=0.48$, and in our default setup the explosion energy was injected into a fixed mass layer with $\Delta M =0.05\,M_\odot$, which was the same in all CCSN models for the set of progenitors. The inner boundary of this energy-deposition layer (IBED) was therefore chosen to be identical to the inner grid boundary. The entire mass exterior to the IBED, i.e., including the matter in the energy-deposition layer between the IBED and the outer boundary of the energy-deposition layer (OBED), was considered to be ejected, provided it became gravitationally unbound by the energy injection. Note that in models with fixed energy-deposition layer $\Delta M$, the outer radius of this shell, $R_\mathrm{OBED}$, moves outward as the heated mass $\Delta M$ expands, whereas the inner radius, $R_\mathrm{IBED}$, is set to coincide with the inner grid boundary $R_\mathrm{ib}$ and does not change with time. The setup chosen in this way differs in two technical aspects from the choices made in SM19. First, SM19 reported that they injected the thermal-bomb energy into a fixed mass of 0.005\,$M_\odot$ (corresponding to the innermost 20 zones of their 1D Lagrangian hydrodynamics simulations). In contrast, we adopted $\Delta M = 0.05\,M_\odot$ as our default value. This larger mass appears more appropriate to us, at least in the case of the more realistic collapsed models and in view of the neutrino-driven mechanism, where neutrinos transfer energy to typically several $0.01\,M_\odot$ to more than $0.1\,M_\odot$ of circum-neutron-star matter. Second, SM19 did not count the mass in the heated layer as ejecta, which means that they considered only the entire mass above the energy-deposition layer, i.e., exterior to the OBED, as ejecta.
We did not adopt this convention, because we chose a 10 times larger mass for $\Delta M$ than SM19. In addition, again in view of the neutrino-driven mechanism, we do not see any reason why heated matter that can also be expelled should not be added to the nucleosynthesis-relevant CCSN ejecta. Moreover, we performed test calculations with $\Delta M = 0.005\,M_\odot$ and found no significant differences in the $^{56}$Ni yields, at least not in the case of uncollapsed models that served for a direct comparison with SM19. (This will be discussed in Section~\ref{sec:massvariations}.) The timescale of the energy deposition used in Equation~(\ref{eq:edotm}) was varied from 0.01\,s to 2\,s, using the following values: \begin{equation} t_\mathrm{inj} = 0.01,\ 0.05,\ 0.2,\ 0.5,\ 1.0,\ 2.0\ \mathrm{s}\,. \label{eq:timesinj} \end{equation} We thus tested the influence of different durations of the energy injection on the explosion dynamics and $^{56}$Ni production. Although our progenitors are different from those used by SM19 and our setup for the CCSN simulations differs in its details, the modelling approaches are sufficiently similar to permit us to reproduce the basic findings reported by SM19. In Table~\ref{tab:explosions} the corresponding models are denoted by U$M_*$D, where $M_*$ serves as a placeholder for the ZAMS mass of the model. While our standard setup uses $\Delta M =0.05\,M_\odot$, we also performed test runs with $\Delta M \approx 0.04\,M_\odot$ for the U-setup. These models are denoted by U$M_*$DM in Table~\ref{tab:explosions}. We also ran test cases with the SM19 value of $\Delta M =0.005\,M_\odot$; the corresponding models are named U$M_*$DM\textquotesingle\ in Table~\ref{tab:explosions}, but they are not prominently discussed in the following, because such a small mass in the energy-deposition layer does not appear to be realistic for common CCSNe. Importantly, however, all of these changes of $\Delta M$ led to only secondary, never dominant, differences in the produced amount of $^{56}$Ni compared to the changes connected with introducing a collapse phase or shifting the inner grid boundary (see Section~\ref{sec:bombvariations}). We did not consider any cases U$M_*$O, because moving the inner grid boundary farther out will lead to lower densities in the ejecta (Figure~\ref{fig:psn_closer}). This will significantly reduce the nucleosynthesized amount of $^{56}$Ni in this setup, and in particular for long $t_\mathrm{inj}$ it will lead to even more severe underproduction of $^{56}$Ni compared to the yields inferred from observations of CCSNe with energies around $10^{51}$\,erg (see Section~\ref{sec:SM19results}). \subsection{Variations of thermal-bomb setups} \label{sec:bombvariations} Instead of releasing thermal energy in the uncollapsed progenitor as assumed by SM19, we extended our setup as a next step by forcing the progenitor's core to contract before depositing the energy. Adding such a collapse phase will change the dynamics of the explosion, even with the same explosion energy and the same location of the inner boundary. To this end the inner grid boundary was moved inward for a time interval $t_\mathrm{coll}$, thus mimicking the collapse phase that precedes the development of the explosion.
The time-dependent velocity for contracting the inner boundary was prescribed as in \citet{1995ApJS..101..181W,2002RvMP...74.1015W,2007PhR...442..269W} (who applied this prescription within the framework of the classical piston method): \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}(t) = v_0 -a_0t \quad \mathrm{for} \quad t<t_\mathrm{coll} \,, \label{eq:collapse} \end{equation} where $v_0 < 0$ is the initial velocity of the inner boundary (following the infall of the progenitor model at the onset of its core collapse), and $a_0=2(r_0-r_\mathrm{min}+v_0t_\mathrm{coll})/t_\mathrm{coll}^2$ is a constant acceleration calculated in order to reach the minimum radius $r_\mathrm{min}$ after the collapse time $t_\mathrm{coll}$, with $r_0$ being the initial radius of the inner boundary (a brief illustration is given below). After this phase, the boundary contraction was stopped, matter began to pile up around the grid boundary, and a shock wave formed at the interface to the still supersonically infalling overlying shells. Concomitantly, the deposition of internal energy by our thermal bomb was started. Equation~(\ref{eq:collapse}) defines the inward movement of the constant Lagrangian mass shell corresponding to the closed inner grid boundary. The collapse is controlled by the parameters $t_\mathrm{coll}$ and $r_\mathrm{min}$, whereas the explosion phase is controlled by the thermal-bomb parameters $E_\mathrm{inj}$, $\Delta M$ (or $\Delta V$), and $t_\mathrm{inj}$ (Equations~\ref{eq:edotm} and \ref{eq:edotv}). Again following the literature mentioned above, we adopted for our default collapse simulations $t_\mathrm{coll}=0.45$\,s and the minimum radius $r_\mathrm{min} = 5\cdot 10^7$\,cm. In Table~\ref{tab:explosions} the models with this collapse setup and the deep inner boundary are denoted by C$M_*$D. In a variation of the setup for the C-models, we relocated the inner grid boundary outward to the base of the oxygen shell in the progenitor, i.e., to the radial position where $s/k_\mathrm{B} = 4$, with the goal of studying the influence on the $^{56}$Ni production. These models are denoted by C$M_*$O in Table~\ref{tab:explosions}. A variant of these models, named xC$M_*$O, considered the collapse to proceed to a smaller radius of $r_\mathrm{min} = 1.5\cdot 10^7$\,cm, using the same value of $t_\mathrm{coll}=0.45$\,s for the collapse time. As in the U-models, the inner boundary of the grid and the inner boundary of the energy-deposition layer (IBED) were chosen to coincide in all simulations. In both model variants, U-models as well as C-models, our standard runs were performed with the energy being dumped into a fixed mass layer of $\Delta M = 0.05\,M_\odot$. For the C-models we also simulated some test cases with different values of $\Delta M$ between about 0.03\,$M_\odot$ and roughly 0.07\,$M_\odot$. The corresponding models are denoted by C$M_*$DM or C$M_*$OM in Table~\ref{tab:explosions}. We also tested $\Delta M = 0.005\,M_\odot$ in simulations with collapse and the IBED at $s/k_\mathrm{B} = 4$, listed as models C$M_*$OM\textquotesingle\ in Table~\ref{tab:explosions}. These variations turned out to have no relevant influence on the $^{56}$Ni yields in the D-boundary cases, in agreement with what we found for the U-models. However, the change of $\Delta M$ caused some interesting, though secondary, differences in those cases that employed the O-boundary. We will briefly discuss these results in Section~\ref{sec:massvariations}.
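As announced above, a brief illustration may help to make the collapse prescription concrete. Integrating Equation~(\ref{eq:collapse}) gives
\begin{equation*}
r(t) = r_0 + v_0\,t - \frac{1}{2}\,a_0\,t^2 \,,
\end{equation*}
and the quoted expression for $a_0$ follows from demanding $r(t_\mathrm{coll}) = r_\mathrm{min}$. As a rough estimate, neglecting the initial infall velocity (i.e., setting $v_0 \approx 0$): for the deep boundary of the $19.7\,M_\odot$ star, $r_0 \approx 1.07\times10^{8}$\,cm (Table~\ref{tab:dR}), so that with $r_\mathrm{min} = 5\times10^{7}$\,cm and $t_\mathrm{coll}=0.45$\,s one obtains $a_0 \approx 2\,(r_0-r_\mathrm{min})/t_\mathrm{coll}^2 \approx 5.6\times10^{8}\,\mathrm{cm\,s^{-2}}$, and the boundary reaches an infall speed of $|v(t_\mathrm{coll})| \approx a_0 t_\mathrm{coll} \approx 2.5\times10^{8}\,\mathrm{cm\,s^{-1}}$ at the end of the contraction phase.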
In yet another variation we investigated cases for our more realistic setup of C-models with O-boundary, where the volume of the energy deposition, $\Delta V$, was fixed instead of the mass layer $\Delta M$. Such a change might affect the $^{56}$Ni production in CCSN models with a steep density profile near the inner grid boundary. This time-independent volume of the energy deposition was determined for the different progenitors by a simple condition, connecting it to the initial values of the outer boundary radius $R_\mathrm{OBED}$ and of the inner boundary radius $R_\mathrm{IBED} = R_\mathrm{ib}$ of our standard setup with $\Delta M = 0.05\,M_\odot$ in the 26.6\,$M_\odot$ CCSN models. Specifically, the volume $\Delta V$, which is bounded by $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$, was defined by the requirement that the ratio of these two radii should have the same value as in the $26.6\,M_\odot$ model in all of the CCSN runs (i.e., for all progenitors) of each considered setup: \begin{equation} \frac{R_\mathrm{OBED}}{R_\mathrm{IBED}}(26.6M_\odot)= \frac{R_\mathrm{OBED}}{R_\mathrm{IBED}}(21.0M_\odot)=\frac{R_\mathrm{OBED}}{R_\mathrm{IBED}}(19.7M_\odot)\,. \label{eq:rratio} \end{equation} This condition means that the inner radius of the deposition region, $R_\mathrm{IBED}$, was pre-defined by $R_\mathrm{ib}$ in the O-cases, and the outer radii $R_\mathrm{OBED}(21.0M_\odot)$ and $R_\mathrm{OBED}(19.7M_\odot)$ were calculated from the equation above. (For the C$M_*$O setups, for example, this fixed ratio is $R_\mathrm{OBED}/R_\mathrm{IBED} = 1.76\times10^{8}\,\mathrm{cm}/(5\times10^{7}\,\mathrm{cm}) \approx 3.5$; see Table~\ref{tab:dR}.) The chosen condition of Equation~(\ref{eq:rratio}) was also applied more generally for defining variations of $\Delta M$ (or $\Delta V$) in collapsed or uncollapsed models with deep or outer location of $R_\mathrm{ib}$ (Table~\ref{tab:dR}). Such a procedure should ensure that the distance between $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$ adjusts to the size of $R_\mathrm{ib}$ and thus accounts for the higher density in its vicinity, instead of being rigid and insensitive to the progenitors' radial structures. The models with fixed energy-deposition volume $\Delta V$ thus determined are denoted by C$M_*$OV or xC$M_*$OV in Table~\ref{tab:explosions} for standard and extreme collapse cases, respectively, and the values of $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$ in our different model variations are listed in Table~\ref{tab:dR}. The latter table also provides numbers for the initial masses $\Delta M$ that correspond to the volumes bounded by $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$. Note that Equation~(\ref{eq:rratio}) implies that $\Delta M$ is still 0.05\,$M_\odot$ for the 26.6\,$M_\odot$ models, but the initial masses in the heating layers are not the same in the runs with fixed $\Delta V$ for the other progenitors. Of course, for fixed volume $\Delta V$, the radii $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$ do not evolve with time, but the mass $\Delta M$ in this heated radial shell decreases with time as the heated gas expands outward. Table~\ref{tab:dR} also provides the $\Delta M$ values that were obtained via Equation~(\ref{eq:rratio}) and apply for our tests performed with variations of the fixed heated mass layer $\Delta M$ in models U$M_*$DM (see Section~\ref{sec:compSM19models}) as well as models C$M_*$DM and C$M_*$OM mentioned above. These subsets of models are interesting despite their small differences in $\Delta M$ compared to our default choice of $\Delta M = 0.05\,M_\odot$, because in the C-cases the initial volumes of the heated masses are the same for all progenitors instead of being different from case to case.
Thus, these model variations check another aspect of potential influence on the nucleosynthesis conditions in the innermost ejecta. \begin{figure} \centering \includegraphics[width=\columnwidth]{ni_uncoll_deep11} \includegraphics[width=\columnwidth]{ni_coll_deep10} \includegraphics[width=\columnwidth]{ni_coll_si11} \caption{$^{56}$Ni yields as functions of energy-injection timescale for uncollapsed CCSN models (top panel) and collapsed models (middle panel) with deep inner grid boundary, and collapsed CCSN models with the inner grid boundary shifted farther out (bottom panel). The different colors correspond to the different progenitors as labelled in the top panel. Solid lines belong to our standard choice of $\Delta M = 0.05\,M_\odot$ for the fixed mass in the energy-deposition layer and dashed lines refer to varied mass values $\Delta \widetilde{M}$ (models with unprimed M in their names; see Table~\ref{tab:explosions}). Note that in the top and middle panels the solid and dashed lines overlap and are almost completely indistinguishable. In all panels the blue solid and dashed lines fall on top of each other by definition. The light-colored lines (solid and dashed) in the bottom panel show the $^{56}$Ni yields when the mass in the energy-injection layer is excluded from the ejecta instead of adding unbound matter of this layer to the ejecta. The horizontal grey dotted line indicates the $^{56}$Ni yield of 0.07\,$M_\odot$ for a $\sim$\,$10^{51}$\,erg explosion, e.g., SN~1987A \citep{1989ARA&A..27..629A}.} \label{fig:mni0} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{peak_temp_uncoll_deep_21_4} \includegraphics[width=\columnwidth]{peak_temp_coll_deep_21_4} \includegraphics[width=\columnwidth]{peak_temp_coll_si_21_4} \caption{Peak temperatures as functions of enclosed mass for the CCSN runs with the 21\,$M_\odot$ progenitor and different energy-injection timescales for the same modelling setups shown in Figure~\ref{fig:mni0}: uncollapsed (top), collapsed (middle), and collapsed with inner grid boundary shifted farther out (bottom). Different intensities of grey shading indicate different regimes of explosive nucleosynthesis as labelled. Note that the peak temperatures are displayed only for the runs with our standard value of $\Delta M = 0.05\,M_\odot$ for the fixed mass in the energy-injection layer, because the differences compared to the other choices of $\Delta M$ are effectively indistinguishable.} \label{fig:peakTtinj} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{shells_uncoll_deep8} \includegraphics[width=\columnwidth]{shells_coll_deep7} \includegraphics[width=\columnwidth]{shells_coll_si6} \vspace{-0.5cm} \caption{Radius evolution of Lagrangian mass shells with time for the CCSN runs of the 21\,$M_\odot$ progenitor with standard value of $\Delta M = 0.05\,M_\odot$ for the fixed mass in the energy-injection layer and a representative energy-deposition timescale of 1.0\,s: uncollapsed (top) and collapsed (middle) with deep inner grid boundary, and collapsed with inner grid boundary shifted farther out (bottom). The thin black solid lines are the mass shells, spaced in steps of 0.025\,$M_\odot$, the blue line marks the shock radius, the red line indicates the radius of the outer edge of the energy-injection layer ($R_\mathrm{OBED}$), and the yellow line the radius of the inner grid boundary, $R_\mathrm{ib}$, which is chosen as the inner edge of the energy-injection layer ($R_\mathrm{IBED}$) when the thermal bomb is switched on. 
Crosses indicate the moments when the peak temperature of each mass shell is reached; their colors correspond to temperature values as given by the color bar. Vertical lines mark the beginning and the end of the energy deposition.} \label{fig:second_wave} \end{figure} \section{Results of thermal-bomb simulations} \label{section:results} In this section we present the results of our study, focusing on the mass of $^{56}$Ni produced in the ejecta as computed in a post-processing step with the 262-isotope version of SkyNet (see Section~\ref{section:reactions}). These yields were determined after 10\,s of simulated evolution and, in contrast to SM19, we usually (unless explicitly stated otherwise) also counted unbound matter contained in the energy-deposition layer as ejecta. We stress, however, that for models with the deep inner boundary $R_\mathrm{ib} = R_\mathrm{IBED}$ at $Y_e = 0.48$, there is no relevant difference in the $^{56}$Ni yields when including or excluding the mass in the heating layer. The reason is seen in Figure~\ref{fig:psn_closer}, upper and lower right panels: Since $Y_e < 0.485$ in the innermost $0.05\,M_\odot$ just outside of $R_\mathrm{ib}$, i.e., in the mass between $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$, the $^{56}$Ni production is negligibly small in the energy-deposition layer. In Section~\ref{sec:SM19results} we will first report on our models of the U-setup in comparison to SM19. Then, in Section~\ref{sec:collapsedmodels}, we will discuss the differences when our models included an initial collapse before the thermal bomb was switched on. In Section~\ref{sec:shiftedIBED} we will describe the influence of shifting the inner grid boundary, $R_\mathrm{ib} = R_\mathrm{IBED}$, from the deep default location at $Y_e = 0.48$ to the outer location at the base of the oxygen shell where $s/k_\mathrm{B} = 4$. In Section~\ref{sec:massvariations} we will briefly summarize the consequences of changing the fixed mass $\Delta M$ of the energy-deposition layer, in Section~\ref{sec:fixedvolume} we will discuss the influence of changing from a fixed mass $\Delta M$ to a fixed volume $\Delta V$ of the energy-injection layer, and in Section~\ref{sec:minradius} we will finally present results for different minimum radii prescribed for the collapse phase. \subsection{Uncollapsed models compared to SM19} \label{sec:SM19results} When we consider uncollapsed models with deep inner grid boundary and the thermal-bomb energy injection into a fixed mass $\Delta M$ (the U$M_*$D simulations), following SM19, our results confirm the findings of this previous study (Figure~\ref{fig:mni0}, top panel): for all four considered progenitors, the explosion runs exhibit a clear anti-correlation between the amount of $^{56}$Ni produced and the timescale of the energy deposition; slower energy injection systematically reduces the $^{56}$Ni production. Our set of CCSN models exhibits the same qualitative behavior as visible in Figure~7 (left panel) of SM19, although there are significant quantitative differences. These are most likely connected to the different core structures of the progenitor models, because the mentioned technical differences in the explosion modelling (i.e., the choice of the value of $\Delta M$ for the energy-injection layer and the inclusion of the heated mass in the ejecta) turned out to have no significant impact on the $^{56}$Ni yields in the uncollapsed models with deep inner boundary, see Sections~\ref{sec:shiftedIBED} and \ref{sec:massvariations}.
For example, we investigated the effects of changing $\Delta M$ by several tens of per cent around our standard value (varying it between 0.027\,$M_\odot$ and 0.068\,$M_\odot$) and also tested the extremely small value of $\Delta M = 0.005\,M_\odot$, but found no relevant differences in the $^{56}$Ni yields compared to our U$M_*$D simulations (a detailed discussion of this aspect is provided in Section~\ref{sec:massvariations}). The reason for the anti-correlation of $^{56}$Ni yield and energy-injection timescale can be inferred from the top panel of Figure~\ref{fig:peakTtinj}, which displays the peak temperatures as functions of enclosed mass for all investigated values of $t_\mathrm{inj}$ in the 21\,$M_\odot$ CCSN runs. Efficient $^{56}$Ni production requires the temperature in the expanding ejecta to reach the regime of NSE or complete silicon burning. Moreover, $Y_e$ has to exceed $\sim$0.48 considerably, which is obvious from the upper and lower right panels of Figure~\ref{fig:psn_closer}, where $^{56}$Ni mass fractions above 0.1 occur only in regions where $Y_e \gtrsim 0.485$. Only when these requirements are simultaneously fulfilled can freeze-out from NSE or explosive nuclear burning contribute major fractions to the $^{56}$Ni yield. The top panel of Figure~\ref{fig:peakTtinj} shows that for longer energy-injection times not only the maximum value of the peak temperature that can be reached in the heated matter drops, but also the total mass that is heated to the threshold temperature of complete Si burning (about 5\,GK) decreases. Therefore less $^{56}$Ni is nucleosynthesized when the energy injection of the thermal bomb (for a given value of the final explosion energy) is stretched over a longer time interval. This behavior is a consequence of the fact that the heated matter begins to expand as soon as the thermal bomb is switched on (see the upper panel of Figure~\ref{fig:second_wave} for the uncollapsed 21.0\,$M_\odot$ model with $t_\mathrm{inj}=1.0$\,s). When the energy injection is quasi-instantaneous, i.e., short compared to the hydrodynamical timescale for the expansion,\footnote{The hydrodynamical timescale, by its order of magnitude, is given by the radial extension of the bomb-heated layer divided by the average sound speed in this layer. For the uncollapsed models it is roughly $\Delta R/\bar{c}_\mathrm{s}\sim 10^7\,\mathrm{cm}/(10^9\,\mathrm{cm\,s}^{-1}) = 10^{-2}$\,s. Since the gravitational binding energy of the uncollapsed stellar structure at $r > R_\mathrm{ib}$ is low, this means that the outward expansion of the thermal-bomb-heated layer gains momentum within several tens of milliseconds at the longest.} the thermal energy deposition leads to an abrupt and strong increase of the temperature before the matter can react by expanding. If, in contrast, the energy release by the thermal bomb for the same final explosion energy is spread over a long time interval, i.e., longer than the hydrodynamical timescale, the expansion occurring during this energy injection has two effects that reduce the temperature increase, both in its maximum peak value and in the volume that gets heated to high temperatures: first, cooling by expansion ($p\,\mathrm{d}V$) work limits the temperature rise; second, the thermal energy dumped by the bomb is distributed over a wider volume, because the fixed mass $\Delta M$, into which the energy is injected, expands continuously.
This is visible in the mass-shell plots of Figure~\ref{fig:second_wave} through the outward motion of the red line, which corresponds to the outer boundary radius, $R_\mathrm{OBED}$, of the energy-deposition layer. Because the gravitational binding energy of the uncollapsed stellar profile is comparatively low, the expansion of the energy-injection layer sets in essentially immediately when the thermal bomb starts releasing its energy at $t = 0$. This holds true even if the specific energy-deposition rate $\dot e_\mathrm{inj, M}$ is relatively low because of a long injection timescale of $t_\mathrm{inj} = 1.0$\,s, for example (top panel of Figure~\ref{fig:second_wave}). Comparing the results for the four progenitors in the top panel of Figure~\ref{fig:mni0}, we notice three different aspects: (i) The absolute amount of the produced $^{56}$Ni and its steep variation with $t_\mathrm{inj}$ are quite similar for the 19.7\,$M_\odot$ and 21\,$M_\odot$ progenitors; (ii) these progenitors yield considerably less $^{56}$Ni for all energy-injection timescales than the 26.6\,$M_\odot$ case; (iii) the 12.3\,$M_\odot$ progenitor exhibits the weakest variation of the ejected $^{56}$Ni mass with $t_\mathrm{inj}$ among all of the four considered stars. These differences can be traced back to the progenitor structures plotted in Figure~\ref{fig:psn_closer} and to the peak temperature profiles in the ejecta caused by the thermal bomb (see top panel in Figure~\ref{fig:peakTC}). Because of the shallow density profile at $r > R_\mathrm{ib}$ in the 26.6\,$M_\odot$ progenitor, the outgoing shock wave that is generated by a thermal bomb with a final explosion energy of $10^{51}$\,erg heats much more mass to the temperatures required for strong $^{56}$Ni production. The $^{56}$Ni nucleosynthesis is actually hampered in the 26.6\,$M_\odot$ progenitor by the fact that its innermost layer of $\sim$0.15\,$M_\odot$ possesses $Y_e$ values below 0.485 (Figure~\ref{fig:psn_closer}, upper right panel). Under such conditions the mass fraction of $^{56}$Ni does not exceed a few percent, see Figure~\ref{fig:psn_closer}, lower right panel, and Figure~\ref{fig:xni}, top panel, for $t_\mathrm{inj} = 0.01$\,s and $t_\mathrm{inj} = 1.0$\,s, respectively. Nevertheless, the 26.6\,$M_\odot$ runs produce large amounts of $^{56}$Ni because considerable abundances of this isotope can be nucleosynthesized even beyond an enclosed mass of $\sim$1.8\,$M_\odot$, in particular for short energy-injection times. In contrast, the 12.3\,$M_\odot$ progenitor possesses only a narrow layer of less than $\sim$0.07\,$M_\odot$ with $Y_e\lesssim 0.485$ around $R_\mathrm{ib}$. This enables a relatively abundant production of $^{56}$Ni in the thermal-bomb models with this star for all energy-injection times and in spite of the steeper density profile compared to the 26.6\,$M_\odot$ progenitor. Finally, the two stellar models with 19.7\,$M_\odot$ and 21\,$M_\odot$ exhibit very similar $Y_e$ profiles and also their density profiles are close to each other up to the base of the oxygen shell, which is at roughly 1.48\,$M_\odot$ in the 21\,$M_\odot$ model, but at about 1.53\,$M_\odot$ in the 19.7\,$M_\odot$ case (see Table~\ref{tab:psn}).
This difference, however, is located quite far away from the inner grid boundaries (which are at 1.256\,$M_\odot$ and 1.272\,$M_\odot$ for 19.7\,$M_\odot$ and 21\,$M_\odot$, respectively; see Table~\ref{tab:explosions}) and its consequence (i.e., higher $^{56}$Ni mass fractions up to larger mass coordinates in the 21.0\,$M_\odot$ runs; Figure~\ref{fig:xni}) is partly compensated by more efficient $^{56}$Ni production in the layers just exterior to the energy-injection domain in the 19.7\,$M_\odot$ runs (Figure~\ref{fig:psn_closer}, lower right panel, and Figure~\ref{fig:xni}, top panel). The overall effect is that both progenitors resemble each other closely in their $^{56}$Ni outputs for all values of $t_\mathrm{inj}$, at least when uncollapsed thermal-bomb models with deep inner boundary are considered. In the following we will not discuss the 12.3\,$M_\odot$ runs any further, because they exhibit the weakest variation of the produced $^{56}$Ni mass with $t_\mathrm{inj}$, whereas our main focus is on how this variation is affected when an initial collapse phase is included in the thermal-bomb treatment. \begin{figure} \centering \includegraphics[width=\columnwidth]{peak_temp_uncoll_deep4} \includegraphics[width=\columnwidth]{peak_temp_coll_deep4} \includegraphics[width=\columnwidth]{peak_temp_coll_si4} \caption{Peak temperatures as functions of enclosed mass for the CCSN models of the different progenitors, using the standard value of $\Delta M = 0.05\,M_\odot$ for the fixed mass in the energy-injection layer and a representative energy-deposition timescale of 1.0\,s: uncollapsed (top) and collapsed (middle) with deep inner grid boundary, and collapsed with inner grid boundary shifted farther out (bottom). Grey shading again indicates different regimes of explosive nucleosynthesis as in Figure~\ref{fig:peakTtinj}. Note that the peak temperatures are displayed only for the runs with our default choice of $\Delta M = 0.05\,M_\odot$, because the differences compared to the other choices of $\Delta M$ are effectively indistinguishable.} \label{fig:peakTC} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{xni_uncoll_deep5} \includegraphics[width=\columnwidth]{xni_coll_deep5} \includegraphics[width=\columnwidth]{xni_coll_si6} \caption{$^{56}$Ni mass fractions as functions of enclosed mass as produced in the CCSN models shown in Figure~\ref{fig:peakTC}. Here, we plot the results for our standard value of $\Delta M = 0.05\,M_\odot$ for the fixed mass in the energy-injection layer (solid lines) and for the cases with varied mass values $\Delta \widetilde{M}$ (models with unprimed M in their names, see Table~\ref{tab:explosions}; dashed lines). Note that the solid and dashed lines mostly overlap and therefore are hardly distinguishable. Moreover, we highlight the contribution to the $^{56}$Ni production from the mass in the energy-injection layer, which is included in our definition of the ejecta (indicated by light-colored parts of the solid and dashed lines).} \label{fig:xni} \end{figure} \subsection{Collapsed models} \label{sec:collapsedmodels} The picture changes radically when a collapse phase is introduced into the explosion modelling before the energy injection by the thermal bomb is switched on. Figure~\ref{fig:mni0}, middle panel, displays the $^{56}$Ni yields for the corresponding models with deep inner boundary (our C$M_*$D simulations).
For short energy-injection timescales ($t_\mathrm{inj}\lesssim 0.05$\,s) we find amounts of $^{56}$Ni very similar to those obtained in the uncollapsed models, but now also the explosion simulations with longer $t_\mathrm{inj}$ are efficient in producing $^{56}$Ni. In fact, there is little variation of the $^{56}$Ni yields when $t_\mathrm{inj}$ increases from 0.01\,s to 2\,s. The anti-correlation of the $^{56}$Ni production with $t_\mathrm{inj}$ observed for the U$M_*$D models is gone and instead the C$M_*$D models exhibit a $^{56}$Ni nucleosynthesis that varies much less with the duration of the energy release by the thermal bomb. Inspecting the peak temperature profiles versus enclosed mass (Figure~\ref{fig:peakTtinj}, middle panel), one recognizes three main differences compared to the uncollapsed cases in the top panel of this figure. First, the maximum peak temperatures for all energy-injection times reach higher values in the C-models and extend well into the NSE regime. Second, the peak temperature profiles are more similar to each other than in the U-models when $t_\mathrm{inj}$ is varied. Third, this implies that for all values of $t_\mathrm{inj}$ a wider mass layer is heated to the temperatures required for complete Si burning or NSE. These differences in the collapsed models compared to the uncollapsed ones have several reasons, whose relative importance varies with the energy-injection timescale. Because of the compression heating during the collapse, the temperatures at the onset of the energy injection by the thermal bomb are already higher. A more important effect, however, is connected to the fact that the shock expands into stellar layers that have collapsed for $\sim$0.5\,s or longer and over radial distances between several hundred and more than a thousand kilometres. The growing kinetic energy of the infalling gas is converted to thermal energy in the shock. Moreover, the energy input into the collapsed mass layer of $\Delta M = 0.05\,M_\odot$ means that the energy is injected into a much smaller volume than in the uncollapsed models (Table~\ref{tab:dR}), implying considerably higher heating rates per unit volume. For the uncollapsed 26.6\,$M_\odot$ model with deep inner boundary, for example, the initial radii bounding the heating layer are $R_\mathrm{IBED}\approx 1280$\,km and $R_\mathrm{OBED}\approx 1380$\,km, i.e., the layer has a width of $\sim$100\,km, whereas in the corresponding collapsed model the initial radial extension of the heating layer is only 40\,km between 500\,km and 540\,km (see Table~\ref{tab:dR}). In addition, the expansion of the heated matter sets in much more slowly in the collapsed models, where the energy-injection layer sits deeper in the gravitational potential and the overlying, infalling mass shells provide external pressure, hampering the outward acceleration. One can clearly see this effect when comparing the top and middle panels of Figure~\ref{fig:second_wave}. This inertia of the matter in the wake of the outgoing shock permits the energy injection to boost the temperature and thus the postshock pressure to high values even when the energy-deposition timescales are long. As a consequence, the shock is pushed strongly into the infalling, overlying shells, and the peak-temperature profiles (Figure~\ref{fig:peakTtinj}) as well as the mass that is heated sufficiently to enable abundant $^{56}$Ni production become quite similar for different $t_\mathrm{inj}$.
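The volume effect just mentioned can be quantified with a simple estimate based on the radii quoted above (see also Table~\ref{tab:dR}): for the $26.6\,M_\odot$ models with deep boundary, the initially heated volume scales as $R_\mathrm{OBED}^3 - R_\mathrm{IBED}^3$, so the ratio between the uncollapsed (U) and collapsed (C) cases is
\begin{equation*}
\frac{\Delta V_\mathrm{U}}{\Delta V_\mathrm{C}} = \frac{R_\mathrm{OBED,U}^3 - R_\mathrm{IBED,U}^3}{R_\mathrm{OBED,C}^3 - R_\mathrm{IBED,C}^3} \approx \frac{1.38^3 - 1.28^3}{0.54^3 - 0.50^3} \approx 16
\end{equation*}
(radii in units of $10^8$\,cm). For the same $E_\mathrm{inj}$ and $t_\mathrm{inj}$, the initial heating rate per unit volume is therefore higher by more than an order of magnitude in the collapsed model.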
Again, as for the U-models, the thermal-bomb runs for the collapsed 26.6\,$M_\odot$ models lead to the highest yields when the final explosion energy is fixed to $\sim$\,$10^{51}$\,erg for all progenitors. Once again, this is connected to the shallower pre-collapse density profile of the 26.6\,$M_\odot$ star, for which reason more mass is heated to $^{56}$Ni-production temperatures (Figure~\ref{fig:peakTC}, middle panel). Correspondingly, the mass layer with a high mass fraction of this isotope is much more extended in the C26.6 models (see Figure~\ref{fig:xni}, middle panel). More energy input by the thermal bomb is needed and, accordingly, a stronger shock wave is created to lift the ejecta out of the deeper gravitational potential of the central mass of the new-born neutron star ($M_\mathrm{ib} = 1.383\,M_\odot$ in model C26.6D compared to 1.256\,$M_\odot$ and 1.272\,$M_\odot$ in models C19.7D and C21.0D, respectively).

The $^{56}$Ni yields of the 19.7\,$M_\odot$ and 21.0\,$M_\odot$ models are somewhat more different in the simulations with initial collapse than in the runs without collapse, especially for energy-injection times shorter than 0.5\,s (Figure~\ref{fig:mni0}, middle panel), despite the similar density profiles of the two stars up to the base of the oxygen shell and despite the steep increase of $Y_e$ from $Y_e < 0.485$ to $Y_e > 0.495$ happening at the same mass coordinate in both stars (Figure~\ref{fig:psn_closer}, upper two panels). The C21.0D models nevertheless produce more $^{56}$Ni because the interface to the O-layer with decreasing density and increasing entropy lies at a lower enclosed mass, permitting stronger shock heating and more $^{56}$Ni nucleosynthesis in the oxygen shell (Figure~\ref{fig:xni}, middle panel). For long energy-injection times, however, this effect is again compensated by slightly more $^{56}$Ni production in the innermost layers of the C19.7D runs.

A special feature requires brief discussion: At intermediate energy-deposition timescales the C21.0D and C26.6D models exhibit local maxima of their $^{56}$Ni yields, more prominently in the 26.6\,$M_\odot$ cases and only weakly in the 21.0\,$M_\odot$ runs. This phenomenon is caused by the thermal-bomb prescription of energy injection into a fixed mass shell $\Delta M$ that starts expanding when the energy deposition sets in. A compression wave is created when the energy deposition takes place on a shorter timescale than the expansion, which leads to peak temperatures in the ejecta that are reached not right behind the outgoing shock wave but at some distance behind the shock, thus causing high temperatures for a longer period in a wider layer of mass and therefore more $^{56}$Ni production. This effect can be seen in a weak variant in the middle panel of Figure~\ref{fig:second_wave}, where between $t\sim 1.1$\,s and $t\sim 1.3$\,s the peak temperatures of the expelled mass shells (marked by crosses) appear detached from the shock. In this 21.0\,$M_\odot$ model with $t_\mathrm{inj} = 1.0$\,s, however, the effect is mild and has no relevant impact on the $^{56}$Ni nucleosynthesis. For simulations with very short $t_\mathrm{inj}$ the energy deposition is so fast that the compression wave quickly merges with the shock, whereas for very long timescales $t_\mathrm{inj}$ the energy injection is gentle and keeps pace with the outward acceleration of the mass shells, for which reason a strong compression wave is absent.
Only at intermediate values of $t_\mathrm{inj}\sim 0.2$\,s does this compression wave have a significant influence on the temperature evolution of the ejected mass shells in the postshock domain and thus noticeably enhance the $^{56}$Ni production.

\subsection{Shifted inner boundary}
\label{sec:shiftedIBED}

As a next test we moved the inner grid boundary from the deep location to the position at the base of the O-shell (where $s/k_\mathrm{B} = 4$). This choice for the C$M_*$O models is more realistic than the deep inner boundary, because it is better compatible with our current understanding of the neutrino-driven explosion mechanism of CCSNe \citep[e.g.,][]{2016ApJ...818..124E,2016ApJ...821...38S}. The corresponding $^{56}$Ni yields of the thermal-bomb simulations with our standard setting of $\Delta M = 0.05\,M_\odot$ for the energy-injection layer and different values of $t_\mathrm{inj}$ are displayed by solid lines in the bottom panel of Figure~\ref{fig:mni0}. First, we notice that the $^{56}$Ni yields of the C$M_*$O models are much lower for all $t_\mathrm{inj}$ than those of the C$M_*$D models in the panel above. In absolute numbers these yields are closer to the typical values of $\sim$0.05--$0.1\,M_\odot$ for the $^{56}$Ni production in CCSNe with explosion energies around (1--2)$\times 10^{51}$\,erg \citep[see, e.g.,][]{1989ARA&A..27..629A,1994ApJ...437L.115I,2017ApJ...841..127M}. While models C26.6O and C21.0O eject similar amounts of $^{56}$Ni, model C19.7O, in contrast, produces considerably less $^{56}$Ni.

Several important aspects in the C-models with the O-boundary are different from those with the D-boundary: The densities and therefore the ram pressure in the pre-shock matter are significantly lower, for which reason the expansion of the shock and thus also of the matter in the energy-injection layer and above occurs much faster. This can be seen by comparing the middle and bottom panels of Figure~\ref{fig:second_wave}. Moreover, since the density is low, the energy injected into a given mass layer $\Delta M$ is distributed over a considerably wider volume, which can be concluded from the values of $R_\mathrm{OBED}$ given for the C$M_*$O and C$M_*$D models in Table~\ref{tab:dR} ($1.76\cdot 10^8$\,cm and $5.4\cdot 10^7$\,cm, respectively). The effect, however, is not quite as dramatic as the different $R_\mathrm{OBED}$ might suggest, because the density gradient is steep and most of the heated mass $\Delta M$ is still located relatively close to $R_\mathrm{IBED} = 5\cdot 10^7$\,cm. Overall, however, these differences lead to steeper declines of the peak temperatures with enclosed mass than in the models with D-boundary (compare the bottom panels of Figures~\ref{fig:peakTtinj} and \ref{fig:peakTC} with the top and middle panels of these figures). This explains why in the CCSN models with O-boundary less mass is heated to $^{56}$Ni production temperatures. As a consequence, the layer of abundant $^{56}$Ni nucleosynthesis is much narrower in mass and very close to the inner grid boundary (Figure~\ref{fig:xni}), and the total $^{56}$Ni yields are considerably lower than in the CCSN models with deep boundary, even when the final explosion energy is tuned to the same value.
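For orientation, the quoted radii can be translated into a rough volume ratio (an estimate for illustration only; it assumes that both the C$M_*$D and C$M_*$O heating layers start at $R_\mathrm{IBED} = 5\cdot 10^{7}$\,cm, as quoted above and in Table~\ref{tab:dR}):
\[
\frac{\Delta V_\mathrm{O}}{\Delta V_\mathrm{D}} \approx \frac{(1.76\cdot 10^{8})^{3}-(5\cdot 10^{7})^{3}}{(5.4\cdot 10^{7})^{3}-(5\cdot 10^{7})^{3}} \approx 160.
\]
Nominally, the injected energy is thus spread over a roughly two orders of magnitude larger volume, although, as argued above, the steep density gradient concentrates most of the heated mass near $R_\mathrm{IBED}$, so the effective dilution of the heating rate is considerably weaker than this number suggests.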
In the C$M_*$O models the peak temperature profiles are quite similar for different energy-injection timescales (Figure~\ref{fig:peakTtinj}, bottom panel), for which reason the $^{56}$Ni outputs of the 21.0\,$M_\odot$ and 26.6\,$M_\odot$ models are relatively similar with a moderate decrease for longer $t_\mathrm{inj}$. In the case of the 19.7\,$M_\odot$ simulations, however, the peak temperature declines extremely steeply as a function of enclosed mass (Figure~\ref{fig:peakTC}) because of the very low densities of the heated mass layer (due to the low densities in the oxygen layer of the progenitor; Figure~\ref{fig:psn_closer}). Therefore the expansion of this layer proceeds extremely quickly, and the expansion cooling as well as the dilution of the energy deposition over a quickly growing volume do not permit high peak temperatures in a large mass interval. As a result, the $^{56}$Ni yields in the C19.7O models are the lowest of the three considered progenitors.

Another difference between C-models with D-boundary and O-boundary is the fact that in the latter the inclusion of the heated mass $\Delta M$ in the ejecta or its exclusion can make a sizable difference in the $^{56}$Ni yields. In contrast to the U$M_*$D and C$M_*$D models, the simulations with collapse and O-boundary produce considerably less $^{56}$Ni when the matter in the energy-injection layer is not taken into account in the ejecta (see the light-colored solid lines in the bottom panel of Figure~\ref{fig:mni0}). In particular, C19.7O underproduces $^{56}$Ni massively in this case, and for the models with the 21.0\,$M_\odot$ and 26.6\,$M_\odot$ progenitors we witness again a strong trend of decreasing $^{56}$Ni yields with longer energy-injection timescales when only material exterior to $R_\mathrm{OBED}$ is counted as ejecta. Such a trend, however, disappears almost entirely when the $^{56}$Ni nucleosynthesized in the energy-deposition layer is included in the ejecta (heavy solid lines compared to light-colored solid lines in the bottom panel of Figure~\ref{fig:mni0}).

We recall that the exclusion of the heated mass from the ejecta or its inclusion does not have any relevant influence on the total $^{56}$Ni yields of our U- and C-models with deep inner boundary, because the low $Y_e$ in the vicinity of this boundary location (see Figure~\ref{fig:psn_closer}) prevents abundant production of $^{56}$Ni in the heated mass layer (Figure~\ref{fig:xni}, top and middle panels). The situation is different for the O-models, because $Y_e$ is close to 0.5 near the inner grid boundary in this case (Figure~\ref{fig:psn_closer}). Much of the $^{56}$Ni is then produced in the mass layers just exterior to $R_\mathrm{ib}$, while at the same time the total $^{56}$Ni yields are much smaller (Figure~\ref{fig:xni}, bottom panel). Therefore the $^{56}$Ni assembled in the heated mass can make a significant or even dominant contribution to the total yield of this isotope. The C19.7O models are the most extreme cases in this respect. Their $^{56}$Ni yields are extremely low when only matter exterior to the heated layer is considered as ejecta. This is especially problematic since our default value of 0.05\,$M_\odot$ for the energy-injection mass $\Delta M$ is fairly large. This fact is further illuminated in the following section, where we will discuss the results for variations of $\Delta M$.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{ni_uncoll_deep_005_4}
\includegraphics[width=\columnwidth]{ni_coll_si_005_4}
\caption{$^{56}$Ni yields as functions of energy-injection timescale for uncollapsed CCSN models (left panel) and collapsed CCSN models with inner grid boundary shifted farther out (right panel). The different colors correspond to the different progenitors as labelled in the left panel. Solid lines belong to our standard choice of $\Delta M = 0.05\,M_\odot$ for the fixed mass in the energy-deposition layer and dash-dotted lines refer to the values of $\Delta M$\textquotesingle$=0.005\,M_\odot$ (see Table~\ref{tab:explosions}). The horizontal grey dotted line indicates the $^{56}$Ni yield of 0.07\,$M_\odot$ for a $\sim$\,$10^{51}$\,erg explosion, e.g., SN~1987A \citep{1989ARA&A..27..629A}.}
\label{fig:mni005}
\end{figure*}

\subsection{Variations of mass in energy-injection layer}
\label{sec:massvariations}

We also simulated some test cases of U-models and C-models using moderately different values of the fixed heated mass $\Delta M$, varied within plausible ranges such that the initial volumes of the heated masses are the same for the C-models of all progenitors (see Table~\ref{tab:dR} and Section~\ref{sec:bombvariations}). These models are denoted by U$M_*$DM, C$M_*$DM, and C$M_*$OM, represented by dashed lines in the panels of Figure~\ref{fig:mni0}. There are no relevant effects with respect to the $^{56}$Ni production, neither in the U-models nor in the C-models with deep inner boundary, when $\Delta M \approx 0.04\,M_\odot$ is used instead of $\Delta M = 0.05\,M_\odot$; the dashed lines are mostly indistinguishable from the solid lines in the top and middle panels of Figure~\ref{fig:mni0}. However, slightly more sensitivity of the $^{56}$Ni yields to the choice of $\Delta M$ is obtained in the cases of the C$M_*$O models (bottom panel of Figure~\ref{fig:mni0}). Changing to $\Delta M \approx 0.03\,M_\odot$ (C19.7OM models) increases the nickel production for $t_\mathrm{inj} \lesssim 0.2$\,s, whereas a change to $\Delta M \approx 0.07\,M_\odot$ (C21.0OM models) decreases the $^{56}$Ni yield; both cases are displayed by heavy dashed lines in the bottom panel of Figure~\ref{fig:mni0}. In both cases the relative difference in the $^{56}$Ni yields compared to the standard setup with $\Delta M = 0.05\,M_\odot$ depends on $t_\mathrm{inj}$ and is largest for short $t_\mathrm{inj}$ and low $^{56}$Ni production with the standard value of $\Delta M$. We notice again that this effect is considerably stronger if the nucleosynthesis in the heated mass $\Delta M$ itself is excluded from the $^{56}$Ni budget (light-colored dashed lines in the bottom panel of Figure~\ref{fig:mni0}) instead of counting unbound matter in the energy-deposition layer also as ejecta (heavy dashed lines in the bottom panel of Figure~\ref{fig:mni0}). When $\Delta M$ is excluded from the ejecta, the $^{56}$Ni yields in the C$M_*$O (light-colored solid lines) and the C$M_*$OM models (light-colored dashed lines) become not only significantly lower but also very sensitive to the energy-injection timescale, as already mentioned in Section~\ref{sec:shiftedIBED}. This strong variation with $t_\mathrm{inj}$ in the case of our O-boundary models is reminiscent of the SM19 results with D-boundary, but the effect vanishes almost entirely for all O-models when the $^{56}$Ni production within the heated mass layer is added to the ejecta.
For completeness, we also tested a radical reduction of $\Delta M$ from our default of 0.05\,$M_\odot$ to the value of 0.005\,$M_\odot$ adopted by SM19 for the fixed mass in the energy-deposition layer (U- and C-models in Table~\ref{tab:explosions} with M\textquotesingle\ as endings of their names). These simulations reproduce the trend witnessed for the C19.7OM models compared to the C19.7O models in the bottom panel of Figure~\ref{fig:mni0}, namely that a reduced $\Delta M$ tends to increase the $^{56}$Ni production (see Figure~\ref{fig:mni005}). While the difference is small and thus has no relevant effect in the uncollapsed (and collapsed) models with the D-boundary (left panel of Figure~\ref{fig:mni005}), the increase is more significant in the simulations with O-boundary (right panel). However, considering all the results provided by Figures~\ref{fig:mni0} and \ref{fig:mni005}, one must conclude that, overall, the $^{56}$Ni yields are not overly sensitive to the exact value chosen for $\Delta M$, and that the corresponding variations are certainly secondary compared to the differences between collapsed and uncollapsed models and between models with D-boundary and O-boundary. These findings shed light on the many ambiguities and the somewhat arbitrary choices that can be made in the treatments of artificial explosions with parametric methods. In any case, it is advisable to also include the mass of the energy-injection layer in the ejecta of the thermal bomb, if this matter is ultimately expelled during the explosion. This is particularly relevant when the initial mass cut is assumed to be located at the more realistic $s/k_\mathrm{B} = 4$ position and the thermal energy is dumped into an extended layer with mass $\Delta M$, whose choice is inspired (roughly) by the mass heated by neutrinos in CCSNe. If, instead, the mass $\Delta M$ is excluded from the ejecta, the $^{56}$Ni production can become highly sensitive to the exact values of both $\Delta M$ and $t_\mathrm{inj}$, depending on the density structure of the progenitor star.

\begin{figure*}
\includegraphics[width=\columnwidth]{final_ni_C21MV_3.pdf}
\includegraphics[width=\columnwidth]{final_ni_C21MV150_3.pdf}
\caption{$^{56}$Ni yields as functions of energy-injection timescale for collapsed CCSN models with fixed mass $\Delta M = 0.05\,M_\odot$ (solid lines) and fixed volume (dash-dotted lines) of the energy-deposition layer. The left panel displays the results for our standard collapse to $r_\mathrm{min}=500$\,km, the right panel the cases with extreme collapse to $r_\mathrm{min}=150$\,km. The different colors correspond to the different progenitors as labelled in the left panel. Note that the models with fixed volume for the longest energy-deposition timescales in the left panel could not be finished because of the computational demands connected to small time steps.
The horizontal grey dotted line indicates the $^{56}$Ni yield of 0.07\,$M_\odot$ for a $\sim$\,$10^{51}$\,erg explosion, e.g., SN~1987A \citep{1989ARA&A..27..629A}.} \label{fig:final_ni_150} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{shells_C21M_5.pdf} \includegraphics[width=\columnwidth]{shells_C21M150_5.pdf} \includegraphics[width=\columnwidth]{shells_C21V_7.pdf} \includegraphics[width=\columnwidth]{shells_C21V150_8.pdf} \caption{Radius evolution of Lagrangian mass shells versus time for CCSN runs of the 21\,$M_\odot$ progenitor with collapse phase and a representative energy-deposition timescale of 0.5\,s; top left: for fixed mass of $\Delta M = 0.05\,M_\odot$ in the energy-deposition layer and collapse to our default value for the minimum radius of $r_\mathrm{min}=500$\,km; top right: for the same fixed mass in the energy-deposition layer but collapse to $r_\mathrm{min}=150$\,km; bottom left: for fixed volume of the energy deposition and collapse to $r_\mathrm{min}=500$\,km; bottom right: for fixed energy-deposition volume and collapse to $r_\mathrm{min}=150$\,km. The thin black solid lines are the mass shells, spaced in steps of 0.025\,$M_\odot$, the blue line marks the shock radius, the yellow line the inner grid boundary, which is also the lower boundary of the energy-deposition layer, and the red line indicates the outer boundary of the energy-deposition layer, either at a fixed mass interval of 0.05\,$M_\odot$ above the inner boundary or at a fixed radius. Crosses indicate the instants when the peak temperature of each mass shell is reached; their colors correspond to temperature values as given by the color bars. Vertical lines mark the beginning and the end of the energy deposition.} \label{fig:shells_150} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{peak_temp_21_C21M_3.pdf} \includegraphics[width=\columnwidth]{peak_temp_21_C21M150_3.pdf} \includegraphics[width=\columnwidth]{peak_temp_21_C21V_4.pdf} \includegraphics[width=\columnwidth]{peak_temp_21_C21V150_3.pdf} \caption{Peak temperatures as functions of enclosed mass for the CCSN runs with the 21\,$M_\odot$ progenitor and different energy-injection timescales for the same modelling setups shown in Figure~\ref{fig:shells_150}; top left: for fixed mass of $\Delta M = 0.05\,M_\odot$ in the energy-deposition layer and collapse to our default value for the minimum radius of $r_\mathrm{min}=500$\,km; top right: for the same fixed mass in the energy-deposition layer but collapse to $r_\mathrm{min}=150$\,km; bottom left: for fixed volume of the energy deposition and collapse to $r_\mathrm{min}=500$\,km; bottom right: for fixed energy-deposition volume and collapse to $r_\mathrm{min}=150$\,km. Different intensities of grey shading indicate different regimes of explosive nucleosynthesis as labelled.} \label{fig:peak_temp_150} \end{figure*} \subsection{Fixed volume for energy-injection layer} \label{sec:fixedvolume} In another variation of the thermal-bomb modelling we also performed runs with fixed volume $\Delta V$ for the energy deposition, constrained to simulations including the collapse phase and applying the O-boundary (models C$M_*$OV in Table~\ref{tab:explosions}). These simulations used the same volume for all of the three considered progenitors, and correspondingly the initial masses in the energy-injection volume were slightly different between these progenitors (Table~\ref{tab:dR}). 
Moreover, these initial mass values were also different from the fixed masses $\Delta M$ in the heating layer of the C$M_*$O models (except for the 26.6\,$M_\odot$ case), to which we will compare the C$M_*$OV models. Although we found only a modest influence of variations of the fixed mass in the energy-deposition layer in Section~\ref{sec:massvariations}, we will see that the moderate differences in the initial mass contained in the fixed heated volume can cause some subtle relative differences in the behavior of the simulations for different progenitor masses.

Our CCSN models with fixed volume for the energy injection behave, overall, quite similarly to the models with fixed mass. This holds for the $^{56}$Ni yields (left panel of Figure~\ref{fig:final_ni_150}) as well as the explosion dynamics (left panels of Figure~\ref{fig:shells_150}) and the peak-temperature distribution (left panels of Figure~\ref{fig:peak_temp_150}). However, the computation of the fixed-$\Delta V$ models is partly more difficult and more time consuming, because the time steps become small when the mass in the energy-deposition volume decreases and therefore the entropy per nucleon $s$ increases. This implies a growth of the sound speed, because $c_\mathrm{s} \approx \sqrt{(4/3)\cdot P/\rho} \propto \sqrt{(4/3)\cdot s\,T}$ for the radiation-dominated conditions in the heated volume, and therefore it leads to a corresponding reduction of the Courant-Friedrichs-Lewy limit for the length of the time steps. For this reason some of our C$M_*$OV simulations with the longest energy-deposition timescales could not be finished due to their computational demands. Nevertheless, the available runs are sufficient to draw the essential conclusions.

In Figure~\ref{fig:final_ni_150}, left panel, only minor differences in the $^{56}$Ni production are visible between the C$M_*$O models and the C$M_*$OV models. Only the 21.0\,$M_\odot$ runs exhibit more sizable differences, i.e., the C21.0OV models eject systematically lower $^{56}$Ni yields than the C21.0O simulations, especially for short energy-injection times. The special role of the C21.0OV models among the CCSN simulations for the three progenitors is explained by the fact that the initially heated mass in the 21.0\,$M_\odot$ models is the largest of all constant-volume models (see Table~\ref{tab:dR}), whereas the heated volumes are the same for all cases. This implies that the heating rate per unit mass is smallest in the C21.0OV models. In addition, the initial mass in the heated volume of the C21.0OV models is also larger than the mass in the heating layer of the C21.0O simulations ($0.068\,M_\odot$ instead of $0.05\,M_\odot$). For this reason the volume over which the heating is spread is greater in the C21.0OV models, reducing the heating rate per volume in the innermost ejecta. These differences have consequences for the shock strength. The shock in the C21.0OV simulations is weaker and the peak temperatures remain lower than in the C21.0O models (Figure~\ref{fig:peak_temp_150}, left panels), where the heated mass is not only smaller but the energy injection also occurs into a fixed mass and thus follows the expanding gas. In contrast, in the C21.0OV simulations the heated gas expands out of the heated volume.
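The time-step argument can be illustrated with a schematic estimate (the following Python sketch is for illustration only; temperature, density, and zone width are hypothetical round numbers, not values extracted from our simulations):

\begin{verbatim}
import math

A_RAD = 7.566e-15   # radiation constant [erg cm^-3 K^-4]
K_B   = 1.381e-16   # Boltzmann constant [erg K^-1]
M_U   = 1.661e-24   # atomic mass unit [g]

def sound_speed(rho, T):
    # Radiation-dominated gas: P = a*T^4/3, c_s ~ sqrt((4/3)*P/rho).
    return math.sqrt(4.0 / 3.0 * A_RAD * T**4 / (3.0 * rho))

T, dr, cfl = 5.0e9, 1.0e6, 0.5      # K, cm (10 km zone), Courant factor
for rho in (1.0e6, 1.0e5, 1.0e4):   # g cm^-3, decreasing as mass drains
    s  = 4.0 * A_RAD * T**3 * M_U / (3.0 * K_B * rho)  # entropy [k_B/nucleon]
    dt = cfl * dr / sound_speed(rho, T)
    print(f"rho = {rho:.0e}: s = {s:7.1f} k_B, dt = {dt:.2e} s")
\end{verbatim}

At fixed temperature the Courant-limited time step scales as $\Delta t \propto \sqrt{\rho}$, so draining the heated volume of mass by two orders of magnitude in density tightens the time-step limit by a factor of ten.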
For long heating timescales the energy injection into a fixed mass or a fixed volume makes little difference because the gas expands only slowly, allowing the infall of the preshock gas to proceed for a longer time, leading to higher kinetic energies and thus to stronger shock heating. Therefore the solid and dash-dotted lines in the left panel of Figure~\ref{fig:final_ni_150} approach each other for all progenitors when the heating timescales are long, consistent with the observation that the peak temperatures in the left panels of Figure~\ref{fig:peak_temp_150} become very similar for the higher values of $t_\mathrm{inj}$. Instead, if the heating timescale is short, the heated gas in the 21.0\,$M_\odot$ models with fixed energy-deposition volume experiences lower heating rates per unit volume and moves out of the heated volume rather than receiving continuous energy input as in the C21.0O models, where the heating shifts outward with the expanding matter. Therefore the shock becomes weaker and the peak temperatures, in particular of the innermost ejecta, in the C21.0OV simulations with short $t_\mathrm{inj}$ remain lower than in the C21.0O models. Since the initially heated mass in the C21.0OV models is larger than in the fixed-$\Delta V$ simulations for the other progenitors, this temperature effect and the correspondingly lower $^{56}$Ni production are most pronounced in the C21.0OV runs. A moderate opposite trend is visible for the C19.7OV models with short $t_\mathrm{inj}$ because of the smallest value of the initial mass in the fixed heated volume in the simulations with the 19.7\,$M_\odot$ progenitor (Table~\ref{tab:dR}).

\subsection{Effects of minimum radius for collapse}
\label{sec:minradius}

Finally, we also tested the influence of the minimum radius $r_\mathrm{min}$ in the prescription of the initial collapse phase of the C-models by running thermal-bomb models with $r_\mathrm{min} = 150$\,km, which is close to the radial location of the neutrino-heating layer in neutrino-driven explosion models, instead of our canonical choice of $r_\mathrm{min} = 500$\,km. For these tests we restricted ourselves to the models with O-boundary, with fixed mass layer $\Delta M$ (models xC$M_*$O in Table~\ref{tab:explosions}) or fixed volume $\Delta V$ (models xC$M_*$OV in Table~\ref{tab:explosions}) for the energy injection, and we will compare them with the default-collapse models C$M_*$O and C$M_*$OV. Here one has to keep in mind that all C$M_*$O and xC$M_*$O models, for all progenitors, were computed with exactly the same fixed mass of $\Delta M = 0.05\,M_\odot$ for the energy-injection layer. The C$M_*$OV and xC$M_*$OV models for a given progenitor had effectively the same initial mass (up to the third digit) and nearly the same volume of the heated layer (Table~\ref{tab:dR}). However, while the heated volume is the same in the CCSN runs for all progenitors, the initial masses in this volume differ between the three progenitors (Table~\ref{tab:dR}).

Comparing the left and right panels of Figure~\ref{fig:final_ni_150}, we witness only small differences in the $^{56}$Ni production for short heating timescales between the xC$M_*$O and the C$M_*$O simulations, and also between the xC$M_*$OV and the C$M_*$OV simulations there are only relatively modest differences. The most prominent effect is a spreading between the $^{56}$Ni yields of the xC21.0O and xC21.0OV models that is about twice as big as that between the C21.0O and C21.0OV cases (right panel of Figure~\ref{fig:final_ni_150}).
There is also a slightly greater gap between the yields of the xC26.6O and xC26.6OV simulations; this difference is again about double the size of that between the C26.6O and C26.6OV models, where it is effectively insignificant. The reasons for the somewhat lower production of $^{56}$Ni in the fixed-volume models with short energy-injection times were discussed in Section~\ref{sec:fixedvolume}, and they lead to stronger effects in simulations with more extreme collapse.

For long heating timescales we observe an interesting, new phenomenon in the extreme-collapse models that is exactly opposite to the pronounced decrease of the $^{56}$Ni yields for longer $t_\mathrm{inj}$ in U-models reported by SM19 and reproduced by our calculations, and to the similar but much weaker trends that one can spot in most of our C-models, too. Allowing for a deep collapse to $r_\mathrm{min} = 150$\,km, we obtain increasing $^{56}$Ni yields for longer energy-injection timescales, in particular for the fixed-$\Delta M$ cases, but also, though less drastically, for the fixed-$\Delta V$ models (Figure~\ref{fig:final_ni_150}, right panel). (It is possible that a mild version of this trend is also present in our default-collapse models with fixed heating volume, but unfortunately the corresponding simulations for long $t_\mathrm{inj}$ could not be finished.) The increase of the $^{56}$Ni production for $t_\mathrm{inj} = 1$\,s and 2\,s reverses the shallow decline that can be seen between $t_\mathrm{inj} = 0.05$\,s and 0.5\,s.

The reason for this new effect can be inferred from the right panels of Figure~\ref{fig:peak_temp_150}. In stark contrast to all the other model sets plotted in Figure~\ref{fig:peakTtinj} and in the left panels of Figure~\ref{fig:peak_temp_150}, the extreme-collapse models with the longest energy-injection times tend to reach higher peak temperatures in a wider mass range than the corresponding simulations with short $t_\mathrm{inj}$. This effect is particularly strong for the xC-models with fixed mass $\Delta M$ of the heating layer (upper right panel of Figure~\ref{fig:peak_temp_150} for the CCSN runs with the 21.0\,$M_\odot$ progenitor). The mass-shell plots of Figure~\ref{fig:shells_150}, right panels compared to the left panels, provide an explanation of this phenomenon. In the deep-collapse cases, the matter is much more strongly compression-heated during the infall, and it also expands more slowly behind the shock than in the standard C-models. This effect is especially relevant when the heating timescales are long, because in such cases the shock accelerates outward less quickly, thus the gas ahead of the shock has more time to fall deeper into the gravitational potential of the newly formed neutron star, and when the outward moving shock sweeps up the infalling matter, the higher gas velocities lead to much stronger shock heating.

In the xC21.0OV and xC21.0O models there is an additional effect. In the fixed-$\Delta M$ models of the 21.0\,$M_\odot$ progenitor, the energy injection is initially constrained to a narrower volume containing 0.05\,$M_\odot$, and it tracks the ejected matter. This leads to maximum peak temperatures in the mass shells well behind the shock (see upper right panel of Figure~\ref{fig:shells_150}). In contrast, in the fixed-$\Delta V$ models of the same progenitor, the heated volume (initially containing 0.068\,$M_\odot$) is considerably larger than the initial heating volume in the corresponding fixed-$\Delta M$ models.
Therefore the shock expansion reaches a larger radius within a shorter period of time, preventing the deep infall of the preshock material in the xC21.0 cases with fixed $\Delta V$ (compare upper and lower right panels of Figure~\ref{fig:shells_150}). Consequently, the postshock heating is less extreme in the simulations with fixed energy-injection volume than in the models with fixed mass (see the upper and lower right panels of Figure~\ref{fig:peak_temp_150}). In the extreme-collapse cases with fixed $\Delta V$ the heated volume is somewhat smaller than in the corresponding models with standard collapse because of smaller values of $R_\mathrm{IBED}$ and $R_\mathrm{OBED}$ (Table~\ref{tab:dR}). Therefore the energy-deposition rate per volume in these xC-models is higher than in the C-models, and the innermost ejecta come from regions with stronger heating, for which reason also the xC-models with fixed $\Delta V$ exhibit a mild trend toward higher postshock temperatures for long energy-injection timescales. Of course, the combined heating effect (compression by infall and shock, plus energy injection) is significantly stronger when the heating follows the ejected mass, as in the xC$M_*$O models, for which reason these models show a considerably steeper increase of the $^{56}$Ni production with longer $t_\mathrm{inj}$.

In contrast, for short heating timescales the explosion dynamics of models with default collapse and extreme collapse are quite similar, and the differences in the peak-temperature distributions are mostly connected to the initially stronger compression heating in the xC-models. However, in both prescriptions of the collapse phase, similar amounts of mass are heated to NSE and complete Si-burning temperatures (compare the upper left with the upper right panel and the lower left with the lower right panel in Figure~\ref{fig:peak_temp_150}). Therefore the $^{56}$Ni yields for short $t_\mathrm{inj}$ are similar between the C-models and the xC-models of each progenitor, both for fixed $\Delta M$ and for fixed $\Delta V$, except for the effect that we already mentioned above, namely that the $^{56}$Ni production in the xC21.0OV and xC26.6OV models compared to the xC21.0O and xC26.6O models is somewhat more reduced than in the C21.0OV and C26.6OV models relative to the C21.0O and C26.6O models (see the left and right panels of Figure~\ref{fig:final_ni_150}).

By default our $^{56}$Ni yields include nickel produced in the energy-deposition layer (see Section~\ref{sec:compSM19models}). In principle, one has to consider that some of this innermost matter may be unable to achieve escape conditions and may thus stay gravitationally bound, not contributing to the CCSN ejecta. Among our model sets, this issue especially affects the extreme-collapse cases with fixed volume for the energy injection, where the heated gas resides deep in the gravitational potential of the newly formed neutron star and the energy deposition does not follow the outward moving matter. Among these xC-models mainly the 21.0\,$M_\odot$ simulations are affected, since the initial mass in the heated volume of these models is largest (see Table~\ref{tab:dR}). One can see this in the lower right panel of Figure~\ref{fig:shells_150}, where the innermost displayed mass shell exterior to $R_\mathrm{IBED}$ expands only very slowly.
The radial velocities of this shell over 30\,s in the xC21.0OV simulations are only around 100\,km\,s$^{-1}$ and therefore considerably lower than the escape velocity, which is still of the order of $10^{4}$\,km\,s$^{-1}$ at a radius of some 1000\,km. Consequently, this matter might not become unbound despite its continuous, slow expansion until the end of our simulations. Subtracting the $^{56}$Ni contained in this innermost material would somewhat reduce the nickel production, but such a correction would not have a dominant effect for the xC21.0OV models. Nevertheless, it might damp the increase of the $^{56}$Ni yields in these model runs for long energy-injection times seen in Figure~\ref{fig:final_ni_150}, right panel.

\section{Summary and discussion}
\label{section:conclusions}

The thermal-bomb method is a widely used modelling approach to trigger CCSN explosions artificially by releasing energy into a chosen mass layer or volume around a prescribed location of the (initial, i.e., before fallback) mass cut, which usually coincides with the inner boundary of the computational grid. In the present paper we explored various dependencies of the thermal-bomb parameterization; in particular, we considered models with and without an initial collapse phase, different timescales for the energy release, different radial positions of the mass cut, energy deposition in a fixed mass layer or fixed volume, different masses for this layer, and different minimum radii for the contraction during the collapse phase. For this purpose we performed 1D CCSN simulations with the thermal-bomb method, using the \textsc{Prometheus-HOTB} code, and we post-processed the ejecta for nucleosynthesis with the SkyNet open-source network. We focused here on the production of $^{56}$Ni because of its pivotal importance for observational SN diagnostics. Moreover, the production of this dominant radioactive isotope can be considered representative of the total output in iron-group and intermediate-mass nuclei without entering the discussion of yields of other isotopes, whose relative amounts are highly sensitive to the exact distribution of $Y_e$ in the ejecta.

Our work was motivated by the recent finding of SM19, deduced from thermal-bomb simulations for three progenitors with different masses, that the production of $^{56,57}$Ni and $^{44}$Ti decreases dramatically for energy-injection timescales longer than about 100\,ms. SM19 concluded that the production of these nuclear species and other elements is best compatible with observational constraints for nearly instantaneous explosions, i.e., for energy-release timescales of the thermal bomb as short as $\lesssim$50\,ms. If correct, this result would be a strong argument against the neutrino-driven explosion mechanism for CCSNe, because self-consistent ab initio simulations show that this mechanism provides the energy of the explosion only over timescales of seconds \citep[see, e.g.,][]{2021ApJ...915...28B}.

In our simulations, mainly considering 19.7, 21.0, and 26.6\,$M_\odot$ progenitors with significantly different pre-collapse structures, we confirmed the results obtained by SM19, namely a strong anti-correlation between $^{56}$Ni yields and energy-injection timescale. However, we obtained these results only when the thermal bomb was assumed to release its energy in the uncollapsed progenitor models.
When an initial collapse phase is included, which is the more realistic approach when stellar core collapse, neutron star formation, and CCSN explosions are supposed to be simulated, the trend witnessed by SM19 effectively disappears and the $^{56}$Ni production becomes almost independent of the timescale for the energy release. Allowing for an initial collapse to a minimum radius of 150\,km instead of our default value of 500\,km, thus more closely adopting conditions similar to those in neutrino-driven explosions, we even obtained a reversal of the trend seen in uncollapsed models. In such calculations with the more extreme collapse, we found that long energy-injection timescales, especially when longer than $\sim$1\,s, lead to a higher production of $^{56}$Ni than the shorter energy-deposition times, which trigger more rapid explosions. Therefore there is no reason to conclude on grounds of thermal-bomb simulations that the $^{56}$Ni production in slow explosions, as expected for the neutrino-driven mechanism, is in conflict with observational data.

The result reported by SM19 for their thermal-bomb explosions of uncollapsed progenitor models was caused by the energy injection into the low-density, hydrostatic stellar profiles, which permits easy expansion of the ejecta with corresponding expansion cooling as soon as the energy release is switched on. Therefore only small amounts of matter close to the heated mass shell (i.e., the defined mass cut) can reach temperatures that are sufficiently high for NSE and Si burning. The conditions for such temperatures are strongly disfavored for longer energy-injection timescales. In contrast, when an initial collapse phase is included in the thermal-bomb modelling, the energy deposition occurs in infalling matter, which expands much less readily, because the SN shock wave needs to propagate outward against the ram pressure of infalling stellar layers. In this case the shock has to receive more energy input for a predefined value of the final explosion energy, and the correspondingly stronger explosion shock can heat more mass to NSE and Si-burning conditions.

Varying the different inputs for the parametric description of the thermal bombs for a fixed value of the explosion energy, we found that the most sensitive aspects for the production of $^{56}$Ni are the inclusion of the initial collapse instead of releasing the energy into the uncollapsed progenitor, and the location of the initial mass cut at the radius where the entropy per nucleon reaches $s/k_\mathrm{B} = 4$ instead of the position where $Y_e = 0.48$. There is only a relatively modest influence of the exact value of the fixed mass $\Delta M$ in the energy-deposition layer. Also the choice of a fixed volume for the energy release instead of a fixed mass causes only secondary differences. Once the initial collapse is included, the timescale of the energy release by the thermal bomb, too, leads to variations only on a secondary level. For the more realistic choice of the initial mass cut at $s/k_\mathrm{B} = 4$, which can be better motivated by neutrino-driven explosion models, it is crucial to also include matter in the heated layer in the ejecta, if this matter becomes unbound during the explosion.

Because of their numerous degrees of freedom, thermal-bomb models can certainly not be employed to assess the viability of any kind of physical explosion mechanism.
For example, artificial explosion methods like thermal bombs can hardly be expected to reproduce the dynamics of neutrino-driven explosions in a physically correct and reliable way. In particular, fixing the mass layer for the energy injection means that the energy input follows the expanding matter, which is unrealistic. Fixing instead the volume for the energy release either overestimates the heated volume or underestimates the heated mass in this volume, where in addition the mass decreases with time, which again is not a realistic description of the neutrino-driven mechanism. Fortunately, the $^{56}$Ni production of thermal-bomb simulations that include a collapse phase turned out not to be overly sensitive to such alternative choices.

Thermal bombs are a numerical recipe that depends on a variety of parameterized inputs that need to be defined. Nevertheless, even with the best choice of these inputs, their usefulness for quantitative predictions of iron-group and intermediate-mass-element nucleosynthesis will always be hampered by the unknown value of the explosion energy and, in principle, also by the unknown initial mass cut. Moreover, iron-group species such as the isotopes $^{56,57}$Ni and $^{44}$Ti are formed in ejecta whose $Y_e$ evolves due to weak-force interactions of neutrinos and where multi-dimensional flows play a crucial role. None of these effects are taken into account in a simple thermal-bomb treatment. Therefore the best one can expect of any artificial explosion trigger is that the method is set up such that it does not massively over- or underproduce nickel and such that it maintains the correct trends of the $^{56}$Ni production with explosion energy, explosion timescale, and progenitor structure.

Since thermal bombs provide an easy-to-apply recipe to trigger explosions, it is very likely that they will remain in use as a method of choice for the exploration of CCSN nucleosynthesis, for example in large sets of progenitor models, despite all the mentioned caveats \citep[e.g.][]{Farmer+2021}. In view of the results of our study, we recommend the following prescriptions:
\begin{enumerate}
\item Include a collapse phase before the energy release of the thermal bomb is started. A minimum collapse radius near 500\,km seems to be sufficient and is computationally less demanding than a smaller radius.
\item Since self-consistent simulations of neutrino-driven CCSNe show that the explosion sets in when the infalling Si/O interface reaches the stagnant bounce shock, the initial mass cut should be chosen near the $s/k_\mathrm{B} = 4$ location instead of putting it close to the edge of the iron core. With this choice, $Y_e$ in the layer of energy injection by the thermal bomb is very close to 0.5 (typically higher than 0.497).
\item For this reason $^{56}$Ni will be efficiently produced in the energy-injection layer, and the matter in this layer should be included in the ejecta, if it becomes gravitationally unbound by the explosion.
\item Using a fixed mass layer $\Delta M$ for the energy injection is numerically easier than using a fixed volume, and the two choices do not lead to major differences. The exact value of $\Delta M$ is not crucial. We suggest $0.05\,M_\odot$, but smaller masses lead to very similar nickel yields.
\item With the recommended setup the $^{56}$Ni production is basically insensitive to the timescale chosen for the energy injection by the thermal bomb.
\end{enumerate} Of course, these recommendations are based on a small set of simulations for only three progenitors and a defined explosion energy of $10^{51}$\,erg in all of our thermal-bomb calculations. A wider exploration is desirable to test the more general reliability of our proposed parameter settings. Beyond the prescriptions listed above, the value of the explosion energy is another crucial input into the thermal-bomb modelling. Its specification has to be guided by our first-principle understanding of the physics of the CCSN mechanism in stars of different masses. In future work we plan to compare thermal-bomb models and direct simulations of neutrino-driven CCSN explosions with respect to the progenitor and explosion energy dependent production of $^{56}$Ni and other iron-group and intermediate-mass elements. \section*{Acknowledgements} We are grateful to Thomas Ertl for his assistance in the starting phase of the project and thank Ewald M\"uller and Johannes Ringler for discussions. Support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Sonderforschungsbereich (Collaborative Research Center) SFB-1258 ``Neutrinos and Dark Matter in Astro- and Particle Physics (NDM)'' and under Germany's Excellence Strategy through Cluster of Excellence ORIGINS (EXC-2094)-390783311 is acknowledged. \section*{Data availability} The data of our calculations will be made available upon reasonable request. \section*{Software} \textsc{Prometheus-HOTB} \citep{1996A&A...306..167J,2003A&A...408..621K,2006A&A...457..963S,2007A&A...467.1227A,2012ApJ...757...69U,2016ApJ...818..124E}; KEPLER \citep{1978ApJ...225.1021W}; SkyNet \citep{2017ApJS..233...18L}. \bibliographystyle{mnras}
{ "timestamp": "2022-09-23T02:15:03", "yymm": "2209", "arxiv_id": "2209.10989", "language": "en", "url": "https://arxiv.org/abs/2209.10989" }
\section{Introduction}

Fredholm integro-differential equations appear in the modeling of various physical processes such as neutron transport problems \cite{Martin}, neural networks \cite{Jackiewicz}, population models \cite{Kemanci}, filtering and scattering problems \cite{Hale2}, inverse problems \cite{Beilina}, and disease spread \cite{Medlock}. Fractional differential equations are important tools in the mathematical modeling of real-world phenomena with memory \cite{He}. The theoretical and numerical analysis of fractional differential equations has been considered by many researchers \cite{Kilbas,Bagley,Chow,Diethelm}.

Consider the fractional Fredholm integro-differential equations of the form
\begin{equation}\label{two} \left\lbrace \begin{array}{lr} D_{*0}^{\alpha}y(x)=g(x)+\displaystyle\int_{0}^{1}k(x,t)f(t,y(t))dt,\ \ \ \ 0< \alpha < 1,\ x\in \Omega,\\ y(0)=c, \end{array}\right. \end{equation}
where $ \Omega=[0,1] $, $c \in \mathds{R}$, and the functions $g\in C( \Omega)$ and $k\in C( \Omega \times \Omega)$ are known. The given function $f$ is continuous and satisfies the following Lipschitz condition in its second argument:
\begin{equation}\label{800} \vert f(x,y_{2})-f(x,y_{1})\vert \leq L_{f} \vert y_{2}-y_{1}\vert,\ \ \ x\in \Omega, \end{equation}
where $L_{f}$ is a positive constant. The operator $D_{*0}^{\alpha}$ denotes the Caputo fractional differential operator with $0<\alpha<1$ (see \cite{Diethelm}).

Several numerical methods have been applied to problem (\ref{two}): the CAS wavelet method \cite{Saeedi}, the Chebyshev wavelet method \cite{Setia}, the Alpert wavelet method \cite{Hag}, the Adomian decomposition method \cite{Noor}, the Taylor expansion method \cite{Zhao}, fractional Lagrange basis functions \cite{Kumar}, rationalized Haar functions \cite{Rahimi}, and Laguerre polynomials \cite{Bayram}.

Spectral collocation methods are efficient tools for the numerical investigation of linear and nonlinear problems. The exponential convergence of these methods for problems with smooth solutions is the main reason for their prominence over other numerical methods \cite{Canuto,21,15,H1,samad}. The main purpose of these methods is to find an approximate solution in terms of a finite number of basis functions, whose unknown coefficients are determined by minimizing the error between the exact and approximate solutions. Orthogonal basis polynomials are the main tool for constructing approximate solutions in spectral methods. The solutions of fractional integral and/or differential equations may be non-smooth, as their derivatives may be unbounded at the left endpoint of the interval. For this reason, the efficiency of spectral methods in solving fractional problems deteriorates when standard basis polynomials such as Legendre, Chebyshev, or Hermite polynomials are used. The resulting reduction of the convergence order is thus one of the main disadvantages of these methods for fractional problems, and overcoming it remains a challenge. Recently, some numerical methods have been introduced that modify the standard basis polynomials via the change of variable $x \to x^{\nu}$, $0<\nu<1$ \cite{9,Conte,Cai,Talaei2,Hale,Talaeim,Talaeid,Wang}.

In this paper, the existence, uniqueness, and smoothness of the solution of (\ref{two}) are investigated. The framework is to convert the problem into a fractional nonlinear integral equation and to present a high-order implicit collocation method for its numerical solution.
Because of the non-smooth behavior of the solutions of (\ref{two}), we utilize the fractional Chelyshkov basis functions of the form
\begin{equation*} \widetilde{C}_{N,i,\nu}(x):=C_{N,i}(x^{\nu}),\ \ \ 0<\nu<1, \end{equation*}
where $\lbrace C_{N,i}(x)\rbrace_{i=0}^{N}$ is a set of orthogonal Chelyshkov polynomials on $[0,1]$ \cite{Chel}. An advantage of the method is the reduction of the given problem to an algebraic system of equations. The simplicity of computing the fractional operational matrix, and the high accuracy achieved with a smaller number of fractional Chelyshkov polynomials for problems with non-smooth solutions, are further main advantages of our method in comparison with numerical methods that use basis functions such as Chebyshev wavelet functions \cite{Setia}, Laguerre polynomials \cite{Bayram}, CAS wavelet functions \cite{Saeedi}, Alpert multi-wavelet functions \cite{Hag}, rationalized Haar functions \cite{Rahimi}, and fractional Lagrange polynomials \cite{Kumar}.

The layout of this paper is as follows: In Section 2, the fractional Chelyshkov polynomials are investigated, operational matrices of integration are derived, and the existence, uniqueness, and smoothness of the solution of problem (\ref{two}) are studied. In Section 3, the numerical method to solve problem (\ref{two}) is constructed. Section 4 contains the convergence analysis of the method. Numerical examples in Section 5 demonstrate the effectiveness of the method. The final section contains a brief review of the paper.

\section{Fractional Chelyshkov polynomials}

In 2005, Vladimir S. Chelyshkov introduced a new type of orthogonal polynomials \cite{Chel}. These polynomials have since been used within spectral methods to solve various types of differential and integral equations \cite{Sezer,Talaei2,Izadi,Hosseininia,Talaei,Talaei1}. The fractional Chelyshkov polynomials are defined as
\begin{equation}\label{jadid} \widetilde{C}_{N,n,\nu}(x)=\sum_{j=n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n}x^{j\nu},\ \ \ n=0,1,...,N. \end{equation}

\begin{figure}[!ht] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.5\linewidth]{P1} \includegraphics[width=0.5\linewidth]{P2} \end{array}$ \end{center} \caption{Plots of the fractional Chelyshkov basis: $\nu=1/2$ (left), $\nu=1$ (right) for $N=5$.} \label{Aggn} \end{figure}

These polynomials are connected to the fractional Jacobi polynomials \cite{9} $P^{a,b,\nu}_{n}(x)$, $a,b>-1$, via
\[ \widetilde{C}_{N,n,\nu}(x)=(-1)^{N-n}x^{n\nu}P^{0,2n+1,\nu}_{N-n}(2x-1), \]
and can be generated from the following recurrence relations \cite{Chel}:
\begin{align}\label{ghgh} &\widetilde{C}_{N,N,\nu}(x)=x^{N\nu},\ \ \ \widetilde{C}_{N,N-1,\nu}(x)=(2N)x^{(N-1)\nu}-(2N+1)x^{N\nu},\nonumber\\ &a_{N,k}\widetilde{C}_{N,k-1,\nu}(x)=(b_{N,k}x^{-\nu}-c_{N,k})\widetilde{C}_{N,k,\nu}(x)-d_{N,k}\widetilde{C}_{N,k+1,\nu}(x), \end{align}
for $k=N-1,...,1$, where
\begin{align} &a_{N,k}=(k+1)(N-k+1)(N+k+1),\ \ \ b_{N,k}=k(2k+1)(2k+2),\nonumber\\ &c_{N,k}=(2k+1)((N+1)^2+k^2+k),\ \ \ d_{N,k}=k(N-k)(N+k+2).
\end{align}

For $N=5$, we have
\begin{align*} &\widetilde{C}_{5,0,\nu}(x)=6-105\,{x}^{\nu}+560\,{x}^{2\,\nu}-1260\,{x}^{3\,\nu}+1260\,{ x}^{4\,\nu}-462\,{x}^{5\,\nu},\\ &\widetilde{C}_{5,1,\nu}(x)=35\,{x}^{\nu}-280\,{x}^{2\,\nu}+756\,{x}^{3\,\nu}-840\,{x}^{4 \,\nu}+330\,{x}^{5\,\nu},\\ &\widetilde{C}_{5,2,\nu}(x)=56\,{x}^{2\,\nu}-252\,{x}^{3\,\nu}+360\,{x}^{4\,\nu}-165\,{x} ^{5\,\nu},\\ &\widetilde{C}_{5,3,\nu}(x)=36\,{x}^{3\,\nu}-90\,{x}^{4\,\nu}+55\,{x}^{5\,\nu},\\ &\widetilde{C}_{5,4,\nu}(x)=10\,{x}^{4\,\nu}-11\,{x}^{5\,\nu},\\ &\widetilde{C}_{5,5,\nu}(x)={x}^{5\,\nu}. \end{align*}
In Fig. \ref{Aggn}, the Chelyshkov polynomials $\lbrace \widetilde{C}_{5,0,\nu}(x),...,\widetilde{C}_{5,5,\nu}(x)\rbrace$ are plotted for $\nu=1,\frac{1}{2}$. All $\widetilde{C}_{N,n,\nu}(x)$, $n=0,...,N$, are polynomials of exact degree $N\nu$. The fractional Chelyshkov polynomials (\ref{jadid}) are orthogonal with respect to the weight function $\varpi(x)=x^{\nu-1}$:
\begin{equation}\label{orto1} \langle \widetilde{C}_{N,p,\nu}(x),\widetilde{C}_{N,q,\nu}(x)\rangle:=\int_{0}^{1}\widetilde{C}_{N,p,\nu}(x)\widetilde{C}_{N,q,\nu}(x)\varpi(x)dx=\left\lbrace \begin{array}{lr} 0,\ \ \ \ \ \ \ \ \ \ \ p\neq q,\\ \displaystyle\frac{1}{\nu(2p+1)}, \ \ \ \ p=q. \end{array}\right. \end{equation}
Also,
\begin{equation}\label{jnnk} \int_{0}^{1}\widetilde{C}_{N,n,\nu}(x)\varpi(x)dx=\frac{1}{\nu(N+1)}. \end{equation}
Let
\[ L^{2}(\Omega)=\lbrace u:\Omega \rightarrow \mathds{R};\ \int_{\Omega}\vert u(x)\vert^{2}\varpi(x)dx <\infty \rbrace, \ \ \ \Vert u\Vert_{2}=\langle u,u\rangle^{\frac{1}{2}}, \]
be the weighted Hilbert space and let
\[ M_{N}=Span\lbrace \widetilde{C}_{N,0,\nu}(x),\widetilde{C}_{N,1,\nu}(x),...,\widetilde{C}_{N,N,\nu}(x)\rbrace \]
be a finite-dimensional subspace of $L^{2}(\Omega)$. The space $M_{N}$ is a complete subspace of $L^{2}(\Omega)$, and for any $u\in L^{2}(\Omega)$ there exists a unique best approximation $\widehat{u}_{N}\in M_{N}$ such that
\[ \Vert u-\widehat{u}_{N} \Vert_{2} \leq \Vert u-v \Vert_{2}, \ \ \forall v \in M_{N}, \]
which implies that
\begin{equation}\label{bc} \langle u-\widehat{u}_{N},\widetilde{C}_{N,n,\nu}(x)\rangle=0,\ \ n=0,...,N. \end{equation}
Moreover, there exist unique coefficients $a_{0},a_{1},...,a_{N}$ such that
\begin{equation}\label{123} \widehat{u}_{N}(x)=\sum_{n=0}^{N}a_{n}\widetilde{C}_{N,n,\nu}(x)=\mathbf{A}_{N}\mathbf{\Phi}(x), \end{equation}
where
\begin{equation*} \mathbf{A}_{N}=[a_{0},a_{1},...,a_{N}], \end{equation*}
and
\begin{equation}\label{wd} \mathbf{\Phi}(x)=[\widetilde{C}_{N,0,\nu}(x),\widetilde{C}_{N,1,\nu}(x),...,\widetilde{C}_{N,N,\nu}(x)]^{T}. \end{equation}
From the orthogonality condition (\ref{orto1}) and relation (\ref{bc}), we get
\begin{align*} \int_{0}^{1}u(x)\widetilde{C}_{N,n,\nu}(x)\varpi(x)dx&=\int_{0}^{1}\widehat{u}_{N}(x)\widetilde{C}_{N,n,\nu}(x) \varpi(x)dx\nonumber\\ &=\int_{0}^{1}\left(\sum_{i=0}^{N}a_{i}\widetilde{C}_{N,i,\nu}(x)\right)\widetilde{C}_{N,n,\nu}(x)\varpi(x)dx\nonumber\\ &=a_{n}\int_{0}^{1}\left(\widetilde{C}_{N,n,\nu}(x)\right)^{2}\varpi(x)dx; \end{align*}
therefore, the coefficients $a_{n}$ can be computed as
\begin{equation}\label{bvb} a_{n}=\nu(2n+1)\int_{0}^{1}u(x)\widetilde{C}_{N,n,\nu}(x)\varpi(x)dx,\ \ \ n=0,1,...,N. \end{equation}
\begin{lemma}\label{kl}\cite{Talaei1} Let $x_{i}$, $i=1,...,N$, be the roots of $\widetilde{C}_{N,0,1}(x)$. Then the fractional Chelyshkov polynomial $\widetilde{C}_{N,0,\nu}(x)$ has the $N$ roots $x_{i}^{\frac{1}{\nu}}$, $i=1,...,N$.
\end{lemma}

In the following theorem, we derive the fractional integration operational matrix of the fractional Chelyshkov polynomials:
\begin{theorem}\label{plp} Let $\mathbf{\Phi}(x)$ be the fractional Chelyshkov polynomial vector defined in (\ref{wd}). Then,
\[ \int _{0}^{x} (x-s)^{\alpha -1}\mathbf{\Phi}(s)ds \simeq \mathcal{P} \mathbf{\Phi}(x), \]
where
\[ \mathcal{P}=\left( \begin{array}{cccc} \Theta(0,0) & \Theta(0,1) & \ldots & \Theta(0,N) \\ \Theta(1,0) & \Theta(1,1) & \cdots & \Theta(1,N) \\ \vdots & \vdots & \ddots & \vdots \\ \Theta(N,0) & \Theta(N,1) & \ldots & \Theta(N,N) \\ \end{array} \right), \]
with
\begin{equation}\label{Dad} \Theta(n,k)=\sum_{j=n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n} B(\alpha , j\nu +1)\xi_{k,j}, \end{equation}
is called the fractional operational matrix of integration. Here, $B(\cdot,\cdot)$ denotes the Beta function. \end{theorem}
\begin{proof} Integrating $\widetilde{C}_{N,n,\nu}(s)$ against the kernel $(x-s)^{\alpha-1}$ from $0$ to $x$ yields
\begin{equation}\label{treee} \int_{0}^{x}(x-s)^{\alpha -1}\widetilde{C}_{N,n,\nu}(s)ds=\sum_{j=n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n} B(\alpha , j\nu+1)x^{j\nu+\alpha}. \end{equation}
Approximating $x^{j\nu+\alpha}$ in terms of the fractional Chelyshkov polynomials, we get
\begin{equation}\label{Za} x^{j\nu+\alpha}\simeq \sum_{k=0}^{N}\xi_{k,j}\widetilde{C}_{N,k,\nu}(x), \end{equation}
where the coefficients $\xi_{k,j}$ can be computed using (\ref{bvb}) as
\begin{align}\label{xikj} \xi_{k,j}&=\nu(2k+1) \displaystyle\int_{0}^{1}x^{j\nu+\alpha}\widetilde{C}_{N,k,\nu}(x)\varpi(x)dx,\nonumber \\ &=\nu(2k+1) \displaystyle\sum_{l=k}^{N}(-1)^{l-k}\binom{N-k}{l-k}\binom{N+l+1}{N-k}\int_{0}^{1}x^{(j+l+1)\nu+\alpha-1}dx,\nonumber \\ &=\nu(2k+1)\displaystyle\sum_{l=k}^{N}\frac{(-1)^{l-k}}{(j+l+1)\nu+\alpha}\binom{N-k}{l-k}\binom{N+l+1}{N-k}. \end{align}
By substituting (\ref{Za}) and (\ref{xikj}) in (\ref{treee}), we have
\begin{align}\label{123t} \int_{0}^{x}(x-s)^{\alpha -1}\widetilde{C}_{N,n,\nu}(s)ds\simeq\sum_{k=0}^{N}\Theta(n,k)\widetilde{C}_{N,k,\nu}(x). \end{align}
Thus, the proof is completed. \end{proof}

Now, we study the existence, uniqueness, and smoothness of the solution to problem (\ref{two}):
\begin{definition}\cite{Diethelm} The Riemann--Liouville fractional integral of order $ \alpha $ for any $u\in L_{1}[a,b]$ is defined as
\begin{equation} J_{a}^{\alpha}u(x):=\frac{1}{\Gamma (\alpha)}\int_{a}^{x}(x-t)^{\alpha-1}u(t)dt. \end{equation}
For $\alpha=0$, we set $J_{a}^{0}:=I$, the identity operator. \end{definition}
\begin{definition}\cite{Diethelm} The operator $D_{*a}^{\alpha}$ defined by
\begin{equation} D_{*a}^{\alpha}u(x):=J_{a}^{\lceil \alpha \rceil-\alpha}D^{\lceil \alpha \rceil}u(x)=\frac{1}{\Gamma (\lceil \alpha \rceil-\alpha)}\int_{a}^{x}(x-t)^{\lceil \alpha \rceil-\alpha-1}u^{(\lceil \alpha \rceil)}(t)dt \end{equation}
is called the Caputo differential operator of order $\alpha\in \mathds{R}_{+}$. \end{definition}
\begin{theorem}[Banach's Fixed Point Theorem]\label{A1}\cite{fixed,Kantorovich} Let $(\mathcal{U},d)$ be a complete metric space and $ 0 \leq \gamma <1$. Assume that the function $ A:\mathcal{U} \rightarrow \mathcal{U}$ satisfies the inequality
\begin{equation} d(Au,Av)\leq \gamma d(u,v) \end{equation}
for every $u,v\in \mathcal{U}$. Then, there exists a unique $u^{*}$ such that $u^{*}=A(u^{*})$. Furthermore, for any $u_{0} \in \mathcal{U}$ we have
\[ A^{j}u_{0}\rightarrow u^{*},\ \ j\rightarrow \infty, \]
and $u^{*}$ is the unique solution of the problem $u=Au$. \end{theorem}
\begin{definition}\cite{Bru} The space $C^{m, \lambda}(0,1]$, $m\in \mathds{N}$, $-\infty<\lambda<1$, is the set of all $m$ times continuously differentiable functions $u:(0,1]\rightarrow \mathds{R}$ such that \begin{equation} \vert u^{(i)}(x)\vert \leq c_{i} \begin{cases} 1, \quad i<1-\lambda, \\ 1+\vert \log(x)\vert, \quad i=1-\lambda, \\ x^{1-\lambda-i}, \quad i>1-\lambda, \end{cases} \end{equation} holds with constants $c_{i}$, $i=0,...,m$. \end{definition} By applying the fractional integral operator $J_{0}^{\alpha}$ to both sides of Eq. (\ref{two}), we obtain \begin{align}\label{3} y(x)=\widetilde{g}(x)+ \int_{0}^{x}(x-t)^{\alpha-1}\widetilde{K}(y(t))dt, \end{align} where \begin{equation}\label{has} \widetilde{K}y(x)=\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(x,z)f(z,y(z))dz \end{equation} and \begin{equation}\label{kok} \widetilde{g}(x)=c+\frac{1}{\Gamma (\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}g(t)dt. \end{equation} Therefore, the problem (\ref{two}) is equivalent to the fractional nonlinear Volterra integral equation (\ref{3}). In the following, we consider the existence, uniqueness and smoothness of the solution of Eq. (\ref{3}). \begin{theorem}\label{B} Assume that the function $f$ satisfies the Lipschitz condition (\ref{800}) and \[ \mathcal{M}_{k}:=\underset{x,t \in \Omega }{\max} \vert k(x,t)\vert. \] If \begin{equation}\label{6} L_{f}\mathcal{M}_{k}<\Gamma (\alpha+1), \end{equation} then Eq. (\ref{3}) has a unique continuous solution on $\Omega$. \end{theorem} \begin{proof} Define the operator $A:C(\Omega)\rightarrow C(\Omega)$ as \begin{equation}\label{HGH} (Ay)(x):=\widetilde{g}(x)+ \int_{0}^{x}(x-t)^{\alpha-1}\widetilde{K}(y(t))dt. \end{equation} It is enough to prove that $A$ has a unique fixed point. To this end, by using the Lipschitz condition for $f$ we obtain \begin{align*} \vert(Ay)(x)-(A\widehat{y})(x)\vert &\leq \int_{0}^{x}(x-t)^{\alpha-1}\vert \widetilde{K}(y(t))-\widetilde{K}(\widehat{y}(t))\vert dt\\ &\leq \frac{\mathcal{M}_{k}}{\Gamma (\alpha)} \int_{0}^{x}(x-t)^{\alpha-1}\left[\int_{0}^{1}\vert f(z,y(z))-f(z,\widehat{y}(z))\vert dz\right]dt\\ & \leq \frac{L_{f}\mathcal{M}_{k}}{\Gamma (\alpha)}\Vert y-\widehat{y} \Vert_{\infty} \int_{0}^{x}(x-t)^{\alpha-1} dt\\ &\leq \frac{L_{f}\mathcal{M}_{k}}{\Gamma (\alpha+1)}\Vert y-\widehat{y}\Vert_{\infty}, \end{align*} which implies that $A$ is a contraction mapping if and only if \[ \frac{L_{f}\mathcal{M}_{k}}{\Gamma (\alpha+1)}<1. \] From Theorem \ref{A1}, the operator $A$ defined in (\ref{HGH}) has a unique fixed point. \end{proof} \begin{theorem}\label{jko} Let $\widetilde{g}\in C^{m,1-\alpha}(\Omega)$, $\widetilde{K}y\in C^{m}(\mathds{R})$, $m \in \mathds{N}$ and $0<\alpha<1$. Then the solution of Eq. (\ref{3}) satisfies the following smoothness property: \begin{equation} \vert y^{(i)}(x)\vert =O(x^{\alpha-i}),\ \ \ \ x\in (0,1], \end{equation} for $i=1,...,m$. \end{theorem} \begin{proof} For the proof see Theorem 2.1 in \cite{Bru}. \end{proof} Theorem \ref{jko} implies that the derivatives of the solution of Eq. (\ref{3}) blow up at the origin as $x\rightarrow 0^{+}$, which degrades the accuracy of many existing spectral methods based on standard basis functions such as Chebyshev, Legendre, etc. In order to overcome this drawback, we utilize the fractional Chelyshkov polynomials as basis functions to capture the fractional-order behavior and ensure consistency between the approximate and exact solutions.
\section{Numerical method} According to the implicit version of the collocation method in \cite{Sloan}, we consider the transformation \begin{equation}\label{trans} w(x)=\widetilde{K}y(x), \end{equation} to approximate the solution of Eq. (\ref{3}). By using (\ref{3}) and (\ref{has}), $w$ satisfies the following equation: \begin{align}\label{3vbbb} w(x)=\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(x,z)f\bigg(z,\widetilde{g}(z)+\displaystyle\int_{0}^{z}(z-t)^{\alpha-1}w(t)dt\bigg)dz. \end{align} Once the unknown function $w$ has been determined, the solution of Eq. (\ref{3}) is recovered from \begin{align}\label{3vbbbv} y(x)=\widetilde{g}(x)+\int_{0}^{x}(x-t)^{\alpha-1}w(t)dt. \end{align} Now, we focus on the implementation of a spectral collocation method to solve Eq. (\ref{3vbbb}). To this end, we approximate $w(x)$ by the fractional Chelyshkov polynomials as \begin{equation}\label{34} w(x)\simeq w_{N}(x)=\sum_{i=0}^{N}w_{i}\widetilde{C}_{N,i,\nu}(x)=\mathbf{W}_{N}\mathbf{\Phi}(x), \end{equation} where $\mathbf{W}_{N}=[w_{0},...,w_{N}]$ is an unknown vector. \begin{theorem}\label{popp} Let $w_{N}$ be the approximation defined in (\ref{34}) and let $\mathcal{P}$ be the fractional operational matrix defined in Theorem \ref{plp}. Then we have \begin{itemize} \item[(A)] $\displaystyle\int_{0}^{x}(x-t)^{\alpha-1}w_{N}(t)dt\simeq \widehat{\mathbf{W}}_{N}\mathbf{\Phi}(x)$,\\ \item[(B)] $\widetilde{g}(x)\simeq (\mathcal{C}+\mathcal{G})\mathbf{\Phi}(x)$, \end{itemize} in which $\mathcal{C}:=[\mathcal{C}_{0},...,\mathcal{C}_{N}]$, $\widetilde{\mathbf{G}}:=[g_{0},...,g_{N}]$, with \[ \mathcal{C}_{i}=\frac{c(2i+1)}{N+1},\ \ g_{i}=\frac{\nu(2i+1)}{\Gamma(\alpha)} \int_{0}^{1} \widetilde{C}_{N,i,\nu}(x)g(x)\varpi(x)dx, \] for $i=0,...,N$, and $\widehat{\mathbf{W}}_{N}=\mathbf{W}_{N} \mathcal{P}$, $\mathcal{G}=\widetilde{\mathbf{G}}\mathcal{P}$. \end{theorem} \begin{proof} Part (A): \\ From Theorem \ref{plp}, we have \begin{align}\label{vol} \int_{0}^{z}(z-t)^{\alpha-1}w_{N}(t)dt&=\mathbf{W}_{N}\int_{0}^{z}(z-t)^{\alpha-1}\mathbf{\Phi}(t)dt \nonumber\\ &\simeq \mathbf{W}_{N} \mathcal{P} \mathbf{\Phi}(z)\nonumber\\ &=\widehat{\mathbf{W}}_{N}\mathbf{\Phi}(z). \end{align} Part (B):\\ In view of (\ref{kok}), let $\frac{1}{\Gamma(\alpha)}g(x)\simeq \widetilde{\mathbf{G}}\mathbf{\Phi}(x)$ and $c=\mathcal{C}\mathbf{\Phi}(x)$. From (\ref{jnnk}) and (\ref{bvb}), we get \[ \mathcal{C}_{i}=\nu(2i+1) \int_{0}^{1} c\ \widetilde{C}_{N,i,\nu}(x) \varpi(x)dx=\frac{c(2i+1)}{N+1},\ \ \] and \[ g_{i}=\frac{\nu(2i+1)}{\Gamma(\alpha)} \int_{0}^{1} \widetilde{C}_{N,i,\nu}(x)g(x)\varpi(x)dx. \] By using Theorem \ref{plp}, we can write \begin{align*} \widetilde{g}(x)&=c+\frac{1}{\Gamma (\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}g(t)dt\nonumber\\ &\simeq \mathcal{C}\mathbf{\Phi}(x)+\widetilde{\mathbf{G}}\int_{0}^{x}(x-t)^{\alpha-1}\mathbf{\Phi}(t)dt\nonumber\\ & \simeq \left(\mathcal{C}+\widetilde{\mathbf{G}}\mathcal{P}\right) \mathbf{\Phi}(x)\nonumber\\ &=\left(\mathcal{C}+\mathcal{G}\right) \mathbf{\Phi}(x), \end{align*} which completes the proof. \end{proof} Let $\ell^{\nu}_{i}(x)$ be the fractional Lagrange basis functions associated with the points $\widehat{x}_{i}=x_{i}^{\frac{1}{\nu}},\ i=0,...,N$, where $x_{i}$ are the roots of the polynomial $\widetilde{C}_{N+1,0,1}(x)$ (cf. Lemma \ref{kl}), and define the fractional interpolation operator as \begin{equation}\label{daroon} \mathcal{I}_{N}u(x)=\sum_{i=0}^{N}u(\widehat{x}_{i})\ell^{\nu}_{i}(x).
\end{equation}
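By Lemma \ref{kl}, the collocation points can be generated by computing the roots of the classical Chelyshkov polynomial $\widetilde{C}_{N+1,0,1}$ and raising them to the power $1/\nu$. A small sketch (assuming NumPy; helper names are ours):
\begin{verbatim}
# Sketch: collocation points \hat{x}_i = x_i^{1/nu} from the roots of
# the classical Chelyshkov polynomial C~_{N+1,0,1}(x).
import numpy as np
from math import comb

def chelyshkov_coeffs(N, n):
    """Monomial coefficients of C~_{N,n,1}(x), highest degree first."""
    c = np.zeros(N + 1)
    for j in range(n, N + 1):
        c[N - j] = (-1)**(j - n) * comb(N - n, j - n) * comb(N + j + 1, N - n)
    return c

def collocation_points(N, nu):
    roots = np.sort(np.real(np.roots(chelyshkov_coeffs(N + 1, 0))))
    return roots**(1.0/nu)

print(collocation_points(1, 1.0))
# for nu = 1 these are 3/5 -+ sqrt(6)/10, the points quoted in Example DWr
\end{verbatim}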
By substituting $w_{N}(x)$ in Eq. (\ref{3vbbb}) and applying $\mathcal{I}_{N}$, we obtain \begin{equation}\label{pgj} \mathcal{I}_{N}w_{N}(x)=\mathcal{I}_{N}\bigg(\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(x,z)f\bigg(z,\widetilde{g}(z)+\displaystyle\int_{0}^{z}(z-t)^{\alpha-1}w_{N}(t)dt\bigg)dz\bigg), \end{equation} and consequently \begin{align}\label{3vvvd} w_{N}(\widehat{x}_{i})=\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(\widehat{x}_{i},z)f\bigg(z,\widetilde{g}(z)+\displaystyle\int_{0}^{z}(z-t)^{\alpha-1}w_{N}(t)dt\bigg)dz. \end{align} By using Theorem \ref{popp} in Eq. (\ref{3vvvd}), we obtain \begin{equation}\label{qq} \mathbf{W}_{N}\mathbf{\Phi}(\widehat{x}_{i})=\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(\widehat{x}_{i},z)f\bigg(z,\big(\mathcal{C}+\mathcal{G}+\widehat{\mathbf{W}}_{N}\big)\mathbf{\Phi}(z)\bigg)dz. \end{equation} The integral term in (\ref{qq}) is approximated by the Gauss-Legendre quadrature formula \cite{Canuto} on $[0,1]$ with weights and nodes $(z_{\ell},\omega_{\ell})_{\ell=0}^{N}$, \begin{equation}\label{908} \int_{0}^{1}k(\widehat{x}_{i},z)\mathcal{H}(z)dz \simeq \sum_{\ell=0}^{N}\omega_{\ell}k(\widehat{x}_{i},z_{\ell})\mathcal{H}(z_{\ell}), \end{equation} where $\mathcal{H}(z)=\displaystyle\frac{1}{\Gamma (\alpha)}f\bigg(z,\big(\mathcal{C}+\mathcal{G}+\widehat{\mathbf{W}}_{N}\big)\mathbf{\Phi}(z)\bigg)$. Then, substituting Eq. (\ref{908}) in Eq. (\ref{qq}), we obtain \begin{equation}\label{up} \mathbf{f}_{i}(\mathbf{W}_{N})=\mathbf{W}_{N}\mathbf{\Phi}(\widehat{x}_{i})-\sum_{\ell=0}^{N}\omega_{\ell}k(\widehat{x}_{i},z_{\ell})\mathcal{H}(z_{\ell})=0. \end{equation} Therefore, \begin{align}\label{oppo} \mathds{F}_{N}(\mathbf{W}_{N})&=\big[ \mathbf{f}_{0}(\mathbf{W}_{N}),...,\mathbf{f}_{N}(\mathbf{W}_{N})\big]\equiv 0, \end{align} which gives a nonlinear algebraic system that can be solved by Newton's iterative method. The approximate solution of Eq. (\ref{3}) is then obtained in the form \begin{align}\label{56} y_{N}(x)&= \left(\mathcal{C}+\mathcal{G}+\mathbf{W}_{N} \mathcal{P}\right) \mathbf{\Phi}(x). \end{align} Newton's iterative method reads as follows: \begin{align}\label{dcdc} \left\{ \begin{array}{ll} \mathds{J}(\mathbf{W}_{N,i})\delta_{N,i}=-\mathds{F}_{N}(\mathbf{W}_{N,i}); \\ \mathbf{W}_{N,i+1}\leftarrow\mathbf{W}_{N,i}+\delta_{N,i},\\ i\leftarrow i+1, \end{array} \right. \end{align} with initial guess $\mathbf{W}_{N,0}$ and stopping condition $\Vert \mathds{F}_{N}(\mathbf{W}_{N,i})\Vert \leq \epsilon$, where $\epsilon$ is a small enough tolerance. The Jacobian matrix $ \mathds{J}$ is defined as \[ \mathds{J}_{i,j}=\frac{\partial \mathbf{f}_{i} }{\partial w_{j}}. \] By applying the iterative process (\ref{dcdc}), a sequence of approximate solutions \[ w_{N,i}(x)=\mathbf{W}_{N,i}\mathbf{\Phi}(x),\ i=0,1,2,..., \] is generated. For $\Vert w_{N,i}-w_{N}\Vert \rightarrow 0$, the Jacobian matrix must be nonsingular. In the next section, we state some convergence results for Newton's method. To select a proper initial guess for Newton's method, we use the initial condition $y_{N}(0)=c=\mathcal{C} \mathbf{\Phi}(0)$ and Eq. (\ref{56}), and choose the initial guess such that \[ y_{N}(0)=\left(\mathcal{C}+\mathcal{G}+\mathbf{W}_{N,0} \mathcal{P}\right) \mathbf{\Phi}(0)=\mathcal{C} \mathbf{\Phi}(0). \] Since $\mathcal{G}=\widetilde{\mathbf{G}}\mathcal{P}$, we conclude that $\mathbf{W}_{N,0}=-\widetilde{\mathbf{G}}$.
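To make the whole procedure concrete, the following minimal sketch (assuming NumPy; a toy illustration using a finite-difference Jacobian in place of the analytical one) assembles and solves the collocation system for the problem treated in Example \ref{DWr} below ($\alpha=\nu=\frac{1}{2}$, $N=1$), hard-coding $\mathcal{P}$, $\mathcal{C}$, $\widetilde{\mathbf{G}}$ and the collocation points from that example.
\begin{verbatim}
# Sketch: Newton iteration (dcdc) for Example DWr (alpha = nu = 1/2, N = 1).
import numpy as np

sqpi = np.sqrt(np.pi)
Phi = lambda x: np.array([2 - 3*np.sqrt(x), np.sqrt(x)])   # basis vector
P = np.array([[np.pi/8, 4 - 9*np.pi/8],
              [-np.pi/24, 3*np.pi/8]])                     # operational matrix
C = np.array([0.0, 0.0])
Gt = np.array([(2*sqpi - 1)/(8*sqpi), (6*sqpi - 3)/(8*sqpi)])
G = Gt @ P
xhat = np.array([3/5 + np.sqrt(6)/10, 3/5 - np.sqrt(6)/10])

z, w = np.polynomial.legendre.leggauss(2)    # Gauss-Legendre mapped to [0,1]
z, w = (z + 1)/2, w/2

def F(W):                                    # residual vector
    yN = lambda x: (C + G + W @ P) @ Phi(x)  # current approximation of y
    H = lambda x: yN(x)**2 / sqpi            # H(z) with f(z,y) = y^2
    quadr = 0.5*sum(wl*H(zl) for zl, wl in zip(z, w))   # k(x,z) = 1/2
    return np.array([W @ Phi(xi) - quadr for xi in xhat])

def J(W, eps=1e-8):                          # finite-difference Jacobian
    cols = []
    for j in range(2):
        d = np.zeros(2); d[j] = eps
        cols.append((F(W + d) - F(W - d))/(2*eps))
    return np.array(cols).T

W = -Gt                                      # initial guess W_{N,0}
for _ in range(30):                          # Newton iteration
    r = F(W)
    if np.linalg.norm(r) < 1e-13:
        break
    W = W + np.linalg.solve(J(W), -r)

print(W)                           # should converge near [0.07052, 0.21157]
print((C + G + W @ P) @ Phi(0.49)) # y_N(0.49), close to sqrt(0.49) = 0.7
\end{verbatim}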
\section{Convergence analysis} In this section, we obtain an upper bound for the error vector of the fractional integration operational matrix and analyze the convergence of the method. \begin{theorem}\label{pkl}(Generalized Taylor series \cite{Odibat}) Let \[ D_{*,0}^{i\nu}u(x)\in C(0,1], \ i=0,...,N+1, \] where $0<\nu\leq 1$. Then, \begin{equation}\label{1231} u(x)=\sum_{i=0}^{N}\frac{D_{*,0}^{i\nu}u(0)}{\Gamma(i\nu+1)}x^{i\nu}+\frac{x^{(N+1)\nu}}{\Gamma((N+1)\nu+1)}D_{*,0}^{(N+1)\nu}u(x)\vert_{x=\xi}, \end{equation} with $0 < \xi \leq x$, $\forall x\in (0,1]$. \end{theorem} \begin{theorem}\label{098} Let $D_{*,0}^{i\nu}u(x)\in C(0,1], \ i=0,...,N+1$, $0<\nu\leq 1$, and let $\widehat{u}_{N}(x)=\sum_{n=0}^{N}a_{n}\widetilde{C}_{N,n,\nu}(x)$ be the best approximation to $u(x)$ out of $M_{N}$. Then, \begin{equation}\label{bon} \Vert u-\widehat{u}_{N}\Vert_{2} \leq \frac{\mathcal{N}_{\nu}}{\Gamma((N+1)\nu+1)\sqrt{(2N+3)\nu}}, \end{equation} in which $\mathcal{N}_{\nu}:=\underset{x\in [0,1]}{\max} \vert D_{*,0}^{(N+1)\nu}u(x) \vert $. \end{theorem} \begin{proof} From Theorem \ref{pkl}, we have \begin{align} \Vert u-\widehat{u}_{N} \Vert^{2} _{2} \leq \Big\Vert u-\sum_{i=0}^{N}\frac{D_{*,0}^{i\nu}u(0)}{\Gamma(i\nu+1)}x^{i\nu}\Big\Vert^{2} _{2} &=\int_{0}^{1}\bigg( \frac{x^{(N+1)\nu}}{\Gamma((N+1)\nu+1)}D_{*,0}^{(N+1)\nu}u(x)\big\vert_{x=\xi} \bigg)^{2}x^{\nu-1}dx\nonumber\\ &\leq \bigg(\frac{\mathcal{N}_{\nu}}{\Gamma((N+1)\nu+1)}\bigg)^{2} \int_{0}^{1}x^{2(N+1)\nu}x^{\nu-1}dx\nonumber\\ &= \bigg(\frac{\mathcal{N}_{\nu}}{\Gamma((N+1)\nu+1)}\bigg)^{2}\frac{1}{(2N+3)\nu}. \end{align} \end{proof} \begin{corollary}\label{9877} For the best approximation $\widehat{u}_{N}(x)=\sum_{n=0}^{N}a_{n}\widetilde{C}_{N,n,\nu}(x)$ to $u(x)$, from Theorem \ref{098} we have the following error bound \begin{equation}\label{bon1} \Vert u-\widehat{u}_{N}\Vert_{2}=O\left(\frac{1}{\Gamma((N+1)\nu+1)\sqrt{(2N+3)\nu}}\right). \end{equation} \end{corollary} \begin{theorem}\label{0123} \cite{Kreyszig} Assume the hypothesis of Theorem \ref{098} holds. Then, \[ \Vert u-\widehat{u}_{N} \Vert_{2}^{2}=\frac{\Psi(u,\widetilde{C}_{N,0,\nu},\widetilde{C}_{N,1,\nu},...,\widetilde{C}_{N,N,\nu})}{\Psi(\widetilde{C}_{N,0,\nu},\widetilde{C}_{N,1,\nu},...,\widetilde{C}_{N,N,\nu})}, \] where \[ \Psi(u,\phi_{1},\phi_{2},...,\phi_{N}):=\left| \begin{array}{cccc} \langle u,u\rangle & \langle u,\phi_{1}\rangle & \ldots & \langle u,\phi_{N}\rangle \\ \langle \phi_{1},u\rangle & \langle \phi_{1},\phi_{1}\rangle & \ldots & \langle \phi_{1},\phi_{N}\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle \phi_{N},u\rangle & \langle \phi_{N},\phi_{1}\rangle & \ldots & \langle \phi_{N},\phi_{N}\rangle \\ \end{array} \right|. \] \end{theorem} \begin{theorem}\label{KKK}Let \begin{equation*}\label{E0} \mathcal{E}(x)=[e_{0}(x),...,e_{N}(x)]:=\int _{0}^{x} (x-s)^{\alpha -1}\mathbf{\Phi}(s)ds- \mathcal{P} \mathbf{\Phi}(x) \end{equation*} be the error vector related to $\mathcal{P}$. Then, \begin{equation} \Vert e_{n}\Vert_{2} \rightarrow 0, \ \ \ \ \ N\rightarrow \infty, \end{equation} for $n=0,...,N$. \end{theorem} \begin{proof} From (\ref{treee}) and (\ref{Za}), we have \begin{align}\label{xvv} e_{n}(x)=\sum_{j=n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n} B(\alpha , j\nu +1)\left( x^{j\nu+\alpha}- \sum_{k=0}^{N}\xi_{k,j}\widetilde{C}_{N,k,\nu}(x)\right), \end{align} for $n=0,1,...,N$.
On the other hand, from Theorem \ref{0123}, we get \begin{equation}\label{kgg} \Vert x^{j\nu+\alpha}- \sum_{k=0}^{N}\xi_{k,j}\widetilde{C}_{N,k,\nu}(x) \Vert_{2}^{2}\leq \frac{\Psi(x^{j\nu+\alpha},\widetilde{C}_{N,0,\nu},\widetilde{C}_{N,1,\nu},...,\widetilde{C}_{N,N,\nu})}{\Psi(\widetilde{C}_{N,0,\nu},\widetilde{C}_{N,1,\nu},...,\widetilde{C}_{N,N,\nu})}; \end{equation} therefore, \begin{align} \Vert e_{n}\Vert_{2}&\leq \sum_{j=n}^{N}\binom{N-n}{j-n}\binom{N+j+1}{N-n} B(\alpha , j\nu +1)\left(\frac{\Psi(x^{j\nu+\alpha},\widetilde{C}_{N,0,\nu},\widetilde{C}_{N,1,\nu},...,\widetilde{C}_{N,N,\nu})}{\Psi(\widetilde{C}_{N,0,\nu},\widetilde{C}_{N,1,\nu},...,\widetilde{C}_{N,N,\nu})}\right)^{1/2}, \end{align} which gives an upper bound for each component of the error vector. Finally, from (\ref{kgg}) and Corollary \ref{9877}, we conclude that \begin{align*} \Vert e_{n}\Vert_{2}&\rightarrow 0,\ \ \ \ N\rightarrow \infty, \end{align*} which completes the proof. \end{proof} For example, for $N=4$ and $\alpha=\nu=\frac{1}{2}$, we have \begin{align*} &\Vert e_{0}\Vert_{2}\leq 1.5666\times 10^{-1},\ \ \Vert e_{1}\Vert_{2}\leq 9.3999\times 10^{-2},\\ &\Vert e_{2}\Vert_{2}\leq 3.1333\times 10^{-2},\ \ \ \Vert e_{3}\Vert_{2}\leq 4.4761\times 10^{-3}. \end{align*} \begin{theorem}\label{conv} Assume that $y(x)$ and $y_{N}(x)$ are the exact and approximate solutions of \eqref{two}, respectively. Then, \begin{equation}\label{bbm} \Vert y-y_{N}\Vert_{2}\rightarrow 0,\ \ \ N\rightarrow \infty. \end{equation} \end{theorem} \begin{proof} By subtracting \eqref{56} from \eqref{3vbbbv}, we have \begin{align} y(x)-y_{N}(x)=\widetilde{g}(x)-\big(\mathcal{C}+\mathcal{G}\big)\mathbf{\Phi}(x)+\int_{0}^{x}(x-t)^{\alpha-1}\left(w(t)-w_{N}(t)\right)dt, \end{align} which yields \begin{equation} \Vert y-y_{N}\Vert_{2} \leq \Vert \widetilde{g}(x)-\big(\mathcal{C}+\mathcal{G}\big)\mathbf{\Phi}(x)\Vert_{2} +\Big\Vert \int_{0}^{x}(x-t)^{\alpha-1}\big(w(t)-w_{N}(t)\big)dt\Big\Vert_{2}. \end{equation} On the other hand, \begin{align} w(x)-w_{N}(x)&=\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(x,z)f\bigg(z,\widetilde{g}(z)+\displaystyle\int_{0}^{z}(z-t)^{\alpha-1}w(t)dt\bigg)dz\nonumber\\ &-\frac{1}{\Gamma (\alpha)}\int_{0}^{1}k(x,z)f\bigg(z,\big(\mathcal{C}+\mathcal{G}+\widehat{\mathbf{W}}_{N}\big)\mathbf{\Phi}(z)\bigg)dz, \end{align} and from (\ref{800}), we can write \begin{align}\label{4rf} \vert w(x)-w_{N}(x)\vert &\leq \frac{L_{f}\mathcal{M}_{k}}{\Gamma (\alpha)}\bigg(\int_{0}^{1}\vert \widetilde{g}(z)-\big(\mathcal{C}+\mathcal{G}\big)\mathbf{\Phi}(z)\vert dz\nonumber\\ &+\int_{0}^{1}\vert \displaystyle\int_{0}^{z}(z-t)^{\alpha-1}w(t)dt-\widehat{\mathbf{W}}_{N}\mathbf{\Phi}(z)\vert dz\bigg). \end{align} So, by using Theorems \ref{098} and \ref{KKK} in (\ref{4rf}), we obtain the desired result (\ref{bbm}). \end{proof} Now, we discuss the conditions under which Newton's method is convergent. To this end, we consider the operator form of Eq. (\ref{oppo}), \begin{equation}\label{ppqq} \mathcal{F}_{N}(w_{N})=w_{N}-\mathcal{I}_{N}\mathcal{K}_{N}w_{N}\equiv 0, \end{equation} where $\mathcal{K}_{N}$ is a quadrature approximation of the integral operator $\mathcal{K}$ defined as \[ \mathcal{K}w(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{1}k(x,z)f\bigg(z,\widetilde{g}(z)+\displaystyle\int_{0}^{z}(z-t)^{\alpha-1}w(t)dt\bigg)dz.
\] The Fréchet derivative of $\mathcal{F}_{N}$ at $w_{N}$ is defined as \[ \mathcal{F}'_{N}(w_{N})(v)=v-\mathcal{I}_{N}\mathcal{K}'_{N}(w_{N})(v), \] in which \[ \mathcal{K}'_{N}(w)(v)=\displaystyle\frac{1}{\Gamma (\alpha)}\sum_{\ell=0}^{N}\omega_{\ell}k(x,z_{\ell})f'\bigg(z_{\ell},\big(\mathcal{C}+\mathcal{G}+\widehat{\mathbf{W}}_{N}\big)\mathbf{\Phi}(z_{\ell})\bigg)v(z_{\ell}), \] and $ f':=f_{y}(x,y)\in C(\Omega)$. From Lemma 2.2 in \cite{Anselone}, we can conclude that if \[ \Vert \mathcal{I}_{N}\mathcal{K}'_{N}w_{N}-\mathcal{K}'w\Vert \rightarrow 0 ,\ \ \ N \rightarrow \infty, \] and $\mathcal{K}'w$ has no eigenvalue equal to 1, then $[I-\mathcal{I}_{N}\mathcal{K}'_{N}w_{N}]$ is invertible. To this end, assume the following conditions hold: \begin{itemize} \item[(R1)] $ \vert f_{y}(x,y)-f_{y}(x',y)\vert \leq C_{1} \vert x-x'\vert^{\beta}, $ \item[(R2)] $ \vert f_{y}(x,u)-f_{y}(x,v)\vert \leq C_{2} \vert u-v\vert, $ \end{itemize} where $C_{1},C_{2}$ are positive constants. By the triangle inequality, we get \begin{align} \Vert \mathcal{I}_{N}\mathcal{K}'_{N}w_{N}-\mathcal{K}'w\Vert_{2} &\leq \Vert \mathcal{I}_{N}\mathcal{K}'w-\mathcal{K}'w\Vert_{2}+\Vert \mathcal{I}_{N}\mathcal{K}'_{N}w_{N}-\mathcal{I}_{N}\mathcal{K}'_{N}w\Vert_{2} \nonumber\\ &+\Vert \mathcal{I}_{N}\mathcal{K}'_{N}w-\mathcal{I}_{N}\mathcal{K}'w\Vert_{2}. \end{align} From Corollary \ref{9877} and the fact that $\mathcal{I}_{N}\mathcal{K}'w \in M_{N}$, under condition (R1) we conclude that \[ \Vert \mathcal{I}_{N}\mathcal{K}'w-\mathcal{K}'w\Vert_{2} \rightarrow 0,\ \ \ N \rightarrow \infty. \] Using condition (R2) and (\ref{4rf}), we have \[ \Vert \mathcal{I}_{N}\mathcal{K}'_{N}w_{N}-\mathcal{I}_{N}\mathcal{K}'_{N}w\Vert_{2}\leq \Vert \mathcal{I}_{N}\Vert_{2} \Vert w_{N}-w\Vert_{2} \rightarrow 0, \] as $N \rightarrow \infty$. From Theorem 1 in \cite{Nevai} and the integration error estimate for the Gauss quadrature rule (\cite{Canuto}, p. 290), one can show that \[ \Vert \mathcal{I}_{N}\mathcal{K}'_{N}w-\mathcal{I}_{N}\mathcal{K}'w\Vert_{2} \rightarrow 0,\ \ \ N \rightarrow \infty. \] In the following theorem, we deal with the local convergence of Newton's method: \begin{theorem} Assume that $w_{N}$ is the solution of Eq. (\ref{ppqq}) and that $[I-\mathcal{K}'w]^{-1}$ exists. Assume further that conditions (R1)-(R2) hold. Then, there exists an $\epsilon>0$ such that if $\Vert w_{N,0}-w_{N}\Vert \leq \epsilon$, then Newton's method converges. Furthermore, \[ \Vert w_{N,i}-w_{N}\Vert \leq \frac{(r\epsilon)^{2^{i}}}{r}, \] provided that $r\epsilon<1$ for some constant $r$. \end{theorem} \begin{proof} If 1 is not an eigenvalue of $\mathcal{K}'w$, then $[I-\mathcal{K}'w]$ is invertible. The proof is straightforward from Theorem 5.4.1 in \cite{Atkinson} and the above discussion. \end{proof} \section{Numerical examples} In this section, we demonstrate the accuracy of the proposed method for the problem (\ref{two}) with smooth and non-smooth solutions. All calculations are performed in Maple 2018 with Digits=40. The $L_{2}$-error is measured as \[ \Vert E_{N}\Vert_{2}\approx \bigg(\frac{1}{N}\sum_{l=0}^{N}\vert E_{N}(x_{l}) \vert^{2} \bigg)^{1/2}, \] where $E_{N}(x)=\vert y(x)-y_{N}(x) \vert$ denotes the absolute difference between the exact and approximate solutions. In these examples, $m$ denotes the number of Newton iterations with initial value $\mathbf{W}_{N,0}=-\widetilde{\mathbf{G}}$.
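Before presenting the examples, it is instructive to tabulate the bound of Corollary \ref{9877} (up to the constant $\mathcal{N}_{\nu}$, and assuming the smoothness hypothesis of Theorem \ref{098} holds for the chosen $\nu$), which already anticipates the fast error decay observed below. A short sketch assuming SciPy:
\begin{verbatim}
# Sketch: decay of the bound 1/(Gamma((N+1)nu+1) sqrt((2N+3)nu)).
from scipy.special import gamma
for nu in (0.25, 0.5, 1.0):
    print(nu, ["%.2e" % (1.0/(gamma((N + 1)*nu + 1)*((2*N + 3)*nu)**0.5))
               for N in range(2, 21, 2)])
\end{verbatim}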
The steps of the numerical method can be summarized as follows: \\ \textbf{Input:} $N$, $\alpha$, $\nu$, $k$, $f$ and $g$. \smallskip \\ \textbf{Output:} The approximate solution $y_{N}(x)=\widetilde{g}(x)+\mathbf{W}_{N} \mathcal{P} \mathbf{\Phi}(x)$. \smallskip \\ \textbf{Step 1.} Construct the vector basis $\mathbf{\Phi}(x)$ in (\ref{wd}) from (\ref{ghgh}). \smallskip \\ \textbf{Step 2.} Compute the vectors $\mathcal{C},\mathcal{G},\widehat{\mathbf{W}}_{N}$ from Theorem \ref{popp}. \smallskip \\ \textbf{Step 3.} Construct the nonlinear algebraic system (\ref{oppo}) using the collocation points $\widehat{x}_{i},\ i=0,...,N$, and the quadrature formula with weights and nodes $(z_{\ell},\omega_{\ell})_{\ell=0}^{N}$. \smallskip \\ \textbf{Step 4.} Solve the system (\ref{oppo}) using Newton's iterative method. \begin{example}\label{DWr} Consider the problem \[ \left\{ \begin{array}{ll} D_{*0}^{\frac{1}{2}}y(x)=\displaystyle \frac{\sqrt{\pi}}{2}-\frac{1}{4}+\frac{1}{2}\displaystyle\int_{0}^{1}y^{2}(t)dt,\\ y(0)=0, \end{array} \right. \] with the non-smooth solution $y(x)=\sqrt{x}$; here $k(x,t)=\frac{1}{2}$ and $f(t,y)=y^{2}$. Using relations (\ref{3})-(\ref{kok}), we obtain the equivalent nonlinear integral equation \begin{align} y(x)=\widetilde{g}(x)+ \int_{0}^{x}(x-t)^{-\frac{1}{2}}\widetilde{K}(y(t))dt, \end{align} where \begin{equation} \widetilde{K}y(x)=\frac{1}{2\sqrt{\pi}}\int_{0}^{1}y^{2}(z)dz,\ \ \ \widetilde{g}(x)=\displaystyle\frac{\sqrt{x}(\sqrt{\pi}-\frac{1}{2})}{\sqrt{\pi}}. \end{equation} From (\ref{3vbbb}), we have \begin{align} w(x)=\frac{1}{2\sqrt{\pi}}\int_{0}^{1}\bigg(\displaystyle\frac{\sqrt{z}(\sqrt{\pi}-\frac{1}{2})}{\sqrt{\pi}}+\displaystyle\int_{0}^{z}(z-t)^{-\frac{1}{2}}w(t)dt\bigg)^{2}dz. \end{align} Now, we apply our method with $N=1$ and $\nu=\frac{1}{2}$. From \textbf{Step 1.}, let \begin{align} &w_{1}(x)=w_{0}\widetilde{C}_{1,0,\nu}(x)+w_{1}\widetilde{C}_{1,1,\nu}(x)=\mathbf{W}_{1}\mathbf{\Phi}(x),\nonumber\\ &\mathbf{W}_{1}=[w_{0},w_{1}],\ \ \ \ \mathbf{\Phi}(x)=[2-3\sqrt{x},\sqrt{x}]^{T}. \end{align} From \textbf{Step 2.}, \[ \displaystyle \mathcal{P}= \left[ \begin{array}{cc} \displaystyle \frac{\pi}{8}&\displaystyle 4-\frac{9\pi}{8} \\ \displaystyle \frac{-\pi}{24}&\displaystyle \frac{3\pi}{8}\\ \end{array} \right],\ \ \ \widehat{\mathbf{W}}_{1}=\mathbf{W}_{1} \mathcal{P}=[\frac{\pi}{8}w_{0}-\frac{\pi}{24}w_{1},(4-\frac{9\pi}{8})w_{0}+\frac{3\pi}{8}w_{1}], \] \[ \mathcal{C}=[0,0],\ \ \ \widetilde{\mathbf{G}}=[\frac{2\sqrt{\pi}-1}{8\sqrt{\pi}},\frac{6\sqrt{\pi}-3}{8\sqrt{\pi}}],\ \ \ \mathcal{G}=\widetilde{\mathbf{G}}\mathcal{P}=[0,1-\frac{1}{2\sqrt{\pi}}], \] \[ \mathcal{H}(z)=\frac{1}{\sqrt{\pi}}\bigg(\big(4\sqrt{z}-\frac{3\pi\sqrt{z}}{2}+\frac{\pi}{4}\big)w_{0}+\big(\frac{\pi\sqrt{z}}{2}-\frac{\pi}{12}\big)w_{1}+\sqrt{z}-\frac{\sqrt{z}}{2\sqrt{\pi}} \bigg)^{2}. \] From \textbf{Step 3.}, we obtain \begin{equation} \mathbf{f}_{i}(\mathbf{W}_{1})=\mathbf{W}_{1}\mathbf{\Phi}(\widehat{x}_{i})-\frac{1}{2}\int_{0}^{1}\mathcal{H}(z)dz=0 \end{equation} (the factor $\frac{1}{2}$ being the constant kernel $k$), and consequently \begin{align} \mathds{F}_{1}(\mathbf{W}_{1})&=\big[ \mathbf{f}_{0}(\mathbf{W}_{1}),\mathbf{f}_{1}(\mathbf{W}_{1})\big]\equiv 0, \end{align} with the collocation points $\widehat{x}_{0}=\frac{3}{5}+\frac{\sqrt{6}}{10}$, $\widehat{x}_{1}=\frac{3}{5}-\frac{\sqrt{6}}{10}$.
From \textbf{Step 4.}, Newton's iterative method with initial guess $\mathbf{W}_{1,0}=-\widetilde{\mathbf{G}}$ gives \begin{align} \mathbf{W}_{1,1}&=\left[ \begin{array}{c} 0.06109105203159421258866747384762992484681\\ 0.1832731560947826377660024215428897745405 \end{array} \right] ,\nonumber\\ &\vdots\nonumber\\ \mathbf{W}_{1,6}&=\left[ \begin{array}{c} 0.07052369794346953586850993144509657323068\\ 0.2115710938304086076055297943352897196920 \end{array} \right], \end{align} where $\Vert \mathds{F}_{1}(\mathbf{W}_{1,6})\Vert \leq 10^{-40}$. Finally, we obtain the approximate solution \[ y_{1}(x)=\widetilde{g}(x)+\mathbf{W}_{1,6} \mathcal{P} \mathbf{\Phi}(x)=\sqrt{x}+12.5664\times 10^{-40}. \] \end{example} \begin{table}[ht] \centering \caption{The exact and approximate solutions at selected points for $N=4$, $\nu=1/2$ for Example \ref{DW}} \label{pppoo} \begin{tabular}{cc|c} \hline\noalign{\smallskip} x & & Value\\ \noalign{\smallskip}\hline\noalign{\smallskip} 0.1 &App. sol. &0.1316227766016837933199889354443271853503\\ &Exa. sol.&0.1316227766016837933199889354443271853372\\ \hline 0.3 &App. sol.&0.4643167672515498340370909348402406402096\\ &Exa. sol.&0.4643167672515498340370909348402406401858 \\ \hline 0.5&App. sol. & 0.8535533905932737622004221810524245196736\\ &Exa. sol.&0.8535533905932737622004221810524245196424 \\ \hline 0.7 & App. sol. &1.285662018573852883584720418049631242609\\ &Exa. sol.&1.285662018573852883584720418049631242575 \\ \hline 0.9 & App. sol.& 1.753814968245462419639701256996834004134\\ &Exa. sol.&1.753814968245462419639701256996834004104\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{example}\label{DW} Consider the problem \[ \left\{ \begin{array}{ll} D_{*0}^{\frac{1}{2}}y(x)=\displaystyle \frac{2\sqrt{x}}{\sqrt{\pi}}+\frac{3x\sqrt{\pi}}{4}-\frac{9}{10}+\displaystyle\int_{0}^{1}y(t)dt,\\ y(0)=0, \end{array} \right. \] with the non-smooth solution $y(x)=x^{\frac{3}{2}}+x$. The comparison between the approximate and exact solutions for $N=4$ and $\nu=1/2$ is reported in Table \ref{pppoo}, and the corresponding absolute error is plotted in Fig. \ref{AWqq}. Table \ref{pppoo1} compares the absolute errors of our method for $N=4$, $\nu=\frac{1}{2},1$ with those of the Chebyshev wavelet method \cite{Setia} and the Laguerre collocation method \cite{Bayram}. The results show that the approximate solution obtained with the fractional Chelyshkov polynomials is in excellent agreement with the exact solution. \end{example} \begin{figure}[h] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.5\linewidth]{Exx} \includegraphics[width=0.5\linewidth]{Exx2} \end{array}$ \end{center} \caption{ The exact and approximate solutions (left) and the absolute error between them with $\nu=\frac{1}{2}$ (right) for Example \ref{DW}.} \label{AWqq} \end{figure} \begin{table}[h] \centering \caption{The numerical results of the Laguerre collocation method \cite{Bayram}, the Chebyshev wavelet method \cite{Setia} and our method with $N=4$ for Example \ref{DW}} \label{pppoo1} \begin{tabular}{c|c|c|c|c} \hline\noalign{\smallskip} x & Ref. \cite{Bayram} & Ref.
\cite{Setia}&Our method&Our method\\ &with $N=7$ & with $k=M=4$ &with $\nu=1$& with $\nu=1/2$\\ \noalign{\smallskip}\hline\noalign{\smallskip} 0.1 &9.5e-5 &1.67722e-3&1.31000e-4&1.31000e-38\\ 0.3 & 2.4e-4&1.78323e-3&5.55703e-4&2.38000e-38 \\ 0.5 & 6.7e-5&2.04661e-3&8.97348e-4&3.12000e-38 \\ 0.7 &2.2e-4&2.33798e-3&7.88172e-5&3.40000e-38 \\ 0.9 & 2.2e-4&2.58503e-3&5.16739e-4&3.00000e-38\\ \hline CPU-Time&Not reported&Not reported&2.527s&2.449s\\\hline $L^2$-error&3.8e-4&2.11330e-3&2.14371e-3&2.16e-38\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{example}\label{EEE2} Consider the problem \begin{equation}\label{KOK} \left\{ \begin{array}{ll} D_{*0}^{\alpha}y(x)=\displaystyle 1-\frac{x}{4}+\displaystyle\int_{0}^{1}xty^{2}(t)dt, \\ y(0)=0. \end{array} \right. \end{equation} The exact solution for $\alpha=1$ is $y(x)=x$. The approximate solutions for $\alpha=\nu=\frac{1}{4},\frac{1}{2},\frac{3}{4},1$ and $N=8$ are plotted in Fig. \ref{ETT}. As $\alpha\rightarrow 1$, the numerical solutions converge to the solution of problem (\ref{KOK}) with $\alpha=1$. In Table \ref{m34}, we compare the $L^2$-error of the proposed method with those obtained using the CAS wavelet basis \cite{Saeedi}, the Chebyshev wavelet basis \cite{Zhu} and the rationalized Haar functions \cite{Rahimi} in the case $\alpha=1$. \end{example} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{Ex1.eps} \caption{The behavior of the approximate solutions for various values of $\alpha$ along with the exact solution for Example \ref{EEE2}.} \label{ETT} \end{figure} \begin{table} \caption{Comparison of the $L^{2}$-errors of our method and of Refs. \cite{Zhu}, \cite{Saeedi} and \cite{Rahimi} for Example \ref{EEE2}} \label{m34} \begin{tabular}{lllll} \hline\noalign{\smallskip} & Chebyshev wavelets & CAS wavelets & Rationalized Haar & Our method \\ & \cite{Zhu} &\cite{Saeedi} & functions \cite{Rahimi}&($\nu=1$) \\ &(k=5, M=2)& (k=4, M=1) & (N=3, M=8) &(m=6, N=2)\\\hline Number of basis functions& 32& 48&24&2 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $L^{2}$-error & 1.1645e-9 &1.6745e-5&1.3561e-4&2.0525e-40\\\hline CPU-Time&-&-&-&0.202s\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{The $L^{2}$-errors with different bases for Example \ref{Typ}} \label{m3} \begin{tabular}{lllll} \hline\noalign{\smallskip} & Chebyshev wavelets \cite{Zhu} & CAS wavelets \cite{Saeedi} &Alpert wavelets \cite{Hag} \\ & (k=5, M=2)&(k=4, M=1)&(r=4, J=6) \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\Vert E_{N} \Vert_{2}$ & $2.3374e-7$ &$5.3445e-6$&$1.7295e-5$\\\hline CPU-Time&-&-&10.151s\\ \hline\noalign{\smallskip} &Fractional-order &Our method & Our method & \\ &Lagrange polynomials \cite{Kumar} &(m=6, N=4)& (m=6, N=4) & \\ &(n=4,\ $\nu=\frac{1}{2}$)& ($\nu=1$) &($\nu=\frac{1}{2}$) \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\Vert E_{N} \Vert_{2}$&1.08395e-14& 1.5591e-3 &$1.6792e-39$\\\hline CPU-Time&-&0.764s&0.765s\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure}[h] \begin{center}$ \begin{array}{cc} \includegraphics[width=0.5\linewidth]{Ex31} \includegraphics[width=0.5\linewidth]{Ex32} \end{array}$ \end{center} \caption{The exact and approximate solutions (left) and the absolute error between them (right) with $\nu=\frac{1}{2} $ for Example \ref{Typ}.} \label{Exp3} \end{figure} \begin{example}\label{Typ} Consider the following integro-differential equation \[ \left\{ \begin{array}{ll}
D_{*0}^{\frac{1}{2}}y(x)=\displaystyle\frac{1}{\Gamma(1/2)}\left(\frac{8}{3}x^{3/2}-2x^{1/2}\right)-\frac{x}{1260}+\displaystyle\int_{0}^{1}xty^{4}(t)dt,\\ y(0)=0. \end{array} \right. \] The exact solution is $y(x)=x^{2}-x$. Table \ref{m3} displays a comparison between our method and the CAS wavelet \cite{Saeedi}, Chebyshev wavelet \cite{Zhu} and Alpert multi-wavelet \cite{Hag} methods. Fig. \ref{Exp3} shows the good agreement between the exact and approximate solutions. The results show that our method, using a small number of fractional Chelyshkov basis functions, produces more accurate results than the other bases. \end{example} \begin{example}\label{Typp} Consider the problem \[ \left\{ \begin{array}{ll} D_{*0}^{\frac{1}{2}}y(x)=g(x)+\displaystyle\int_{0}^{1}\sin(x+t)y^{2}(t)dt,\\ y(0)=0, \end{array} \right. \] with the non-smooth solution $y(x)= x^{\frac{1}{2}}-\frac{1}{3!}x^{\frac{3}{2}}+\frac{1}{5!}x^{\frac{5}{2}}$. Table \ref{TT11} lists the $L^{2}$-errors obtained by our method for different values of $N$ and $\nu$ with $m=10$; they are also shown in Fig. \ref{ETT44}. The obtained results confirm the efficiency and convergence of the method. A significant improvement in the rate of convergence is obtained using the fractional Chelyshkov basis functions. The semi-log representation in Fig. \ref{ETT44} shows a linear variation of the error with the degree of approximation in the case $\nu=\frac{1}{2}$. This is the so-called exponential convergence (spectral accuracy) of collocation methods, which the proposed method recovers even for problems with non-smooth solutions. The absolute error at some selected points with $N=10$ is reported in Table \ref{pppoou}.
\end{example} \begin{table}[h] \begin{center} \caption{The $L^{2}$-errors for different values of $\nu$ and $N$ for Example \ref{Typp}} \begin{tabular}{c|ccccc} \hline\noalign{\smallskip} N &2&4 &6& 8&10 \\\hline $\nu=1/4$&4.5944e-02&1.4297e-03&3.0806e-05&6.3394e-07&7.9853e-08\\ $\nu=1/2$&9.1157e-03&1.4795e-05&6.4717e-07&1.6866e-10&1.1612e-10\\ $\nu=3/4$&1.3978e-02&3.3550e-03&1.1277e-03&6.1575e-04&3.4911e-04\\ $\nu=1$ &2.1852e-02&6.0291e-03&2.9305e-03&1.5718e-03&8.4281e-04\\\hline N &12 &14& 16&18&20 \\\hline $\nu=1/4$&2.8281e-09&9.0679e-11&1.1469e-11&3.1578e-13&9.2724e-15\\ $\nu=1/2$&3.4954e-13 &9.8229e-15&4.5248e-17&4.6602e-19&2.7650e-21\\ $\nu=3/4$&1.9716e-04 &1.5580e-04&1.0786e-04&7.0349e-05&5.6620e-05\\ $\nu=1$&5.4258e-04 &4.5238e-04&3.7783e-04&2.8438e-04&2.0093e-04\\ \hline \noalign{\smallskip} \end{tabular} \label{TT11} \end{center} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{F1} \caption{The $L^{2}$-error for different values of $N$ with $\nu=1/4$ (dashed lines), $\nu=1/2$ (solid line), $\nu=3/4$ (dash-dotted lines) and $\nu=1$ (dotted lines) for Example \ref{Typp}.} \label{ETT44} \end{figure} \begin{table}[h] \centering \caption{Absolute errors at some selected points with $m=N=10$ for Example \ref{Typp}} \label{pppoou} \begin{tabular}{c|c|c|c|c} \hline\noalign{\smallskip} x & $\nu=1/4$ & $\nu=1/2$& $\nu=3/4$&$\nu=1$\\ \noalign{\smallskip}\hline\noalign{\smallskip} 0.1 &4.2857e-08 &3.1881e-11&4.0770e-4&5.6351e-4\\ 0.3 &5.7263e-09&5.7105e-11&3.1744e-5&6.4303e-4 \\ 0.5 & 6.7919e-08&4.0053e-11&8.3869e-5&4.3161e-4 \\ 0.7 &6.7635e-09&7.2990e-11&1.2539e-5&5.3821e-4 \\ 0.9 &3.4723e-08&7.7616e-11&1.9118e-4&6.8182e-4\\ \hline CPU-Time&8.752s&8.331s&11.325s&4.524s\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \section{Conclusions} In this paper, a new fractional version of the collocation method based on the fractional Chelyshkov polynomials has been introduced to solve a class of nonlinear fractional integro-differential equations. The operational matrix of fractional integration, combined with the spectral collocation method, is utilized to convert the problem into a system of algebraic equations. The proposed method was applied to fractional integro-differential equations with smooth and non-smooth solutions, yielding accurate results. The method is computationally simple, and the approximate solutions converge to the exact solution as the number of basis functions (fractional Chelyshkov polynomials) increases. Numerical examples illustrate that the obtained results are more accurate than those of other existing methods. \section*{Declarations} {\bf{\small{Conflicts of Interest}}} The authors have no conflicts of interest to declare.
{ "timestamp": "2022-09-23T02:12:32", "yymm": "2209", "arxiv_id": "2209.10912", "language": "en", "url": "https://arxiv.org/abs/2209.10912" }
\section{\label{sec:level1_intro}Introduction} In recent years, there has been an increased interest in magnetic films with large perpendicular magnetic anisotropy due to their potential to improve the efficiency and non-volatility of spin transfer torque magnetoresistive random access memories~\cite{Meng.2006,Kishi.2008,Ikeda.2010,Kim.2011,Moinuddin.2020}. In this search for new materials, Fe films interfaced with MgO are of particular interest due to their favourable properties: a small magnetic damping, a high tunneling magnetoresistance and a large perpendicular surface anisotropy (PSA) \cite{Johnson.1995,Johnson.1996, Kishi.2008,Maruyama.2009,Ikeda.2010}. There is good general agreement between experimental and theoretical investigations on the nature and order of magnitude of this PSA, with values ranging between 0.86 and 3.15 $\text{mJ/m}^{2}$ \cite{Ikeda.2010,Maruyama.2009,Shiota.2009,Lambert.2013,Koo.2013,Okabayashi.2014,Shimabukuro.2010,Nakamura.2010,Yang.2011,Hallal.2013, Odkhuu.2016}. These values are up to two times larger than the usual PSA found at interfaces between transition metals and heavy metals \cite{Guo.2006,Johnson.1995,Johnson.1996}, despite the weak spin-orbit coupling \cite{Yang.2011}. Theoretical works have attributed this large PSA to the hybridization between interfacial oxygen and iron atoms \cite{Shimabukuro.2010, Nakamura.2010}. Experimentally, it is very challenging to access the internal magnetic environment of ultrathin films and to separate the contributions of the top and bottom interfaces to the total PSA of a film. So far, experimental characterizations \cite{Ikeda.2010,Yakata.2009,Nistor.2010,Yamanouchi.2011,Maruyama.2009,Shiota.2009,Lambert.2013,Koo.2013,Okabayashi.2014} have relied on magnetometry measurements to estimate the surface magnetic anisotropy. This means that additional hypotheses were needed to extract the individual surface anisotropies, including comparison with a reference interface or assumptions regarding possible bulk magnetoelastic contributions. In this work, we separate the top and bottom perpendicular surface anisotropies of single-crystalline MgO$\text{/}$Fe$\text{/}$MgO films resorting exclusively to spectroscopic measurements of inhomogeneous magnetization dynamics. To achieve this, we perform a broadband ferromagnetic resonance characterization of a thickness series of MgO$\text{/}$Fe$\text{(}t\text{)/}$MgO films ($t\!=\!10\text{-}30$~nm) and combine it with a careful study of non-reciprocal spin-wave propagation. Our results show that the films of the entire series behave as a quasi-bulk film interior bounded by two Fe$\text{/}$MgO interfaces that are not magnetically equivalent. We then deduce distinct top and bottom surface anisotropies that are in good agreement with theoretical calculations for ultra-thin films \cite{Shimabukuro.2010,Nakamura.2010,Yang.2011,Hallal.2013,Odkhuu.2016}. This work not only presents a new characterization methodology, but also provides evidence that the large PSA of the technologically relevant ultra-thin films persists in the thicker films traditionally used in materials science. \section{\label{sec:level2_FMR}Broadband ferromagnetic resonance} \subsection{Film growth} The studied films were grown by molecular beam epitaxy on commercial MgO(001) substrates and consist of the following stacks: substrate/MgO(20)/Fe($t$)/MgO(8)/Ti(4.5) (thicknesses in nm). The MgO buffer film was deposited on top of a polished MgO surface at 550 $\text{\textdegree}$C.
The Fe film, with thickness $t$ = 10, 15, 20, 25, 30 nm, was subsequently grown at 100 $\text{\textdegree}$C (the stair-step structure was obtained with a movable shutter). Finally, the sample was annealed at 480 $\text{\textdegree}$C and capped with the MgO and Ti layers, both grown at room temperature. The epitaxial relationship between Fe and MgO is such that the [010] and [100] in-plane directions (magnetic easy axes of the Fe film) are rotated by 45$\text{\textdegree}$ with respect to those of the MgO films (aligned with the edges of the substrate). The crystalline quality of the samples was confirmed in-situ by RHEED. After growth, an X-ray diffraction study revealed a slight tetragonal distortion of the Fe lattice with respect to the bulk, more precisely a 0.5$\%$ out-of-plane compression accompanied by a 0.7$\%$ in-plane expansion \cite{Magnifouet.2020}. \subsection{Vector Network Analyzer - Ferromagnetic Resonance} The dynamic magnetic properties of the films are characterized by Vector Network Analyzer - Ferromagnetic Resonance. The sample (a $1.8\!\times\!1.8~\text{mm}^2$ piece cut from a film) lies on a 50 $\Omega$ channelized coplanar waveguide (CPW) \cite{R.N.Simons.1989} with a 300 $\mu$m center line separated from the lateral ground planes by 100 $\mu$m gaps (see inset in Fig.~\ref{fig:first}). The 50 $\mu$m thick copper/gold top metallization rests on a 127 $\mu$m thick PTFE$\text{/}$glass Rogers RT5880 substrate backed with a very thick copper layer. The CPW's top and bottom grounds are connected through rows of vias parallel to the center line in order to ensure single-mode propagation in the entire 0-50 GHz frequency range. The part of the waveguide on which the sample is placed has a tapered center line (width 200 $\mu$m, thickness 30 $\mu$m) that compensates for the impedance change caused by the presence of the conductive film on top of the CPW \cite{Bailleul.2013}. \begin{figure}[ht] \includegraphics[width=86mm]{resonancePeaks_InPlane_40GHz_v9}% \caption{\label{fig:first} Ferromagnetic resonance spectrum measured at 40 GHz for the $t$=20 nm film. The main panel shows the real and imaginary components of the change of effective permeability as a function of the in-plane applied magnetic field (along the [100] direction of Fe). One distinguishes both a main peak (uniform resonance, n=0) and a satellite one (perpendicular standing spin wave, n=1). The inset contains a photograph of the CPW loaded with a sample at its center (top) and a cross-section sketch showing the tapered CPW, the magnetic film, and the thickness profiles of the dynamic magnetization for the two resonance modes (bottom, not to scale).} \end{figure} To perform the magnetic measurements, the CPW and sample are inserted in the gap of an electromagnet and connected to a 2-port vector network analyzer via 2.4 mm connectors and coaxial microwave cables. The analyzer can excite and measure the microwave response of the CPW: microwave reflection on each port and transmission between the two ports. The excitation of the ferromagnetic sample results in a modification of the waveguide's impedance. After a suitable calibration and deembedding procedure \cite{bilzer:tel-00202827}, we extract the magnetic field-induced change of the effective permeability $\Delta\mu_{\text{eff}}$ of the waveguide loaded with the ferromagnetic sample. This magnetic response exhibits clear resonances when the microwave frequency matches the field-dependent magnetization precession frequency.
Figure~\ref{fig:first} shows the ferromagnetic resonance spectrum measured on a 20~nm thick Fe film at a microwave frequency of 40~GHz with an external field $\mathbf{H}$ applied in-plane, along the [100] magnetic easy axis (see Fig.~\ref{fig:first} inset). One recognizes an intense peak centered at 634~mT, which we attribute to the uniform resonance mode ($n$=0 in the inset of Fig.~\ref{fig:first}). A satellite peak centered at 112~mT is also observed: we attribute it to the first perpendicular standing spin wave mode, corresponding to an inhomogeneous precession across the film thickness, with opposite phases at the two surfaces and zero amplitude at the center ($n$=1 in the inset of Fig.~\ref{fig:first}). The observation of this satellite peak might be surprising at first glance since the microwave magnetic field produced by the coplanar waveguide is expected to be homogeneous over the thickness of the magnetic film, thus preventing the excitation of inhomogeneous modes. However, the fact that the ferromagnetic film is conductive leads to the occurrence of electromagnetic shielding. This effect is characterized by the creation of electrical currents in the metallic film which tend to confine the electromagnetic field within the space between the waveguide and the sample \cite{Bailleul.2013}. This results in a very inhomogeneous microwave magnetic field distribution across the ferromagnetic film thickness, leading to the excitation of non-uniform magnetization precession modes \cite{Kennewell.2010}. Similarly, ferromagnetic resonance spectra are recorded for the various thicknesses of Fe with frequency in the range 1.4-50 GHz and external field (up to 2.7~T) applied either in-plane, along the [100] direction of Fe, or out-of-plane (along [001]). Each resonance spectrum is fitted with a complex Lorentzian function. From these fits we extract the resonance fields of the two modes in the two field configurations (Fig.~\ref{fig:FMRfvsH}). From the fit of the resonance peak of mode $n$=0 in the out-of-plane configuration, we also extract the linewidth and, from its frequency dependence, we estimate the damping factor $\alpha$=2.6$\times10^{-3}$. It must be noted that mode $n$=1 could not be observed for $t$=10~nm and 15~nm with in-plane applied field, nor for $t$=10~nm with out-of-plane field, because the corresponding resonance frequencies lie beyond the 50 GHz experimental limit. \begin{figure*} \includegraphics[width=120mm]{FMR_fvsH_v7}% \caption{\label{fig:FMRfvsH} Measured frequency vs. resonance magnetic field of both the uniform mode (n=0) and the first perpendicular standing spin wave mode (n=1) for the entire thickness series. In the case of in-plane applied field, squares (triangles) correspond to mode n=0 (n=1). In the case of out-of-plane applied field, circles (stars) correspond to mode n=0 (n=1). The lines are the corresponding Kittel fits [see Eqs. (\ref{eq:p1})].} \end{figure*} \subsection{\label{sec:level2_Model}Theoretical model} To interpret the resonance data of Fig.~\ref{fig:FMRfvsH}, we use the so-called Kittel formulas \cite{Kittel.1948}. These simple expressions are known to be exact in the case of a uniform resonance mode in a high-symmetry configuration (equilibrium magnetization parallel or perpendicular to the anisotropy axes). This section shows how they can be extended to the case of inhomogeneous dynamics ($n\!=\!1$) in films with moderate thickness and sizable surface anisotropies.
Our starting point is the linearized Landau-Lifshitz equation for plane spin-waves: \begin{equation} \label{eq:LL} i \omega \textbf{m} = \gamma \mu_0( \textbf{H}_{\text{eq}} \times \textbf{m} -\textbf{M}_{\text{eq}} \times \textbf{h}). \end{equation} Here, $\omega$ is the angular frequency, $\gamma$ is the gyromagnetic ratio and $\mu_0$ is the permeability of vacuum. $\textbf{M}_{\text{eq}}$ and $\textbf{m}$ are the static and dynamic components of the magnetization, respectively. Similarly, $\textbf{H}_{\text{eq}}$ and $\textbf{h}$ are the static and dynamic parts of the effective magnetic field, respectively. The effective field derives from the total magnetic energy \cite{Hubert.2014}, which in the present case contains five contributions: i) the exchange and ii) demagnetizing contributions present in any ferromagnet, iii) the cubic volume anisotropy known to exist in iron, iv) perpendicular surface anisotropies at the two Fe/MgO interfaces, and v) an additional volume anisotropy with uniaxial symmetry and perpendicular-to-plane axis which, we argue, is created by strain through magnetoelastic coupling (see discussion section). When the external magnetic field $\mathbf{H}$ is applied along the easy axes of the Fe films, the static effective field reads \begin{equation} \label{eq:StaticField} H_{\text{eq}}(\xi)=\begin{cases} H & \text{if } \mathbf{H} \parallel [100]_{\text{Fe}}, \\ H+H_{\text{U}}+h_{\text{S}}(\xi)-M_{\text{s}} & \text{if } \mathbf{H} \parallel [001]_{\text{Fe}}, \end{cases} \end{equation} \noindent while the dynamic effective field reads \begin{equation} \label{eq:Field} \textbf{h}= \frac{1}{M_\text{s}} \bigg\{ \left[ \frac{2A}{\mu_0 M_\text{s}}\nabla^2 + H_{\text{K}} \right]\textbf{m} + \left[ H_\text{U}+h_\text{S}(\xi)-M_\text{s} \right]m_{\xi}\boldsymbol{\hat{\xi}} \bigg\}. \end{equation} Here, $M_\text{s}$ is the saturation magnetization, $A$ is the exchange stiffness constant, $\boldsymbol{\hat{\xi}}$ is a unit vector along the direction perpendicular to the film and $m_{\xi}$ is the dynamic magnetization component along this direction. Additionally, $H_{\text{K}}\!=\!\frac{2K_1}{\mu_0M_\text{s}}$, where $K_1$ is the volume cubic anisotropy constant, and $H_{\text{U}}\!=\!\frac{2K_\text{U}}{\mu_0M_\text{s}}$, where $K_\text{U}$ is the volume uniaxial magnetoelastic anisotropy constant. Finally, the field $h_\text{S}(\xi)=\frac{2}{\mu_0 M_\text{s}}[K^{\text{bot}}_\text{S} \delta(\xi)+K^{\text{top}}_\text{S}\delta(\xi-t)]$ models the perpendicular surface anisotropies, with constants $K_\text{S}^{\text{bot}}$ and $K_\text{S}^{\text{top}}$ at the bottom ($\xi\!=\!0$) and top ($\xi\!=\!t$) interfaces, respectively \cite{Gladii.2016p}. Note that $m_{\xi}$=0 when a saturating magnetic field is applied perpendicular to the plane of the film ($\textbf{M}_\text{eq} \parallel \boldsymbol{\hat{\xi}}$), making Eq.~(\ref{eq:Field}) valid for the two experimental configurations considered in Eq.~(\ref{eq:StaticField}). The system of equations (\ref{eq:LL}-\ref{eq:Field}) is effectively a fourth-order differential equation for the dynamic magnetization with mixed-type boundary conditions (so-called surface pinning), which does not have an exact analytical solution \cite{Gurevich.1990}. However, it is possible to obtain approximate solutions in some limiting cases.
In particular, if the exchange energy $A/t$ is much larger than the surface anisotropy $K_\text{S}$, we can expand the dynamic magnetization in a Fourier series of cosine thickness modes (unpinned standing spin wave modes) and, in the spirit of the Kalinikos-Slavin theory of dipole-exchange spin-waves \cite{Kalinikos.1986,Kalinikos.1990}, limit ourselves to the first two terms of the series. Then we may write the complex amplitude of the dynamic magnetization $\textbf{m}^{*}(\xi)$ as \begin{equation} \label{eq:Dynamicmag} \begin{split} \textbf{m}^*(\xi)=& \quad\: [m^0_{x}\Phi_0+m^1_{x}\Phi_1(\xi)] \: \boldsymbol{\hat{x}} \\ & + [m^0_{y}\Phi_0+m^1_{y}\Phi_1(\xi)] \: \boldsymbol{\hat{y}}, \end{split} \end{equation} \noindent with $x$ and $y$ denoting two directions orthogonal to the static magnetization $\textbf{M}_{\text{eq}}$. $\Phi_0\!=\!1/\sqrt{t}$ is the lowest-order term in the Fourier series, corresponding to a normalized uniform distribution, and $\Phi_1(\xi)\!=\!\sqrt{2/t}\,\cos(\pi\xi/t)$ is the second term, corresponding to a normalized non-uniform distribution with a thickness profile that is antisymmetric with respect to the center of the film (see the sketch in the inset of Fig.~\ref{fig:first}). In the basis of the four orthogonal vector modes $\left[\Phi_0\mathbf{\hat{x}},\;\Phi_1(\xi)\mathbf{\hat{x}},\; \Phi_0\mathbf{\hat{y}},\; \Phi_1(\xi)\mathbf{\hat{y}}\right]$, the complex amplitude of the dynamic magnetization can be conveniently expressed as $\textbf{m}^*$=$(m^0_{x},m^1_{x},m^0_{y},m^1_{y})$. After substituting Eqs.~(\ref{eq:Field}) and (\ref{eq:Dynamicmag}) in Eq.~(\ref{eq:LL}), one can project the system of equations (\ref{eq:LL}-\ref{eq:Field}) on this new four-mode basis (see Ref.~\onlinecite{Solano.2017} for a detailed treatment of the projection) and obtain a simplified eigenvalue equation of the form $i\omega\mathbf{m}^*=C\mathbf{m}^*$, where $C$ is the 4$\times$4 dynamic matrix \cite{Gladii.2016p}. The eigenvalues of this matrix are the resonance frequencies. By replacing the eigenvectors of matrix $C$ back in Eq.~(\ref{eq:Dynamicmag}) one may recover the actual oscillation modes of the magnetization. The dynamic matrix in the presence of surface anisotropies is given in Appendix~\ref{sec:App1} for the case of a spin wave with wavevector $\textbf{k}$ parallel to the $[010]_{\text{Fe}}$ axis. It is important to note that this matrix depends explicitly not only on the total surface anisotropy $K_{\text{S}}\!=\!K_{\text{S}}^{\text{bot}}\!+\!K_{\text{S}}^{\text{top}}$ but also on the difference of surface anisotropies at the two interfaces $\Delta K_{\text{S}}\!=\!K_{\text{S}}^{\text{bot}}\!-\!K_{\text{S}}^{\text{top}}$~\cite{Gladii.2016p}. Let us now concentrate on the case of ferromagnetic resonance ($k\!=\!0$). Up to first order in $\Delta K_{\text{S}}$, our approach produces Kittel-like \cite{Kittel.1948} expressions for the resonance frequencies of the first two standing spin wave modes: \begin{subequations} \label{eq:p1} \begin{eqnarray} f_{\parallel n}=\frac{\gamma\mu_0}{2\pi} [(H+H_{Xn})(H+H_{Yn})]^{1/2},\label{subeq:1} \end{eqnarray} \begin{equation} f_{\perp n}=\frac{\gamma\mu_0}{2\pi} (H-H_{Zn}).\label{subeq:2} \end{equation} \end{subequations} Here $n\!=\!0,1$ is the mode index, and $\parallel$ and $\perp$ refer to the configurations with $\mathbf{H} \parallel [100]_{\text{Fe}}$ and $\mathbf{H} \parallel [001]_{\text{Fe}}$, respectively.
$H_{\alpha n}$ are orientation- and mode-dependent stiffness fields whose expressions are given in Table~\ref{tab:table1}, \begin{table}[ht] \caption{\label{tab:table1}% Explicit expressions of the stiffness fields appearing in the resonance frequencies of Eqs.~(\ref{eq:p1}). } \begin{ruledtabular} \begin{tabular}{lcc} \textrm{Field} & \textrm{$n=0$}& \textrm{$n=1$}\\ \colrule $H_{Xn}$ & $H_{\text{K}}$ & $H_{\text{K}}+H_{\text{E}}$\\ $H_{Yn}$ & $M_{\text{s}}+H_{\text{K}}-H_{\text{U}}-H_{\text{S}}$ & $M_{\text{s}}+H_{\text{K}}-H_{\text{U}}-2H_{\text{S}}+H_{\text{E}}$\\ $H_{Zn}$ & $M_{\text{s}}-H_{\text{K}}-H_{\text{U}}-H_{\text{S}}$ & $M_{\text{s}}-H_{\text{K}}-H_{\text{U}}-2H_{\text{S}}-H_{\text{E}}$\\ \end{tabular} \end{ruledtabular} \end{table} where the following thickness-dependent exchange and surface anisotropy fields have been used: \begin{subequations} \label{eq:HeHs} \begin{eqnarray} H_{\text{E}}=\frac{2\pi^2A}{\mu_0M_\text{s}t^2},\label{subeq:5} \end{eqnarray} \begin{equation} H_{\text{S}}=\frac{2 K_\text{S}}{\mu_0M_\text{s}t}.\label{subeq:6} \end{equation} \end{subequations} At the chosen level of approximation, which is valid only for small differences in surface anisotropies ($\Delta K_{\text{S}}\: t/A \ll 1$), the resonance frequencies (Eqs.~\ref{eq:p1}) depend on $K_{\text{S}}$ but not on $\Delta K_{\text{S}}$, although the $C$ matrix depends explicitly on it. This may be explained as follows. With $k$=0, the mutual demagnetization factor $Q$ vanishes (see Appendix~\ref{sec:App1}) and so does the largest source of hybridization between the uniform and antisymmetric basis modes. Only terms in $\Delta K_{\text{S}}$ remain nonzero in the off-diagonal blocks of the $C$ matrix (Eq.~\ref{eq:Cmatrix}), meaning that the difference in surface anisotropies becomes the sole source of hybridization. The resulting frequency correction is proportional to $\Delta K_{\text{S}}^2$ and is thus neglected in the above approximation. We note also that the contribution of surface anisotropies to the stiffness fields is doubled for mode $n$=1 as compared to mode $n$=0. This is a direct consequence of mode $n$=0 being uniform at this level of approximation while mode $n$=1 is fully antisymmetric with large amplitude at the surfaces, which makes it more sensitive to PSA. \subsection{\label{sec:results}Results} Fitting the experimental data in Fig.~\ref{fig:FMRfvsH} to the corresponding Eqs.~(\ref{eq:p1}) yields the values of the stiffness fields $H_{Xn}$, $H_{Yn}$, and $H_{Zn}$ $(n\!=\!0,1)$ presented in Fig.~\ref{fig:HiFields} as open circles.
For this extraction, we assume a single value of $\gamma$ for the entire series (see Table~\ref{tab:table2}), \begin{table}[b] \caption{\label{tab:table2}Magnetic parameters obtained for MgO/Fe($t$)/MgO films ($t\!=\!10\!-\!30\text{nm}$).} \begin{ruledtabular} \begin{center} \begin{tabular}{cccc} $\mu_0 M_{\text{s}}$ [T] & $\gamma/2\pi$ [GHz/T] & $A$ [pJ/m] & $K_1$ [kJ/$\text{m}^3$] \\ \colrule 2.15\footnote{Tabulated value for bulk iron at room temperature} & $29.1\pm0.6$ & $19.4\pm0.1$ & $52\pm1$\\ \hline\\[-1em] \hline\\[-1em] $K_\text{U}$ [kJ/$\text{m}^3$] & $K_\text{S}$ [mJ/$\text{m}^2$] &\multicolumn{2}{c}{${\Delta K_{\text{S}}}$ [mJ/$\text{m}^2$] }\\ \colrule $-45\pm28$ & $2.3\pm0.3$ & \multicolumn{2}{c}{$0.8\footnote{Value obtained from non-reciprocal spin wave measurements (see Section~\ref{sec:level3_PSWS})}\pm0.1$}\\ \end{tabular} \end{center} \end{ruledtabular} \end{table} which is the average over all film thicknesses of the individual $\gamma$ values obtained by fitting the $f_{\perp 0}(H)$ experimental data to Eq.~(\ref{subeq:2}). \begin{figure}[ht] \includegraphics[width=86mm]{HxHyHz_v13}% \caption{\label{fig:HiFields} a) Stiffness fields as a function of Fe thickness obtained from Kittel-like fits of the experimental resonance frequencies (open circles), compared with the analytical model of Table~\ref{tab:table1} (solid lines) and the values obtained from Kittel-like fits of SWIIM simulations (dotted lines). b) Stiffness field $H_{X1}$ as a function of $1/t^2$. c) Stiffness fields $H_{Y0}$ and $H_{Z0}$ as a function of $1/t$. } \end{figure} The values of the stiffness fields associated with the two oscillation modes $(n\!=\!0,1)$ can be readily treated and combined sequentially so as to extract most of the magnetic parameters of the iron films. As a starting point, and in agreement with SQUID characterization, the saturation magnetization value is set to that of bulk iron, $\mu_0 M_{\text{s}}\!=\!2.15$~T. Next, we observe that $H_{X0}$ is thickness-independent and we extract the cubic anisotropy constant $K_1$ from $H_{X0}\!=\!H_{\text{K}}$. Then, since $H_{X1}\!-\!H_{X0}\!=\!H_{\text{E}}$ varies as $t^{-2}$ [Fig.~\ref{fig:HiFields}(b)], we confidently extract a thickness-independent exchange constant $A$ (Eq.~\ref{subeq:5}). We subsequently notice that the thickness-dependent parts of $H_{Y0}$ and $H_{Z0}$ both vary as $t^{-1}$, with similar slopes [Fig.~\ref{fig:HiFields}(c)], from which we determine the average total surface anisotropy constant $K_{\text{S}}$ (Eq.~\ref{subeq:6}). Finally, we determine the uniaxial anisotropy constant $K_{\text{U}}$ using $2(M_{\text{s}}\!-\!H_{\text{S}})\!-\!H_{Y0}\!-\!H_{Z0}\!=\!2H_{\text{U}}$. The obtained magnetic parameters are summarized in Table~\ref{tab:table2}. In the chosen parametrization, the negative sign of $K_{\text{U}}$ indicates an easy plane parallel to the film's plane and the positive sign of $K_{\text{S}}$ an easy axis along the film normal. The lines in Fig.~\ref{fig:HiFields} are the theoretical stiffness fields calculated by injecting the parameters just determined (Table~\ref{tab:table2}) back into our analytical model. The good agreement obtained illustrates that a single set of thickness-independent magnetic parameters is indeed enough to capture most features of the magnetization dynamics in the studied MgO/Fe/MgO films. Furthermore, the fact that the agreement also applies to those stiffness fields which have not been considered in the above analysis ($H_{Y1}$ and $H_{Z1}$) can be considered as a validation of the model.
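To make the sequential extraction explicit, the following minimal sketch (assuming NumPy; variable names are ours, and the stiffness fields are generated synthetically from the parameters of Table~\ref{tab:table2} rather than taken from the measured data) inverts the relations of Table~\ref{tab:table1} step by step, exactly as done above on the experimental fields of Fig.~\ref{fig:HiFields}.
\begin{verbatim}
# Sketch: sequential inversion of the stiffness fields (Table I) into
# material parameters, demonstrated on synthetic data built from Table II.
import numpy as np

mu0 = 4e-7*np.pi
Ms = 2.15/mu0                                   # A/m, bulk Fe
A, K1, KU, KS = 19.4e-12, 52e3, -45e3, 2.3e-3   # Table II values (SI)
t = np.array([10, 15, 20, 25, 30])*1e-9         # film thicknesses (m)

HK = 2*K1/(mu0*Ms)
HU = 2*KU/(mu0*Ms)
HE = 2*np.pi**2*A/(mu0*Ms*t**2)                 # exchange field H_E
HS = 2*KS/(mu0*Ms*t)                            # surface anisotropy field H_S

HX0 = np.full_like(t, HK)                       # Table I, n = 0
HX1 = HK + HE                                   # Table I, n = 1
HY0 = Ms + HK - HU - HS
HZ0 = Ms - HK - HU - HS

K1_fit = mu0*Ms*HX0.mean()/2                    # K1 from H_X0 = H_K
A_fit = mu0*Ms*np.polyfit(1/t**2, HX1 - HX0, 1)[0]/(2*np.pi**2)
# (H_Y0 + H_Z0)/2 = M_s - H_U - H_S: K_S from the 1/t slope,
# K_U from the intercept M_s - H_U
slope, intercept = np.polyfit(1/t, (HY0 + HZ0)/2, 1)
KS_fit = -mu0*Ms*slope/2
KU_fit = mu0*Ms*(Ms - intercept)/2
print(K1_fit, A_fit, KS_fit, KU_fit)            # recovers the Table II inputs
\end{verbatim}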
There are, however, points of slight disagreement between the experimentally determined values and the corresponding analytical predictions. In particular, the predicted values of the difference $H_{Y0}\!-\!H_{Z0}$ are significantly larger than the ones determined experimentally. As may be inferred from Appendix~\ref{sec:App3}, modifying the analytical model (Eqs.~\ref{eq:p1}) to include terms up to second order in $\Delta K_{\text{S}}$ allows one to partially reduce the disagreement. This second-order approximation, however, produces cumbersome expressions which are impractical and, even more importantly, unable to provide information regarding the sign of $\Delta K_{\text{S}}$. This points to the need for an accurate determination of this additional parameter through an experimental technique which is sensitive to it at first order, namely spin wave frequency non-reciprocity. \section{\label{sec:level3_PSWS}Non-reciprocal spin wave propagation} Propagating spin wave spectroscopy has been shown to be very sensitive to magnetic asymmetries across the thickness of thin films \cite{Gladii.2016p}, including differences in surface anisotropies at the two interfaces ($\Delta K_{\text{S}}\!\neq\!0$). The principle of such a measurement is sketched in the inset of Fig.~\ref{fig:dLs}. A spin wave propagating in the so-called Damon-Eshbach geometry (i.e., with its wave vector $\mathbf{k}$ oriented perpendicular to the in-plane applied magnetic field $\mathbf{H}$) is known to exhibit a mode-profile non-reciprocity. This means that the wave has an asymmetric amplitude distribution across the film thickness, with more amplitude on one side of the film than on the other. This asymmetric profile is reversed when changing the sign of $\mathbf{k}$, i.e., for spin waves propagating in the opposite direction~\cite{Kostylev.2013} (see inset of Fig.~\ref{fig:dLs}). Consequently, an inhomogeneous magnetic environment across the thickness will affect the dynamics of counter-propagating spin waves differently. This can be measured experimentally as a difference between their resonance frequencies, which becomes a spectroscopic signature of the film's asymmetric magnetic environment~\cite{Gladii.2016p}. \begin{figure}[ht] \includegraphics[width=86mm]{MutualInductance_v12}% \caption{\label{fig:dLs} Measured imaginary component of the mutual inductance between antennas due to counter-propagating spin waves (with wave numbers $k_1$=3.9 $\text{rad/}\mu \text{m}$ and $k_2$=1.57 $\text{rad/}\mu \text{m}$) in a strip of $t$=20 nm Fe film under a 120 mT magnetic field (Damon-Eshbach geometry). The inset shows an electron microscope image of the experimental device (top) and a sketch of the asymmetric modal profiles of counter-propagating waves across the thickness of the film and their interplay with asymmetric magnetic surface anisotropies (bottom).} \end{figure} To measure this frequency non-reciprocity, suitable devices are fabricated from the Fe samples. This is achieved by patterning the film into a strip geometry and fabricating a pair of conductors on top. These conductors, with a meander geometry, serve as antennas for exciting and detecting spin waves of controlled wavelength. With the design chosen in the present work (see inset in Fig.~\ref{fig:dLs}), the strongest excitation occurs around two particular wave vectors, $k_1$=3.9 $\text{rad/}\mu \text{m}$ and $k_2$=1.57 $\text{rad/}\mu \text{m}$.
By using two separate antennas and interchanging their roles, it is possible to measure the changes in mutual inductance, $\Delta L_{k>0}$ and $\Delta L_{k<0}$, corresponding to spin waves propagating with positive and negative wave vectors \cite{V.Vlaminck.2010}. More details on the fabrication process of the devices and on the experimental procedure can be found elsewhere \cite{Gladii.2017}. Fig.~\ref{fig:dLs} shows the measured change in mutual inductance corresponding to spin wave propagation in a $t$=20~nm device under a 120~mT field applied along [100]. One can directly observe a frequency difference between counter-propagating waves, both for the main spin-wave excitation [$f_{\text{NR}}(k_1)$] and for the secondary one [$f_{\text{NR}}(k_2)$]. The value of this frequency non-reciprocity is tracked as a function of the applied magnetic field in the range 30-200 mT. The different symbols in Fig.~\ref{fig:NRvsH} show the frequency non-reciprocity measured for three different samples: namely, devices from the FMR series with $t\!=\!10$~nm and 20~nm, and a third device with $t\!=\!20$~nm but without Ti capping, labelled hereafter 20~nm*. We observe that $f_{\text{NR}}$ is roughly field independent. \begin{figure}[ht] \includegraphics[width=86mm]{NRvsH_v9}% \caption{\label{fig:NRvsH} Frequency non-reciprocity measurements as a function of applied magnetic field for the devices with $t$=10 nm, 20 nm and 20 nm*, at each of the wave numbers $k_1$=3.9 $\text{rad/}\mu \text{m}$ and $k_2$=1.57 $\text{rad/}\mu \text{m}$. The lines correspond to the average value of $f_{\text{NR}}$ over the experimental field range.} \end{figure} To quantitatively analyze the data in Fig.~\ref{fig:NRvsH} and estimate the difference in magnetic surface anisotropy $\Delta K_{\text{S}}$, we use the theory developed in the previous section with some modifications. To account for the propagating character of the spin waves, the space-dependent part of the dynamic component of the magnetization [Eq. (\ref{eq:Dynamicmag})] becomes $\textbf{m}^*(\xi)e^{-i k\eta}$, where $\hat{\eta}$ is the direction vector along the ferromagnetic strip. The dynamic magnetic field now reads \begin{equation} \label{eq:PropDynH} \begin{split} \textbf{h}=& \frac{1}{M_{\text{s}}} \bigg\{ \left[ \frac{2A}{\mu_0 M_{\text{s}}}\nabla^2 + H_{\text{K}} \right]\textbf{m} + \left[ H_{\text{U}}+h_{\text{s}}(\xi) \right]m_{\xi}\boldsymbol{\hat{\xi}} \bigg\}\\ &+ \int_{0}^{t} G_k(\xi-\xi')\textbf{m}\,d\xi', \end{split} \end{equation} \noindent where the last term is the dipolar contribution and $G_k$ the magnetostatic Green's tensor \cite{GUSLIENKO.2011}. As in the previous section, the system of Eqs. (\ref{eq:LL},\ref{eq:PropDynH}) can be projected onto the spin-wave mode basis, which allows one to obtain an eigenvalue equation with a dynamical matrix $C$ given explicitly in Appendix~\ref{sec:App1} (note that in this case $k\!\neq\!0$). In the Damon-Eshbach configuration, the non-uniform character of the magnetization along $\hat{\eta}$ gives rise to dipolar coupling between the Fourier components $n\!=\!0$ and $n\!=\!1$ [through the factor $Q\!\neq\!0$ in the non-diagonal blocks of the dynamic matrix $C$, see Eq. (\ref{subeq:3})], which is non-reciprocal with respect to the wave number. This, combined with the asymmetry produced by $\Delta K_{\text{S}}$, is at the very origin of the frequency non-reciprocity. Once again we can calculate the resonance frequencies from the dynamic matrix $C$ and, in this case, obtain the frequency difference between counter-propagating waves.
Assuming $\Delta K_{\text{S}}t/A\!\ll\!1$, an approximate expression can be derived in which the frequency non-reciprocity of mode $n\!=\!0$ is a linear function of both the wave number $k$ and the difference in surface anisotropy $\Delta K_{\text{S}}$ \cite{Gladii.2016p}: \begin{equation} \label{eq:LnNonReci} \begin{split} f_{\text{NR}} &= f_{\parallel 0}(k<0)-f_{\parallel 0}(k>0) \\ &\approx\frac{8\gamma\:\Delta K_{\text{S}}}{\pi^3M_{\text{s}}(1+\frac{\Lambda^2\pi^2}{t^2})}\:k. \end{split} \end{equation} Since $f_{\text{NR}}$ shows no systematic variation with the external magnetic field (Fig.~\ref{fig:NRvsH}), we consider below its average value over the 30-200 mT range and plot it in Fig.~\ref{fig:NRvsk} as a function of $k$ for each of the studied films. Applying Eq. (\ref{eq:LnNonReci}) and using the values of $\gamma$, $M_{\text{s}}$, and $A$ found in Table~\ref{tab:table2}, we finally extract estimates for $\Delta K_{\text{S}}$ from the slopes of linear fits: 0.8, 1.1, and 1.6 $\text{mJ/m}^{2}$ for the 10 nm, 20 nm and 20~nm* films, respectively. \begin{figure}[ht] \includegraphics[width=86mm]{f_NR_v6} \caption{\label{fig:NRvsk} Frequency non-reciprocity as a function of wave number for the three films under study. The points represent the average experimental values in the applied field range 30-200 mT (see Figure~\ref{fig:NRvsH}). The solid lines are the corresponding linear fits to Eq. (\ref{eq:LnNonReci}). The dashed lines show the mean value as calculated with SWIIM while using the values for $\Delta K_{S}$ that best adjust the experimental points (see text for details).} \end{figure} These results confirm the asymmetry of the two film interfaces suggested by the FMR measurements. They show that $\Delta K_{\text{S}}$ is undoubtedly positive, which means that, in all films, the bottom interface has a stronger PSA than the top one. Since the above characterization is based on several approximations [Eqs.~(\ref{eq:Dynamicmag}, \ref{eq:LnNonReci})], the magnitude of $\Delta K_{\text{S}}$ should, however, be refined before comparing and contrasting the films and their respective interfaces. To this aim, we finally turn to numerical simulations. We resort to the code SWIIM \cite{henry2016propagating}, which provides a finite-difference numerical solution of Eqs.~(\ref{eq:LL},\ref{eq:PropDynH}), to calculate the difference between the frequencies of counter-propagating waves as a function of wave number $k$ in the Damon-Eshbach configuration. For each sample, we adjust $\Delta K_{\text{S}}$ (the remaining magnetic parameters are taken from Table~\ref{tab:table2}) so as to best reproduce the experimental $f_{\text{NR}}(k)$ data in Fig.~\ref{fig:NRvsk} (dashed lines). The values of $\Delta K_{\text{S}}$ obtained in this way are 0.7, 0.8, and 1.2 $\text{mJ/m}^{2}$ for the 10~nm, 20~nm and 20~nm* films, respectively. Comparing them with the values obtained from Eq.~(\ref{eq:LnNonReci}), we observe that the analytical approach systematically underestimates the effect of $\Delta K_{\text{S}}\!\neq\!0$. Noticeably, using numerical simulations allows us to reduce the difference between the values of $\Delta K_{\text{S}}$ for the 10~nm and 20~nm samples to almost nothing, as expected for films of similar composition. We then choose the value $\Delta K_{\text{S}}\!=\!0.8$~mJ/m$^{2}$ as a reference for our MgO/Fe/MgO system.
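In practice, Eq.~(\ref{eq:LnNonReci}) is simply inverted to obtain $\Delta K_{\text{S}}$ from the slope of $f_{\text{NR}}(k)$. A minimal Python sketch of this inversion is given below; the unit conventions ($\gamma$ in rad\,s$^{-1}$\,T$^{-1}$, SI units elsewhere) and the exchange-length definition $\Lambda^2=2A/(\mu_0 M_{\text{s}}^2)$ are our assumptions, and the input frequencies are hypothetical values chosen to reproduce $\Delta K_{\text{S}}\approx 1.1$~mJ/m$^2$ for the 20~nm film:
\begin{verbatim}
import numpy as np

mu0 = 4e-7 * np.pi
mu0_Ms = 2.15
Ms = mu0_Ms / mu0                  # [A/m]
gamma = 2 * np.pi * 29.1e9         # [rad s^-1 T^-1] (assumed convention)
A = 19.4e-12                       # [J/m]
Lam2 = 2 * A / (mu0 * Ms**2)       # squared exchange length [m^2]

def dKs_from_slope(slope, t):
    """Invert Eq. (f_NR): slope of f_NR(k) in [Hz m], thickness t in [m]."""
    return slope * np.pi**3 * Ms * (1 + Lam2 * np.pi**2 / t**2) / (8 * gamma)

k = np.array([1.57e6, 3.9e6])      # wave numbers [rad/m]
f_nr = np.array([38e6, 94e6])      # hypothetical averaged f_NR values [Hz]
slope = np.polyfit(k, f_nr, 1)[0]
print(dKs_from_slope(slope, 20e-9))  # ~1.1e-3 J/m^2
\end{verbatim}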
Having refined the magnitude of $\Delta K_{\text{S}}$, we finally go back to the ferromagnetic resonance case and use SWIIM to calculate the resonance frequencies of the two lowest FMR modes as a function of magnetic field using the now complete set of magnetic parameters (Table~\ref{tab:table2}). We then fit the frequencies determined numerically to Eqs.~(\ref{eq:p1}) and extract the corresponding stiffness fields. As expected, accounting for the difference in surface anisotropy evidenced through propagating spin wave spectroscopy allows one to slightly improve the agreement between the experimentally and numerically determined $H_{Y0}$, $H_{Z0}$, $H_{Y1}$, and $H_{Z1}$ stiffness fields (Fig.~\ref{fig:HiFields}). Note that despite the introduction of $\Delta K_{\text{S}}$ and the exact treatment of hybridization effects by SWIIM, the numerical data nevertheless remain rather close to the predictions of our analytical model. This confirms the suitability of our choice of a limited four-vector mode basis (Sec.~\ref{sec:level2_Model}). \section{\label{sec:level4_Discussion}Discussion} Starting from a simplified analytical model, we have described above a method for extracting the magnetic parameters of ferromagnetic films of moderate thickness from broadband ferromagnetic resonance and propagating spin wave spectroscopy measurements. The positions of the ferromagnetic resonance peaks measured over large field and frequency ranges are first fitted to Kittel formulas (Fig.~\ref{fig:FMRfvsH}). Then, the extracted stiffness fields are compared with the explicit expressions of Table~\ref{tab:table1}, allowing one to successively extract five magnetic parameters. The deviations between the model and the experiments do not exceed 3$\%$ (Fig.~\ref{fig:HiFields}), which we find very satisfactory given the wide ranges of field, frequency and film thickness investigated, and the limited number of parameters involved. Moreover, the extension of the model to propagating spin waves accounts for the measured frequency non-reciprocity, a quantity from which we extract a sixth magnetic parameter. The value of the latter is eventually refined by comparing frequency non-reciprocity data with full micromagnetic calculations. The above ferromagnetic resonance study suggests that the entire thickness series can be described quite accurately with a thickness-independent set of magnetic parameters. The exchange stiffness constant we determined lies within the range of values reported in the literature for Fe at room temperature, $A\!=\!19\text{-}23~\text{pJ/m}$~\cite{Devolder.2013,Niitsu.2020,Kuzmin.2020}, and agrees particularly well with the recent determinations by Niitsu~\cite{Niitsu.2020} and Kuz'min~\textit{et al.} \cite{Kuzmin.2020}. The measured cubic anisotropy constant, on the other hand, is slightly larger than the values measured on bulk iron and thin iron films ($K_1\!=\!44\text{-}49~\text{kJ/m}^{3}$) \cite{Buschow.2001,Graham.1958,Westerstrand.1975,Barsukov.2011}, but it agrees well with results from first-principles calculations ($K_1\!=\!52~\text{kJ/m}^{3}$)~\cite{Barsukov.2011}. The third volume parameter, namely the uniaxial anisotropy $K_{\text{U}}$, accounts for the difference between the saturation magnetization $M_{\text{s}}$ determined from SQUID magnetometry and the so-called effective magnetization $M_{\text{eff}}\!=\!(M_{\text{s}}\!-\!H_{\text{U}})$ measured from FMR. We now argue that this anisotropy originates from a distortion of the iron lattice.
Indeed, the $-4\%$ mismatch between Fe and the MgO substrate is known to relax only partly through a dense array of dislocations formed in the first Fe atomic layers, thus leaving a small residual strain in nanometer-thick films~\cite{Du2021}. Accordingly, a tetragonal distortion is measured in the samples under study, consisting of a 0.7$\%$ mean in-plane expansion and a 0.5$\%$ out-of-plane compression~\cite{Magnifouet.2020}. Such a vertical lattice compression is expected to enhance the spin-orbit-mediated interactions between electronic states which favor an in-plane orientation of the magnetization~\cite{Wu.1998}. To relate this extra magnetic anisotropy phenomenologically to the measured lattice distortion, one may use the magnetoelastic coupling constants of bulk iron~\cite{Hearmon.1946}. The resulting uniaxial anisotropy constant $K_{\text{U}}\!=\!-41$~kJ/m$^{3}$ (see Sander~\cite{Sander.2004} for calculation details) is in very good agreement with our experimental observations regarding both its sign and its magnitude. In terms of total perpendicular surface anisotropy, our results (see Table~\ref{tab:table2}) agree well with what is expected for an ultrathin Fe film sandwiched between two MgO layers~\cite{Shimabukuro.2010,Nakamura.2010,Yang.2011, Hallal.2013, Odkhuu.2016}. From the joint results of broadband FMR and spin wave propagation, we can extract the values of the two individual PSA constants: $K^{\text{bot}}_{\text{S}}\!=\!(K_{\text{S}}+\Delta K_{\text{S}})/2\!=\!1.55$~mJ/m$^{2}$ for the bottom interface (MgO buffer/Fe) and $K^{\text{top}}_{\text{S}}\!=\!(K_{\text{S}}-\Delta K_{\text{S}})/2\!=\!0.75$~mJ/m$^{2}$ for the top one (Fe/MgO capping). These two values are within the range of PSA values obtained from ab-initio calculations~\cite{Shimabukuro.2010,Nakamura.2010,Yang.2011, Hallal.2013, Odkhuu.2016} and from measurements on ultrathin films with a single MgO/Fe interface~\cite{Lambert.2013,Koo.2013,Okabayashi.2014,KozioRachwa.2013,KozioRachwa.2013b}. However, in these previous experimental works, the extracted values always included the contributions from two interfaces, and some hypotheses based on reference interfaces (e.g., V/Fe) had to be invoked to extract individual values. In the present study, we provide individual $K_{\text{S}}$ values for both interfaces, which allows us to compare them directly with results from ab-initio simulations and to show that the ultrathin-film interface physics can be extrapolated to thicker films~\cite{Shimabukuro.2010,Nakamura.2010,Yang.2011, Hallal.2013, Odkhuu.2016, Lambert.2013,Koo.2013,Okabayashi.2014,KozioRachwa.2013,KozioRachwa.2013b}. It has been shown theoretically that over- or under-oxidation at the Fe/MgO interface drastically reduces its surface anisotropy~\cite{Yang.2011,Hallal.2013}. Moreover, using M\"ossbauer spectroscopy, it has been shown that the Fe/MgO and MgO/Fe interfaces of a film can exhibit different amounts of interfacial Fe oxidation~\cite{Mynczak.2013}. Therefore, we attribute the difference in PSA at the two interfaces to a difference in their oxidation states. We assume that the distinct temperature treatments to which the bottom and top interfaces are subjected during growth are the reason for this: the 480°C annealing, performed just after iron deposition, promotes a better epitaxy and a higher value of surface anisotropy for the bottom MgO/Fe interface~\cite{Okabayashi.2014}.
On the other hand, the top interface is not annealed, which likely leads to an over-oxidation of the interfacial Fe atoms and a lower value of the PSA~\cite{Yang.2011,Hallal.2013}. This behaviour is corroborated by the larger value of $\Delta K_{\text{S}}$ observed in the 20~nm* sample without Ti protection (see blue dots in Fig.~\ref{fig:NRvsk}). For this sample, indeed, we argue that a further oxidation of the top interface may take place after the unprotected 8~nm thick MgO capping layer is exposed to water~\cite{Holt.1997,Youssef.2018}, both during the fabrication of this specific device and later under ambient conditions. \section{Conclusion} The magnetization dynamics of a thickness series of MgO/Fe($t$)/MgO epitaxial films ($t\!\text{=}\!10\text{-}30$~nm) was characterized using a combination of ferromagnetic resonance and non-reciprocal spin wave propagation measurements. Our rather versatile Kittel model accounts consistently for the frequencies of the uniform mode of magnetization precession and also for those of the inhomogeneous first standing spin-wave mode. Noticeably, the ability to probe both of these modes over a wide range of film thicknesses allowed us to determine the exchange stiffness constant and the perpendicular surface anisotropy, two quantities which are inaccessible through the sole study of the homogeneous dynamics. With our detailed ferromagnetic resonance characterization, we evidenced that the entire thickness series can be described with a single set of magnetic parameters. The magnetic volume parameters, cubic anisotropy and exchange stiffness, agree very well with what is expected for bulk iron. An additional uniaxial perpendicular anisotropy was also identified and attributed to a slight tetragonal distortion of the Fe lattice. Finally, it was possible to separate the contributions of the individual film interfaces to the perpendicular surface anisotropy with the help of complementary propagating spin wave spectroscopy measurements. The sizeable asymmetry between the top and bottom interfaces was attributed to the different oxidation states of each interface. Our characterization suggests that 10-30 nm thick single-crystalline Fe films have a well-defined quasi-bulk magnetic interior, while the interfaces with MgO retain the large perpendicular surface anisotropy found in ultra-thin films. Our work sheds new light on the technologically relevant ferromagnet/MgO interfaces and their effect on spin waves, and it also validates a new method for characterizing magnetic interfaces. \begin{acknowledgments} We thank Arnaud Boulard, Beno\^{\i}t Leconte, Daniel Spor, J\'er\'emy Thoraval and Fares Abiza for assembling and testing the broadband FMR setup; J\'er\^{o}me Robert for SQUID magnetometry measurements; Romain Bernard, Sabine Siegwald and Hicham Majjad for technical support during nanofabrication work in the STnano platform; and Mat\'{\i}as Grassi for useful discussion. We acknowledge financial support by the Interdisciplinary Thematic Institute QMat, as part of the ITI 2021-2028 program of the University of Strasbourg, CNRS and Inserm, IdEx Unistra (ANR 10 IDEX 0002), SFRI STRAT'US project (ANR 20 SFRI 0012) and ANR-17-EURE-0024 under the framework of the French Investments for the Future Program. We also acknowledge financial support from Region Grand Est through its FRCR call (NanoTeraHertz and RaNGE projects) and from Agence Nationale de la Recherche (France) under Contract No. ANR-20-CE24-0012 (MARIN). \end{acknowledgments}
{ "timestamp": "2022-09-23T02:12:23", "yymm": "2209", "arxiv_id": "2209.10906", "language": "en", "url": "https://arxiv.org/abs/2209.10906" }
\section{Introduction} Despite the successes of deep reinforcement learning agents over the last decade, they still require a large amount of data or interactions to learn good policies. This data inefficiency makes current methods difficult to apply to environments where interactions are more expensive or data is scarce, which is the case in many real-world applications. In environments where the agent does not have full access to the current state (partially observable environments), this problem becomes even more prominent, since the agent not only needs to learn the state-to-action mapping but also a state representation function that tries to be informative about a state given an observation. In contrast, humans, when learning a new task, already have a well-developed visual system and a good model of the world, components that allow us to learn new tasks easily. Previous works have tried to tackle the sample inefficiency problem by using auxiliary learning tasks \citep{schwarzer_pretraining_2021, stooke_decoupling_2021, guo_bootstrap_2020} that help the network's encoder learn good representations of the observations given by the environment. These tasks can be supervised or unsupervised and can happen during a pretraining phase or during the reinforcement learning (RL) phase, in a joint-learning or decoupled-learning scheme. In recent years, self-supervised learning has proven very useful in computer vision. The increasing interest in this area has resulted in new and improved methods that train a network to learn important features from the data using only the data itself as supervision. A common approach to evaluating such methods is to train a network composed of the pretrained encoder, with its parameters frozen, paired with a linear layer, on popular datasets like ImageNet. These evaluations have shown that such methods can achieve high scores on different benchmarks, which shows how well the current state-of-the-art methods encode useful information from the given images without being task-specific. Additionally, it has been shown that pretraining a network using self-supervised (or unsupervised) learning adds robustness to the network and gives better generalization capabilities \citep{erhan_why_2010}. Also recently, a new architecture for vision-based tasks called the Vision Transformer (ViT) \citep{dosovitskiy_image_2020} has shown impressive results in several benchmarks without using any convolutions. This architecture presents much weaker inductive biases than a CNN, which can result in lower data efficiency. Unlike CNNs, however, the Vision Transformer can capture relations between parts of an image (patches) that are far apart from each other, thus deriving global information that can help the model perform better in certain tasks. Furthermore, when the model is pretrained, using supervised or self-supervised learning, it manages to surpass the best convolution-based models in terms of task performance. Nonetheless, despite these successes in computer vision, such results are yet to be seen in reinforcement learning. Motivated by the potential of the Vision Transformer, in particular when paired with a pretraining phase, and by the increasing interest in self-supervised tasks for DRL, we study pretraining ViT using state-of-the-art (SOTA) self-supervised learning methods.
Consequently, we propose TOV-VICReg (Temporal Order Verification-VICReg), an extension of VICReg (Variance Invariance Covariance Regularization) \citep{bardes_vicreg_2021} that adds a temporal order verification task \citep{misra_shuffle_2016} to help the model better capture the temporal relations between consecutive observations. While we could have adapted any of the other methods, we opted for VICReg due to its computational performance, simplicity, and good results in early experiments and metrics such as the ones presented in Section \ref{section:metrics}. After presenting our empirical results on the Atari games, we present a small study of the pretrained encoders using several metrics to understand if they suffer from any representational collapse, and we also analyse the learned representations using similarity matrices and attention maps. Our main contributions are: \begin{itemize} \item We propose a new self-supervised learning method which extends VICReg to capture the temporal relations between consecutive frames through a temporal order verification task, in Section \ref{section:tov-vicreg}. \item We pretrain a Vision Transformer using several SOTA self-supervised methods and our proposed method, and study them through metrics (Section \ref{section:metrics}), visualizations (Section \ref{section:repr}), and fine-tuning in reinforcement learning (Section \ref{section:data-efficiency}), where we show that the temporal relations learned by the model pretrained with our method contribute to a great increase in data efficiency. \end{itemize} \section{Related Work} \paragraph{Pretraining representations} Previous work, similarly to our approach, has explored pretraining representations using self-supervised methods, which led to great data-efficiency improvements in the fine-tuning phase \citep{schwarzer_pretraining_2021,zhan_framework_2020} or superior results in evaluation tasks, like AtariARI \citep{anand_unsupervised_2020}. Others have pretrained representations using RL algorithms, like DQN, and transferred those learned representations to a new learning task \citep{wang_investigating_2022}. \paragraph{Temporal Relations} Other works have explored learning representations that have temporal information encoded. ATC (Augmented Temporal Contrast) \citep{stooke_decoupling_2021} trains an encoder to compute temporally consistent representations using contrastive learning, and ST-DIM (SpatioTemporal DeepInfoMax) \citep{anand_unsupervised_2020} captures spatio-temporal information by maximizing the mutual information between the features of two consecutive observations. \paragraph{Joint learning} In recent years, adding an auxiliary loss to the RL loss, usually called joint learning, has become a common approach in many proposed methods. CURL \citep{srinivas_curl_2020} adds a contrastive loss using a siamese network with a momentum encoder. Another work studies different joint-learning frameworks using different self-supervised methods \citep{li_does_2022}. SPR \citep{schwarzer_data-efficient_2021} uses an auxiliary task that consists of training the encoder, followed by an RNN, to predict the encoder representation k steps into the future. PSEs \citep{agarwal_contrastive_2021} combines a policy similarity metric (PSM), which measures the similarity of states in terms of the behaviour of the policy in those states, and a contrastive task for the embeddings (CME) that helps to learn more robust representations.
PBL \citep{guo_bootstrap_2020} learns representations through an interdependence between an encoder that is trained to be informative about the history that led to an observation, and an RNN that is trained to predict the representations of future observations. Proto-RL \citep{yarats_reinforcement_2021} uses an auxiliary self-supervised objective to learn representations and prototypes \citep{caron_unsupervised_2020}, and uses the learned prototypes to compute intrinsic rewards that push the agent to explore the environment. \paragraph{Augmentations} While we only use augmentations in the pre-training phase, their use during reinforcement learning has also been studied. Methods like DrQ \citep{kostrikov_image_2021} and RAD \citep{laskin_reinforcement_2020} pair an RL algorithm, like SAC, with image augmentations to improve the data efficiency and generalization of the algorithms. \paragraph{Vision Transformer for vision-based Deep RL} Recent works also compare the Vision Transformer to convolution-based architectures with a similar number of parameters and show that ViT is very data inefficient even when paired with an auxiliary task \citep{tao_evaluating_2022}. \section{Background} \subsection{Vision Transformer} ViT \citep{dosovitskiy_image_2020} is a model for image classification tasks that does not rely on convolutions, using only attention. The model wraps the encoder of a Transformer, uses patches of the input image as tokens, and adds a classification token which, after the computation, serves as the image representation. When compared to CNNs, ViT presents weaker image-specific inductive biases, which allow CNNs much more sample-efficient learning \citep{dascoli_convit_2021}, although it has been shown that with enough data these inductive biases become less important \citep{dosovitskiy_image_2020}. \subsection{Reinforcement Learning} The problem of an \textbf{agent} learning to solve a task in a certain \textbf{environment} can be defined as a Markov Decision Process (MDP). An MDP $\mathcal{M}$ is defined by the tuple $\left \langle \mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T} \right \rangle$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ the set of actions, $\mathcal{R}$ the reward function, and $\mathcal{T}$ the transition function. At each timestep, the agent is in a state $s\in \mathcal{S}$ and takes an action $a \in \mathcal{A}$. Upon performing the action, the agent receives from the environment a reward $r\in\mathcal{R}$ and a new state $s^\prime\in \mathcal{S}$, which is determined by the transition function $\mathcal{T}(s^\prime, s, a)$. The MDP assumes that the Markov property holds in the environment, i.e., state transitions are independent of the past and the agent only needs to know the current state to choose an action: $P(a_t|s_0,s_1,\dots,s_t)=P(a_t|s_t)$. For the agent to decide what action to take, it uses a policy function $\pi$, which gives a distribution over actions given a state, $\pi(a_t|s_t)$. This policy is evaluated using the function $V^\pi(s)$, which estimates the expected total discounted reward of an agent that is in state $s$ and follows policy $\pi$. \subsubsection{DQN and Rainbow} DQN \citep{mnih_playing_2013} is a value-based method that uses a network with parameters $\phi$ which, given a state $s$, outputs a prediction of the distribution of Q values over actions, $Q_\phi(s,a)$. The network learns the Q function by minimizing the mean squared error $(y-Q_\phi(s,a))^2$, where $y=r +\gamma \max_{a^\prime} Q_\phi(s^\prime,a^\prime)$.
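As a minimal illustration of this objective (a schematic sketch, not the implementation used in this work), the PyTorch snippet below computes the DQN target and loss for a batch of transitions; the terminal mask is a standard practical detail not written out in the equation above:
\begin{verbatim}
import torch
import torch.nn.functional as F

def dqn_loss(q_net, batch, gamma=0.99):
    # batch: tensors (s, a, r, s_next, done); a is a LongTensor of actions
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q_phi(s, a)
    with torch.no_grad():  # the target y is treated as a constant
        y = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, y)  # (y - Q_phi(s, a))^2 averaged over the batch
\end{verbatim}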
The algorithm has the following structure: \begin{enumerate} \item Start with episode = 1 and repeat: \begin{enumerate} \item Start with $t=1$ and repeat for $T$ steps: \begin{enumerate} \item With probability $\epsilon$: $a_t=random()$, otherwise: $a_t = \arg\max_{a^\prime}\ Q(s_t,a^\prime)$ \item Execute $a_t$ and observe $s_t^\prime$ and $r_t$ \item Store transition $\{s_t,a_t, r_t, s_t^\prime\}$ in the replay buffer $\mathcal{D}$ \item Sample a mini-batch of transitions $\{s_j,a_j,r_j,s^\prime_j\}$ from $\mathcal{D}$ \item $y_j=r_j+\gamma\ \max_{a^\prime_j}Q_\phi(s_j^\prime,a_j^\prime)$ \item $\phi \leftarrow \phi - \alpha \sum_j\frac{dQ_\phi(s_j,a_j)}{d\phi}(Q_\phi(s_j,a_j)-y_j)$ \end{enumerate} \end{enumerate} \end{enumerate} Several works followed the DQN algorithm, introducing changes to improve performance. Rainbow \citep{hessel_rainbow_2017} combines six improvements: Double Q-Learning \citep{van_hasselt_deep_2016}, Prioritized Replay \citep{schaul_prioritized_2016}, Dueling Networks \citep{wang_dueling_2016}, Multi-step Learning \citep{sutton_reinforcement_2018}, Distributional RL \citep{bellemare_distributional_2017}, and Noisy Nets \citep{fortunato_noisy_2018}, resulting in a more stable and sample-efficient algorithm. \subsection{Self-Supervised methods} Recent self-supervised methods for vision tasks fall into two main categories: contrastive and non-contrastive. In contrastive learning, methods like MoCo \citep{he_momentum_2020} or SimCLR \citep{chen_simple_2020} learn using a loss function that pulls the positive samples together and pushes the negative samples apart. These methods usually require very large batch sizes or auxiliary structures that allow for more negative samples. MoCo, in particular, has three iterations: v1 \citep{he_momentum_2020}, v2 \citep{chen_improved_2020}, and v3 \citep{chen_empirical_2021}. In this work, we consider the most recent version (v3). This version uses a siamese network, where in one path the augmented samples (queries) are computed by an encoder $f_\theta$ (backbone) and a projector $g_\phi$, and in the other the samples (keys) are computed by a momentum encoder $f_{\theta^\prime}$ and a projector $g_\phi$. The loss function is the InfoNCE loss, with temperature, applied to the dot products of the queries with the keys. On the other hand, non-contrastive methods do not rely on the notion of positive and negative samples, which results in a vast number of different approaches. DINO \citep{caron_emerging_2021} consists of a siamese network where each path is fed with a random augmentation of the input and where the encoders learn to minimize the cross-entropy between their normalized output probability distributions, computed using a softmax with temperature scaling. The teacher encoder is updated using an exponential moving average of the student encoder parameters, and an additional centring operation is used in its computation path, contributing to an asymmetry that helps the method avoid collapse. Unlike most methods, DINO creates more than 2 augmentations of the same source. More precisely, it creates a set of views composed of two global views and several local views. All views are computed by the student network while only the global views are computed by the teacher network, which pushes the student to create a local-to-global correspondence. VICReg, on the other hand, tries to learn representations invariant to augmentations by minimizing the L2 distance while maintaining some variance in the representation features and decorrelating the features.
A more detailed explanation of the method will be presented in Section \ref{section:tov-vicreg}. For this study we selected DINO, MoCo, and VICReg since they are currently considered state-of-the-art, their official implementations are available in PyTorch, and each represents a different type of approach. \section{TOV-VICReg} \label{section:tov-vicreg} VICReg is a non-contrastive method that trains a network to be invariant to augmentations applied to the inputs while avoiding a trivial solution with the help of two additional losses, called variance and covariance, that act as regularizers over the embeddings. While VICReg is agnostic to the architectures used and even to weight sharing, in this work we consider the version where the paths are symmetric, the weights are shared, and each path is composed of an encoder (also called backbone) and an expander. VICReg uses three loss functions: \textbf{invariance} is the mean of the squared distances between pairs of embeddings of the same original image, as shown in Equation \ref{eqn:invariance}, where $Z$ and $Z^\prime$ are two sets of embeddings, of size $N$, that result from computing two different augmentations of $N$ sources, and $z_j$ denotes the \textit{j-th} embedding in the set; \textbf{variance} is a hinge loss that computes, over the batch, the standard deviation of the variables in the embedding vector and pushes that value to be above a certain threshold, as shown in Equation \ref{eqn:variance}, where $d$ denotes the number of dimensions of the embedding vector and $Z^j$ is the set of the \textit{j-th} variables in the set of embeddings $Z$; \textbf{covariance} is a function that computes the sum of the squared off-diagonal coefficients of a covariance matrix computed over a batch of embeddings, as shown in Equation \ref{eqn:covariance}. While the invariance loss function tries to make the model invariant to augmentations, i.e. output the same representation vector, the other two functions regularize the method by pushing the variables of the embedding vector to vary above a certain threshold and by decorrelating the variables in each embedding vector. \begin{equation} \label{eqn:invariance} i(Z,Z^\prime)=\frac{1}{N}\sum_{j=1}^{N}\left \|z_j - z^\prime_j \right \|_2^2 \end{equation} \begin{equation} \label{eqn:variance} v(Z)=\frac{1}{d}\sum_{j=1}^{d}\textup{max}(0,\gamma-\sqrt{Var(Z^j)}) \end{equation} \begin{equation} \label{eqn:covariance} c(Z)=\frac{1}{d}\sum_{i\neq j}\left [ \operatorname{Cov}(Z)\right ]^2_{i,j} \end{equation} TOV-VICReg, or Temporal Order Verification-VICReg, extends VICReg to better capture the temporal relations between consecutive observations and, consequently, encode extra information that can be useful in the deep reinforcement learning phase. To achieve that, we add a temporal order verification task, as seen in Shuffle-and-Learn \citep{misra_shuffle_2016}, which consists of a binary classification task where a linear layer learns to predict whether three given representation vectors are in the correct order or not. Like the other losses, we employ a coefficient for the temporal loss; in most of our experiments its value is 0.1. Figure \ref{fig:tov-vicreg} visually illustrates TOV-VICReg.
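To make the combined objective concrete, the following PyTorch sketch implements the three VICReg terms (Eqs.~\ref{eqn:invariance}--\ref{eqn:covariance}) and a temporal order verification head. It is a schematic illustration: it uses a single fixed permutation instead of random ones, and the loss coefficients shown at the end are hypothetical (the text only fixes the temporal coefficient, 0.1, and the covariance coefficient, 10):
\begin{verbatim}
import torch
import torch.nn.functional as F

def vicreg_terms(z, z_prime, gamma=1.0, eps=1e-4):
    # z, z_prime: (N, d) embeddings of two augmentations of the same inputs
    inv = (z - z_prime).pow(2).sum(dim=1).mean()        # Eq. (invariance)
    std = torch.sqrt(z.var(dim=0) + eps)
    var = F.relu(gamma - std).mean()                    # Eq. (variance), one branch
    zc = z - z.mean(dim=0)
    cov = (zc.T @ zc) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = off_diag.pow(2).sum() / z.shape[1]       # Eq. (covariance)
    return inv, var, cov_loss

def temporal_order_loss(head, y_prev, y_t, y_next):
    # head: linear layer with a single output node
    ordered = torch.cat([y_prev, y_t, y_next], dim=1)   # correct order
    shuffled = torch.cat([y_t, y_prev, y_next], dim=1)  # one permutation
    logits = head(torch.cat([ordered, shuffled], dim=0)).squeeze(1)
    labels = torch.cat([torch.ones(len(y_t)), torch.zeros(len(y_t))])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Hypothetical total loss (only 0.1 and 10 are reported in the text):
# loss = 25*inv + 25*var + 10*cov_loss + 0.1*temporal
\end{verbatim}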
\begin{figure}[h] \centering \includegraphics[width=8cm]{Images/TOV-VICReg.png} \caption{TOV-VICReg architecture} \label{fig:tov-vicreg} \end{figure} At each step, we sample 3 consecutive observations, $\{x_{t-1},\ x_t,\ x_{t+1}\}$. $x_t$ is processed by two different augmentations (as in VICReg, these are the augmentations used in BYOL \citep{grill_bootstrap_2020}), while $x_{t-1}$ and $x_{t+1}$ are processed by two simple augmentations composed of a color jitter and a random grayscale. The $x_t$ augmentations are computed by the VICReg computation path, and the resulting embeddings are used for the loss functions, i.e. variance, invariance, and covariance. In the temporal order verification task, we encode the augmentations of $x_{t-1}$ and $x_{t+1}$ and concatenate those two representations with one of the representations of $x_t$ (we use the one that was augmented without solarize), obtaining the vector $\{y_{t-1},y_t,y_{t+1}\}$. Finally, we randomly permute the order of the representations in the vector and feed the resulting concatenated vector to a linear layer with a single output node that predicts whether the given concatenated vector has the representations in the right order or not. \section{Pre-Training Methodology} We pretrained four encoders, one using our proposed method TOV-VICReg and three using state-of-the-art self-supervised methods: MoCo v3 \citep{chen_empirical_2021}, DINO \citep{caron_emerging_2021}, and VICReg \citep{bardes_vicreg_2021}. For this study, the encoder used is a Vision Transformer, more precisely ViT tiny with a patch size of 8. We chose this patch size based on experiments showing that this value performed well in terms of data-efficiency when compared to 6, 10, and 12, without being too computationally intensive (Appendix \ref{app:vit_patch}). The dataset used is a set of observations from 10 of the 26 games in the Atari 100k benchmark, all available in the DQN Replay Dataset \citep{agarwal_optimistic_2020}. For each game, we use three checkpoints with a size of 100 thousand data points (observations), which makes up a total of 3 million data points (\textasciitilde 55 hours). The pretraining phase is 10 epochs, with two warmup epochs. We used the official code bases of all the self-supervised methods and tried to change as few hyperparameters as possible. Appendix \ref{section:hyper} contains the tables with the hyperparameters used for each method. \section{Data-Efficiency} \label{section:data-efficiency} To test the pretrained Vision Transformers in reinforcement learning and compare data-efficiency gains, we trained on the 10 games used for pretraining for 100k steps using the Rainbow algorithm \citep{hessel_rainbow_2017} with the DER \citep{van_hasselt_when_2019} hyperparameters. The only difference between the agents at the start is the representation module. We chose two networks to compare against: the Nature CNN \citep{mnih_human-level_2015} and the SGI ResNet Large, a larger version of the ResNet used in the SGI method \citep{schwarzer_pretraining_2021} with a size roughly similar to that of ViT tiny. Moreover, we use a learning rate two orders of magnitude smaller for the encoder ($1 \times 10^{-6}$), which previous works and our own experiments show to be beneficial \citep{schwarzer_pretraining_2021}. To report our results, we follow the rliable \citep{agarwal_deep_2021} evaluation framework, where the scores of all games are normalized and treated as one single task.
\subsection{Results} Figure \ref{fig:results} shows the aggregate metrics on 10 Atari games with training runs of 100k steps. Starting with the non-pretrained models (ViT, Nature CNN, and SGI ResNet Large), we observe from the mean that Nature CNN is the most sample-efficient model, followed by SGI ResNet Large and ViT, respectively. Regarding the pretrained models, ViT, when pretrained with our method, performs better than the other models and the non-pretrained ViT in all metrics. It is worth noting that we report a higher variance in the results of our proposed method when compared to the remaining methods and the non-pretrained models. ViT+TOV-VICReg, when compared to Nature CNN (which has far fewer parameters) and SGI ResNet Large (with a similar number of parameters), seems to closely match their sample-efficiency performance (Appendix Table \ref{tab:model_size}). Furthermore, the difference between the non-pretrained ViT and the ViT pretrained with TOV-VICReg shows that a good self-supervised method that explores temporal relations, together with 3 million data points, can help close the sample-efficiency gap while remaining a more complex and capable model. Regarding the remaining self-supervised methods, MoCo performs considerably well, even obtaining a median very similar to that of TOV-VICReg, followed by DINO and VICReg, respectively. All pretrained ViTs show an improvement in comparison to the non-pretrained ViT. \begin{figure}[h] \centering \includegraphics[width=14cm]{Images/results.png} \caption{The evaluation runs across the different games are normalized and treated as a single task. The IQM corresponds to the Inter-Quartile Mean among all the runs, where the top and bottom 25\% are discarded and the mean is calculated over the remaining 50\%. The Optimality Gap refers to the number of runs that fail to surpass the human average score, i.e., 1.0. } \label{fig:results} \end{figure} \section{Metrics} \label{section:metrics} A significant phenomenon when doing self-supervised training is the collapse of the representations, which can take three forms: representational collapse, dimensional collapse, and informational collapse. Representational collapse refers to the features of the representation vector collapsing to a single value for every input, meaning the variance of the features is zero, or close to zero. In dimensional collapse, the representations do not make use of the full representation space, which can be measured by calculating the singular values of the covariance matrix of the representations. Informational collapse refers to the case where the features of the representation vector are correlated and therefore represent the same information. \paragraph{Dimensional Collapse} All methods seem to avoid dimensional collapse, i.e. most dimensions have a singular value larger than zero, as observed in Figure \ref{fig:dimensional}. However, we notice that some methods make better use of the available space, since they present higher singular values. TOV-VICReg, in particular, seems to excel in this metric, even improving on the results obtained by VICReg. It is worth noting that both VICReg and TOV-VICReg employ a covariance loss that helps decorrelate the embedding variables, which may be contributing positively to these results. Furthermore, we used a covariance coefficient of 10 for TOV-VICReg and 1 for VICReg, a change that, according to our experiments, accounts for the increase observed here.
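The three collapse diagnostics used in this section can be computed directly from a batch of representations. The sketch below (our own illustration, with \texttt{Z} a hypothetical $(N,d)$ tensor of representation vectors) shows one way to obtain them in PyTorch:
\begin{verbatim}
import torch

def collapse_metrics(Z):
    # Representational collapse: average std of features over the batch
    avg_std = Z.std(dim=0).mean()

    # Dimensional collapse: singular values of the covariance matrix
    Zc = Z - Z.mean(dim=0)
    cov = (Zc.T @ Zc) / (Z.shape[0] - 1)
    log_sv = torch.linalg.svdvals(cov).log()   # sorted in descending order

    # Informational collapse: mean |off-diagonal| correlation coefficient
    corr = torch.corrcoef(Z.T)
    off = corr - torch.diag(torch.diag(corr))
    avg_corr = off.abs().sum() / (Z.shape[1] * (Z.shape[1] - 1))

    return avg_std, log_sv, avg_corr
\end{verbatim}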
\begin{figure}[h] \centering \includegraphics[width=6cm]{Images/dimensional.png} \caption{Logarithm of the singular values of the representation vector's covariance matrix, sorted by value.} \label{fig:dimensional} \end{figure} \paragraph{Representational Collapse} The results in Table \ref{tab:repr_collapse} show the standard deviation of the representation vector computed over a batch of thousands of data points. DINO, VICReg, and TOV-VICReg show values well above zero, meaning that none of these methods suffered from representational collapse during training. On the other hand, MoCo shows a much smaller value of 0.178, which is nevertheless far from a complete collapse. Both VICReg and TOV-VICReg use a hinge loss that pushes the representation vector to have a standard deviation of 1 or above; while VICReg slowly converges to this value, our method converges to roughly 1.65, which might be the result of adding the temporal order verification task. \begin{table}[ht] \centering \begin{tabular}{llll} \toprule DINO & MoCo & VICReg & TOV-VICReg \\ \midrule 0.979 & 0.178 & 1.003 & 1.648 \\ \bottomrule \end{tabular} \caption{Average standard deviation of the representation vector} \label{tab:repr_collapse} \end{table} \paragraph{Informational Collapse} We report in Table \ref{tab:info_collapse} the comparison of the average correlation coefficients of the representation vectors. TOV-VICReg performs better than the other methods, which present very similar coefficients among themselves, including VICReg. As with dimensional collapse, this result is partly due to the higher covariance coefficient used in TOV-VICReg, which by design helps the model decorrelate the representation's features. Increasing the coefficient in VICReg results in a lower correlation coefficient as well, but one still higher than that of TOV-VICReg. \begin{table}[ht] \centering \begin{tabular}{llll} \toprule DINO & MoCo & VICReg & TOV-VICReg \\ \midrule 0.1764 & 0.1538 & 0.1531 & 0.0780 \\ \bottomrule \end{tabular} \caption{Average correlation coefficient} \label{tab:info_collapse} \end{table} \section{Representations} \label{section:repr} In this section, we present different visualizations whose goal is to help us better understand the representations learned by each of the methods and to give some intuitions about their properties. \paragraph{Cosine similarity} Figure \ref{fig:cosine-all} presents a similarity matrix of the representations, where we can observe that TOV-VICReg can better distinguish not only between observations of different games but also between observations from the same game (Figure \ref{fig:cosine-game}). MoCo, on the other hand, seems to make a good distinction between observations from the same game. However, we can observe in the colour bar that all the representations are very similar to each other, which corroborates the results obtained in Section \ref{section:metrics}. Conversely, VICReg and DINO manage to spread representations more, as we can see in the colour bars, but the yellow squares in the diagonal show that representations from the same game are more similar to each other, which is corroborated by Figure \ref{fig:cosine-game}. Given the empirical results, we believe that this capacity to distinguish observations from the same game might be a good indicator. \begin{figure}[h] \centering \includegraphics[width=10cm]{Images/cos-sim-all-games.png} \caption{Similarity matrices of the representations computed by MoCo, DINO, VICReg, and TOV-VICReg, respectively.
There are a total of 64 data points from 4 different games: Alien, Breakout, MsPacman, and Pong, where points 0-15 are from Alien, points 16-31 from Breakout, and so forth. } \label{fig:cosine-all} \end{figure} \begin{figure}[h] \centering \includegraphics[width=10cm]{Images/cos-sim-game.png} \caption{Similarity matrices of the representations computed by MoCo, DINO, VICReg, and TOV-VICReg, respectively, of observations from MsPacman. } \label{fig:cosine-game} \end{figure} \paragraph{Attention visualisation} The work proposing DINO shows that the Vision Transformer is able to attend to important parts of the input after training with DINO. Inspired by these results, we perform the same evaluation for the several self-supervised methods we study, including TOV-VICReg, to understand whether any of the encoders can attend to interesting parts of the input. In Figure \ref{fig:attention}, we can see the results of all methods for an observation from the game of Pong, where each method produces three attention maps, one for each self-attention head of the last block of the Vision Transformer. All pretrained ViTs seem to attend at some level to important game features like the ball and the paddles. However, TOV-VICReg is the only method that does not spread the attention to other parts of the frame that we do not consider important to describe the current state of the game. Comparing with VICReg's attention maps, we believe that the temporal order verification task greatly helped the attention of the model. In more visually complex games, e.g. Freeway or MsPacman, these attention maps become more difficult to analyse, but it is still possible to discern some important features. \begin{figure}[h] \centering \includegraphics[width=8cm]{Images/attention.png} \caption{Attention maps produced by the pretrained ViTs. We fed a pretrained ViT with an observation from the game Pong and obtained the attention maps from the three heads in the last block. } \label{fig:attention} \end{figure} \section{Discussion \& Conclusion} In this work, we presented a study of ViT for vision-based deep reinforcement learning using self-supervised pretraining and proposed a self-supervised method that extends VICReg to better capture temporal relations between consecutive observations. Our results showed that the agent using a Vision Transformer pretrained with our method manages to surpass all other Vision Transformers, pretrained and non-pretrained, in sample efficiency, and also achieves results very close to those of convolution-based models with far fewer parameters. These results reinforce the importance of encoding temporal relations between observations in the representation model, as shown by previous works, and also show that even vision models with weaker inductive biases and more parameters, when well pretrained, can achieve similar results in sample efficiency. The ability to use larger models, with millions of parameters, that are as sample efficient as some of the most popular CNN-based models (like Nature CNN or Impala ResNet), with thousands of parameters, is very important since it opens the door to using deep RL in even more complex problems where smaller models tend to struggle, without losing sample efficiency.
Moreover, recent work in natural language processing \citep{devlin_bert_2019, brown_language_2020} and computer vision \citep{radford_learning_2021} shows great benefits from pretraining large models, and similar approaches in RL have the potential to unlock new levels of performance \citep{baker_video_2022}. \bibliographystyle{iclr2023_conference}
{ "timestamp": "2022-09-23T02:12:15", "yymm": "2209", "arxiv_id": "2209.10901", "language": "en", "url": "https://arxiv.org/abs/2209.10901" }
\section{Introduction} In recent years, the generalization accuracy of classification prediction using Deep Convolutional Neural Networks (DCNNs) has dramatically improved, and DCNNs are expected to be applied to various fields such as automated driving and medical diagnosis support. However, if a model misclassifies in the real world, it may cause extremely serious accidents. We consider the multi-class classification problem of identifying road signs in automated driving. If the model misclassifies a ``Stop'' sign as another sign, the car will enter the intersection without pausing, likely leading to a traffic accident. By contrast, misclassifying a ``No Parking'' sign as a ``No Parking Daytime'' sign is relatively unlikely to lead to accidents or other harm. Therefore, low recall of classes such as ``Stop'' and ``No Entry,'' whose misclassification leads to actual harm, is a particularly important problem. Thus, in order to use a DCNN multi-class classification model in the real world, there are cases where we want to increase the recall of a particularly important class. At the same time, the overall accuracy should not deteriorate as the cost of improving the recall of a particular class. Therefore, it is necessary to find a way to improve the recall of an important class without compromising the accuracy over all classes. \subsection{Related Works} Threshold tuning \cite{margineantu2000bootstrap} is one of the naive methods for this problem. However, it improves the recall of a particular class only by sacrificing the recall of other classes. In other words, since it only adjusts the trade-off between the accuracy and the recall of a particular class, it cannot both improve recall and maintain accuracy. There are also several approaches based on cost-sensitive learning \cite{Elkan01thefoundations,kukar1998cost}. Cost-sensitive learning is a method of predicting classification with a loss function that imposes a relatively large cost (i.e., misclassification cost) on unacceptable errors. Several methods \cite{panchapagesan2016multi,aurelio2019learning,ho2019real,frogner2015learning} treat the importance of the classes whose recall needs to be improved as a penalty and use this penalty to weight the cross-entropy loss. However, as shown in our experimental results below, these methods do not always sufficiently improve the recall of the important class. Even when the recall is improved, the accuracy may be impaired. The causes of these failures are discussed in detail in a later section. For our goal, we need to increase the separation between the important class and the other classes while maintaining the separation between unimportant classes. In order to achieve such separation, the model should be trained to generate features so that (1) the feature vectors of the important class are well isolated and localized from the other classes, and (2) the feature vectors of each class are well separated from each other. In recent years, various loss functions have been developed to achieve this \cite{liu2016large,liu2017sphereface,liang2017soft,wang2018additive,wang2018cosface,deng2019arcface}. These methods perform classification by cosine similarity between the representative vector of each class and the feature vector. Here, the vectors are obtained by projecting onto the unit hypersphere \cite{zhai2018classification,ranjan2017l2}. With this setup, the margin between classes in the feature space is expected to increase.
Then, the cosine term is manipulated with a constant factor, called the \textit{margin}, so that the separation between classes is improved. Liu et al. \cite{liu2016large,liu2017sphereface} proposed a method that introduces an angular margin between the class corresponding to the feature vector and the other classes to promote the expansion of inter-class variance. Liang et al. \cite{liang2017soft} and Wang et al. \cite{wang2018additive,wang2018cosface} proposed an additive margin loss to stabilize the optimization. Deng et al. \cite{deng2019arcface} proposed an additive angular margin loss (ArcFace) that can be interpreted geometrically, and ArcFace has reported good performance in face recognition tasks. We remark that these angular-based loss functions are designed to improve the separation between classes, whereas they are not designed to improve the recall of a specific important class without sacrificing the overall accuracy. \subsection{Our Contribution} An intuitive approach to improving the recall of the important class is to optimize the cross-entropy loss with a larger weight on the loss for the important class. However, as we will show in later experiments, such approaches do not attain our goal even with intensive weight-parameter tuning. Our contributions are the following two. \begin{itemize} \item We propose a loss function, Class-sensitive additive Angular MaRgIn Loss (CAMRI Loss). CAMRI loss adds a margin penalty to the angle between the feature vector and the weight vector of the last fully connected (FC) layer corresponding to the important class, only when the feature is labeled with the important class. As a result, the margin is expected to improve the separability in the feature space of the features labeled with the important class from the other classes. \item We empirically show that CAMRI loss improves the recall of the important class without sacrificing accuracy. We conducted experiments using three datasets: CIFAR-10 \cite{torralba200880}, German Traffic Sign Recognition Benchmark (GTSRB) \cite{Stallkamp-IJCNN-2011}, and Animals with Attributes 2 (AwA2) \cite{8413121}. In 8 out of 9 settings, CAMRI loss achieved a higher recall improvement than the existing methods while maintaining accuracy. \end{itemize} This paper is organized as follows: In Section \ref{sec:pre}, we formulate multi-class classification and the related loss functions. In Section \ref{sec:analysis}, we analyze the related loss functions based on their contours in the feature space, and we then propose a method that improves the separation of the important class by setting a margin only for that class. In Section \ref{sec:experiment}, we evaluate the effectiveness of the proposed method on multiple datasets. In Section \ref{sec:conclusion}, we present the conclusion. \section{Preliminaries} \label{sec:pre} \subsection{Multi-class Classification} Let $K$ be the number of classes, $\mathcal{X}$ be the input space, and $\mathcal{Y}=\left\{ 1,\cdots,K \right\}$ be the output space. We train a CNN model to compute a feature vector $\bi{z}_n\in \mathbb{R}^D$ with dimensionality $D$, using pairs of input images and ground-truth labels $\left\{\bi{x}_n,t_n\right\}_{n=1}^N\in\mathcal{D}$ for training. The one-hot representation of $t_n$ is denoted by $\bi{y}_n$. $\bi{z}_n$ is fed to the FC layer with weights $\bi{W}\in\mathbb{R}^{D\times K}$ and bias $\bi{b}\in\mathbb{R}^K$, and the FC layer outputs $\bi{o}_n=\bi{W}\ensuremath{^{\text{T}}} \bi{z}_n + \bi{b}$.
The softmax function transforms $\bi{o}_n$ into a probability vector $\bi{h}_n$, whose elements are interpreted as the predicted probabilities of the classes. The cross-entropy loss, which takes $\bi{h}_n$ and $\bi{y}_n$ as input, is commonly used for classifier training. By updating the model parameters to minimize the loss function, a multi-class classification model $f\colon\mathcal{X}\rightarrow\mathcal{Y}$ is obtained. We denote the $i$th element of a vector \bi{a} by $a_i$ and the $(i,j)$ element of a matrix \bi{A} by $A_{i,j}$. \subsection{Related Loss Functions} In this section, we introduce several loss functions related to our research. Let class $\kappa$ be the important class. \subsubsection{Cross-Entropy Loss} First, we formulate the commonly used cross-entropy loss. When the number of samples is $N$ and the number of classes is $K$, the cross-entropy loss for multi-class classification is given by \begin{equation} \mathcal{L}_{\mathrm{ce}}=-\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K}\left\{y_{n, k} \log \left(h_{n, k}\right)\right\}. \label{eq:1} \end{equation} \subsubsection{Weighted Cross-Entropy Loss} Second, we introduce the Weighted Cross-Entropy loss (WCE), which imposes class-specific penalties on the cross-entropy loss. Let $\bi{w} \in \mathbb{R}_+^{K}$ be a weight vector whose $k$th element $w_k$ is the weight of class $k$. Penalizing the loss with $w_k$ for each class, WCE\cite{panchapagesan2016multi,aurelio2019learning,ho2019real} is given by \begin{equation} \mathcal{L}_{\text{wc}}= -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^K \left\{ w_{k} y_{n,k} \log \left(h_{n,k}\right) \right\}. \label{eq:2} \end{equation} For our goal, we set the $\kappa$th element of \bi{w} to a value greater than $1$ and the other elements to $1$. \subsubsection{Categorical Real-World-Weight Cross-Entropy Loss} Ho et al. \cite{ho2019real} introduced the Real-World-Weight Cross-Entropy (RWWCE), which penalizes each class's false negatives and false positives independently with different costs. The multi-class extension of RWWCE, termed Categorical RWWCE (CRWWCE), is defined by \begin{equation} \begin{split} \mathcal{L}_{\text{rw}}= &-\frac{1}{N} \sum_{n=1}^{N}\sum_{k=1}^{K} \left[c^{\text{fn}}_{k} y_{n,k} \log \left(h_{n,k}\right) \right.\\ & +\sum_{k^{\prime} \neq k} \left.C^{\text{fp}}_{k^{\prime},k} y_{n,k} \log \left(1-h_{n,k^{\prime}}\right)\right]. \end{split} \label{eq:3} \end{equation} Here $\bi{c}^{\text{fn}}\in\mathbb{R}_+^{K}$ is a weight vector whose elements penalize the false negatives of each class, and $\bi{C}^{\text{fp}}\in\mathbb{R}_+^{K\times K}$ is a square matrix with zero diagonal whose $(k,l)$ element is the weight penalizing the false positive that arises when a sample of class $l$ is misclassified as class $k$. For our goal, we set the $\kappa$th element of $\bi{c}^{\text{fn}}$ to a value greater than $1$ and the other elements to $1$. We also set the elements of the $\kappa$th column and the $\kappa$th row of $\bi{C}^{\text{fp}}$, except for the $(\kappa, \kappa)$ element, to values greater than $1$, the diagonal elements to $0$, and the other elements to $1$. \subsubsection{Wasserstein Loss} The Wasserstein loss\cite{frogner2015learning} penalizes misclassification by the Wasserstein distance between $\bi{y}_n$ and $\bi{h}_n$, using a distance matrix $\bi{C}\in\mathbb{R}_+^{K\times K}$ that defines the penalty for misclassification.
The Wasserstein loss is defined by \begin{equation} \mathcal{L}_{\text{ws}}=\frac{1}{N} \sum_{n=1}^{N} \left\{ \inf _{\bi{T} \in \Pi( \bi{h}_n,\bi{y}_n)}\langle\bi{T},\bi{C}\rangle-\lambda H( \bi{T}) \right\}, \label{eq:4} \end{equation} where \begin{equation} H(\bi{T})=-\sum_{k, k^{\prime}\in\mathcal{Y}} T_{k, k^{\prime}}\left( \log T_{k, k^{\prime}}-1 \right) , \end{equation} \begin{equation} \Pi( \bi{h},\bi{y})=\left\{ \bi{T} \in \mathbb{R}_{+}^{K \times K} \mid\bi{T} \mathbf{1}=\bi{h},\bi{T}\ensuremath{^{\text{T}}} \mathbf{1}= \bi{y}\right\} . \end{equation} The transport matrix \bi{T} in \eqref{eq:4} is approximated by the Sinkhorn-Knopp algorithm \cite{cuturi2013sinkhorn}. For our goal, we set the elements of the $\kappa$th column and the $\kappa$th row of $\bi{C}$, except for the $(\kappa, \kappa)$ element, to values greater than $1$, the diagonal elements to $0$, and the other elements to $1$. \subsubsection{ArcFace} Finally, we describe the additive angular margin loss (ArcFace)\cite{deng2019arcface} as a method that explicitly penalizes the angle between features and weights. ArcFace is defined by \begin{equation} \begin{split} &\mathcal{L}_{\text{ArcFace}}=\\ &\!-\!\frac{1}{N}\! \sum_{n=1}^{N}\! \log \!\frac{\exp \!\left(s \cos \left(\theta_{t_n}\!+\!m\right)\right)}{\!\exp \!\left(s \cos\! \left(\theta_{t_n}\!+\!m\right)\right)\!+\!\sum_{k \neq t_n}\! \exp \!\left(s \cos \theta_{k}\right)}, \end{split} \label{eq:12} \end{equation} where $m \in \mathbb{R}$ is the margin penalty and $\theta_{t_n}$ is given by \begin{equation} \theta_{t_n}=\arccos\left(\boldsymbol{W}_{t_n}\ensuremath{^{'\text{T}}} \boldsymbol{z}'_n\right), \label{eq:121} \end{equation} where $\boldsymbol{W}_{t_n}\ensuremath{^{'}}$ and $\boldsymbol{z}'_n$ are each normalized so that their $L^2$-norms are $1$. Equation \eqref{eq:121} means that $\theta_{t_n}$ is the radian angle between the weight vector $\boldsymbol{W}_{t_n}\ensuremath{^{'}}$ of the last FC layer and the feature vector $\boldsymbol{z}'_n$. ArcFace trains the model by adding an angular margin $m$ to $\theta_{t_n}$; at test time, no margin is added. Adding the margin improves the separability between classes, as explained in detail in Section \ref{34}. \section{Analysis of Class-sensitive Separation} \label{sec:analysis} \begin{figure}[tbp] \centerline{ \includegraphics[width=88mm]{fig1.eps} } \caption{ Contour plots of the loss values formed by each loss function. The vertical and horizontal axes represent the elements $z_1, z_2$ of the feature vector \bi{z}, and the contours represent the loss values, with yellow indicating high and blue indicating low loss values. The weight vectors $\bi{W}_1, \bi{W}_2$ and $\bi{W}_3$ are columns of the last FC layer's weight matrix. The ground truth weight vector $\bi{W}_1$ is shown as a red arrow; the non-ground-truth weight vectors $\bi{W}_2$ and $\bi{W}_3$ are shown as black arrows. The gray lines are the decision boundaries. } \label{fig:2} \end{figure} Let class $\kappa$ be an important class, such as a stop sign, and let the other classes, such as a no parking sign, be less important. When the misclassification of samples belonging to class $\kappa$ is significantly more harmful than that of samples in the other classes, we need to improve the recall of class $\kappa$ without reducing the overall accuracy. As we already discussed, threshold tuning does not attain this goal.
As we discuss later in this section, to achieve this goal, we need to find a feature representation in which the feature vectors of the important class are well separated from those of the other classes. In this section, we investigate the properties of existing loss functions in the feature space and how the separation in the feature space is induced when we penalize the important class using these loss functions. \subsection{Setting} To visualize the properties of the existing loss functions in detail, we consider a three-class classification problem and train a CNN whose feature vector dimension is set to two. Fig.~\ref{fig:2} shows the contour plots of the loss values in the two-dimensional feature space. The loss functions corresponding to these panels are the cross-entropy loss (top left), WCE (bottom left), ArcFace without margin (top right), and ArcFace with margin (bottom right)\footnote{This is consistent with the CAMRI loss defined later.}. We explain Fig.~\ref{fig:2} in detail using the cross-entropy loss as an example (Fig.~\ref{fig:2}, top left). Since it is a three-class classification problem, there are three corresponding weight vectors in the last FC layer, $\bi{W}_1,\bi{W}_2$, and $\bi{W}_3$. Fig.~\ref{fig:2} (top left) shows a contour plot of the cross-entropy loss in the two-dimensional feature space when the ground truth label is $1$. In Fig.~\ref{fig:2}, the weight vector $\bi{W}_1$ corresponding to class $1$, which we call the ground truth weight vector, is shown as a red arrow. The weight vectors $\bi{W}_2$ and $\bi{W}_3$ corresponding to classes $2$ and $3$ are shown as black arrows. The blue area in the direction of $\bi{W}_1$ indicates low loss values (since the ground truth label is $1$), and the yellow area in the opposite direction indicates high loss values. The last FC layer evaluates the inner product of the weight vector $\bi{W}_1$ and the feature vector \bi{z} as \begin{equation} \bi{W}_1\ensuremath{^{\text{T}}} \bi{z}= \| \bi{W}_1\| \|\bi{z} \| \cos \left( \theta_1 \right), \label{eq:110} \end{equation} where $\theta_1$ is the angle between $\bi{W}_1$ and $\bi{z}$. With this representation, the cross-entropy loss is given by \begin{align} \mathcal{L}_{\text{ce}}&=-\frac{1}{N}\sum_{n=1}^{N}\log \left(\frac{\exp \left( \| \bi{W}_{t_n}\| \|\bi{z}_n \|\cos \left( \theta_{t_n} \right) \right)} { \sum_{k=1}^{K}\exp \left( \| \bi{W}_{k}\| \|\bi{z}_n \|\cos \left( \theta_{k} \right)\right)}\right). \label{eq:010} \end{align} Equation \eqref{eq:010} can be minimized by minimizing $\theta_{t_n}$, i.e., by decreasing the angle between the features and the ground truth weight vector. Looking again at the contour plot of the cross-entropy loss (Fig.~\ref{fig:2}, top left), the region of low loss values spreads in the direction of $\bi{W}_1$. If we could sharpen the angle of the valley-like landscape shown in blue in the contour plot, we could expect the separation between classes in the feature space to increase, yielding a higher recall. \subsection{Comparison between with and without weighting penalty for cross-entropy loss} \label{32} Next, we compare the cross-entropy loss (Fig.~\ref{fig:2}, top left) and WCE (Fig.~\ref{fig:2}, bottom left), which is a cross-entropy loss penalizing the important class. Because the contour plots of the loss values obtained using CRWWCE and the Wasserstein loss have shapes similar to WCE, we omit them here.
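As an aside, contour plots like those in Fig.~\ref{fig:2} are straightforward to reproduce. The following sketch (our own illustrative code; the three weight vectors are chosen arbitrarily) evaluates the per-sample cross-entropy loss of \eqref{eq:010} for ground truth class $1$ on a grid of two-dimensional feature vectors; multiplying the loss by a class weight yields the corresponding WCE panel:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Three arbitrary weight vectors at 120-degree angles (illustration only).
angles = np.array([0.0, 2 * np.pi / 3, -2 * np.pi / 3]) + np.pi / 2
W = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2)

# Grid of two-dimensional feature vectors z = (z_1, z_2).
z1, z2 = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
Z = np.stack([z1, z2], axis=-1)                         # shape (200, 200, 2)

logits = Z @ W.T                                        # inner products W_k^T z
logits -= logits.max(axis=-1, keepdims=True)            # numerical stability
p = np.exp(logits)
p /= p.sum(axis=-1, keepdims=True)                      # softmax probabilities
loss = -np.log(p[..., 0])                               # CE, ground truth class 1

plt.contourf(z1, z2, loss, levels=30)
plt.xlabel("$z_1$"); plt.ylabel("$z_2$")
plt.colorbar(label="loss")
plt.show()
\end{verbatim}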
Even with the penalty on the important class, we can see that the angle discussed earlier is not sharpened. Instead, the contour is shifted toward the direction of $\bi{W}_1$. Since WCE simply increases the loss values for the important class by multiplying by a constant, it does not necessarily reduce the angle between the features of the important class and the corresponding weight vector. Therefore, the cost-sensitive learning approaches, including WCE, CRWWCE, and the Wasserstein loss, would not work well for our purpose. \subsection{Comparison of cross-entropy loss and L2-constrained softmax loss} We now turn our attention to Fig.~\ref{fig:2} (top right). The L2-constrained softmax loss\cite{ranjan2017l2} explicitly treats the angle between features and weights as a loss. In the training process, $\bi{W}_i$ and \bi{z} are normalized so that $\|\bi{W}_i\|=1$ for $i=1,2,3$ and $\|\bi{z}\|=1$. With this, \eqref{eq:010} transforms into \begin{align} \mathcal{L}_{\text{L2}}&=-\frac{1}{N}\sum_{n=1}^{N}\log \left(\frac{\exp \left( s\cos \left( \theta_{t_n} \right) \right)} { \sum_{k=1}^{K}\exp \left( s\cos \left( \theta_{k} \right)\right)}\right), \label{eq:011} \end{align} where $s$ is the inverse of the temperature parameter. Comparing Fig.~\ref{fig:2} (top right) and Fig.~\ref{fig:2} (top left), we can see that the shape of the contour is different, while the angular sharpness of the valley-like landscape is the same. Thus, this does not improve the separation between classes, either. \subsection{Comparison of angular-based loss with and without margin} \label{34} ArcFace introduces an additive constant penalty $m$ on $\theta_{t_n}$ of the L2-constrained softmax loss, which is called the additive angular margin. ArcFace is defined by \begin{equation} \begin{split} &\mathcal{L}_{\text{ArcFace}}=\\ &\!-\!\frac{1}{N}\! \sum_{n=1}^{N}\! \log \!\frac{\exp \!\left(s \cos \left(\theta_{t_n}\!+\!m\right)\right)}{\!\exp \!\left(s \cos\! \left(\theta_{t_n}\!+\!m\right)\right)\!+\!\sum_{k \neq t_n}\! \exp \!\left(s \cos \theta_{k}\right)}. \end{split} \label{eq:012} \end{equation} Comparing the cases without (Fig.~\ref{fig:2}, top right) and with (Fig.~\ref{fig:2}, bottom right) the margin penalty, we can see that the valley-like landscape becomes angularly sharper. Since the angular margin corresponds to a geodesic distance margin on the hypersphere\cite{deng2019arcface}, ArcFace improves the separation by making the feature distribution of each class on the hypersphere more compact, which is called intra-class compactness. Therefore, fine-tuning a margin penalty applied equally to all classes is expected to improve the generalization ability of the classifier. \subsection{Proposal of CAMRI Loss} Considering the discussion above, we expect that the recall of a specific important class can be improved without sacrificing the overall accuracy by concentrating the feature vectors along the ground truth weight vector and separating them from the non-ground-truth weight vectors. For this purpose, we propose the Class-sensitive additive Angular MaRgIn Loss (CAMRI loss), which adds a margin only to the important class. Let $\kappa$ be the important class. The margin vector is defined as the scaled one-hot vector $\bi{m} = \mu[0,\hdots,1, \hdots, 0]\ensuremath{^{\text{T}}}$, whose $\kappa$th element is $\mu\geq0$ while all other elements are $0$. Using this, CAMRI loss is defined by \begin{equation} \begin{split} &\mathcal{L}_{\text{CAMRI}}=\\ &\!-\!\frac{1}{N}\! \sum_{n=1}^{N}\!
\log \!\frac{\exp \!\left(s \cos \left(\theta_{t_n}\!+\!m_{t_n}\right)\right)}{\!\exp \!\left(s \cos\! \left(\theta_{t_n}\!+\!m_{t_n}\right)\right)\!+\!\sum_{k \neq t_n}\! \exp \!\left(s \cos \theta_{k}\right)}, \end{split} \label{eq:13} \end{equation} where $\theta_{t_n}$ is defined by \eqref{eq:121} and $s$ is the inverse of the temperature parameter. For each training sample, the margin $\mu$ is added if the label $t_n$ is $\kappa$; otherwise, no margin is added. We note that the margin is applied only in the training process. Since CAMRI loss adds the angular margin only to the important class, the intra-class compactness of the important class is expected to increase compared to the other classes. \section{Experiments} \label{sec:experiment} \begin{table*}[tbp] \caption{ Mean and standard deviation of the recall of the important class (upper row) and the accuracy (lower row) over ten trials.} \begin{center} \footnotesize \begin{tabularx}{\textwidth}{@{}l*{2}{C}c*{2}{C}c*{2}{C}c@{}} \toprule & & CIFAR-10 & & & GTSRB & & & AwA2 & \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & cat & dog & airplane & limit-80-end & no-passing-end & limit-80 & mouse & beaver & moose \\ method & (worst1) & (worst2) & (median) & (worst1) & (worst2) & (median) & (worst1) & (worst2) & (median) \\ \midrule CAMRI & 0.792$\pm$0.027 & \textbf{0.850}$\pm$0.019 & \textbf{0.917}$\pm$0.018 & \textbf{0.835}$\pm$0.033 & \textbf{0.960}$\pm$0.035 & \textbf{0.995}$\pm$0.002& \textbf{0.129}$\pm$0.046 & \textbf{0.238}$\pm$0.053 & \textbf{0.645}$\pm$0.053 \\ (proposal) & 0.882$\pm$0.004 & 0.882$\pm$0.001 & 0.879$\pm$0.002 & 0.980$\pm$0.002 & 0.984$\pm$0.003 & 0.981$\pm$0.001& 0.667$\pm$0.013 & 0.663$\pm$0.014 & 0.655$\pm$0.025 \\ \midrule ArcFace & 0.765$\pm$0.040 & 0.842$\pm$0.031 & 0.910$\pm$0.028 & 0.808$\pm$0.038 & 0.941$\pm$0.070 & 0.993$\pm$0.002& 0.101$\pm$0.061 & 0.187$\pm$0.052 & 0.612$\pm$0.046 \\ & 0.880$\pm$0.004 & 0.878$\pm$0.006 & 0.880$\pm$0.004 & 0.980$\pm$0.003 & 0.982$\pm$0.002 & 0.981$\pm$0.003 & 0.671$\pm$0.008 & 0.666$\pm$0.009 & 0.676$\pm$0.008 \\ \midrule Wasserstein& - & - & - & - & - & - & - & - & - \\ & - & - & - & - & - & - & - & - & - \\ \midrule CRWWCE & - & - & - & 0.796$\pm$0.030 & 0.923$\pm$0.047 & - & - & - & - \\ & - & - & - & 0.980$\pm$0.002 & 0.980$\pm$0.004 & - & - & - & - \\ \midrule WCE & \textbf{0.802}$\pm$0.034 & - & - & 0.829$\pm$0.038 & 0.945$\pm$0.038 & - & - & - & - \\ & 0.879$\pm$0.002 & - & - & 0.980$\pm$0.002 & 0.981$\pm$0.004 & - & - & - & - \\ \midrule cross-entropy & 0.738$\pm$0.041 & 0.834$\pm$0.030 & 0.888$\pm$0.022 & 0.760$\pm$0.049 & 0.870$\pm$0.097 & 0.989$\pm$0.003 & 0.118$\pm$0.051 & 0.145$\pm$0.045 & 0.559$\pm$0.066 \\ (baseline) & 0.879$\pm$0.005 & 0.879$\pm$0.005 & 0.879$\pm$0.005 & 0.980$\pm$0.002 & 0.980$\pm$0.002 & 0.980$\pm$0.002 & 0.651$\pm$0.013 & 0.651$\pm$0.013 & 0.651$\pm$0.013 \\ \bottomrule \end{tabularx} \label{tab:1} \end{center} \end{table*} \subsection{Experiment Settings} \subsubsection{Datasets} We used three datasets, CIFAR-10\cite{krizhevsky2009learning}, GTSRB\cite{Stallkamp-IJCNN-2011}, and AwA2\cite{8413121}, for the multi-class classification problem. CIFAR-10 is a 10-class dataset with a balanced number of samples per class; it has 50,000 training images and 10,000 test images of size $32\times32\times3$. GTSRB is a color image dataset of German road signs. It has 43 classes with an imbalanced number of samples per class and contains 39,209 training images and 12,630 test images.
We resized the images to $48\times48\times3$ because their sizes vary from $15\times15\times3$ to $250\times250\times3$. AwA2 has 50 classes with an imbalanced number of samples per class and contains 37,322 animal images. We resized the images to $64\times64\times3$ and divided the dataset into 26,157 training images and 11,165 test images. For all datasets, we normalized the pixels so that their values lie within the range $[0, 1]$. \subsubsection{CNN Setups} We used TensorFlow\cite{abadi2016tensorflow} to implement the loss functions and a deep CNN model. The CNN model consists of a feature extractor and a linear classifier. The feature extractor contains convolution layers, batch normalization, max-pooling, dropout layers, and a global average pooling layer. The linear classifier contains two FC layers. All FC and convolution layers except the last FC layer use ReLU activations. We set the batch size to $64$ and the number of epochs to $300$. We used Adam\cite{kingma2014adam} with a learning rate of $0.001$ for optimization. \subsubsection{Parameter Settings} \label{sec:param} The ranges of the weight and margin parameters for each loss function were determined by preliminary experiments so that the recall of the important class becomes high. In the following, let $\kappa$ be the index of the important class. {\bf WCE.} The weight vector \bi{w} of WCE is a $K$-dimensional vector. Its $\kappa$th element was set to $4, 8, 12,\cdots, 40$, and the other elements of \bi{w} were set to $1$. {\bf CRWWCE.} The weight vector $\bi{c}^{\text{fn}}$, a $K$-dimensional vector, and the matrix $\bi{C}^{\text{fp}}$, a $K\times K$ matrix, of CRWWCE were set as follows. The $\kappa$th element of $\bi{c}^{\text{fn}}$ was set to $1, 4, 8, 12,\cdots, 40$, and the other elements were set to $1$. Each element of $\bi{C}^{\text{fp}}$ was set as follows: (1) all elements in the $\kappa$th row and $\kappa$th column were varied as $1.0,1.2, \cdots, 4.0$, (2) all diagonal elements were set to $0$, and (3) the other elements were set to $1$. {\bf Wasserstein loss.} The distance matrix \bi{C} of the Wasserstein loss is a $K \times K$ matrix. Each element of \bi{C} was set as follows: (1) all elements in the $\kappa$th row and $\kappa$th column were varied as $1.0,1.2, \cdots, 4.0$, (2) all diagonal elements were set to $0$, and (3) the other elements were set to $1$. {\bf ArcFace.} The scalar margin parameter $m$ was set to $0,\frac{\pi}{64},\frac{2\pi}{64},\cdots,\frac{8\pi}{64}$. The inverse of the temperature parameter $s$ was set to $2^i$ with $i=0,1,\cdots,6$. {\bf CAMRI loss.} The margin vector \bi{m} of CAMRI loss is a $K$-dimensional vector. Its $\kappa$th element was set to $0,\frac{\pi}{64},\frac{2\pi}{64},\cdots,\frac{8\pi}{64}$, and the other elements were set to $0$. The inverse of the temperature parameter $s$ was set to $2^i$ with $i=0,1,\cdots,6$. \subsubsection{Evaluation Methods} We compare the recall of each important class and the accuracy of the models obtained by training with CAMRI loss, ArcFace, the Wasserstein loss, CRWWCE, WCE, and the cross-entropy loss. We then investigate whether the recall of the important class is improved without sacrificing accuracy. For each dataset, we tested three important classes: those with the worst, second-worst, and median recall among all classes when the model was trained with the cross-entropy loss.
With this criterion, \{cat, dog, airplane\}, \{limit-80-end, no-passing-end, limit-80\}, and \{mouse, beaver, moose\} were chosen as the important classes for CIFAR-10, GTSRB, and AwA2, respectively. We ran ten training trials for all combinations of the parameters described in the previous subsection. The results are summarized in Table~\ref{tab:1} according to the following criteria. \begin{enumerate} \item The recall and accuracy obtained with the regular cross-entropy loss are used as the baseline. \item Among the penalty parameter settings specified in the previous subsection, the results that maintain equal or better accuracy compared to the baseline are selected. The results of methods that do not achieve equal accuracy under any parameter setting are not shown in the table. \label{itm:ext} \item For each method, the results obtained with the parameter setting that achieves the highest recall are shown. \item The results of the method that achieved the highest recall are shown in bold. \end{enumerate} \subsection{Performance Evaluation} The results are summarized in Table~\ref{tab:1}. As discussed already, improving recall by sacrificing accuracy is extremely easy. For this reason, cells in Table~\ref{tab:1} are left blank when the accuracy is less than the baseline (cross-entropy loss), even if the corresponding recall is improved. From Table~\ref{tab:1}, we can see that CAMRI loss improved the recall of the important classes in all settings. ArcFace also improves the recall of the important classes in all settings except for the mouse class of AwA2. Comparing CAMRI loss and ArcFace, CAMRI loss achieves higher recall than ArcFace, which suggests that adding the angular margin only to the important class induces better results. The Wasserstein loss, CRWWCE, and WCE improve the recall only by sacrificing the accuracy in many cases. This is because these methods do not sharpen the angle between the distribution of the feature vectors and the ground truth weight vectors, as described in Section \ref{sec:analysis}; they have to decrease the accuracy to improve the recall. \subsection{Effects of Margin on Separability} \begin{figure}[tb] \centerline{ \includegraphics[width=85mm]{fig2.eps}} \caption{ Normalized feature vectors $\bi{z}^{\prime}$ of MNIST (dots) and weight vectors (solid lines) in the two-dimensional feature space. The vertical and horizontal axes represent the elements $z_1^{\prime}, z_2^{\prime}$ of $\bi{z}^{\prime}$. The left panel shows the L2-constrained softmax loss, the right panel shows ArcFace, and the bottom panel shows CAMRI loss. Class "3" (shown in red) was set as the important class. } \label{fig:6} \end{figure} From Table~\ref{tab:1}, we can see that adding the angular margin only to the important class (i.e., CAMRI loss) attains better results than adding the angular margin to all classes equally (i.e., ArcFace). We investigate the reason for this result by visualizing the feature space and observing the effect of the class-sensitive angular margin on the separability of the features. For visualization purposes, we trained a CNN with a two-dimensional feature space on MNIST\cite{lecun1998gradient}. Fig.~\ref{fig:6} shows the ground truth weight vectors and the corresponding distribution of normalized feature vectors in the feature space when the loss function has no angular margin (L2-constrained softmax loss), an equal angular margin for all classes (ArcFace), and a class-sensitive angular margin (CAMRI loss).
Here, we set "3" as the important class (shown in red). Comparing the case without a margin (Fig.~\ref{fig:6}, left) and with an equal margin for all classes (Fig.~\ref{fig:6}, right), we see no improvement in the separability of class "3". On the other hand, when the margin is added only to "3" (Fig.~\ref{fig:6}, bottom), the weight vector for "3" becomes well separated from the adjacent weight vectors. The features are distributed in the direction of the weight vector for "3", and the separability of class "3" from the other classes is relatively increased. This increase in separability is caused by adding the angular margin penalty only to the important class, and it yields the improvement in recall. \subsection{Reducing Intra-class Angular Variance of Important Class by Class-sensitive Angular Margin} \begin{table}[tb] \caption{ The standard deviation of the radian angle between the feature vectors and the ground truth weight vector for each method. } \label{tab:2} \begin{center} \begin{tabularx}{\linewidth}{lCCCCC} \toprule \multicell{}{class} & CAMRI & ArcFace & \multicell{WCE}{($w=4$)} & \multicell{ArcFace}{($m=0$)} & \multicell{Cross-}{entropy} \\ \midrule cat (important) & \textbf{0.0695} & 0.0384 & 0.0603 & \textbf{0.0928} & 0.0842 \\ min excl. cat & 0.0948 & \textbf{0.0168} & \textbf{0.0540} & 0.0937 & \textbf{0.0570} \\ median excl. cat & 0.1143 & 0.0403 & 0.0766 & 0.1009 & 0.0806 \\ \midrule dog (important) & \textbf{0.0717} & 0.0256 & 0.0667 & 0.1009 & 0.0815 \\ min excl. dog & 0.0919 & \textbf{0.0158} & \textbf{0.0602} & \textbf{0.0928} & \textbf{0.0570} \\ median excl. dog & 0.1116 & 0.0328 & 0.0835 & 0.1009 & 0.0806 \\ \midrule airplane (important) & \textbf{0.0367} & 0.0512 & 0.0727 & 0.0512 & 0.0806 \\ min excl. airplane & 0.0439 & \textbf{0.0441} & \textbf{0.0627} & \textbf{0.0441} & \textbf{0.0570} \\ median excl. airplane & 0.0558 & 0.0555 & 0.0714 & 0.0555 & 0.0815 \\ \bottomrule \end{tabularx} \end{center} \end{table} We observed that the class-sensitive angular penalty enhances the intra-class compactness of the feature vectors in the feature space. To measure the intra-class compactness, we computed the standard deviation of the angle between the ground truth weight vector and the corresponding feature vectors. Table~\ref{tab:2} shows the standard deviations of the angles when cat, dog, and airplane are set as the important class (average of ten trials). For each important class, the minimum and the median over the remaining classes are also shown, and the smallest value is shown in bold. The margin parameter values of CAMRI loss and ArcFace follow the settings of Table~\ref{tab:1}. WCE is measured with a fixed value of $w=4$. The value of $s$ for ArcFace ($m=0$) is the same as that of CAMRI loss. Comparing the standard deviation of the important class with those of the other classes, we see that CAMRI loss makes the standard deviation of the important class, on average, $0.783$ times the minimum over the other classes. In contrast, the other methods do not necessarily give the important class the lowest standard deviation, which means that they do not improve its intra-class compactness even with a penalty. We remark that our method does not necessarily attain the lowest standard deviation for the important class compared with the other methods.
Rather, the standard deviation of the important class is made relatively lower than that of the other classes, which appears to be sufficient to improve the recall of the important class. \subsection{Effects of Penalty Changes on Recall and Accuracy} \begin{figure}[tb] \centerline{ \includegraphics[width=85mm]{fig3.eps} } \caption{ Changes in the recall of the important class (blue line) and the accuracy (orange line) under varying penalties. The left column shows WCE, and the right column shows CAMRI loss. From top to bottom, the important class is cat, dog, and airplane of CIFAR-10. The horizontal axis is the value of the penalty ($w_\kappa$ for WCE and $m_\kappa$ for CAMRI loss); the vertical axis is the recall of the important class and the accuracy. The lines show the mean over ten trials, and the bands show the standard deviation. } \label{fig:5} \end{figure} To further investigate the relationship between the penalty given to the important class and the recall/accuracy, we evaluated how the recall and the accuracy change with respect to the penalty parameters of CAMRI loss and WCE. The cat, dog, and airplane classes of CIFAR-10 were each trained as the important class while varying the penalty parameters. Fig.~\ref{fig:5} shows how the recall and the accuracy change as the angular margin $m_\kappa$ of CAMRI loss and the weight penalty $w_\kappa$ of WCE vary, where $\kappa$ is the index of the important class. As $w_\kappa$ increases, WCE improves the recall but sacrifices the accuracy. In contrast, as $m_\kappa$ increases, CAMRI loss improves the recall while the accuracy is almost maintained. These results show that WCE improves the recall by controlling the trade-off between recall and accuracy, while CAMRI loss improves the recall by acquiring a feature representation with better intra-class compactness, without sacrificing the accuracy. \subsection{Impacts of Improving Recall of the Important Class on the Other Classes} \begin{figure*}[tb] \centerline{ \includegraphics[width=180mm]{fig4.eps} } \caption{ (a) The baseline confusion matrix of CIFAR-10 obtained by training with the cross-entropy loss. (b), (c), and (d) show the differences between the confusion matrix with the cross-entropy loss (i.e., (a)) and that with CAMRI loss, where cat, dog, and airplane are set as the important class, respectively. Each result is the average of ten trials and corresponds to the results shown in Table~\ref{tab:1}. } \label{fig:3} \end{figure*} When CAMRI loss improves the recall of an important class while maintaining accuracy, the recall of other classes may decrease. We therefore examined the differences between the confusion matrices of CAMRI loss and the baseline. Fig.~\ref{fig:3} (a) shows the confusion matrix when CIFAR-10 is trained with the cross-entropy loss. Fig.~\ref{fig:3} (b), (c), and (d) show the differences between the confusion matrices obtained with the cross-entropy loss and with CAMRI loss, where cat, dog, and airplane are set as the important class, respectively. All panels are averages over ten training trials. When the recall of the important class (say, class A) improves, some samples of class A that were previously misclassified become correctly classified. At the same time, the recall of some other class (say, class B) decreases. Between classes A and B, there is a trend of a decreasing number of class A samples misclassified as class B. Looking at Fig.~\ref{fig:3} (b), the recall of cat (the important class) improves, while the recall of bird and dog decreases.
At the same time, the number of cat samples misclassified as bird and as dog decreases, while the recall of bird and dog decreases. Fig.~\ref{fig:3} (c) shows the same tendency: the number of dog samples (the important class) misclassified as bird decreases, and the recall of dog improves while the recall of bird decreases. Similarly, Fig.~\ref{fig:3} (d) shows that the number of airplane samples (the important class) misclassified as bird decreases, and the recall of airplane improves while the recall of bird decreases. These results show, first, that samples of the important class that were previously misclassified become correctly classified, which improves the recall of the important class. Second, the recall of the classes into which those samples were previously misclassified tends to decrease. These two observations suggest that the separation between the important class and its neighboring classes shifts when CAMRI loss improves the recall of the important class. \section{Conclusion} \label{sec:conclusion} We proposed CAMRI loss, which improves the recall of an important class without sacrificing the overall accuracy. First, by analyzing the contour plots of existing loss functions, we found that reducing the angle between the feature vectors and the ground truth weight vector (intra-class compactness) is necessary for improving the separation of classes in the feature space. To achieve this, we introduced a class-sensitive additive angular margin. Experimental results showed that CAMRI loss improved the recall of the important class without sacrificing accuracy, in contrast to the other methods. We also experimentally confirmed that CAMRI loss makes the intra-class angular variance of the important class relatively smaller than that of the other classes. In this study, we assumed a single important class, but real-world applications often involve multiple important classes. Whether our proposal can improve the recall of multiple important classes simultaneously remains future work. \section*{Acknowledgement} This work is partly supported by the Japan Science and Technology Agency (JST), CREST JPMJCR21D3, and the Japan Society for the Promotion of Science (JSPS), Grants-in-Aid for Scientific Research 19H04164 and 18H04099. \bibliographystyle{IEEEtran}
{ "timestamp": "2022-09-23T02:12:43", "yymm": "2209", "arxiv_id": "2209.10920", "language": "en", "url": "https://arxiv.org/abs/2209.10920" }
\section{Introduction} In this article, we introduce Liesel,\footnote{\url{https://liesel-project.org}} a probabilistic programming framework for the development and estimation of a broad range of Bayesian models in Python. The framework, named after a fountain in its birth city G\"{o}ttingen, Germany, allows the user to represent statistical models as directed acyclic graphs (DAGs) and to implement tailor-made Markov chain Monte Carlo (MCMC) algorithms. Liesel provides many default components for these tasks. These components are easy to extend and liberate the researcher from the time-consuming duty of re-implementing the basic parts of their models and inference algorithms, giving them the opportunity to focus on the novel aspects of their research. This way, Liesel meets the requirements of many computational statisticians working on new methods or extensions of existing ones. Currently, the framework is particularly useful for developing semi-parametric regression models, since it includes all components required for this model class, but it can easily be extended beyond these models. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/workflow} \caption{The standard workflow using the Liesel framework. When working with semi-parametric regression models, the first step is usually to create a Liesel model with the help of RLiesel. Then, the model graph is manipulated to accommodate newly developed features, and finally, Goose is used to develop an MCMC algorithm combining standard components like NUTS with user-defined kernels if required. The framework is, however, very flexible. The Liesel-Model library is not limited to semi-parametric regression models but can handle any Bayesian network expressed as a DAG. Goose communicates with the model via an interface which is also available for PyMC models or even self-written, JAX-compatible model representations.} \label{fig:liesel-workflow} \end{figure} The Liesel framework consists of three main components: Goose, an MCMC library; Liesel-Model, a class system for representing statistical models as DAGs; and RLiesel, an R interface to conveniently set up semi-parametric regression models. The components and their relationships are illustrated in Figure~\ref{fig:liesel-workflow}. A standard workflow with Liesel involves the following steps: First, a semi-parametric regression model is configured with RLiesel, returning a Liesel-Model object. Second, the Liesel-Model object is modified for the research question at hand if required. Third, MCMC estimation is performed with Goose, potentially with different sampling schemes. Finally, Goose's utility functions can be used for model and estimation diagnostics. Before we proceed to describe the three components of Liesel in more detail, we would like to point out that Liesel is by no means limited to semi-parametric regression models. In fact, the Liesel-Model library can be used to represent any model falling into the category of Bayesian networks, including, for example, regression models, spatial models, change-point models, Gaussian process models or Bayesian neural networks. For this rich model class, which may involve discrete model parameters, there is, to the best of our knowledge, no one-size-fits-all MCMC algorithm. For this reason, Goose encourages researchers to use their expertise to design an optimal sampling scheme for their specific problem by providing a set of building blocks, which can be used to extend and replace standard MCMC algorithms.
Moreover, Goose is not limited to the Liesel-Model library. As indicated in Figure~\ref{fig:liesel-workflow}, the Liesel framework is designed to be modular, which allows Goose to be agnostic about the concrete model implementation. Goose can also be used to estimate PyMC models or user-defined, JAX-compatible model implementations. \subsection{Software components} \paragraph{Liesel-Model} The model building library of Liesel (called Liesel-Model in this article to distinguish it from the Liesel probabilistic programming framework as a whole) facilitates the development of complex statistical models by allowing the user to represent them as directed acyclic graphs (DAGs). DAGs are easy to reason about and to manipulate. In Liesel, each node of a DAG represents either data or a computation. The edges indicate data flow or, put differently, how the value of a node depends on the other nodes. Hence, the relationship between the model parameters and the conditional distributions of the model naturally translates to a DAG. Liesel provides methods to alter, remove or replace subgraphs of a model. This way, the user can extend or modify a given model, for example, a semi-parametric regression model created with RLiesel. More specifically, a prior in the model hierarchy can be replaced by updating the corresponding subgraph. This feature makes Liesel especially well-suited for the development of new statistical models, and in combination with RLiesel, it can simplify research on semi-parametric regression models significantly. \paragraph{Goose} Liesel's MCMC library is called Goose. To perform MCMC estimation, one needs to construct a Markov chain with an equilibrium distribution that matches the target distribution, i.e.~the posterior distribution. The chain is simulated for a given number of iterations, and the draws from the chain are used to approximate the posterior distribution. While a valid MCMC algorithm is mathematically guaranteed to converge to the posterior distribution, the convergence can be slow in practice. For this reason, most MCMC algorithms need to be tuned, i.e.~they need to learn some hyperparameters during a warmup phase to work efficiently. Goose supports the user in building an MCMC algorithm for their estimation target by offering a broad range of well-tested kernels that can be combined in flexible ways to construct problem-specific MCMC algorithms. In this context, a kernel is an algorithm that transitions a part of the parameter vector to a new state within an MCMC iteration. Most kernels in Goose also implement an automatic tuning procedure, which helps achieve high computational efficiency without requiring manual adjustment of the kernel hyperparameters. The user can combine standard kernels like the No-U-Turn Sampler (NUTS) provided by Liesel with self-implemented ones, e.g.~specific Gibbs updates. Of course, Goose also supports using a single kernel like NUTS for the full parameter vector, as in Stan. \paragraph{RLiesel} The RLiesel package for R is built on top of the Liesel-Model library. It can be used to configure semi-parametric regression models with the convenient R formula notation. The models are represented as DAGs using the Liesel node and model classes and can be manipulated to incorporate new developments, e.g.~new predictor components or prior hierarchies. Finally, the user can take advantage of a default sampler setup or build a custom MCMC algorithm for their model using Goose.
RLiesel is based on the \texttt{reticulate} package, which allows for a seamless integration of Python and R. With RLiesel, we strive to make Liesel accessible to the statistics community, where R is the predominant language, and to allow for the integration of Liesel with many popular R-based post-sampling utilities. RLiesel not only demonstrates how Liesel can be used to implement complex statistical models, but it can also serve as a solid basis for further methodological research on the popular model class of semi-parametric regression. Semi-parametric regression has received a lot of attention among applied statisticians in recent years and is closely related to the concepts of structured additive distributional regression \citep{Klein2015Multivariate} and generalized additive models for location, scale and shape \citep[GAMLSS, ][]{Rigby2005}. These models allow the researcher to explore complex relationships between explanatory and response variables, including linear, non-linear, random and spatial effects. Many of them are also multi-predictor models, where different features of the response distribution such as the variance, skewness or kurtosis can be related to covariate information. Due to its generality, semi-parametric regression can be understood as an ``umbrella'' model class comprising many interesting models, which pose a broad range of statistical and computational challenges. RLiesel and Liesel allow the statistician to address these issues with a set of well-tested building blocks, an intuitive graph-based model representation and API, and a modular library for MCMC inference. This is particularly important due to the complexity of the model class, which would make an implementation from scratch a very time-consuming task. \subsection{Related software} Most statistical software packages for Bayesian inference can be classified into software for a specific model class on the one hand and general probabilistic programming languages (PPLs) on the other hand. Liesel and RLiesel aim to cover a middle ground between these two approaches: RLiesel facilitates the definition of semi-parametric models, while Liesel-Model and Goose are capable of expressing and estimating a broad range of statistical models. Hence, Liesel has similar capabilities as general-purpose PPLs like Stan \citep{SDT2022}, JAGS \citep{Plummer2022}, NIMBLE \citep[the successor to BUGS,][]{deValpine2017} or PyMC \citep{Salvatier2016}. Unlike these software projects, however, Liesel features a graph representation that allows the model to be modified before estimation. Furthermore, with Liesel, users have full control over the estimation algorithm. Stan and JAGS provide only very limited options to customize the MCMC algorithm. In Stan, NUTS or HMC can be used, or alternatively a mean-field variational inference method. Certain parameters of the samplers, e.g.~the initial step size or the target acceptance rate, can be configured. However, block-based sampling is not possible and user-implemented samplers cannot be integrated. Moreover, discrete parameters cannot be modeled with Stan, since it relies on gradient-based samplers. Compared to Stan, NIMBLE allows for a more detailed configuration of the MCMC algorithm. For example, the default samplers can be reordered or replaced, even with user-defined samplers. In contrast to Liesel, NIMBLE lacks capabilities for automatic differentiation and consequently does not provide any gradient-based samplers.
Moreover, NIMBLE restricts the compilation of user-defined functions to a subset of the R programming language, which makes third-party libraries difficult to use, while Liesel can wrap code from other JAX-based libraries. PyMC also offers some options to customize the MCMC algorithm but does not go as far as Liesel, and similar to other general-purpose PPLs, it does not feature a mutable model object. For complex models or large datasets, general-purpose PPLs may be slow or unable to sample the model at all. In these situations, model-specific software remains important, and modeling frameworks with customizable MCMC algorithms like Liesel or PyMC may serve as a basis for the implementation of model-specific solutions. Its flexible model building library sets Liesel apart from other, more specialized software. Similar to \texttt{brms} \citep{Buerkner2017}, which provides an interface for various types of multi-level models in Stan, RLiesel provides an interface for semi-parametric regression models in Liesel. RLiesel's features are comparable to those of other software in the field like \texttt{mgcv} \citep{Wood2022}, \texttt{gamlss} \citep{Stasinopoulos2017}, \texttt{GJRM} \citep{Marra2022}, BayesX \citep{Brezger2005} and \texttt{bamlss} \citep{Umlauf2021}. Its approach is different, however, in that the intermediate graph-based model representation can be modified and extended, allowing for the implementation of new models that are derived from a base model. BayesX was one of the first software packages for fast MCMC inference in semi-parametric regression models with spatial covariate effects. The software cannot be extended easily, however, restricting the user to the pre-defined predictor components (i.e.~linear, non-linear, spatial covariate effects, etc.). \texttt{bamlss} is another Bayesian software package that allows users to define their own predictor components, which need to be linear in a basis expansion of the covariates, with the corresponding regression coefficients following a (potentially degenerate) multivariate normal prior. In that regard, the model graph of Liesel is more expressive and more flexible. The inference procedure in \texttt{bamlss} can be configured with the \texttt{optimizer} and \texttt{sampler} arguments, but a comprehensive collection of MCMC kernels as in Goose is missing. Automatic differentiation and high-performance computing hardware are also not supported in \texttt{bamlss}. Finally, the packages \texttt{mgcv} and \texttt{GJRM} are not primarily focused on Bayesian inference, although \texttt{mgcv} offers an interface to JAGS via the \texttt{jagam()} function. In contrast to Liesel, both packages have an exclusive focus on semi-parametric regression using basis function approaches. \subsection{Technology stack} Liesel uses a modern machine learning technology stack for the efficient implementation of the model graph and the MCMC kernels. In particular, Liesel depends on the Python packages NumPy \citep{Harris2020}, JAX \citep{Bradbury2022}, BlackJAX \citep{Lao2022} and TensorFlow Probability \citep{Dillon2017}. JAX, a library for scientific computing with support for automatic differentiation (AD) and just-in-time (JIT) compilation, is of particular importance for Liesel, since its features enable the implementation of computationally efficient inference algorithms. For example, when using reverse-mode AD, the value and the gradient of the log-posterior of a model can both be evaluated in the same amount of time -- up to a constant.
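As a brief illustration of this point, consider the following toy example (our own code with a hand-coded log-posterior, not a Liesel model): \texttt{jax.value\_and\_grad} returns the log-probability and its gradient in a single call, and the resulting function can additionally be JIT-compiled with \texttt{jax.jit}:
\begin{verbatim}
import jax
import jax.numpy as jnp

# Toy unnormalized log-posterior of a linear model with a
# standard normal prior on the coefficients.
def log_prob(beta, X, y):
    log_prior = -0.5 * jnp.sum(beta**2)
    log_lik = -0.5 * jnp.sum((y - X @ beta)**2)
    return log_prior + log_lik

X = jnp.ones((10, 2))
y = jnp.zeros(10)
beta = jnp.array([0.5, -0.3])

# Reverse-mode AD: value and gradient in one (JIT-compiled) call.
f = jax.jit(jax.value_and_grad(log_prob))
value, grad = f(beta, X, y)
\end{verbatim}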
Furthermore, JAX can run its computations on CUDA-enabled graphics cards, and on even more powerful tensor processing units (TPUs) or networks thereof. Liesel runs on Linux, macOS and, with some limitations, on Windows,\footnote{JAX, one of Liesel's dependencies, does not provide official builds for Windows. However, JAX can either be built by the user or run using the Windows Subsystem for Linux (WSL).} and can be used on laptops, desktop computers and servers. Liesel's development is hosted on GitHub,\footnote{\url{https://github.com/liesel-devs/liesel}} where bugs can be reported and new features can be requested. The latest release of Liesel, 0.1.3 at the time of writing, is also available on the Python Package Index (PyPI). The remainder of this article is organized as follows: In Section~\ref{sec:liesel-model}, the Liesel-Model library is discussed. Section~\ref{sec:goose} describes Liesel's MCMC library Goose, its main design goals, and the interfaces that allow the user to implement their own MCMC kernels and warmup schemes. RLiesel, the R interface for semi-parametric and distributional regression, is covered in Section~\ref{sec:rliesel} together with some theoretical background on these model classes. Finally, Section~\ref{sec:case-study} describes a case study showing how the components of the Liesel framework can be used together to evaluate different MCMC algorithms on a semi-parametric regression model. The article concludes with a discussion in Section~\ref{sec:discussion}. \section{Liesel: Developing probabilistic graphical models} \label{sec:liesel-model} \emph{\textbf{Please note:} The model building library of Liesel is going to receive a major update in version 0.2, which we plan to release in fall 2022. The arXiv preprint will be updated after the release to reflect the changes in version 0.2. For this reason, we focus on the abstract concepts and do not present any code examples of the Liesel API in the current version of this section.} The model building library of Liesel allows the user to express a broad range of (typically Bayesian) statistical models as probabilistic graphical models (PGMs). Particular attention is paid to the representation of semi-parametric regression models, which are described in Section~\ref{sec:rliesel} and for which a number of convenience functions are provided. In general, however, almost any statistical model can be expressed with Liesel. The PGM representation allows for a convenient factorization of the log-probability of the model (or the unnormalized log-posterior in a Bayesian context). It is also the basis for the user interface, which can be used to update the nodes in a natural way and to modify the structure of the graph (e.g.~by adding or removing nodes or edges). \subsection{Probabilistic graphical models and directed acyclic graphs} A PGM uses a graph to express the conditional dependence and independence between a set of random variables. For Bayesian models, one typically relies on directed acyclic graphs (DAGs) to represent hierarchical structures without any loops or circular dependencies, permitting the factorization of the joint probability into a product of conditional probabilities.
More precisely, if $M = (X, E)$ is a DAG with nodes $x \in X$ representing random variables and edges $e \in E$ representing conditional dependencies between them, the joint probability of $M$ can be written as $$\prod_{x \in X} p\bigl(x \mid {\operatorname{Inputs}(x)}\bigr),$$ i.e.~the product of the probabilities of the individual nodes conditional on their inputs (or parents). The inputs of a node $x \in X$ are all nodes $x' \in X$ for which $x$ and $x'$ are not conditionally independent given the other nodes of the model. \subsection{Nodes and models in Liesel} Liesel uses Python classes to implement and enrich the mathematical concept of a node in a PGM. A node has two important properties: a value and a log-probability, which is the evaluation of the log-probability density or mass function of the node at its value. To keep both properties in sync, i.e.~to avoid an inconsistent state, the node class comes with methods for setting its value and updating its state. The model class, on the other hand, represents a PGM and can hold a number of nodes. It provides methods for the evaluation of the model log-probability and for updating the nodes in a topological order. The model graph can also be visualized conveniently. The nodes are able to cache their value and log-probability, meaning that the model graph is stateful. The results of expensive mathematical operations can be stored directly in the graph, enabling performance improvements for MCMC sampling, especially if multiple parameter blocks are used. If required, the user can implement new types of nodes and models due to the modular and extensible design of Liesel. More details on the key features of the nodes and models are provided in the following paragraphs. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{figures/node-types} \caption{The nodes of a Liesel model can be strong (blue) or weak (orange), and can have a probability distribution (double border) or not (single border). Weak nodes are functions of their inputs and can always be recomputed from the strong nodes of the model. Nodes with a distribution have a log-probability that is part of the model log-probability. For a graphical representation of a concrete semi-parametric regression model, see Figure~\ref{fig:dist-reg}.} \label{fig:node-types} \end{figure} \paragraph{Nodes} Liesel extends the concept of a node in a PGM, where nodes are used to represent random variables, by adding a distinction between so-called ``strong'' and ``weak'' nodes. Strong nodes have a value that is either fixed or set by an inference algorithm such as a sampler or optimizer. With some rare exceptions, the random variables of a model are strong nodes and can represent observed data (e.g.~the response of a regression model) or a model parameter (in a Bayesian context). Conversely, not all strong nodes are random variables. Hyperparameters or design matrices are examples of strong nodes without an associated probability distribution. In contrast, weak nodes represent functions of their inputs. These functions are usually deterministic and describe the mappings between the random variables of a model and their probability distributions. Weak nodes can also represent pseudo-random functions, in which case, however, they require the state of the PRNG (stored in a strong node) as one of their inputs. The weak nodes can always be recomputed from the strong nodes, and hence, the state of a model is uniquely defined by the strong nodes.
Weak nodes can be used to cache the results of expensive computations, because their value only needs to be updated when their inputs have changed. Node subclasses can implement weak nodes representing commonly used functions. By default, Liesel comes with a number of weak nodes facilitating the development of semi-parametric regression models. If a node has a probability distribution, its log-probability is the evaluation of its probability mass or density function at its current value. For convenience, the log-probability of a node without a distribution is defined to be zero. Summing up the node log-probabilities gives the model log-probability, which can be interpreted as the unnormalized log-posterior in a Bayesian context. The log-posterior can be decomposed into the log-likelihood (considering only the observed nodes) and the log-prior (considering only the parameter nodes). Liesel supports probability distributions that follow the class interface from TensorFlow Probability (TFP). Thus, all distributions from TFP can be used with Liesel and new ones can be implemented. One feature of TFP that is particularly useful for Bayesian statistics is the possibility of transforming distributions with bijectors. When defining a transformed distribution, TFP automatically adjusts the log-probability with the log-determinant of the Jacobian of the bijector. For an overview of the different node types -- strong and weak, with and without a probability distribution -- see Figure~\ref{fig:node-types}. Finally, we provide a concrete example and describe which node types would be used to represent a generalized linear model (GLM) in Liesel: The response vector $\yvec$ and the design matrix $\Xmat$ of a GLM are the observed data and would be two strong nodes. While the design matrix is fixed, the response is assumed to follow a probability distribution from the exponential family such as a Poisson or gamma distribution. The vector of regression coefficients $\betavec$ is the only model parameter and would be another strong node. In a Bayesian context, the regression coefficients are assigned a prior distribution, whose hyperparameters would again be strong nodes. In contrast, the linear predictor $\etavec = \Xmat\betavec$ would be a weak node representing a simple matrix-vector product. The expected value of the response $\muvec = h(\etavec)$ is the element-wise evaluation of the response (or inverse link) function $h$ at the linear predictor $\etavec$ and would be encoded in a separate weak node. \paragraph{Models} A Liesel model is a collection of nodes with properties for the model log-probability, the log-likelihood and the log-prior. Upon initialization, the model computes and stores a topological order of the nodes, which is required for updating the model. The API allows the user to extract and set the state of the model, that is, the values and log-probabilities of the nodes. If some of the nodes have a random seed as an input, the model can manage the PRNG state by splitting and distributing a JAX PRNG key. The key feature of the model is its update mechanism, which also supports partial updates. If the value of a strong node is modified, its outputs (i.e.~nodes that have the modified node as one of their inputs) are recursively flagged as outdated. By calling the update method on the outdated nodes in a topological order, a consistent state can be restored. This is exactly how the update mechanism of the model works.
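In line with the note above, we do not show any Liesel API code here. Purely as a deliberately simplified, library-agnostic illustration of the flagging-and-update mechanism just described (all names are our own; this is not Liesel's interface), consider the following sketch:
\begin{verbatim}
import numpy as np

class Node:
    """Toy node: strong if fn is None, weak otherwise. Not Liesel's API."""
    def __init__(self, fn=None, *inputs):
        self.fn, self.inputs, self.outputs = fn, list(inputs), []
        self.value, self.outdated = None, True
        for node in self.inputs:
            node.outputs.append(self)

    def set_value(self, value):
        # Strong node: the value is set externally (data or a sampler).
        self.value, self.outdated = value, False
        self._flag_outputs()

    def _flag_outputs(self):
        # Recursively mark all dependent nodes as outdated.
        for node in self.outputs:
            node.outdated = True
            node._flag_outputs()

    def update(self):
        # Weak node: recompute from the inputs only if outdated (caching).
        if self.outdated and self.fn is not None:
            self.value = self.fn(*(n.value for n in self.inputs))
            self.outdated = False

# A strong node beta and a weak node eta = X @ beta that caches its value.
X = Node(); X.set_value(np.ones((3, 2)))
beta = Node(); beta.set_value(np.array([1.0, 2.0]))
eta = Node(lambda X, b: X @ b, X, beta)
eta.update()                          # computed once
eta.update()                          # cached: nothing is recomputed
beta.set_value(np.array([0.0, 1.0]))  # eta is flagged as outdated ...
eta.update()                          # ... and recomputed on demand
\end{verbatim}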
For situations when only a subset of the nodes is of interest and a full update of the model graph is unnecessary, a partial update can be triggered through the model by specifying the target nodes of the update.

The nodes and the model in Liesel follow a stateful, object-oriented approach, which is incompatible with JAX's requirement for pure, stateless functions. To take full advantage of JAX's and Goose's features for JIT compilation, the computations need to be separated from the state of the model. For this purpose, Liesel provides helpers to extract pure functions from the model, which can be used to compute the log-probability and to update the state. These functions are also used in the model interface that can connect the model with Goose.

\subsection{Benefits of using Liesel}

Goose, the MCMC library that comes with Liesel, can be used independently of the model building library. When using Goose, the user can decide whether their model is best represented with Liesel, PyMC or a self-written log-probability function. Comparing these different approaches, we see the following particular benefits of using Liesel:

\begin{description}
\item[Caching] Weak nodes can be used to cache the results of expensive computations. This feature is particularly useful for efficient MCMC sampling with multiple parameter blocks, as supported by Goose. Using weak nodes as a cache, the results from the other branches of a tree-like model graph can be recycled when updating the branches individually. Further performance improvements can be achieved with Liesel's partial updates of the model graph, allowing the user to compute only those quantities that are relevant for a given operation.
\item[Graph manipulations] The graph of a Liesel model can be modified, allowing for a workflow with a base model, which can be customized to implement new variants of the model. This approach is most convenient if the base model is a semi-parametric regression model that can be configured with RLiesel (Section~\ref{sec:rliesel}). RLiesel provides many model components for semi-parametric regression, e.g.~different spline bases, penalties and response distributions.
\item[Hackability] Liesel tries to get out of the way of the user who is extending a model or implementing a new one. The design of the node and model classes is simple and follows the principle of least astonishment. When in doubt, less surprising behavior is favored over more convenience. New operations for a model can be implemented as weak nodes using JAX, which provides a familiar, NumPy-like user interface.
\item[Visualization] The graph of a Liesel model is composed of statistically meaningful nodes with values and log-probabilities. It is a wrapper around the computational graph of the model and can be plotted using the functions provided by Liesel. The visualization of the model graph can be useful for various purposes, including debugging or strengthening the intuition about the underlying statistical model.
\end{description}

\section{Goose: A toolbox for modular MCMC algorithms}
\label{sec:goose}

The Liesel framework includes a library named Goose for tailoring MCMC algorithms to specific estimation problems. Goose provides the means for statisticians to develop their own MCMC algorithms that fit the models they are working on better than generic samplers do.
Goose assists the statistician in three ways: First, by using Goose, they are freed from tedious bookkeeping tasks like storing the sample chains, managing the PRNG state or parallelizing the code to run multiple chains. Second, Goose provides the building blocks of an MCMC algorithm, so-called kernels. A kernel is an algorithm that transitions the parameter vector or (in a blocked sampling scheme) a part of it within an MCMC iteration. Kernels can also define warmup procedures that allow them to learn their hyperparameters, removing the need to set them by hand. Third, if the kernels shipped with Goose are not sufficient for the estimation problem, a well-defined interface allows user-implemented, problem-specific kernels to be combined with the default ones.

All in all, Goose enables users to construct entirely new algorithms, but also to combine existing building blocks in new ways to match the estimation problem at hand. Statisticians using Goose can focus on how one MCMC transition should be performed. In this section, we introduce Goose and our key design choices in detail. Some implementation details are also discussed.

\subsection{The primary design goals}

The general goal of providing a modular framework for MCMC inference for statistical models can further be broken down into the following more specific design goals:

\begin{itemize}
\item Goose should free the user from monotonous tasks that are repeatedly encountered when implementing MCMC algorithms. Among these are storing the intermediate states, multi-chain management, tracking errors and debug messages, and calling tuning algorithms at the right time.
\item Goose should allow the user to decide how to transition the model parameters from one MCMC iteration to the next. In Goose, we do that by letting the user combine multiple transition kernels. Each kernel moves a part of the parameter vector or, if only one kernel is used, the entire parameter vector using a valid MCMC transition.
\item Goose should have a mechanism to tune the transition kernels automatically during a warmup phase, thereby avoiding the need for the user to tune the kernel hyperparameters by hand.
\item The user should have full control over the combined MCMC algorithm. That means, in particular, that all defaults must be changeable, but even more importantly, Goose must allow the implementation of user components. Therefore, the framework should be based on a collection of modular components with well-documented interfaces. The user should be able to compose and extend the components in a flexible, yet straightforward way.
\item Goose must support continuous and discrete model parameters.
\item Liesel models should be first-class citizens and easy to set up with Goose. However, Goose should be a general MCMC framework that can be used with any JAX model, e.g.~a PyMC model or a hand-coded model by the user.
\item Goose strives to be convenient to use and fast. To achieve these goals, Goose provides pre-implemented components of popular MCMC algorithms like HMC and NUTS. Furthermore, Goose makes heavy use of JAX's capabilities for automatic differentiation (sparing the user the implementation of derivatives) and just-in-time compilation (speeding up the repeated evaluation of the log-probability of the model). For this reason, the models and the components of the MCMC algorithms need to be expressed in JAX.
\item Whenever possible, Goose should wrap well-tested MCMC kernels from other libraries such as the NUTS and HMC kernels from BlackJAX. This way, we can avoid re-implementing complex algorithms, which would be unnecessarily error-prone, while extending the user base of existing projects like BlackJAX.
\end{itemize}

However, there are also aspects that are outside the scope of Goose. For instance, Goose does not check the mathematical correctness of the sampling schemes. It is up to the user to design a valid MCMC algorithm. The results from Goose should generally be reproducible on the same system. However, reproducibility across different hardware cannot be guaranteed due to small differences in the floating-point arithmetic. These differences may accumulate into observable discrepancies over many iterations of modern MCMC algorithms.\footnote{Exact reproducibility is limited for many modern computational tools. See for example Stan's reference manual (\url{https://mc-stan.org/docs/reference-manual/reproducibility.html}) or the corresponding section in Liesel's tutorial book (\url{https://liesel-devs.github.io/liesel-tutorials/reproducibility.html}).}

\subsection{Main components of Goose}

Goose is composed of many classes and interfaces, but the design boils down to a few central pieces that users must understand to create MCMC algorithms with Goose in a few steps. A deeper understanding is required to write extensions. The most important building blocks and their relationships are illustrated in Figure~\ref{fig:goose}. We describe their roles here. Note that we sometimes refer to the model parameters as the ``position''.

\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/goose}
\caption{Entity-relationship diagram of Goose's main components. Only the most important classes, fields and methods are shown here.}
\label{fig:goose}
\end{figure}

\begin{description}
\item[Engine] The \texttt{Engine} class is the central part of Goose and acts as a coordinating entity, hiding a big part of the complexity from the user. In particular, after the user has decided how the transitions should be executed, it makes sure that the right functions and methods are called at the right time, guaranteeing that the transitions of the position happen as requested. Moreover, the engine keeps track of the sampling history and advances through the different sampling phases (e.g.~the warmup and posterior phase). It also coordinates the PRNG state and provides the top-level user interface.
\item[Kernel] A kernel object performs a single MCMC transition, i.e.~an update of the position or some elements of the position. The update must be a valid MCMC transition, for example based on the Metropolis-Hastings algorithm. The \texttt{Kernel} interface describes how the engine can interact with the kernels. The user can either use pre-implemented kernels or implement new kernel classes adhering to the kernel interface.
\item[Epoch] An epoch is a series of MCMC iterations. The \texttt{EpochConfig} class describes an epoch. Epochs are used to communicate to the kernels which phase of the MCMC algorithm they are in and which operations they are allowed to perform in this phase. More specifically, we divide the sampling process into a warmup and a posterior phase. Samples from the posterior phase are expected to be valid MCMC samples.
In contrast, during the warmup phase, the chain may not yet have converged, and during the so-called adaptation epochs, the Markov property may be violated. This way, we allow the kernels to learn their hyperparameters during the adaptation epochs in the warmup phase. If done right, this can spare the user the manual tuning of the kernel hyperparameters, and it can lead to more efficient sampling in the posterior phase. The simplest setup would contain only two epochs: a burn-in epoch (part of the warmup phase) and a posterior epoch (part of the posterior phase), each containing multiple MCMC iterations. On the other hand, a more complex setup can include multiple adaptation epochs in the warmup phase.
\item[ModelInterface] The \texttt{ModelInterface} describes how the components of Goose can communicate with the model. Most importantly, it describes how the unnormalized log-posterior can be evaluated for a given position. By defining the model interface as an abstraction layer, Goose can easily be used with different model backends.
\end{description}

To set up an MCMC algorithm, the user needs to combine the different components of Goose into one valid engine object that handles the communication between them. However, the constructor of the engine is quite complex. To ease the creation of an engine object, the \texttt{EngineBuilder} class can be used. It provides a step-by-step interface for the configuration of an engine. Using the engine builder, Goose leaves the user with only a few tasks to set up an MCMC sampler. These are: (i)~Select the appropriate kernels such that every part of the position is moved, and add the kernels to the builder. (ii)~Supply the builder with an instance of a model interface so that the engine knows how to communicate with the model. (iii)~Set the initial values for the position. (iv)~Define a sequence of epochs with the desired warmup scheme and the right number of posterior samples; Goose provides a helper function for this task. (v)~Initialize the random number generator and decide how many chains should be run in parallel. Afterwards, the engine is ready to be used for sampling.

\subsection{Some implementation details}

To enable a deeper understanding of Goose, we describe how the sampling is performed at the implementation level. We explain in detail how the engine communicates with the kernels and provide an overview of the sequence of these interactions. A simplified sequence diagram of the sampling process is shown in Figure~\ref{fig:engine}. Before the sampling is started with the method \texttt{sample\_all\_epochs()}, the user has to create an engine object as described above. That means a sequence of kernels and a sequence of epochs must be defined, the engine must be connected to the model via the model interface, and the initial position must be set. In the following, we assume that only one kernel is used. However, the extension to multiple kernels is straightforward and described later.

\begin{figure}
\centering
\includegraphics[height=18cm]{figures/engine}
\caption{Sequence diagram of the communication between the engine and a kernel. For simplification, we show only one kernel here. However, the extension to multiple kernels is natural by calling the kernel methods in a sequence, which can be achieved by wrapping the kernels in a \texttt{KernelSequence} object. The engine provides additional methods to run the epochs one by one and to append epochs, which are not shown here.
These methods allow for an interactive use of the engine, while the diagram illustrates a ``one-shot'' run of an already configured engine.}
\label{fig:engine}
\end{figure}

The sampling process is divided into multiple epochs. Each epoch has a duration, i.e.~the number of MCMC iterations that are performed in the epoch, and a type. At the beginning of each epoch, the kernel method \texttt{start\_epoch()} is called, informing the kernel about the new epoch and allowing it to modify the kernel state. The kernel state is a data structure used to store parameters defining the behavior of the kernel. It may be modified during the warmup. The scale of the proposal distribution (also known as the step size) of a random walk kernel serves as an example in this section. The kernel state can also include a cache required to calculate the actual parameters that affect the transitions. Allowing the kernel to change its state at the beginning of an epoch enables it to prepare for the subsequent operations.

Afterwards, control is handed back to the engine, which calls the kernel method \texttt{transition()} for each MCMC iteration in the current epoch. The transition method is supposed to move the position and return the new position together with additional information (which would typically include whether the position changed, how large the acceptance probability was, whether an error occurred, etc.) to the engine. The engine takes care of storing the position and the additional information. Note that during the warmup phase, the kernels are allowed to change their state in the transition method, which allows for on-the-fly tuning of kernel parameters and updates of the cache. This is required, for example, for the dual averaging algorithm \citep[Section 3.2]{Nesterov2009, Hoffman2014} or for Welford's online algorithm for calculating the empirical variance of an element of the position \citep{Welford1962}.

Once all transitions defined in the current epoch have been carried out, the kernel method \texttt{end\_epoch()} is called. Again, the kernel can change its state and prepare for the following tuning. To invoke the tuning, the kernel method \texttt{tune()} is called if the current epoch is an adaptation epoch. In the adaptive random walk kernel, this method would be the place to calculate the new step size based on Welford's algorithm and update it in the kernel state. The kernel is allowed to request the history of the positions visited in the current epoch. Having the history available facilitates the implementation of certain tuning algorithms.

The outlined process is repeated for each epoch. As soon as the first epoch of the posterior phase is encountered, the kernel method \texttt{end\_warmup()} is called before the call to \texttt{start\_epoch()}. It informs the kernel that the warmup phase is over, and subsequent to this call, the kernel must respect the Markov property. Finally, the user can request the sampling results from the engine and inspect them. The results contain not only the chain of the visited positions but also meta-information and an error log (e.g.~an error is reported if the log-posterior evaluates to $-\infty$). Liesel also provides some utilities for the inspection of the chains. A more interactive approach is also possible: the user can always add more epochs to continue sampling. One restriction is that Goose does not allow posterior epochs to be followed by epochs of any other type.
The interactive approach is facilitated by the engine methods \texttt{append\_epoch()} and \texttt{sample\_next\_epoch()}. The user can run a few warmup epochs, inspect the chains, decide if they have reached the typical set and converged, add more warmup epochs if necessary, or move on to the posterior epoch otherwise. Everything that has been said so far can easily be generalized to multiple kernels. In that case, each method call is carried out in a loop over the sequence of kernels defined by the user. Note that the kernels cannot share their state.

If users want to work with custom MCMC transition or tuning methods or extend Goose's collection of kernels, they have to implement a new class that follows the \texttt{Kernel} interface. The two most important methods to implement are \texttt{transition()} and \texttt{tune()}. We describe them in more detail and also provide more information on the implementation of the engine, which is useful for understanding the requirements for the kernel methods.

\paragraph{The engine.} As described above, the engine orchestrates the sampling process and provides the top-level user interface. It also hides some complexity that arises from using JAX and JIT-compiled functions. Using JAX comes with many benefits, e.g.~automatic differentiation (AD) and just-in-time (JIT) compilation. Furthermore, JAX programs can be executed on high-performance devices like GPUs and TPUs. For efficient sampling, the engine automatically groups multiple MCMC iterations into one compilation unit and uses JAX's \texttt{jit()} function to compile them together. Thus, the MCMC iterations are performed together on the computing device without the need for communication with the host. This ensures better performance, especially if the computing device is not the CPU.

One drawback, or rather one limitation, is the requirement of ``pureness''\footnote{A pure function is a function whose value depends solely on the values of its arguments and which furthermore has no side effects. In JAX and Goose, the concept of pureness is a bit weaker. A function may depend on variables in the environment. However, the values of those variables are then compiled into the function, and therefore, the behavior of the function does not change if the variables are updated later. Consequently, the compiled function is pure.} for functions to be compiled with JAX. Pureness is not necessarily a disadvantage, because pure functions are easier to reason about for humans and for the compiler. This can result in faster execution times compared to non-pure functions. Goose needs to guarantee that the compiled functions are pure. This implies that the engine must manage the PRNG state -- we use JAX's splittable Threefry counter-based PRNG -- as well as the kernel states. Goose requires all kernel methods called within the compiled functions (e.g.~\texttt{transition()} and \texttt{tune()}) to be pure, meaning that the kernels cannot store values changing over time in fields but must pass them back to the engine via a \texttt{KernelState} object, and receive them again from the engine together with the PRNG state for the next transition.

\paragraph{The transition method.} The two most important methods every kernel needs to implement are the \texttt{transition()} and the \texttt{tune()} method. These methods are called by the engine and need to be pure and jittable. The purpose of the transition method is to move the position or parts of it using a valid MCMC step, e.g.~a Metropolis-Hastings algorithm.
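Before turning to the formal interface, it is instructive to see what such a pure transition looks like in JAX. The following sketch of a random walk Metropolis-Hastings step is a simplified illustration, not Goose's RandomWalkKernel: the PRNG key and all state enter as arguments, and the result is returned instead of being stored in a field.

\begin{lstlisting}
Py> import jax
Py> import jax.numpy as jnp
Py>
Py> def rw_transition(prng_key, position, step_size, log_prob_fn):
+     """One pure, jittable random walk Metropolis-Hastings step."""
+     key_prop, key_accept = jax.random.split(prng_key)
+     proposal = position + step_size * jax.random.normal(key_prop, position.shape)
+
+     # The proposal is symmetric, so the acceptance probability reduces
+     # to the ratio of the unnormalized posterior densities.
+     log_alpha = log_prob_fn(proposal) - log_prob_fn(position)
+     accept = jnp.log(jax.random.uniform(key_accept)) < log_alpha
+     return jnp.where(accept, proposal, position)
\end{lstlisting}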
The position is a subset of the model state. Through the standardized model interface, the kernel can extract the position from the model state. The signature of the \texttt{transition()} method is as follows:

\begin{lstlisting}
Py> class Kernel:
+     # ...
+
+     def transition(
+         self,
+         prng_key: KeyArray,
+         kernel_state: KernelState,
+         model_state: ModelState,
+         epoch: EpochState,
+     ) -> TransitionResult[KernelState, TransitionInfo]:
+         # ...
+
+     # ...
\end{lstlisting}

Since the \texttt{transition()} method must be pure and MCMC transitions generally involve the generation of random numbers, the state of the PRNG needs to be provided as an argument. In addition, the \texttt{transition()} method receives the kernel state, the model state and the epoch state as arguments, and returns a \texttt{TransitionResult} object, which wraps the new kernel state, the new model state and some meta-information about the transition, e.g.~an error code or the acceptance probability (in a \texttt{TransitionInfo} object). An error code of zero indicates that the transition did not produce an error. All inputs and outputs must be valid ``pytrees'' (i.e.~arrays or nested lists, tuples or dicts of arrays). The structure of these objects, e.g.~the shape of the arrays in the kernel state, must not change between transitions. Within these constraints, each kernel can define its own specialized \texttt{KernelState} and \texttt{TransitionInfo} classes.

\paragraph{Tuning a kernel.} The sampling process can be divided into epochs of four types: fast and slow adaptation epochs, burn-in epochs and posterior epochs. The adaptation and burn-in epochs are so-called warmup epochs. During the adaptation epochs, the kernels are allowed to learn their hyperparameters from the history. Samples from the adaptation epochs are usually invalid as MCMC samples, because the Markov property of the chain is violated. In contrast, during a burn-in epoch, the kernels should no longer adjust their hyperparameters and the Markov property should be respected, but the chain may still require some more time to converge. Finally, when reaching the first posterior epoch, the chain should have converged, all transitions should be valid, e.g.~there should be no divergent transitions, and hence, the samples should approximate the target distribution appropriately.

The kernel method \texttt{tune()} is supposed to update the kernel hyperparameters at the end of an adaptation epoch. The method receives the PRNG state, the model state, the kernel state, the epoch state and optionally the ``history'', i.e.~the samples from the previous epoch, as arguments. It returns a \texttt{TuningResult} object that wraps the new kernel state and some meta-information about the tuning process, e.g.~an error code. As for the transition, the \texttt{TuningInfo} class can be kernel-specific but must be a valid pytree. The signature of the \texttt{tune()} method is as follows:

\begin{lstlisting}
Py> class Kernel:
+     # ...
+
+     def tune(
+         self,
+         prng_key: KeyArray,
+         kernel_state: KernelState,
+         model_state: ModelState,
+         epoch: EpochState,
+         history: Position | None,
+     ) -> TuningResult[KernelState, TuningInfo]:
+         # ...
+
+     # ...
\end{lstlisting}

\paragraph{Debugging.} The engine can be configured to store more information about the sampling process, e.g.~for debugging purposes. The extra information can include the log-posterior, log-likelihood, log-prior or any other quantity that can be computed from the model state by a quantity generator.
Debugging is further facilitated with the option to store the kernel states for each iteration. Moreover, the engine can store information about the transitions and the tuning, such as the acceptance probabilities or the proposals. In any case, the \texttt{transition()} and \texttt{tune()} methods of the kernels need to return an error code and inform the engine about non-fatal errors and warnings. The engine keeps a log and warns the user about potential problems. Goose's diagnostic tools can further aid the detection of potential sampling issues.

\subsection{Standard kernels in Goose}

Goose provides several kernels that can be used directly with many models. We discuss some of them here:

\begin{description}
\item[RandomWalkKernel] The RandomWalkKernel implements a Gaussian proposal distribution and a Metropolis-Hastings acceptance step. The kernel is self-tuning and uses the dual averaging algorithm to adjust the step size (i.e.~to scale the proposal distribution) during fast and slow adaptation epochs, such that a user-defined target acceptance rate, 0.234 by default \citep{Gelman1997}, is reached.
\item[HMCKernel and NUTSKernel] The HMCKernel and NUTSKernel use the gradient of the log-posterior to generate MCMC chains with a low autocorrelation. The implementation of the \texttt{transition()} method is based on BlackJAX's implementations of the HMC \citep{Neal2011} and NUTS \citep{Hoffman2014, Lao2020, Phan2019} algorithms. Both kernels are able to tune the step size during fast and slow adaptation epochs using the dual averaging algorithm. After slow adaptation epochs, the mass vector or matrix of the momentum is adjusted based on the empirical variance-covariance matrix of the samples from the previous epoch.
\item[IWLSKernel] The IWLSKernel is named after the method proposed by \citet{Gamerman1997}, which is often used for Bayesian distributional regression models \citep{Brezger2005}. However, Liesel's implementation is also inspired by the roughly equivalent Metropolis-adjusted Langevin algorithm (MALA) with the Riemann metric \citep{Girolami2011}. This approach allows us to add a step size parameter in a straightforward way, which can then be tuned with the dual averaging algorithm during fast and slow adaptation epochs. More precisely, the IWLSKernel employs a Metropolis-Hastings correction and a Gaussian proposal density, where the mean vector $\muvec$ and the covariance matrix $\Sigmamat$ depend on the gradient (score) and the Hessian (Hess) of the log-posterior, i.e.
$$\muvec = \thetavec + \nicefrac{s^2}{2} \operatorname{Hess}(\thetavec)^{-1} \operatorname{score}(\thetavec), \qquad \Sigmamat = s^2 \operatorname{Hess}(\thetavec)^{-1},$$
where $s$ denotes the step size and $\thetavec$ the position vector. The factor $\nicefrac{1}{2}$ that is multiplied by $s^2$ in the mean vector comes from the Langevin diffusion, which is the basis of the MALA algorithm.
\item[GibbsKernel] The GibbsKernel can wrap a user-defined function generating samples from a full conditional into a Goose-compatible kernel. With a Gibbs sampler, no tuning is necessary or possible, and therefore, the GibbsKernel has a trivial \texttt{tune()} method returning an empty kernel state.
\item[MHKernel] Similar to the GibbsKernel, the MHKernel implements a Metropolis-Hastings sampler as a wrapper around a user-defined function generating proposals based on the current state. If the proposal distribution is asymmetric, the function must also return the Metropolis-Hastings correction factor.
An optional step size argument is also provided; if it is used, the step size is tuned with the dual averaging algorithm.
\end{description}

\subsection{Beyond pre-implemented kernels}

The default Goose kernels are sufficient to estimate many statistical models with MCMC. However, Goose was specifically designed for cases in which specialized kernels are needed. In these situations, new kernel classes adhering to the kernel interface can be implemented. The developer does not need to start from scratch, however. Goose comes with some building blocks that facilitate the implementation of new kernel classes. For example, if a kernel should support dual averaging, Goose can extend the kernel state with the necessary fields. It also comes with functions to calculate the error sum and to adjust the step size. A mixin for Metropolis-Hastings kernels is provided as well.

\section{RLiesel: An R interface for semi-parametric regression}
\label{sec:rliesel}

In this section, we discuss semi-parametric and distributional regression, the model classes Liesel offers first-class support for, before introducing RLiesel, an R interface that assists the user with the configuration of these regression models in Liesel. We also describe a natural workflow for RLiesel using R Markdown and Quarto.

\subsection{Semi-parametric regression}
\label{sec:semi-par}

Semi-parametric regression models combine parametric (usually linear) and non-parametric (usually spline-based) covariate effects. The standard semi-parametric regression model is given by
\begin{equation}
y_i = \beta_0 + \xvec_{i1}'\betavec_1 + f_{2}(\xvec_{i2}, \betavec_2) + \dots + f_{L}(\xvec_{iL}, \betavec_L) + \varepsilon_i, \qquad \varepsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2),
\label{eq:semi-par}
\end{equation}
where the response $y_i$ is modeled as a function of the covariates $\xvec_{i1}$ with parametric effects and the covariates $\xvec_{il}$ with the non-parametric effects $f_{l}(\xvec_{il}, \betavec_l)$ for $l = 2, \dots, L$. The regression coefficients are the intercept $\beta_0$, the slope coefficients $\betavec_1$ and the spline coefficients $\betavec_l$. Fitting the model requires the estimation of the regression coefficients and the variance of the additive Gaussian error term $\varepsilon_i$.

One typical example of a non-parametric covariate effect is the B-spline $f(x_i, \betavec) = \boldsymbol{b}(x_i)'\betavec$, where $\boldsymbol{b}(x_i)$ is the vector of B-spline basis functions for a fixed set of knots evaluated at $x_i$. For better readability, the index $l$ is omitted in the remainder of this section. The given B-spline representation is linear in the spline coefficients $\betavec$, allowing for a straightforward evaluation of the log-likelihood and the use of efficient estimation techniques. To avoid overfitting, certain smoothness properties can be encouraged through regularization, giving rise to the concept of penalized B-splines, also known as P-splines \citep{Eilers1996, Lang2004}. In Bayesian statistics, regularization is achieved through informative priors, such as the multivariate normal distribution with the density
\begin{equation}
p(\betavec \mid \tau^2) \propto \left(\frac{1}{\tau^2}\right)^{\rk(\Kmat)/2} \exp\left(-\frac{1}{2\tau^2} \betavec'\Kmat\betavec\right),
\label{eq:mvn-prior}
\end{equation}
where $\tau^2$ is the variance (or inverse smoothing) parameter, and $\Kmat$ is a (potentially rank-deficient) penalty matrix.
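Once the rank of $\Kmat$ is known, the unnormalized log-density of the prior~\eqref{eq:mvn-prior} is cheap to evaluate. The following JAX sketch illustrates this, with the rank passed in precomputed (since $\Kmat$ is usually a fixed hyperparameter):

\begin{lstlisting}
Py> import jax.numpy as jnp
Py>
Py> def mvn_log_prior(beta, tau2, K, rank_K):
+     """Unnormalized log-density of the MVN prior with penalty matrix K."""
+     # log of (1/tau2)^(rank/2) * exp(-beta' K beta / (2 tau2))
+     return -0.5 * rank_K * jnp.log(tau2) - 0.5 * beta @ K @ beta / tau2
\end{lstlisting}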
For P-splines with equidistant knots, it is common to penalize the second differences of the spline coefficients using the penalty matrix $\Kmat = \Dmat_2'\Dmat_2$, where $\Dmat_2$ is the second-order difference matrix such that $\Dmat_2\betavec = \Delta^2\betavec$. In this case, the penalty matrix is in fact rank-deficient, implying that additional constraints, usually a sum-to-zero constraint, are required for the identification of the spline coefficients. The hyperprior on the variance parameter $\tau^2$ is typically weakly informative with support on the non-negative real line. \citet{Lang2004} suggest using the conjugate inverse gamma prior with the hyperparameters $a = b = 0.01$ (or some other small number), allowing us to draw directly from the full conditional. However, priors like the half-Cauchy distribution or half-normal distribution might have better statistical properties in practice \citep{Gelman2006, Klein2016Priors}.

The concept of semi-parametric regression also encompasses other effect types that can be expressed as the inner product of a vector of basis function evaluations and a vector of regression coefficients, e.g.~random effects for clustered data or spatial effects. The structure of the penalty matrix $\Kmat$ in the multivariate normal prior~\eqref{eq:mvn-prior} depends on the desired effect type: For a random effect, we have $\Kmat = \Imat$; for an (intrinsic) Gaussian Markov random field, $\Kmat$ arises from the neighborhood structure \citep{Rue2005}; and for more general spatial effects, Vecchia approximations can be used to construct $\Kmat$ \citep{Katzfuss2021}. Note that the linear effect $\xvec_i'\betavec$ also fits into this framework by setting $\Kmat = \Zeromat$, reducing the multivariate normal prior~\eqref{eq:mvn-prior} to a flat prior. Consequently, parametric and non-parametric covariate effects can be treated the same way in this framework, and are generically referred to as predictor components or smooth terms. Semi-parametric regression is sometimes (perhaps more accurately, but also more verbosely) called structured additive regression. Consult \citet[Chapters~8 and~9]{Fahrmeir2013} for more information on predictor components and structured additive regression.

\subsection{Distributional regression}

Semi-parametric or structured additive regression predictors are often used in the context of distributional regression. These models are also known as generalized additive models for location, scale and shape (GAMLSS) and combine multiple regression predictors for different response parameters, that is,
\begin{equation}
p(y_i \mid \xvec_i, \betavec) = p(y_i \mid \theta_1(\xvec_{i1}, \betavec_1), \dots, \theta_K(\xvec_{iK}, \betavec_K)),
\label{eq:dist-reg}
\end{equation}
where the response $y_i$ follows a probability distribution with the parameters $\theta_k$ for $k = 1, \dots, K$, each of which is modeled as a function of the covariates $\xvec_{ik}$ and the regression coefficients $\betavec_k$. In contrast to generalized linear models (GLMs), the response distribution is not limited to the exponential family but can be of any parametric type, including for example non-negative continuous distributions like the Weibull or Pareto distribution. Distributional regression models for count data can take zero-inflation and overdispersion into account \citep{Klein2015Count}, while fractional responses (i.e.~single or multiple percentages) can be analyzed with the beta or Dirichlet distribution \citep{Klein2015Multivariate}.
With mixed discrete-continuous distributions, we can add points with a non-zero probability mass to the support of a continuous response distribution. Finally, the distributional regression framework allows us to study multivariate response vectors using either conventional multivariate distributions \citep{Michaelis2018} or copulas to describe complex dependence structures with arbitrary marginal distributions \citep{Klein2016Copula}.

In distributional regression, each parameter of the response distribution is modeled with a semi-parametric regression predictor $\eta_{ik}$ (just like the one in Model~\eqref{eq:semi-par} in the previous section) and a response (or inverse link) function $h_k$, such that
\begin{equation}
\theta_{k}(\xvec_{ik}, \betavec_k) = h_k(\eta_{ik}) = h_k(\beta_{k0} + \xvec_{ik1}'\betavec_{k1} + f_{k2}(\xvec_{ik2}, \betavec_{k2}) + \dots + f_{kL_k}(\xvec_{ikL_k}, \betavec_{kL_k})).
\label{eq:dist-par}
\end{equation}
The response function $h_k$ is a one-to-one mapping of the predictor $\eta_{ik}$ from the real line to the appropriate parameter space. For positive-valued response parameters, the exponential function is typically used as a response function, and for parameters on the unit interval, the logistic function is a common choice.

The distributional regression model~\eqref{eq:dist-reg} with the semi-parametric predictor~\eqref{eq:dist-par} is a Bayesian hierarchical model, where the posterior can be factorized as $p\bigl(\bigcup_{k,l} \{\betavec_{kl}, \tau^2_{kl}\} \mid \bigcup_i \{y_i\}\bigr) \propto \prod_i p\bigl(y_i \mid \bigcup_{k,l} \{\betavec_{kl}\}\bigr) \cdot \prod_{k,l} \bigl( p(\betavec_{kl} \mid \tau^2_{kl}) \, p(\tau^2_{kl}) \bigr)$. The model graph is a DAG with a tree-like structure, making it a good fit for software like Liesel, PyMC or Stan.

\subsection{DAG representations of semi-parametric regression models}

\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/dist-reg}
\caption{One possible DAG representation of the semi-parametric distributional regression model~\eqref{eq:dist-reg}. The different node types are described in Figure~\ref{fig:node-types}: Strong nodes are blue, weak nodes are orange. Nodes with double borders have a probability distribution, and oblique nodes are model parameters. Plate notation is used to indicate the range of the variable indices.}
\label{fig:dist-reg}
\end{figure}

One possible DAG representation of the semi-parametric distributional regression model is shown in Figure~\ref{fig:dist-reg}. The strong node $\alphavec_{kl}$ denotes the fixed hyperparameters of the prior of the variance parameter $\tau^2_{kl}$. Typically, $\alphavec_{kl} = (a_{kl}, b_{kl})' = (0.01, 0.01)'$ in the case of an inverse gamma prior. The choice of the weak nodes is essentially arbitrary: The nodes $f_{ikl}$, $\eta_{ik}$ and $\theta_{ik}$ could also be merged into a single weak node. In Liesel, we encourage a structure of the model graph that resembles the mathematical formulation of the semi-parametric distributional regression model in Equations~\eqref{eq:dist-reg} and~\eqref{eq:dist-par}. This allows us to provide a number of pre-defined nodes for the components of the model class, which can be combined by the user in different ways. The DAG representation can also be modified to improve the computational efficiency of the model.
In the DAG as shown in Figure~\ref{fig:dist-reg}, the evaluation of the log-probability of $\betavec_{kl}$, i.e.~the evaluation of the multivariate normal prior~\eqref{eq:mvn-prior}, requires computing the rank of the penalty matrix $\Kmat_{kl}$. Given that the penalty matrix is usually a fixed hyperparameter, it is wasteful to repeat this expensive operation every time $\betavec_{kl}$ or $\tau^2_{kl}$ is updated. The performance of the model can be improved by adding a strong node with the pre-computed rank of $\Kmat_{kl}$. This node can then be used as an input for the probability distribution of $\betavec_{kl}$, hence avoiding the repeated computation of the matrix rank.

\subsection{Setting up semi-parametric regression models with RLiesel}

RLiesel is an R interface for Liesel, which can be used to configure semi-parametric distributional regression models. It is implemented as a thin wrapper around the \texttt{mgcv} package \citep{Wood2022}. The entry point to the package is the \texttt{liesel()} function, which requires the user to pass in the response data and distribution, and the predictors as arguments. The predictors are specified as R formulas with the extensions from \texttt{mgcv} to define non-parametric predictor components. They are passed on to the \texttt{gam()} function from \texttt{mgcv}, which initializes the design and penalty matrices. Finally, the Liesel model graph is built and filled with the data from \texttt{mgcv}. A concrete example of how a model can be specified in RLiesel is given in the case study in Section~\ref{sec:case-study}.

\texttt{mgcv} is the state-of-the-art package for semi-parametric regression in R. It is extremely powerful, supports many different response distributions and predictor components, and is installed with R by default. Other notable features of \texttt{mgcv} are the automatic smoothness selection \citep{Wood2004} and various multivariate smooth terms. To the best of our knowledge, no package with a comparable set of features exists in Python. Most newer R packages in the domain of semi-parametric regression modeling depend on \texttt{mgcv} in one way or another. With our implementation of RLiesel, we follow the same approach and leverage the features of \texttt{mgcv} for use with JAX and Liesel, avoiding the need to re-implement all predictor components in Python.

RLiesel configures the model graph, but does not automatically run an estimation procedure. Goose can be used for MCMC-based estimation, but needs to be configured in Python. For a seamless integration of RLiesel and Goose, we recommend Quarto \citep{Scheidegger2022} and \texttt{reticulate} \citep{Ushey2022}. Quarto allows the user to write and render dynamic documents in Markdown with embedded R and Python code cells, and using \texttt{reticulate}, objects can be shared between the R and Python processes at runtime. With this setup, the model can be configured using RLiesel in an R code cell, then exchanged with the Python process, before an MCMC algorithm is developed in another code cell. Finally, the estimation results can be visualized either in Python or R, depending on the user's preferences.

\section{Case study: Comparing different sampling schemes}
\label{sec:case-study}

In this case study, we show how RLiesel and Goose can be used to set up and compare different sampling schemes on a simple semi-parametric distributional regression model. Often, a one-size-fits-all MCMC algorithm does not work too well with a specific model.
In these cases, one can try to reparametrize the model to improve the performance of the MCMC algorithm, or alternatively, one can try to develop a more suitable sampling scheme. The second approach is a particular strength of Liesel and Goose: Goose facilitates building custom samplers for specific estimation problems, allowing the user to combine different pre-defined and self-written kernels.

We use a dataset of LIDAR measurements, which was collected to determine the mercury concentration in the atmosphere, to evaluate the performance of five sampling schemes combining IWLS, Gibbs, NUTS and HMC kernels in different parameter blocks. For a detailed description of the experiment, see \citet{Holst1996}. The LIDAR device emitted laser light at two different wavelengths, and the log-ratio between the received signals (the amount of reflected light, $y_i$) was recorded for each range (the distance the light traveled, $x_i$). The data is shown in Figure~\ref{fig:lidar-splines} together with an estimate of the mean function. The derivative of the mean function is proportional to the desired estimate of the mercury concentration.

\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{lidar/lidar_files/figure-html/splines-1}
\caption{The log-ratio of the LIDAR signals for each range on top of an MCMC sample of 4000 estimated mean functions (left) and 4000 estimated standard deviation functions (right). The red lines mark the posterior mean; the sample was obtained with the IWLS-Gibbs scheme described in Section~\ref{sec:lidar-schemes}.}
\label{fig:lidar-splines}
\end{figure}

\subsection{Gaussian location-scale regression in RLiesel}

From Figure~\ref{fig:lidar-splines}, the non-linearity and heteroscedasticity of the LIDAR measurements become apparent. The semi-parametric Gaussian location-scale regression model
\begin{equation}
y_i \sim \mathcal{N}\bigl(\beta_0 + f(x_i),\, \exp\bigl(\gamma_0 + g(x_i)\bigr)^2\bigr)
\label{eq:lidar-model}
\end{equation}
is able to accommodate these properties of the data. Here, $\beta_0$ and $\gamma_0$ are the intercepts, and $f(x_i)$ and $g(x_i)$ are P-splines as described in Section~\ref{sec:semi-par}. For the P-splines, we use a cubic B-spline basis and a second-order difference penalty on the regression coefficients. The model belongs to the distributional regression framework as defined in Equation~\eqref{eq:dist-reg}, using a Gaussian response distribution and a log-link for the standard deviation. With RLiesel, we can set up Model~\eqref{eq:lidar-model} as follows:

\begin{lstlisting}
R> library(SemiPar)
R> data(lidar)
R>
R> library(rliesel)
R> use_liesel_venv()
R>
R> model <- liesel(
+   response = lidar$logratio,
+   distribution = "Normal",
+   predictors = list(
+     loc = predictor(~s(range, bs = "ps"), inverse_link = "Identity"),
+     scale = predictor(~s(range, bs = "ps"), inverse_link = "Exp")
+   ),
+   data = lidar
+ )
\end{lstlisting}

The response variable and distribution, and the semi-parametric regression predictors are passed as arguments to the \texttt{liesel()} function. The predictors are specified as one-sided R formulas, where we can use the \texttt{s()} function from the \texttt{mgcv} package to define spline-based predictor components with the multivariate normal prior~\eqref{eq:mvn-prior}. The argument \texttt{bs = "ps"} indicates that we are using a P-spline. As Liesel depends on TensorFlow Probability (TFP) to represent probability distributions, we need to use the same class and parameter names.
Here, the argument \texttt{distribution = "Normal"} refers to the class of the same name in TFP, which has the parameters \texttt{loc} and \texttt{scale} for the mean and the standard deviation of the normal distribution.

\subsection{Sampling schemes with different kernels in Goose}
\label{sec:lidar-schemes}

For the LIDAR model, we use the IWLS-within-Gibbs sampling scheme as a benchmark. This scheme is provided as the default in RLiesel and has been promoted in the literature on semi-parametric distributional regression for several years \citep{Klein2015Count}. It combines one IWLS kernel for the regression coefficients~$\betavec$ with one Gibbs kernel for the smoothing parameter $\tau^2$ of each predictor component. Thus, in complex models with many predictor components, it results in a high number of parameter blocks, and sometimes in MCMC chains with a high autocorrelation. Furthermore, the use of the observed Fisher information in the IWLS kernel can cause numerical instabilities. Software packages like BayesX and \texttt{bamlss} replace the observed with the expected Fisher information whenever possible to mitigate these problems, but this workaround is model-specific and not possible with automatic differentiation.

Given the shortcomings of the IWLS-within-Gibbs scheme, it is interesting to compare its performance with gradient-based MCMC methods such as HMC or NUTS, which do not require second derivatives. Relying only on the gradient, these kernels make it computationally feasible -- even in complex models -- to update large parameter blocks or the entire parameter vector. HMC and NUTS have been popularized with software like Stan \citep{SDT2022} and PyMC \citep{Salvatier2016}, and are known to work well in many applications \citep[Chapter 30]{MacKay2003}. In the LIDAR model, the smoothing parameters $\tau^2_f$ and $\tau^2_g$ need to be log-transformed if sampled with HMC or NUTS to guarantee an unconstrained parameter space. The configuration of all five sampling schemes is described in Table~\ref{tab:lidar-schemes}.

\begin{table}[!ht]
\renewcommand{\arraystretch}{1.1}
\centering
\caption{The sampling schemes for the LIDAR model. The IWLS kernel was used with the observed Fisher information as a metric (obtained through automatic differentiation). The NUTS kernel was configured with a maximum tree depth of 10 and a diagonal metric (tuned based on the empirical variances of the warmup samples). The HMC kernel was used with 64 integration steps and a diagonal metric. A smaller number of integration steps would have resulted in an insufficient exploration of the posterior distribution. The step size of the IWLS, NUTS and HMC kernels was calibrated with the dual averaging algorithm during the warmup epochs.
}
\label{tab:lidar-schemes}
\begin{tabular}{>{\bfseries}l|c|c|c|c|c|c}
 & $\beta_0$ & $\betavec_f$ & $\tau^2_f$ or $\log(\tau^2_f)$ & $\gamma_0$ & $\gammavec_g$ & $\tau^2_g$ or $\log(\tau^2_g)$ \\
\hline
IWLS-Gibbs & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{b6eaae}Gibbs & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{b6eaae}Gibbs \\
\hline
NUTS-Gibbs & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{b6eaae}Gibbs & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{b6eaae}Gibbs \\
\hline
NUTS1 & \multicolumn{6}{c}{\cellcolor[HTML]{a3d4f5}NUTS} \\
\hline
NUTS2 & \multicolumn{3}{c|}{\cellcolor[HTML]{a3d4f5}NUTS} & \multicolumn{3}{c}{\cellcolor[HTML]{a3d4f5}NUTS} \\
\hline
HMC2 & \multicolumn{3}{c|}{\cellcolor[HTML]{d1eafa}HMC} & \multicolumn{3}{c}{\cellcolor[HTML]{d1eafa}HMC} \\
\hline
\end{tabular}
\end{table}

Setting up sampling schemes and parameter blocks is straightforward with Goose. To facilitate the configuration of an MCMC engine, a builder class can be used. Through the builder, kernels can be assigned to one or more parameters, and the model, the initial values and the number of MCMC iterations can be set. Finally, the engine can be built and run. The following code snippet illustrates the procedure for the NUTS2 scheme; the setup of the other schemes works analogously:

\begin{lstlisting}
Py> builder = gs.EngineBuilder(seed=1337, num_chains=4)
Py>
Py> k1 = ["loc_p0_beta", "loc_np0_beta", "loc_np0_tau2_transformed"]
Py> k2 = ["scale_p0_beta", "scale_np0_beta", "scale_np0_tau2_transformed"]
Py> builder.add_kernel(gs.NUTSKernel(k1))
Py> builder.add_kernel(gs.NUTSKernel(k2))
Py>
Py> builder.set_model(lsl.GooseModel(model))
Py> builder.set_initial_values(model.state)
Py>
Py> builder.set_duration(warmup_duration=1000, posterior_duration=1000)
Py>
Py> engine = builder.build()
Py> engine.sample_all_epochs()
\end{lstlisting}

\subsection{Run time and effective sample size}

All sampling schemes from Table~\ref{tab:lidar-schemes} converged to the same posterior distribution shown in Figure~\ref{fig:lidar-splines}, so we can focus on comparing their efficiency rather than the parameter estimates. The MCMC algorithms were compiled and run on an Intel i7-1185G7 CPU with 8 cores and 3 GHz. The compilation was generally much more expensive than the generation of one chain with 1000 warmup and 1000 posterior iterations (Figure~\ref{fig:lidar-timings}). The IWLS-Gibbs and NUTS-Gibbs schemes were particularly slow to compile, presumably because combining two types of kernels means more work for the compiler, while the sampling schemes involving one or two NUTS kernels took the most time to run. The reason for the performance issues with NUTS was that the maximum tree depth of 10 was reached in about 90\% of the posterior iterations for the NUTS1 scheme, and in 75\% for NUTS2. The problem did not occur with the NUTS-Gibbs scheme, where we split the regression coefficients $\betavec$ and the smoothing parameters $\tau^2$ into separate blocks. We tried to improve the performance of the NUTS1 and NUTS2 schemes with a non-centered parameterization, as recommended by \citet[User's Guide, Section~25.7]{SDT2022}, by diagonalizing the penalty matrices of the P-splines as described by \citet[Section~5.4]{Wood2017}, but did not achieve an efficiency improvement.
Other reparametrizations or the use of a Riemann metric \citep{Girolami2011} might help to speed up the NUTS kernels, but we did not explore these options further in this case study.

\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{lidar/lidar_files/figure-html/timings-1}
\caption{The compile and run time of the sampling schemes. The timings are obtained on an Intel i7-1185G7 CPU with 8 cores and 3 GHz for one MCMC chain with 1000 warmup and 1000 posterior iterations. The IWLS-Gibbs and NUTS-Gibbs schemes are most expensive to compile (because they combine two types of kernels), while the NUTS1 and NUTS2 schemes are most expensive to run (due to the high tree depth).}
\label{fig:lidar-timings}
\end{figure}

The efficiency of an MCMC algorithm cannot be assessed based on the run time alone; the quality of the samples needs to be taken into account as well. We use the effective sample size \citep[ESS,][]{Gelman2013} for this purpose. The ESS estimates the size an independent sample would need to have to contain the same amount of information as the correlated MCMC sample. An MCMC chain with a high autocorrelation generally has a low ESS. For the LIDAR model, the NUTS-Gibbs scheme has the highest ESS with a median of 318.67 per 1000 iterations, and the HMC2 scheme has the lowest ESS with a median of 25.56 (Table~\ref{tab:lidar-ess}). The table also shows the ESS per second, which takes both the quality of the samples and the run time into account. By that measure, the two schemes involving a Gibbs kernel perform best, with a median of 869.05 for NUTS-Gibbs and 325.21 for IWLS-Gibbs.

\begin{table}[!ht]
\centering
\caption{The bulk ESS and bulk ESS per second of the sampling schemes. 30 MCMC chains are generated per scheme, and the summary statistics are computed by pooling all 22 parameters of the LIDAR model. The ESS per second is computed based on the run time of the posterior iterations, not taking the compilation and the warmup iterations into account. The NUTS-Gibbs scheme is the most efficient, both in terms of ESS and ESS per second.}
\label{tab:lidar-ess}
\begin{tabular}{l|>{\bfseries}l|rr>{\bfseries}rrr}
\toprule
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 5\% & 25\% & Median & 75\% & 95\% \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Bulk ESS}} & IWLS-Gibbs & 33.53 & 70.01 & 91.17 & 114.61 & 269.58\\
 & NUTS-Gibbs & 122.97 & 205.34 & 318.67 & 482.70 & 939.08\\
 & NUTS1 & 7.80 & 46.30 & 92.78 & 347.65 & 945.62\\
 & NUTS2 & 25.90 & 72.04 & 140.14 & 456.46 & 866.49\\
 & HMC2 & 1.62 & 6.81 & 25.56 & 291.10 & 1249.13\\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Bulk ESS/s}} & IWLS-Gibbs & 119.61 & 249.72 & 325.21 & 408.80 & 961.55\\
 & NUTS-Gibbs & 335.37 & 560.01 & 869.05 & 1316.41 & 2561.02\\
 & NUTS1 & 2.88 & 17.11 & 34.30 & 128.52 & 349.57\\
 & NUTS2 & 17.96 & 49.95 & 97.16 & 316.48 & 600.77\\
 & HMC2 & 7.56 & 31.88 & 119.55 & 1361.79 & 5843.52\\
\bottomrule
\end{tabular}
\end{table}

\section{Discussion}
\label{sec:discussion}

In this article, we introduced the probabilistic programming framework Liesel, which allows the user to express Bayesian models as directed acyclic graphs and to build custom MCMC algorithms. With our software, established MCMC algorithms can be combined in new ways, and the user can implement problem-specific kernels and warmup schemes. Goose, Liesel's MCMC library, is independent of Liesel's graph-based model representation and can also be used with other JAX-compatible software, for example PyMC or user-defined log-posterior functions.
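As an illustration of the latter, a hand-coded JAX log-posterior can be connected to Goose by implementing the model interface described in Section~\ref{sec:goose}. The sketch below treats the model state as a plain dictionary; the class and method names are chosen for illustration and are not necessarily identical to Goose's API:

\begin{lstlisting}
Py> class DictInterface:
+     """Connects a log-posterior function of a dict-valued model state
+     to Goose (illustrative, simplified model interface)."""
+
+     def __init__(self, log_prob_fn):
+         self._log_prob_fn = log_prob_fn
+
+     def log_prob(self, model_state):
+         # Evaluate the unnormalized log-posterior for the given state.
+         return self._log_prob_fn(model_state)
+
+     def extract_position(self, position_keys, model_state):
+         return {key: model_state[key] for key in position_keys}
+
+     def update_state(self, position, model_state):
+         return {**model_state, **position}
\end{lstlisting}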
Models expressed in Liesel can be modified through a programmer-friendly API. A base model can be generated with RLiesel, a tool to configure semi-parametric regression models, and new ideas can be explored with little effort by modifying the base model. Using state-of-the-art technology available through JAX, such as just-in-time compilation, automatic differentiation and cluster computing, Liesel allows for a fast development and testing cycle in Python while maintaining good computational performance.

The development of Liesel will be continued in the coming years. Liesel uses many libraries that are under active development, and their API changes must be reflected in our software. We also plan to integrate new features and other enhancements of these libraries into Liesel. Based on JAX's experimental module for sparse linear algebra, for example, we will improve the performance of different models using efficient decomposition algorithms for matrices with band structures or more general sparsity patterns.

The next major update of the software, Liesel 0.2, is planned for fall 2022. It will feature an improved model representation, making manipulations and extensions of the model graph easier and safer. In the new version, the graph of the statistical variables in the model will be built on top of a graph of computational nodes. This approach will result in an interface that is more convenient in standard use cases and more ``hackable'' in advanced use cases. The new interface aims to be simple and transparent, with a small number of classes that do not surprise the developer with any ``magic'' behavior.

Liesel will also be extended with more model components and new MCMC kernels. The new building blocks in the modeling library will facilitate the rapid development of new types of models, thus speeding up research. In particular, RLiesel will be extended with the functionality to build non-linear models that overcome the typical additive predictor structure of semi-parametric regression, or models in which covariates are themselves assigned a model specification, such as measurement error models or more general structural equation models. These extensions will also serve as a demonstration of the functionality and flexibility that Liesel offers for the development of Bayesian (regression) models.

Liesel's technology stack facilitates the implementation of gradient-based methods. Having automatic differentiation available will allow us to use general optimization algorithms to implement variational inference methods. Stochastic gradient MCMC (SG-MCMC) is a relatively new class of Monte Carlo algorithms that scale well to large datasets. Compared to traditional MCMC, these algorithms reduce the computational costs by using subsamples of the original dataset, while maintaining a high accuracy of the parameter estimates. Tools like Stan, PyMC and NIMBLE, which enabled the broad success of Bayesian methods in many application areas, still lack SG-MCMC methods, although first steps have been made (e.g.~in the R package \texttt{sgmcmc}). We plan to implement SG-MCMC kernels and non-traditional tuning methods for SG-MCMC in Liesel in the near future.

\bibliographystyle{plainnat}
{ "timestamp": "2022-09-23T02:14:34", "yymm": "2209", "arxiv_id": "2209.10975", "language": "en", "url": "https://arxiv.org/abs/2209.10975" }
\section{Introduction} In plasmonic nanoparticle lattices, the radiative coupling of single particle plasmon resonances with the diffracted orders of the lattice gives rise to hybrid modes known as surface lattice resonances (SLRs), which can possess particularly narrow linewidths \cite{zou2004silver,kravets2008extremely,auguie2008collective}. SLRs combined with strong near fields of localized plasmon resonances and molecular emitters have enabled coherent emission phenomena such as lasing and Bose-Einstein condensation in both weak \cite{hakala2017lasing,hakala2018bose} and strong coupling regimes \cite{vakevainen20,ramezani17}. Lasing characteristics of plasmonic lattice lasers (PLLs) depend on the dispersion of the SLR modes, which can be conveniently tuned by varying parameters such as the lattice geometry, the interparticle distance, as well as the material, size, and shape of the constituent nanoparticles \cite{humphrey14, guo17, yang14, knudson19, zhou13, fernandez22, heilmann22}. These various degrees of freedom indicate that many areas of the parameter space may still be unexplored, even in some of the simplest cases. For instance, in the case of square lattice symmetry with cylindrical particles, the polarization dependence of the radiative feedback and the consequent anisotropy of the spatial coherence of lasing has only recently been recognized \cite{Asamoah_2021}. The results of Ref.~\cite{Asamoah_2021} suggest that since the feedback mechanism is based on radiative coupling, the directionality and the polarization dependence of the nanoparticle scattering govern the spatial coherence properties of the laser. Thus, control over the scattering properties of the particles (and the radiative feedback) could also play a crucial role in the beaming properties of these coherent light sources. Various polarization and beaming patterns of PLLs have been observed by modifying the lattice geometry from the simplest (square) case towards more complex ones, for instance rectangular, hexagonal, and honeycomb lattices \cite{Zhou2013, Hoang2017, Pourjamal2019, Tenner2018, Guo2019}. Here, we take an alternative route: instead of modifying the geometry of the lattice, we modify the single particle scattering properties while keeping the lattice symmetry (and the other parameters of the lattice) constant. This allows us to distinguish the single particle scattering induced effects from the effects induced by the lattice geometry. Surprisingly, we observe a transition of the spatial coherence properties from one to two dimensions in response to increasing particle diameter. The transition is accompanied by a drastic change in the polarization and beaming properties of the sample for both x- and y-polarizations. Further, we demonstrate phase locking between degenerate x- and y-polarized lasing for large enough diameters. The physical mechanism governing both the transition of the spatial coherence from one to two dimensions and the phase locking of the x- and y-polarizations is identified by means of customized T-matrix scattering simulations. Our results demonstrate the crucial role of the size dependent scattering properties of the nanoparticles in the lasing characteristics of PLL sources. \section{Results} \textbf{Sample preparation and optical setup.} The lattice consists of cylindrical gold nanoparticles with a period of $580$~nm in both x- and y-directions.
The particles reside on a borosilicate substrate overlaid with a gain medium containing fluorescent IR-792 molecules in BA:DMSO (2:1) solution, whose refractive index matches the substrate index. The gain medium was pumped with a pulsed femtosecond laser (792 nm, 1 kHz, 150 fs). All the measurement results are averaged over 300 pulses. We focused on particle diameters of $80$, $100$ and $120$~nm. The mode frequencies for each sample were obtained by measuring angle and wavelength resolved transmittances with a white light source, see Supporting Information Figs.~1 and 2. Figs.~1 (a, b) summarize the main hypothesis of our manuscript. For conciseness, we consider only y-polarized dipoles in the figures. For small particles, the radiative feedback takes place only in the x-direction, see Fig.~\ref{fig:Fig1} (a). Thus, the spatial coherence is expected to be predominantly extended in the x-direction as well. Due to modified scattering properties, a larger particle diameter induces 1) a modified scattering pattern, allowing the feedback to occur also in the diagonal directions of the lattice, and 2) a significantly higher overall scattering intensity, see Fig.~\ref{fig:Fig1} (b). Thus, a transition from one- to two-dimensional spatial coherence may be feasible. Further, if the increased diameter increases the cross-coupling between x- and y-polarizations, phase locking between otherwise independent x- and y-polarized degenerate lasing modes can take place. The measurement scheme and an SEM image of the particle lattice are presented in Fig.~\ref{fig:Fig1} (c). A more detailed description of the setup is presented in the Supporting Information Fig.~1. We employ a custom-built wavefront folding interferometer (WFI) \cite{koivurova2019scanning,halder2020mirror} to study the spatial coherence properties of the PLL emission. The two arms of the WFI flip the source plane image in the horizontal (x) and vertical (y) directions, respectively, allowing us to map field correlations between spatial points $(x, y)$ and $(-x, -y)$ on the lattice. An example of this is shown in Fig.~\ref{fig:Fig1} (d). \begin{figure} \centering \includegraphics[width=1\columnwidth]{figure1_v2.eps} \caption{(a) Particles with small diameters are expected to radiate similarly to ideal dipoles, such that y-polarized dipole moments radiate predominantly in the x-direction. Thus, in the lasing regime, the radiative feedback is expected to take place in the x-direction as well (as indicated by the black arrows). Consequently, the spatial coherence is expected to be high in the x-direction. (b) Increasing the particle diameter modifies the scattering properties of the particle, potentially allowing for radiative feedback in both x- and y-directions. (c) The scheme for analysis and an SEM image of the particle lattice. (d) The WFI flips the source plane images with respect to the center ($x=0$, $y=0$) of the lattice, allowing for interference and spatial coherence measurements between any point pairs ($x$, $y$) and ($-x$, $-y$).} \label{fig:Fig1} \end{figure} \textbf{Measurement results.} Fig.~\ref{fig:Fig2} (a) shows the real space intensity distribution for the 80 nm diameter particle array with a pump fluence of $1.1P_\mathrm{th}$. The lasing emission spectra as well as the threshold curves are shown in Supporting Information Figs.~2-5. First, we note that the center of the array is bright, with gradually decreasing intensity towards the edges.
Previously, such intensity patterns have been associated with the so-called bright mode of the plasmonic lattice, a hybrid composed of diffracted orders of the lattice and dipolar plasmonic excitations in each particle \cite{hakala2017lasing}. The dipolar excitations of these collective SLR modes are weaker at the edges of the lattice due to the reduced incident radiation on each particle away from the lattice center. Somewhat similar intensity patterns are observed in Figs.~\ref{fig:Fig2} (d, g) for 100 and 120 nm diameter particles, respectively. Intriguingly, the 120 nm case shows a more pronounced, high spatial frequency intensity variation in the center of the lattice. \begin{figure} \centering \includegraphics[width=1\columnwidth]{fig2.eps} \caption{The obtained source plane data for 80 nm (a-c), 100 nm (d-f), and 120 nm (g-i) particles. The first column shows the source plane intensity, while the second and third show the WFI data and the obtained spatial degree of coherence (DOC), respectively. The white arrows indicate the observed polarizations in (c,f,i).} \label{fig:Fig2} \end{figure} Figures \ref{fig:Fig2} (b), (e) and (h) contain the correlation information measured with the WFI. In Fig. \ref{fig:Fig2} (b) one can readily identify pronounced interference fringes along the two axes where $x=0$ and $y=0$ (marked by the red lines). Furthermore, the result implies negligible phase correlations elsewhere, namely between points $(x,y)$ and $(-x,-y)$ when both $x$ and $y$ are non-zero. Such behaviour is indicative of one-dimensional lasing feedback. For conciseness, we choose the $x=0$ fringes for closer inspection. The fringes extend over the whole lattice in the y-direction, implying that the spatial coherence is limited by the lattice size. Strikingly, the fringes extend over only a very short range in the x-direction, approximately $5.8~\mu$m or approximately $8$ particles, indicative of very limited spatial coherence. The degree of spatial coherence obtained from the fringe visibility is shown in Fig. \ref{fig:Fig2} (c), exhibiting a cross pattern. Polarization resolved analysis reveals that the two arms of the cross have orthogonal linear polarizations, as indicated by the arrows. With 100 nm diameter particles we observe qualitatively similar behaviour to the 80 nm case (second row of Fig. \ref{fig:Fig2}). However, a slightly larger area of emission is observed (Fig.~\ref{fig:Fig2} (d)). Further, the WFI signal reveals somewhat more pronounced interference fringes, in particular along the x-axis, see Fig.~\ref{fig:Fig2} (e). Notably, in this case the degrees of coherence for the x- and y-polarizations are approximately equal (Fig.~\ref{fig:Fig2} (f)). As the particle diameter is increased to 120 nm, a drastic transition in the overall behaviour is observed, see Figs. \ref{fig:Fig2} (g-i). In particular, the interference fringes are visible over the entire array, see Fig. 2 (h). This suggests well-defined phase correlations and spatial coherence in two dimensions, such that even the points away from the axes $x = 0$ and $y = 0$ are phase correlated. The obtained degree of coherence in Fig. \ref{fig:Fig2} (i) further confirms this conclusion: the degree of coherence is significant over the entire center part of the lattice, with equal contributions from both x- and y-polarizations. Figure \ref{fig:Fig3} (a-i) shows the far field emission of the PLL, which was characterized by polarization resolved Fourier imaging.
For 80 nm diameter particles, a cross-shaped intensity pattern is observed, see Fig. \ref{fig:Fig3} (a). The horizontal line of the cross consists of x-polarized and the vertical line of y-polarized light, see Figs. \ref{fig:Fig3} (b) and (c). Notably, the horizontal (vertical) line of the cross appears in the far field when the interference fringes in the source plane appear in the vertical (horizontal) direction. Thus, the features in the far field are localized in the direction in which the source plane spatial coherence extends over the entire lattice. Similar phase dependent behaviour was recently demonstrated in 2D plasmonic condensates \cite{taskinen21}. The angular divergences obtained from these images are $\delta \theta_x = 0.48^{\circ}$ and $\delta \theta_y = 0.50^{\circ}$ for Fig.~3 (a), which, assuming a fully coherent beam, give a spatial coherence width of $\delta x = 2\pi/\delta k_x = 104~\mu\mathrm{m}$ and $\delta y = 2\pi/\delta k_y = 101~\mu\mathrm{m}$, where $\delta k_{x,y} = (2\pi/\lambda)\,\delta\theta_{x,y}$ and $\lambda$ is the lasing wavelength. The obtained numbers are in excellent agreement with the lattice dimensions ($100~\mu\mathrm{m} \times 100~\mu\mathrm{m}$). \begin{figure} \centering \includegraphics[width=1\columnwidth]{figure3.eps} \caption{The obtained far field data for 80 nm (a-c), 100 nm (d-f), and 120 nm (g-i) particles. The first, second and third columns show the unpolarized, x- and y-polarized far field intensities, respectively. The scale bar in panel (a) is 1 degree. The scale is the same for all figures.} \label{fig:Fig3} \end{figure} For 100 nm diameter particles (middle row of Fig. \ref{fig:Fig3}) a somewhat similar far field pattern is observed, with the distinction that the cross-shaped pattern is slightly more localized at $\theta_{x,y}=0$. This is expected due to the slightly larger area of spatial coherence compared to the 80 nm particle case, as seen from Figs. \ref{fig:Fig2} (c) and (f). For the 120 nm diameter, an entirely different far field pattern is observed, as seen in Figs. 3 (g-i). Importantly, the increased spatial coherence seen in Fig. 2 dramatically reduces the asymmetry of the far-field distribution. Remarkably, an increment of only 20 nm in the particle diameter can induce this effect. \textbf{T-matrix simulations.} To rationalize these results, we carried out multiple scattering T-matrix simulations including contributions from electric and magnetic dipoles, quadrupoles and hexapoles. The technical details of the method can be found in our previous work \cite{Marek2021}. For conciseness, we present the results for the case where a single nanoparticle in the center of the lattice is driven with a y-polarized dipole spherical wave, see Fig.~\ref{fig:Fig4}. Note that due to symmetry, analogous results would be obtained for the x-polarized case. The nominal lattice periodicity, the size and shape of the particles, as well as the dielectric functions of the materials are the same as in the experiments. The first observation from the simulations is that the scattering pattern of y-polarized fields changes drastically with increasing particle diameter, see Figs.~4 (a-c). For 80 nm particles the pattern appears almost like a dipolar radiation pattern: the maximum intensity is observed in the x-direction of the driven particle. However, for the 100 nm case, the maxima reside slightly away from the x-direction. For 120 nm, the maxima form a cross-shaped pattern, with almost negligible intensity in the x-direction.
Another noteworthy observation is that the overall scattering intensity increases by over 3 orders of magnitude when increasing the diameter from 80 to 120 nm. We believe that the drastic changes observed in the spatial coherence properties in Fig.~2 and the corresponding changes in the beaming properties in Fig.~3 are due both to the modified scattering pattern and to the increased scattering intensity for large enough particles. Due to the highly nonlinear character of stimulated emission processes in plasmonic nanolasers, even a minor variation in scattering efficiency can induce sufficient radiative feedback and result in macroscopic phase coherence over the entire lattice. \begin{figure} \centering \includegraphics[width=1\columnwidth]{fig4.eps} \caption{T-matrix scattering simulations for 80 nm (a,d), 100 nm (b,e) and 120 nm (c,f) diameters. The top row shows y-polarized scattering intensities at the lattice plane, while the bottom row shows the x-polarized fields. In all cases, a single particle in the center of the lattice is driven by a y-polarized dipole.} \label{fig:Fig4} \end{figure} To summarize, our results in Figs. 2 and 3 established an intimate connection between the particle diameter and the spatial coherence and far field beaming properties of plasmonic lasers. The most notable changes occurred between 100 nm and 120 nm particles, with the phase correlations extending throughout the entire center part of the lattice for the 120 nm case. This suggests that the degree of coherence underwent a transition from one to two dimensions for both x- and y-polarizations. Fig.~\ref{fig:Fig4} revealed the underlying physical mechanism for the drastic changes observed in the experiments, namely the modified radiation pattern as well as the increased radiation intensity for large-diameter particles. An interesting question then arises: do there also exist well-defined phase correlations between the x- and y-polarizations? In Figs.~4 (d-f), we present the scattering intensities for x-polarized fields, while still driving the central particle with y-polarization. Notably, while the pattern stays qualitatively the same for all particle sizes, the intensity exhibits a similar increase (3 orders of magnitude) with increasing particle diameter. Thus, for large enough particles, y-polarized dipoles could indeed produce sufficient x-polarized scattered fields to establish phase correlations between the x- and y-polarizations of the lasing signal. Due to the symmetry of the lattice, the x- and y-polarized modes are degenerate, and thus the phase correlations can be conveniently studied by measuring the Stokes parameters. \textbf{Stokes parameters.} To study the potential correlations between x- and y-polarizations, we carried out an experiment to recover the Stokes parameters of the far field radiation for the 100 nm and 120 nm particle sizes, see Fig. 5. For the details of the experiment, see Supporting Information Fig. 1. Fig. 5 (a) presents the S1 parameter for the d = 100 nm sample. Note that S1 describes the relative fraction of x- and y-polarized light. A value of S1 = 1 implies x-polarized light, S1 = -1 implies y-polarized light, and S1 = 0 indicates equal contributions from both x- and y-polarized light. Notably, the Stokes parameter measurements are in full agreement with the polarization resolved far field analysis in Fig. 3: the vertical arm of the emission is y-polarized, while the horizontal arm is x-polarized.
The overlap region of both arms results in S1 = 0, indicating equal contributions from the x- and y-polarizations. The Stokes parameter S2 describes the degree of diagonal polarization, and the value S2 = 0 in Fig. 5 (b) indicates no diagonal polarization. Further, S3 = 0 in Fig. 5 (c) indicates that no circular polarization is present. Altogether, the results for the 100 nm diameter suggest that each arm of the cross has a linear polarization and that, in the overlap region, the x- and y-polarizations have no well-defined phase difference (i.e., they are not phase locked). For the 120 nm sample the S1 parameter exhibits a different distribution, see Fig. 5 (d). In particular, within the area of the main beam (indicated by the circle), S1 is approximately zero. This suggests equal contributions from x- and y-polarizations throughout the beam area. More importantly, S2 = -1, suggesting a $-45^{\circ}$ polarization and therefore a well-defined phase correlation between the x- and y-polarizations. This is in stark contrast to the 100 nm case, where S2 = 0 and no phase correlations are present. Our observations can be explained by the comparison between Figs.~4 (e, f). While in both figures the x-induced dipoles have a similar pattern, for 120 nm particles the intensity of the x-induced dipole is over 20 times higher than for the 100 nm particle. Thus, a large particle diameter enables not only two-dimensional spatial coherence for both polarizations (as observed in Figs. 2 and 3), but also phase correlations between the two orthogonal polarizations. While a detailed study of the required conditions for such phase locking remains a topic for future work, some considerations can nevertheless be put forward. In our previous work \cite{Asamoah_2022}, we have shown that under the exact same experimental conditions (including the pump, particle shape, material, and lattice periodicity), a further increase of the diameter to 140 nm produces lasing from so-called bound states in the continuum (BIC) modes, whose origin lies in the quadrupolar resonances of large diameter particles. It is feasible that 120 nm is a borderline case, where the multipolar contribution to the resonance is sufficient to phase lock the x- and y-polarizations. \begin{figure} \centering \includegraphics[width=1\columnwidth]{fig5.eps} \caption{The Stokes parameters determined from the far field emission for 100 nm (a, b and c) and 120 nm particles (d, e and f). The first, second and third columns show the S1, S2 and S3 parameters, respectively. The dashed lines indicate the FWHM intensities of the unpolarized beams.} \label{fig:Fig5} \end{figure} \section{Conclusions} To conclude, we have studied the single particle scattering induced effects in lasing plasmonic lattices. By keeping the lattice geometry constant and varying only the particle diameter, we were able to distinguish lattice geometry induced effects from single particle scattering induced effects. For the first time, we demonstrate a transition of the spatial coherence from one to two dimensions with increasing particle diameter. The far field emission undergoes a transition from a cross-shaped pattern to a symmetric, approximately circular beam in response to the changes in the spatial coherence. T-matrix simulations indicate that the physical mechanism governing this transition is associated with the increased scattering to the diagonal directions in the lattice. The symmetry of the lattice allows the coexistence of two independent, orthogonally polarized modes.
Strikingly, the cross-coupling of y-polarized dipole radiation to x-polarization increases strongly with particle size. With large enough diameters, this induces a well-defined phase correlation between otherwise independent (but degenerate) x- and y-polarized modes. \section{Methods} \textbf{Correlation functions.} Here we present the main equations necessary for the analysis of our experimental results. The emission considered in this work is pulsed with a picosecond-scale pulse length, with spectral linewidths on the nanometer scale. The emission may be anisotropic in the sense that the divergence properties in the x- and y-directions can be different with varying particle diameter. The electric field generated by the source is denoted by $E(\boldsymbol{\rho};t)$, where $\boldsymbol{\rho} = (x,y)$ contains the transverse coordinates, and $t$ is the time in the moving reference frame of the pulse. To quantify the spatial correlation properties of the field, we employ the time integrated mutual coherence function (MCF), defined as \begin{equation*} \Gamma\left(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2\right) = \int_{-\infty}^{\infty} \langle E^*(\boldsymbol{\rho}_1;t)E(\boldsymbol{\rho}_2;t) \rangle \mathrm{d}t, \end{equation*} where the angle brackets denote ensemble averaging. This form of the MCF is the relevant one when performing measurements with a slow detector. Moreover, we have set the time delay between the two copies of the field to zero, since we are interested only in the spatial coherence properties. If we set $\boldsymbol{\rho}_1 = \boldsymbol{\rho}_2 = \boldsymbol{\rho}$, the time integrated MCF yields the spatial intensity distribution, i.e. $\Gamma(\boldsymbol{\rho},\boldsymbol{\rho}) = I(\boldsymbol{\rho})$. The intensity can be used to define a normalized quantity \begin{equation*} \gamma(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) = \frac{\Gamma(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)}{\sqrt{I(\boldsymbol{\rho}_1)I(\boldsymbol{\rho}_2)}}, \end{equation*} which is the complex degree of spatial coherence of the time-averaged pulsed field. The magnitude of the degree of coherence is then obtained as DOC = $|\gamma|$. \textbf{T-matrix simulations.} The propagation of electromagnetic fields inside the array was simulated using the multiple-scattering T-matrix method implemented by the open-source QPMS suite\cite{QPMS,Marek2021}. The particle in the center of the array was excited with a y-polarized regular electric dipole spherical wave (artificially local, i.e. its direct effects are limited to that single particle) of unit intensity; from that particle, the field was then allowed to scatter throughout the whole array, yielding the patterns in Fig.~\ref{fig:Fig4}. The simulation frequencies for each particle size were pre-determined using mode calculations with corresponding \emph{infinite} periodic arrays\cite{Marek2021}, with the imaginary part of the mode frequency being discarded. The electric permittivity of the metal was modelled using the Lorentz-Drude formula with parameters taken from \cite{rakic_optical_1998}, and the background medium was set to have a constant relative permittivity of 1.52. \section{Acknowledgements} We acknowledge the Academy of Finland Flagship Programme, Photonics Research and Innovation \textrm{PREIN} 320165, 320166 and Academy of Finland project number 322002. We acknowledge the computational resources provided by the Aalto Science-IT project. \section{References} \bibliographystyle{unsrt}
{ "timestamp": "2022-09-23T02:12:32", "yymm": "2209", "arxiv_id": "2209.10911", "language": "en", "url": "https://arxiv.org/abs/2209.10911" }
\section{Introduction} Comets are some of the most pristine bodies in the Solar System, having remained relatively unchanged since their formation 4.6 billion years ago. Cometary nuclei provide insights into the composition of the early protoplanetary disk (PPD) through their isotopic abundance ratios. As their composition reflects the physico-chemical conditions of the disk at the location of their formation in the protosolar nebula (PSN), understanding where each comet was formed reveals details as to the evolution of the Solar System. Decades of remote sensing of comets have revealed these objects to be water-ice rich, with a typical carbon monoxide composition of CO/H$_2$O = 4\% \citep{Bockelee2017}, and depleted in N$_2$ despite the abundance of this molecule in the atmospheres and surfaces of the outer Solar System bodies, such as Triton or Pluto \citep{Cochran2000}. However, radio observations of the long-period comet C/2016 R2 (PanSTARRS) revealed that its composition is unlike any comet observed before, with the spectrum dominated by bands of CO$^+$. This CO-rich comet is remarkably depleted in water, with a H$_2$O/CO ratio of only $\sim0.32$\% \citep{McKay2019} and an upper limit of H$_2$O/CO < 0.1 \citep{Biver2018}. Further, it has a peculiar abundance of N$_2^+$, with N$_2$/CO estimated variously at 0.05 $\pm$ 0.01 \citep{McKay2019}, 0.06 $\pm$ 0.01 \citep{Opitom2019}, and 0.08 $\pm$ 0.02 \citep{Biver2018}, which had never been seen in such high quantities in comets before. This composition changes our perception of comet formation, as it was previously understood that CO ice is unlikely to freeze out without abundant water ice, which has a higher binding energy than CO \citep{Boogert2015}. Most volatile species would also be expected to deplete with each subsequent passage of this comet within the inner Solar System. Understanding the dynamical history of this comet is therefore of essential importance to understanding the timeline of planetesimal formation in our Solar System. Other potential N$_2$-bearing candidates have been identified, such as C/1908 R1 (Morehouse), C/1961 R1 (Humason), C/1987 P1 (Bradfield), C/2001 Q4 (NEAT) with N$_2$/CO=0.027 \citep{Feldman2015}, and C/2002 VQ94 (LINEAR) for which N$_2$/CO=0.06 \citep{Korsun2008}. A few short-period comets also show an increased N$_2$/CO ratio, such as comet 29P/Schwassmann-Wachmann 1 with N$_2$/CO=0.013 \citep{Ivanova2016}, or comet 67P, for which N$_2$/CO=0.0287 was obtained from in situ measurements \citep{Rubin2020}. Some others present moderately unusual water-poor compositions; for example, interstellar comet 2I/Borisov was measured to have CO/H$_2$O between 35\% and 173\% \citep{Cordiner2020, Bodewits2020}, which is significantly higher than the average cometary values for our Solar System, and could be explained by an unusual formation environment beyond the CO snow line of its own system \citep{Price2021}. Comet C/2009 P1 (Garradd) is another outlier with a CO production rate of 63\% of that of water, yet no N$_2$ was detected \citep{Feaga2014}.
The long-period comets share highly eccentric, almost parabolic orbits, even hyperbolic in the case of C/1908 R1 (Morehouse) and C/2001 Q4 (NEAT), while comet 29P/Schwassmann-Wachmann 1 is likely a captured Oort Cloud object \citep{Neslusan2017}. It is clear that these objects must have spent the majority of their lifetime at high heliocentric distance, or else they would have already lost their volatile content. Unfortunately, attempts to trace back their dynamical history with any degree of certainty are made impossible by the inherently chaotic nature of their motion due to frequent close encounters with the gas giants, which strip comets of their dynamical memory. As a result, C/2016 R2 does not follow a peculiar orbit despite its otherwise peculiar nature, and we cannot trace its dynamical history backwards to a potential shared formation reservoir. The origin of the unusual composition of C/2016 R2 is highly disputed. It may be a fragment of a differentiated object as suggested by \cite{Biver2018}, similar to the CO-rich interstellar comet 2I/Borisov \citep{Cordiner2020}. If CO is absent in the upper layers of an as-of-yet undiscovered differentiated comet, as suggested by \cite{DeSanctis2001}, then it is possible that C/2016 R2 is a fragment of the core of such a comet. \citet{Desch2021} theorize that 1I/'Oumuamua may be an N$_2$ iceberg chipped off from the surface of an ex-Pluto by an impact during a period of dynamical instability, which could be applied to C/2016 R2. Another possibility is that the particular composition of C/2016 R2 simply arises from where it formed in the PSN: perhaps this disk could evolve over time to create exotic compositions at different disk locations in unique proportions, in ``special'' comet-forming annuli. Two studies independently estimated the possible origin of this comet from building blocks formed in a peculiar region of the PSN, near the ice lines of CO and N$_2$. By evaluating the radial transport of volatiles in the PSN, \cite{Mousis2021} found that the peculiar N$_2$/CO ratio of C/2016 R2 could be replicated by agglomeration from particles near the N$_2$ and CO ice lines, within the 10-15 au region. Meanwhile, the CO/H$_2$O ratio would remain deeply depleted inward of the CO ice line, around the 8-11 au region. Cold traps of hypervolatiles in a small, specific region of the PSN could thus explain the peculiar composition of this comet. Similarly, \cite{Price2021} model the effect of drifting solid material in the PPD and find that the ideal location for the formation of CO-rich, H$_2$O-poor objects is beyond the CO ice line. However, this would seem to indicate that more CO-rich comets should exist than have previously been observed. The N$_2$/CO ratio was not a part of their study. Here we explore the potential fates of comets formed from these building blocks using a numerical simulation of early Solar System formation. By examining the dynamical evolution of only the objects formed in a small exotic pocket, or ``Sweet Spot'', of the PSN, which allows for peculiar-composition comets to form, we hope to understand why so few are observed today. In Section \ref{sec:meth}, we describe the model we use to simulate the early Solar System and the dynamical evolution of these small bodies. In Section \ref{sec:res}, we report on these results and examine more closely the fates of all comets, then narrow our interest to comets that would have formed inside the Sweet Spot.
Finally, in Section \ref{sec:con}, we provide our conclusions as to what these fates will be. \section{Methods}\label{sec:meth} We employ the Jumping Neptune scenario from \cite{Nesvorn2015}. We begin with five planets: Jupiter, Saturn, and three ice giants of comparable mass, as described by \cite{Deienno2017}. The third ice giant, henceforth I1, undergoes a series of encounters with Jupiter and Saturn which causes a divergent jump in their semi-major axes before inducing a jump in Neptune's orbit as well. Finally, I1 is ejected onto a hyperbolic orbit, leaving the remaining four planets near their present-day orbits. We tested several alternative simulations, varying the multi-resonance configuration, the distance from the last planet to the inner edge of the disk (1 or 2 au), the mass of the disk (20 or 40 $M_\Earth$), and the inclination of the disk in relation to the plane of the planets, with five different evolutions in each case. We selected the simulations that best satisfy the criteria of similarity with the Solar System today, consistent with the current orbital structure of the trans-Neptunian population, in line with \citet{Deienno2017}; these were all from the 3:2, 3:2, 2:1, 3:2 multi-resonance configuration with a disk of 40 $M_\Earth$. The initial multi-resonant configuration chosen for Jupiter, Saturn, Uranus, Neptune, and I1 thus begins in the 3:2, 3:2, 2:1, 3:2 resonance, as \cite{Baguet2019} find that this configuration is able to place a secular tilt resonance in the area of the cold Edgeworth-Kuiper belt (between 39 and 48 au). Together with the parameters for the location and mass of the disk, this provides us with five scenarios to explore, as defined in Table \ref{table:1}. All these configurations require the existence of the fifth giant planet, with a mass comparable to those of Uranus or Neptune, which is eventually ejected during the instability. This ice planet would have formed within the volatile-rich zone identified by \citet{Mousis2021}. All the planetary evolution simulations were run self-consistently with the five planets and a swarm of 1000 massive particles of the same mass, each 1/1000th of the mass of the disk. The disk extends from its inner edge to 30 au. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Plots/Fig_1.png} \caption{Dynamical evolution of the planets and 1000 comet clones in Scenario 1, with eccentricity as a function of semi-major axis in log scale. Planets are represented in black, with Jupiter, Saturn, I1, Uranus, and Neptune, respectively, from left to right. Blue indicates comets formed between 4 and 8 au. Turquoise indicates comets formed between 8 and 12 au. Green indicates comets formed between 12 and 20 au. Yellow indicates comets formed between 20 and 50 au. After 100 Myr, the area around the giant planets is entirely cleared. Comets formed between 4 and 12 au are the first to be lost, and by 10 Myr almost none remain. By 100 Myr, the comets that remain in our simulation are almost entirely from the 20-50 au population.} \label{fig:stacked} \end{figure} \begin{figure*} \sidecaption \includegraphics[width=12cm]{Plots/Fig_2.png} \caption{Final positions in log scale of the first 1000 clones from Scenario 1, color coded for the last recorded time before they are removed from the simulation (see main text). Violet clones were lost early in the simulation, while yellow clones indicate those that remain.
The black line corresponds to an equal initial and final position: comets above it would have moved away from their initial formation position, and comets below it would have migrated inwards. The area near the orbits of Jupiter and Saturn clears quickly, sending the clones on highly eccentric orbits even before Neptune's migration occurs. We see the relatively stable location of the current-day Edgeworth–Kuiper Belt.} \label{fig:Final} \end{figure*} \begin{table} \caption[]{Initial conditions for the five scenarios explored in this study. The multi-resonance configuration is the 3:2, 3:2, 2:1, 3:2, with the outermost planet at 20.18 au from the Sun. In all cases studied here, we used a disk of 40 $M_\Earth$. (a) Distance of the inner bound of the disk (au), (b) inclination of the disk with respect to the invariable plane ($^\circ$), (c) node of the disk ($^\circ$), and (d) running number, i.e., number of generations used.} \label{table:1} \centering \begin{tabular}{l c c c c} \hline\hline & (a) & (b) & (c) & (d) \\ \hline Scenario 1 & 1 & 1 & 100.2 & 08\\ Scenario 2 & 1 & 0 & 0 & 02 \\ Scenario 3 & 2 & 0 & 0 & 04\\ Scenario 4 & 2 & 1 & 100.2 & 01\\ Scenario 5 & 2 & 1 & 100.2 & 05\\ \hline \end{tabular} \end{table} The present-day Edgeworth–Kuiper belt extends from the orbit of Neptune at 30 au to approximately 50 au from the Sun. However, most of the small bodies of the outer Solar System originated from the region between Jupiter and $\sim$30~au \citep{Gomes2003,2005Natur.435..459T,2008Icar..196..258L,Kaib2008}. With this in mind, we limit our simulations to planetesimals formed in the 4-50 au range. This allows us to neglect the influence of the inner planets, which, having small orbits, would require more integration steps and longer computations for each of our clones. While the CO-rich comet-forming zone could extend to 100 au \citep{Price2021}, the mass depletion of the classical belt is already well explored. We then run a modified \texttt{SWIFT} numerical integrator which uses a pre-recorded evolution of the giant planets \citep{Petit1999} and evolves our system over 100 Myr. The previously calculated evolution of the planets is recorded every 1000 yr or less, and the positions of the planets are interpolated at each time-step necessary for the integration of the motion of the test particles. This ensures that each simulation for a given planetary evolution will use exactly the same planetary evolution track, avoiding divergence due to the intrinsically chaotic nature of planetary motion. Thus, our final planetary system is sure to correctly reproduce the structure of the Solar System. The major difference in planetary behavior between these scenarios is the moment of ejection of I1. This occurs at 5 Myr, 6 Myr, 7 Myr, 8 Myr, and 13 Myr for Scenarios 1, 2, 3, 4, and 5, respectively. For each scenario, we run 50 sets of 1000 massless comet facsimiles or ``clones''. Each clone has randomly generated orbital elements placing it in the same plane as the disk, with semi-major axes varying between 4 au (to avoid the inner Solar System) and 50 au. The clones are distributed with a number density that varies as $r^{-1/2}$, or a surface density that varies as $r^{-3/2}$. We therefore have a total of 250\,000 clones for our five scenarios. Our simulations count a clone as lost if it reaches beyond 10000 au, as we do not yet have the ability to estimate the effects of the Galactic tidal forces.
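As an illustration of this initialization, the following is a minimal sketch of one way to realize a number density $\propto r^{-1/2}$ by inverse-transform sampling. The variable names are hypothetical, and the actual integrations are performed with the modified \texttt{SWIFT} code described above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

A_MIN, A_MAX = 4.0, 50.0  # au: formation-zone limits used in the text
N_CLONES = 1000           # clones per set (50 sets per scenario)

# dN/da ~ a^(-1/2), i.e. a surface density ~ a^(-3/2), has the cumulative
# distribution F(a) = (sqrt(a) - sqrt(A_MIN)) / (sqrt(A_MAX) - sqrt(A_MIN)),
# which can be inverted analytically:
u = rng.uniform(size=N_CLONES)
a = (np.sqrt(A_MIN) + u * (np.sqrt(A_MAX) - np.sqrt(A_MIN))) ** 2

# Removal thresholds applied during the integration (see main text):
R_LOST = 1.0e4       # au: beyond this, Galactic tides are not modelled
R_DESTROYED = 0.005  # au: clones passing this close to the Sun are removed
\end{verbatim}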
A clone is likewise removed from the integration if it moves within 0.005 au of the Sun or collides with a planet, as it is then most likely destroyed. \section{Results and Discussion}\label{sec:res} We examine the orbital elements of each clone, identified by its formation location (initial semi-major axis). This is shown in Fig. \ref{fig:stacked}. Within the first 1 Myr, 21\% of all clones are lost from the simulation. This number rises to nearly half (49\%) after 10 Myr. By the end of the 100 Myr simulation run, we have lost three quarters (76\%) of our initial population. Only a quarter (24\%) of our clones remain. A snapshot of the first 1000 clones in our first scenario is shown in Fig.~\ref{fig:Final} with their initial and final positions. Each clone is color coded for the moment it is lost, with earlier losses in purple and those that remain at the end shown in yellow. The major loss of clones occurs before $\sim$10 Myr: after this time, the area around the giant planets ($\lesssim$15 au) is entirely cleared. It is important to note here that if we had used a four-planet model, based on the current planetary orbits, Saturn would play the role of I1 and clear this region, leading to the same outcome. We examine the percentage of clones lost in our simulations more closely for each 1 au annulus from 4 au to 50 au, as shown in Fig.~\ref{fig:PerDis}. We see that for every 1 au annulus between 4 and 10 au, over 95\% of the clones are lost before the end of the 100 Myr simulation in each scenario. This number dips to 90\% around 12 au. Then, between 12 and 20 au, each scenario still loses a minimum of 80\% of its clones within the simulation time. In comparison, annuli beyond 40 au (the current location of the Classical Edgeworth–Kuiper Belt) only lose half their clones, showing a zone that is relatively stable, containing objects that do not move far from where they are formed. The behavior of these clones is consistent between scenarios and independent of the moment of ejection of I1. \begin{figure} \centering \includegraphics[width=\linewidth]{Plots/Fig_3.png} \caption{Percentage of clones lost per formation location for each of the five scenarios. The gray zone indicates the limitation of our simulation. The blue zone indicates the N$_2$/CO enrichment zone as predicted by \cite{Mousis2021}, while the overlaid green zone indicates the location of the ideal CO/H$_2$O enrichment zone.} \label{fig:PerDis} \end{figure} Based on the ranges proposed by \cite{Price2021} and \cite{Mousis2021}, we examine different formation zones. \cite{Price2021} suggest a wide range, arguing that the CO/H$_2$O ice-enrichment zone is likely between 20 and 100 au, though these authors do not investigate an N$_2$/CO ice-enrichment zone. As the CO/H$_2$O ice-enrichment zone evolves over time, and without seeing how N$_2$ would evolve in the simulations of \cite{Price2021}, we cannot determine where a specific C/2016 R2 formation zone could occur. Given that CO and N$_2$ have similar sublimation temperatures, the two ice lines should lie near each other, making the 20-30 au annulus an area to explore. Meanwhile, the results of \cite{Mousis2021} would indicate a narrow area, as they find a CO/H$_2$O ice-enrichment zone only $\sim$1-2 au wide, near 10 au. Their N$_2$/CO ice-enrichment zone is narrower still, seemingly less than 1 au. The overlapping formation zone for a C/2016 R2-like comet would therefore be incredibly narrow.
We examine three C/2016 R2-forming annuli: a wide one between 8 and 20 au; a narrow one of only 8-10 au; and the narrowest one between 10 and 11 au, that is, the Sweet Spot. Interestingly, I1 is initialized and subsequently ejected from this narrow region as well. \begin{figure*} \centering \includegraphics[width=\textwidth]{Plots/Fig_4.jpg} \caption{Final semi-major axes and eccentricities (left) and perihelion distances (right) of all clones from all simulations remaining after 100 Myr. Comets formed in the 8-11 au zone are shown in black. Any remaining object with a perihelion $q<35$ au will likely be sent to the Oort Cloud by Neptune, or will lose its predominantly hypervolatile ices to vacuum via insolation heating \citep{Lisse2022}.} \label{fig:RemQ} \end{figure*} The resulting statistics are shown in Table \ref{table:2}. On average, each simulation loses 75\% of its clones by 100 Myr: 90\% of all clones formed in the 8-20 au range, 97\% of all clones formed in the 8-10 au range, and $\sim80\%$ of those formed in the 20-30 au range. Consistently, each simulation loses 96\% of all comets formed between 10 and 11 au. If we examine the region of clones initialized between 8 and 20 au, we find that half the clones are already lost by 5 Myr, with two-thirds of the clones ejected after 15 Myr. If we narrow that region further to 8-11 au, we find that 60\% of the clones formed in this region are ejected in the first 1 Myr and 90\% after 10 Myr. A handful of clones ($\sim$0.1\%) are lost to collisions with the giant planets. Depending on the chronology, these could help account for the delivery of the building blocks of the Galilean and Saturnian satellites necessary for their formation \citep{Ronnet2018, Anderson2021}. In each simulation, irrespective of the scenario, only $\sim$1\% of all remaining clones were from the initial 8-11 au population; these are shown in black in Fig.~\ref{fig:RemQ}. They will either find themselves on highly eccentric orbits, be absorbed into the Edgeworth–Kuiper belt, or join the scattered disk. These clones seem to be evenly distributed within the population of remaining comets. The 10-11 au population makes up only 0.4\% of all surviving clones. \begin{table} \caption[]{Statistical loss outcomes of each of the scenarios after 100 Myr for each formation zone.} \label{table:2} \centering \begin{tabular}{lccccc} \hline\hline & Total loss & 8-10 au & 10-11 au & 8-20 au & 20-30 au\\ \hline S1 & 73$\%$ & 97$\%$ & 96$\%$ & 88$\%$ & 76$\%$ \\ S2 & 79$\%$ & 97$\%$ & 96$\%$ & 90$\%$ & 80$\%$ \\ S3 & 75$\%$ & 98$\%$ & 96$\%$ & 91$\%$ & 79$\%$ \\ S4 & 77$\%$ & 97$\%$ & 96$\%$ & 90$\%$ & 79$\%$ \\ S5 & 73$\%$ & 97$\%$ & 96$\%$ & 89$\%$ & 76$\%$ \\ \hline \end{tabular} \end{table} We must now estimate how many C/2016 R2-like comets could be captured by the Oort Cloud, so as to then evolve dynamically over the next 4 Gyr and return to visit the inner Solar System on C/2016 R2-like orbits. While it is tempting to say that the cometesimals lost from our simulation were ejected from our Solar System, a further investigation of the orbital elements at the moment they were removed from the simulation is required in order to estimate their capture rate by the Oort Cloud. This rate is as yet poorly constrained, as it depends greatly on the timeline of evolution coinciding with our Sun's ejection from its parent cluster.
Further numerical simulations are required in order to investigate the behavior of these comets beyond the 10000 au cutoff, although the effects of Galactic tides 4 Gyr ago are still unknown. Nevertheless, we can make a safe estimate of which comets are bound to the Solar System from the energy $z$ of the cometesimals at the moment they are lost: \begin{equation} z = i_\alpha \frac{GM_\Sun}{2a}, \end{equation} \noindent where $i_\alpha$ is 1, 0, and -1 for $e>1$, $e=1$, and $e<1$, respectively, $M_\Sun$ is the mass of the Sun, and $G$ is the gravitational constant. We consider an object captured by the Oort Cloud if its final semi-major axis is $a>10000$ au and its final energy is $z/GM_\Sun \leq 0.00005$ at the moment it is lost from our simulation. Otherwise, we consider that it has a truly hyperbolic orbit and is seen as detached from the Solar System. Under these conditions, 11\% of all our clones have potentially reached the Oort Cloud. The distribution of contributions to the Oort Cloud from each formation zone can be seen in Fig. \ref{fig:Families}. Of the cometesimals formed between 8 and 11 au, 13\% may have reached the Oort Cloud, which represents 12\% of all the cometesimals potentially captured. The 20-30 au formation zone contributes 12\% (making up 22\% of the total number of those captured); the 30-40 au formation zone contributes 9\% (making up 14\% of the total number of those captured); and the 40-50 au formation zone contributes only 5\% (making up 6\% of the total number of those captured). If this estimate is accurate, we should have far more CO-rich comets in the Oort Cloud, between 10\% and 20\% of all long-period comets we observe today. However, this is not the case. As the timeline of evolution coinciding with our Sun's ejection from its parent cluster is still poorly constrained, we re-estimate our capture conditions. \citet{Zwart2021} estimate that, so long as the Sun is a cluster member, clones with an eccentricity of $e > 0.98$ and a semi-major axis of $a > 2400$ au would be vulnerable to being stripped by the cluster potential or by passing stars. This could not apply to the entire course of our simulation, as the Oort Cloud would be unable to form if this were the case. If we align ourselves with \citet{Zwart2021} and consider that our Sun is still within its parent cluster for the first 10~Myr of our simulation, we consider a clone captured if it fulfills the first criteria (final $a>10000$ au and final $z/GM_\Sun \leq 0.00005$) along with a new criterion, $e<0.98$, if this event occurs within the first 10~Myr of the simulation. This yields drastically different results, as seen in Fig. \ref{fig:Families}. Under these conditions, of the cometesimals formed between 8 and 11 au, only 1\% may have reached the Oort Cloud, which represents 4\% of all the cometesimals potentially captured. The 20-30 au formation zone contributes 7\% (making up 32\% of the total number of those captured); the 30-40 au formation zone contributes 8\% (making up 33\% of the total number of those captured); and the 40-50 au formation zone contributes only 5\% (making up 15\% of the total number of those captured). These results are consistent with our current understanding of the chronology of Oort Cloud formation \citep{Zwart2021}, whereby it is estimated that the bulk ($\sim$70\%) of the Oort Cloud material originates from the 15-40 au region, and are near what we observe today.
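For concreteness, this capture bookkeeping can be sketched in a few lines of Python. The function and variable names are hypothetical; the arrays hold the final orbital elements of each clone and the time at which it left the simulation:
\begin{verbatim}
import numpy as np

def z_over_GM(a, e):
    # Energy criterion z = i_alpha * GM_sun / (2a) from the text,
    # divided by G*M_sun, with i_alpha = +1, 0, -1 for e > 1, e = 1,
    # and e < 1, respectively.
    i_alpha = np.sign(e - 1.0)
    return i_alpha / (2.0 * np.abs(a))

def captured_by_oort_cloud(a, e, t_lost_myr, cluster_until_myr=10.0):
    # Capture test: final a > 10000 au and z/GM <= 5e-5, with the
    # additional requirement e < 0.98 for clones lost while the Sun
    # is still in its parent cluster (first 10 Myr).
    bound_enough = (a > 1.0e4) & (z_over_GM(a, e) <= 5.0e-5)
    cluster_safe = (t_lost_myr > cluster_until_myr) | (e < 0.98)
    return bound_enough & cluster_safe
\end{verbatim}
Setting \texttt{cluster\_until\_myr = 0} recovers the first set of capture conditions, without the cluster-stripping criterion.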
\begin{figure} \centering \includegraphics[width=\linewidth]{Plots/Fig_5.png} \caption{Probability of the fates of cometesimals from each formation zone at the end of 100~Myr if the Sun has left its parent cluster (upper panel) and if the Sun is still in its parent cluster for the first 10~Myr of our simulation (lower panel). In orange we see the cometesimals ejected from the Solar System, in blue those captured by the Oort Cloud, and in green those remaining in the simulation. If the Sun has left its parent cluster before the beginning of the simulation, each region inside $\sim$35~au would contribute $\sim$11\% of its clones to the Oort Cloud. If the Sun is still in its parent cluster for the first 10~Myr of the simulation, only 1\% of clones formed between 8 and 11 au are likely to have reached the Oort Cloud, representing 4\% of all the cometesimals potentially captured.} \label{fig:Families} \end{figure} \section{Conclusions}\label{sec:con} We find that the majority of objects formed between Saturn and the N$_2$ ice line are ejected early and rapidly in the simulation, meaning that even by the time the Jumping Neptune scenario occurs, the clones are already lost. This could explain the lack of N$_2$-rich, CO-rich, and H$_2$O-depleted comets: these were formed in a very narrow region, and that region empties rapidly because of the influence of the giant planets. \citet{Zwart2021} call this procession of comet ejections from the 5-11 au zone the `Conveyor Belt', which aptly describes the phenomenon we see here. Objects formed in this region would be ejected early from the Solar System, in less than $\sim$10 Myr, and be unlikely to join the Oort Cloud. Therefore, if N$_2$-rich, CO-rich, and H$_2$O-depleted comets were to have formed under 11 au, $>$90\% of this population would have been ejected from the Solar System without having been captured by the Oort Cloud. As the \citet{Price2021} formation zone for CO enrichment is tens of astronomical units in width, suggesting that nearly half of all observable comets ($\sim$40\%) would be CO-rich, the Oort Cloud should be full of comets of this type. As this is not the case, we should rule out this model in favor of that of \citet{Mousis2021}. It should also be noted that if this mechanism is indeed the one by which the comets formed and were subsequently captured, then C/2016 R2-like comets may be some of the first long-period comets to have formed, and the earliest to reach the Oort Cloud. There, objects with a nucleus larger than 5 km could survive thousands of orbits, as well as the hypervolatile loss upon possible perihelion passages. This would indicate that C/2016 R2 represents one of the first Oort Cloud objects, which could provide a direct measurement of CO/N$_2$/CH$_4$ ratios in the PSN \citep{Steckloff2021,Davidsson2021,Prialnik2021,Lisse2022}, and would explain why so few N$_2$-rich, CO-rich, and H$_2$O-depleted comets have been observed today. We must also consider the possibility that many of the remaining cometesimals may have lost their bulk hypervolatile species in the billions of years since their formation, or even within the time frame of our simulation. Pure hypervolatile ices are only stable on gigayear timescales beyond a heliocentric distance of 100 au \citep{Lisse2021}.
If this ejection period were to take place at the same time as the sublimative period of the Edgeworth–Kuiper Belt \citep{Lisse2021, Steckloff2021}, then they would only have $\sim$20 Myr to be placed on a trajectory toward the Oort Cloud before losing their predominantly hypervolatile ices to vacuum via insolation heating \citep{Lisse2022}. When looking at the sublimation chronology and the Oort Cloud formation chronology together, we have a small window of only $\sim$10 Myr in which an object could be ejected from the giant planet region and inserted into the Oort Cloud. Our results are in line with the hypothesis of \citet{Lisse2022}, which states that interstellar object 2I/Borisov was ejected early from its parent system. However, these chronologies (the Sun's ejection from its parent cluster; the sublimative period of the Edgeworth–Kuiper Belt; the Jumping Jupiter/Neptune scenario) are still poorly constrained, even more so when considering how and when these timelines overlap. The window for the formation of C/2016 R2 in the Conveyor Belt region and its subsequent capture by the Oort Cloud could be longer or shorter than our 10 Myr estimate. We understand that a more quantitative simulation should account for the effect of Galactic tides and perturbations from passing stars from the Sun's birth cluster. However, such an endeavor is far beyond the scope of the present work. The mass distribution of the Galaxy and the position of the Sun in it 4 billion years ago are still unknown, as are the initial mass function and orbital distribution of the Sun's birth cluster. The dynamics of the Solar System, in particular in its infancy, are inherently chaotic. Therefore, one would need to run a large number of simulations, varying the Galactic potential and the influence of the Sun's birth cluster, in order to obtain only average statistics of possible events, given that it is unclear whether our Solar System is generic or peculiar. Another possible explanation for the composition of C/2016 R2 was presented by \citet{Desch2021}, who suggest that C/2016 R2 could be a fragment of a differentiated KBO surface, created by an impact during the period of energetic collisions of the 2:1 Jupiter:Saturn resonance epoch. If the CO-rich formation zone is further out, as suggested by \citet{Price2021}, then the objects formed there would have had longer to form, undergo impacts, and travel to the Oort Cloud. Further studies of the chronology of the formation of these objects and the time frame for the dynamical instability should be explored in order to investigate the likelihood of these scenarios. A geochemical study of how these objects form, accompanied by a detailed isotopic and chemical analysis of their current composition, would also be beneficial. Understanding whether a C/2016 R2-like object can form with its peculiar composition in situ in the disk, or whether the composition arises from the differentiation of a Pluto-like object, would shed light on which of these processes is more likely. This also allows for the existence of possible exotic comets, with peculiar enrichments stemming from unique composition pockets in the disk. Hypothetically, the ice line of each species would create a small enrichment zone, producing small bodies dominated by this species rather than by H$_2$O. By examining the ice lines of the volatile molecules, we can estimate the probabilities of finding comets with each composition.
\begin{acknowledgements} We acknowledge the region of Bourgogne-Franche-Comté for their funding of the \texttt{DIAZOTE} project and this work. The project leading to this publication has received funding from the Excellence Initiative of Aix-Marseille Université - A*Midex, a French “Investissements d’Avenir programme” AMX-21-IET-018. \end{acknowledgements} \bibliographystyle{aa}
This simultaneously CO- and N$_2$-rich and water-poor composition, along with the absence of the usual neutrals seen in most cometary spectra, makes C/2016 R2 a unique and intriguing specimen, the only one of its kind to ever be observed. Such a small sample size makes it impossible to draw conclusions as to a shared formation reservoir. The long-period comets share highly eccentric, almost parabolic orbits, even hyperbolic in the case of C/1908 R1 (Morehouse) and C/2001 Q4 (NEAT), while comet 29P/Schwassmann-Wachmann 1 is likely a captured Oort Cloud object \citep{Neslusan2017}. It is clear these objects must have spent the majority of their lifetime at high heliocentric distance, else they would have already lost their volatile content. Unfortunately, attempts to trace back their dynamical history with any degree of certainty are made impossible by the inherently chaotic nature of their motion due to frequent close encounters with the gas giants, which strip comets of their dynamical memory. As a result, C/2016 R2 has no peculiar orbit to match its otherwise peculiar nature, and we cannot trace its dynamical history backwards to a potential shared formation reservoir. The origin of the unusual composition of C/2016 R2 is highly disputed. It may be a fragment of a differentiated object as suggested by \cite{Biver2018}, similar to the CO-rich interstellar comet 2I/Borisov \citep{Cordiner2020}. If CO is absent in the upper layers of an as-of-yet undiscovered differentiated comet, as suggested by \cite{DeSanctis2001}, then it is possible that C/2016 R2 is a fragment of the core of such a comet. \citet{Desch2021} theorize that 1I/'Oumuamua may be an N$_2$ iceberg chipped off from the surface of an ex-Pluto by an impact during a period of dynamical instability, which could also apply to C/2016 R2. Another possibility is that the particular composition of C/2016 R2 simply arises from where it formed in the PSN: perhaps the disk could evolve over time to create exotic compositions at different disk locations in unique proportions, in ``special'' comet-forming annuli. Two studies independently estimated the possible origin of this comet from building blocks formed in a peculiar region of the PSN, near the ice lines of CO and N$_2$. By evaluating the radial transport of volatiles in the PSN, \cite{Mousis2021} found that the peculiar N$_2$/CO ratio of C/2016 R2 could be replicated by agglomeration from particles near the N$_2$ and CO ice lines, within the 10-15 au region. Meanwhile, the CO/H$_2$O ratio would remain deeply depleted inward of the CO ice line, around the 8-11 au region. Cold traps of hypervolatiles in a small, specific region of the PSN could therefore explain the peculiar composition of this comet. Similarly, \cite{Price2021} model the effect of drifting solid material in the PPD and find that the ideal location for the formation of CO-rich, H$_2$O-poor objects is beyond the CO ice line. However, this would seem to indicate that more CO-rich comets should exist than have previously been observed. The N$_2$/CO ratio was not a part of their study. Here we explore the potential fates of comets formed from these building blocks using a numerical simulation of early Solar System formation. By examining the dynamical evolution of only the objects formed in a small exotic pocket, or ``Sweet Spot'', of the PSN, which allows for peculiar-composition comets to form, we hope to understand why so few are observed today.
In Section \ref{sec:meth}, we describe the model we use to simulate the early Solar System and the dynamical evolution of these small bodies. In Section \ref{sec:res}, we report on the results and examine more closely the fates of all comets, then narrow our interest to comets that would have formed inside the Sweet Spot. Finally, in Section \ref{sec:con}, we provide our conclusions as to these fates. \section{Methods}\label{sec:meth} We employ the Jumping Neptune scenario from \cite{Nesvorn2015}. We begin with five planets: Jupiter, Saturn, and three ice giants of comparable mass, as described by \cite{Deienno2017}. The third ice giant, henceforth I1, undergoes a series of encounters with Jupiter and Saturn that causes a divergent jump in their semi-major axes before inducing a jump in Neptune's orbit as well. Finally, I1 is ejected onto a hyperbolic orbit, leaving the remaining four planets near their present-day orbits. We tested several alternative simulations, varying the multi-resonance configuration, the distance from the last planet to the inner edge of the disk (1 or 2 au), the mass of the disk (20 or 40 $M_\Earth$), and the inclination of the disk with respect to the plane of the planets, with five different evolutions in each case. We selected the simulations that best satisfy the criterion of similarity with the Solar System today, consistent with the current orbital structure of the trans-Neptunian population, in line with \citet{Deienno2017}; these were all from the 3:2, 3:2, 2:1, 3:2 multi-resonance configuration, with a disk of 40 $M_\Earth$. We choose this initial multi-resonant configuration for Jupiter, Saturn, Uranus, Neptune, and I1, along with the corresponding parameters for the location and mass of the disk, because \cite{Baguet2019} find it is able to place a secular tilt resonance in the area of the cold Edgeworth–Kuiper belt (between 39 and 48 au). This provides us with five scenarios to explore, as defined in Table \ref{table:1}. All these configurations require the existence of the fifth giant planet, with a mass comparable to those of Uranus or Neptune, which is eventually ejected during the instability. This ice planet would have formed within the volatile-rich zone identified by \citet{Mousis2021}. All the planetary evolution simulations were run self-consistently with the five planets and a swarm of 1000 massive particles of equal mass, each 1/1000th of the mass of the disk. The disk extends from its inner edge to 30 au. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Plots/Fig_1.png} \caption{Dynamical evolution of the planets and 1000 comet clones in Scenario 1, with eccentricity as a function of semi-major axis in log scale. Planets are represented in black: Jupiter, Saturn, I1, Uranus, and Neptune, respectively, from left to right. Blue indicates comets formed between 4 and 8 au. Turquoise indicates comets formed between 8 and 12 au. Green indicates comets formed between 12 and 20 au. Yellow indicates comets formed between 20 and 50 au. After 100 Myr, the area around the giant planets is entirely cleared. Comets formed between 4 and 12 au are the first to be lost, and by 10 Myr almost none remain.
By 100 Myr, the comets that remain in our simulation are almost entirely from the 20-50 au population.} \label{fig:stacked} \end{figure} \begin{figure*} \sidecaption \includegraphics[width=12cm]{Plots/Fig_2.png} \caption{Final positions in log scale of the first 1000 clones from Scenario 1, color coded by the last recorded time before they are removed from the simulation (see main text). Violet clones were lost early in the simulation, while yellow clones are those that remain. The black line corresponds to equal initial and final positions: comets above it have moved away from their initial formation position, and comets below it have migrated inwards. The area near the orbits of Jupiter and Saturn clears quickly, sending the clones onto highly eccentric orbits even before Neptune's migration occurs. We see the relatively stable location of the current-day Edgeworth–Kuiper Belt.} \label{fig:Final} \end{figure*} \begin{table} \caption[]{Initial conditions for the five scenarios explored in this study. The multi-resonance configuration is the 3:2, 3:2, 2:1, 3:2, with the outermost planet at 20.18 au from the Sun. In all cases studied here, we used a disk of 40 $M_\Earth$. (a) Distance of the inner bound of the disk (au), (b) inclination of the disk with respect to the invariable plane ($^\circ$), (c) node of the disk ($^\circ$), and (d) running number, i.e., number of generations used.} \label{table:1} \centering \begin{tabular}{l c c c c} \hline\hline & (a) & (b) & (c) & (d) \\ \hline Scenario 1 & 1 & 1 & 100.2 & 08\\ Scenario 2 & 1 & 0 & 0 & 02 \\ Scenario 3 & 2 & 0 & 0 & 04\\ Scenario 4 & 2 & 1 & 100.2 & 01\\ Scenario 5 & 2 & 1 & 100.2 & 05\\ \hline \end{tabular} \end{table} The present-day Edgeworth–Kuiper belt extends from the orbit of Neptune at 30 au to approximately 50 au from the Sun. However, most of the small bodies of the outer Solar System originated from the region between Jupiter and $\sim$30~au \citep{Gomes2003,2005Natur.435..459T,2008Icar..196..258L,Kaib2008}. With this in mind, we limit our simulations to planetesimals formed in the 4-50 au range. This allows us to neglect the influence of the inner planets, which, having small orbits, would require more integration steps and longer computations for each of our clones. While the CO-rich comet-forming zone could extend to 100 au \citep{Price2021}, the mass depletion of the classical belt is already well explored. We then run a modified \texttt{SWIFT} numerical integrator which uses a pre-recorded evolution of the giant planets \citep{Petit1999} and evolves our system over 100 Myr. The previously calculated evolution of the planets is recorded every 1000 yr or less, and the positions of the planets are interpolated at each time step necessary for the integration of the motion of the test particles. This ensures that every simulation for a given planetary evolution uses exactly the same planetary evolution track, avoiding divergence due to the intrinsically chaotic nature of planetary motion. Our final planetary system is thus guaranteed to reproduce the structure of the Solar System. The major difference in planetary behavior between these scenarios is the moment of ejection of I1, which occurs at 5 Myr, 6 Myr, 7 Myr, 8 Myr, and 13 Myr for Scenarios 1, 2, 3, 4, and 5, respectively. For each scenario, we run 50 sets of 1000 massless comet facsimiles, or ``clones''.
Each clone has randomly generated orbital elements placing it in the same plane as the disk, with semi-major axes varying between 4 au (to avoid the inner Solar System) and 50 au. The clones are distributed with a number of clones per unit semi-major axis that varies as $r^{-1/2}$, corresponding to a surface density that varies as $r^{-3/2}$. We therefore have a total of 250\,000 clones for our five scenarios. Our simulations count a clone as lost if it reaches beyond 10000 au, as we do not yet have the ability to estimate the effects of the Galactic tidal forces. If a clone moves to within 0.005 au of the Sun, or collides with a planet, it is also removed from the integration, as it is most likely destroyed. A minimal sketch of this initialization and of the removal criteria is given below.
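For concreteness, the following Python fragment illustrates one way to draw the clones' initial semi-major axes by inverse-CDF sampling of the $r^{-1/2}$ distribution and to apply the removal criteria; the function names and the use of \texttt{numpy} are our own illustrative choices and are not part of the actual \texttt{SWIFT} setup.
\begin{verbatim}
import numpy as np

R_MIN, R_MAX = 4.0, 50.0              # formation region (au)
R_EJECT, R_DESTROY = 10000.0, 0.005   # removal thresholds (au)

def sample_semi_major_axes(n, seed=None):
    # dN/dr ~ r**-0.5 (surface density ~ r**-1.5); the CDF is
    # proportional to sqrt(r), so we can invert it analytically.
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    s_min, s_max = np.sqrt(R_MIN), np.sqrt(R_MAX)
    return (s_min + u * (s_max - s_min)) ** 2

def is_removed(r_helio, collided_with_planet):
    # A clone leaves the integration if it strays beyond 10000 au,
    # dives within 0.005 au of the Sun, or hits a planet.
    return (r_helio > R_EJECT) or (r_helio < R_DESTROY) \
        or collided_with_planet

a = sample_semi_major_axes(1000, seed=42)  # one set of 1000 clones
\end{verbatim}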
\section{Results and Discussion}\label{sec:res} We examine the orbital elements of each clone, identified by its formation location (initial semi-major axis), as shown in Fig. \ref{fig:stacked}. Within the first 1 Myr, 21\% of all clones are lost from the simulation. This number rises to nearly half (49\%) after 10 Myr. By the end of the 100 Myr simulation run, we have lost three quarters (76\%) of our initial population, and only a quarter (24\%) of our clones remain. A snapshot of the first 1000 clones in our first scenario is shown in Fig.~\ref{fig:Final} with their initial and final positions. Each clone is color coded by the moment it is lost, with earlier losses in purple and those that remain in the end shown in yellow. The major loss of clones occurs before $\sim$10 Myr: after this time, the area around the giant planets ($\lesssim$15 au) is entirely cleared. It is important to note here that if we had used a four-planet model, based on current planetary orbits, Saturn would play the role of I1 and clear this region, leading to the same outcome. We examine the percentage of clones lost in our simulations more closely for each 1 au annulus from 4 au to 50 au, as shown in Fig.~\ref{fig:PerDis}. We see that for every 1 au annulus between 4 and 10 au, over 95\% of the clones are lost before the end of the 100 Myr in each scenario. This number dips to 90\% around 12 au. Then, between 12 and 20 au, each scenario still loses a minimum of 80\% of its clones within the simulation time. In comparison, annuli beyond 40 au, the current location of the Classical Edgeworth–Kuiper Belt, only lose half their clones, revealing a relatively stable zone containing objects that do not move far from where they are formed. The behavior of these clones is consistent between scenarios and independent of the moment of ejection of I1. \begin{figure} \centering \includegraphics[width=\linewidth]{Plots/Fig_3.png} \caption{Percentage of clones lost per formation location for each of the five scenarios. The gray zone indicates the limitation of our simulation. The blue zone indicates the N$_2$/CO enrichment zone as predicted by \cite{Mousis2021}, while the overlaid green zone indicates the location of the ideal CO/H$_2$O enrichment zone.} \label{fig:PerDis} \end{figure} Based on the ranges proposed by \cite{Price2021} and \cite{Mousis2021}, we examine different formation zones. \cite{Price2021} suggest a wide range, arguing that the CO/H$_2$O ice-enrichment zone is likely between 20 and 100 au, though these authors do not investigate a N$_2$/CO ice-enrichment zone. As the CO/H$_2$O ice-enrichment zone evolves over time, and without seeing how N$_2$ would evolve in the simulations of \cite{Price2021}, we cannot determine where a specific C/2016 R2 formation zone could occur. Given that both CO and N$_2$ have similar sublimation temperatures, their ice lines should lie near each other, making the 20-30 au annulus an area worth exploring. Meanwhile, the results of \cite{Mousis2021} indicate a narrow area, as they find a CO/H$_2$O ice-enrichment zone $\sim$1-2 au wide, near 10 au. Their N$_2$/CO ice-enrichment zone is narrower still, seemingly less than 1 au. The overlapping formation zone for a C/2016 R2-like comet would therefore be extremely narrow. We examine a wide C/2016 R2-forming annulus between 8 and 20 au; a narrow one, 8-10 au; and the narrowest one, between 10 and 11 au, that is, the Sweet Spot. Interestingly, I1 is initialized in, and subsequently ejected from, this narrow region as well. \begin{figure*} \centering \includegraphics[width=\textwidth]{Plots/Fig_4.jpg} \caption{Final semi-major axes and eccentricities (left) and perihelion distances (right) of all clones from all simulations remaining after 100 Myr. Comets formed in the 8-11 au zone are shown in black. Any remaining object with a perihelion $q<35$ au will likely be sent to the Oort Cloud by Neptune, or will lose the bulk of its hypervolatile ices to vacuum via insolation heating \citep{Lisse2022}.} \label{fig:RemQ} \end{figure*} The resulting statistics are shown in Table \ref{table:2}. On average, each simulation loses 75\% of its clones by 100 Myr: 90\% of all clones formed in the 8-20 au range, 97\% of all clones formed in the 8-10 au range, and $\sim$80\% of those formed in the 20-30 au range. Consistently, each simulation loses 96\% of all comets formed between 10 and 11 au. If we examine the region of clones initialized between 8 and 20 au, we find that half the clones are already lost by 5 Myr, with two-thirds of the clones ejected by 15 Myr. If we narrow that region further to 8-11 au, we find that 60\% of the clones formed there are ejected in the first 1 Myr, and 90\% by 10 Myr. A handful of clones ($\sim$0.1\%) are lost to collisions with the giant planets. Depending on the chronology, these could help account for the delivery of the building blocks of the Galilean and Saturnian satellites necessary for their formation \citep{Ronnet2018, Anderson2021}. In each simulation, irrespective of the scenario, only $\sim$1\% of all remaining clones were from the initial 8-11 au population; these are shown in black in Fig.~\ref{fig:RemQ}. They will either find themselves on highly eccentric orbits, be absorbed into the Edgeworth–Kuiper belt, or join the scattered disk. These clones seem to be evenly distributed within the population of remaining comets. The 10-11 au population makes up only 0.4\% of all surviving clones. \begin{table} \caption[]{Statistical loss outcomes of each of the scenarios after 100 Myr for each formation zone.} \label{table:2} \centering \begin{tabular}{lccccc} \hline\hline & Total loss & 8-10 au & 10-11 au & 8-20 au & 20-30 au\\ \hline S1 & 73$\%$ & 97$\%$ & 96$\%$ & 88$\%$ & 76$\%$ \\ S2 & 79$\%$ & 97$\%$ & 96$\%$ & 90$\%$ & 80$\%$ \\ S3 & 75$\%$ & 98$\%$ & 96$\%$ & 91$\%$ & 79$\%$ \\ S4 & 77$\%$ & 97$\%$ & 96$\%$ & 90$\%$ & 79$\%$ \\ S5 & 73$\%$ & 97$\%$ & 96$\%$ & 89$\%$ & 76$\%$ \\ \hline \end{tabular} \end{table} We must now estimate how many C/2016 R2-like comets could be captured by the Oort Cloud, so as to then evolve dynamically over the next 4 Gyr and return to visit the inner Solar System on C/2016 R2-like orbits.
While it is tempting to say that the cometesimals lost from our simulation were ejected from our Solar System, a further investigation of the orbital elements at the moment they were removed from the simulation is required in order to estimate their capture rate by the Oort Cloud. This rate is as yet poorly constrained, as it depends greatly on how the timeline of this evolution coincides with the Sun's ejection from its parent cluster. Further numerical simulations are required in order to investigate the behavior of these comets beyond the 10000 au cutoff, although the effects of Galactic tides 4 Gyr ago are still unknown. Nevertheless, we can make a safe estimate of which comets are bound to the Solar System from the energy $z$ of the cometesimals at the moment they are lost: \begin{equation} z = i_\alpha \frac{GM_\Sun}{2a}, \end{equation} \noindent where $i_\alpha$ is 1, 0, and -1 for $e>1$, $e=1$, and $e<1$, respectively, $M_\Sun$ is the mass of the Sun, and $G$ the gravitational constant. We consider an object captured by the Oort Cloud if its final semi-major axis is $a>10000$ au and its final energy is $z/GM_\Sun \leq 0.00005$ at the moment it is lost from our simulation. Otherwise, we consider it to be on a truly hyperbolic orbit, detached from the Solar System. Under these conditions, 11\% of all our clones have potentially reached the Oort Cloud. The distribution of contributions to the Oort Cloud from each formation zone can be seen in Fig. \ref{fig:Families}. Of the cometesimals formed between 8 and 11 au, 13\% may have reached the Oort Cloud, which represents 12\% of all the cometesimals potentially captured. The 20-30 au formation zone contributes 12\% (making up 22\% of the total number of those captured); the 30-40 au formation zone contributes 9\% (making up 14\% of the total number of those captured); and the 40-50 au formation zone contributes only 5\% (making up 6\% of the total number of those captured). If this estimate is accurate, we should have far more CO-rich comets in the Oort Cloud, making up between 10\% and 20\% of all long-period comets we observe today. However, this is not the case. As the timeline of evolution coinciding with our Sun's ejection from its parent cluster is still poorly constrained, we re-estimate our capture conditions. \citet{Zwart2021} estimate that, so long as the Sun is a cluster member, clones with an eccentricity of $e > 0.98$ and a semi-major axis of $a > 2400$ au would be vulnerable to being stripped by the cluster potential or by passing stars. This cannot apply to the entire course of our simulation, as the Oort Cloud would otherwise be unable to form. If we align ourselves with \citet{Zwart2021} and consider that our Sun is still within its parent cluster for the first 10~Myr of our simulation, we consider a clone captured if it fulfills the first criterion (final $a>10000$ au and final $z/GM_\Sun \leq 0.00005$) along with a new criterion, $e<0.98$, if the loss occurs within the first 10~Myr of the simulation. A minimal sketch of this classification is given below.
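The following Python fragment summarizes the combined capture criterion; the function names are hypothetical, and the snippet is a sketch of the bookkeeping described above rather than the analysis code itself.
\begin{verbatim}
import numpy as np

A_OORT = 10000.0   # minimum final semi-major axis for capture (au)
Z_MAX = 0.00005    # maximum final z / (G M_sun), in 1/au
E_STRIP = 0.98     # Zwart et al. (2021) cluster-stripping threshold
T_CLUSTER = 10.0   # time the Sun spends in its birth cluster (Myr)

def energy_over_gm(a, e):
    # z / (G M_sun) = i_alpha / (2 a), with i_alpha = sign(e - 1).
    return np.sign(e - 1.0) / (2.0 * a)

def captured_by_oort_cloud(a, e, t_lost_myr, sun_in_cluster=True):
    # Classify a clone from its elements at the moment it is lost.
    bound_enough = (a > A_OORT) and (energy_over_gm(a, e) <= Z_MAX)
    if sun_in_cluster and t_lost_myr < T_CLUSTER:
        # Cluster-era losses must also survive stripping.
        return bound_enough and (e < E_STRIP)
    return bound_enough
\end{verbatim}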
This yields drastically different results, as seen in Fig. \ref{fig:Families}. Under these conditions, of the cometesimals formed between 8 and 11 au, only 1\% may have reached the Oort Cloud, which represents 4\% of all the cometesimals potentially captured. The 20-30 au formation zone contributes 7\% (making up 32\% of the total number of those captured); the 30-40 au formation zone contributes 8\% (making up 33\% of the total number of those captured); and the 40-50 au formation zone contributes only 5\% (making up 15\% of the total number of those captured). These results are consistent with our current understanding of the chronology of Oort Cloud formation \citep{Zwart2021}, whereby an estimated bulk ($\sim$70\%) of the Oort Cloud material originates from the 15-40 au region, and are close to what we observe today. \begin{figure} \centering \includegraphics[width=\linewidth]{Plots/Fig_5.png} \caption{Probability of the fates of cometesimals from each formation zone at the end of 100~Myr if the Sun has left its parent cluster (upper panel) and if the Sun is still in its parent cluster for the first 10~Myr of our simulation (lower panel). In orange we see the cometesimals ejected from the Solar System, in blue those captured by the Oort Cloud, and in green those remaining in the simulation. If the Sun has left its parent cluster before the beginning of the simulation, each region $\lesssim$35~au would contribute $\sim$11\% of its clones to the Oort Cloud. If the Sun is still in its parent cluster for the first 10~Myr of the simulation, only 1\% of clones formed between 8 and 11 au are likely to have reached the Oort Cloud, representing 4\% of all the cometesimals potentially captured.} \label{fig:Families} \end{figure} \section{Conclusions}\label{sec:con} We find that the majority of objects formed between Saturn and the N$_2$ ice line are ejected early and rapidly in the simulation, meaning that even by the time the Jumping Neptune scenario occurs, the clones are already lost. This could explain the lack of N$_2$-rich, CO-rich, and H$_2$O-depleted comets: they were formed in a very narrow region, and that region empties rapidly because of the influence of the giant planets. \citet{Zwart2021} call this procession of comet ejections from the 5-11 au zone the ``Conveyor Belt'', which aptly describes the phenomenon we see here. Objects formed in this region would be ejected early from the Solar System, in less than $\sim$10 Myr, and would be unlikely to join the Oort Cloud. Therefore, if N$_2$-rich, CO-rich, and H$_2$O-depleted comets formed inside 11 au, $>$90\% of this population would have been ejected from the Solar System without having been captured by the Oort Cloud. As the \citet{Price2021} formation zone for CO enrichment is tens of astronomical units in width, suggesting that nearly half of all observable comets ($\sim$40\%) should be CO-rich, the Oort Cloud should be full of comets of this type. As this is not the case, we should rule out this model in favor of that of \citet{Mousis2021}. It should also be noted that if this mechanism is indeed the one by which these comets formed and were subsequently captured, then C/2016 R2-like comets may be some of the first long-period comets to have formed, and the earliest to reach the Oort Cloud. There, objects with a nucleus larger than 5 km could survive thousands of orbits, and the hypervolatile loss associated with any perihelion passage. This would indicate that C/2016 R2 represents one of the first Oort Cloud objects, which could provide a direct measurement of CO/N$_2$/CH$_4$ ratios in the PSN \citep{Steckloff2021,Davidsson2021,Prialnik2021,Lisse2022}, and would explain why so few N$_2$-rich, CO-rich, and H$_2$O-depleted comets have been observed today.
We must also consider the possibility that many of the remaining cometesimals may have lost their bulk hypervolatile species in the billions of years since their formation, or even within the time frame of our simulation. Pure hypervolatile ices are only stable on gigayear timescales beyond a heliocentric distance of 100 au \citep{Lisse2021}. If this ejection period were to take place at the same time as the sublimative period of the Edgeworth–Kuiper Belt \citep{Lisse2021, Steckloff2021}, then these objects would only have $\sim$20 Myr to be placed on a trajectory toward the Oort Cloud before losing the bulk of their hypervolatile ices to vacuum via insolation heating \citep{Lisse2022}. When looking at the sublimation chronology and the Oort Cloud formation chronology together, we have a small window of only $\sim$10 Myr in which an object could be ejected from the giant planet region and inserted into the Oort Cloud. Our results are in line with the hypothesis of \citet{Lisse2022}, which states that interstellar object 2I/Borisov was ejected early from its parent system. However, these chronologies (the Sun's ejection from its parent cluster; the sublimative period of the Edgeworth–Kuiper Belt; the Jumping Jupiter/Neptune scenario) are still poorly constrained, even more so when considering how and when these timelines overlap. The window for the formation of C/2016 R2 in the Conveyor Belt region and its subsequent capture by the Oort Cloud could therefore be longer or shorter than our 10 Myr estimate. We understand that a more quantitative simulation should account for the effect of Galactic tides and of perturbations from passing stars in the Sun's birth cluster. However, such an endeavor is far beyond the scope of the present work. The mass distribution of the Galaxy and the position of the Sun in it 4 billion years ago are still unknown, as are the initial mass function and orbital distribution of the Sun's birth cluster. The dynamics of the Solar System, in particular in its infancy, are inherently chaotic. Therefore, one would need to run a large number of simulations, varying the Galactic potential and the influence of the Sun's birth cluster, in order to obtain only average statistics of possible events, given that it is unclear whether our Solar System is generic or peculiar. Another possible explanation for the composition of C/2016 R2 was presented by \citet{Desch2021}, who suggest that C/2016 R2 could be a fragment of a differentiated KBO surface, created by an impact during the period of energetic impacts of the 2:1 Jupiter:Saturn resonance epoch. If the CO-rich formation zone is further out, as suggested by \citet{Price2021}, then the objects formed there would have had longer to form, undergo impacts, and travel to the Oort Cloud. Further studies of the chronology of the formation of these objects and of the time frame of the dynamical instability should be undertaken in order to investigate the likelihood of these scenarios. A geochemical study of how these objects form, accompanied by a detailed isotopic and chemical analysis of their current composition, would also be beneficial. Understanding whether a C/2016 R2-like object can form with its peculiar composition in situ in the disk, or whether its composition arises from the differentiation of a Pluto-like object, would shed light on which of these processes is more likely. This also allows for the existence of possible exotic comets, with peculiar enrichments stemming from unique composition pockets in the disk.
Hypothetically, the ice line of each species would create a small enrichment zone, producing small bodies dominated by that species rather than by H$_2$O. By examining the ice lines of the volatile molecules, we can estimate the probabilities of finding comets with each composition. \begin{acknowledgements} We acknowledge the region of Bourgogne-Franche-Comté for its funding of the \texttt{DIAZOTE} project and of this work. The project leading to this publication has received funding from the Excellence Initiative of Aix-Marseille Université - A*Midex, a French “Investissements d’Avenir programme” AMX-21-IET-018. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} In recent years, the pandemic has increased the need for remote connections, and we have witnessed a mass adoption of virtual technologies, particularly for teamwork. This has opened new perspectives for platforms that allow virtual interactions with others, and has fostered the already ascending development of the Metaverse. The Metaverse has recently been defined as a ``post-reality universe, a perceptual and persistent multiuser environment merging physical reality with digital virtuality'' \cite{mystakidis2022metaverse}. While the Metaverse is designed around the human, who constitutes the physical reality of this interplay, its digital virtuality relies on immersive technologies that enable spatial and interactive features, namely Augmented Reality (AR) and Virtual Reality (VR). Eventually, these devices became the core of the fourth wave of computing innovation \cite{kamenov2017immersive}. \par Currently, there is an ongoing discussion on the potential protocols that will govern the Metaverse, with a particular focus on the controversial interplay between openness and privacy \cite{mystakidis2022metaverse}. The latest virtual devices allow tracking a large number of behavioral metrics, such as the headset's and controllers' position and rotation (which reflect the users' physical actions), all the interactions between the user and any virtual object present in the scene, and also eye movements. All these data can be a source of personal information, and can even reveal the user's identity (e.g.,~\cite{miller2020personal, pfeuffer2019behavioural, rogers2015approach}). While private, this information could also help restrict the use of a headset to specific individuals. For example, it would be possible to allow authentication only to those who have the rights, thus increasing the security of such technologies. \subsection{Contributions.} In this study, we assess the feasibility of profiling and identifying users by leveraging behavioral data generated while using an AR and a VR headset. First, we propose a general profiling framework that can be applied to different virtual devices (i.e., VR, AR), different applied fields (i.e., an everyday use case of a smart technology, and a work scenario), and different types of user behavior (i.e., walking, searching for landmarks, pointing, performing controller-based operations and physical actions). Second, we study users' profiling in depth at different levels (i.e., identification, age, gender), introducing, to the best of our knowledge, the novelty of profiling personal information of users (specifically, gender and age) in virtual contexts. Third, we additionally explore the impact of each sensor on users' profiling both with AR and VR, specifically assessing the relevance of the position and rotation of the headset and controller hardware, and of the eye-tracking technology embedded in the VR device. More importantly, we fill a gap in the literature on users' profiling in AR scenarios: while it is true that AR technology is still immature, it is also true that it is largely understudied compared to VR. We summarize our contributions as follows: \begin{itemize} \item we propose a general profiling framework for AR and VR technologies; \item we study users' profiling with respect to identification, age, and gender inference in virtual contexts, which is novel in the AR context; \item we conduct extensive studies to assess sensors' importance in our profiling tasks.
\end{itemize} \par \subsection{Organization.} In Section~\ref{sec.rel_work}, we provide background and review the literature on users' profiling. Section~\ref{sec.method} presents the general profiling framework we adopt in our experiments. The dataset and experimental settings are described in Section~\ref{sec.dataset} and Section~\ref{sec.experiment}, respectively. We report our results in Section~\ref{sec.results}, and conclude with a discussion in Section~\ref{sec.concl}. \section{Background \& Related Work}\label{sec.rel_work} This section aims to describe the importance of security and privacy in virtual technologies such as AR and VR. Section~\ref{ssec.rel_work/applications} summarizes the applications of virtual technologies in different fields, from industry to medicine. Section~\ref{ssec.rel_works/privacy} introduces the threats to AR and VR applications from a cyber-security perspective. Section~\ref{ssec.rel_works/sota} reviews the literature on user profiling in virtual technologies. \subsection{AR / VR use-cases in daily and work scenarios}\label{ssec.rel_work/applications} \subsubsection{Industry and remote work} With the advent of Industry 4.0, the benefits of virtual devices have been repeatedly shown in many domains: in the design cycle of products and manufacturing systems~\cite{berni2020applications}, for programming machines~\cite{malik2020virtual}, in the teleoperation industry~\cite{linn2017virtual,xiao2020three}, and also for training novices~\cite{prattico2021towards,roldan2019training}. In all of these applications, virtual technologies allow the operator to perform work tasks while being immersed in a virtual environment that faithfully emulates the physical one. This is particularly important also for architecture, engineering, and construction experts: virtual technologies in this sector are helpful for stakeholder engagement, design support and review, construction planning and monitoring, management, and training \cite{delgado2020research}. \par Taken together, the reasons to employ virtual technologies in industry are numerous. This will potentially lead to a large-scale adoption of VR and AR devices in industrial fields, opening new questions about how to ensure individuals' security in the workplace. An effective algorithm for automatically identifying workers wearing a headset might help in this direction. For example, it would be possible to enable authentication only for those who have the appropriate rights in the workplace (e.g., the site manager). Further, assuming that older workers might prefer a different design of the virtual environment \cite{liu2020you}, accurate user profiling might help customize virtual features based on the user's age. \subsubsection{Education} As AR and VR have the potential to bridge the limitations of 2D e-learning environments, online education is one of the fundamental pivots of the Metaverse \cite{mystakidis2022metaverse}. The literature has extensively examined the characteristics that lead to the successful integration of immersive virtual technologies in education, as well as their positive influence on learning outcomes. For example, 17 positive effects of VR were identified for education, such as improving skills, living more realistic experiences, and enhancing intrinsic motivation and the level of interest in learning \cite{chavez2018virtual}; however, these effects were subject-specific. Additionally, \cite{radianti2020systematic} reviewed VR application studies by focusing on immersive VR environments for higher education.
They showed that, even though most of the literature reports that VR for education is still in its experimental stages, there is a strong general interest in the use of immersive VR, particularly in engineering, medicine, and computer science education, and that this technology is mature enough for teaching procedural, practical, and declarative knowledge. \par In view of a large-scale adoption of VR/AR for education, a viable profiling or identification algorithm would certainly come in handy. For example, automatic authentication can be efficient when VR/AR technologies are adopted in numerous classes. Similarly, it would be helpful to automatically detect a student's age in order to adapt lessons and virtual content. \subsubsection{Gaming and entertainment} Virtual technologies play an essential role in the gaming market too. While VR games have been widespread since the 1990s (e.g., Virtual Reality Gear~\cite{ansari2022implementing}), from 2018, AR also reached a large entertainment crowd with the popularization of Pokémon Go, Snapchat, Apple's ARKit, and Google's ARCore~\cite{vasista2022augmented}. This sector is expected to grow exponentially, as it has also embraced entertainment areas that go far beyond gaming and arcades: the film and music industries, live shows, and sports are just a few examples \cite{abdelmaged2021implementation}. Particularly after the heavy losses caused by the pandemic in these sectors, the development of immersive virtual platforms can help support the cinema, music, and live-show industries \cite{ansari2022implementing}. For instance, VR cinema was deployed for movies, theatre, and art exhibitions~\cite{sharma2022product}, providing users with a 360° leisure experience. Last, the recent explosion of the virtual-influencer phenomenon~\cite{conti2022virtual} confirms the crucial role of virtual technologies at both the entertainment and marketing levels. \par It is clear how user profiling/identification could be used for marketing strategies in this sector (e.g., delivering customized advertising). Further, particularly on gaming platforms, user identification might help detect banned individuals and prevent their access to virtual games. \subsubsection{Medicine} Virtual technologies have proven to be reliable medical tools for both doctors and patients. For instance, as it allows surgeries to be simulated, VR can be beneficial for medical education and training \cite{yeung2021virtual}. Interestingly, AR has the potential to superimpose salient clinical records or visual aids supporting a surgery over the patient's body \cite{birlo2022utility}. Research on virtual control systems for remote robotic surgery operations is also growing \cite{taylor2016medical}. Further, from the patient's point of view, VR can help improve cognitive abilities after a traumatic brain injury \cite{maggio2022virtual}, or it can help increase engagement in Parkinson's motor training via gamification \cite{van2019effectiveness}. \par Under this view, detecting whether a user is the chief of surgery rather than a student can help restrict access rights during a surgical operation involving AR/VR. Similarly, profiling patients using a virtual headset could allow training customization and the automatic recording of clinical improvements. \subsubsection{AR as a smart wearable technology} The latest AR smart glasses are fully wearable devices with computational functions.
They allow users to download applications from a mobile operating system and provide various functionalities while freeing the user's hands \cite{kim2021applications}. Notably, most AR glasses currently on the market do not offer an integrated experience of social networks and streaming content. For instance, Vuzix developed AR smart glasses specifically designed to be used with drones or for navigation in unknown areas. AR as an assistive navigation device has also been tested in applied research \cite{zhao2019designing}. However, Facebook has already partnered with Ray-Ban and launched the Ray-Ban Stories, which have raised important questions about ethical and privacy issues \cite{iqbal2022adopting}. Even though these glasses currently do not allow projecting holograms into the field of view, they are an essential hint at possible connections between AR technology and social networks. \par In the foreseeable future, the next generation of smart glasses will likely allow projecting e-mails and notifications from social networks onto the user's field of view. In this perspective, accurate automatic identification of the user during everyday activities could help restrict the visualization of personal messages to the owner of the glasses. \subsection{Privacy in Emerging Technologies}\label{ssec.rel_works/privacy} The increasing popularity of big data~\cite{bigdata}, coupled with the rapid adoption of various ``smart'' devices, has resulted in a parallel increase in privacy concerns. In today's society, most people consider data collection incessant and believe that the risks outweigh any benefits~\cite{riskspr}. To prevent (or at least reduce) the exposure of personal data, current and emerging technologies should support privacy by default~\cite{ozturk2021privacy}, in accordance with recent legislation such as the GDPR~\cite{gdpr}. Fortunately, researchers are actively focusing on studying and adding security and privacy layers to emerging technologies. For instance, Di Pietro and Cresci~\cite{9750221} deeply discussed the security and privacy issues arising in the metaverse, allowing a better understanding and a consequent improvement of the technology with respect to its users. Similarly, Nair et al.~\cite{nair2022going} proposed a system to browse the metaverse incognito, protecting users' privacy from companies, surveillance agencies, or data brokers. Researchers have also focused on incorporating privacy-preserving measures into systems in daily use, such as authentication~\cite{barni2010privacy} and, more recently, de-authentication techniques~\cite{cardaioli2022privacy}. Besides protecting users' data from unwanted usage or sharing, past literature shows how attackers can use \textit{public} data in unconventional ways to profile users or to infer users' \textit{private} data (e.g., gender, age, personality traits). For instance, Conti and Tricomi~\cite{conti2020pvp} studied user profiling in video games, showing how public gaming data can be exploited to track gamers for malicious activities, e.g., harassment or cyberbullying. Kosinski et al.~\cite{kosinski2013private} leveraged Facebook data to infer users' gender, age, personality, or sexual orientation. Jurgens et al.~\cite{jurgens2015geolocation} predicted people's physical locations from their tweets, while Zhang et al.~\cite{zhang2020practical} leveraged sharing platforms' reviews to predict users' gender.
The results of such studies highlight the high risks connected with data availability and point to the need for further research to better protect users' privacy. \subsection{Users Profiling in AR and VR applications}\label{ssec.rel_works/sota} Privacy risks in AR and VR technologies are not deeply discussed in the current literature. Rogers et al.~\cite{rogers2015approach} investigated the task of user identification, i.e., identifying a given user among a group of known people. Their study was conducted in an AR environment through Google Glass among 20 participants. Behavioral features included head movement (i.e., accelerometer and gyroscope) and eye-blinking patterns. The best-performing model, a Random Forest, achieved 94\% accuracy in the task. Li et al.~\cite{li2016whose} proposed Headbanger, an authentication system for wearable devices. The authentication task differs from the identification one since in the former, users can be unknown, while in the latter, the algorithm aims to identify a user within a group of given users. Their study was conducted in an AR environment through Google Glass among 95 participants. The proposed system relies on motion sensors (mainly the headset accelerometer) and authenticates users by leveraging three distance metrics, namely cosine distance, correlation distance, and dynamic-time-warping distance. Headbanger achieves 95\% accuracy in the task. Mustafa et al.~\cite{mustafa2018unsure} proposed an authentication system for VR, highlighting the importance of such a security mechanism, especially when a user is completely immersed in the virtual environment, which can lead to the dangerous \textit{lunch time attack}~\cite{eberz2015preventing}.\footnote{A \textit{lunch time attack} occurs when the victim walks away from the logged-in device, and thus an attacker can utilize such systems with the victim's privilege~\cite{conti2020auth}.} Their study was conducted through a Google Cardboard VR with a Samsung Galaxy S5 mounted and involved 23 participants. Behavioral features involve sensors like the headset's accelerometer and gyroscope, from which the authors extracted features such as summary statistics (e.g., mean, variance) and frequency-domain features (e.g., energy). The best-performing model, a Logistic Regression, achieves 93\% accuracy in the task. Pfeuffer et al.~\cite{pfeuffer2019behavioural} studied the problem of user identification in VR. The experiment was conducted with an HTC Vive, involving 22 participants. The authors consider a broad spectrum of features that capture head, hand, and eye motions. The best-performing model, a Random Forest, achieves up to 40\% accuracy in the task. Miller et al.~\cite{miller2020personal} further explore the identification task in VR, similarly to Pfeuffer et al.~\cite{pfeuffer2019behavioural}. The experiment was conducted with an HTC Vive, involving 511 participants. Behavioral features include summary statistics (e.g., maximum, minimum, average, standard deviation) of the position and rotation of the headset and controllers (both right and left hands). The best-performing model, a Random Forest, achieves up to 95\% accuracy in the task. \par The reader may notice that existing works focus mainly on inferring the person, via authentication or identification tasks, while there is a lack of understanding of whether behavioral data can be leveraged to infer other private user information, such as age and gender.
Similarly, prior works consider only AR or only VR technologies in their experiments. This is a limitation, since the level of virtual immersion allowed by the two technologies is substantially different \cite{milgram1995augmented}, which significantly affects users' behaviors. Our paper thus aims to fill the current literature gap by considering different privacy inference tasks (i.e., age, gender, identification) explored in both AR and VR environments. \begin{table}[ht] \centering \caption{State-of-the-art overview.} \label{tab:sota} \footnotesize \resizebox{\columnwidth}{!}{% \begin{tabular}{cc|cc|cccc} \toprule & & \multicolumn{2}{c|}{\textit{Technology}} & \multicolumn{4}{c}{\textit{Privacy-level}}\\ \textbf{Reference}& \textbf{\#Participants}& \textbf{AR} & \textbf{VR} & \textbf{Age} & \textbf{Authentication} & \textbf{Gender} & \textbf{Identification} \\ \midrule Rogers et al.~\cite{rogers2015approach}& 20 & Google Glass & & & & & \cmark\\ Li et al.~\cite{li2016whose}& 95 & Google Glass & & & \cmark & & \\ Mustafa et al.~\cite{mustafa2018unsure}& 23 & & Google Cardboard VR & & \cmark & & \\ Pfeuffer et al.~\cite{pfeuffer2019behavioural}& 22 & & HTC Vive & & & & \cmark\\ Miller et al.~\cite{miller2020personal}& 511 & & HTC Vive & & & & \cmark\\\midrule \textit{Ours}& 34 (AR) and 35 (VR) & Microsoft HoloLens & HTC VIVE Pro & \cmark & & \cmark & \cmark\\ \bottomrule \end{tabular} } \end{table} \section{Methodology}\label{sec.method} This section describes the methodology we propose to execute inference tasks with virtual technologies. Section~\ref{ssec.method/overview} motivates our investigation. Section~\ref{ssec.method/framework} presents our proposed framework. \subsection{Scope of the work}\label{ssec.method/overview} Augmented Reality (AR) and Virtual Reality (VR) devices contain several sensors (e.g., accelerometer, gyroscope, eye tracking) essential for interacting with virtual environments. Sensor data describing human behavior can be used to build biometric applications, opening several opportunities to enhance and tailor users' experience. However, such data might pose risks to users' privacy and security. In this study, we aim to understand whether it is possible to profile users by leveraging their interactions with AR and VR applications. In particular, we conduct our study by considering two categories of profiling: \begin{enumerate} \item \textit{User identification}, where we aim to identify a given user within a known population; \item \textit{Private information inference}, where we aim to infer users' gender and age. \end{enumerate} We thus propose a general framework to accomplish both tasks, which can be extended to infer additional user information. \subsection{Inference Framework}\label{ssec.method/framework} \subsubsection{Overview} Our goal is to define a generic pipeline that can be adapted and applied to any virtual technology context (e.g., AR, VR) to infer users' private information. As shown in Figure~\ref{fig:pipeline}, the pipeline consists of four steps, starting from the \textit{user} from whom we record the behaviors, to his/her actual profiling: \begin{enumerate} \item \textit{Raw Data Acquisition}. In this phase, users' behavioral data are acquired. Virtual technology devices continuously generate data from users' interactions with the virtual environment (i.e., time series). From these data, we can describe users' behavior. The amount and type of information depend on the virtual technology and its devices.
For instance, data might come from users' input (e.g., pressing joystick buttons) and users' movements. \item \textit{Bias Removal}. This phase aims to remove potential biases from the time series that might lead to training erroneous machine learning models. \item \textit{Time Series Engineering}. This phase aims to extract insightful information from the time series. \item \textit{Machine Learning Prediction}. This phase aims to infer users' private information from the data elaborated in the previous phase by leveraging machine learning algorithms. \end{enumerate} \begin{figure}[!ht] \centering \includegraphics[width=.9\textwidth]{Figures/META.pdf} \caption{Overview of the proposed framework for user profiling in Augmented and Virtual Reality.} \label{fig:pipeline} \end{figure} \paragraph{Raw Data Acquisition} Users interact with AR and VR applications through devices such as headsets and joysticks. Such devices embed several functional sensors to offer users an immersive experience. For example, users move in and explore the virtual environment through sensors like the accelerometer and gyroscope embedded in the headset. Thus, by combining the information retrievable from each sensor $s^i$ of the equipment, we can trace the user's activity $\vec{a}$ at a given time $t$: \begin{equation}\label{eq.timestamp} \vec{a_t} = [s^0_t, s^1_t, ..., s^n_t], \end{equation} where the subscript denotes the timestamp, and the superscript the sensor involved. We call this process the \textit{acquisition phase}. The acquisition phase can be repeated over time, resulting in a temporal description of the user's behavior. Thus, by acquiring data over $\Delta t = t - t_0$, we obtain a behavioral time series, described as follows: \begin{equation}\label{eq.behavior} \vec{\mathbf{B}}_{\Delta t} = [\vec{a}_{t_0}, \vec{a}_{t_1}, ..., \vec{a}_{t-1},\vec{a}_t]. \end{equation} $\vec{\mathbf{B}}_{\Delta t}$ represents an atomic sample of a user action (or task) of duration $\Delta t$ that we will use in the next phases to infer their private information; a minimal sketch of this acquisition step is given below.
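To make the acquisition step concrete, the following Python sketch stacks per-sensor readings into an activity vector $\vec{a}_t$ and a behavioral window $\vec{\mathbf{B}}_{\Delta t}$; the sensor streams, sampling rate, and function names are hypothetical placeholders, not the actual logging code of the devices.
\begin{verbatim}
import numpy as np

def acquire_activity(sensors):
    # One activity vector a_t: concatenate one reading per sensor;
    # each element of `sensors` is a callable returning a 1-D array.
    return np.concatenate([read() for read in sensors])

def acquire_behavior(sensors, n_steps):
    # Behavioral time series B over the window: one row per
    # timestamp, one column per sensor channel.
    return np.stack([acquire_activity(sensors) for _ in range(n_steps)])

# Hypothetical 90 Hz device with three sensor streams.
rng = np.random.default_rng(0)
sensors = [lambda: rng.standard_normal(3),  # headset position (x, y, z)
           lambda: rng.standard_normal(3),  # headset rotation (Euler)
           lambda: rng.standard_normal(2)]  # pupil size, eye openness
B = acquire_behavior(sensors, n_steps=90)   # 1 s of behavior, shape (90, 8)
\end{verbatim}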
\paragraph{Bias Removal} The acquisition phase might lead to enormous quantities of raw data. Such data might describe not only users' behavior, but also environmental information strongly correlated with the experimental sessions. For example, using the raw headset height to identify users might be erroneous, since such information might not be persistent over time (e.g., different shoes, different body position)~\cite{miller2020personal}. The problem of \textit{spurious correlations} in cybersecurity applications is well known~\cite{arp2022and}. We thus need to be extra careful in understanding whether sensors might lead to erroneous and inconsistent machine learning performance. The process of bias removal depends on the sensors' nature and requires an ad hoc analysis. We explain our implementation in detail in Section~\ref{ssec.experiment/implementation}. The de-biasing phase results in a new vector of de-biased actions: \begin{equation}\label{eq.debias} \vec{\mathbf{D}}_{\Delta t} = [\vec{d}_{t_0}, \vec{d}_{t_1}, ..., \vec{d}_{t - 1} ,\vec{d}_t], \end{equation} where $\vec{d}_{t_i}$ is the de-biased version of the activity $\vec{a}_{t_i}$. \paragraph{Time Series Engineering} Raw temporal data should be properly elaborated to extract meaningful information. Moreover, given the huge amount of data, such sequences should be aggregated (i.e., compressed) to limit the computational cost of their analyses. The aggregation strategy can consider the whole sequence of a specific feature, or just a subpart of it. For example, given a sensor $s^i$ and its de-biased values over time $d^i_{\Delta t} = [d_{t_0}^i, d_{t_1}^i, ..., d_{t - 1}^i ,d_t^i]$, aggregating the whole sequence results in a single number $x^i$, while partial aggregation (e.g., a transformation every $q$ time steps) results in a vector of numbers $[x^i_0, x^i_1, ..., x^i_m]$, where $m = t/q$. Note that the subscript no longer denotes the temporal axis. Popular features derived from the aggregation phase are the mean, standard deviation, minimum, and maximum~\cite{miller2020personal}. At the end of the process, we obtain, for each participant action or task, an aggregated datapoint that will be used by the machine learning models. \paragraph{Machine Learning} The last phase of the pipeline involves machine learning approaches like Logistic Regression (LR), Decision Tree (DT), and Random Forest (RF). Training a well-performing model requires validation strategies that consider the nature of the inference. For instance, if the aim is to identify a user within a known population, the training, validation, and testing splits should all contain samples of the population; however, to avoid trial (or session) bias, the three splits should contain samples belonging to different collection trials. Conversely, when inferring information like age and gender, the three splits should contain disjoint sets of users. Regarding the type of machine learning algorithm, we suggest the use of \textit{inherently interpretable} models (e.g., LR, DT) to better understand model decisions while inferring. Moreover, interpretable models allow a transparent debugging phase to identify the presence of spurious features~\cite{nadeem2022sok}. Finally, given the unbalanced nature of the problem (i.e., not all classes are equally represented), we suggest using performance metrics like the F1-score with macro averaging. A minimal sketch of the aggregation and validation steps is given below.
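As an illustration, the sketch below aggregates each behavioral window into per-channel summary statistics and cross-validates a Random Forest with group-aware splits; passing user IDs as groups realizes the disjoint-users strategy for age/gender inference, while passing trial IDs realizes the trial-disjoint strategy for identification. The function names are ours, and the snippet is a simplified sketch rather than our actual implementation (Section~\ref{ssec.experiment/implementation}).
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupKFold

def aggregate(B):
    # Compress one window B (n_steps x n_channels) into
    # per-channel mean, std, min, and max.
    return np.concatenate([B.mean(0), B.std(0), B.min(0), B.max(0)])

def evaluate(windows, labels, groups, n_splits=5):
    # `groups`: user IDs (age/gender) or trial IDs (identification),
    # so that no group appears in both the training and test sets.
    X = np.stack([aggregate(B) for B in windows])
    y = np.asarray(labels)
    scores = []
    for train, test in GroupKFold(n_splits=n_splits).split(X, y, groups):
        model = RandomForestClassifier(random_state=0)
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        scores.append(f1_score(y[test], pred, average="macro"))
    return float(np.mean(scores))
\end{verbatim}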
\section{Dataset overview}\label{sec.dataset} For the present investigation, we chose two use-case scenarios of virtual technologies, one involving AR and one involving VR. For both of them, we asked the authors of \cite{nenna2021augmented}, \cite{nenna2022virtualization}, and \cite{nenna2022influence} for permission to share their data with our team and to conduct the present study. The first dataset, described in Section~\ref{ssec.dataset/ar}, comes from a study on multitasking effects when using AR while walking outdoors \cite{nenna2021augmented}. Specifically, the authors took an experimental paradigm typically used in behavioral and cognitive research outside the lab, into a real dynamic scenario, and measured dual-task walking effects in young users responding to augmented stimuli during navigation. The second dataset, described in Section~\ref{ssec.dataset/vr}, instead comes from a use-case scenario introducing VR into the robotics and manufacturing industry \cite{nenna2022influence, nenna2022virtualization}. In this context, the authors tested users guiding an industrial robotic arm via different control systems in VR. Even though the allowed behaviors were kept as simple as possible to ensure experimental control, both scenarios give an important glimpse into practical applications of virtual technologies in the field. Furthermore, in both cases, the dual-task methodology was deployed for testing users' behavior under different levels of workload. This traditional paradigm is extensively used in human factors and applied research and represents an ecologically valid but still controlled method for imposing mental strain on a user \cite{nenna2021augmented, nenna2022virtualization}. \subsection{AR experiment}\label{ssec.dataset/ar} \par The AR experiment investigated multitasking effects in participants using AR while walking outdoors \cite{nenna2021augmented}. For this case study, 45 young adults wore the Microsoft HoloLens 1st generation smart glasses (OS Windows 10, CPU Intel 32-bit 1 GHz, memory 2 GB RAM and 1 GB HPU RAM, 2.3-megapixel widescreen head-mounted display, field of view 30 × 17, mass 579 g) and performed: i) a visual task, in which they discriminated between different augmented targets presented in their peripheral view; ii) a navigation task, in which they reached a series of augmented landmarks via physical walking outdoors; and iii) the combination of these tasks, which the authors called the dual-task. The virtual environment, shown in Figure~\ref{fig:AR_env}, was programmed in Unity (2017.4.18f1), and participants interacted with the augmented targets both via a wireless Xbox One controller and via physical collision with the virtual objects (e.g., walking through an augmented target). \par Each participant performed 80 trials of the visual task, 50 trials of the navigation task, and 50 trials of the dual-task. Specifically, in each trial of the visual task, a green or red object appeared lateralized on the left or right side of the visual field for 300 ms; the participant was asked to press a specific button on the joystick based on the color of the target, independently of the hemifield in which it appeared. Differently, in the navigation task, a series of augmented landmarks appeared one after the other at -90°, 0°, or 90° with respect to the participant's position and at a distance of 3 m from each other. Participants were thus instructed to first inspect the surroundings to find the landmark, and then walk through it. Finally, in the dual-task, participants walked through the series of landmarks while concurrently responding to the lateralized augmented stimuli. These tasks were specifically designed for measuring the effects of multitasking outside the lab. Therefore, they offer good insights into the potential impact of AR during outdoor walking. \par The dataset is composed of 21 females (age mean = 24.28, SD = 2.22) and 24 males (age mean = 24, SD = 2.62) and comprises the following continuous measures: position (in meters) of the AR headset in the three axes (x, y, z), and rotation of the AR headset in Euler angles. Furthermore, time stamps of any button press on the joystick and of any collision with virtual objects presented in the scene were also registered, even though they were not considered in the present work. Note that, since the datasets of the first 11 participants did not include data on the headset position, we ran our investigation on 34 participants out of 45. \begin{figure}[h!] \centering \includegraphics[width = 0.5\textwidth]{Figures/AR_environment.jpg} \caption{AR environment.} \label{fig:AR_env} \end{figure} \subsection{VR experiment}\label{ssec.dataset/vr} The VR experiment deployed a virtual reproduction of an industrial robotic arm (Universal Robot UR5) developed in Unity (version 2020.2.1f1) \cite{nenna2022virtualization, nenna2022influence}. The virtual environment was designed to test the performance and eye parameters of users during a simulated teleoperation task.
All participants wore an HTC VIVE Pro Eye VR device (resolution 1440x1600 pixels per eye, refresh rate 90Hz, field of view 110°, weight 555g) and were provided with both VR controllers. \par The dataset included 21 young adults (10 females, 11 males) and 14 participants who reported being more than 50 years old (8 females, 6 males). Overall, therefore, 18 females (age mean = 39.33, SD = 14.21) and 17 males (age mean = 37.75, SD = 16.32) participated in the experiment. All participants in VR guided the robotic arm shown in Figure~\ref{fig:VR_env} through a pick-and-place task via two different control systems (controller buttons and physical actions) and under two levels of workload (single-task and dual-task). For the pick-and-place task, they had to pick a bolt from the workstation and place it into a box. When using the controller buttons system, they performed the task by only using the pad buttons on the VR controllers. With the physical actions system, instead, they still used the VR controllers, but they were allowed to physically approach the robot with their hand, grasp it, and then move it over the worktable by physically moving their arm. Furthermore, in contrast with the single-task condition, in the dual-task participants operated the pick-and-place while also performing simple arithmetic sums. A series of numbers ranging between 1 and 10 was randomly presented on a virtual screen in front of the participant for the whole duration of the pick-and-place actions; 2.5s elapsed between each number presentation, with a random jitter of 0.3s. After the place action, participants reported the result of the arithmetic operation by pointing at a virtual keyboard with the controller and then moved to the next trial. In each condition, the young participants performed 40 trials, while the older participants performed 20 trials. \par The following continuous measures were registered: position (in meters) along the three axes (x, y, z) and rotation in Euler angles of both the VR headset and its controllers. As the VR device employed for these investigations is additionally equipped with an integrated eye tracker, the following continuous eye parameters were also recorded: pupil size (in millimeters) and eye openness (expressed from 0 to 1). Finally, time stamps of any button press on the controllers and of any collision with virtual objects in the scene were registered too, but they were not used for the present investigation. \begin{figure}[h!] \centering \includegraphics[width = 0.5\textwidth]{Figures/VR_environment.jpg} \caption{VR Environment.} \label{fig:VR_env} \end{figure} \section{Experimental setting}\label{sec.experiment} This section describes our experimental settings. In particular, starting from the AR and VR datasets described in Section~\ref{sec.dataset}, we define inference experiments at the task level (Section~\ref{ssec.experiment/task}) and at the action level (Section~\ref{ssec.experiment/action}). Section~\ref{ssec.experiment/implementation} then describes the methodology we follow in our experiments (i.e., de-biasing, feature extraction, model selection). \subsection{Tasks Identified from the AR and VR Experiments: Task-level}\label{ssec.experiment/task} For the present investigation, we isolated specific macro tasks on which we performed users' profiling/identification.
\subsubsection{Augmented Reality} In AR, we considered the same navigation task as identified by the authors \cite{nenna2021augmented}, and we refer to what the authors called the visual discrimination task as the mental task. In the latter, participants discriminated between differently colored, lateralized augmented objects while standing still. As this task was mentally demanding, we consider it a mental task. In the navigation task, instead, participants looked for augmented targets in their surroundings and then walked through them. The navigation task was performed both as a single-task (low workload) and concurrently with the mental task (high workload). To recap, in the AR environment we identified the following tasks: \begin{itemize} \item Mental Task (MT); \item Navigation Task - Low workload (NT-Low); \item Navigation Task - High workload (NT-High). \end{itemize} \subsubsection{Virtual Reality} In VR, instead, we followed the same categorization used by the authors \cite{nenna2022influence}. We thus considered two different pick-and-place tasks according to the type of interaction allowed between the user and the virtual robot: controller buttons and physical actions. The controller-based task corresponds to the pick-and-place performed via controller buttons, while the action-based task is the same pick-and-place executed via physical actions. Both tasks were executed under low and high workload: compared to the low workload condition, in the high workload condition participants executed the pick-and-place task simultaneously with the arithmetic task. Overall, the following tasks were identified from the VR scenario: \begin{itemize} \item Controller-based Task - Low workload (CT-Low); \item Controller-based Task - High workload (CT-High); \item Action-based Task - Low workload (AT-Low); \item Action-based Task - High workload (AT-High). \end{itemize} \subsection{Actions Identified from the AR and VR Tasks: Action-level}\label{ssec.experiment/action} Further, from each of the tasks discussed above, we identified a series of different actions, both from the AR and the VR experiments. The analysis performed on these actions is at a micro-level and is based on the type of interactions performed and on the range of motion involved. \par \subsubsection{Augmented Reality} From the tasks performed in AR, we extracted the following operations: button interaction, search and walk. In the button interaction, we included the task sections in which participants were standing still while discriminating between the lateralized colored targets. Specifically, they pressed specific buttons on the joystick according to the hemifield where the virtual object was displayed. In the search operation, participants were engaged in the visual inspection of the surroundings to find a virtual landmark; this operation was performed while participants were standing still and only rotated their head to inspect the surroundings. Finally, in the walking operation, participants physically walked to the identified virtual landmark. Both the search and walk operations were performed as a single-task and concurrently with the secondary mental task (namely, the visual discrimination task). As argued by the authors \cite{nenna2021augmented}, participants perceived a lower workload when performing only the navigation task rather than performing the same task concurrently with the mental task. In other words, the secondary mental task put a strain on the users' mental resources.
Therefore, we here refer to the dual-task as the high workload condition, while the single-task is considered the low workload condition. Table~\ref{tab:AR_actions} shows the actions isolated in the AR environment. \begin{table}[ht!] \caption{Augmented Reality actions organized per type of operation and workload level.} \label{tab:AR_actions} \centering \begin{tabular}{c| >{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{2cm}} \toprule \multirow{2}{*}{\diagbox{\textit{Workload}}{ \textit{Operation}} } & \textbf{\textit{Button}} & \multirow{2}{*}{\textbf{\textit{Search}}} & \multirow{2}{*}{\textbf{\textit{Walk}}} \\ & \textbf{\textit{Interaction}} & &\\ \toprule \textit{\textbf{Low}} & -- & \includegraphics[width=2cm]{Figures/Actions/AR_NoMental_Search.jpg} & \includegraphics[width=2cm]{Figures/Actions/AR_NoMental_Walking.jpg} \\ \textit{\textbf{High}} & \includegraphics[width=2cm]{Figures/Actions/AR_Mental_Button_Interaction.jpg} & \includegraphics[width=2cm]{Figures/Actions/AR_Mental_Search.jpg} & \includegraphics[width=2cm]{Figures/Actions/AR_Mental_Walking.jpg} \\ \bottomrule \end{tabular} \end{table} \par \subsubsection{Virtual Reality} From the VR tasks, we extracted the following operations: idle, pointing, button interaction and physical interaction. Specifically, we extracted time intervals in which participants were only looking at the robot while it was executing either a pick or a place automation. Those time frames were considered idle actions, as participants were only looking at the scene without interacting with any of the virtual contents. Idle intervals in which participants were mentally summing the numbers for the arithmetic task were considered high workload idles, while those cases in which participants were engaged neither in the arithmetic task nor in any interaction with virtual objects were considered low workload idles. The pointing action was identified by selecting time periods in which participants were using the VR controller to point at the numbers on the virtual keyboard. In that experimental phase, they were reporting the result of the previously performed arithmetic task. In the button interaction, participants guided the virtual robot through the pick-and-place task by only pressing specific buttons on the VR controller. In the physical interactions, instead, participants physically touched the virtual robot and moved their own arm to relocate it over the worktable. In line with what was demonstrated by the authors~\cite{nenna2022virtualization}, both button and physical interactions were categorized according to the level of workload involved. Specifically, when the pick-and-place was performed concurrently with the arithmetic task, participants experienced a higher workload than when performing the pick-and-place task without additional tasks. Table~\ref{tab:VR_actions} shows the actions isolated in the VR environment. \begin{table}[ht!]
\caption{Virtual Reality actions organized per type of operation and workload level.} \label{tab:VR_actions} \centering \begin{tabular}{c| >{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{2cm}} \toprule \multirow{2}{*}{\diagbox{\textit{Workload}}{ \textit{Operation}} } & \multirow{2}{*}{\textbf{\textit{Idle}}} & \multirow{2}{*}{\textbf{\textit{Pointing}}} & \textbf{\textit{Button}} & \textbf{\textit{Physical}} \\ & & & \textbf{\textit{Interaction}} & \textbf{\textit{Interaction}}\\ \toprule \textit{\textbf{Low}} & \includegraphics[width = 2cm]{Figures/Actions/NoMental_Idle.jpg} & \includegraphics[width=2cm]{Figures/Actions/NoMental_Pointing_Interaction.jpg} & \includegraphics[width=2cm]{Figures/Actions/NoMental_ButtonInteraction.jpg} & \includegraphics[width=2cm]{Figures/Actions/NoMental_Physical_Interaction.jpg} \\ \textit{\textbf{High}} & \includegraphics[width=2cm]{Figures/Actions/Mental_Idle.jpg} & -- & \includegraphics[width=2cm]{Figures/Actions/Mental_Button_Interaction.jpg} & \includegraphics[width=2cm]{Figures/Actions/Mental_Physical_Interaction.jpg} \\ \bottomrule \end{tabular} \end{table} \subsection{Implementation}\label{ssec.experiment/implementation} \subsubsection{De-biasing and Feature Extraction} The AR and VR datasets contain different types of raw features acquired from the sensors. For each category of sensors, we now describe the features and the de-biasing techniques we applied. \begin{itemize} \item \feature{Head Position} (AR and VR), represented as 3D coordinates (x, y, z) measuring the relative distance (in meters) of the user from a center point in the virtual environment. This feature might contain both session-specific and static user traits (e.g., height). We thus derived different variants of this information, such as the movement, computed as the norm of the difference between two points 5 timestamps apart, and the vertical oscillation, computed as the difference between two height values 5 timestamps apart. \item \feature{Head Rotation} (AR and VR), represented as a 3D value. For each axis, we compute the angular speed by considering points 5 timestamps apart. This transformation can remove information related to trials (e.g., the specific positioning of objects with respect to the participant). \item \feature{Eyes} (VR), which includes data on pupil size (in millimeters) and eye openness (0-1), for both the left and right eye. Note that, in order to overcome possible confounding variables~\cite{kramer2020physiological, mathot2018pupillometry}, it is usually appropriate to preprocess the raw eye data to flatten individual differences. However, as the aim of the present work was specifically to capture individual traits and behaviors for identification/profiling, we opted not to preprocess the eye-tracking data. On the contrary, we leveraged the individual differences in pupil size and eye openness~\cite{bargary2017individual, fawcett2022individual, aminihajibashi2019individual} to better identify and profile users. Further, we enhanced this set of features by additionally computing the symmetry between the two eyes for both pupil dilation and eye openness. On an applied level, using the raw output of the HTC Vive Pro Eye device speeds up the identification/profiling process and allows higher generalizability to multiple VR devices.
\item \feature{Controller Position} (VR), represented as 3D coordinates (x, y, z) relative to the virtual environment center point. Similarly to the \feature{head position}, this feature might contain both session-specific and user traits. We thus transform it into the movement, computed as the norm of the difference between two points 5 timestamps apart. \item \feature{Controller Rotation} (VR), represented as a 3D value. We apply the same processing as for \feature{head rotation}. \end{itemize} Finally, each feature of the previously described families is aggregated with tsfresh\footnote{\url{https://tsfresh.readthedocs.io/en/latest/index.html}}. Given a time series, this library extracts more than 100 features, including the average, standard deviation, quantiles, and entropy. We further refined the features by keeping only the relevant ones.\footnote{We used the tsfresh feature\_selection function: \url{https://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_selection.html}} Thus, starting from the raw time series of a single action within a single task performed in a single trial by a single user, we extract a single aggregated datapoint. The process is repeated for all users, trials, actions, and tasks, obtaining 9360 datapoints in AR and 16520 datapoints in VR. \subsubsection{Models Training and Validation} In our experiments, we test four different algorithms: logistic regression, ridge classifier, decision tree, and random forest. As a baseline, we define a Dummy classifier that randomly predicts the outcome based on the training ground-truth distribution. For each experiment presented in Section~\ref{sec.results}, we adopt a common validation strategy: for each model, we find the best hyper-parameters through a grid search based on training, validation and testing splits of 70\%, 10\%, and 20\% of the samples, respectively. For private inference tasks (i.e., age and gender), the splits contain disjoint sets of users: users in the training set are not present in the validation or testing sets and, similarly, users in the validation set are not present in the training or testing sets. User identification is cast as a multiclass task, while age (i.e., young vs. old) and gender (i.e., male vs. female) are binary tasks. Note that the young class corresponds to users aged $[19, 24]$ in AR and $[23, 30]$ in VR; the old class is defined as $[25, 29]$ in AR and $[31, 69]$ in VR. We now report the parameter grids involved in the grid searches. \begin{itemize} \item Logistic Regression (LR). C: $\{0.1, 1, 10\}$. \item Ridge (RI). Alpha: $\{0.01, 0.1, 1, 10\}$. Fit intercept: \{False, True\}. \item Decision Tree (DT). Max depth: $\{3, 5, 7\}$. Min samples leaf: $\{1, 3, 5\}$. \item Random Forest (RF). N estimators: $\{50, 100, 150\}$. Max depth: $\{3, 5, 7\}$. Min samples leaf: $\{1, 3, 5\}$. \end{itemize} To provide robust results, each experiment is repeated five times; we report both the mean and the standard deviation of the macro-averaged F1-scores. We implemented our experiments in Python 3.8.5 and used the Scikit-Learn~\cite{scikit-learn} library for model training and validation.
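As a simplified sketch of this validation strategy, the following scikit-learn snippet performs a user-disjoint split and a small grid search over the LR regularization strength; the data and participant labels are synthetic placeholders, and for brevity a single held-out set stands in for the separate validation and test splits described above.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupShuffleSplit

# Synthetic aggregated datapoints: one row per action/task instance,
# tagged with the participant that generated it.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = rng.integers(0, 2, size=600)         # e.g., a binary gender label
users = rng.integers(0, 30, size=600)    # participant id per row

# User-disjoint split: participants in the held-out set never
# appear in training, as required for age/gender inference.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=users))

best_score, best_C = -1.0, None
for C in (0.1, 1, 10):                   # the LR grid reported above
    clf = LogisticRegression(C=C, max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    score = f1_score(y[test_idx], clf.predict(X[test_idx]),
                     average="macro")
    if score > best_score:
        best_score, best_C = score, C
\end{verbatim}
For identification experiments, the grouping would instead be done by trial, so that samples from the same session never appear on both sides of the split.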
\section{Results}\label{sec.results} In this section, we present the results of our experiments at the task and action levels, in Sections~\ref{ssec.results/task} and~\ref{ssec.results/action}, respectively. We then conclude with an ablation study to better understand the effect of the different sensors on the models' performance (Section~\ref{ssec.results/ablation}). \subsection{Task-Level}\label{ssec.results/task} In this section, we present profiling performance at the task level. Each experiment considers separately one of the tasks presented in Section~\ref{ssec.experiment/task}: we train, validate, and test our models only on the task under investigation, predicting identity, age, and gender separately. For instance, we train a specific model to predict gender based only on the Mental Task. \subsubsection{Identification} Figure~\ref{fig:AR_VR_Id} shows the identification results in the AR and VR environments. LR and RI achieved the highest (and comparable) performance in AR, whereas LR and RF performed best in VR. In general, all our algorithms outperform the baseline (Dummy). Looking at the results on the Overall Tasks, both in VR (OT-VR) and AR (OT-AR), we immediately notice that in VR the identification performance remains quite stable as the number of users increases, while in AR it degrades significantly. Indeed, the performance of the best AR algorithms goes from nearly 90\% F1-score (two users) to slightly above 60\% F1-score (30 users). In VR, instead, LR yields an almost perfect prediction on two users, while the F1-score remains above 95\% when performing identification over 30 users. This might reflect the different number of sensors available in VR (headset, controller, and eye-related behaviors) compared to AR (only headset-related behaviors). We further discuss the impact of each of the involved sensors in Section~\ref{ssec.results/ablation}. \par When looking at the individual tasks, we can see that the identification algorithm performs even better than on the overall task, particularly in AR. For instance, we reached a 70\% F1-score over 30 users in NT-Low, which is roughly 10\% higher than in OT-AR. One reason for this result might be the nature of the performed task: in NT-Low, participants were actively moving in the surroundings without performing any additional task. Therefore, their movements might have been more linear compared to the situation in which they performed the same task under high workload (NT-High), thus revealing more identifiable movement patterns. The same does not apply to the VR scenario. Here, when looking at each of the identified tasks, the higher the workload, the better the performance of the identification algorithm. Indeed, the best performance was obtained on AT-High and CT-High, where the F1-score was around 95\% and 97\%, respectively. Again, possible explanations might be related to the nature of the tasks and to the number of sensors embedded in the devices. In the VR scenario, participants were only moving their upper body, and in the high workload conditions they were additionally engaged in a secondary mental task. We know from the literature that a higher workload is related to larger changes in eye behavior \cite{nenna2022virtualization}. Therefore, the VR-embedded eye tracker might have had an important impact on the identification performance, particularly when users were under higher mental strain rather than when performing less demanding tasks (i.e., CT-Low, AT-Low). \begin{figure}[h!]
\centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Figures/AR_id_sep.pdf} \caption{Augmented Reality} \label{fig:AR_task_id} \end{subfigure} \par\bigskip \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Figures/VR_id_sep.pdf} \caption{Virtual Reality} \label{fig:VR_task_id} \end{subfigure} \caption{User Identification on task-level.} \label{fig:AR_VR_Id} \end{figure} \subsubsection{Age} Figure~\ref{fig:task-age} shows the age classification results in the AR and VR environments at the task level. Age profiling clearly yielded better performance in the VR scenario than in the AR one. While in VR all models performed significantly better than the baseline, in AR the F1-score was consistently lower than the baseline in all tasks. This is likely related to the low age variability of the participants who took part in the AR experiment. Therefore, we discuss age profiling performance only in relation to the VR experiment. \par In VR, the LR and RF algorithms appear to perform better than the other models in all tasks except the OT, where RI produced a higher F1-score than LR. At the task level, the users' age was profiled with higher accuracy when they performed the pick-and-place task via physical actions (AT-High and AT-Low, in which the F1-score was around 90\% and 85\%, respectively) than via controller buttons (CT-High and CT-Low, in which the F1-score was below 80\% in both cases). A possible interpretation is that the movement patterns of older users might have been quite different from those of younger users. Also, we know from the literature that robot teleoperation is significantly influenced by age~\cite{grabowski2021teleoperated}. In this view, our algorithm was particularly successful in detecting users' age during the pick-and-place task only when physical actions were allowed. \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Figures/AR_Age_sep.pdf} \caption{Augmented Reality.} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Figures/VR_Age_sep.pdf} \caption{Virtual Reality.} \end{subfigure} \caption{Age profiling on task-level.} \label{fig:task-age} \end{figure} \subsubsection{Gender} Figure~\ref{fig:task-gender} shows the gender classification results in the AR and VR environments at the task level. When profiling users' gender, we obtained substantially better results in VR than in AR. Indeed, in VR, all the tested algorithms performed above the baseline (Dummy). More specifically, we can observe a better performance from LR and RF, which reached a maximum F1-score of 75\%. Differently, when detecting users' gender in the AR scenario, our algorithms performed only 5-10\% above the baseline. \par Regarding the algorithms' performance within each of the identified tasks in VR, better performance is achieved in tasks involving a higher workload (CT-High, AT-High) than in those under low workload (CT-Low, AT-Low). These results align with recent literature on behavioral gender differences in the VR pick-and-place task. For instance, Nenna et al.~\cite{nenna2022influence} demonstrated how men outperformed women in the pick-and-place tasks in terms of task execution time, particularly when using controller buttons. These differences might have been even more marked when performing an additional mental task, thus allowing better gender profiling.
We observe a similar trend in the AR scenario, in which better performance is reached in the task involving a higher workload (NT-High). This behavior reflects previous findings on the different walking patterns of men and women~\cite{nenna2021augmented}. Indeed, on average, the walking velocity of men is significantly higher than that of women, particularly under high workload. As we were recording the headset shifts over time, the different walking velocities might have been prominent for gender profiling. \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Figures/AR_Gender_sep.pdf} \caption{Augmented Reality.} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Figures/VR_Gender_sep.pdf} \caption{Virtual Reality.} \end{subfigure} \caption{Gender profiling on task-level.} \label{fig:task-gender} \end{figure} \subsection{Action-Level}\label{ssec.results/action} Starting from the results obtained on the overall task, we here aimed to see whether some actions had a particular influence on the identification and profiling performance. Specifically, we opted for the model that demonstrated the best results, namely Logistic Regression (LR). Each experiment considers separately one of the actions presented in Section~\ref{ssec.experiment/action}: we train, validate, and test our model only on the action under investigation, predicting identity, age, and gender separately. For instance, we train a specific model to predict age based only on the Button Interaction with Low Workload. \subsubsection{Identification} Table~\ref{tab.action-identification} shows the identification results in the AR and VR environments at the action level. At the task level, performance reached an F1-score of about 0.60 in the AR scenario and above 0.90 in the VR one. When looking at the action level, specifically for AR, we see that the walking action reaches the highest performance (the F1-score is about 0.80 under low workload and 0.78 under high workload), while the search action and the button interaction yield F1-scores below 0.70. This suggests that the walking action is prominent in identifying users in AR, possibly because the walking pattern is the most distinctive feature in such a use case of AR. In VR, instead, we observe higher F1-scores for both button and physical interactions, specifically under high workload (the F1-score is about 0.96 in both cases). The pointing action also reached a very similar F1-score (0.96), while the idle time intervals yield lower F1-scores (0.78 under low and 0.86 under high workload). It seems that the most interactive actions (using controller buttons, pointing and physically moving the upper body) thus yield better results than periods in which users were passively looking at the virtual surroundings. \begin{table}[!ht] \caption{User identification on action-level organized per type of operation and workload level. Random guess at 0.03 for both AR and VR tasks.
All the measures in F1-Score.} \centering \begin{tabular}{c|ccc|cccc} \cmidrule{2-8} & \multicolumn{3}{c|}{\textit{Augmented Reality}} & \multicolumn{4}{c}{\textit{Virtual Reality}}\\ \cmidrule{1-8} \multirow{2}{*}{\diagbox{\textit{Workload}}{ \textit{Operation}} } & \textbf{\textit{Button}} & \multirow{2}{*}{\textbf{\textit{Search}}} & \multirow{2}{*}{\textbf{\textit{Walk}}} & \multirow{2}{*}{\textbf{\textit{Idle}}} & \multirow{2}{*}{\textbf{\textit{Pointing}}} & \textbf{\textit{Button }} & \textbf{\textit{Physical}} \\ & \textbf{\textit{Interaction}} & & & & & \textbf{\textit{Interaction}} & \textbf{\textit{Interaction}}\\ \toprule \textit{\textbf{Low}} & -- & \res{0.66}{0.03} & \res{0.80}{0.02} & \res{0.78}{0.02} & \res{0.96}{0.01} & \res{0.92}{0.01} & \res{0.93}{0.02} \\ \textit{\textbf{High}} & \res{0.61}{0.02} & \res{0.69}{0.01} & \res{0.78}{0.02} & \res{0.86}{0.01} & -- & \res{0.96}{0.00} & \res{0.96}{0.01} \\ \bottomrule \end{tabular} \label{tab.action-identification} \end{table} \subsubsection{Age} Table~\ref{tab.action-age} shows the age classification results in the AR and VR environments at the action level. Users' age was profiled with an F1-score of about 0.50 on the overall task executed in AR and of 0.80 in VR. As age profiling proved unsuccessful in AR, we do not focus on the action-level results for this use case. These results confirm what we observed at the task level (see Figure~\ref{fig:task-age}). Regarding the VR scenario, we can note that, under low workload, the pointing (F1-score = 0.88) and physical interactions (F1-score = 0.82) were the most crucial in profiling users' age, compared to actions allowing less interactivity with the virtual environment (F1-scores below 0.80). This might hint at different movement and interaction patterns between older and younger users, which emerge particularly when greater freedom of movement is allowed. This is also in line with what we observed at the task level. Moreover, this trend becomes even more evident when the physical interactions are performed under high workload (F1-score = 0.90), likely reflecting the multitasking and motor difficulties related to age~\cite{li2005ecological}. \begin{table}[!ht] \caption{Age profiling on action-level organized per type of operation and workload level. Random guess at 0.5 for both AR and VR tasks. All the measures in F1-Score.} \centering \begin{tabular}{c|ccc|cccc} \cmidrule{2-8} & \multicolumn{3}{c|}{\textit{Augmented Reality}} & \multicolumn{4}{c}{\textit{Virtual Reality}}\\ \cmidrule{1-8} \multirow{2}{*}{\diagbox{\textit{Workload}}{ \textit{Operation}} } & \textbf{\textit{Button}} & \multirow{2}{*}{\textbf{\textit{Search}}} & \multirow{2}{*}{\textbf{\textit{Walk}}} & \multirow{2}{*}{\textbf{\textit{Idle}}} & \multirow{2}{*}{\textbf{\textit{Pointing}}} & \textbf{\textit{Button }} & \textbf{\textit{Physical}} \\ & \textbf{\textit{Interaction}} & & & & & \textbf{\textit{Interaction}} & \textbf{\textit{Interaction}}\\ \toprule \textit{\textbf{Low}} & -- & \res{0.40}{0.03} & \res{0.45}{0.02} & \res{0.77}{0.10} & \res{0.88}{0.06} & \res{0.70}{0.09} & \res{0.82}{0.05} \\ \textit{\textbf{High}} & \res{0.47}{0.02} & \res{0.44}{0.01} & \res{0.49}{0.02} & \res{0.83}{0.09} & -- & \res{0.81}{0.07} & \res{0.90}{0.05} \\ \bottomrule \end{tabular} \label{tab.action-age} \end{table} \subsubsection{Gender} Table~\ref{tab.action-gender} shows the gender classification results in the AR and VR environments at the action level.
At the task level, our algorithms yielded an F1-score of about 0.50 in AR and above 0.70 in VR. Even though gender profiling did not perform sufficiently well in AR, we can observe that, under high workload, both walk (F1-score = 0.60) and search (F1-score = 0.58) had a major influence in detecting the user's gender compared to the same actions performed under low workload and to the button interaction (all with F1-scores close to the 0.50 baseline). These results are in line with what we observed at the task level, where gender profiling performed better in NT-High than in NT-Low. Additionally, we observe that the walking action has the largest influence on the accuracy of gender profiling compared to the other actions. Again, this might be related to the different walking velocities of men and women, particularly under high workload \cite{nenna2021augmented}. \par When looking at the actions performed in VR, the pointing action stands out. With an F1-score of 0.82, it contributes to gender profiling far more strongly than all other actions. This might be related both to a distinctive movement pattern and/or to gender-related variations in eye parameters. Further, the task-level finding of better performance under high than under low workload is here confirmed only for button interactions. Indeed, the F1-score for button interactions is about 0.12 higher when users are under high rather than low workload. Again, this reflects results shown in a previous study demonstrating faster operation times in men compared to women specifically when using controller buttons, but not when acting via physical actions~\cite{nenna2022virtualization}. This suggests that profiling users' gender might be easier during tasks involving button interactions, but not in those allowing higher interactivity with the virtual environment. \begin{table}[!ht] \caption{Gender profiling on action-level organized per type of operation and workload level. Random guess at 0.5 for both AR and VR tasks. All the measures in F1-Score.} \centering \begin{tabular}{c|ccc|cccc} \cmidrule{2-8} & \multicolumn{3}{c|}{\textit{Augmented Reality}} & \multicolumn{4}{c}{\textit{Virtual Reality}}\\ \cmidrule{1-8} \multirow{2}{*}{\diagbox{\textit{Workload}}{ \textit{Operation}} } & \textbf{\textit{Button}} & \multirow{2}{*}{\textbf{\textit{Search}}} & \multirow{2}{*}{\textbf{\textit{Walk}}} & \multirow{2}{*}{\textbf{\textit{Idle}}} & \multirow{2}{*}{\textbf{\textit{Pointing}}} & \textbf{\textit{Button }} & \textbf{\textit{Physical}} \\ & \textbf{\textit{Interaction}} & & & & & \textbf{\textit{Interaction}} & \textbf{\textit{Interaction}}\\ \toprule \textit{\textbf{Low}} & -- & \res{0.50}{0.02} & \res{0.45}{0.06} & \res{0.60}{0.10} & \res{0.82}{0.09} & \res{0.62}{0.05} & \res{0.66}{0.11} \\ \textit{\textbf{High}} & \res{0.54}{0.03} & \res{0.58}{0.03} & \res{0.60}{0.06} & \res{0.63}{0.05} & -- & \res{0.74}{0.06} & \res{0.66}{0.08} \\ \bottomrule \end{tabular} \label{tab.action-gender} \end{table} \subsection{Sensors Relevance - Ablation Study}\label{ssec.results/ablation} In this section, we conduct an ablation study to understand which sensors contribute the most to our identification, age, and gender predictions. In brief, we trained a Logistic Regression (LR) model using only specific subsets of features. In the AR environment, we distinguish between \feature{Head Position} and \feature{Head Rotation} features.
In VR, we also consider the \feature{Eyes}, \feature{Controller Position}, and \feature{Controller Rotation} features. The ablation study was carried out both at the task level (Section~\ref{subsub:abl-task}) and at the action level (Section~\ref{subsub:abl-act}). \subsubsection{Task-level} \label{subsub:abl-task} Table~\ref{tab:AR_task_abl} and Table~\ref{tab:VR_task_abl} show the results of the ablation study for the AR and VR tasks, respectively. In the AR environment, \feature{Head Rotation} features are predominant in the Mental Task for identification and gender prediction. Indeed, in this task, participants were standing still and were instructed not to move their head; however, their heads plausibly oscillated in distinctive ways, which our algorithm detected and leveraged for identification. In contrast, during the navigation task, \feature{Head Position} has more impact on all targets, given that it records the walking patterns. Such patterns have been used in the literature to identify people~\cite{katiyar2013study} and could help with age and gender prediction as well. In VR, the identification stage seems to be driven mainly by \feature{Eyes} features, followed by \feature{Controller} features. Reasonably, eye blinking patterns and pupil dilation can be person-specific~\cite{bargary2017individual, fawcett2022individual, aminihajibashi2019individual} and thus act as a biometric feature. The controllers, instead, were the main means of interacting with the virtual world; it is therefore reasonable that how a person interacts with the environment helps identification. This result is in line with recent findings on profiling video game users through their mouse and keyboard usage~\cite{conti2020pvp}. Therefore, we could expect AR identification to achieve better performance if such sensors were available, particularly eye trackers, as argued in Section~\ref{ssec.results/task}. In predicting age, the \feature{Controller} features yield the best performance. This finding may be a consequence of younger people being more familiar with joystick usage. When the workload is high, younger participants may pay more attention to the task rather than to how to use the joystick. Moreover, in the low workload scenario, \feature{Head} and \feature{Eyes} features contribute similarly. On the other hand, in gender inference, the \feature{Head} and \feature{Eyes} features play the most significant role. Indeed, as shown in past literature, there are gender-based differences in how people visually explore a virtual world~\cite{sargezeh2019gender}. \feature{Controller} features influence the prediction mainly in high workload controller-based tasks. \begin{table}[h!] \scriptsize \caption{Ablation study of sensor importance at task-level in AR.
All the measures in F1-Score.} \centering \label{tab:AR_task_abl} \begin{tabular}{l|l|ccc} \cmidrule{2-5} \multicolumn{1}{l}{} & & \textit{\textbf{Identification}} & \textit{\textbf{Age}} & \textit{\textbf{Gender}} \\\cmidrule{2-5} \multicolumn{1}{l}{} & \textbf{Guessing} & 0.03 & 0.5 & 0.5 \\\cmidrule{2-5} \multicolumn{1}{l}{} & \textbf{Mental Task} & & & \\ \multicolumn{1}{l}{} & Head Position & 0.38 & \textbf{0.46} & 0.51 \\ \multicolumn{1}{l}{} & Head Rotation & \textbf{0.54} & 0.40 & \textbf{0.55} \\\midrule \multirow{3}{*}{\rotatebox[origin=c]{90}{Low W.}} & \textbf{Navigation Task} & \textbf{} & & \\ & Head Position & \textbf{0.64} & \textbf{0.45} & \textbf{0.56} \\ & Head Rotation & 0.46 & 0.40 & 0.45 \\\midrule \multirow{3}{*}{\rotatebox[origin=c]{90}{High W.}} & \textbf{Navigation Task} & \textbf{} & & \\ & Head Position & \textbf{0.65} & \textbf{0.45} & 0.51 \\ & Head Rotation & 0.48 & 0.44 & \textbf{0.52} \\\bottomrule \end{tabular} \end{table} \begin{table}[h!] \scriptsize \caption{Ablation study of sensor importance at task-level in VR. All the measures in F1-Score.} \centering \label{tab:VR_task_abl} \begin{tabular}{l|l|ccc} \cmidrule{2-5} & & \textbf{\textit{Identification}} & \textbf{\textit{Age}} & \textbf{\textit{Gender}} \\ \cmidrule{2-5} & \textbf{Guessing} & 0.03 & 0.5 & 0.5 \\ \cmidrule{2-5} \multirow{12}{*}{\rotatebox[origin=c]{90}{Low Workload}} & \textbf{Controller Based Task} & & & \\ & Head Position & 0.41 & 0.68 & \textbf{0.64} \\ & Head Rotation & 0.45 & \textbf{0.76} & 0.55 \\ & Eyes & \textbf{0.83} & 0.75 & 0.59 \\ & Controller Position & 0.39 & 0.69 & 0.57 \\ & Controller Rotation & 0.59 & 0.69 & 0.58 \\ & \textbf{Action Based Task} & & & \\ & Head Position & 0.50 & 0.76 & \textbf{0.62} \\ & Head Rotation & 0.51 & 0.76 & 0.60 \\ & Eyes & \textbf{0.83} & 0.74 & 0.54 \\ & Controller Position & 0.51 & 0.76 & 0.58 \\ & Controller Rotation & 0.68 & \textbf{0.81} & 0.55 \\ \midrule \multirow{12}{*}{\rotatebox[origin=c]{90}{High Workload}} & \textbf{Controller Based Task} & & & \\ & Head Position & 0.48 & 0.73 & 0.61 \\ & Head Rotation & 0.56 & 0.68 & 0.57 \\ & Eyes & \textbf{0.88} & \textbf{0.79} & \textbf{0.69} \\ & Controller Position & 0.45 & 0.78 & 0.60 \\ & Controller Rotation & 0.64 & 0.68 & 0.62 \\ & \textbf{Action Based Task} & & & \\ & Head Position & 0.55 & 0.75 & 0.53 \\ & Head Rotation & 0.55 & 0.80 & \textbf{0.62} \\ & Eyes & \textbf{0.89} & 0.83 & \textbf{0.62} \\ & Controller Position & 0.57 & 0.86 & 0.50 \\ & Controller Rotation & 0.73 & \textbf{0.87} & 0.50 \\\bottomrule \end{tabular} \end{table} \subsubsection{Action-level} \label{subsub:abl-act} Table~\ref{tab:AR_act_abl} and Table~\ref{tab:VR_act_abl} report the results of the ablation study for the AR and VR actions, respectively. In AR, \feature{Head Position} has more impact than \feature{Head Rotation} in predicting our targets, especially for the walk action. This is reasonable given that this sensor mainly records the users' walking speed. \feature{Head Rotation} becomes relevant in the Button Interaction action, in which the participants could only rotate their head, and is quite useful for distinguishing between genders. As in previous results, age was difficult to predict: the only case in which we surpass the baseline is the Walk action with high workload, but the improvement is too small to draw conclusions from. Looking at VR, we notice that \feature{Head Position} remains relevant for predicting gender, particularly in scenarios with low workload.
However, most of the time, the \feature{Eyes} features are the main discriminant for predicting our targets. In identification, \feature{Eyes} reached the highest F1-score in six out of seven actions, suggesting that these features might be the main reason behind the higher identification performance in VR compared to AR. Further, \feature{Eyes} are predominant in low workload scenarios for predicting users' age. \feature{Controller} features are quite useful for inferring the user's age, especially in high workload actions, while only small differences appear in their usage across genders. Regarding the identification task, \feature{Controller Rotation} appears more useful than \feature{Controller Position}. Last, it is interesting to see how, in the idle actions, the \feature{Eyes} play a significant role, particularly in the high workload scenario, in which they were able to identify a person with an F1-score of 0.81. \begin{table}[h!] \caption{Ablation study of sensor importance at action-level in AR. All the measures in F1-Score.} \centering \label{tab:AR_act_abl} \scriptsize \begin{tabular}{l|l|ccc} \cmidrule{2-5} & & \textit{\textbf{Identification}} & \textit{\textbf{Age}} & \textit{\textbf{Gender}} \\\cmidrule{2-5} & \textbf{Guessing} & 0.03 & 0.5 & 0.5 \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{Low Workload}} & \textbf{Search} & & & \\ & Head Position & \textbf{0.60} & 0.40 & 0.52 \\ & Head Rotation & 0.51 & 0.40 & \textbf{0.58} \\ & \textbf{Walk} & & & \\ & Head Position & \textbf{0.77} & 0.44 & \textbf{0.60} \\ & Head Rotation & 0.55 & \textbf{0.47} & 0.49 \\\midrule \multirow{9}{*}{\rotatebox[origin=c]{90}{High Workload}} & \textbf{Button Interaction} & & & \\ & Head Position & 0.38 & \textbf{0.46} & 0.52 \\ & Head Rotation & \textbf{0.56} & 0.40 & \textbf{0.56} \\ & \textbf{Search} & & & \\ & Head Position & \textbf{0.62} & 0.40 & \textbf{0.60} \\ & Head Rotation & 0.52 & \textbf{0.43} & 0.57 \\ & \textbf{Walk} & & & \\ & Head Position & \textbf{0.75} & \textbf{0.51} & \textbf{0.53} \\ & Head Rotation & 0.55 & 0.43 & 0.47 \\\bottomrule \end{tabular} \end{table} \begin{table}[h!] \scriptsize \caption{Ablation study of sensor importance at action-level in VR.
All the measures in F1-Score.} \centering \label{tab:VR_act_abl} \begin{tabular}{l|l|ccc} \cmidrule{2-5} & & \textit{\textbf{Identification}} & \textit{\textbf{Age}} & \textit{\textbf{Gender}} \\\cmidrule{2-5} & \textbf{Guessing} & 0.03 & 0.5 & 0.5 \\\midrule \multirow{24}{*}{\rotatebox[origin=c]{90}{Low Workload}} & \textbf{Idle} & & & \\ & Head Position & 0.41 & 0.62 & \textbf{0.62} \\ & Head Rotation & 0.44 & 0.69 & 0.59 \\ & Eyes & \textbf{0.75} & \textbf{0.80} & 0.55 \\ & Controller Position & 0.38 & 0.69 & 0.58 \\ & Controller Rotation & 0.55 & 0.72 & 0.55 \\ & \textbf{Pointer} & & & \\ & Head Position & 0.67 & 0.80 & 0.57 \\ & Head Rotation & 0.73 & 0.83 & 0.62 \\ & Eyes & \textbf{0.91} & \textbf{0.86} & \textbf{0.71} \\ & Controller Position & 0.64 & 0.70 & 0.59 \\ & Controller Rotation & 0.83 & 0.81 & 0.51 \\ & \textbf{Button Interaction} & & & \\ & Head Position & 0.50 & 0.72 & \textbf{0.63} \\ & Head Rotation & 0.55 & 0.73 & 0.56 \\ & Eyes & \textbf{0.85} & \textbf{0.78} & 0.61 \\ & Controller Position & 0.47 & 0.72 & 0.58 \\ & Controller Rotation & 0.71 & 0.75 & 0.60 \\ & \textbf{Physical Interaction} & & & \\ & Head Position & 0.59 & 0.75 & 0.62 \\ & Head Rotation & 0.56 & 0.81 & \textbf{0.63} \\ & Eyes & \textbf{0.87} & 0.80 & 0.61 \\ & Controller Position & 0.63 & 0.74 & 0.57 \\ & Controller Rotation & 0.75 & \textbf{0.85} & 0.56 \\\midrule \multirow{18}{*}{\rotatebox[origin=c]{90}{High Workload}} & \textbf{Idle} & & & \\ & Head Position & 0.49 & 0.75 & 0.60 \\ & Head Rotation & 0.47 & 0.76 & 0.55 \\ & Eyes & \textbf{0.81} & 0.77 & \textbf{0.63} \\ & Controller Position & 0.46 & \textbf{0.79} & 0.50 \\ & Controller Rotation & 0.65 & \textbf{0.79} & 0.49 \\ & \textbf{Button Interaction} & & & \\ & Head Position & 0.57 & 0.69 & 0.56 \\ & Head Rotation & 0.65 & 0.66 & 0.55 \\ & Eyes & \textbf{0.93} & \textbf{0.83} & \textbf{0.67} \\ & Controller Position & 0.50 & 0.77 & 0.61 \\ & Controller Rotation & 0.73 & 0.72 & 0.61 \\ & \textbf{Physical Interaction} & & & \\ & Head Position & 0.63 & 0.82 & 0.54 \\ & Head Rotation & 0.63 & 0.81 & 0.62 \\ & Eyes & 0.71 & 0.87 & \textbf{0.66} \\ & Controller Position & 0.66 & \textbf{0.90} & 0.45 \\ & Controller Rotation & \textbf{0.79} & 0.86 & 0.49 \\\bottomrule \end{tabular} \end{table}
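Procedurally, the ablation amounts to re-fitting the same classifier on the feature columns of one sensor family at a time. The following minimal Python sketch illustrates the idea on synthetic data, with a hypothetical mapping from sensor families to columns (neither the data nor the mapping comes from the actual pipeline).
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 15))       # synthetic aggregated features
y = rng.integers(0, 2, size=400)     # synthetic target (e.g., gender)
train, test = np.arange(300), np.arange(300, 400)

# Hypothetical mapping from sensor family to feature columns.
sensor_cols = {
    "Head Position":       list(range(0, 3)),
    "Head Rotation":       list(range(3, 6)),
    "Eyes":                list(range(6, 9)),
    "Controller Position": list(range(9, 12)),
    "Controller Rotation": list(range(12, 15)),
}

for sensor, cols in sensor_cols.items():
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train][:, cols], y[train])
    score = f1_score(y[test], clf.predict(X[test][:, cols]),
                     average="macro")
    print(f"{sensor}: macro-F1 = {score:.2f}")
\end{verbatim}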
\section{Discussions and Conclusions} \label{sec.concl} The profiling of users wearing virtual technologies can present several opportunities and threats, so it is important to examine it closely. In this work, we performed users' identification and profiling in two virtual-based scenarios, one involving AR and the other involving VR. We aimed to test different algorithms and to leverage the behavioral data outputted by the two virtual devices to accurately trace it back to the user's identity and personal information (i.e., gender and age). Further, we developed a generic pipeline that can be used with different virtual devices and in different behavioral contexts: i.e., while walking, searching for landmarks in the surroundings, pointing at a virtual keyboard for typing, and operating a virtual robot both via controller-based interactions and physical actions. Both virtual environments simulated highly realistic scenarios, and most of these behaviors were executed both under high and low workload, giving good insight into realistic applications of virtual technologies in the field. \par The results show that users can be identified and profiled both in AR and VR, with VR accuracy being higher. Specifically, in AR, user identification reached good results for the walking action at low workload, while in VR, the identification algorithm was particularly successful when users performed more physical actions (i.e., pointing, physically interacting with the virtual robot) under a higher workload. As observed from the ablation study, this was mainly due to the additional eye-tracking sensors embedded in the VR but not in the AR headset. Indeed, while in VR the eye parameters had the most significant impact, head movement had the greatest influence on AR users' identification. Our algorithms were, instead, not able to accurately detect the users' age in AR. This was plausibly related to the low age variability of the tested sample, as the age of participants who took part in the experiment ranged between 19 and 29. In VR, by contrast, we worked on an experimental sample whose age ranged between 23 and 69 years, and our algorithms were thus able to detect the users' age with good accuracy. Age detection performed better in the most physical actions and interactions than in those involving just joysticks and controller buttons, specifically under a higher workload. Interestingly, eye parameters proved to have the greatest influence on age detection in all actions except the physical interactions, in which the controller position and rotation had a higher impact. On gender profiling, instead, we observed how the walking activity was again the most prominent in helping detect the user's gender in AR, with the head position being the most influential sensor for detecting such personal information. In VR, by contrast, our algorithms performed better during the pointing action and during actions under high workload. In this case, the eye-related behaviors demonstrated the most considerable influence on gender detection during both these actions. In agreement with the AR findings, the head position is also quite relevant. Both findings align with the literature on the different eye and head movement behaviors of men and women. \par In conclusion, our work thoroughly studied users' profiling in AR and VR technologies. To the best of our knowledge, previous applied research on user profiling never compared the performance obtained with these two technologies. On this matter, our results highlighted that profiling is more straightforward in Virtual Reality. Through our ablation study, we additionally found eye sensors to be particularly useful in all our predictions (i.e., identification, age, gender), and thus likely responsible for the performance differences between AR and VR. Therefore, while being conscious of the technical challenges of accurately detecting eye behaviors in the real world, our findings highlight the importance of incorporating eye-tracking technologies into AR headsets. To sum up, our work shows the potential of user profiling methodologies with virtual technologies, and paves the way for several future works on how to improve AR and VR technologies with respect to users' profiling.
{ "timestamp": "2022-09-23T02:10:26", "yymm": "2209", "arxiv_id": "2209.10849", "language": "en", "url": "https://arxiv.org/abs/2209.10849" }
\section{Introduction} The extensive literature analyzing the dynamic principal-agent problem has shown that it is important but difficult to design the optimal shape of contracts in a tractable way. Indeed, optimal contracts in dynamic agency problems are generally defined as complex functionals of a stream of contractible variables, such as revenues. Moreover, as first identified by \citet{rogerson1985repeated} (see also \citet{laffont2009theory}), theoretical contracts exhibit memory, even in the most commonly used models that assume uncorrelated shocks, which unfortunately prevents them from matching real-world practices (see \citet{bolton2005contract}). In addition, firms' revenues empirically show long memory\footnote{In this paper, we use the terms long (short) memory and long (short)-range dependence interchangeably; see Definition 2.1, page 42, in \citet{BeranBook}.} and we lack a theoretical framework that justifies the signing of simple tractable contracts in an environment with inter-temporal links across time periods.\\ In a Brownian setting, the breakthrough paper by \citet{holmstrom1987aggregation} (HM) shows that the optimal contract is linear in profits under some specific assumptions: the agent exerts effort continuously, principal and agent have CARA utilities, the agent bears a pecuniary cost of effort, and the outcomes generated in the absence of effort are modeled by a fully observable Brownian motion. Since then, several attempts have been made to obtain closed-form contracts in environments that relax at least one of the assumptions of the HM model. \citet{sung1995linearity} showed that the optimal contract is still linear when the agent also controls the variance of the output. \citet{hellwig2002discrete} showed that linear contracts are nearly optimal in a discrete-time version of the HM model. \citet{edmans2011tractability} and \citet{edmans2012dynamic} obtain striking general results in a discrete-time model where none of the four hypotheses is retained but where the agent makes his decision in each period after having observed the noise. However, they focus primarily on the cheapest implementation of a particular action, rather than on maximizing the principal's objective. Another important recent contribution beyond the Holmstrom and Milgrom setting has been made by \citet{carroll2015robustness}, who showed that optimal contracts are linear in a general one-period model with uncertainty.\\ In this article, we enrich the Holmstrom and Milgrom modeling framework by going beyond the assumption that the revenues are driven by a Brownian motion. We instead consider Volterra Gaussian processes, a generalization of the standard Brownian motion, to study time-dependent effects. More precisely, a Volterra Gaussian process is a Wiener integral process with respect to a standard Brownian motion involving a deterministic integrand called the {\it kernel}. Thus, at every point in time, it is an infinite linear combination of i.i.d. Gaussian random variables with time-dependent coefficients.
Although we are aware of the shortcomings of the three remaining HM assumptions\footnote{For instance, \citet{edmans2011tractability} clearly argue that {\it there is ample evidence of decreasing absolute risk aversion, and many effort decisions do not involve a monetary expenditure}.}, our targeted choice is primarily motivated by the fact that, by allowing arbitrary kernel functions in the Wiener integral, our Volterra Gaussian agency model encompasses agency models with short- and long-run autocorrelations. In particular, one of the main examples of Volterra Gaussian processes is the mean-reverting process, which allows us to get closer to recent models of dynamic contracts with persistence, such as those developed in \citet{williams2011ECMA}, or with career concerns, as in \citet{cisternas18}.\\ For a long time, Volterra Gaussian processes have been considered a natural tool for modeling continuous phenomena with memory. In particular, the fractional Brownian motion (FBM), a Volterra Gaussian process with short- or long-range dependence, initially introduced by \citet{kolmogorov1940wienersche}, was popularized by \citet{mandelbrot1968fractional} in finance to model the empirically validated long-term dependence of stock returns. More recently, a stream of literature has suggested the use of variants of the fractional Brownian motion in stochastic volatility modeling to capture the roughness of asset volatility time series, which has been observed empirically in the market; see \citet{gatheral2018volatility,AJLP17} and the references therein. In general, such processes are non-Markovian and non-semimartingales, which makes their study more intricate, both theoretically and practically, and prevents the use of standard stochastic calculus tools. Within the framework of optimal dynamic contracting theory, continuous-time semi-martingale models have received a lot of attention over the past thirty years, during which significant progress has been made by relying on the recursive approach pioneered by \citet{Green1987}, \citet{spear1987repeated} and \citet{thomas1990income}. Discrete-time models were developed first (see \citet{clementi2006theory,demarzo2007optimal}, followed by \citet{biais2007dynamic}), while the breakthrough paper by \citet{sannikov2008continuous} led to the recent study of dynamic contracting in continuous-time models (see also \citet{demarzo2006optimal, biais2010large, demarzo2012dynamic}). The main advantage of continuous-time semi-martingale models lies in the fact that the procedure to find the optimal contract can be embedded in the standard theory of Markovian stochastic control using the theory of martingales and stochastic calculus; see \citet{schattler1993first} for a general presentation and \citet{cvitanic2018dynamic} for a rigorous mathematical justification. Hence, under a fairly general set of assumptions, the contract can be characterized unambiguously by solving a Hamilton-Jacobi-Bellman equation in which the agent's so-called promised value plays the role of a state variable.\\ By allowing a non-semimartingale and non-Markovian setting, this paper makes an additional methodological contribution to solving for the optimal contract. The main idea is to use the so-called martingale optimality principle to study the agent's and the principal's problems sequentially as a Stackelberg game. The first step is to offer a class of incentive-compatible contracts by revisiting the martingale approach of \citet{schattler1993first} and \citet{sannikov2008continuous}.
The second step, and our main contribution, is to explicitly solve the principal's problem, which becomes a controlled stochastic Volterra problem. This requires the introduction of an auxiliary state variable, the {\it effort-corrected forward output}, which captures all the non-Markovianity and allows the application of the martingale optimality principle to the principal's problem. In a one-dimensional setting, our key result is that the optimal contract is linear in the terminal value of the output, although the principal has in general coarser information than the agent. The optimal contract has interesting features. The slope, or marginal value, of the contract is independent of the output dynamics; only the intercept depends on the latter, through the optimal effort, which is proportional to the kernel. Therefore, the random parts of contracts signed in different one-dimensional Gaussian environments are identical, although the required effort levels are environment-specific, deterministic, and exhibit interesting features related to the properties of the kernel. \\ We extend the paper with a discussion of the optimality of linear contracts in higher dimensions. We address the multitask principal-agent problem in which a principal with CARA preferences hires a single agent with CARA preferences to perform different tasks. The outcome of each task is assumed to follow a Volterra Gaussian process whose evolution depends on the agent's continuous effort in that task, while the profit is the sum of these different outcomes. Our main result is that there is no value in observing the agent's activities separately when the cost of effort is assumed to be a radial quadratic function. Under this assumption, the optimal contract only uses aggregate information and is still linear in the end-of-period outcome. When we consider a general effort cost function, we characterize the contract that would be optimal if the principal were able to observe the Brownian filtration, measure the utility gap when a less-informed principal restricts herself to signing the best linear contract, and identify factors that reduce the loss of utility associated with the use of linear contracts. This paper thus shows that linear contracts can come close to achieving the maximum principal utility in Gaussian environments. \section{The one-dimensional model} In this section, we present the economic model, which is essentially an extension of the \citet{holmstrom1987aggregation} framework. \\ {\it General Description:} We consider a risk-averse investor, who owns a project and signs a fixed-term contract with a risk-averse manager, the latter being necessary to operate the project. Time is continuous and the time horizon is $T>0$. In the absence of effort, the stochastic output process $(X_t)_{t\leq T}$ of the project evolves up to time $T$ as \begin{equation}\label{eq:Xintro} X_t=g_0(t)+\int_0^t K(t,s) dB_s, \end{equation} where $B$ is a standard one-dimensional Brownian motion, $g_0:[0,T]\to {\mathbb R}$ is a measurable deterministic input function, and $K : [0,T]^2 \to {\mathbb R}$ is a measurable Volterra kernel, i.e., $K(t,s)=0$ for $s \geq t$, such that \begin{align} \sup_{t\leq T} \int_0^T K^2(t,s) ds < \infty. \end{align} We first observe that this model encompasses a large class of output dynamics, offering great flexibility for modeling agency relationships in different sectors of the economy.
Obviously, this setting contains the Holmstrom and Milgrom Brownian model by choosing $g_0(t)=x_0$ and $K(t,s)= \sigma$ for any pair $s < t$, for some constant $\sigma$. Even more generally, the case of a time-dependent volatility can be recovered by setting $K(t,s)= \sigma(s) 1\!\!1_{s<t}$ for some square-integrable function $\sigma$. More interestingly, this framework also contains mean-reverting processes, which are widely used to model output in the energy and mining sectors: if, for instance, we choose $g_0(t)=e^{-\lambda t}x_0+ \frac{\mu_0}{\lambda} (1-e^{-\lambda t})$ and $K(t,s)=e^{-\lambda(t-s)} 1\!\!1_{s<t}$, the output then follows the Ornstein-Uhlenbeck dynamics $$ dX_t=(\mu_0-\lambda X_t)\,dt + dB_t. $$ Another example is the Brownian bridge pinned, for instance, at $0$ at some time $T_0>T$, which falls into this category with the kernel $$ K(t,s)=\frac{T_0-t}{T_0-s}1\!\!1_{s< t}. $$ The Brownian bridge has the semi-martingale decomposition $$ dX_t=-\frac{X_t}{T_0-t}\,dt+dB_t, $$ and may be used in any situation where the agent has access to information about the future output. For example, it can be used to model the output of a seasonal crop which will end up being zero after the harvest season. A more striking example is given by the family of fractional Gaussian processes, such as: \begin{itemize} \item the Riemann-Liouville fractional Brownian motion, where for $s<t$, $$ K(t,s)=c_H (t-s)^{H-1/2}, \,\quad H \in (0,1), \mbox{ for some constant } c_H; $$ \item the fractional Brownian motion, whose covariance function is $\Sigma_0(s,u)=\frac {1} 2 (s^{2H}+u^{2H}-|s-u|^{2H})$, for $H\in (0,1)$, and which admits the Volterra representation \eqref{eq:Xintro} with the kernel \begin{align*} K(t,s)= 1\!\!1_{s< t}\frac{(t-s)^{H-1/2}}{\Gamma(H+\frac 1 2)} \, {}_2 F_1\left(H-\frac 1 2, \frac 1 2-H; H+\frac 1 2; 1-\frac t s \right), \end{align*} where ${}_2F_1$ is the Gauss hypergeometric function, see \citet{decreusefond1999stochastic}. \end{itemize} Both types of fractional Brownian motion fall outside the semi-martingale and Markovian frameworks when the so-called Hurst parameter $H$ is different from $1/2$ (which corresponds to the case of the standard Brownian motion): they exhibit long-range dependence when $H>1/2$ and short-range dependence when $H<1/2$, and prove to be statistically very good models for industries related to power generation.\\ Even more remarkably, equations with delay in the drift, in the form of linear integro-differential convolution equations \begin{align}\label{eq:stochastic integro} dX_t = \left( h(t) + \int_{[0,t]} \mu(ds) X_{t-s} \right) dt + \sigma dB_t , \end{align} with initial condition $X_0 \in {\mathbb R}$, where $h:[0,T] \to {\mathbb R}$ is measurable, $\mu:\mathcal B([0,T]) \to {\mathbb R}$ is a signed measure of bounded variation and $\sigma \in {\mathbb R}$, admit a unique solution in the form of a Gaussian Volterra process \eqref{eq:Xintro} for some specific choice of input curve $g_0:[0,T]\to {\mathbb R}$ and convolution kernel $K:[0,T]^2 \to {\mathbb R}$; see Appendix \ref{Integro} for a detailed presentation of this observation. For instance, setting $\mu(dt) = \sum_{k=1}^m a_k \delta_{t_k} $, we recover equations with delays. Such equations fall into the semi-martingale framework, but are clearly not Markovian.\\ While very different in nature and in modeling objective, all these dynamics have in common that they are Gaussian processes.
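To make this modeling flexibility concrete, the following Python sketch (our own illustration; the function and parameter names are not part of the model) simulates sample paths of \eqref{eq:Xintro} for some of the kernels above, using the left-point discretization $X_{t_i}\approx g_0(t_i)+\sum_{j<i}K(t_i,t_j)\,\Delta B_j$ of the Wiener integral.
\begin{verbatim}
import numpy as np

def simulate_volterra(g0, K, T=1.0, n=500, seed=0):
    """Discretize X_t = g0(t) + int_0^t K(t,s) dB_s on a uniform grid."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dB = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    X = np.empty(n + 1)
    for i in range(n + 1):
        # left-point approximation of the Wiener integral
        X[i] = g0(t[i]) + sum(K(t[i], t[j]) * dB[j] for j in range(i))
    return t, X

# Holmstrom-Milgrom: constant kernel (arithmetic Brownian output)
t, X_hm = simulate_volterra(lambda t: 0.0, lambda t, s: 1.0)

# Ornstein-Uhlenbeck: exponential kernel, x0 = 1, mu0 = 0, lambda = 2
lam = 2.0
t, X_ou = simulate_volterra(lambda t: np.exp(-lam * t),
                            lambda t, s: np.exp(-lam * (t - s)))

# Riemann-Liouville fractional Brownian motion, H = 0.1 (rough paths)
H, c_H = 0.1, 1.0
t, X_rl = simulate_volterra(lambda t: 0.0,
                            lambda t, s: c_H * (t - s) ** (H - 0.5))
\end{verbatim}
For the rough kernels ($H<1/2$) this naive left-point scheme converges slowly because of the singularity of $K$ on the diagonal; more accurate schemes exist, but the sketch suffices to visualize the qualitative differences between the dynamics.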
Another important observation about this framework relates to assumptions about the asymmetry in information, which has been interpreted by \cite{holmstrom1987aggregation} as a distinction between optimal contracts that are linear in the outcomes $X$ and those linear in the accounts $B$. We denote by $\mathbb{F}^B$ the augmented filtration generated by $(B_t)_{t\leq T}$ and by $\mathbb{F}^X$ the one generated by the output process $(X_t)_{t\leq T}$. It readily follows from \eqref{eq:Xintro} that $\mathbb{F}^X \subset \mathbb{F}^B$. Hereafter, we assume that the agent has better information than the principal about the project, in the sense that he has access to the full information $\mathbb{F}^B$ while the principal observes only some aggregated information generated by the output, $\mathbb{F}^X$. In general, these two filtrations do not coincide, even in one-dimensional models, as shown in the following example corresponding to a situation where the principal observes the output in a discretionary way.\\ {\it Discrete observations of a Brownian motion:} Assume $X_t=f(t)B_t$ where $f$ is a bounded function on $[ 0,T ]$. Observe that $X$ is a Volterra process with $K(t,s)=f(t)1\!\!1_{s \le t}$. Consider a subdivision $0<t_1<\ldots<t_n=T$ of the interval $[0,T]$ and let $f$ be the function defined as a linear combination of unit impulses \begin{align}\label{eq:fdis} f(t)=\sum_{i=1}^n 1\!\!1_{t_i}(t). \end{align} The output process is purely discontinuous, with $X_{t_i}=B_{t_i}$ and $X_t=0$ for $t \neq t_i$, and may correspond to a situation where the principal performs audits at regular intervals. Therefore, $\mathbb{F}^X$ is strictly included in $\mathbb{F}^B$. We deduce that, even in a situation where the principal knows the agent is not exerting effort, the principal has coarser information than the agent. In Volterra Gaussian models, we must therefore be careful that there may be asymmetric information between the principal and the agent regardless of the agency problem we introduce below.\\ {\it Agency problem:} We assume that the agent can exert a continuous effort $(a_t)_{t\leq T}$ that modifies the probability distribution of $X$ as follows: \begin{equation}\label{outputeffort} X_t=g_0(t)+\int_0^t K(t,s)(a_s\,ds + dB_s^a), \end{equation} where $B^a$ is also a Brownian motion. As is customary in the agency theory literature, while the output process $X$ is observable by both players, the effort is the agent's private information. The agent's cost for exerting some effort level $a$ is modeled through a strictly convex $C^2$ function $k(a)$ satisfying $k(0)=0$. To alleviate the exposition, we will assume hereafter that the effort cost function is quadratic, \begin{align}\label{eq:costk} k(a)=\kappa \frac{a^2}{2}, \quad \mbox{for some } \kappa>0. \end{align} Hereafter, and in accordance with the paper of \citet{holmstrom1987aggregation}, we model the preferences of the principal and the agent with CARA utility functions, given respectively by $$ U_P(x):=- {\rm exp} (-\gamma_P x)\, \hbox{ and } U_A(x):=- {\rm exp} (-\gamma_Ax), \quad \forall x\in{\mathbb R}. $$ At the beginning of the relationship, the principal and the agent agree on a contract of maturity $T$. To foster incentives, the contract specifies a payment at time $T$, which is modeled by a random variable $\xi$ that is supposed to be ${\cal F}_T^X$-measurable. We assume that both players can fully commit to the contract and that the agent has a reservation utility level $R_0=U_A(y_0)<0$ below which he will refuse the contract.
The latter inequality is referred to as the participation constraint of the agent, who has the option to reject the contract and enjoy the autarky utility $R_0$.\\ {\it Description of the probabilistic background:} For completeness, we recall the rigorous formulation of the agency problem in order to clarify the first-order conditions that we will give in the next section. Let $(\Omega,\mathcal{F}, \mathbb{F} :=(\mathcal F_t)_{t\leq T},\mathbb{P}_0)$ be a filtered probability space on which an $\mathbb{F}$-Brownian motion $B:=(B_{t})_{{t\leq T}}$ is defined, with natural (completed) filtration $\mathbb{F}^B:=(\mathcal F^B_t)_{t\leq T}$. \vspace{0.5em} The firm's output or cash flows observed by the principal are given by a stochastic process $X$ with dynamics under $\mathbb{P}_0$ \begin{equation}\label{eq:dynX} X_t=g_0(t)+\int_0^t K(t,s) d B_s. \end{equation} The impact of the agent's effort is modeled as a change of probability measure which changes the drift of the driving Brownian process. More precisely, the agent's admissible actions are given by the following set \begin{align*}{} \mathcal A = \left\{ (a_t)_{t \leq T} \, \mathbb{F}\mbox{-progressively measurable: there exists $A>0$ s.t.~} \int_0^T a_s^2 ds \leq A, \; \mathbb{P}_0-\mbox{a.s.} \right\}. \end{align*} Observe that the set of admissible actions $\mathcal A$ is not empty because it contains bounded actions. Clearly, any admissible process $a \in \mathcal A$ satisfies Novikov's criterion $$ {\mathbb E}\left[ {\rm exp} \left( \frac{1}{2} \int_0^T a_s^2ds\right)\right]<+\infty, $$ which ensures that the process $\left( {\rm exp} \left( \int_0^t {a_s}\,dB_s-\frac{1}{2}\int_0^t a_s^2\,ds\right)\right)_{0\le t\le T}$ is a martingale, see \citet[Proposition 5.12 p. 198]{karatzasshreve91}. We can therefore define a family of equivalent probability measures $\mathbb{P}_a$ by $$ \frac{d\mathbb{P}_a}{d\mathbb{P}_0}= {\rm exp} \left( \int_0^T {a_s}\,dB_s-\frac{1}{2}\int_0^T a_s^2\,ds\right), $$ where $a$ ranges through ${\mathcal{A}}$. Under $\mathbb{P}_a$, the process $B^a=B-\int_0^{\cdot} a_s\,ds$ is an $\mathbb{F}$-Brownian motion by Girsanov's theorem and $X$ evolves as \begin{equation}\label{eq:dynX:Pa} X_t=g_0(t)+\int_0^t K(t,s)(a_s\,ds+ d B_s^a). \end{equation} Because the effort is unobservable, the principal only observes the trajectory of the output process $X$ and the deterministic curve $g_0$, but not the last two terms of the decomposition \eqref{eq:dynX:Pa} separately. \\ Interestingly, in the case of general Volterra processes, this model leads to a novel and simple setting in which past efforts have a persistent effect on the output variation. To understand this, let us imagine that the agent makes a constant effort $a$ on the interval $[0,t]$ and then stops exerting effort after $t$. Then, for $h>0$, we have $$ {\mathbb E}[(X_{t+h}-X_t)| \mathcal F_t]=g_0(t+h)-g_0(t)+\int_0^t (K(t+h,s)-K(t,s))(dB_s^a+a\,ds), $$ which induces persistence of past efforts on the future output increments whenever the functions $K(t+h,\cdot)$ and $K(t,\cdot)$ are not identical. Notice that in the HM model the kernel is constant, so we recover that past efforts have no influence on future variations of the output. \\ {\it The Principal-agent problem:} It is well known that principal-agent relationships can be viewed as a Stackelberg game. The principal moves first by offering the agent a contract that consists of a compensation $\xi$, which belongs to the set of ${\cal F}_T^X$-measurable random variables.
The latter then reacts by choosing an effort policy based on the information available at each date, inducing a probability measure $\mathbb{P}_a$. For any given contract $\xi$, let $V_0^A(\xi)$ denote the agent's utility at time $0$, which is defined as \begin{equation} \label{agentutility} V_0^A(\xi):=\sup_a {\mathbb E}^a\left( U_A\left(\xi - \int_0^T k(a_s) \,ds \right)\right), \end{equation} where we recall the definition of $k$ in \eqref{eq:costk}. As is common in agency problems, we define the concept of incentive-compatible contracts. \begin{defi}\label{def:ic} A contract $\xi$ is said to be {\it incentive compatible} if $V_0^A$ is finite and if there exists an effort policy $a^*(\xi) \in {\mathcal{A}}$ that maximizes \eqref{agentutility}, i.e. $$ V_0^A(\xi)={\mathbb E}^{a^\ast(\xi)}\left( U_A\left(\xi - \int_0^T k(a^*_s(\xi)) \,ds \right)\right). $$ \end{defi} It is critical to understand what incentive-compatible contracts are, as these are the ones for which the principal can enforce desirable efforts. As is common in the literature, we will focus on a class $\Xi$ of contracts $\xi$ that are incentive-compatible (IC). Before rigorously defining the class of IC contracts $\Xi$ we will focus on, we clarify the principal's problem. By offering an incentive-compatible contract $\xi \in \Xi$, the principal is able to anticipate the optimal effort level $a^\ast(\xi)$. Hence, she will propose an incentive-compatible contract that maximizes the expected value of her CARA utility. Her aim is thus to solve \begin{equation} \label{eq:Principalpb} V_0^P:=\sup_{\xi \in \Xi} {\mathbb E}^{a^\ast(\xi)}\left[U_P\left(X_T-\xi\right)\right], \end{equation} under the participation constraint ${\mathbb E}^{a^\ast(\xi)}\left( U_A\left(\xi - \int_0^T k(a^*_s(\xi)) \,ds \right)\right) \ge R_0$. \\ The first result of this paper is given by the following theorem, which shows that the problem \eqref{eq:Principalpb} admits an optimal contract that is linear in end-of-period outcomes. The result of the Holmstrom-Milgrom model thus extends to all Gaussian Volterra processes, even though these may exhibit very different statistical properties. Following \citet{schattler1993first}, we introduce the class of contracts we will focus on. Let us define \begin{equation}\label{BSDEdrift} f^\ast(z):=\frac{\gamma_A}{2} |z|^2 + \inf_{a \in \mathbb R} \{k(a)-a z\} =\frac{\kappa \gamma_A-1}{2\kappa }z^2 , \end{equation} and consider the following class $\Xi$ of incentive-compatible contracts, see Proposition~\ref{IC} below, $$ \Xi=\{ \xi=Y_T^{(y,\beta)} \in \mathcal{F}_T^X, \text{ where }y \ge y_0, \beta=(\beta_t)_{t\leq T} \in {\mathcal{A}} \text{ and } Y_T^{(y,\beta)}=y+\int_{0}^T f^\ast(\beta_s) ds +\int_0^T \beta_s dB_s \}. $$ We have: \begin{theorem}\label{Main} The optimal contract $\xi^*$ that maximizes the principal problem \eqref{eq:Principalpb} is linear in end-of-period profits $X_T$ and is given by \begin{align}\label{eq:Maincontract} \xi^*=y_0 -\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa} g_0(T)+ \frac{\kappa\gamma_A-1}{2\kappa} \int_0^T (\beta^*_s)^2\,ds+\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa}X_T, \end{align} and the optimal level of recommended effort $a^*$ that maximizes the agent's problem \eqref{agentutility} is deterministic and given by $a^*=\frac{\beta^*}{\kappa}$ with \begin{align}\label{eq:Maineffort} \beta^*_t=\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa}K(T,t), \quad t \leq T. \end{align} \end{theorem} \begin{proof} See Section~\ref{sectionP}.
\end{proof} Similarly to HM, the optimal compensation is made up of a deterministic base salary, $y_0 -\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa} g_0(T)+ \frac{\kappa\gamma_A-1}{2\kappa} \int_0^T (\beta^*_s)^2\,ds$, and a random compensation to foster incentives, $\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa}X_T$. One of the striking results is that, when agents have CARA preferences, the incentive part of the optimal contract, through the performance-based bonus coefficient $\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa}$, is common to all one-dimensional Volterra Gaussian models and thus independent of the output dynamics, even though these may have very different statistical properties. Only the base salary is industry-specific, depending on the output dynamics through the Volterra kernel $K$. The optimal effort level is deterministic and firm-specific and can, depending on the choice of the Volterra kernel, exhibit interesting behaviors. For instance, for the mean-reverting dynamics, i.e.~$K(t,s)=e^{-\lambda (t-s) } 1\!\!1_{s<t}$, the optimal effort is increasing whenever the mean-reverting intensity $\lambda$ is positive: the closer one gets to contract maturity, the more work the agent has to do. The intuition is that the optimal effort should compensate for the natural tendency of the process to revert to its long-term average. The closer the contract is to maturity, the greater the effort should be to allow $X$ to deviate from its long-term average and thus allow the principal to benefit from a greater profit. When the mean-reverting intensity is negative, the effort must be greater at the beginning of the contract in order to give the necessary impetus to the process to diverge towards large positive values. Once this momentum is established, it is less effective to ask the agent to work.\\
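These qualitative features are easy to check numerically. The sketch below (our own illustration; the parameter values are hypothetical) evaluates the bonus coefficient and the optimal effort $a^*_t=\beta^*_t/\kappa$ of Theorem~\ref{Main} for mean-reverting kernels with positive and negative intensity $\lambda$.
\begin{verbatim}
import numpy as np

# model parameters (hypothetical values chosen only for illustration)
gamma_A, gamma_P, kappa, T = 1.0, 0.5, 2.0, 1.0

# performance-based bonus coefficient, common to all 1-d Volterra models
share = (gamma_P + 1.0 / kappa) / (gamma_A + gamma_P + 1.0 / kappa)

t = np.linspace(0.0, T, 11)

def optimal_effort(lam):
    """a*_t = beta*_t / kappa with beta*_t = share * K(T,t)
    for the mean-reverting kernel K(T,t) = exp(-lam (T - t))."""
    return share * np.exp(-lam * (T - t)) / kappa

print(optimal_effort(+2.0))  # increasing in t: effort ramps up near maturity
print(optimal_effort(-2.0))  # decreasing in t: impetus given at the start
\end{verbatim}
As expected, the bonus coefficient is the same for every kernel; only the deterministic effort profile changes with $\lambda$.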
The following two sections are dedicated to proving Theorem~\ref{Main}: Section~\ref{agentpb} solves the agent's problem, while Section~\ref{sectionP} solves the principal's problem. An extension of Theorem~\ref{Main} to the multi-dimensional set-up is considered in Section~\ref{S:multid}. \section{The one-dimensional agent problem}\label{agentpb} This section aims at completely solving the agent's problem \eqref{agentutility}. The ideas developed here are not new; they rely on the martingale approach to stochastic control already used in \citet{schattler1993first}, which we adapt to develop the first-order approach to principal-agent problems in a continuous-time Gaussian setting with exponential utilities. We will show that the contracts in the class $\Xi$ are incentive compatible and design the optimal response of the agent to a given contract in $\Xi$. Our construction relies on the following martingale optimality principle, which provides clear intuition for the stochastic maximum principle used in the context of dynamic contracting by \citet{williams2011ECMA}. \subsection{The Martingale optimality principle} The martingale optimality principle should be seen as a sufficient condition for a contract to be incentive-compatible. The following lemma, which is due to \citet{hu2005utility} and is proved in Appendix \ref{MOP} for completeness, states this principle. \begin{lem} \label{martingaleoptimality} Given a contract $\xi$, suppose the existence of a family of stochastic processes $R^a(\xi):=(R_t^a)_{t\leq T}$ indexed by $a \in {\mathcal{A}} $ such that the following four assertions hold: \begin{itemize} \item[i)] $R_T^a = U_A\left(\xi - \int_0^T k(a_s) ds\right), \quad \forall a \in {\mathcal{A}}$, \item[ii)] $R_{\cdot}^a$ is a $(({\cal F}_t)_{t\in [0,T]},\mathbb{P}^a)$-supermartingale for every $a$ in ${\mathcal{A}}$, \item[iii)] $R_0^a$ is independent of $a$, \item[iv)] there exists $a^\ast$ in $\mathcal{A}$ such that $R^{a^\ast}$ is a $((\mathcal F_t)_{t\in [0,T]},\mathbb{P}^{a^\ast})$-martingale. \end{itemize} Then, $\xi$ is incentive compatible for the agent problem \eqref{agentutility} and $a^\ast$ is the agent's best reply. \end{lem} In the dynamic agency literature, the process $(R_t^{a^\ast})_{t\leq T}$ describes the agent's expected utility given the contract $\xi$. A contract $\xi$ is thus incentive compatible if we are able to build such a family $R^a(\xi)$. This will be done in the next section. \subsection{Enlarging the class of Incentive Compatible Contracts} In accordance with the result of \citet{schattler1993first}, we expect the contracts belonging to $\Xi$ to be incentive compatible. It is at this point that a difficulty arises in our setting compared to the Brownian model of \citet{holmstrom1987aggregation}, and more generally to the standard literature, where the information sets of the two players coincide in the absence of moral hazard. Because the principal has coarser information (recall that the paths of $B$ are not always observable by the principal), she cannot in general implement the process $(Y_t^{y,\beta})_{t\leq T}$ given by \begin{align}\label{eq:Yproc} Y_t^{y,\beta} = y + \int_0^t f^*(\beta_s)ds + \int_0^t \beta_s dB_s, \end{align} for $y\geq y_0$ and $\beta \in {\mathcal{A}}$, since $Y_t^{y,\beta}$ fails to be $\mathcal F^X_t$-measurable. In other words, the contracts in $\Xi$, which are the most natural candidates to be incentive compatible, are a priori inaccessible, unless we are able to characterize the controls $\beta \in {\mathcal{A}}$ that induce $Y_T^{(y,\beta)} \in \mathcal{F}_T^X$. Putting aside for a while this problem of information asymmetry between the two players, we consider a larger game where the principal is supposed to have the same information as the agent. We thus forget for now the constraint $\xi \in \mathcal F^X_T$ and introduce the enlarged set of contracts $$ \hat \Xi=\{ \xi=Y_T^{(y,\beta)} \text{ where } y \ge y_0, \, \beta \in {\mathcal{A}},\quad Y_T^{(y,\beta)}=y+\int_{0}^T f^\ast(\beta_s) ds +\int_0^T \beta_s dB_s\}, $$ and naturally extend Definition~\ref{def:ic} of incentive-compatibility to $\mathcal F^B_T$-measurable contracts. We have the following result, which we prove for the sake of completeness using the martingale optimality principle. \begin{pro}\label{IC} Let $\hat \xi \in \hat \Xi$ be of the form $$\hat \xi = y + \int_0^T f^*(\beta_s)ds + \int_0^T \beta_sdB_s,$$ with $y\geq y_0$ and $\beta \in \mathcal A$. Then, $\hat \xi$ is incentive compatible for the agent problem in \eqref{agentutility} and satisfies the participation constraint. Furthermore, the agent's best reply is given by the effort $a^{*}(\hat \xi)= \frac{\beta}{\kappa}$ and the utility of the agent at $0$ is given by $V_0^A(\hat \xi)=- {\rm exp} (-\gamma_A y)$. \end{pro} \begin{proof} Fix $\hat \xi \in \hat\Xi$.
Let $y\geq y_0$ and $\beta \in \mathcal A$ be such that $\hat \xi =y+\int_{0}^T f^\ast(\beta_s) ds +\int_0^T \beta_s dB_s$ and define the process $Y^{(y,\beta)}$ by \eqref{eq:Yproc}. For an admissible effort policy $a=(a_t)_{t\leq T} \in {\mathcal{A}}$, we define $R^a$ as $$ R^{a}_t:=- {\rm exp} \left(-\gamma_A \left(Y_t^{(y,\beta)} - \int_0^t k(a_s) ds \right)\right), \quad t\in [0,T].$$ We will show that the family $(R_t^a)_{t\leq T}$ satisfies conditions i)--iv) of the martingale optimality principle of Lemma~\ref{martingaleoptimality}. Observe that $Y_T^{(y,\beta)}= \hat\xi$, so that Lemma~\ref{martingaleoptimality}-i) is satisfied. Also, $R^a_0 = - {\rm exp} (-\gamma_A y)$ is independent of $a$, as needed in Lemma~\ref{martingaleoptimality}-iii). Furthermore, recalling that $B^a=B-\int_0^{\cdot} a_s\,ds$, we note that \begin{align*} Y_t^{(y,\beta)} - \int_0^t k(a_s) ds &= y + \int_{0}^t \left(f^\ast(\beta_s)- k(a_s)\right) ds +\int_0^t \beta_s dB_s \\ &= y + \int_{0}^t \left(f^\ast(\beta_s)- k(a_s) + a_s \beta_s \right) ds +\int_0^t \beta_s dB^a_s. \end{align*} Using the definition of $f^{\ast}$ in \eqref{BSDEdrift}, a completion of the squares in $a_s$ yields the expression $$ f^\ast(\beta_s)- k(a_s) + a_s \beta_s = -\frac{\kappa}{2}\left(a_s - a^\ast_s\right)^2 + \frac{\gamma_A}{2} \beta^2_s $$ with $a^\ast_s:=\frac{\beta_s}{\kappa}$, so that combining the above leads to \begin{align*} R_t^a = - {\rm exp} \left( -\gamma_A y\right) {\rm exp} \left(\frac{\gamma_A\kappa}{2}\int_0^t\left(a_s -a^\ast_s\right)^2 ds \right) {\rm exp} \left( -\frac{\gamma_A^2}{2} \int_0^t \beta_s^2 ds - \gamma_A\int_0^t \beta_s dB^a_s\right). \end{align*} It remains to argue that the process $M^a:= {\rm exp} \left(-\gamma_A \int_0^{\cdot} \beta_sdB^a_s-\frac{\gamma_A^2}{2}\int_0^{\cdot}\beta_s^2\, ds\right)$ is a martingale under $\P^a$. Indeed, if this is the case, then, since $ {\rm exp} \left(\frac{\gamma_A\kappa}{2}\int_0^T(a_s -a^\ast_s)^2 ds\right) \geq 1$, $$ \mathbb E^a[R^a_T] \leq - {\rm exp} (-\gamma_A y) \mathbb E^a[M_T^a] = - {\rm exp} (-\gamma_A y) = R^a_0, $$ which shows that $R^a$ is a $\mathbb{P}^a$-supermartingale for each $a\in \mathcal A$; this corresponds to condition ii) of Lemma~\ref{martingaleoptimality}. Furthermore, for $a=a^{\ast}$, we have that $R^{a^*}$ is a $\mathbb{P}^{a^{\ast}}$-martingale, which gives Lemma~\ref{martingaleoptimality}-iv). Obtaining that $M^a$ is a martingale under $\mathbb{P}^a$ is equivalent to proving that $$ M_t= {\rm exp} \left( \int_0^t {a_s}\,dB_s-\frac{1}{2}\int_0^t a_s^2\,ds\right) {\rm exp} \left(-\gamma_A \int_0^t \beta_s \,dB^a_s-\frac{\gamma_A^2}{2}\int_0^t\beta_s^2\, ds\right) $$ is a martingale under $\P^0$. But observe that $$ M_t= {\rm exp} \left( \int_0^t {(a_s-\gamma_A\beta_s)}\,dB_s-\frac{1}{2}\int_0^t (a_s-\gamma_A\beta_s)^2\,ds\right), $$ which is a martingale for $(a_t)_t$ and $(\beta_t)_t$ in $\mathcal A$ by Novikov's criterion. An application of Lemma \ref{martingaleoptimality} then shows that $\hat\xi$ is incentive compatible, with the agent's best reply given by the effort $a^*_s(\hat \xi) = \frac{\beta_s}{\kappa}$. Finally, since $y\geq y_0$, the identity $$ V_0^A(\hat\xi) =\mathbb E^{a^*}[R_T^{a^*}] = - {\rm exp} (- \gamma_A y) \geq - {\rm exp} (-\gamma_A y_0)=R_0 $$ shows that $\hat\xi$ satisfies the participation constraint and gives the required utility of the agent at $0$, which concludes the proof. \end{proof} Notably, when the principal offers a contract parametrized by the pair $(y,\beta)$, the agent's best reply is $\frac{\beta}{\kappa}$ and thus independent of $y$.
This is due to the absence of wealth effects under CARA preferences. The agent's utility is $- {\rm exp} ({-\gamma_A y})$ and thus independent of $\beta$. This is due to the agent's full commitment, which allows the principal to choose the best incentive contract binding the participation constraint. To sum up, restricting our attention to contracts in $\hat \Xi$ turns the principal's problem into a stochastic Volterra control problem, namely\footnote{ To alleviate notations, we will denote hereafter by $\P^\beta$ the probability corresponding to the agent's effort choice $a=\frac{\beta}{\kappa}$.} \begin{equation}\label{secondbest} V_{SB}=\sup_{y\ge y_0,\beta \in {\mathcal{A}}} {\mathbb E}^\beta\left[ U_P\left(X_T-Y_T^{y,\beta}\right) \right]=\sup_{\hat \Xi} {\mathbb E}\left[ U_P\left(X_T-\hat\xi\right) \right], \end{equation} where $(Y_t^{y,\beta})_{t\leq T}$ is given by \eqref{eq:Yproc}. The principal problem \eqref{secondbest} corresponds to the enlarged stochastic control problem in which the principal would have access to the information generated by the Brownian motion. Clearly, the principal value \eqref{eq:Principalpb} satisfies $V_0^P \le V_{SB}$ because of the inclusion $\Xi \subset \hat \Xi$. In the one-dimensional Brownian model, \citet{holmstrom1987aggregation} show that the two values coincide because the sets of information $\mathbb{F}^B$ and $\mathbb{F}^X$ are identical ($\Xi=\hat \Xi$), and thus there is no need to introduce the enlarged control problem. Our contribution will be to show that the two values always coincide for one-dimensional Gaussian Volterra models, even if $\mathbb{F}^X$ is strictly included in $\mathbb{F}^B$. This is the object of the next section. \section{The one-dimensional principal Gaussian problem}\label{sectionP} This section is devoted to the explicit resolution of the principal problem \eqref{secondbest} and to the proof of Theorem \ref{Main}. Contrary to the standard literature, the problem \eqref{secondbest} is not a Markovian stochastic control problem, because the process $X$ is not necessarily Markov. More precisely, it corresponds to a stochastic Volterra control problem with the following controlled processes: \begin{align*} X_t &= g_0(t) + \frac{1}{\kappa} \int_0^t K(t,s)\beta_s ds + \int_0^t K(t,s)dB_s^{\beta},\\ Y^{y,\beta}_t &= y + \frac{\kappa\gamma_A + 1}{2\kappa} \int_0^t \beta_s^2 ds + \int_0^t \beta_s dB^{\beta}_s. \end{align*} We will show that the optimal second-best contract exists and is furthermore $\mathcal F^X_T$-measurable. As a consequence, the second-best principal value $ V_{SB}$ will coincide with the principal value $V_0^P$. In other words, our main message is that there is no gain to the principal in acquiring more information than that generated by the observed output process $X$ in one-dimensional Gaussian Volterra models, regardless of the definition of the kernel $K$. For instance, in the example of discrete observations of Brownian motion, i.e.~$K(t,s)=f(t)1\!\!1_{s \le t}$ with $f$ as in \eqref{eq:fdis}, there is no gain to the principal in increasing the frequency of the discrete observations of the Brownian output.\\ For $y \geq y_0$ and a control policy $\beta \in {\mathcal{A}}$, we define $J(y,\beta)= {\mathbb E}^{\beta} \left[ {\rm exp} \left( - \gamma_P\left(X_T - Y_T^ {y, \beta} \right) \right) \right]$ in order to rewrite the second-best principal problem, up to a sign change, as \begin{equation} \label{relaxed} V_{SB} = \inf_{y\geq y_0}V_{SB}(y), \, \text{ with }V_{SB}(y) =\inf_{\beta \in {\mathcal{A}}} J(y,\beta).
\end{equation} The rest of the section is dedicated to the proof of Theorem \ref{Main}, which characterizes the optimal control for the principal problem \eqref{secondbest}. The idea of the proof is to apply the martingale optimality principle again. To do this, we need to introduce a suitable family of processes indexed by $\beta$. Inspired by the agent problem, one possibility would be to consider the following family: $$ {\rm exp} \left(-\gamma_P \left( X_t - Y_t^{ y, \beta} \right) \right). $$ Unfortunately, it may be impossible to apply It\^o's formula, since the process $X$ may not be a semi-martingale, as in the case of fractional Gaussian processes. To get around this problem, we introduce a new state variable that can be interpreted as a forward price: a semi-martingale that coincides with $X$ at date $T$. Let us define the {\it effort-corrected forward output} process by $$ g_t^{\beta}(T) = {\mathbb E}^{\beta}\left[ X_T -\frac{1}{\kappa}\int_t^T K(T,u) \beta_u du \mid \mathcal F_t\right]. $$ Using the output dynamics \eqref{outputeffort} with effort $\beta \in {\mathcal{A}}$, we have $$ g^\beta_t(T)= g_0(T) + \frac{1}{\kappa} \int_0^t K(T,u) \beta_u du + \int_0^t K(T,u) dB_u^{\beta}. $$ Then, we observe that the process $(g^{\beta}_t(T))_{t \le T}$ is a semi-martingale on $[0,T)$ with dynamics \begin{align}\label{eq:dynamicsg} dg^{\beta}_t(T) = \frac{1}{\kappa}K(T,t) \beta_t dt + K(T,t) dB_t^{\beta} \end{align} and terminal value $g^\beta_T(T)=X_T$. To apply the martingale optimality principle, we will consider the family of processes \begin{align*} M_t^{\beta} = {\rm exp} \left(-\gamma_P \left( g^{\beta}_t(T) - Y_t^{ y, \beta} \right) + \phi_t \right), \end{align*} where $\phi$ is the deterministic function given by $$ \phi_t = \frac{\gamma_P} 2 \left( \gamma_P - \frac{(\gamma_P + 1/\kappa)^2}{\gamma_A +\gamma_P+ 1/\kappa} \right) \int_t^T K(T,s)^2ds . $$ Lemma \ref{L:M}, which is proved in Appendix~\ref{MOP}, provides the dynamics of $M^{\beta}$, which play a key role in the determination of the optimal contract. \begin{lemma}\label{L:M} For each $\beta \in {\mathcal{A}}$, we have \begin{align}\label{eq:Mdynamics} \frac{dM_t^{\beta}}{M_t^{\beta}} = \frac{\gamma_P} 2 (\gamma_A +\gamma_P+1/\kappa) (\beta_t - \beta_t^*)^2 dt + \left(\gamma_P \beta_t -\gamma_P K(T,t) \right) dB_t^{\beta}, \quad \mathbb P^{\beta}-a.s. \end{align} with $\beta^*$ given by \eqref{eq:Maineffort}. \end{lemma} \begin{proof} See Appendix~\ref{MOP}. \end{proof} We can now complete the proof of Theorem~\ref{Main}. \begin{proof}[Proof of Theorem~\ref{Main}] $ \bullet$ \textit{The Principal's problem} is solved by an application of the martingale optimality principle to the process $M^{\beta}$. Fix $\beta \in \mathcal A$ and $y\geq y_0$. We show that the family $M^{\beta}$ satisfies the assertions of the martingale optimality principle of Lemma~\ref{martingaleoptimality}, adapted to the minimization problem \eqref{relaxed}. We have: \begin{itemize} \item[i)] For all $\beta \in {\mathcal{A}}$, it follows from \eqref{eq:Mdynamics} that $M^{\beta}$ is a $\P^{\beta}$-sub-martingale and thus $$ M_0^{\beta} \leq {\mathbb E}^{\beta}\left[M_T^{\beta}\right] = {\mathbb E}^{\beta} \left[ {\rm exp} \left( -\gamma_P \left( X_T - Y_T^{y,\beta} \right) \right) \right]=J(y, \beta),$$ where we used that $g^{\beta}_T(T)= X_T$. \item[ii)] Observe that $M_0^\beta= {\rm exp} \left(-\gamma_P \left( g_0(T) - y \right) + \phi_0 \right)=M_0$ is independent of $\beta$.
\item[iii)] Finally, for $\beta^*$ given by \eqref{eq:Maineffort}, $M^{\beta^*}$ is a $\P^{\beta^*}$-martingale by \eqref{eq:Mdynamics} and thus we have $$ J(y,\beta^*) = {\mathbb E}^{\beta^*}\left[M_T^{\beta^*}\right] = M_0^{\beta^*} = M_0 \leq J( y,\beta), \quad \beta \in {\mathcal{A}}, $$ which shows that $\beta^*$ is the optimal control for the second-best principal problem, with principal value $V_{SB}(y)=M_0$. \end{itemize} Optimizing over $y\geq y_0$ yields $V_{SB}= {\rm exp} \left(-\gamma_P \left( g_0(T) - y_0 \right) + \phi_0 \right)$. Furthermore, since $\beta^*_t$ is proportional to the Volterra kernel $K(T,t)$, it is straightforward to obtain the linear form of the contract $\xi^*=Y_T^{y_0,\beta^*}$ as in \eqref{eq:Maincontract}. In particular, $\xi^*$ is $\mathcal F^X_T$-measurable as an affine function of $X_T$. Therefore, the optimal control for the enlarged principal problem \eqref{secondbest} induces an optimal contract in $\Xi$, so that $V_{SB}=V_0^P$.\\ $\bullet$\textit{The Agent's problem.} An application of Proposition~\ref{IC} yields that the optimal level of recommended effort $a^*$ that maximizes the agent's problem \eqref{agentutility} is given by $a^*=\frac{\beta^*}{\kappa}$. \end{proof} To sum up, this study shows that the transition from Brownian to Volterra models preserves the optimality of contracts that are linear in end-of-period profits. Moreover, in one-dimensional models, the principal does not suffer from having coarser information than the agent about the dynamics of the production process. Aggregating production over time is sufficient for optimal compensation in Volterra Gaussian environments, and there is no need to use all the available information (the Brownian path in our setting) to design an optimal contract. The next section deals with the robustness of this result in the multi-dimensional set-up. \section{The multi-dimensional model} \label{S:multid} So far, we have assumed that the shocks are modeled by a one-dimensional Brownian motion. In this section, we present a tractable class of multitask principal-agent problems, such as the one faced by a firm with a manager who supervises several projects. This model amounts to studying the principal-agent problem in the case where the shocks are modeled by a standard Brownian motion of dimension $d$, which we also denote by $(B_t)_{t\le T}$. As in Holmstrom and Milgrom, the $i$-th component $B^i_t$ of $B_t$ is interpreted as the outcome of the $i$-th account of the firm. We model the aggregate output or profit of the firm as follows: $$ X_t=g_0(t)+\int_0^t <K(t,s),dB_s>, $$ where $<\cdot,\cdot>$ denotes the canonical inner product in ${\mathbb R}^d$, $g_0$ is a deterministic function and $K : [0,T]^2 \to {\mathbb R}^d$ is a measurable Volterra kernel, i.e.~$K(t,s)=0$ for $s \geq t$, such that \begin{align} \sup_{t\leq T} \int_0^T ||K(t,s)||^2 ds < \infty. \end{align} For instance, the case $K(t,s)=\sigma1\!\!1_{s \le t},\,\sigma \in {\mathbb R}^d$, corresponds to the Brownian model in \citet[Section 4]{holmstrom1987aggregation}. We can also consider a mining company that exploits two different types of minerals in two different mines, each account representing the revenue of one mine.
This may correspond to the choice $K(t,s)=(e^{-\lambda_1(t-s)},e^{-\lambda_2(t-s)})1\!\!1_{s<t}$, where each separate outcome follows a mean-reverting process with two different mean-reverting intensities $\lambda_i,\,i=1,2$.\\ Even more importantly than in the one-dimensional case, the filtration $\mathbb{F}^X$ generated by the output is strictly included in the filtration $\mathbb{F}^B$ generated by the multi-dimensional Brownian motion, and thus the principal always has coarser information than the agent, even when the latter does not exert any hidden effort. This observation is central to Holmstrom and Milgrom's distinction between optimal contracts that are linear in profits and those linear in accounts, and more generally to understanding when it is unnecessary to use all the information generated by the Brownian motion. It is useful to recall here that we focus in this paper on contracts that are $\mathcal F_T^X$-measurable. \\ In a similar way to the one-dimensional case, we assume that the agent can exert a continuous vector of efforts $(a_t)_{t\leq T} \in {\mathbb R}^d$, $a_t^i$ being the effort made by the manager to improve account $i$, which modifies the probability distribution of $X$ as follows: $$ X_t=g_0(t)+\int_0^t <K(t,s),dB_s +a_s\,ds>. $$ Similarly to the one-dimensional case, we say that an agent's action $a=(a_t)_t$ is admissible if $a=(a_t)_t$ is $\mathbb{F}$-progressively measurable and if there exists $A>0$ such that $$ \int_0^T||a_s||^2\,ds \leq A, \quad \mathbb{P}_0-a.s. $$ Still denoting by ${\mathcal{A}}$ the set of admissible actions, we define for any $a \in {\mathcal{A}}$ a family of equivalent probability measures $\mathbb{P}_a$ by $$ \frac{d\mathbb{P}_a}{d\mathbb{P}_0}= {\rm exp} \left( \int_0^T <a_s,dB_s>-\frac{1}{2}\int_0^T ||a_s||^2\,ds\right). $$ Under $\mathbb{P}_a$, the process $B^a=B-\int_0^{\cdot} a_s\,ds$ is an $\mathbb{F}$-Brownian motion and the output dynamics are \begin{equation}\label{outputeffortdimd} X_t=g_0(t)+\int_0^t <K(t,s), dB_s^a+a_s\,ds>. \end{equation} We also assume that the agent incurs an instantaneous cost $k(a)$, where $k$ is a convex function on ${\mathbb R}^d$ with $k(0_{{\mathbb R}^d})=0$. When the kernel is a constant vector, Holmstrom and Milgrom considered the case where the effort cost function is $k(a)=g(\sum_{i=1}^d a_i)$ with $g$ strictly convex, and showed that the optimal compensation is linear in profit in that case. Nevertheless, this specification does not allow us to determine the optimal effort that the agent must make in each of his tasks. In this section, we will rather consider a quadratic effort cost function \begin{align*} k(a)=\frac{1}{2}<a,\Gamma a>, \end{align*} where $\Gamma$ is a symmetric positive-definite matrix. When $\Gamma$ is proportional to the identity matrix, i.e.~$\Gamma = \kappa I_d$ for some $\kappa >0$, we say that the effort cost function is radial because, in this case, the effort cost is proportional to the squared norm of the vector $a$. A radial cost amounts to assuming that effort costs are not specific to the different tasks that define the accounts. In the sequel, we will highlight the interplay between the choice of the matrix $\Gamma$ and the optimality of linear contracts. In a nutshell, our main results of this section in the multi-dimensional framework can be summarized as follows: \begin{itemize} \item If $\Gamma$ is proportional to the identity matrix, then the optimal contract $\xi^*$ is linear in the end-of-period profit $X_T$.
As in the one-dimensional model, the principal does not have to worry about her lack of information to sign an optimal contract. \item For more general matrices $\Gamma$, the optimal contract $\xi^*$ is no longer linear in the end-of-period profit $X_T$. More importantly, $\xi^*$ is not necessarily $\mathcal F_T^X$-measurable, meaning that the less-informed principal cannot implement such a contract. In this situation, we quantify the gap between the contract $\xi^*$ and the best linear contract that can be implemented by the principal. This gap can be interpreted as the value of information. \end{itemize} \subsection{The agent's problem} From a methodological viewpoint, there is no hurdle in adapting the techniques developed in Section \ref{agentpb}. As a consequence, we will roughly repeat the approach detailed in Section \ref{agentpb} to apply the martingale optimality principle and consider a class of incentive-compatible contracts. To do this, we assume for now that the principal has access to the Brownian filtration generated by $(B_t)_t$ and can implement the controlled process \begin{align}\label{eq:Y-beta-d} Y_t^{ y,\beta}=y+\int_0^t f^*(\beta_s)ds+\int_0^t<\beta_s, d B_s>, \end{align} with $y \ge y_0$ and $\beta \in {\mathcal{A}}$, where, for $z \in {\mathbb R}^d$, \begin{align*} f^*(z)=\frac{\gamma_A}{2} ||z||^2+\inf_a \left( \frac{1}{2}<a,\Gamma a> -<a,z> \right) = \frac{1}{2} <z , \left(\gamma_A I_d - \Gamma^{-1}\right) z>, \end{align*} to offer the wage $\xi=Y_T^{y,\beta}$, which is $\mathcal F^B_T$-measurable. For a given contract $\xi$ defined by $(y,\beta) \in [y_0,\infty)\times \mathcal A$, the agent has to determine his best response $a^*(\xi)$. To apply the martingale optimality principle, we introduce the family of processes indexed by $a$ given by $$ R_t^a=- {\rm exp} \left( -\gamma_A \left(Y_t^{ y, \beta}-\int_0^t \frac{1}{2}<a_s,\Gamma a_s>\,ds\right)\right). $$ The first-order condition gives the agent's best effort, $a^*(\xi)_t=\Gamma^{-1}\beta_t$. Furthermore, the agent's utility at time $0$ is given by $V_0^A(\xi) = R_0^a = - {\rm exp} (-\gamma_A y).$ We collect the result in the following proposition, which is the analogue of Proposition~\ref{IC} in the multi-dimensional framework. \begin{pro}\label{IC-d} Let $\hat \xi$ be a contract of the form $$\hat \xi = y + \int_0^T f^*(\beta_s)ds + \int_0^T <\beta_s, dB_s>,$$ with $y\geq y_0$ and $\beta \in \mathcal A$. Then, $\hat \xi$ is incentive compatible for the agent problem \eqref{agentutility} and satisfies the participation constraint.
In particular, the agent's best reply is given by the effort $a^{*}(\hat \xi)=\Gamma^{-1} {\beta}$ and the agent's utility at time $0$ is given by $V_0^A(\hat \xi) = - {\rm exp} (-\gamma_A y).$ \end{pro} We can again rewrite the principal's problem as a stochastic Volterra optimal control problem on the process $$ Y_T^{ y,\beta}=y+\frac{1}{2}\int_0^T <\beta_t,\left(\gamma_A I_d+\Gamma^{-1}\right)\beta_t>\,dt+\int_0^T<\beta_t, d B^\beta_t>, $$ where the stochastic process $B^\beta=(B^\beta_t)_t$ is a $d$-dimensional Brownian motion under the probability measure induced by the agent's effort $a^*_t=\Gamma^{-1}\beta_t$, which we will denote hereafter by $\P^*$.\\ Then, without asymmetry of information, the enlarged principal problem is given by $$ V_{SB} = \sup_{y\geq y_0} V_{SB}(y), $$ with \begin{align}\label{eq:Vsb_d} V_{SB}(y)=\sup_{\beta \in {\mathcal{A}}}{\mathbb E}^* \left[ U_P\left( X_T-Y_T^{y,\beta} \right) \right], \end{align} and is an upper bound for the principal problem \eqref{eq:Principalpb} under the constraint $\xi \in \mathcal F^X_T$. \subsection{The Enlarged Principal problem} The idea is to mimic the methodology developed in detail in the one-dimensional case. For this purpose, we reintroduce the effort-corrected forward output $$ g_t^\beta(T)={\mathbb E}^*\left[ X_T-\int_t^T <K(T,s), \Gamma^{-1} \beta_s>\,ds \,\big|\, \mathcal{F}_t \right] $$ and apply the martingale optimality principle to the process \begin{align*} M_t^{\beta} = {\rm exp} \left(-\gamma_P \left( g^{\beta}_t(T) - Y_t^{y,\beta} \right) + \phi_t \right), \end{align*} where $\phi$ is a deterministic function to be determined.\\ The following theorem gives the optimal contract in the multi-dimensional setting for the enlarged principal problem. \begin{theorem}\label{Main-d} Let $\Gamma$ be symmetric positive-definite. The optimal level of effort $\Gamma^{-1} \beta^*$ that maximizes the enlarged principal's problem \eqref{eq:Vsb_d} is deterministic and $\beta^*$ is given by \begin{align}\label{eq:betavsb-d} \beta^*_t=\left(\left(\gamma_A+\gamma_P\right) I_d+\Gamma^{-1}\right)^{-1} \left(\gamma_P I_d+\Gamma^{-1}\right)K(T,t), \quad t \leq T. \end{align} The utility of the principal at time $0$ is given by $$ V_{SB} = V_{SB}(y_0) $$ with \begin{align}\label{eq:Vsb-explicit} V_{SB}(y)= - {\rm exp} \left(-\gamma_P(g_0(T)-y) + \phi_0\right) \end{align} and \begin{align*} \phi_0 = \frac{\gamma_P}{2} \langle K_T, \left(\gamma_P I_d - \left(\gamma_P I_d + \Gamma^{-1}\right) \left(\left(\gamma_A + \gamma_P\right) I_d + \Gamma^{-1}\right)^{-1} \left(\gamma_P I_d + \Gamma^{-1}\right)\right) K_T\rangle_{L^2}, \end{align*} where $K_T(s):=K(T,s)$ and $\langle f,g \rangle_{L^2}:=\int_0^T <f(s),g(s)>ds$. The optimal contract $\xi^*$ that maximizes the principal problem is given by \begin{align}\label{eq:optimalcontract-d} \xi^*=y_0+ \int_0^T f^*(\beta^*_s) ds + \int_0^T <\beta^*_s,dB_s>. \end{align} \end{theorem} \begin{proof} See Appendix~\ref{MOP}. \end{proof} We now make two important observations. \begin{rem}\label{R:enlarged} \begin{itemize} \item We note that in general the optimal contract \eqref{eq:optimalcontract-d} is not linear in $X_T$: indeed, the term $\int_0^T <\beta_s^*, dB_s> $ with $\beta^*$ given by \eqref{eq:betavsb-d} cannot be expressed in terms of the integral $\int_0^T <K(T,s),dB_s>$. \item More importantly, $\xi^*$ is measurable with respect to $\mathcal F^B_T$ but not necessarily with respect to the smaller filtration $\mathcal F^X_T$, which means that such a contract cannot be implemented by the less-informed principal.
\end{itemize} \end{rem} The following corollary shows that if the cost is radial, then the optimal contract is linear in $X_T$, and it can therefore be implemented by the principal. In Section \ref{S:valueofinfo} below, we study a class of optimal linear implementable contracts for the principal in case the cost is not radial. \begin{corollary}\label{radial} Assume that the effort cost function is radial, i.e.~$\Gamma= \kappa I_d$ for some $\kappa>0$. Then, the optimal level of effort that maximizes the enlarged principal's problem \eqref{eq:Vsb_d} is deterministic and given by \begin{align}\label{eq:betastard} \beta^*_t=\frac{\gamma_P+{1}/{\kappa}}{\gamma_A+\gamma_P+{1}/{\kappa}}K(T,t). \end{align} In particular, the optimal contract $\xi^*$ is linear in profits and given by \begin{align*} \xi^*=y_0-\frac{\gamma_P+1/\kappa}{\gamma_A+\gamma_P+1/\kappa} g_0(T)+ \frac{\gamma_A - {1}/{\kappa}}{2} \int_0^T <\beta^*_s, \beta^*_s> ds + \frac{\gamma_P+{1}/{\kappa}}{\gamma_A+\gamma_P+{1}/{\kappa}} X_T. \end{align*} Furthermore, $V_{SB}=V_0^P$. \end{corollary} \begin{proof} The expressions for $\beta^*$ and $\xi^*$ follow directly from Theorem~\ref{Main-d}. In particular, $\xi^*$ is $\mathcal F^X_T$-measurable as an affine function of $X_T$. Therefore, the optimal control for the enlarged principal problem \eqref{eq:Vsb_d} induces a contract which is implementable by the principal, so that $V_{SB}=V_0^P$. \end{proof} The message of Corollary~\ref{radial} is very simple. When an agent has to allocate his time across several tasks and when his effort cost function is not task-specific, and is thus measured by the norm of the vector $a$, it is not necessary for the principal to scrutinize the revenues of each activity. It is sufficient to sign a linear contract on the final value of the aggregate profits to provide the optimal incentives, regardless of the level of information available. When the effort cost function is radial, the principal need not observe the paths of individual accounts to offer an optimal compensation that is linear in profits. With this specification, we can disentangle the optimal efforts to allocate to the different tasks, because the $i$-th component of the optimal effort is proportional to the $i$-th component of the kernel. In order to illustrate how an agent should optimally allocate his time to the different tasks he has to perform, let us consider the following toy example. A salesperson must visit two clients in two different geographical areas. We assume that the first geographical area generates Brownian outcomes and the second area generates mean-reverting outcomes. The firm's aggregate output is given by $$ X_t=B_t^1+\int_0^t e^{-\lambda (t-s)}\,dB_s^2, \text{ with }\lambda >0. $$ While the share of the output that goes to the agent is independent of the parameter $\lambda$, the salesperson must differentiate his customer visits: the first customer must be visited at a constant rate, while visits to the second customer must be accelerated as the maturity of the contract approaches.\\
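A minimal sketch of this toy example (our own illustration, with hypothetical parameter values) computes the optimal effort allocation $a^*_t=\Gamma^{-1}\beta^*_t$ of Corollary~\ref{radial} for the salesperson's two tasks.
\begin{verbatim}
import numpy as np

# radial cost Gamma = kappa * I_2: the bonus share is kernel-independent
gamma_A, gamma_P, kappa, lam, T = 1.0, 0.5, 2.0, 1.0, 1.0
share = (gamma_P + 1.0 / kappa) / (gamma_A + gamma_P + 1.0 / kappa)

t = np.linspace(0.0, T, 6)
K = np.vstack([np.ones_like(t),           # Brownian client: K_1(T,t) = 1
               np.exp(-lam * (T - t))])   # mean-reverting client
effort = share * K / kappa                # a*_t = Gamma^{-1} beta*_t

print(effort[0])   # constant visiting intensity for the first client
print(effort[1])   # visits accelerate as maturity approaches
\end{verbatim}
The constant first row and the increasing second row of {\tt effort} reproduce the time allocation described above.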
\subsection{The subclass of linear contracts and the value of information}\label{S:valueofinfo} In this section, we no longer assume that the cost is radial; we study the subclass of linear contracts and we quantify the incurred loss in the principal's utility. Beyond their simplicity, the main advantage of linear contracts is that they are $\mathcal F_T^X$-measurable and can therefore be implemented by the less-informed principal. We will define the value of information as the premium the principal would have to pay to access the agent's information and implement the optimal contract. We will consider contracts as in \eqref{eq:Y-beta-d} with $y\geq y_0$, but only for controls $\beta$ of the form $$ \beta_t = b K(T, t), \quad \mbox{for some } b\in \mathbb R. $$ Note that in this case Proposition~\ref{IC-d} still applies and ensures that such contracts are incentive compatible, with the agent's best reply still given by $\Gamma^{-1}\beta$. Furthermore, for any $b\in {\mathbb R}$, the contract $Y_T^{y,b}$ is by construction linear in $X_T$ and given by \begin{align*} Y_T^{y,b} = y + \int_0^T f^*(bK(T,s))ds + b (X_T-g_0(T)), \end{align*} so that $b$ is the share of the output that goes to the agent. In this case, the principal optimizes over $(y,b)$ to find the optimal linear contract \begin{align*} V_{{lin}} = \sup_{y\geq y_0} V_{{lin}}(y), \end{align*} with \begin{align}\label{eq:Vlin_d} V_{{lin}}(y)=\sup_{b \in \mathbb R}{\mathbb E}^* \left[ U_P\left( X_T-Y_T^{y,b} \right) \right]. \end{align} A direct computation and optimization of the expectation leads to the following result for the optimal linear contract. \begin{theorem} Let $\Gamma$ be symmetric positive-definite. The optimal level of effort that maximizes the linear principal's problem \eqref{eq:Vlin_d} is given by $\Gamma^{-1}\beta^*$ with \begin{align}\label{eq:betalin-d} \beta^*_t&= b^* K(T,t), \quad t \leq T, \\ b^* &= \frac{\langle K_T, \left(\gamma_P I_d+\Gamma^{-1}\right) K_T \rangle_{L^2}}{\langle K_T, \left(\left(\gamma_A + \gamma_P\right) I_d+\Gamma^{-1}\right) K_T \rangle_{L^2}}. \end{align} The utility of the principal at time $0$ is given by $$ V_{lin} = V_{lin}(y_0) $$ with \begin{align}\label{eq:Vlin-explicit} V_{lin}(y)= - {\rm exp} \left(-\gamma_P(g_0(T)-y) + \chi_0\right) \end{align} and \begin{align*} \chi_0 = \frac{\gamma_P}{2} \langle K_T, \left(\gamma_P I_d -b^* \left(\gamma_P I_d + \Gamma^{-1}\right)\right) K_T\rangle_{L^2}. \end{align*} The optimal linear contract $\xi^*$ that maximizes the linear principal's problem is given by \begin{align*} \xi^*=y_0-b^*g_0(T)+ \int_0^T f^*(\beta^*_s) ds + b^* X_T. \end{align*} \end{theorem} \begin{proof} Fix $y \geq y_0$, $b\in {\mathbb R}$ and $\beta_t = bK(T,t)$. Then the random variable $X_T - Y_T^{y,b}$ reads \begin{align*} X_T - Y_T^{y,b} &= g_0(T)-y + < K_T, \left(b\Gamma^{-1} - \frac {b^2}2 \left(\gamma_A I_d + \Gamma^{-1}\right) \right) K_T >_{L^2} \\ &\quad + (1-b) \int_0^T <K(T,s),dB_s^{\beta}> \end{align*} and is therefore Gaussian under $\P^{\beta}$. A direct computation of the Laplace transform of a Gaussian random variable then yields \begin{align*} {\mathbb E}^{\beta} \left[ U_P\left( X_T-Y_T^{y,b} \right) \right] &= - {\rm exp} \left(-\gamma_P(g_0(T)-y) + F(b) \right) \end{align*} with \begin{align*} F(b) = < K_T, \left(\frac{\gamma_P^2}{2} I_d - b\left( \gamma_P^2 I_d + \gamma_P \Gamma^{-1}\right) + \gamma_P\frac {b^2}2 \left((\gamma_A + \gamma_P) I_d + \Gamma^{-1} \right) \right)K_T >_{L^2}. \end{align*} A direct minimization of $F$ over $b$ yields that the optimum is achieved at $b^*$ given by \eqref{eq:betalin-d}, with $F(b^*)=\chi_0$. Maximizing over $y\geq y_0$ then gives $ V_{lin} = V_{lin}(y_0) $. \end{proof} Obviously, when the principal restricts herself to linear contracts, her utility at $0$ satisfies $V_{lin}(y) \leq V_{SB}(y) $. It follows from \eqref{eq:Vsb-explicit} and \eqref{eq:Vlin-explicit} that $\chi_0 \geq \phi_0$.
More precisely, one has \begin{align}\label{eq:diffvalue} V_{lin}(y_0) = V_{SB}\left( y_0 + \frac{\chi_0-\phi_0}{\gamma_P} \right) = V_{SB}( y_0) {\rm exp} \left(\chi_0-\phi_0 \right). \end{align} The term $ {\rm exp} \left(\phi_0-\chi_0 \right)=\frac{V_{SB}( y_0)}{V_{lin}(y_0)}$ lies in $(0,1]$ and can be interpreted as the value of information in the following way: since, in general, the principal cannot implement the optimal contract in the enlarged filtration (recall Remark~\ref{R:enlarged}), she has to restrict herself to sub-optimal, simpler but implementable contracts. The price to pay when she restricts herself to linear contracts is a reduction of her utility by the factor $ {\rm exp} \left(\chi_0-\phi_0\right)$, which corresponds to the premium she would have to pay to access the optimal contract. Note that the agent's utility at time $0$ is unchanged compared to the previous section, and is still equal to $- {\rm exp} (-\gamma_A y_0)$ by Proposition~\ref{IC-d} when the principal proposes the contract $(y_0,b^*)$. \begin{remark} We note that, contrary to the one-dimensional setting, the coefficient $b^*$ in \eqref{eq:betalin-d} depends in general on the kernel $K$. For the case of radial costs, i.e.~$\Gamma = \kappa I_d$ for some $\kappa >0$, one recovers from \eqref{eq:betalin-d} that $b^*= (\gamma_P + {1}/{\kappa})/(\gamma_A + \gamma_P + {1}/{\kappa}) $, which is independent of $K$. Note also that in this context $\chi_0=\phi_0$, so that the value of information vanishes in this case; that is, linear contracts are optimal for the principal's problem, recall Corollary~\ref{radial}. \end{remark} Having characterized both the fully optimal contract and the optimal linear end-of-period contract, it remains to compare the performances of the two types of contracts. We implement this comparison by studying the sensitivity of the nonnegative difference $\chi_0-\phi_0$ with respect to the input kernel $K$ and the cost matrix $\Gamma$. The smaller the quantity $\chi_0-\phi_0$, the more efficient the implementation of a linear contract, recall \eqref{eq:diffvalue}. The next proposition provides an upper bound for the value of information in terms of two quantities: the condition number\footnote{The condition number of a symmetric positive definite matrix $S$ is the ratio $\frac{\lambda_{max}}{\lambda_{min}}$, where $\lambda_{max}$ (resp. $\lambda_{min}$) is the largest (resp. smallest) eigenvalue of $S$.} of the matrix $\Gamma$, denoted $Cond(\Gamma)$, and the $L^2$-norm of the kernel $K$. The condition number $Cond(\Gamma)$ measures how sensitive the effort cost function is to changes in effort. \begin{pro}\label{valueinfo} There exists a positive constant $C$, independent of the dimension $d$, the kernel $K$ and the terminal time $T$, such that \begin{align}\label{eq:upperbound} 0 \leq \chi_0-\phi_0 \le C(Cond(\Gamma)-1)\int_0^T \vert \vert K(T,t) \vert\vert^2\,dt. \end{align} \end{pro} \begin{proof} See Appendix \ref{MOP}. \end{proof} When the cost is radial, $Cond(\Gamma)=1$, so that one recovers $\chi_0 = \phi_0$, meaning that linear contracts are optimal, recall Corollary~\ref{radial}. When $Cond(\Gamma)$ is close to one, it means that the agent's best reply in terms of effort, the solution to the linear equation $\Gamma a^*=\beta$, is not very sensitive to errors in the principal's control $\beta$.
In that case, it is noticeable that the optimal linear contract is nearly optimal regardless of the Volterra process that drives the output.\\ For convolution kernels of the form $K(T,t)=1\!\!1_{t<T} k(T-t)$, we have $\int_0^T \|K(T,t)\|^2 dt = \int_0^T \|k(t)\|^2dt $, so that the upper bound in \eqref{eq:upperbound} shrinks as the horizon $T$ of the contract decreases, suggesting that linear contracts perform better in short-term relationships than in long-term ones. Furthermore, for the exponential kernel $k(t)=e^{-\lambda t}$ with $\lambda \in {\mathbb R}$, we have $\int_0^T \|K(T,t)\|^2 dt = (1-e^{-2\lambda T})/(2\lambda)$; thus, the higher the mean-reverting intensity, the smaller the upper bound. For the fractional kernel $k(t)=\sqrt{2H}t^{H-1/2}$ with $H\in (0,1)$, we have $\int_0^T \|K(T,t)\|^2 dt = T^{2H} $. We now illustrate numerically the value of information $ {\rm exp} (\phi_0-\chi_0)$, using Equation \eqref{vi_app}, for exponential and fractional kernels with $d=2$ and a diagonal cost matrix $\Gamma = \mbox{diag}(\lambda_1, \lambda_2)$. First, we look at the case of two exponential kernels $k_i(t)=e^{-\rho_i t}$ with different mean reversions\footnote{Note the change in notation to avoid confusion with the eigenvalues of $\Gamma$.} $\rho_i \in {\mathbb R}$, $i=1,2$. Figure \ref{fig:exp} describes a situation where providing one unit of effort for Task 1 is more costly than for Task 2, $\lambda_1 > \lambda_2$, as the mean-reverting parameters $\rho_i$ vary. The figure shows that linear contracts perform better for negative and small mean reversions, which is the case usually of interest in practice. When the intensity of the most expensive task is fixed and positive, varying the intensity parameter of the least expensive task has very little effect on the value of information. More generally, the value of information increases when $\rho_2$ increases. Linear contracts are very efficient (more than 90\%) when the mean-reverting parameter of the most expensive task is equal to one. \begin{center} \includegraphics[scale=.4]{exponential.png} \captionof{figure}{Impact of the mean reversion parameters $\rho_1$ and $\rho_2$ on the value of information with respect to the terminal time $T$.} \label{fig:exp} \end{center} For our second example, we consider two fractional kernels $k_i(t)=\sqrt{2H_i}t^{H_i-1/2}$ with $H_i\in (0,1)$. We see in Figure~\ref{fig:frac} that for short maturities linear contracts perform better for Hurst indices larger than $1/2$ (long-memory processes), while for longer maturities they perform better for values of $H<1/2$ (short-memory processes). The inflexion point at $T=1$ is explained by the behavior of the variance of the fractional Brownian motion, which reads $\int_0^T \|K(T,t)\|^2 dt = T^{2H} $, see Figure~\ref{fig:L2frac}. \begin{center} \includegraphics[scale=.4]{fractional.png} \captionof{figure}{Impact of the Hurst indices $H_1$ and $H_2$ on the value of information with respect to the terminal time $T$.} \label{fig:frac} \end{center} \begin{center} \includegraphics[scale=.6]{L2frac.png} \captionof{figure}{Impact of the Hurst index $H$ on the $L^2$-norm of the fractional kernel $t\mapsto\sqrt{2H}t^{H-1/2}$.} \label{fig:L2frac} \end{center}
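The closed-form expressions behind these figures are easy to evaluate. The sketch below (our own, with hypothetical parameter values) computes $\phi_0$, $\chi_0$ and the value of information $\exp(\phi_0-\chi_0)$ for the two-task exponential example, specializing the formulas of Theorem~\ref{Main-d} and \eqref{eq:betalin-d} to a diagonal cost matrix.
\begin{verbatim}
import numpy as np

# two tasks with exponential kernels k_i(t) = exp(-rho_i t) and a
# diagonal cost matrix Gamma = diag(lam1, lam2); parameter values
# below are hypothetical and chosen only for illustration
gamma_A, gamma_P, T = 1.0, 0.5, 1.0
lam = np.array([2.0, 0.5])    # cost eigenvalues, Task 1 more expensive
rho = np.array([1.0, -0.5])   # mean-reversion intensities (nonzero)
inv = 1.0 / lam               # diagonal entries of Gamma^{-1}

# L2 norms I_i = int_0^T exp(-2 rho_i (T-s)) ds
I = (1.0 - np.exp(-2.0 * rho * T)) / (2.0 * rho)

# phi_0: enlarged (fully informed) principal problem
m = gamma_P - (gamma_P + inv) ** 2 / (gamma_A + gamma_P + inv)
phi0 = 0.5 * gamma_P * np.sum(m * I)

# chi_0: best linear (implementable) contract
b_star = np.sum((gamma_P + inv) * I) / np.sum((gamma_A + gamma_P + inv) * I)
chi0 = 0.5 * gamma_P * np.sum((gamma_P - b_star * (gamma_P + inv)) * I)

print("b* =", b_star, " value of information =", np.exp(phi0 - chi0))
\end{verbatim}
Setting the two cost eigenvalues equal in this sketch makes the output collapse to the radial case, with a value of information equal to one.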
Hence, a theoretical framework justifying the signing of simple (linear) optimal contracts deserves attention. In this paper, we have shown that the remarkable results of the Holmstrom and Milgrom model extend, surprisingly, to a large class of Gaussian models that exhibit memory: the Volterra processes. In particular, we prove that optimal contracts are linear in one-dimensional models and that the principal has no incentive to expand her information set. In multi-dimensional models, this is no longer generally the case, except when the effort cost function is radial. Nevertheless, we are able to measure the utility gap when the principal proposes a linear contract in the case of a general effort cost function. Thus, we can examine the key features that make the performance of linear contracts very close to optimal.
{ "timestamp": "2022-09-26T15:08:12", "yymm": "2209", "arxiv_id": "2209.10878", "language": "en", "url": "https://arxiv.org/abs/2209.10878" }
\section{Introduction} Understanding the quantum chromodynamics (QCD) phase diagram as a function of temperature and baryon chemical potential is crucial for studying the nature of the strong interaction \cite{Aoki:2006we,Andronic:2017pug,Heinz:2013th,Kharzeev:2004ey}. Heavy-ion collisions (HICs) with various combinations of two opposing beams of heavy ions and colliding energies can create matter in states away from normal nuclear density, thereby offering a unique opportunity to access the structure of the QCD phase diagram \cite{Pandav:2022xxx,Gupta:2011wh,Ding:2015ona}. Studying the properties of the quark--gluon plasma (QGP) and determining the exact location of the conjectured critical end point are prime goals of the Beam Energy Scan (BES) program at the Relativistic Heavy Ion Collider (RHIC). The cumulants of conserved quantities in HICs at relativistic energies, such as net charge, net baryon number and net strangeness, have been proposed as sensitive observables to explore the critical end point \cite{Hatta:2003wn,Luo:2017faz,STAR:2021iop}. A nonmonotonic energy dependence of the quartic cumulant ratio of net protons for central (<5\%) Au + Au collisions at $\sqrt{s_{NN}}$ = 7.7--62.4 GeV has been reported by the STAR Collaboration \cite{STAR:2013gus,STAR:2014egu,Xu:2014jsa}, which implies that the QCD critical end point, if created in HICs, could exist in the energy region mentioned above. To pin down the uncertainties and assumptions involved, intense efforts on the experimental side are currently in progress, such as the BES-II program at RHIC \cite{Yang:2017llt}, the fixed-target experiments at the future Facility for Antiproton and Ion Research (FAIR) \cite{CBM:2016kpk}, as well as dedicated future programs at the Nuclotron-based Ion Collider Facility (NICA) \cite{Kekelidze:2016wkp} and the High Intensity heavy ion Accelerator Facility (HIAF) \cite{Yang:2013yeb}. On the theoretical side, studying the issues that may affect cumulants is also of great importance for analyzing the critical behavior and the properties of the QGP in HICs. Previous theoretical studies have shown that detector efficiency, volume fluctuations \cite{Skokov:2012ds,Xu:2016qzd,Xu:2016skm,Li:2017via,Luo:2013bmi}, charge and baryon number conservation \cite{Sakaida:2014pya,Shuryak:2018lgd}, and interactions between particles \cite{Bzdak:2013pha} affect the cumulants to a certain extent. Nuclear structure effects (such as nuclear deformation \cite{Jia:2021qyu}, density distribution \cite{Xu:2017zcn} and neutron skin \cite{Li:2019kkh}) on HICs at relativistic energies have attracted a lot of attention in recent years. For example, the initial density distribution of nuclei, or initial geometry fluctuations, is related to higher-order anisotropic flow \cite{Alver:2010dn}. However, to the best of our knowledge, the influence of the initial density fluctuations on the cumulants of particles has not been widely studied. In the present work, we study how the initial density fluctuations influence the cumulants of final observables by varying the minimum distance $d_{\rm min}$ between two nucleons in the initialization (the preparation of the colliding nuclei) of the ultrarelativistic quantum molecular dynamics (UrQMD) model \cite{Bass:1998ca}. \section{Initialization of UrQMD Model} \begin{figure} \centering \includegraphics[width=\linewidth]{rho-r.pdf} \caption{The density distribution in the initialized nucleus. The shaded region represents the one-standard-deviation interval about the mean value of the density.
The results obtained with $d_{\rm min}=0, 1.0$ and $1.6$ fm are compared. The inset displays the standard deviation.} \label{fig:1} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{rho-t.pdf} \caption{Time evolution of (\textbf{a}) the net proton density ($\rho$) and (\textbf{b}) its standard deviation ($\sigma_{\rho}$) in the central collision region (|x|, |y|, |z| < 1 fm) with impact parameter $b$ = 0 fm. Results obtained from calculations in the cascade (UrQMD/c) and mean-field (UrQMD/m) modes with $d_{\rm min}=1.0$ and $1.6$ fm are compared.} \label{fig:2} \end{figure*} The UrQMD model is a microscopic transport model in which basic physical laws (for example, the energy and momentum conservation laws) are obeyed exactly. It has been widely used to describe the dynamic process of collisions of \emph{p + p}, \emph{p + A} and \emph{A + A} systems over broad energy scales \cite{Bass:1998ca,Bleicher:1999xi,Bleicher:2022kcu}. The initialization, the propagation of particles in a mean-field potential and the collision term are the main ingredients of the UrQMD model. In the initialization, each nucleon is described by a Gaussian wave packet, and the centroids of the Gaussian packets are randomly distributed within a sphere of radius $R$, which is calculated with an empirical formula \cite{Bass:1998ca,Bleicher:1999xi,Wang:2021sdu} \begin{equation}\label{urho} \begin{aligned} R = \left (\frac{3}{4\pi\rho_0}\right)^{1/3}\left(\frac{1}{2}\left(A+(A^{1/3}-1)^3\right)\right)^{1/3}. \end{aligned} \end{equation} Here, $\rho_0$ = 0.16 fm$^{-3}$ is the saturation density and $A$ is the mass number. The radius calculated with this formula is smaller than the commonly used $R = 1.12 \times A^{1/3}$ fm, in view of the width of each Gaussian. After the coordinates of all nucleons are sampled, the distance between each nucleon pair in a sampled nucleus is calculated and the smallest one, $\delta r_{\rm min}$, is found. In the default version of the UrQMD model, the sampled nucleus is resampled if $\delta r_{\rm min}$ is smaller than $d_{\rm min}=1.6\;\rm fm$. Different values of $d_{\rm min}$ are used in different QMD-like models \cite{hermann}. The maximum value of $d_{\rm min}$ can be estimated from the size of nuclei: it should be smaller than $2R_0$, where $R_0$ is the coefficient of the empirical formula for calculating nuclear radii. However, the minimum value is not known. Indeed, $d_{\rm min}$ is a model parameter (not a physical parameter) used to speed up the initialization in QMD-like models. Setting $d_{\rm min}$ to a reasonable value makes the sampling of target and projectile nuclei faster. It is reasonable to infer that the density distribution differs if different values of $d_{\rm min}$ are used. More specifically, the fluctuation of the density distribution in coordinate space is stronger for a smaller value of $d_{\rm min}$, as displayed in Fig.~\ref{fig:1}. The standard deviation ($\sigma_{\rho}$, the width of the shaded band) decreases with increasing $d_{\rm min}$, implying a reduction of the density fluctuation in the initialization. For example, $\sigma_{\rho}$ obtained with $d_{\rm min}=1.6$ fm is about one half of that obtained with $d_{\rm min}=1.0$ fm. Thus, it is necessary to know whether final observables, such as cumulants, are influenced by varying $d_{\rm min}$.
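To make the initialization procedure concrete, the following minimal Python sketch (assuming NumPy) samples nucleon centroids uniformly in a sphere of radius $R$ given by Eq.~(\ref{urho}) and enforces the minimum pairwise distance $d_{\rm min}$. For tractability, the sketch resamples only the offending nucleon rather than the whole nucleus; this is faster than the full resampling described above, although the resulting ensemble is not exactly equivalent.

\begin{verbatim}
import numpy as np

RHO0 = 0.16  # saturation density in fm^-3

def nucleus_radius(A):
    # Sampling-sphere radius from the empirical formula, Eq. (1)
    return (3.0 / (4.0 * np.pi * RHO0)) ** (1.0 / 3.0) * \
        (0.5 * (A + (A ** (1.0 / 3.0) - 1.0) ** 3)) ** (1.0 / 3.0)

def sample_nucleus(A, d_min, rng, max_tries=200_000):
    R = nucleus_radius(A)
    pts, tries = [], 0
    while len(pts) < A:
        tries += 1
        if tries > max_tries:          # restart if stuck near jamming
            pts, tries = [], 0
        x = rng.uniform(-R, R, size=3)
        if x @ x > R * R:              # uniform in the ball by rejection
            continue
        if all(np.linalg.norm(x - p) >= d_min for p in pts):
            pts.append(x)
    return np.array(pts)

rng = np.random.default_rng(7)
gold = sample_nucleus(197, d_min=1.6, rng=rng)   # one Au nucleus
print(gold.shape)                                # (197, 3)
\end{verbatim}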
Moreover, it is of particular interest to know whether the information on the density fluctuation in the initial stage can be preserved until the final stage of the HIC. To demonstrate the effect of the nuclear mean-field potential on cumulants, simulations in the mean-field mode (UrQMD/m) were compared with pure cascade simulations (UrQMD/c). The propagation of the UrQMD model was stopped by default at 80 fm$/$c, which was sufficient to observe the fluctuation of the particle number at this energy. When discussing the time evolution of the density in coordinate space, we set the stopping time to 20 fm$/$c, which was enough to see the density fluctuation in the central region, as almost no particles remain in the central region after 20 fm/c. The time evolution of the net proton density $\rho$ and its standard deviation $\sigma_{\rho}$ is displayed in Fig.~\ref{fig:2}. With the same value of $d_{\rm min}$, the density obtained with UrQMD/c (solid line) was slightly larger than that obtained with UrQMD/m because of the repulsive nature of the nuclear mean-field potential in the compressed stage. At t $\leq$ 3 fm/c, it can be seen that, both in UrQMD/c and UrQMD/m, the density obtained with $d_{\rm min}=1.0$ fm was larger than that with $d_{\rm min}=1.6$~fm, and the influence of $d_{\rm min}$ on the density was even stronger than that of the nuclear mean-field potential. At t $>$ 3 fm/c, the difference in density caused by $d_{\rm min}$ almost vanished, implying that the influence of $d_{\rm min}$ on the density may have been washed out during the fireball expansion stage. The effect of $d_{\rm min}$ on the standard deviation of the net proton density was very evident, as can be seen in Fig.~\ref{fig:2} $\rm(b)$. At t $<$ 3 fm/c, the standard deviation obtained with $d_{\rm min}=1.0$ fm was larger than that with $d_{\rm min}=1.6$~fm because of the larger initial density fluctuation in the former case. At t $>$ 3 fm/c, the standard deviation obtained with UrQMD/c was larger regardless of $d_{\rm min}$, because stochastic particle collisions increase the fluctuations. From the time evolution of the net proton density and its standard deviation, one sees that the fingerprint of the initial density fluctuation in coordinate space only survived up to about t = 3 fm/c. However, it is not clear whether the influence of the initial density fluctuation in coordinate space can be translated into momentum space. \begin{figure} \centering \includegraphics[width=\linewidth]{yield-proton.pdf} \caption{Transverse momentum spectra of free protons at different centralities in the central rapidity window in Au+Au collisions at $\sqrt{s_{NN}}$ = 7.7 GeV. The solid and dashed lines are the results of UrQMD/c and UrQMD/m with $d_{\rm min}=1.6$ fm, respectively. The experimental data are from the STAR collaboration \cite{STAR:2017sal}.} \label{fig:3} \end{figure} In the mean-field mode, the coordinates and momenta of hadrons propagate according to Hamilton's equations of motion. Previous works have pointed out that the mean-field potential is necessary for describing HICs even at relativistic energies \cite{Li:2007yd,qfli1,Li:2021sdc}. A good agreement between the UrQMD model calculations and the STAR data is illustrated in Fig.~\ref{fig:3}, which shows the transverse momentum spectra of free protons at different centralities in a central rapidity window.
The results obtained with both UrQMD/c and UrQMD/m are in line with the STAR data \cite{STAR:2017sal}. We checked that, by varying $d_{\rm min}$, the changes in the transverse momentum spectra and the rapidity distribution were negligible. The total yield of free protons in the case of UrQMD/c was slightly larger than that in UrQMD/m, because more fragments were formed when the potential was included. \section{Fluctuations} The cumulants can be expressed as follows \cite{STAR:2013gus,STAR:2010mib}: \begin{align} &C_1=M=\langle N\rangle, \notag \\ &C_2=\sigma ^2=\langle (N-\langle N\rangle)^2\rangle= \langle (\delta N)^2\rangle, \notag \\ &C_3=S\sigma ^3=\langle (\delta N)^3\rangle, \notag \\ &C_4=\kappa \sigma ^4=\langle (\delta N)^4\rangle-3(\langle (\delta N)^2\rangle)^2, \end{align} {where} $N$ represents the net-proton number in a given acceptance for a single event and the bracket denotes an event average. Usually, the following ratios are used to eliminate the volume~effect: \begin{align} &C_2/C_1=\sigma ^2/M, \notag \\ &C_3/C_2=S\sigma , \end{align} {where} $M$ is the mean, the variance $\sigma ^2$ describes the width of the distribution and the skewness $S$ reflects the degree of symmetry. According to the Delta theorem, the statistical errors of the cumulant ratios usually depend on the number of events \cite{Luo:2013bmi}. In this work, more than three million events for each case were simulated to ensure that the error was within a tolerable~range.
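As a minimal illustration of these estimators (assuming NumPy; the Poisson toy sample below is purely illustrative and not UrQMD output), the cumulants and ratios can be computed from event-by-event net-proton numbers as follows.

\begin{verbatim}
import numpy as np

def cumulant_ratios(N):
    # N: 1-D array of net-proton numbers, one entry per event
    dN = N - N.mean()
    C1 = N.mean()
    C2 = np.mean(dN ** 2)
    C3 = np.mean(dN ** 3)
    C4 = np.mean(dN ** 4) - 3.0 * C2 ** 2
    return C1, C2, C3, C4, C2 / C1, C3 / C2

rng = np.random.default_rng(0)
N = rng.poisson(lam=20.0, size=3_000_000).astype(float)
print(cumulant_ratios(N))  # for a Poisson, both ratios are close to 1
\end{verbatim}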
\section{Results and Discussion}\label{sec:artwork} On the theoretical side, the cumulants characterizing fluctuations are usually defined in a finite spatial volume, while in heavy-ion collision experiments only the momenta of particles can be measured. Therefore, discussing the fluctuations in both coordinate space and momentum space, as well as their correlations, is of particular importance. As a microscopic transport model, the UrQMD model is able to record the coordinates and momenta of all particles at each time, thereby providing an opportunity to calculate the cumulants in coordinate space and their time evolution. \subsection{Results in Coordinate Space} \begin{figure} \centering \includegraphics[width=\linewidth]{netb_CC_t.pdf} \caption{Time dependence of $C_2/C_1$ of net baryon numbers in the central region (|x|, |y|, |z|<1 fm) with impact parameter $b$ = 0 fm. } \label{fig:4} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{bb.pdf} \caption{The pseudorapidity window dependence of $C_2/C_1$ (\textbf{a}) and $C_3/C_2$ (\textbf{b}) for net proton numbers with transverse momentum acceptance $0.4<p_T<2.0$ GeV$/$c. The results for head-on ($b$ = 0 fm) Au + Au collisions at $\sqrt{s_{NN}}$ = 7.7 GeV obtained with different scenarios are compared.} \label{fig:5} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{bbb.pdf} \caption{The pseudorapidity window dependence of $C_2/C_1$ (\textbf{a}) and $C_3/C_2$ (\textbf{b}) for net proton numbers with transverse momentum acceptance $0.4<p_T<2.0$ GeV$/$c. The scenarios are the same as in Fig.~\ref{fig:5} but for impact parameter $b$ = 5 fm.} \label{fig:6} \end{figure*} The time dependence of $C_2/C_1$ of net baryon numbers in the central region is plotted in Fig.~\ref{fig:4}. From the initial time to t = 1 fm/c (roughly the time of maximum compression), as one expects, the $C_2/C_1$ obtained with $d_{\rm min}=1.0$ fm was much larger than that obtained with $d_{\rm min} = 1.6$ fm, and their values were almost unchanged, indicating that the information of the initial density fluctuations could survive up to the maximum compression stage. In the expansion stage (t $\geq$ 1 fm/c), the difference in $C_2/C_1$ caused by $d_{\rm min}$ gradually diminished over time and almost completely vanished after about 10 fm/c. During t = 1--4 fm/c (the expansion stage), the $C_2/C_1$ obtained in the presence of a mean-field potential was slightly larger, while after t = 4 fm/c the situation was reversed: the $C_2/C_1$ obtained with UrQMD/c was larger regardless of $d_{\rm min}$, similar to what is shown in Fig.~\ref{fig:2}. This implies that the influence of the initial density fluctuations vanished while the influence of the mean-field potential started to appear. After t = 10 fm$/$c, the difference in $C_2/C_1$ between UrQMD/m and UrQMD/c was negligible and their values approached unity because the particle density was close to zero. The effect of the mean-field potential on the cumulants of particles in coordinate space was consistent with our previous works \cite{Steinheimer:2018rnd,Ye:2018vbc,Ye:2020lrc}. We checked that the behavior of net baryons was very similar to that of net protons; therefore, in the following only the results for net protons are~displayed. \subsection{Results in Momentum Space} It is known that particle distributions in coordinate space cannot be measured in heavy-ion collision experiments. Usually, the momenta of charged particles are measured by detectors within a certain acceptance. Therefore, in the following the cumulants of net protons in the STAR transverse momentum acceptance ($0.4<p_T<2.0$ GeV$/$c) and for a given pseudorapidity window $\Delta \eta$ were calculated and are discussed. Here, $\Delta \eta$ corresponds to the pseudorapidity coverage ($-\eta$, $\eta$). Figs.~\ref{fig:5} and \ref{fig:6} display the $C_2/C_1$ and $C_3/C_2$ of net protons produced in \mbox{Au + Au} collisions at $\sqrt{s_{NN}}$ = 7.7 GeV with impact parameters $b$ = 0 fm and 5 fm, respectively. It can be seen that for a small pseudorapidity window, $\Delta \eta$ $\leq$ 4, both $C_2/C_1$ and $C_3/C_2$ obtained with UrQMD/m were larger than those obtained with UrQMD/c, while the influence of $d_{\rm min}$ on both quantities was small. This means that the fingerprints of the initial density fluctuations on the cumulants in a narrow pseudorapidity window around $\eta$ = 0 were almost completely washed out. It is known that protons at mid-pseudorapidity are usually emitted earlier during the expansion. This relates to t = 4--10 fm/c, where the mean-field potential effects remain while the initial density fluctuation effects disappear (see Fig.~\ref{fig:4}). For a larger pseudorapidity window ($\Delta \eta$ $\geq$ 4), all cumulant ratios were suppressed due to the effect of conservation laws; the effects of the nuclear mean-field potential were suppressed while the effects of $d_{\rm min}$ became visible. For $\Delta \eta$ $\ge$ 4, both $C_2/C_1$ and $C_3/C_2$ obtained with $d_{\rm min}=1.0$ fm were larger than those obtained with $d_{\rm min}=1.6$ fm, and this enhancement of the cumulant ratios was even larger than that caused by the presence of the nuclear mean-field potential.
This may originate from the fact that particles with a large pseudorapidity usually experience only a few collisions; therefore, the signals of the initial density fluctuations on the cumulants can be maintained. Moreover, the effects of both $d_{\rm min}$ and the nuclear mean-field potential were more evident in collisions with $b$ = 5 fm than in those with $b$ = 0 fm. This can be understood from the fact that fingerprints of the initial density fluctuations and the nuclear mean-field potential on the cumulants are easily erased in the most central collisions ($b$~=~0~fm). In Fig.~\ref{fig:6}, it was found that both $C_2/C_1$ and $C_3/C_2$ were slightly increased at $\Delta \eta$ = 4--6. It is considered that the fluctuation of the number of fragments caused this increment of the cumulant ratios of the net protons in the large pseudorapidity window. \section{Summary} By varying the minimum distance $d_{\rm min}$ between two nucleons in the initialization of the UrQMD model, we investigated the effects of the initial density fluctuations in coordinate space on the cumulants of the net-proton multiplicity distribution in \mbox{Au + Au} collisions at $\sqrt{s_{NN}}$ = 7.7 GeV. The strength of the initial density fluctuations was clearly increased when a smaller value of $d_{\rm min}$ was used. Consequently, at the initial time, the cumulant ratio (e.g., $C_2/C_1$) in coordinate space around the collision center with a smaller $d_{\rm min}$ was larger than that with a larger $d_{\rm min}$. As the evolution proceeded, the influence of the initial density fluctuations on $C_2/C_1$ in coordinate space gradually vanished while the mean-field potential effects started to appear. In the final state, it was found that in a narrow pseudorapidity window around $\eta$ = 0, the effects of the initial density fluctuations on the magnitude of the net-proton number fluctuations (in momentum space) were negligible. On the other hand, with a broad pseudorapidity window, the values of the cumulant ratios were enlarged if the initial density fluctuations were increased with a smaller value of $d_{\rm min}$, and this enhancement was comparable to that observed in the presence of the nuclear mean-field potential. This means that the fingerprint of the initial density fluctuations on the cumulant ratios can be maintained in the final state. Moreover, it was found that the effects of the initial density fluctuations on the cumulant ratios were more evident in collisions with a larger impact parameter. \begin{acknowledgments} Fruitful discussions with Zepeng Gao and Dr. Haojie Xu are greatly appreciated. The authors acknowledge support by the computing server C3S2 at Huzhou University. This work is supported in part by the National Natural Science Foundation of China (Grants No. 11875125, No. U2032145, No. 12147219, and No. 12047568), the National Key Research and Development Program of China (No. 2020YFE0202002), and the Fund for Shanxi ``1331 Project'' Key Subjects Construction. \end{acknowledgments}
{ "timestamp": "2022-09-23T02:12:49", "yymm": "2209", "arxiv_id": "2209.10923", "language": "en", "url": "https://arxiv.org/abs/2209.10923" }
\section{Introduction} \input{sections/introduction} \section{Requirements} \input{sections/requirements} \section{Related Work} \input{sections/relatedWork} \section{Capability and Skill Model for Autonomous Robots} \input{sections/model} \section{Conclusion} \input{sections/conclusion} \begin{acknowledgements} \noindent This research is funded by dtec.bw – Digitalization and Technology Research Center of the Bundeswehr in the frame of the project RIVA, which we gratefully acknowledge. \end{acknowledgements} \bibliographystyle{IEEEtran} \subsection{Structure} \label{susbec:structure} The basis of the capability and skill model for manufacturing is the description of the structure. The structure consists of one ODP, which is based on the industry standard VDI 2206 \cite{VDI2206}. With \cite{VDI2206}, the defined terms can be used to describe machines as mechatronic systems composed of modules and components, such as sensors or actuators \cite{Kocher1}. HAuRs are also mechatronic systems. Nevertheless, the concepts available in VDI 2206 are too broad and insufficient, e.g., to distinguish different types of HAuRs. For representing the structure of HAuRs, the IEEE Standard 1872.2-2021 \cite{IEEE1872.2} is suitable. Since it is linked with the two upper ontologies SUMO and DUL, it offers the advantage of being linkable with ontologies of other domains. Linkage is done by subordinating more specific classes to more abstract superclasses. Thereby, some concepts of the two upper ontologies have been linked and related to new concepts. In addition, the CORA ontology of the IEEE Standard 1872-2015 \cite{IEEE1872} is adopted and connected. The main elements of the structure of the capability and skill model for HAuRs are shown in Fig.~\ref{fig:structure}. In Fig.~\ref{fig:structure} and all following figures, different colors represent a particular standard that includes one or more ODPs and their existing alignment. The only exception is the additional higher-level alignment used to link the different standards. The structure corresponds to a combination of the proven standard VDI 2206 \cite{VDI2206}, which is applied in the capability and skill model \cite{Kocher1} and adopted here, and \cite{IEEE1872.2}, including \cite{IEEE1872} and the upper ontologies DUL and SUMO. Fig.~\ref{fig:structure} shows the introduction of \emph{Robot}, and especially \emph{Autonomous Robot}, and the distinction of their types. \begin{align} AutonomousRobot \sqsubseteq Robot \sqsubseteq Device \\ \exists consistsOf.Platform \sqsubseteq AutonomousRobot \end{align} Linking of the individual ODPs is done via the alignment ontology shown in red. The elements of \cite{VDI2206} have been linked accordingly so that a consistent description is available. For example, \emph{Robot} is a subclass of \emph{Mechatronic System}, allowing a \emph{Robot} to also have inputs and outputs. \begin{align} Robot \sqsubseteq MechatronicSystem \end{align} Due to space restrictions, the definition of orientation is not shown, but it is consistent with the procedure for defining \emph{Position}. Different devices such as \emph{Sensor} can be assigned to a \emph{Robot} via the \emph{robotPart} link. Through this structure, the following can be represented: a drone is a robot, consists of components such as rotors and a camera, and is in a certain position and orientation.
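To illustrate how such structural axioms can be put into practice, the following minimal Python sketch serializes them with the rdflib package. The namespace, the individuals and the Turtle rendering are purely hypothetical illustrations, not the ontology of this paper.

\begin{verbatim}
from rdflib import Graph, Namespace, RDF, RDFS

AUR = Namespace("http://example.org/aur#")  # hypothetical namespace

g = Graph()
g.bind("aur", AUR)

# Subclass axioms from the text:
# AutonomousRobot <= Robot <= Device, and Robot <= MechatronicSystem
for sub, sup in [(AUR.AutonomousRobot, AUR.Robot),
                 (AUR.Robot, AUR.Device),
                 (AUR.Robot, AUR.MechatronicSystem)]:
    g.add((sub, RDFS.subClassOf, sup))

# The drone example: a robot consisting of components, via robotPart
g.add((AUR.drone1, RDF.type, AUR.AutonomousRobot))
g.add((AUR.drone1, AUR.robotPart, AUR.camera1))
g.add((AUR.drone1, AUR.robotPart, AUR.rotor1))

print(g.serialize(format="turtle"))
\end{verbatim}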
\subsection{Capabilities} \label{subsec:caps} The capability aspect is used to describe functions and to make them comparable despite different execution technologies. The capability model from manufacturing uses the VDI 3682 \cite{VDI3682} standard for this purpose by defining capabilities as process operators. In \cite{VDI3682}, processes consist of multiple process operators, which are described as follows: a \emph{Process Operator} can be assigned to a \emph{Technical Resource} and converts a set of inputs into outputs. Inputs and outputs are either a \emph{Product}, \emph{Information} or \emph{Energy}. A \emph{Process Operator} can also be decomposed into further \emph{Process Operators} so that different levels of detail can be described \cite{Kocher1}. This approach is adopted in the same way for HAuRs. For example, a capability \emph{transport} can be defined, which has as its inputs an object to be transported and information about the desired destination. Its output is the object at the desired location. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figures/Capability-Model-Capability_new.pdf} \caption{Overview of core concepts and relations of the \emph{Capability} aspect of the capability and skill model for heterogeneous autonomous robots.} \label{fig:capability} \end{figure*} Furthermore, two additional ODPs according to DIN 8580 and VDI 2860 are used in the capability model of \cite{Kocher1}, which define subclasses for the otherwise rather abstract capability. However, these two standards are manufacturing-specific and not relevant to the field of HAuRs. Therefore, these ODPs are not considered further. Nevertheless, the approach of giving more meaning to the abstract capabilities is followed, and therefore another ODP is introduced here. However, in the field of HAuRs there is no standard as in manufacturing in which, for example, manufacturing processes are described. Therefore, a separate ODP with suitable capabilities had to be created. Fig.~\ref{fig:capability} gives an overview of the capability aspect for HAuRs. \emph{Capability} is modeled by \emph{Process Operator}, so the two are equivalent. A \emph{Capability} is provided by a \emph{Technical Resource} via \emph{hasCapability}. The two introduced IEEE standards are also applied here, since \emph{Interaction}, \emph{Communication} as well as \emph{Environment} are considered. The \emph{Process} definition considers only a few capability types, such as \emph{Motion} and \emph{Communication}, so more specific capability types, such as grasping an object, are represented via the additional ODP, here called \emph{AuR-Cap}. The specific capability types are subclasses of \emph{Process Operator}, so that a capability can be assigned to a specific type. To connect all these ODPs, some alignments have been made, such as the subclassification of \emph{Process} from \cite{VDI3682} under \emph{Process} from \cite{IEEE1872}. \begin{align} VDI3682\!:\!Process \sqsubseteq SUMO\!:\!Process \end{align} In \cite{IEEE1872.2}, a separation of functions from the execution of functions is also addressed. Therefore, \emph{Process Operator} is aligned with the corresponding \emph{Function} class. A \emph{Robot} is a subclass of \emph{Technical Resource}, so a \emph{Capability} can be assigned to a \emph{Robot}. Accordingly, the connections of cross-domain ODPs are established.
\begin{align} ProcessOperator \equiv Function \\ Robot \sqsubseteq TechnicalResource \end{align} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figures/Capability-Model-Skill_new.pdf} \caption{Overview of core concepts and relations of the \emph{Skill} aspect of the capability and skill model for heterogeneous autonomous robots.} \label{fig:skill} \end{figure*} \subsection{Properties} \label{subsec:props} The formal description of properties was introduced previously in the capability model for manufacturing. In the capability model, the concepts defined in IEC 61360 \cite{IEC61360} are used to describe properties \cite{Kocher2}. The concepts of \cite{IEC61360} offer an abstract possibility to describe different properties formally. Each object can have properties assigned to it. Each property, such as height, is uniquely defined once in the model via a so-called \emph{TypeDescription}. With a \emph{TypeDescription}, information such as the definition or the unit of measure is fixed. Objects sharing properties of the same \emph{TypeDescription} all need to have a \emph{DataElement} related to this \emph{TypeDescription}. Specific occurrences of a property are modeled using a so-called \emph{InstanceDescription}, which is also linked to the corresponding \emph{DataElement}. An \emph{InstanceDescription} can be used to specify an actual value, a requirement, or an assurance about a value or range of values. Multiple \emph{InstanceDescriptions} can be attached to a single \emph{DataElement}. This machine-readable description of properties is also applicable to HAuRs. It makes it possible to formulate constraints on the capabilities that certain robots provide. An example of an assurance: a drone has the ability to fly up to an altitude of 100 meters above ground. \subsection{Skills} \label{subsec:skills} The fourth aspect is devoted to skills and comprises the description of interfaces that can be used to execute them automatically. A central part of this aspect in \cite{Kocher1} is the state machine standardized in ISA 88 \cite{ISA88}. It is used to describe the states and transitions of a skill. Skills are described on this basis in order to track the current state and trigger permissible transitions. This creates a technology-agnostic interface to interact with skills \cite{Kocher1}. However, in order to be callable, transitions must be mapped to execution technologies. In \cite{Kocher1}, this is achieved with two other ODPs for (a) RESTful web services described using the Web Application Description Language (WADL) and (b) OPC UA. The WADL ODP enables the description of web services in order to be able to access them via a given URL using HTTP methods. The OPC UA ODP can be used to describe an OPC UA server and its objects with methods and variables according to the skills, their transitions and parameters. These ODPs can also be used for HAuRs. In \cite{IEEE1872.2}, executable functions are also separated from functions. \emph{Function Execution} is modeled to be equivalent to the class \emph{Skill}. \begin{align} Skill \equiv FunctionExecution \end{align} Another standard that is applied in the field of HAuRs is MQTT\footnote{https://mqtt.org/}. MQTT is a standard messaging protocol for the Internet of Things (IoT), which is also well suited for robot-to-robot communication. MQTT uses a publish/subscribe pattern, which differs from classic client-server architectures in that a third party, the broker, is interposed. Publishers are clients that send messages and subscribers are clients that receive messages. Publishers and subscribers are decoupled from each other by the broker and do not know each other. The broker filters incoming messages and distributes them to the clients. The broker manages so-called topics to which clients can publish or subscribe \cite{Soni.2017}. Accordingly, an \emph{MQTT Client} is a \emph{Skill} that listens on certain topics that can be reached via the broker. \begin{align} MQTTClient \equiv MQTTSkill \sqsubseteq Skill \end{align} A \emph{Transition} of the \emph{StateMachine} is triggered by publishing a message to the corresponding \emph{MQTTTopic}, e.g. to start the skill.
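As a minimal sketch of such an MQTT-based skill interface (assuming the Python paho-mqtt package with its 1.x API; the broker address, topic layout and payloads are hypothetical), a skill client could listen for transition messages as follows.

\begin{verbatim}
import paho.mqtt.client as mqtt

# Hypothetical topic layout for one skill's state machine
TOPIC_CMD = "robots/drone1/skills/transport/transitions"
TOPIC_STATE = "robots/drone1/skills/transport/state"

def on_message(client, userdata, msg):
    # A "start" message triggers the corresponding transition;
    # the new ISA 88 state is reported on the state topic.
    if msg.payload.decode() == "start":
        client.publish(TOPIC_STATE, "Execute")

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.example.org", 1883)
client.subscribe(TOPIC_CMD)
client.loop_forever()
\end{verbatim}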
Basically, the description is done in a similar way as for WADL or OPC UA. Fig.~\ref{fig:skill} provides an overview of the core concepts. A \emph{Skill} represents the executability of a \emph{Capability} via \emph{ExecutableVia} and has different technological variants as subclasses, thus connecting these skill ODPs to the ontology. A \emph{Skill} has a current \emph{State} and a \emph{StateMachine} from which a particular \emph{Transition} is executable by a particular method specific to the execution technology. \subsection{Suitability of Ontologies} \label{subsec:ontologies} According to \cite{Studer.1998}, an ontology is defined as a ``formal, explicit specification of a shared conceptualization''. Ontologies thus specify a \textit{common} understanding of a certain domain. The specification of concepts of a domain is \textit{formal}, and thus machine-readable, and \textit{explicit}, i.e., an unambiguous description of the concepts and their relations \cite{Guarino.2009}. The explicit specification additionally enables communication between multiple participants of a system \cite{IEEE1872.2}. Ontologies can be combined by one ontology importing other ontologies and using new relations to link the individual ontologies. Thereby, the individual ontologies can be managed separately \cite{Kocher1}. In the requirements, such a modeling language was demanded to enable reusability and extensibility. Ontologies are furthermore reusable because they represent knowledge about a domain independent of a particular task \cite{Bayat.2016}. Another important reason why ontologies are very well suited for modeling functions is \textit{reasoning}, which automatically infers implicit knowledge from explicitly modeled knowledge, enabling, for example, planning. Functions can be matched in order to automatically compose functions from other functions and in turn fulfill required functions \cite{Malakuti.2018}. \subsection{Function Modeling for Autonomous Robots} \label{subsec:RW_modelsForRobots} One approach to modeling robots is KnowRob\footnote{http://knowrob.org}, which is designed for autonomous service robots. The approach has been presented in several publications. First introduced in \cite{Tenorth.2009}, it provides a knowledge representation and processing framework for use in robot control. The approach has subsequently been improved and extended. It aims to pass more abstract instructions to robots, which generate, on this basis, the detailed information necessary for execution \cite{Tenorth.2017}. KnowRob uses the DOLCE+DnS Ultralite (DUL) upper ontology and augments it with some basic concepts for robotics, such as capabilities and coarse types of robots. The description remains rather abstract. The knowledge representation is not based on any standard.
Functions and their executions have been separated, but no description of executions is given. Furthermore, neither communication nor interaction of robots is described semantically. The concept of a semantic map of the environment is addressed to some extent. The project ROSETTA aims at ``RObot control for Skilled ExecuTion of Tasks in natural interaction with humans; based on autonomy, cumulative knowledge and learning'' \cite{Stenmark.2015}. The focus is on industrial robots. A number of ontologies for robot functions had been developed before the current ontology, ROSETTA, emerged. The ontology uses the CORA ontology of the IEEE Standard 1872-2015 \cite{IEEE1872}. The focus is on the description of devices and abstract functions, but not on the description of interfaces to call functions. The description of functions is limited to the description of function types. The use of a basic structure of functions is not included. The environment, interaction and communication are not considered at all. The application of a standard in \cite{Stenmark.2015} shows the availability of standards for modeling autonomous robots that should be considered. The IEEE Standard 1872-2015 \cite{IEEE1872} presents a standard for ontologies that define key terms in robotics and automation. The ontologies developed there are connected to the Suggested Upper Merged Ontology (SUMO) \cite{Niles.2001}. SUMO defines very general terms like objects or processes. The Core Ontology for Robotics and Automation (CORA) is the center of the developed ontologies and defines the three concepts robot, robot group and robot system. In addition, CORAX is used to define concepts such as design or physical environment, and POS is used to define concepts related to position and orientation. Fig.~\ref{fig:ieee1872} provides an overview of the individual components of the standard. While \cite{IEEE1872} provides a suitable foundation for developing function modeling, it is too general to be used on its own in complex applications. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/IEEE1872-2015.pdf} \caption{Overview of individual components of the IEEE Standard 1872-2015. CORA is the main component and is linked to the upper ontology SUMO. Likewise, the extensions CORAX, POS and RPARTS are linked to SUMO \cite{Fiorini.2015}.} \label{fig:ieee1872} \end{figure} Therefore, efforts are being made in subgroups such as the Autonomous Robotics (AuR) group to make the ontologies of \cite{IEEE1872} applicable. The AuR group has published the IEEE Standard 1872.2-2021 \cite{IEEE1872.2}, which focuses on HAuRs and is very relevant to the work of this paper. In \cite{IEEE1872.2}, a separation is made between the different modalities of air, water and land. Two upper ontologies, SUMO and DUL, are used here. Environment, interaction and communication are also considered in \cite{IEEE1872.2}. In particular, in \cite{IEEE1872.2} a clear distinction is made between a behavior of a robot and a function to be performed in order to achieve this behavior. The standard \cite{IEEE1872.2} is very promising; however, the ontology is not publicly available and only parts are presented in the published standard itself. Furthermore, this approach is still too abstract to be applied in practice. Neither functions nor their executions are modeled in detail using existing standards in the field of HAuRs, so they cannot be used directly.
Due to this lack of availability, it is also not entirely clear to what extent the structure of HAuRs in terms of actuators and sensors is considered. Nevertheless, the ontology presented in \cite{IEEE1872.2} serves as a sound foundation for developing applicable function modeling. To summarize, in the field of HAuRs there are only a few, insufficient approaches to function modeling. However, the topic of function modeling has been considered for quite some time in the field of manufacturing with robots. Therefore, capability and skill models in manufacturing are examined in the following subsection. \subsection{Capability and Skill Models in Manufacturing} \label{subsec:RW_modelsInManufacturing} In the domain of manufacturing, a wide variety of approaches have been published focusing on a specification and encapsulation of machine functions. These approaches mostly aim at making functions available to superordinate systems, such as production planning tools, by using a machine-readable description, so that manufacturing systems can be adapted in an easy manner and manufacturing processes can be planned and executed on this basis. Over time, various terms have been used for this purpose, e.g., \emph{service}, \emph{task}, \emph{capability} or \emph{skill}. In recent years, more and more approaches have been published using the terms \emph{capabilities} and \emph{skills}. While these two terms have often been used synonymously in the past, a distinction is increasingly being made. Capabilities are seen as a specification of machine functions, and skills are seen as their executable counterpart, i.e., an implementation of a machine function together with a description of an execution interface \cite{FKM+_CapabilitiesandSkillsin_26.04.2022}. Nevertheless, most research still focuses on one of these two aspects. Approaches that focus on capabilities typically develop semantic models using ontologies to formally describe machine functions and relevant information such as inputs and outputs. Examples of this group of approaches are \cite{AmDu_AnUpperOntologyfor_2006} and \cite{JSL_FormalResourceandCapability_2016}. The authors of \cite{AmDu_AnUpperOntologyfor_2006} present the Manufacturing Service Description Language, an upper ontology that contains core concepts and relations to describe manufacturing capabilities as manufacturing functions provided by a manufacturer. Ref.~\cite{JSL_FormalResourceandCapability_2016} presents a formal capability model in the form of an ontology that is suited for use in the adaptation of plants. Capabilities are described by their name and parameters. Combined capabilities can be composed of multiple simple ones via so-called capability associations. The presented ontology provides the basis for a matching of provided capabilities with required capabilities. However, it contains no information needed for the execution of skills. In \cite{WBS+_AnOntologybasedMetamodelfor_9820209112020}, a rather abstract capability meta model is presented, which defines generic terms and may thus be used as an upper / domain ontology. The meta model is based on existing ones such as SSN\footnote{https://www.w3.org/TR/vocab-ssn/} and on ontologies built on standards such as VDI 2860. Furthermore, patterns for reusing and extending this ontology are shown.
\begin{figure*}[t] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\hsize]{figures/Capability-ODPs-AK.pdf} \caption{Formal capability and skill model for manufacturing applications \cite{Kocher1}.} \label{fig:capability-model-ak} \end{subfigure} \tikz[baseline=-\baselineskip]\draw[ultra thick,->] (0,0) -- ++ (1,0); \begin{subfigure}{0.45\textwidth} \includegraphics[width=\hsize]{figures/Capability-AuR-ODPs.pdf} \caption{Adapted capability and skill model for heterogeneous autonomous robots.} \label{fig:capability-model-aur} \end{subfigure} \caption{Adaptation of the capability and skill model presented in \cite{Kocher1} to a model suitable for heterogeneous autonomous robots.} \label{fig:capability-model-vergleich} \end{figure*} Approaches focusing on skills typically use meta models such as AutomationML or OPC UA. In \cite{BaRe_Digitaldescriptionofproducts_2017}, the authors present an approach to skill-based programming of assembly systems. A skill is defined to be a vendor-neutral description of a machine's functionality that can be invoked by commands. Skills can carry out processes which are in turn needed to assemble a product. The mapping between skills and processes has to be done manually by experts. AutomationML is used to create a model that contains products, processes and skills. Another approach that focuses on skills is presented in \cite{Profanter.2021}. The approach presents a skill model that is intended to enable a hardware-independent Plug \& Produce system in industrial assembly. For this purpose, OPC UA is used as a standardized skill interface and the skills are described semantically via ontologies, so that Plug \& Produce, composition of skills, as well as skill execution are possible via a Manufacturing Execution System that uses a knowledge base. In \cite{BBM+_KnowledgeforIntelligentIndustrial_2012}, a knowledge-based skill description is also presented. The approach is situated in the domain of work cells. Skills are thereby implemented by a state machine. In the presented ontology, the three aspects product, process and resource are separated from each other, but are connected by skills. A description of the interface for the execution of skills is only partially present in the ontology. The model presented in \cite{Kocher1} is the first one to contain both capability and skill descriptions in one joint ontology. This ontology consists of multiple so-called ontology design patterns (ODPs), which are all based on industry standards and have been developed and tested in several projects. The ontology shown in \cite{Kocher1} can be divided into three aspects: the \emph{structure} aspect is used as the basis for capabilities and skills and to capture existing components of machines. The \emph{capability} aspect is formed by an ontology for formalized process descriptions according to \cite{VDI3682}. Taxonomies of manufacturing and handling operations are further used to type capabilities. The aspect of \emph{skills} is centered around a state machine according to \cite{ISA88}. Transitions of this state machine (e.g. starting a skill) may be invoked using either web services or OPC UA. Later publications, e.g., \cite{Kocher2}, extend the model by properties according to \cite{IEC61360}, which enables formal modeling of properties and a clear assignment of properties to resources. In \cite{Kocher2}, a method to automatically generate model instances from PLC code is also added.
\subsection{Conclusion on related work} \label{subsec:conclusion} Based on the approaches presented in Section~\ref{subsec:RW_modelsForRobots} and Section~\ref{subsec:RW_modelsInManufacturing}, it can be seen that, on the one hand, only a few insufficient approaches to function modeling exist in the field of HAuRs. On the other hand, there is a variety of approaches in the area of manufacturing. These approaches use ontologies, which are also proposed in \cite{Bayat.2016} for modeling HAuRs. In \cite{Kocher1}, multiple ODPs are created based on standards that can be developed and maintained independently, making the approach of \cite{Kocher1} a good candidate with respect to the requirements of reusability and extensibility. Furthermore, the approach separates capabilities from skills. But as this approach is specifically tailored toward manufacturing, it needs adaptations and extensions to be applicable in the field of HAuRs. Thus, in the following section, extensions to all model aspects of \cite{Kocher1} are presented to derive a model usable for HAuRs.
{ "timestamp": "2022-09-23T02:12:14", "yymm": "2209", "arxiv_id": "2209.10900", "language": "en", "url": "https://arxiv.org/abs/2209.10900" }
\section{Introduction} Covid-19 vaccines were developed in response to the widespread devastation caused by the Covid-19 pandemic, which started in December 2019 and has been on the rise ever since. These vaccines have elicited a mixed response among the general public, and the public's reactions help us understand how these vaccines have affected people emotionally. Social media is the primary platform for finding solutions to health-related queries connected to the Covid-19 pandemic [2]. One of the best ways to determine public opinion is to survey Twitter, which has millions of users every day and receives lots of tweets on vaccines from all over the world [35]. Most of the researchers who have used machine learning to analyze Covid-19 vaccine-related tweets have adopted unsupervised techniques to determine the sentiments of tweets [21]. A majority of researchers use the rule-based Valence Aware Dictionary and sEntiment Reasoner (VADER) [15], which assigns a sentiment score to each tweet in an unsupervised manner [21, 23, 39]. Other unsupervised techniques used for Twitter sentiment analysis include AFINN [28] and TextBlob [27]. However, unsupervised techniques depend on pre-defined rules meant for general sentiment analysis that may not work well for text data related to the ongoing pandemic. Supervised learning methodologies overcome this drawback by learning the text patterns that distinguish between positive, negative and neutral sentiments [45, 32, 43]. Supervised learning for text classification is achieved in the present times using transformers, which are now popularly replacing Long Short-Term Memory (LSTM) [26] and Convolutional Neural Network (CNN) [11] architectures in various Natural Language Processing (NLP) tasks [13, 33]. Transformer models can be used for sequence modeling to predict the next word in a sentence, and are usually trained on a large corpus such as Wikipedia or the BooksCorpus [41, 9]. These pre-trained models are generalized, and are usually fine-tuned for downstream tasks such as classification and text generation.\par One of the challenges faced in supervised machine learning is the small size of the dataset, which leads to less accurate models. Such datasets are called small sample datasets (SSDs) [16]. The situation is complicated when the class distribution is imbalanced, with the majority class samples outnumbering the minority class samples [35]. An SSD provides fewer examples for the model to identify patterns from, and therefore results in a less accurate model. An imbalanced dataset can also be detrimental to the model, as it biases the predictions towards the majority class and therefore leads to wrong predictions. This paper aims to explore solutions to the class imbalance problem associated with the sentiment categories of Covid-19 related tweets when the size of the dataset is small. We explore the viability of text-based oversampling as a possible solution. Synthetic tweets are generated from the classes having a lower population in order to balance the populations of all classes while increasing the number of training samples at the same time. We explore different pre-trained transformer models for supervised learning from an imbalanced, small sample dataset containing tweets on Covid-19 vaccine-related discussions, and compare their performance with that of domain-specific transformer models for the classification task.
Specifically, we investigate the performance of the state-of-the-art transformer models RoBERTa, BERT and XLNet as compared to the domain-specific pre-trained transformer models CT-BERT and BERTweet for sentiment analysis of Covid-19 vaccine-related tweets. CT-BERT and BERTweet are pre-trained transformer models obtained after intensive training on English tweets related to the Covid-19 pandemic. On the other hand, RoBERTa, XLNet and BERT provide input embeddings for sentences written in English, and are used for generalized natural language processing. The CT-BERT and BERTweet models have the advantage of being familiar with text patterns emanating from Covid-19 related discussions, such as “Covid positive”, which may not be understood well by the RoBERTa, BERT and XLNet models. \par The organization of this paper is as follows. Section 2 describes some related work on text oversampling of Covid-19 related text datasets and also reviews the LMOTE algorithm used for text oversampling in the current work, Section 3 outlines several state-of-the-art pre-trained transformer models including domain-specific transformer models, Section 4 presents the methodology, Section 5 contains a detailed analysis of the results, and Section 6 concludes the paper. \section{Preliminaries of text oversampling} \subsection{Text oversampling of Covid-19 related text} The Small Sample Size (SSS) problem refers to the availability of a small number of training samples in high-dimensional datasets [25]; this leads to inadequate training, rendering supervised learning a challenging task. This paper uses a small subset of annotated Covid-19 tweets to observe the effects of using a smaller text dataset for fine-tuning pre-trained transformer models, which are the state of the art for implementing various NLP tasks. The situation is complicated when the class distribution is uneven and the majority samples outnumber the minority samples [35]. In the literature, there are several examples of text oversampling being applied to Covid-19 related social media posts due to the apparent scarcity of text samples belonging to some of the minority classes. We discuss some of these works next. In [22], Liu et al. showed that oversampling the term frequency--inverse document frequency (TF-IDF) features using the Synthetic Minority Oversampling Technique (SMOTE) [10] improved the results of Covid-19 vaccine hesitancy prediction. A Support Vector Machine (SVM) was used for the classification. SMOTE was also used in [4] to oversample word embeddings for sentiment analysis of Arabic tweets related to Covid-19 conspiracy theories. A recent work [36] investigated ensemble models for the classification of Covid-19 infodemic tweets oversampled using SMOTE. Mohsen et al. [31] recommended text oversampling using SMOTE Edited Nearest Neighbor (SMOTEENN) for the sentiment analysis of Arabic tweets related to Covid-19 quarantine. Random oversampling was performed in [5] for detecting Covid-19 misinformation on Twitter.
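To illustrate the kind of pipeline used in [22], the following minimal Python sketch (assuming scikit-learn and imbalanced-learn; the toy tweets and labels are placeholders) oversamples TF-IDF features with SMOTE before training an SVM. Only the training split is oversampled, so no synthetic points leak into the test set.

\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE

# Placeholder data: an imbalanced three-class toy corpus
texts = (["the vaccine gave me peace of mind"] * 150
         + ["worried about side effects of the vaccine"] * 30
         + ["vaccination centre opens tomorrow"] * 220)
labels = [1] * 150 + [2] * 30 + [0] * 220

X = TfidfVectorizer(max_features=5000).fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=0)

# Oversample the minority classes in the training split only
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = SVC().fit(X_res, y_res)
print(clf.score(X_te, y_te))
\end{verbatim}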
\subsection{A review of the LMOTE algorithm for text oversampling} Language Model-based Oversampling Technique (LMOTE), proposed by Leekha et al. in 2020 [20], is a language-modeling-based approach for generating synthetic data points to tackle the problem of class imbalance in natural language processing tasks. Previous synthetic data point generation approaches for tackling class imbalance, such as SMOTE and its variants [42], do not allow for a proper qualitative analysis of the generated synthetic data points, since the synthetic samples are generated in a Euclidean feature space. This makes it difficult to concretely judge the semantic and contextual validity of the generated synthetic data points. Unlike SMOTE and its variants, LMOTE works directly on textual data, and the synthetic data points generated by LMOTE allow for a more concrete and intuitive balancing of the dataset. \par In our current work on Covid-19 vaccine sentiment analysis, there are three classes of sentiments: positive, negative and neutral. The neutral tweets are large in number, since many people tweet generic information about vaccines without expressing any sentiment. Hence, neutral is the majority class in our problem. The algorithm for text oversampling of the tweets belonging to the minority classes (positive and negative tweets in our case) using LMOTE is given below. \begin{figure}[!ht] \centering \includegraphics[scale=0.9]{algorithm.jpg} \end{figure} \section{Pre-trained transformer models for domain-specific tasks} Transformers, introduced by Vaswani et al. in 2017 [44], rely on the concept of self-attention, which involves the computation of intra-attention between positions in the input sequence. Transformers are rapidly replacing LSTMs and CNNs in encoder-decoder models that incorporate an attention mechanism between the encoder and decoder [7, 21]. Google's BERT [12] and XLNet [46] are transformer models used for learning representations for various NLP tasks, with XLNet outperforming BERT on several tasks that involve learning from long sequences [3, 46]. BERT is a transformer-based model based on masked language modeling. BERT and its advanced versions, such as RoBERTa [24] and ALBERT [17], are trained on the English Wikipedia and the BooksCorpus. Pre-trained BERT models generate word embeddings that can be used for text understanding and classification. BERT can also be fine-tuned to adapt to specific tasks. XLNet is an autoregressive (AR) language model which uses permutation language modeling during the pre-training phase. Though it is similar in architecture to BERT, it differs in its pre-training objective, owing to which it surpasses BERT on various NLP tasks. XLNet is pre-trained using only a subset of output tokens as a target. Like BERT, pre-trained XLNet models can also be used for many other downstream tasks, while also raising the limits on sequence length. In a recent work, the XLNet transformer was used successfully for sentiment analysis of unlabeled Covid-19 tweets by transfer learning [8]. The XLNet transformer was pre-trained on the US Airlines tweets dataset, which was unconnected with Covid-19. Several language-specific models have been developed, such as CamemBERT for French [29] and GottBERT for German [40]. These language-specific models give better results than the multilingual BERT model. The generalized transformer models can be further trained on downstream tasks to create separate models for different tasks and different languages. We discuss some of these domain-specific models here. Each of them is trained on a specialized corpus relevant to the topic at hand, and is more effective in that domain. \begin{enumerate} \item \textbf{SciBERT} (biomedical and computer science literature corpus): a BERT-based language model for performing scientific tasks.
It was introduced by Beltagy et al. in 2019 [9]. 2. FinBERT (financial services corpus): This is a pre-trained NLP model proposed by Araci in 2019 [6] for analyzing the sentiment of financial statements, and is trained using a large financial corpus. 3. BioBERT (biomedical literature corpus): This NLP model, pre-trained on biomedical corpora, outperformed BERT and various state-of-the-art models in a variety of biomedical text mining tasks. It was introduced by Lee et al. in 2019 [19]. 4. ClinicalBERT (clinical notes corpus): This model focuses on clinical notes and their representations using bidirectional transformers, and uncovers relationships between medical concepts, as discussed by Huang et al. in 2019 [14]. 5. mBERT (corpora from multiple languages): mBERT is a single BERT model, studied by Pires et al. in 2019, that is trained on 104 different languages [37]. The languages with fewer data were oversampled and those with a surplus of data were undersampled to balance the corpus. 6. patentBERT (patent corpus): The patentBERT model is a fine-tuned pre-trained BERT model for patent classification proposed by Lee et al. in 2019 [18]. The fine-tuning was done using over 2 million patents and a CNN with word embeddings. 7. RoBERTa (optimized pre-training approach for BERT): RoBERTa is a robustly optimized pre-trained model based on BERT [24]. It is implemented in PyTorch and modifies key hyperparameters of BERT, including the removal of BERT's next-sentence pre-training objective, and is trained with much larger mini-batches and learning rates. This helps it achieve better downstream performance while retaining BERT's masked language modeling approach. 8. COVID-Twitter-BERT or CT-BERT (Covid-19 tweets): This is a domain-specific transformer-based model pre-trained on 160 million Twitter messages specifically related to Covid-19 [30]. The aim is to understand the content of social media posts related to the Covid-19 pandemic. Müller et al. proposed this model in 2020 [30] and applied it to five different classification tasks. The model gave an improvement over BERT on Covid-19 datasets but needed more pre-training to achieve similar performance on out-of-domain content. 9. BERTweet (English tweets): This model is a large-scale pre-trained language model for English tweets and has the same architecture as the base BERT model [34]. Experiments show that this model outperforms the strong baselines RoBERTa-base and XLM-R-base on various NLP tasks. BERTweet is the first public large-scale model pre-trained on English tweets. It is trained using the RoBERTa pre-training procedure on a corpus of 850 million English tweets, comprising 845 million tweets streamed from January 2012 to August 2019 and 5 million tweets related to the COVID-19 pandemic. \section{Implementation details} In our work, we test the suitability of text data augmentation of Covid-19 vaccine-related tweets for fine-tuning pre-trained transformer models. Data augmentation of the minority class is a popular remedy for class imbalance [38]. Due to the significant class imbalance among positive, negative and neutral tweets, we oversample only the positive and negative tweets (the minority classes), concatenating the synthetic data points to the original dataset to generate a more balanced dataset. We adapt the LMOTE model for augmenting the data that is given as input to the pre-trained transformer models; an illustrative sketch of this oversampling step is given below.
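The authoritative description of LMOTE is the algorithm reproduced in Section 2.2. Purely as an illustration of the general idea, the sketch below generates synthetic minority-class tweets by masking random words and letting a masked language model fill them back in; the model name \texttt{bert-base-uncased}, the helper names and the number of swaps are our own assumptions, not part of the original LMOTE procedure.
\begin{verbatim}
import random
from transformers import pipeline

# Fill-mask pipeline built on a generic pre-trained masked language model.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def lmote_like_sample(tweet, n_swaps=2):
    """Create one synthetic tweet by masking random words and keeping
    the language model's top prediction for each masked position."""
    words = tweet.split()
    for _ in range(n_swaps):
        i = random.randrange(len(words))
        masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token]
                          + words[i + 1:])
        words[i] = fill_mask(masked)[0]["token_str"]
    return " ".join(words)

def oversample(minority_tweets, target_size):
    """Append synthetic samples until the class reaches target_size."""
    synthetic = []
    while len(minority_tweets) + len(synthetic) < target_size:
        synthetic.append(lmote_like_sample(random.choice(minority_tweets)))
    return minority_tweets + synthetic
\end{verbatim}
In our pipeline, the synthetic tweets produced by the actual LMOTE algorithm play the role of the output of \texttt{oversample} here: they are concatenated to the original minority-class tweets before fine-tuning.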
The dataset used is a small subset of the larger set of Covid-19 tweets presented by Gabriel Preda [1], which is annotated for positive, negative, and neutral sentiments by FullMoonDataScience. The dataset contains 6000 tweets, with 3680 tweets belonging to the neutral class, 1900 tweets to the positive class, and 420 tweets to the negative class. The dataset is thus both small in size and highly imbalanced. The data format is shown in Table 1. \begin{table}[!ht] \centering \caption{Data format} \label{tabl.1} \begin{tabular}{|l|l|} \hline \textbf{Column} & \textbf{Type} \\\hline tweetID & integer \\\hline label & 1, 2, 3 \\\hline text & string \\\hline \end{tabular} \end{table} The input text sequences from the Covid-19 tweets are tokenized. We use the Hugging Face library in Python for the implementation of the transformer models. The text in the tweets is pre-processed by removing hashtags, links, emails, punctuation and extra spaces using regular expressions; a minimal code sketch of these steps is given at the end of this section. The sequential pre-processing steps are shown for an example tweet in Table 2. In addition to the steps shown, tabs and extra spaces are also removed. \begin{table}[!ht] \centering \caption{Sequential pre-processing steps for an example tweet} \label{tabl.2} \begin{tabular}{|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{10cm}|} \hline \textbf{Pre-processing step} & \textbf{Output} \\\hline Original tweet & "More \#GoodNews from @bopanc \&amp; @DovLieber! \#PfizerBioNTech's \#COVID \#vaccine is highly effective after just 1 dose \&amp; can be stored in ordinary freezers for up to 2 weeks, according to new data https://t.co/QZWwi00rIU via @WSJ https://t.co/7TMIPCkkBa"\\\hline Removing @ mentions & "More \#GoodNews from \&amp; ! \#PfizerBioNTech's \#COVID \#vaccine is highly effective after just 1 dose \&amp; can be stored in ordinary freezers for up to 2 weeks, according to new data https://t.co/QZWwi00rIU via https://t.co/7TMIPCkkBa"\\\hline Removing hashtags & "More from \&amp; ! 's is highly effective after just 1 dose \&amp; can be stored in ordinary freezers for up to 2 weeks, according to new data https://t.co/QZWwi00rIU via https://t.co/7TMIPCkkBa"\\\hline Removing websites & "More from \&amp; ! 's is highly effective after just 1 dose \&amp; can be stored in ordinary freezers for up to 2 weeks, according to new data via "\\\hline Removing every punctuation mark except - ! ? . & "More from amp ! 's is highly effective after just 1 dose amp can be stored in ordinary freezers for up to 2 weeks according to new data via "\\\hline Removing numbers & "More from amp ! 's is highly effective after just dose amp can be stored in ordinary freezers for up to weeks according to new data via "\\\hline \end{tabular} \end{table} A three-fold cross-validation is performed for all the models in our experimentation: RoBERTa, XLNet and BERT, and the domain-specific CT-BERT and BERTweet. The cross-entropy loss function and the Adam optimizer are used for training the models, with a learning rate of 2e-5 and five epochs. Hyperparameter settings are chosen as per the guidelines in the original papers. Fig. 1 shows the process flow pipeline for the training and testing phases. \begin{figure}[!ht] \centering \includegraphics[scale=0.9]{process.jpg} \caption{Process flow} \end{figure} The tokenized text is subjected to text oversampling, where the minority-class positive and negative tweets are oversampled using LMOTE such that the populations of all three classes (positive, negative and neutral) are balanced.
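For concreteness, the regex-based cleaning of Table 2 can be sketched as follows; the exact regular expressions and their ordering in our implementation may differ slightly.
\begin{verbatim}
import re

def clean_tweet(text):
    """Apply the sequential pre-processing steps of Table 2 to one tweet."""
    text = re.sub(r"@\w+", "", text)                   # remove @ mentions
    text = re.sub(r"#\w+", "", text)                   # remove hashtags
    text = re.sub(r"\S+@\S+", "", text)                # remove e-mail addresses
    text = re.sub(r"https?://\S+|www\.\S+", "", text)  # remove websites/links
    text = re.sub(r"[^\w\s'!?.-]", " ", text)          # keep only - ! ? . and '
    text = re.sub(r"\d+", "", text)                    # remove numbers
    return re.sub(r"\s+", " ", text).strip()           # tabs and extra spaces
\end{verbatim}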
Some of the generated tweets are shown in Table 3 for reference. Neutral tweets were not oversampled, since they constitute the majority class. Negative tweets were heavily augmented with synthetic samples to match the population of the neutral class. \begin{table}[!ht] \centering \caption{Some instances of tweets generated by text oversampling using LMOTE} \label{tabl.3} \begin{tabular}{|>{\centering\arraybackslash}m{7cm}|>{\centering\arraybackslash}m{1.75cm}|} \hline \textbf{text generated} & \textbf{label} \\\hline reporting cases of new cases in toronto closed business church no family no hope canada get out of syria effective and tech & positive \\\hline the commission has secured million additional doses of vaccine bringing the total number of doses secured to billion europeans will have had the & positive \\\hline home no family no business no church lent no hope until jab unproven vaccine order russia get out of syria that we have no idea of the & negative \\\hline company trial participants have died had received amp s s now not not worry to show for any but i know that experienced after the first dose of my & negative \\\hline \end{tabular} \end{table} The augmented and balanced dataset is used for fine-tuning the pre-trained models RoBERTa\footnote{https://github.com/facebookresearch/fairseq/blob/main/examples/roberta/README.md}, BERT\footnote{https://github.com/google-research/bert} and XLNet\footnote{https://github.com/zihangdai/xlnet}, and the domain-specific pre-trained models BERTweet\footnote{https://github.com/VinAIResearch/BERTweet} and CT-BERT\footnote{https://github.com/digitalepidemiologylab/covid-twitter-bert}. \section{Results} Our experiments were performed in Python version 3.8 on a 2.8 GHz Intel Core PC. We have made our source code available online\footnote{https://github.com/Ace117MC/transformer-models-covid} for research purposes. The Covid-19 tweets were pre-processed and tokenized as per the procedure outlined in Section 4. In order to explore the effects of text oversampling on our small sample dataset, we performed the experiment twice, once with text oversampling and once without. The text data was used to fine-tune the pre-trained models RoBERTa, BERT and XLNet, and the domain-specific pre-trained models CT-BERT and BERTweet. The transformer models were trained using three-fold cross-validation on three different 80-20 splits of the dataset, and the performance metrics were averaged over the three runs. \subsection{Results without text oversampling} The performance metrics (test accuracy, F1-score and Matthews correlation coefficient (MCC)) are summarized in Table 4 for the five transformer models (RoBERTa, BERT, XLNet, CT-BERT and BERTweet) in the absence of text oversampling. The corresponding receiver operating characteristic (ROC) curves, with area under the curve (AUC) readings, are plotted for the three sentiment classes in Fig. 2 (a, b, c).
\begin{figure}[!ht] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[scale=0.2]{2a.jpg} \caption{Negative class} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[scale=0.2]{2b.jpg} \caption{Neutral class} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[scale=0.2]{2c.jpg} \caption{Positive class} \end{subfigure} \caption{ROC curves and AUC values for different models (w/o text oversampling)} \end{figure} \begin{table}[!ht] \centering \caption{Test accuracy, F1-score and Matthews correlation coefficient (MCC) for the five transformer models (w/o text oversampling)} \label{tabl.4} \begin{tabular}{|l|c|c|c|} \hline \textbf{Model} & \textbf{Accuracy} & \textbf{F1-score} & \textbf{MCC} \\\hline BERT & 75.06\% & 0.63 & 0.51 \\\hline XLNet & 75.64\% & 0.67 & 0.53 \\\hline RoBERTa & 76.44\% & 0.67 & 0.55 \\\hline CT-BERT & 77.70\% & 0.70 & 0.57 \\\hline BERTweet & 77.25\% & 0.70 & 0.56\\\hline \end{tabular} \end{table} \begin{table}[!ht] \centering \caption{Class-wise performance analysis of the transformer models for positive, negative and neutral sentiments (w/o text oversampling)} \label{tabl.5} \begin{tabular}{|l|c|c|c|c|c|} \hline \textbf{Model} & \textbf{Category} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{MCC} \\\hline BERT & neutral & 0.79 & 0.83 & 0.81 & 0.49 \\\hline BERT & positive & 0.72 & 0.69 & 0.70 & 0.57 \\\hline BERT & negative & 0.44 & 0.34 & 0.38 & 0.34 \\\hline XLNet & neutral & 0.82 & 0.80 & 0.81 & 0.51 \\\hline XLNet & positive & 0.70 & 0.74 & 0.72 & 0.58 \\\hline XLNet & negative & 0.50 & 0.48 & 0.49 & 0.45 \\\hline RoBERTa & neutral & 0.82 & 0.81 & 0.81 & 0.53 \\\hline RoBERTa & positive & 0.71 & 0.75 & 0.73 & 0.60 \\\hline RoBERTa & negative & 0.51 & 0.45 & 0.48 & 0.44 \\\hline CT-BERT & neutral & 0.81 & 0.84 & 0.82 & 0.55 \\\hline CT-BERT & positive & 0.75 & 0.70 & 0.72 & 0.61 \\\hline CT-BERT & negative & 0.58 & 0.54 & 0.56 & 0.53 \\\hline BERTweet & neutral & 0.81 & 0.82 & 0.81 & 0.53 \\\hline BERTweet & positive & 0.73 & 0.73 & 0.73 & 0.60 \\\hline BERTweet & negative & 0.56 & 0.52 & 0.54 & 0.51 \\\hline \end{tabular} \end{table} It is observed from Table 4 that the domain-specific CT-BERT and BERTweet models show a better performance than the RoBERTa, BERT and XLNet models in terms of test accuracy, F1-score and MCC. RoBERTa is observed to be better than BERT and XLNet, while BERTweet proved to be almost as good as CT-BERT and better than RoBERTa. The reason for the better performance of the CT-BERT and BERTweet models is that they are specifically trained on Covid-19 tweets, and are hence familiar with the text patterns of pandemic-related discussions. The detailed class-wise precision, recall, F1-score and MCC readings are presented in Table 5 for the five transformer models. There are three sentiment classes: neutral, which is the majority class, and positive and negative, which are the minority classes, with the negative tweets being very few in number. As expected, the results are biased, with the neutral and positive classes performing better than the negative class, as observed from both Table 5 (F1-score, MCC) and Fig. 2 (AUC). Being very few in number, the negative tweets are frequently misclassified, as is evident from the poor performance scores of the negative class. CT-BERT is the best performer among the five models in terms of test accuracy, F1-score and MCC, followed by BERTweet, RoBERTa and XLNet.
The accuracies of CT-BERT and BERTweet for the minority class (negative sentiment) are found to be significantly higher than those of the other models in Table 5, verifying that domain-specific transformer models mitigate the effect of the class imbalance to a certain extent, even in the absence of text augmentation. \subsection{Results with text oversampling} We next demonstrate the effects of text oversampling (of the positive and negative sentiment classes) using LMOTE, to investigate the suitability of text augmentation prior to the training phase. The test accuracy, F1-score and MCC values are summarized in Table 6 for the five transformer models (RoBERTa, BERT, XLNet, CT-BERT and BERTweet) when text oversampling is performed using LMOTE. \begin{table}[!ht] \centering \caption{Test accuracy, F1-score and Matthews correlation coefficient (MCC) for the five transformer models on the dataset augmented using LMOTE} \label{tabl.6} \begin{tabular}{|l|c|c|c|} \hline \textbf{Model} & \textbf{Accuracy} & \textbf{F1-score} & \textbf{MCC} \\\hline BERT & 74.25\% & 0.62 & 0.49 \\\hline XLNet & 75.80\% & 0.65 & 0.53 \\\hline RoBERTa & 76.69\% & 0.67 & 0.55 \\\hline CT-BERT & 76.14\% & 0.67 & 0.52\\\hline BERTweet & 77.78\% & 0.68 & 0.56 \\\hline \end{tabular} \end{table} A scrutiny of the results in Table 6 reveals a slight dip in the performance scores after text oversampling as compared to the readings in Table 4 (w/o text oversampling). This indicates that text oversampling of the minority classes of a small sample dataset does not improve the classification accuracy. For the augmented dataset, BERTweet performed best, followed by RoBERTa and CT-BERT. \par We also compare the performance of LMOTE with SMOTE; the performance scores for the five transformer models when performing text oversampling using SMOTE are compiled in Table 7. On comparing the scores of SMOTE in Table 7 with the results of LMOTE in Table 6, we note that the performance of LMOTE is significantly higher than that of SMOTE. This shows that text generation by language modeling is a better option for augmenting text corpora than the resampling strategies prevalent in data mining. \begin{table}[!ht] \centering \caption{Test accuracy, F1-score and Matthews correlation coefficient (MCC) for the five transformer models on the dataset augmented using SMOTE} \label{tabl.7} \begin{tabular}{|l|c|c|c|} \hline \textbf{Model} & \textbf{Accuracy} & \textbf{F1-score} & \textbf{MCC} \\\hline BERT & 61.43\% & 0.54 & 0.37 \\\hline XLNet & 56.00\% & 0.51 & 0.36 \\\hline RoBERTa & 57.97\% & 0.54 & 0.38 \\\hline CT-BERT & 69.58\% & 0.63 & 0.47\\\hline BERTweet & 69.02\% & 0.60 & 0.45 \\\hline \end{tabular} \end{table} \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[scale=0.2]{3a.jpg} \caption{Negative class} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[scale=0.2]{3b.jpg} \caption{Neutral class} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[scale=0.2]{3c.jpg} \caption{Positive class} \end{subfigure} \caption{ROC curves and AUC values for different models (with text oversampling)} \end{figure} The detailed class-wise accuracies are presented in Table 8 for all five transformer models in the case of text augmentation using LMOTE. The corresponding ROC curves (with AUC readings) are plotted for the three sentiment classes in Fig. 3 (a, b, c).
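As an aside, the reported metrics and the per-class ROC curves of Figs. 2 and 3 can be reproduced from a model's predictions with scikit-learn; the sketch below uses random placeholder labels and class probabilities, which would be replaced by the true labels and the softmax outputs of a fine-tuned model.
\begin{verbatim}
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, roc_curve, auc)
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=1200)         # placeholder labels
y_prob = rng.dirichlet(np.ones(3), size=1200)  # placeholder probabilities
y_pred = y_prob.argmax(axis=1)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred, average="macro"))
print("MCC:", matthews_corrcoef(y_true, y_pred))

# One-vs-rest ROC curve and AUC for each sentiment class.
y_bin = label_binarize(y_true, classes=[0, 1, 2])
for k, name in enumerate(["negative", "neutral", "positive"]):
    fpr, tpr, _ = roc_curve(y_bin[:, k], y_prob[:, k])
    print(name, "AUC:", auc(fpr, tpr))
\end{verbatim}
Note that the F1-score is macro-averaged in this sketch; the averaging used for the tables may differ.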
\begin{table}[!ht] \centering \caption{Class-wise performance analysis of the transformer models for positive, negative and neutral sentiments (for the dataset augmented using LMOTE)} \label{tabl.8} \begin{tabular}{|l|c|c|c|c|c|} \hline \textbf{Model} & \textbf{Category} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{MCC}\\\hline BERT & neutral & 0.78 & 0.83 & 0.80 & 0.47 \\\hline BERT & positive & 0.72 & 0.67 & 0.69 & 0.56 \\\hline BERT & negative & 0.45 & 0.34 & 0.39 & 0.35 \\\hline XLNet & neutral & 0.80 & 0.82 & 0.81 & 0.51\\\hline XLNet & positive & 0.72 & 0.73 & 0.72 & 0.59 \\\hline XLNet & negative & 0.48 & 0.38 & 0.42 & 0.39 \\\hline RoBERTa & neutral & 0.81 & 0.82 & 0.81 & 0.53 \\\hline RoBERTa & positive & 0.73 & 0.74 & 0.73 & 0.60 \\\hline RoBERTa & negative & 0.52 & 0.43 & 0.47 & 0.43 \\\hline CT-BERT & neutral & 0.81 & 0.83 & 0.82 & 0.49 \\\hline CT-BERT & positive & 0.73 & 0.71 & 0.72 & 0.56 \\\hline CT-BERT & negative & 0.55 & 0.47 & 0.51 & 0.46 \\\hline BERTweet & neutral & 0.81 & 0.84 & 0.82 & 0.54 \\\hline BERTweet & positive & 0.75 & 0.73 & 0.74 & 0.62 \\\hline BERTweet & negative & 0.54 & 0.41 & 0.47 & 0.43 \\\hline \end{tabular} \end{table} Both Table 8 and Fig. 3 indicate a decrease in the performance scores of the neutral (majority) class after text oversampling, as compared to the results on the original dataset in Table 5 and Fig. 2. The positive-class accuracies remain more or less the same, with a slight increase noted for some models. However, the negative-class scores have distinctly increased for all models, which can be attributed to the text oversampling. \subsection{Discussion} In our work, we investigate the utility of domain-specific pre-trained transformer models and text oversampling for the sentiment analysis of Covid-19 vaccine related tweets from an imbalanced small sample dataset. As observed from the performance scores in Tables 4-8, the domain-specific pre-trained transformer models CT-BERT and BERTweet significantly outperform the pre-trained transformer models RoBERTa, BERT and XLNet. An instance of a Covid-19 tweet that is classified correctly by the domain-specific CT-BERT and BERTweet, but incorrectly by all other transformer models, is shown in Table 9, along with another instance of a tweet which is misclassified by all the models, including CT-BERT and BERTweet. The latter tweet is a mixture of positive and negative news, though the human annotation labelled it as negative; both CT-BERT and BERTweet labeled the tweet as positive due to the phrase ``raised no safety concerns''. \begin{table}[!ht] \centering \caption{Examples of tweet classification by the pre-trained transformer models} \label{tabl.9} \begin{tabular} {|>{\centering\arraybackslash}m{2.75cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1.75cm}|} \hline \textbf{Tweet} & \textbf{ground truth} &\textbf{BERT} & \textbf{XLNet} &\textbf{RoBERTa} &\textbf{CT-BERT} &\textbf{BERTweet} \\\hline i will not be taking amp j or any other vaccine.
it's clear now that governments have no idea of the safety profile short medium long term of these experimental vaccines & negative & neutral & neutral & neutral & negative & negative \\\hline there were six deaths during the late stage trials but the fda says this raised no safety concerns & negative & neutral & neutral & neutral & positive & positive \\\hline \end{tabular} \end{table} The following observations were made from the performance scores of the five transformer models before and after text oversampling by LMOTE (augmentation of the minority classes only). 1. The domain-specific Covid-Twitter-BERT (CT-BERT) model performed significantly better than the pre-trained models RoBERTa, XLNet and BERT on the original dataset (Table 4), since it was pre-trained on a Twitter dataset consisting of only Covid-19 tweets. The CT-BERT results also outperformed the other domain-specific model, BERTweet, on the original non-augmented dataset. 2. BERTweet, which has BERT as the base model and is trained on English tweets, gives consistently good results, and outperformed CT-BERT on the augmented dataset. The results of CT-BERT and BERTweet were overall better than those of RoBERTa, XLNet and BERT, which is expected since they both are trained on domain-specific information (tweets related to the Covid-19 pandemic). 3. Both CT-BERT and BERTweet performed well for the minority class containing negative sentiments even without text augmentation (Table 5), indicating that pre-training with domain-specific information helps to mitigate the effects of an imbalanced class distribution. 4. The RoBERTa model proved to be a better fit for the task at hand than BERT and XLNet, as observed from the accuracy, F1-score and MCC scores in Tables 4 and 6. 5. Training the models using synthetically generated textual data yielded worse results for the neutral class and a marginal increase for the positive class, while the scores of the negative class significantly increased, as observed from Tables 5 and 8. 6. In our case, LMOTE was used to augment and balance the dataset so that training could be performed equally for all classes; however, we conclude that LMOTE works poorly in multi-class settings, often degrading performance for the majority class. 7. Thus, text oversampling is not an advisable choice in the case of an imbalanced small sample multi-class dataset, since it can downgrade the precision and/or recall for the majority class, as observed from the drop in the performance scores of the neutral class in Table 8. Though oversampling does improve results in data mining, the results of oversampling Covid-19 tweets using LMOTE and SMOTE (Tables 6 and 7) were not encouraging, since the synthetically generated text for the low-population negative class was not of high quality and degraded the performance of the neutral (majority) class. \section{Conclusion} In this paper, we explore the effectiveness of domain-specific pre-trained transformer models and text oversampling for learning from small sample datasets with an imbalanced class distribution. We consider the specific task of sentiment analysis of Covid-19 vaccine-related tweets. The majority class is the neutral sentiment, while the positive and negative sentiments form the minority classes. The performance scores of the negative sentiment class are the lowest, owing to the small number of training samples in this class.
In this scenario, the domain-specific pre-trained transformer models CT-BERT and BERTweet outperform the RoBERTa, BERT and XLNet transformers, which are state-of-the-art pre-trained models popularly used for text classification tasks. Thus we conclude that domain-specific transformer models are able to mitigate the class imbalance to a certain extent. Text oversampling of the minority-class Covid-19 tweets was found to deteriorate the overall performance, with BERTweet performing better than the other models on the augmented dataset. Hence, synthetic tweet generation via text oversampling of the minority classes is not advisable for imbalanced small sample text datasets. In future work, we propose to adapt domain-specific transformer models for the classification of Covid-19 related documents in digital repositories. Since both the CT-BERT and BERTweet models are based on the BERT transformer model, we would also like to explore the pre-training of other transformer versions, such as XLNet, using Covid-19 related tweets. \section{References} [1] URL https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets. Last accessed on 11th Feb 2022. [2] Adeyemi I., Esan A.: Covid-19-Related Health Information Needs and Seeking Behavior among Lagos State Inhabitants of Nigeria. In: International Journal of Information Science and Management (IJISM), vol. 20(1), 2022. [3] Adoma A., Henry N., Chen W.: Comparative analyses of BERT, RoBERTa, DistilBERT, and XLNet for text-based emotion recognition. In: 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), p. 117–121. IEEE, 2020. [4] Al-Hashedi A., Al-Fuhaidi B., Mohsen A.M., Ali Y., Gamal Al-Kaf H.A., Al-Sorori W., Maqtary N.: Ensemble classifiers for Arabic sentiment analysis of social network (Twitter data) towards COVID-19-related conspiracy theories. In: Applied Computational Intelligence and Soft Computing, vol. 2022, 2022. [5] Alenezi M.N., Alqenaei Z.M.: Machine learning in detecting COVID-19 misinformation on Twitter. In: Future Internet, vol. 13(10), p. 244, 2021. [6] Araci D.: FinBERT: Financial sentiment analysis with pre-trained language models, 2019. ArXiv preprint arXiv:1908.10063. [7] Bahdanau D., Cho K., Bengio Y.: Neural machine translation by jointly learning to align and translate. In: 3rd International Conference on Learning Representations, ICLR. 2015. [8] Bansal A., Susan S., Choudhry A., Sharma A.: Covid-19 Vaccine Sentiment Analysis During Second Wave in India by Transfer Learning Using XLNet. In: International Conference on Pattern Recognition and Artificial Intelligence, pp. 443–454. Springer, 2022. [9] Beltagy I., Lo K., Cohan A.: SciBERT: A Pretrained Language Model for Scientific Text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), p. 3615–3620. 2019. [10] Chawla N.V., Bowyer K.W., Hall L.O., Kegelmeyer W.P.: SMOTE: synthetic minority over-sampling technique. In: Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002. [11] Dastgheib M., Koleini S., Rasti F.: The application of deep learning in Persian documents sentiment analysis. In: International Journal of Information Science and Management (IJISM), vol. 18(1), p. 1–15, 2020. [12] Devlin J., Chang M., Lee K., Toutanova K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, p. 4171–4186. 2019. [13] Goel R., Susan S., Vashisht S., Dhanda A.: Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation. In: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), p. 1–6. IEEE, 2021. [14] Huang K., Altosaar J., Ranganath R.: ClinicalBERT: Modeling clinical notes and predicting hospital readmission, 2019. ArXiv preprint arXiv:1904.05342. [15] Hutto C., Gilbert E.: VADER: A parsimonious rule-based model for sentiment analysis of social media text. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 8(1), p. 216–225, 2014. [16] Kou G., Yang P., Peng Y., Xiao F., Chen Y., Alsaadi F.: Evaluation of feature selection methods for text classification with small datasets using multiple criteria decision-making methods. In: Applied Soft Computing, vol. 86, p. 105836, 2020. [17] Lan Z., Chen M., Goodman S., Gimpel K., Sharma P., Soricut R.: ALBERT: A lite BERT for self-supervised learning of language representations, 2019. ArXiv preprint arXiv:1909.11942. [18] Lee J., Hsiang J.: PatentBERT: Patent classification with fine-tuning a pre-trained BERT model, 2019. ArXiv preprint arXiv:1906.02124. [19] Lee J., Yoon W., Kim S., Kim D., Kim S., So C., Kang J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. In: Bioinformatics, vol. 36(4), p. 1234–1240, 2020. [20] Leekha M., Goswami M., Jain M.: A multi-task approach to open domain suggestion mining using language model for text over-sampling. In: European Conference on Information Retrieval, p. 223–229. Springer, Cham, 2020. [21] Liew T., Lee C.: Examining the Utility of Social Media in COVID-19 Vaccination: Unsupervised Learning of 672,133 Twitter Posts. In: JMIR Public Health and Surveillance, vol. 7(11), p. 29789, 2021. [22] Liu J., Lu S., Lu C.: Exploring and Monitoring the Reasons for Hesitation with COVID-19 Vaccine Based on Social-Platform Text and Classification Algorithms. In: Healthcare, vol. 9, p. 1353. MDPI, 2021. [23] Liu S., Liu J.: Public attitudes toward COVID-19 vaccines on English-language Twitter: A sentiment analysis. In: Vaccine, vol. 39(39), p. 5499–5505, 2021. [24] Liu Y., Ott M., Goyal N., Du J., Joshi M., Chen D., Stoyanov V.: RoBERTa: A robustly optimized BERT pretraining approach, 2019. ArXiv preprint arXiv:1907.11692. [25] Lu J., Plataniotis K., Venetsanopoulos A.: Regularization studies of linear discriminant analysis in small sample size scenarios with application to face recognition. In: Pattern Recognition Letters, vol. 26(2), p. 181–191, 2005. [26] Mallick R., Susan S., Agrawal V., Garg R., Rawal P.: Context- and sequence-aware convolutional recurrent encoder for neural machine translation. In: Proceedings of the 36th Annual ACM Symposium on Applied Computing, p. 853–856. 2021. [27] Manguri K., Ramadhan R., Amin P.: Twitter sentiment analysis on worldwide COVID-19 outbreaks. In: Kurdistan Journal of Applied Research, p. 54–65, 2020. [28] Marcec R., Likic R.: Using Twitter for sentiment analysis towards AstraZeneca/Oxford, Pfizer/BioNTech and Moderna COVID-19 vaccines. In: Postgraduate Medical Journal, 2021. [29] Martin L., Muller B., Suárez P., Dupont Y., Romary L., De La Clergerie V., Sagot B.: CamemBERT: a Tasty French Language Model.
In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, p. 7203–7219. 2020. [30] Müller M., Salathé M., Kummervold P.: COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter, 2020. ArXiv preprint arXiv:2005.07503. [31] Mohsen A., Ali Y., Al-Sorori W., Maqtary N.A., Al-Fuhaidi B., Altabeeb A.M.: A performance comparison of machine learning classifiers for Covid-19 Arabic Quarantine tweets sentiment analysis. In: 2021 1st International Conference on Emerging Smart Technologies and Applications (eSmarTA), pp. 1–8. IEEE, 2021. [32] Naseem U., Razzak I., Khushi M., Eklund P., Kim J.: COVIDSenti: A large-scale benchmark Twitter data set for COVID-19 sentiment analysis. In: IEEE Transactions on Computational Social Systems, vol. 8(4), p. 1003–1015, 2021. [33] Naseem U., Razzak I., Musial K., Imran M.: Transformer based deep intelligent contextual embedding for twitter sentiment analysis. In: Future Generation Computer Systems, vol. 113, p. 58–69, 2020. [34] Nguyen D., Vu T., Nguyen A.: BERTweet: A pre-trained language model for English Tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, p. 9–14. 2020. [35] Nowak S., Chen C., Parker A., Gidengil C., Matthews L.: Comparing covariation among vaccine hesitancy and broader beliefs within Twitter and survey data. In: PLoS ONE, vol. 15(10), p. e0239826, 2020. [36] Olaleye T., Abayomi-Alli A., Adesemowo K., Arogundade O.T., Misra S., Kose U.: SCLAVOEM: hyper parameter optimization approach to predictive modelling of COVID-19 infodemic tweets using smote and classifier vote ensemble. In: Soft Computing, pp. 1–20, 2022. [37] Pires T., Schlinger E., Garrette D.: How Multilingual is Multilingual BERT? In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, p. 4996–5001. 2019. [38] Saini M., Susan S.: Data augmentation of minority class with transfer learning for classification of imbalanced breast cancer dataset using inception-V3. In: Iberian Conference on Pattern Recognition and Image Analysis, p. 409–420. Springer, Cham, 2019. [39] Sattar N., Arifuzzaman S.: COVID-19 Vaccination awareness and aftermath: Public sentiment analysis on Twitter data and vaccinated population prediction in the USA. In: Applied Sciences, vol. 11(13), p. 6128, 2021. [40] Scheible R., Thomczyk F., Tippmann P., Jaravine V., Boeker M.: GottBERT: a pure German language model, 2020. ArXiv preprint arXiv:2012.02110. [41] Somasundaran S.: Two-level transformer and auxiliary coherence modeling for improved text segmentation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34(05), p. 7797–7804, 2020. [42] Susan S., Kumar A.: The balancing trick: Optimized sampling of imbalanced datasets—A brief survey of the recent State of the Art. In: Engineering Reports, vol. 3(4), p. 12298, 2021. [43] Vashishtha S., Susan S.: Inferring sentiments from supervised classification of text and speech cues using fuzzy rules. In: Procedia Computer Science, vol. 167, p. 1370–1379, 2020. [44] Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A., Polosukhin I.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30, 2017. [45] Wang T., Lu K., Chow K., Zhu Q.: COVID-19 Sensing: Negative sentiment analysis on social media in China via BERT Model. In: IEEE Access, vol. 8, p. 138162–138169, 2020.
[46] Yang Z., Dai Z., Yang Y., Carbonell J., Salakhutdinov R., Le Q.: XLNet: Generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems, vol. 32, 2019. \end{document}
{ "timestamp": "2022-09-23T02:14:17", "yymm": "2209", "arxiv_id": "2209.10966", "language": "en", "url": "https://arxiv.org/abs/2209.10966" }
\section{\label{sec:intro}Introduction} Observing gravitational waves (GWs) with heterodyne interferometric detection cannot be done without cancelling the overwhelming noise stemming from the stochastic frequency fluctuations of current-technology lasers. For space-based interferometers like LISA, this crucial operation is performed on the ground, once the data is downloaded from the satellites to Earth. The standard and most fully developed method to achieve this cancellation is time-delay interferometry (TDI)~\cite{tinto_cancellation_1999, tinto_time-delay_2014}, a post-processing technique which performs adequate combinations of delayed phase measurements, some of them synthetically reproducing the path of a photon in a classic Michelson interferometer, to nearly nullify laser noise. To prepare the data analysis of the Laser Interferometer Space Antenna (LISA)~\cite{danzmann_lisa_2017}, significant efforts have been made to assess the performance and underlying characteristics of TDI (see \cite{tinto_time-delay_2014} and references therein), including studying its interplay with anti-aliasing filters~\cite{bayle_effect_2019}, measurement units~\cite{bayle_adapting_2021}, clock jitters~\cite{hartwig_clock-jitter_2020} and clock synchronization~\cite{hartwig_time_2022}, new noise-cancelling combinations~\cite{muratore_revisitation_2020} and the construction of null channels~\cite{muratore2022, muratore2022b}. A new approach to the laser frequency noise problem has gained interest in the last two years, which formulates how the noise enters the phase measurements with a design matrix, and interprets TDI as the solution to a linear algebra problem. Romano and Woan~\cite{romano_principal_2006} took a first step in that direction (further explored in \cite{leighton_principal_2016}) by showing that we can derive TDI variables from the eigenvectors of the laser noise covariance matrix, using a simple toy model in the time domain. More recently, we formalized this idea in the frequency domain, an approach that we named principal component interferometry (PCI), for which we provided first evidence of its suitability for parameter inference~\cite{baghi_statistical_2021}. Alternatively, the authors of Ref.~\cite{vallisneri_tdi-infinity_2020} defined the TDI combinations from the null space of the design matrix itself. Another group~\cite{tinto_matrix_2021} demonstrated the equivalence between this technique and the algebraic definition of TDI as a ring over the space of polynomials in delay operators, bridging the matrix-based approaches with earlier mathematical studies of TDI~\cite{vinet_algebraic_2002, nayak_algebraic_2004}. In a recent work~\cite{baghi_model-independent_2021} (that we refer to as Paper I in the following), we proposed to further benefit from the power of the matrix representation by directly analysing the interferometric measurements without assuming any prior knowledge of their correlations, except that they must extend further than a minimal time set by the problem timescale. The method, called aPCI for ``automated PCI'', first forms a data matrix from replicas of the phase measurements shifted by an integer number of samples backward or forward in time. Then, performing the matrix's principal component analysis (PCA) yields an array of components ordered by their variance, where the lowest-variance components are almost free from laser noise. This handful of variables is sensitive to GWs, and can be used for source detection and characterisation.
The process can be understood as a multivariate version of singular spectrum analysis (SSA), a technique broadly used in signal processing (see, e.g., \cite{Golyandina2013}). We proved this concept in the case of constant (but unequal) interferometric links, demonstrating that aPCI's sensitivity is virtually the same as first-generation TDI's, i.e., TDI combinations tailored for fixed armlengths. In this work, we present an upgrade of the aPCI method suitable for time-varying links, making it applicable to realistic space-borne measurements like the phase-meter and auxiliary system telemetry that will be delivered by the satellites in LISA's time-evolving constellation. This new version is very similar to the time-independent aPCI, except that additional columns are appended to the data matrix to account for polynomial time variations. In Section~\ref{sec:theory}, we recall aPCI's theoretical foundations and present its time-varying extension to arbitrary order in time. Then, in Section~\ref{sec:laser_noise_mitigation} we apply the first-order version of the method to numerical simulations of LISA's interferometer data featuring a flexing constellation and show that the method successfully cancels laser frequency noise. In Section~\ref{sec:sensitivity}, we compute the first-order aPCI response to both instrumental noise and GW signals, which allows us to derive its sensitivity and compare it to standard TDI. Finally, in Section~\ref{sec:discussion} we discuss the implications of our findings and outline further developments needed to strengthen aPCI's robustness for real-world data analysis. \section{\label{sec:theory}Theoretical framework} Space-based gravitational-wave detectors will send several interferometer outputs to the ground, which can be expressed in relative frequency deviations. As in Ref.~\cite{bayle_adapting_2021}, we denote the corresponding measurements as $y_{ij}(t)$, where $i$ is the index of the satellite hosting the optical bench, and $j$ is the index of the far spacecraft. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.6cm, 1cm, 1.1cm, 0.3cm}, clip]{./constellation.pdf} \caption{Schematic of the LISA constellation. The blue disks represent the three spacecraft; $y_{ij}$ indicates the interferometric measurement at the optical bench $ij$, hosted by spacecraft $i$ and receiving the laser beam from spacecraft $j$. The arrows correspond to the direction of the beams received by spacecraft $i$ and sent by spacecraft $j$, undergoing a time delay $D_{ij}$.} \label{fig:constellation} \end{figure} \subsection{\label{sec:tdi_filter}TDI as a filter} Classical TDI algorithms usually operate in two steps. First, they compute delayed versions of the discrete signals, interpolated at specific times depending on the light travel time delays along the constellation arms. In a second step, they combine these interpolated time series in such a way that the laser noise terms vanish. Hence, each TDI channel $A_\alpha$ is produced by some linear combination of various delays \begin{equation} A_\alpha(t) = \sum_{k=1}^{6}{c_{k\alpha}(t)\mathcal{D}[d_{k\alpha}(t)] y_{k}(t)}, \end{equation} where we have introduced a delay operator $\mathcal{D}[d]$ which realizes a delay of the $y$ data stream by a time $d$. For conciseness, we write $y_{k}(t) = y_{i_k j_k}(t)$ for the 6 interferometer measurements at optical bench $i_k$ of the beam coming from spacecraft $j_k$.
For each channel $A_\alpha$, the model provides some set of coefficients $\{c_{k\alpha}\}$ and a corresponding set of delays $\{d_{k\alpha}(t)\}$. A delay $d_{k\alpha}(t)$ includes the application of one or multiple light travel times $D_{ij}$ from spacecraft $j$ to spacecraft $i$ (see Fig.~\ref{fig:constellation}), so that $\mathcal{D}[d_{k\alpha}(t)] = \prod_{p} \mathcal{D}[{D}_{i_{kp} j_{kp}}(t)]$. In general, as the constellation evolves, these delays slowly vary with time. In practice, we do not have continuous data, but $N$ data points sampled at times $\{t_i\}$ with a sampling period~$\tau_s$. We thus realize the delay operator by a fractional delay filter of some finite half-width~$n_{h}$. In discrete form we write \begin{eqnarray} A_{i\alpha}&=&\sum_{l=-n_{h}}^{n_{h}}\sum_{k=1}^{6}{c_{k\alpha}(t_i)f_{lk\alpha}(t_i) \left[D_l y_{k}\right](t_i)} \label{eq:TDI_data_matrix} \end{eqnarray} where the $f_{lk\alpha}(t)$ are the fractional delay filter coefficients for the specified delay $d_{k\alpha}(t)$, and $D_l y_{k}$ is the data channel $k$ shifted by an integer number of samples $l$. To relate these combinations to the PCA formalism, we define the data matrix combining all shifted versions of the measurements $\bm{Y}$: \begin{eqnarray} \label{eq:x_matrix} \bm{X} \equiv \left({D}_{-{n_h}}\bm{Y},\, \hdots, \, {D}_{+{n_h}}\bm{Y}\right), \end{eqnarray} where the $N \times M$ matrix $\bm{Y}$ gathers the $M$ measurements recorded at $N$ time samples. In what follows, we assume that it includes $M = 6$ measurements as \begin{align} \label{eq:y_vector} \bm{Y} \equiv \left(\bm{y}_{12}, \, \bm{y}_{23}, \, \bm{y}_{31}, \, \bm{y}_{13}, \, \bm{y}_{21}, \, \bm{y}_{32} \right). \end{align} We also define the 6-row vectors $\bm{g}_{l\alpha}(t)$ with entries $g_{kl\alpha}(t) \equiv c_{k\alpha}(t)f_{lk\alpha}(t)$. For convenience, we gather them into a single-column vector as \begin{equation} \bm{g}_{\alpha}(t) = \left(\bm{g}_{-{n_{h}}\alpha}(t) \, \hdots \, \bm{g}_{{+n_{h}}\alpha}(t)\right)^{T}. \end{equation} The size of matrix $\bm{X}$ is $N \times 6p$, where $p = 2n_h+1$ is the number of integer shifts, hence $\bm{g}_{\alpha}(t)$ has size $6p$. With these definitions, Eq.~\eqref{eq:TDI_data_matrix} becomes \begin{eqnarray} A_{i\alpha}&=& \bm{X}_{i} \bm{g}_{\alpha}(t_i), \label{eq:TDI_data_matrix_2} \end{eqnarray} where we label by $\bm{X}_{i}$ the $\rm i^{th}$ row of matrix $\bm{X}$. The aPCI treatment of Paper I introduced a data-driven approach to deriving specific alternatives to the $A_{i\alpha}$ by exploring more general linear combinations of the $X_{ij}$ which appear in Eq.~\eqref{eq:TDI_data_matrix_2}, seeking those combinations which minimize the sample variance and thus cancel the dominant noise. Importantly, in that approach the linear combinations arising from the usual TDI channels (of any TDI generation, in general) spanned a subspace of the broader space explored by the PCA treatment \emph{as long as} the coefficients $\bm{g}_{l\alpha}$ were effectively time-independent. For LISA, or similar instruments, the TDI coefficients do however slowly vary in time as the constellation flexes and evolves. This evolution limits the length of the data matrix to which the analysis can be effectively applied, and thereby limits significantly the quality of the result.
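To make the construction of the data matrix in Eq.~\eqref{eq:x_matrix} concrete, a minimal numpy sketch (with our own, illustrative function name and zero-padding convention at the data edges) could read:
\begin{verbatim}
import numpy as np

def build_shifted_matrix(Y, n_h):
    """Stack copies of Y shifted by -n_h..+n_h samples, as in Eq. for X.

    Y has shape (N, M); the result has shape (N, M * (2 * n_h + 1)).
    Samples shifted past the edges are zero-padded here for simplicity;
    in practice one would discard the affected edge rows.
    """
    N, M = Y.shape
    blocks = []
    for l in range(-n_h, n_h + 1):
        shifted = np.zeros_like(Y)
        if l >= 0:
            shifted[l:] = Y[:N - l]      # delay by l samples
        else:
            shifted[:N + l] = Y[-l:]     # advance by -l samples
        blocks.append(shifted)
    return np.hstack(blocks)
\end{verbatim}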
The issue is superficially similar to the hierarchy of the various TDI ``generations'', with first-generation TDI applying to a frozen constellation with unequal arms, and second-generation TDI accounting for the leading-order temporal variation of the constellation. Our issue is distinct though. Any generation of TDI filters fits instantaneously into the form of the aPCI data matrix; the trouble here is that, at any order, those filters may be slowly time-dependent. The natural resolution is to allow similarly time-dependent linear combinations of the $X_{ij}$ in the analysis. To see how to do this, consider sufficiently short segments where these evolving component values can be approximated linearly in time. We can Taylor expand the coefficients, writing $\bm{g}_{l\alpha}(t_i)\approx\bm{g}^{(0)}_{l\alpha}+(t_i-t_0)\bm{g}^{(1)}_{l\alpha} + \hdots$. Applying this in Eq.~\eqref{eq:TDI_data_matrix}, we can then approximate the TDI construction as \begin{eqnarray} A_{i\alpha}&=& \sum_{q=0}^{m} (t_i-t_0)^q \, \bm{X}_{i} \bm{g}^{(q)}_{\alpha}\nonumber\\ &=& \bm{Z}_{i}^{(m)} \bm{G}_{\alpha}^{(m)}, \label{eq:TDI_data_matrix_Taylor} \end{eqnarray} where we set $\bm G^{(m)}_{\alpha} \equiv \left({\bm{g}^{(0)\dag}_{\alpha}},\, {\bm{g}^{(1)\dag}_{\alpha}},\, \hdots , \, {\bm{g}^{(m)\dag}_{\alpha}} \right)^{\dag}$ and the new data matrix of order $m$ whose rows are given by \begin{equation} \label{eq:z_matrix} \bm{Z}_{i}^{(m)} = \left(\bm{X}_{i}, \, (t_i-t_0)\bm{X}_{i}, \, \hdots, \, (t_i-t_0)^{m}\bm{X}_{i} \right). \end{equation} With this convention, we have $\bm{Z}^{(0)} = \bm{X}$. The representation now has the same form as in Eq.~\eqref{eq:TDI_data_matrix}, but with time-independent coefficients, and with a slightly more complicated data matrix $\bm Z^{(m)}$ which includes a copy of $\bm X$ together with, essentially, $t^{q}\times\bm X$ for $q$ running from 1 to $m$. Then we can proceed with the PCA analysis as in Paper I, but with a data matrix that is now enlarged by a factor of $m+1$ on its short side. Although Eq.~\eqref{eq:z_matrix} describes an arbitrary Taylor expansion, we will be working with first order in time ($m=1$) in the following. \subsection{\label{sec:pca_time}Principal component analysis of the data matrix} TDI decompositions are derived from the knowledge of the design matrix, i.e., the way laser noise sources enter the interferometric data, including the exact delays and the time variations they undergo. In other words, TDI fully specifies the matrix of filter coefficients $\bm{G}_{\alpha}$ introduced in Eq.~\eqref{eq:TDI_data_matrix_Taylor}. In the aPCI approach, the noise-cancelling decomposition is derived from the data. To do so, we proceed as in Paper I with $\bm Z^{(m)}$ instead of $\bm X$, so that the higher-order term (or terms) in time are now present in the data matrix. Then, we compute its singular value decomposition (SVD): \begin{eqnarray} \label{eq:z_svd} \bm{Z}^{(m)} = \bm{U}^{(m)} \bm{S}^{(m)} \bm{V}^{(m)\dag}, \end{eqnarray} where $\bm{U}^{(m)}$ and $\bm{V}^{(m)}$ are unitary matrices whose columns are basis vectors we will refer to as singular vectors, and $\bm{S}^{(m)}$ is an $N \times 6p(m+1)$ rectangular diagonal matrix whose elements are positive real numbers called singular values. We obtain the principal components (PCs) of $\bm{Z}^{(m)}$ by applying the transformation \begin{equation} \label{eq:e_matrix} \bm{E}^{(m)} = \bm{Z}^{(m)} \bm{V}^{(m)}. \end{equation} The columns of $\bm{E}^{(m)}$ form the aPCI combinations, and are ordered from the lowest to the largest singular value associated with them.
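As a toy illustration of Eqs.~\eqref{eq:z_matrix}--\eqref{eq:e_matrix}, and continuing the numerical sketch above (the sizes and the reference time are placeholders, not the values used in our analysis):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n_h, tau_s = 4096, 8, 0.25          # toy sizes; see Sec. III for real ones
Y = rng.standard_normal((N, 6))        # placeholder for the 6 measurements
t = (np.arange(N) - N // 2) * tau_s    # time relative to a reference t0

X = build_shifted_matrix(Y, n_h)       # zeroth-order block (earlier sketch)
Z1 = np.hstack([X, t[:, None] * X])    # Z^(1) = (X, (t - t0) X)

# Thin SVD: Z^(1) = U S V^dag; the principal components are E = Z^(1) V.
U, S, Vh = np.linalg.svd(Z1, full_matrices=False)
order = np.argsort(S)                  # lowest variance first, as in the text
E = (Z1 @ Vh.conj().T)[:, order]
\end{verbatim}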
This ordering of the PCA components is the reverse of the typical convention, since most PCA applications value the information in the high-variance components. Note that this convention was not adopted in the beginning of Paper~I, where the transformed data matrix was labelled $\bm{T}$ instead of $\bm{E}$. Singular values are a measure of the variance carried by each component. Selecting the lowest-variance components, therefore, provides us with combinations where the laser frequency noise is minimal. In the following, $\bm{e}_{j}^{(m)}$ denotes the $j^{\rm th}$ lowest-variance aPCI variable, such that its entries are $e_{j}^{(m)}(n) = E^{(m)}\left(n, j\right)$. \section{\label{sec:laser_noise_mitigation}Laser noise mitigation} To evaluate the performance of the first-order aPCI, we need to i) verify the proper mitigation of laser noise by the combinations $\bm{e}_{j}^{(1)}$ and ii) check that their sensitivity to GWs is comparable to second-generation TDI's, which is the state-of-the-art technique to cancel laser frequency noise terms for a flexing constellation, up to first order in the time delay derivatives. In this section, we assess the level of laser noise cancellation. \subsection{\label{sec:noise_simulation}Data simulation} We use a simulation of the 18 interferometric outputs measured by LISA while in orbit, assuming a non-equal-arm, flexing constellation following Keplerian orbits around the Sun. These outputs include measurements from the science, reference, and test-mass interferometers, as defined, e.g., in Ref.~\cite{bayle_effect_2019}. Furthermore, we assume that the six lasers are independent, which means that laser locking is off. We perform this simulation with Bayle et al.'s \textsc{LISA Instrument} simulator~\cite{lisainstrument}, a Python-based simulator cross-checked against the \textsc{LISANode} simulator~\cite{bayle:tel-03120731}. Only laser noise is present in the simulation, with an amplitude spectral density of 28.2 $\rm Hz \, Hz^{-1/2}$ (we assume it is white in the simulation bandwidth). We set the sampling frequency of the output measurements to 4 Hz, in accordance with the LISA Science Requirements Document~\cite{lisa_simulation_working_group_lisa_2018}. The simulation runs at a cadence four times faster than the output sampling, and anti-aliasing filters are adjusted accordingly. Instead of directly analyzing the 18 interferometer outputs, we condense them into the 6 intermediary variables $\eta_{ij}$ using Staab et al.'s Python-based TDI calculator \textsc{pyTDI}~\cite{pytdi}. This operation amounts to assuming a configuration with only 3 independent running lasers (see, e.g., \cite{otto_time-delay_2015} for a detailed definition). So we take $y_{ij} = \eta_{ij}$ in Eq.~\eqref{eq:y_vector}. Note that the construction of the variables $\eta_{ij}$ does not require precise estimation of the inter-spacecraft delays $D_{ij}$, and allows for a reduction of the problem's dimension. We generate the secondary noises (i.e., non-laser-frequency noises) independently for the 6 intermediary variables and add them to the simulation outputs. While this operation does not realistically reflect how secondary noises propagate through the instrument, it allows us to perfectly control the content of the simulation.
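As an illustration of this last step, a noise time series with a prescribed one-sided PSD can be generated by shaping white noise in the frequency domain. In the following sketch, the toy PSD function is a stand-in for the actual expressions given in Appendix~\ref{sec:secondary_noises}, and the normalization is only approximate at DC and Nyquist.
\begin{verbatim}
import numpy as np

def colored_noise(psd_func, n_samples, fs, rng):
    """Generate a real time series whose one-sided PSD follows psd_func(f)."""
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # Complex white noise shaped so that E|X_k|^2 = N * fs * S(f_k) / 2.
    white = (rng.standard_normal(len(freqs))
             + 1j * rng.standard_normal(len(freqs)))
    spectrum = white * np.sqrt(psd_func(freqs) * fs * n_samples / 4.0)
    spectrum[0] = 0.0                       # remove the DC component
    return np.fft.irfft(spectrum, n=n_samples)

# Toy stand-in for the secondary-noise PSD; NOT the appendix expressions.
toy_psd = lambda f: 1e-24 * (1.0 + (1e-3 / np.maximum(f, 1e-6)) ** 2)

rng = np.random.default_rng(1)
fs, duration = 4.0, 12 * 3600               # 4 Hz sampling, 12 h of data
eta_noise = np.array([colored_noise(toy_psd, int(fs * duration), fs, rng)
                      for _ in range(6)])   # one realization per eta variable
\end{verbatim}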
In this setup, we assume the presence of two secondary noises: a noise due to the residual test-mass (TM) accelerations with respect to the inertial frame, of PSD $S_{\mathrm{a}}(f)$, and noises coming from the residual displacements in the optical metrology system (OMS), including position readouts. Hence, we can write the secondary noise PSD as \begin{equation} \label{eq:secondary_noises_psd} S_{n}(f) = S_{\mathrm{OMS}}(f) + 2 S_{\mathrm{a}}(f), \end{equation} where the factor of 2 comes from the fact that the test-mass noise appears in the $\eta_{ij}$ both through the contribution of link $ij$ via the science interferometer and through link $ji$ via the test-mass interferometer. We provide the analytical expressions for the acceleration and OMS noise PSDs in Appendix~\ref{sec:secondary_noises}. \subsection{\label{sec:noise_mitigation_analysis}Analysis of simulated data} Once the noisy variables $\eta_{ij}$ are generated, we form the data matrix $\bm{Z}^{(m)}$ with $m \in \{0, 1\}$ as defined by Eqs.~\eqref{eq:x_matrix} and \eqref{eq:z_matrix}. We consider 12 hours' worth of data, which is long enough to allow for a significant variation of the armlengths and to probe the lowest frequencies of LISA's bandwidth. We choose a half-width (or stencil size) large enough to encompass the number of time delays applied in second-generation TDI, to which we add a margin corresponding to the order of the Lagrange polynomials typically used in TDI. This translates into $n_h = \lfloor 8 L / (c \tau_s) \rfloor + 32$. With $L = 2.5$ Gm as the average arm length, $\tau_s = 0.25$ s the sampling period and $c$ the speed of light, we get $n_h = 266$. Then, we compute the PCA of $\bm{Z}^{(m)}$ as described in Eq.~\eqref{eq:z_svd} using the \textsc{scikit-learn} Python package~\cite{pedregosa_scikit-learn_2011}, which runs the full SVD with the standard LAPACK solver. This package also features incremental principal component analysis (IPCA), which allows us to split the computation into chunks and optimize the memory usage. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.5cm, 0.3cm, 0.5cm, 0.3cm}, clip]{./variance_vs_element.pdf} \caption{Normalized singular values of the principal components, conveying the amount of variance each component carries. We plot the zeroth-order (black) and first-order (purple) cases. We observe a dynamic range between the highest and the lowest variances that is more pronounced for the first-order decomposition, suggesting a better noise decomposition at first order than at zeroth order.} \label{fig:variances} \end{figure} We plot the normalized singular values (i.e., the diagonal elements of $\bm{S}^{(m)}$ divided by the number of data points $N$) for orders $m = 0$ and $m = 1$ in Fig.~\ref{fig:variances}. The singular values are proportional to the variance of their corresponding PCs, so that the PCs with the lowest variances should be the ones that best reject laser frequency noise. Note that we order them by increasing singular values here, which is why the plot appears flipped with respect to Fig.~1 of Paper~I. The zeroth order (represented in black) corresponds to the case studied in Paper~I, where no time variations are accounted for in the analysis (although they are present in the simulated data). The first-order curve (in purple) includes the extension to linear variations in time, and features a larger difference between the largest and the lowest variances. This indicates a more faithful decomposition of the noise.
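For reference, this chunked decomposition can be sketched with scikit-learn's IPCA as follows, where the data matrix \texttt{Z1} is assumed to be built as in the earlier sketch and the number of chunks is arbitrary.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Z1: first-order aPCI data matrix, built as in the earlier sketch.
# For 12 h at 4 Hz with n_h = 266 it has 2 * 6 * (2 * 266 + 1) = 6396
# columns, so the decomposition is fitted incrementally in row chunks.
ipca = IncrementalPCA()                        # keep all components
for chunk in np.array_split(Z1, 10, axis=0):   # each chunk: ~17000 rows
    ipca.partial_fit(chunk)                    # (IPCA centers the data)

# scikit-learn orders components by decreasing variance; flip the order so
# that the lowest-variance (laser-noise-suppressed) components come first.
E = ipca.transform(Z1)[:, ::-1]
sv = ipca.singular_values_[::-1] / len(Z1)     # normalized singular values
\end{verbatim}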
For gravitational-wave detection, we only need the PCs with the lowest singular values. As in Paper~I, we select the $q$ lowest-variance components, beyond which there is no meaningful improvement in GW sensitivity, and then project the data onto these components. As an example, we plot the periodogram of the aPCI variable $\bm{e}_{1}^{(m)}$ (the one with the lowest variance) in Fig.~\ref{fig:aPCI_noise_periodograms} for $m = 0$ (dark blue) and $m = 1$ (light blue). We also plot the periodogram of the single-link measurement $\bm{y}_{12}$ (gray). \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.5cm, 0.3cm, 0.5cm, 0.3cm}, clip]{./pci_projections_periodograms.pdf} \caption{Periodogram of the lowest-variance aPCI variable $\bm{e}_{1}^{(m)}$ for the zeroth-order (dark blue) and first-order (light blue) decompositions, along with channel $12$ of the input data vector $\bm{y}$. The red dotted curves show the contribution of laser noise for both orders. The laser noise's contribution to the residuals in the first-order case is lower than the all-noise residuals, showing that laser noise is suppressed below the other noises.} \label{fig:aPCI_noise_periodograms} \end{figure} We observe a difference of 7 to 8 orders of magnitude between the input data and the first-order aPCI variable, while the noise level of the zeroth-order variable is about ten times larger than its first-order counterpart. This result suggests that including the first-order terms in the aPCI process helps to cancel laser noise with time-varying armlengths. To confirm this, we apply the same transformation to simulated data containing only laser frequency noise (with no secondary noise contribution). The result is shown by the dark dashed and light dotted red lines in Fig.~\ref{fig:aPCI_noise_periodograms}, corresponding to zeroth and first order, respectively. In the case of order zero, the aPCI projection of the laser-frequency-noise-only data is almost superimposed on the projection of both laser and secondary noises, which confirms that laser frequency noise still dominates. On the contrary, the first-order case exhibits lower laser frequency noise residuals (light dotted red) compared to the data containing all noises (light solid blue). This result shows that the first-order extension of aPCI effectively cancels laser frequency noise below the level of the other noises. \section{\label{sec:sensitivity}Sensitivity of first-order PCI combinations} Suppression of laser noise would not be helpful if it inadvertently also suppressed gravitational-wave signals. In this section, we compute the GW sensitivity of the aPCI combinations $\bm{E}^{(1)}$, so that we can compare the algorithm's performance with standard TDI. \subsection{\label{sec:noise_response}PCI response to stochastic processes} We analyze how a stochastic process measured in $\bm{y}_{ij}$ is transformed by the PCI combinations. Let us consider a zero-mean Gaussian, multivariate stationary process $\bm{Y}$ with 6 channels and $N$ points in time. Taking the Fourier transform of $\bm{Y}$ allows us to work with covariances defined for each frequency bin $f$, neglecting the correlations between two different frequency bins. We thus consider the $6$-column vector $\bm{\tilde{y}}$ defined as \begin{equation} \bm{\tilde{y}}(f) = \left(\tilde{y}_{12}, \, \tilde{y}_{23}, \, \tilde{y}_{31}, \, \tilde{y}_{13}, \, \tilde{y}_{21}, \, \tilde{y}_{32} \right)^T, \end{equation} where the notation $\tilde{y}(f)$ refers to the DFT of any vector $\bm{y}$ at frequency $f$.
As in Paper~I, we can write the transformation of Eq.~\eqref{eq:e_matrix} in the Fourier domain at zeroth order in time. For any frequency $f$, we relate the $6p$-vector of aPCI variables $\bm{\tilde{e}}(f)$ to the $6$-vector $\bm{\tilde{y}}(f)$ through the simple matrix operation \begin{equation} \label{eq:e_response} \bm{\tilde{e}}^{(0)}(f) = \bm{\tilde{W}}(f) \bm{\tilde{y}}(f), \end{equation} where we defined the $6p \times 6$ transformation matrix $\bm{\tilde{W}}(f)$ \begin{equation} \label{eq:x_vector} \bm{\tilde{W}}(f) = \bm{V}^{\dag} \bm{\tilde{\Omega}}(f). \end{equation} Here, $\bm{\tilde{\Omega}} \equiv \begin{pmatrix} \bm{\tilde{\Omega}}_{-n_h} & \hdots & \bm{\tilde{\Omega}}_{+n_h} \end{pmatrix}^T$ is a $6p \times 6$ matrix built from the $6 \times 6$ blocks $\bm{\tilde{\Omega}}_{l}(f)$, which encode the application of delay $l$ to all channels. Each $\bm{\tilde{\Omega}}_{l}$ is a diagonal matrix constructed as \begin{equation} \bm{\tilde{\Omega}}_{l} = \mathrm{diag}\left(\tilde{D}_{l}, \, \hdots, \, \tilde{D}_{l} \right), \end{equation} whose diagonal elements are the Fourier-domain approximation of the delay operators: \begin{equation} \label{eq:delay_operator} \tilde{D}_{l}(f) = e^{-2\pi i f l \tau_s}. \end{equation} If $\bm{y}$ is a stationary stochastic process of covariance $\bm{\tilde{\Sigma}}_{y}(f)$, then the covariance matrix of $\bm{\tilde{e}}$ is \begin{eqnarray} \label{eq:e_covariance} \bm{\tilde{\Sigma}}_{e}(f) \approx \bm{\tilde{W}}(f) \bm{\tilde{\Sigma}}_{y}(f) \bm{\tilde{W}}(f)^{\dag}. \end{eqnarray} Rigorously, we should then account for the first-order part of the data matrix $\bm{Z}^{(1)}$, as given in Eq.~\eqref{eq:z_matrix}. However, while this part is clearly important to mitigate the linear variations of the laser frequency noises, its effect is not dominant when considering the response to secondary noises and gravitational waves (which are several orders of magnitude smaller than laser noise). Hence, in the following we neglect the contribution of time variations when computing the covariance of the aPCI variables. We will discuss in Section~\ref{sec:sensitivity_results} the consequences of this simplification. With Eq.~\eqref{eq:e_response}, we established the recipe to propagate any GW waveform from its single-link responses to the aPCI variables. Likewise, we can use Eq.~\eqref{eq:e_covariance} to convert the spectrum of any secondary noise (that is not laser frequency noise) into its aPCI spectrum. \subsection{\label{sec:laser_projection}Laser frequency noise projection} Laser frequency noise, like all other components in the data, projects onto the basis of singular vectors through Eq.~\eqref{eq:e_matrix}. We can mitigate its impact in any further analysis by simply considering only the $q$ lowest-singular-value components, and discarding all the others. This mirrors what is commonly called ``truncated PCA", except that truncated PCA usually discards the lowest singular values, whereas we keep them. In Paper~I, we determined that the aPCI sensitivity improved up to $q = 6$. Including additional components did not improve it further, as laser frequency noise starts to dominate at larger $q$. We adopt this cut-off in the following. \subsection{\label{sec:orthogonalization}Orthogonalization} We use Eq.~\eqref{eq:e_covariance} to compute the $q \times q$ covariance $\bm{\tilde{\Sigma}}_{{e}_{n}}(f)$ of the $q$ lowest-variance aPCI variables. We assume that all single-link noises are uncorrelated and have the same PSD, set by Eq.~\eqref{eq:secondary_noises_psd}.
Hence, their covariance $\bm{\tilde{\Sigma}}_{y}(f)$ is diagonal. In the same way as for Michelson TDI, the aPCI transformation introduces correlations among the resulting variables, so that the matrix $\bm{\tilde{\Sigma}}_{{e}_{n}}(f)$ has non-zero off-diagonal terms. Exactly as when we construct the TDI variables A, E, and T, another transformation is needed if we want to work with orthogonal data streams. One can perform this transformation by decomposing $\bm{\tilde{\Sigma}}_{{e}_{n}}(f)$ into its eigenbasis. It turns out that $\bm{\tilde{\Sigma}}_{{e}_{n}}(f)$ has only three non-zero eigenvalues. This is a consequence of the secondary noise approximation we made in Eq.~\eqref{eq:e_covariance}, which neglects laser frequency noise residuals. Without this approximation, the covariance has three additional non-zero eigenvalues, which are much smaller than the first three. This suggests that only the corresponding three eigenstreams will be relevant for data analysis. We define the eigenstreams as the projection of the initial variables onto the eigenspace via \begin{equation} \label{eq:orthogonalization} \bm{\tilde{e}}^{(m)}_{\perp}(f) = \bm{\Phi}^{\dag}(f) \bm{\tilde{e}}^{(m)}(f), \end{equation} where $\bm{\Phi}(f)$ is the matrix of eigenvectors of the covariance $\bm{\tilde{\Sigma}}_{{e}_{n}}(f)$: \begin{equation} \label{eq:covariance_eigendecomposition} \bm{\tilde{\Sigma}}_{{e}_{n}}(f) = \bm{\Phi}(f) \bm{\Lambda}(f) \bm{\Phi}(f)^{\dag}, \end{equation} with $\bm{\Lambda}(f)$ being the diagonal matrix of eigenvalues, which are proportional to the PSDs of the orthogonal variables: $\Lambda_{lp}(f) = \langle | \, \tilde{e}^{(m)}_{\perp l}(f) |^{2} \rangle \delta_{lp}$. To verify our modeling, we plot (in blue) the periodograms of the three aPCI eigenstreams associated with non-zero eigenvalues in Fig.~\ref{fig:aPCI_noise_models}. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.5cm, 0.3cm, 0.5cm, 0.3cm}, clip]{./pci_noise_model_ortho.pdf} \caption{Noise periodograms of the 3 orthogonalized first-order aPCI variables $\bm{e}_{\perp j}^{(1)}$ (in blue) along with their analytical power spectra (in red).} \label{fig:aPCI_noise_models} \end{figure} We compute their analytical covariance using the zeroth-order approximation in Eq.~\eqref{eq:e_covariance} and plot its non-zero eigenvalues in red. The model overlaps well with the periodograms, thereby showing that it is acceptable to only account for zeroth-order effects when computing residual noise aPCI spectra. \subsection{\label{sec:gw_response}PCI response to gravitational waves} Up to this point, no information about the constellation orbits has been required, since we derived the transformation matrix $\bm{\tilde{W}}$ directly from the data. That said, the link response to a GW point source with propagation vector $\bm{k}$ and Fourier amplitudes $\tilde{h}_{+}(f', \bm{k}), \tilde{h}_{\times}(f', \bm{k})$ depends on the spacecraft orbits, as shown in Eq.~(29) of Paper I, which we reproduce here: \begin{align} \label{eq:arm_esponse_frequency} y^{\mathrm{GW}}_{ij}(t, \bm{k}) & = \sum_{\alpha=+,\times} \int_{-\infty}^{+\infty} \tilde{h}_{\alpha}\left(f', \bm{k}\right) e^{2i \pi f' t} \frac{ F_{\alpha}\left( \psi , \bm{k}, \bm{n}_{ij}\right)}{2\left(1 - \bm{k}\cdot \bm{n}_{ij}\right)} \nonumber \\ & \times \left[ e^{- 2i \pi f' \left(L_{ij} + \bm{k} \cdot \bm{r}_{j}(t_j)\right) / c}- e^{- 2i \pi f' \bm{k} \cdot \bm{r}_{i}(t)/ c} \right] df'.
\end{align} The orbits are needed to determine the spacecraft position vectors $\bm{r}_{i}(t)$ and the constellation arm orientation vectors $\bm{n}_{ij} = \left(\bm{r}_{i} - \bm{r}_{j}\right) / \lVert \bm{r}_{i}- \bm{r}_{j} \rVert$ as a function of time. However, the response requires far less precise information than TDI: while nanosecond accuracy is needed for TDI, a 100 ms accuracy is likely to be sufficient for most GW sources~\cite{katz2022}. The response also depends on the sky location through $\bm{k}$ and the polarization angle $\psi$. For any time $t$, we can conveniently express the equation above as a matrix relation, \begin{eqnarray} \label{eq:y_gw_matrix} \bm{\tilde{y}}_{\mathrm{GW}}(\bm{k}) = \int_{-\infty}^{+\infty} \bm{\tilde{H}}(f', \bm{k}) \bm{\tilde{h}}(f') df', \end{eqnarray} where $\bm{\tilde{H}}(f, \bm{k})$ is the $6 \times 2$ GW response matrix with as many rows as there are links, and as many columns as there are polarization modes. We also defined the vector of strain amplitudes $\bm{\tilde{h}}(f, \bm{k}) \equiv \big(\tilde{h}_{+}(f, \bm{k}), \tilde{h}_{\times}(f, \bm{k})\big)^{\intercal}$. To easily compare the analytical response with simulated data, we study the case of an isotropic, stationary, zero-mean Gaussian stochastic GW background with a strain PSD equal to unity. The power spectrum of the strain amplitudes is then~\cite{caprini_reconstructing_2019} \begin{align} \label{eq:h_covariance} \langle \tilde{h}_{\alpha}(f, \bm{k}) \tilde{h}_{\alpha'}(f', \bm{k}')^{\ast} \rangle & = \nonumber \\ \frac{1}{2} \delta\left(f - f'\right) \frac{1}{4\pi} \delta\left(\bm{k} - \bm{k}'\right) \delta_{\alpha, \alpha'} S_{\mathrm{GW}}(f), \end{align} where we defined the power spectrum operator $\langle \cdot \rangle$ in Appendix~\ref{eq:average_operator}. We are interested in the sky-averaged response $\bm{\tilde{y}}_{\mathrm{GW}} = \int_{\bm{k}}\bm{\tilde{y}}_{\mathrm{GW}}(\bm{k}) d^{2} \bm{k}$. Using Eqs.~\eqref{eq:y_gw_matrix} and~\eqref{eq:h_covariance}, we can then write the sky-averaged link response $\bm{R}_{y}(f)$ as \begin{align} \label{eq:averaged_y_gw_response} \bm{R}_{y}(f) & \equiv \langle\bm{\tilde{y}}_{\mathrm{GW}}(f) \bm{\tilde{y}}_{\mathrm{GW}}(f)^{\dag} \rangle \nonumber \\ & = \frac{1}{8\pi} \int_{\bm{k}} \bm{\tilde{H}}(f, \bm{k}) \bm{\tilde{H}}(f, \bm{k})^{\dag} d^{2} \bm{k}. \end{align} Therefore, the link response to the GW background is a stochastic process of covariance given by Eq.~\eqref{eq:averaged_y_gw_response}. To compute the aPCI GW response, we simply need to use Eq.~\eqref{eq:e_covariance}, which yields \begin{eqnarray} \label{eq:e_gw_response} \bm{R}_{e}(f) = \bm{\tilde{W}}(f) \bm{R}_{y}(f) \bm{\tilde{W}}^{\dag}(f). \end{eqnarray} Similarly, it follows from Eq.~\eqref{eq:orthogonalization} that the GW response from orthogonalized aPCI variables is \begin{eqnarray} \label{eq:e_gw_ortho_response} \bm{R}_{e, \perp}(f) = \bm{\Phi}^{\dag}(f) \bm{R}_{e}(f) \bm{\Phi}(f). \end{eqnarray} To check the validity of the response function, we simulate an isotropic stochastic gravitational-wave background using \texttt{LISA GW Response}~\cite{lisagwresponse}, assuming independent polarizations and a PSD equal to unity, as described by Eq.~\eqref{eq:h_covariance}. In the simulation, the background stems from 768 independent point sources dividing the sky into as many pixels, distributed on a \texttt{HEALPix} map~\cite{Gorski_2005}.
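Because the simulated background is built from equal-area \texttt{HEALPix} pixels (768 pixels corresponds to \texttt{nside} $= 8$), the sky integral in Eq.~\eqref{eq:averaged_y_gw_response} reduces to a pixel sum. Below is a minimal sketch of this quadrature, assuming the per-pixel response matrices have been evaluated beforehand (the name \texttt{H\_pix} is ours):
\begin{verbatim}
import numpy as np

def sky_averaged_response(H_pix):
    # H_pix: complex array of shape (n_pix, 6, 2) holding the link
    # response matrix H(f, k) at one frequency, evaluated for each
    # pixel direction k of an equal-area HEALPix map.
    n_pix = H_pix.shape[0]
    d_omega = 4.0 * np.pi / n_pix   # solid angle of one pixel
    # (1 / 8 pi) * sum over pixels of H H^dag -> (6, 6) matrix
    R = np.einsum("kij,klj->il", H_pix, H_pix.conj())
    return R * d_omega / (8.0 * np.pi)
\end{verbatim}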
Then, we apply the exact same aPCI transformation that we obtained in Sec.~\ref{sec:noise_mitigation_analysis} to the measured link responses $\bm{\tilde{y}}_{\mathrm{GW}}$. We get the GW signal as seen through the aPCI variables $\bm{\tilde{e}}_{\mathrm{GW}}$, which we project onto the eigenspace of the secondary noise covariance using Eq.~\eqref{eq:orthogonalization}. We finally obtain the frequency series $\bm{\tilde{e}}_{\perp, \mathrm{GW}}(f)$. We plot their periodogram in light blue in Fig.~\ref{fig:aPCI_gw_response} and superimpose the analytical response that we derived with Eq.~\eqref{eq:e_gw_ortho_response}. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.5cm, 0.3cm, 0.5cm, 0.3cm}, clip]{./pci_gw_model_ortho.pdf} \caption{GW response periodograms of the 3 first-order orthogonalized aPCI variables $\bm{e}_{\perp \mathrm{GW}, j}^{(1)}$ to a stochastic GW background with a strain PSD of $\rm 1 \, Hz^{-1}$ (blue), along with their analytical response function (black).} \label{fig:aPCI_gw_response} \end{figure} We check that the model matches the simulation by inspecting the distributions of the real and imaginary parts of the Fourier transforms normalized by their response $2 \bm{R}_{e, \perp}^{-1} \bm{\tilde{e}}_{\perp, \mathrm{GW}}$. We verify that they follow a normal distribution of mean zero and unit standard deviation. As for the noise residuals, this result shows that the zeroth-order analytical model is sufficient to describe the aPCI response to GWs. \subsection{\label{sec:sensitivity_definition}Computation of sensitivity} In the GW literature, sensitivity is commonly defined as the ratio of the PSD of the noise affecting the measurement to the instrument's sky-averaged response function. Here we strictly follow the definition of Babak et al.~\cite{babak_lisa_2021}. The sensitivity of one variable $\bm{e}_{\perp j}^{(1)}$ is therefore \begin{equation} \label{eq:t_sensitivity} {S}_{h, e_{\perp j}}(f) = \frac{\Lambda_{jj}(f)}{R_{e, \perp jj}(f)}, \end{equation} where the numerator is given by the aPCI covariance eigenvalues in Eq.~\eqref{eq:covariance_eigendecomposition}, and the denominator is the diagonal element of the response in Eq.~\eqref{eq:e_gw_ortho_response}. Making use of orthogonality, we derive the total sensitivity by summing the inverse sensitivities (i.e., the signal-to-noise ratios) of the individual variables as \begin{equation} \label{eq:total_sensitivity} {S}_{h, e_{\perp}}(f) = \left[ \sum_{j=1}^{3} {S}^{-1}_{h, e_{\perp j}}(f) \right]^{-1}. \end{equation} In the following, we compare the aPCI sensitivity that we obtain with standard second-generation TDI. To this aim, we compute the TDI sensitivity in a similar way. Instead of using the aPCI transformation matrix $\bm{\tilde{W}}(f)$, we build a TDI transformation matrix $\bm{\tilde{W}}_{\mathrm{TDI}}(f)$ in the frequency domain. The TDI-equivalent output of Eq.~\eqref{eq:e_response} is then a 3-vector whose elements are the Michelson variables $X$, $Y$ and $Z$. At zeroth order, we can derive the entries of $\bm{\tilde{W}}_{\mathrm{TDI}}(f)$ for second-generation TDI from Eq.~(106) in Babak et al.~\cite{babak_lisa_2021}, where we approximate all delay operators by their frequency-domain limit for infinite time series: $\tilde{D}_{ij} = e^{-2\pi i f L_{ij} / c}$. We evaluate the armlengths at half the observation time $L_{ij} = L_{ij}(N \tau_s / 2)$.
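Whether one starts from the orthogonalized aPCI variables or from the orthogonalized TDI variables, the final combination of Eqs.~\eqref{eq:t_sensitivity} and \eqref{eq:total_sensitivity} is the same. A minimal sketch, with array names of our choosing:
\begin{verbatim}
import numpy as np

def total_sensitivity(eigvals, response_diag):
    # eigvals:       (3, n_freq) noise eigenvalues Lambda_jj(f)
    # response_diag: (3, n_freq) diagonal responses R_{e,perp jj}(f)
    S_j = eigvals / response_diag            # Eq. (t_sensitivity)
    return 1.0 / np.sum(1.0 / S_j, axis=0)   # Eq. (total_sensitivity)
\end{verbatim}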
Then, we compute the Michelson TDI covariance exactly as in Eq.~\eqref{eq:e_covariance} and diagonalize it to obtain the orthogonal TDI variables. Performing this exact orthogonalization instead of using the standard A, E, T formula (as derived in~\cite{prince_lisa_2002} under specific hypotheses) ensures that we follow the same process for both aPCI and TDI to compute the sensitivity. Moreover, the standard definition does not yield perfectly orthogonal combinations in the case of non-equal armlengths~\cite{baghi_statistical_2021}. \subsection{\label{sec:sensitivity_results}Sensitivity results} Using the analytical model provided by Eq.~\eqref{eq:total_sensitivity}, we plot the first-order aPCI total sensitivity with a thick, dashed blue line in Fig.~\ref{fig:sensitivities}. As a comparison, we do the same for the case of second-generation TDI with a solid orange line. The aPCI and TDI curves match remarkably well, given that aPCI does not use any model describing the laser frequency noise terms appearing in the link measurements, nor any prior knowledge of light travel time delays between the spacecraft. The computation of the noise-cancelling combinations comes directly from the singular value decomposition of one realization of the laser-frequency-noise-dominated data matrix $\bm{Z}^{(1)}$, from which we obtained the singular vectors $\bm{V}^{(1)}$. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.5cm, 0.3cm, 0.5cm, 0.3cm}, clip]{./sensitivities_tdi_pci_smooth.pdf} \caption{Sky-averaged sensitivities of first-order aPCI (dashed blue) and second-generation TDI (orange). The thick curves are the analytical sensitivities and the dotted thin lines are empirical estimates of the sensitivity, obtained by running GW stochastic background and instrumental noise simulations and computing the ratios of their smoothed periodograms.} \label{fig:sensitivities} \end{figure} To check that these analytical results represent what is actually measured, we plot the empirical sensitivities defined as the ratios of the response periodograms of Fig.~\ref{fig:aPCI_gw_response} and the noise periodograms of Fig.~\ref{fig:aPCI_noise_models}, \begin{equation} \label{eq:total_estimated_sensitivity} \hat{S}_{h, e_{\perp}}(f) = \left[ \sum_{j=1}^{3} \frac{\lvert \tilde{e}_{\perp, \mathrm{GW} j} \rvert^{2}}{\lvert \tilde{e}_{\perp, \mathrm{n} j} \rvert^{2}} \right]^{-1}. \end{equation} The above equation only uses the outputs of our GW background and instrumental noise simulations. These quantities are empirical estimates of each aPCI variable's signal-to-noise ratio. The plot shows that the analytical models are consistent with their empirical equivalents. The aPCI processing is therefore a valid method for practical data analysis purposes. Unlike in Paper I, we approximated the aPCI residual noise covariance matrix $\bm{\tilde{\Sigma}}_{{e}_{n}}$ by accounting only for secondary noises and zeroth-order effects in time. From this approximation, we derived the matrix's eigenvectors $\bm{\Phi}$ that yield the orthogonal variables $\bm{\tilde{e}}_{\perp}^{(1)}$. In reality, a non-zero residual laser noise is still present in the aPCI variables, which slightly modifies their covariance, and hence their eigendecomposition. To see this, we set up a simulation including only laser frequency noise, discarding all other noise sources. Then we apply the exact same aPCI decomposition to evaluate the level of laser noise that remains.
We show the outcome in Fig.~\ref{fig:sensitivities_with_laser_noise}. \begin{figure}[!h] \centering \includegraphics[width=\columnwidth, trim={0.5cm, 0.3cm, 0.5cm, 0.3cm}, clip]{./sensitivities_tdi_pci_laser.pdf} \caption{Residual laser noise in the aPCI (dotted brown) and TDI (solid red) sensitivities, compared with their total sensitivity (dashed blue and orange curves).} \label{fig:sensitivities_with_laser_noise} \end{figure} For comparison, in the figure we reproduce the total sensitivity of both aPCI (dashed blue curve) and TDI (solid orange curve). We draw in dotted brown lines the residual laser frequency noise in the aPCI variables, which stands about one order of magnitude below the total sensitivity. Thus, as we already observed in Fig.~\ref{fig:aPCI_noise_periodograms}, the aPCI algorithm suppresses laser frequency noise below the level of secondary noises. The same residuals in the TDI variables (plotted as a solid red line) lie about three orders of magnitude below all other noises over most of the frequency band. While TDI is explicitly designed to cancel laser noise, aPCI is instead constructed to find the data combinations that minimize the variance. The residual difference we observe may also be due to the non-optimal orthogonalization. More accurately characterizing the noise in the aPCI variables $\bm{\tilde{e}}$ may help further decrease the aPCI laser frequency noise residuals. We leave this task to future work, as Fig.~\ref{fig:sensitivities_with_laser_noise} already demonstrates that, in its current form, the aPCI approach matches the TDI sensitivity up to a relatively small error. Using the dotted brown residual noise periodogram, we estimate this error to be 2\% on average. \section{\label{sec:discussion}Discussion} We further developed the data-driven technique for space-based interferometry introduced in our earlier work (Paper I). In the present study, we allowed the distances between the spacecraft to vary with time, breaking the stationarity of the laser noise that dominates interferometer measurements. To make our aPCI method suitable for such a time-varying configuration, we included a new time-dependent term in the data matrix that we analyze through singular value decomposition. Instead of carefully modelling the beams' geometry, the aPCI approach ``learns" the filter coefficients that one needs to apply to the data to mitigate laser frequency noise. We showed that this extension allows us to cancel laser noise down to a level 100 times better than the previous version, which assumed constant arm lengths. This level is enough to reduce laser frequency noise residuals below the other noise sources. Based on an approximate model for the residual noise covariance, we transformed the aPCI components into a quasi-orthogonal set of variables of which only three are GW-sensitive. We demonstrated that their combined sensitivity is the same as for second-generation TDI, up to a 2\% relative error. This result shows that we can infer all the information needed to process space-based interferometry measurements from the data without a particular noise model. The only implicit assumption is that the data features significant time correlations over a specific duration. Extracting these correlations via the SVD leads to a data-driven basis where we can separate the laser-noise-free components of the data from the laser-noise-dominated ones.
The aPCI laser frequency noise residuals lie below the level of secondary noises, which allows one to use the method for most GW data analysis purposes. Nevertheless, in the present implementation, these residuals stand higher than the level achieved by second-generation TDI. The reason is that after obtaining the lowest-variance aPCI variables, we need to orthogonalize them with respect to the remaining noise, in the same way we construct the optimal TDI combinations A, E, and T from the Michelson combinations. As for TDI, this orthogonalization process requires knowing the variables' covariance matrix. In this work, we computed this covariance using an approximation which only includes non-laser-frequency-noise contributions. This way, the covariance is easy to derive from the single-link noise PSDs and the singular vectors, and does not require any knowledge of the laser noise correlation structure. However, ignoring these correlations leads to an imperfect diagonalization of the covariance, which, in turn, yields a set of three variables with non-optimal sensitivity. To reach the full potential of aPCI, we would need to characterize the aPCI variables' residual noise covariance from the data. We plan to develop this characterization in future work. Note that we also need such a tool to get TDI's optimal sensitivity. Even though non-laser-frequency noises fully dominate the Michelson variables, one cannot rely on a fixed model for instrumental noises to build statistically optimal TDI combinations. Instead, GW data analysis will ultimately need an estimate of the TDI covariances obtained with suitable multivariate spectral methods. In conclusion, the present form of the aPCI method is already operational for practical purposes. Achieving its optimal sensitivity is within reach, provided that we develop a robust frequency-domain covariance estimator. We will also focus on understanding how the aPCI sensitivity depends on its tuning parameters. Indeed, while not critical, a trade-off between performance and computational efficiency most likely exists when choosing the analyzed data size $N$, the stencil size $n_h$, the number of components $q$ to consider, and the order $p$ of the Taylor expansion in time. We will therefore assess the influence of these parameters in further studies. Furthermore, testing more realistic configurations including additional noises and laser locking is required. Finally, we envision demonstrating Bayesian inference of GW source parameters within the aPCI framework, which would be the ultimate demonstration of its reliability for GW data analysis.
{ "timestamp": "2022-09-23T02:10:30", "yymm": "2209", "arxiv_id": "2209.10851", "language": "en", "url": "https://arxiv.org/abs/2209.10851" }
\section{Introduction} \label{Sec: Introduction} Shock waves are complex events which can induce catastrophic damage to materials through plastic deformation and spall fracture. As such, considerable effort has been devoted towards understanding shock propagation within solids at the continuum level \cite{meyers1994dynamic,davison2008fundamentals}. However, a material's response to shock wave loading is linked to intricate behavior at the microscale. For example, fracture caused by a shock wave impact is the direct result of dislocations and void nucleation within the material's microstructure \cite{RustyGray2012,Fensin2014,Bingert2014}. Hence, it is imperative to understand shock wave propagation and evolution at the microscale in order to adequately predict material behavior at the macroscale. Atomistic shock wave simulations have been performed over the past several decades primarily using a technique known as non-equilibrium Molecular Dynamics (NEMD). In these simulations, the shock is typically generated by an impact or with a moving piston and is then allowed to propagate through the domain \cite{holian1995atomistic}. Recently, NEMD frameworks have been expanded to incorporate hundreds of millions of particles and have been used to model events such as dislocation generation \cite{germann2004dislocation,Tramontina2017Simulation,righi2021towards,zhu2021collapse}, twinning \cite{Higginbotham2013Molecular,wu2021unveiling,zhu2021novel}, void nucleation \cite{bringa2010void,bisht2019investigation,tian2021anisotropic}, and shock-induced spallation \cite{Srinivasan2007,Fensin2014Effect,wang2021spall,chen2021molecular,dewapriya2021molecular}. Unfortunately, NEMD techniques suffer from issues related to limited domain sizes and a large computational overhead which can cause artificial wave reflection and drastically restrict the total runtime \cite{kedharnath2021classical}. In the past two decades, alternative atomistic techniques have been developed to counteract such issues, and some examples are the uniaxial Hugoniostat \cite{maillet2000uniaxial,maillet2002uniaxial}, the multiscale shock technique (MSST) \cite{Reed2003,reed2006analysis}, and the moving window method \cite{Zhakhovskii1997,zhakhovsky2011two,davis2020one}. While modern atomistic techniques have greatly expanded our knowledge of shock wave behavior at the microscale, they nevertheless fail to capture the continuum-level response because the total number of particles that can be realistically incorporated into the domain is restricted by computer architecture and limited computational resources. To overcome these issues, \textit{concurrent multiscale} frameworks have been developed which retain atomistic information around a small region of interest and populate the remainder of the domain with finite elements \cite{kohlhoff1991crack,mcdowell2020connecting,van2020roadmap,xiong2021multiscale,fish2021mesoscopic}. A primary concern of concurrent methods is ensuring numerical compatibility at the atomistic-continuum (A-C) interfaces in order to reduce spurious wave reflections and ghost forces. Many schemes have been developed which address this issue in different ways \cite{tadmor2011modeling}, and a few of them are as follows: the Coupling of Length Scales (CLS) method \cite{rudd1998coarse}, the Bridging Domain (BD) method \cite{xiao2004bridging}, the Coupled Atomistic Discrete Dislocation (CADD) method \cite{Shilkrot2002Coupled}, and the Quasicontinuum (QC) method \cite{tadmor1996quasicontinuum}. 
Although methods such as these have had great success in material modeling, many of them still suffer from interface discrepancy due to a difference in governing equations between the atomistic and continuum regions. The Concurrent Atomistic-Continuum (CAC) method overcomes many of the A-C interface issues seen in other concurrent schemes by utilizing a unified multiscale framework built upon Atomistic Field Theory \cite{chen2005atomistic,chen2009reformulation} whereby a single set of governing equations is employed throughout the entire domain \cite{xiong2011coarse,yang2013concurrent,xiong2014prediction,xiong2015concurrent,xu2016mesh,chen2017recent,chen2018passing,xu2018pycac,chen2019concurrent}. As a result, CAC has seen tremendous success over the past decade in modeling phenomena such as dislocations and grain boundaries \cite{xiong2014sub,chen2017effects} as well as passing high-frequency waves between the atomistic and continuum regions \cite{chen2018passing,DAVIS2022111702}. Recent work has even implemented an A-atom approach within CAC to perform large-scale simulations of multicomponent alloys \cite{chu2022multiscale}, and research of dislocation evolution \cite{selimov2021lattice} as well as crystal plasticity \cite{selimov2022coarse} is ongoing. Unfortunately, the study of shock wave propagation using the CAC method has been limited due to the highly dynamic nature of such phenomena. While previous work has addressed this complication by incorporating moving window techniques into a CAC framework to track a nonlinear shock wave for long runtimes \cite{davis2022moving}, this formulation only considered a 1D chain of particles and was thus limited in scope. In the present work, we develop a multiscale framework using the CAC method to model long-time shock wave propagation through a two-dimensional lattice. Specifically, we utilize both the Hugoniot shock equations \cite{meyers1994dynamic} as well as the nonlinear Eulerian thermoelastic shock equations \cite{clayton2013nonlinear} to study the classic Riemann problem of a single traveling discontinuity. Furthermore, we enhance the moving window techniques first presented in \cite{davis2022moving} to track the shock over long simulation times and engineering-scale domains. Each method maintains the shock front at the center of the atomistic region for the entire runtime, so the wave front never encounters the A-C interfaces. This allows us to model shock propagation for greater simulation times than traditional NEMD and multiscale methods and thus gain valuable information about the long-term, time-averaged material response to shock loading of two different FCC solids. This paper is organized as follows. Section \ref{Sec: Shock Wave Background} characterizes the shocks studied in the present work and elaborates on both the Hugoniot and Eulerian analytical models. Section \ref{Sec: Computational Framework} describes the framework's geometry and boundary conditions as well as presents the interatomic potential, thermostat, material parameters, and shock constants utilized in the simulations. Section \ref{Sec: CAC Method} discusses the finite element formulation of CAC and its 2D implementation. Section \ref{Sec: Shock Propagation Technique} outlines both the shock propagation technique and the two moving window schemes used to track the shock front. 
Section \ref{Sec: Elastic Anisotropy: Crystal Orientation Dependence on Shock Propagation Response} presents shock propagation results obtained with the conveyor technique and compares these to both analytical models to highlight the directional anisotropies in single crystals subject to shock loading. Section \ref{Sec: Results with the Coarsen-Refine Method and Formulation Efficiency} uses the coarsen-refine technique to perform parametric studies related to the shock front's structure as well as showcases the efficiency of the current model compared to NEMD simulations. Finally, Section \ref{Sec: Conclusion} concludes the paper and discusses ideas for future work. \section{Shock Wave Background} \label{Sec: Shock Wave Background} \subsection{Problem statement} \label{Sec: Problem statement} We consider a two-dimensional monatomic lattice with no defects under compression by an ideal longitudinal shock wave traveling in the $x-$direction. Mathematically, we represent the shock as a propagating discontinuity across which there exists a jump in particle velocity ($v$), stress ($\sigma$), strain ($\epsilon$), and temperature ($\theta$). Material quantities ahead of the shock front have the superscript \textit{$-$}, and quantities behind the shock front have the superscript \textit{$+$}. The notation $\llbracket\cdot\rrbracket$ denotes the change in a given quantity $(\cdot)$ across the shock front. During each simulation, particles ahead of the shock wave are assumed to be at zero mean particle velocity, unstressed, unstrained, and at room temperature ($295$ K). Furthermore, the shock propagates at a natural velocity $U_S$ along the surface of the primitive unit cell of an FCC lattice. We incorporate these parameters into a moving window CAC framework to simulate long-time shock wave propagation over engineering-scale domains. Specifically, we model the classic Riemann problem of a single shock wave front with constant states on either side as shown in Fig. \ref{Fig:Riemann Shock}. \begin{figure}[htpb] \centering \includegraphics[width=0.45\textwidth]{Riemann_Shock.JPG} \caption{Riemann problem of a shock wave with constant states in front of and behind the shock front.} \label{Fig:Riemann Shock} \end{figure} To calculate the aforementioned jump parameters and thus characterize the shock wave at the continuum level, we use two different formulations which are discussed below. \subsection{Hugoniot shock equations} \label{Sec: Hugoniot shock equations} First, we simulate dynamic shock wave propagation and evolution using the conservation of momentum, continuity equation, Hugoniot equation of state (EOS), and a thermodynamic relationship derived from the shock Hugoniot and release isentrope. By applying the conservation of linear momentum and continuity of displacement across the discontinuous shock front and assuming uniaxial loading, we obtain the following standard one-dimensional shock wave jump equations \cite{davison2008fundamentals}: \begin{align} \llbracket \sigma \rrbracket + \rho U_s \llbracket v \rrbracket &= 0 \label{Eq: Momentum Equation} \\ \llbracket v \rrbracket + U_s \llbracket \epsilon \rrbracket &= 0\label{Eq: Continuity Equation} \end{align} where $\rho$ denotes the density of the material. To fully parameterize the system, Eqs. (\ref{Eq: Momentum Equation}) and (\ref{Eq: Continuity Equation}) are supplemented by an empirically observed linear relation between shock velocity and particle velocity \cite{meyers1994dynamic}: \begin{equation} U_s = C_0 + S\llbracket v\rrbracket. 
\label{Eq: Linear Law} \end{equation} Here, \textit{S} is a dimensionless, empirical parameter representing the slope of the shock velocity vs. particle velocity Hugoniot curve, and \textit{$C_{0}$} is the sound velocity in the material at zero stress. We can use Eqs. (\ref{Eq: Momentum Equation}), (\ref{Eq: Continuity Equation}), and (\ref{Eq: Linear Law}) to derive the standard Hugoniot stress-strain relationship given as follows: \begin{equation} \sigma = \frac{\rho C_0^2 \llbracket \epsilon \rrbracket}{(1 + S \llbracket \epsilon \rrbracket)^2} \label{Eq: Hugoniot EOS} \end{equation} where compressive stress and strain are considered positive. The Hugoniot stress-strain relationship forms the basis of modern equations of state. Finally, we calculate the rise in temperature across the shock front by solving the following ordinary differential equation \cite{davison2008fundamentals}: \begin{equation} C_{V} \left(\frac{d\theta}{d \epsilon} \right)_{H} - \frac{\gamma \theta C_{V}}{1 - \epsilon} = \frac{\epsilon}{2} \left(\frac{d \sigma}{d \epsilon} \right)_{H} - \frac{\sigma}{2} \label{Eq: Shock Heat Equation} \end{equation} where $C_V$ is the volumetric specific heat capacity, and $\gamma$ is the Mie-Gruneisen parameter for the material. \subsection{Eulerian shock equations} \label{Sec: Eulerian shock equations} We also characterize the propagating shock wave using the nonlinear Eulerian thermoelastic shock equations derived in \cite{clayton2013nonlinear,clayton2014shock} for anisotropic crystals. Nonlinear elastic constitutive models of material behavior which do not account for slippage and plasticity are generally idealizations because even small uniaxial compressive strains can cause ductile materials to reach the experimental Hugoniot elastic limit (HEL). However, such elastic formulations can be practically applied to defect-free atomistic and multiscale simulations since these domains may be shocked to finite strains over relatively short time scales and small volumes \cite{clayton2014shock,zimmerman2011elastic}. For an extensive derivation of the Eulerian formulation for shock waves, we refer the reader to \cite{clayton2013nonlinear}. Here, we merely present the relevant equations used in the current work. The particle velocity in the shocked material, shock propagation velocity, and temperature in the shocked material are given by the following respective equations: \begin{equation} \label{Eq: ClaytonEulerianParticleVelocity} v = \biggl \{ \left(\frac{\hat{S}}{\rho} \right) \left[ \left(1-2D \right) - \left(1-2D \right)^{3/2} \right] \biggr \}^{1/2} \end{equation} \begin{equation} \label{Eq: ClaytonEulerianShockVelocity} U_S = v \left[1 - \left(1-2D \right)^{-1/2} \right]^{-1} \end{equation} \begin{equation} \label{Eq: ClaytonEulerianTemperature} \theta = \frac{\partial \hat{U}}{\partial \eta} = \theta_0 \left(1 - \Hat{\Gamma}_1 D - \frac{1}{2} \hat{\Gamma}_{11} D^2 \right). \end{equation} In Eqs. (\ref{Eq: ClaytonEulerianParticleVelocity}), (\ref{Eq: ClaytonEulerianShockVelocity}), and (\ref{Eq: ClaytonEulerianTemperature}), $D$ is the Eulerian strain represented by the following expression: \begin{equation} \label{Eq: EulerianStrain} D = \frac{1}{2} \left(1 - F^{-2} \right) = \frac{1}{2} \left[1 - \frac{1}{(1 + \epsilon)^2} \right]. \end{equation} Here, the term \textit{Eulerian} refers to a strain which is a function of the inverse deformation gradient $F$. 
Hence, the strain tensor $D$ assumes material coordinates rather than spatial coordinates, so it can be applied to simulations of anisotropic materials \cite{clayton2013nonlinear}. Furthermore, $\hat{U}$ is the Eulerian fourth-order internal energy function as seen below \cite{clayton2014shock}: \begin{equation} \label{Eq: ClaytonEulerianEnergy} \Hat{U} = \frac{1}{2}\Hat{C}_{11}D^2 + \frac{1}{6}\Hat{C}_{111}D^3 + \frac{1}{24} \Hat{C}_{1111}D^4 - \theta_0 \left(\Hat{\Gamma}_{1} D + \frac{1}{2} \Hat{\Gamma}_{11}D^2 - 1 \right)\eta \end{equation} where $\Hat{C}_{11}$, $\Hat{C}_{111}$, and $\Hat{C}_{1111}$ are the Eulerian second, third, and fourth-order elastic constants, $\Hat{\Gamma}_{1}$ and $\Hat{\Gamma}_{11}$ are the Eulerian first and second-order Gr\"{u}neisen parameters, and $\eta = 0$ is the entropy ahead of the shock front. The elastic constants and Gr\"{u}neisen parameters in an Eulerian setting are obtained from their non-Eulerian counterparts using the following relations \cite{weaver1976application,perrin1978application,clayton2013nonlinear}: \begin{align} \Hat{C}_{11} &= C_{11} \label{Eq: ClaytonEulerianC11} \\ \Hat{C}_{111} &= C_{111} + 12C_{11} \label{Eq: ClaytonEulerianC111} \\ \Hat{C}_{1111} &= C_{1111} - 18C_{111} - 318C_{11} \label{Eq: ClaytonEulerianC1111} \\ \Hat{\Gamma}_{1} &= \Gamma_{1} \\ \Hat{\Gamma}_{11} &= \Gamma_{11} + 4\Gamma_1 \label{Eq: ClaytonEulerianGruneisenConstant}. \end{align} Finally, the conjugate stress $\Hat{S}$ is represented by \begin{align} \label{Eq: ClaytonEulerianStress} \Hat{S} &= \frac{\partial \Hat{U}}{\partial D} \\ \nonumber &= C_{11}D + \frac{1}{2}\hat{C}_{111}D^2 + \left(\frac{1}{6}\hat{C}_{1111} - \theta_0 \Hat{\Gamma}_1 b_3 \right)D^3 - \theta_0 D^4 \left[ \left(\Hat{\Gamma}_1 b_4 + \hat{\Gamma}_{11} b_3 \right) + \left(\Hat{\Gamma}_1 b_5 + \hat{\Gamma} _{11} b_4 \right) D \right] \end{align} where $b_3$, $b_4$, and $b_5$ are polynomials for entropy $\eta$ generated across the shock front, and their expressions can be found in \cite{clayton2013nonlinear}. In each shock wave simulation, we use the fourth-order expression of Eqs. (\ref{Eq: ClaytonEulerianParticleVelocity}), (\ref{Eq: ClaytonEulerianShockVelocity}), and (\ref{Eq: ClaytonEulerianTemperature}). \subsection{`Elastic' shock waves} \label{Sec: Elastic shock waves} To legitimately utilize the shock equations from Sec. \ref{Sec: Eulerian shock equations} as well as avoid intractability with the moving window techniques, we perform shock simulations with relatively small strains such that the resulting stresses are below the HEL of the material (see \ref{App: Stress-strain relations}). To maintain consistency, we refer to these as \textit{elastic} shock waves in the present work, but they are also classified as \textit{weak} shocks in other papers \cite{holian1995atomistic}. Elastic shock waves are often modeled in defect-free crystals with NEMD techniques to study a particular phenomenon, test a new framework, or validate a given potential \cite{holian1978molecular,zimmerman2011elastic,davis2020one}, and their distinguishing characteristic is the lack of any permanent dislocations (inelastic deformation) behind the wave front. This is possible because the HEL is typically higher than what is seen in experimental settings \cite{yang2013concurrent}, and the wave speed is still greater than the sound velocity in the material at the low strains. 
Modeling shock propagation with the CAC moving window framework using thermoelastic-viscoplastic models \cite{lloyd2014simulation,lloyd2014plane} is a worthy pursuit but would add an extra layer of complexity to the current model and is thus reserved for future studies. \section{Computational Framework} \label{Sec: Computational Framework} \subsection{Geometry and boundary conditions} \label{Sec: Geometry and boundary conditions} The two-dimensional CAC framework is implemented using an in-house C++ code, and the monatomic lattice is split into three primary regions as seen in Fig. \ref{Fig:2D_CAC_Framework}. The two coarse-scaled (continuum) regions are composed of rhombus elements, and the four particles which make up any particular element are referred to as \textit{nodes} in the present work. We choose rhombus elements because they align with the primitive unit cell of the FCC lattice (see Sec. \ref{Sec: Two-dimensional formulation}) and thus facilitate a smooth transition between the fine-scaled and coarse-scaled regions. Specifically, the $x$-direction corresponds to the [112] lattice orientation while the $y$-direction corresponds to the [110] lattice orientation. Since element connectivity is not required in CAC \cite{xiong2011coarse}, each node is a member of only one element, and this greatly reduces the complexity of the finite element formulation. Furthermore, the edges of the grid in the continuum regions are ``filled in" with particles which we refer to as \textit{boundary atoms} in this work. This is done in order to facilitate periodic boundary conditions as shown in \cite{xu2015quasistatic}. \begin{figure}[htpb] \centering \includegraphics[width=0.9\textwidth]{2D_CAC_Framework_2.PNG} \caption{Schematic of the two-dimensional CAC framework.} \label{Fig:2D_CAC_Framework} \end{figure} The two coarse-scaled regions flank the inner fine-scaled (atomistic) region on the left and right-hand side, and we refer to the particles in this region as either \textit{inner atoms} or just \textit{atoms} in the present work. The ``elements" in the fine-scaled region are reduced to their smallest possible configuration such that only four atoms constitute the entire area of each element. Hence, both the fine-scaled and coarse-scaled regions are technically made up of rhombuses with the only differences being the area and mass of their respective elements. As a consequence, one governing equation along with a single mass matrix is utilized for both regions, all force calculations are nonlocal, and the interatomic potential is the only constitutive relation \cite{xu2016mesh}. Thus, the particles at the A-C interfaces ($x_{A,0}$ and $x_{A,F}$) interact with each other directly without creating ghost forces \cite{xu2015quasistatic,xu2018pycac}. We note that to avoid introducing non-physical strains into the domain during shock simulations, semi-periodic boundary conditions are employed in the $x$-direction whereby the particles at the ends of the chain ($x_0$ and $x_F$) are made neighbors with the nodes at the interfaces ($x_{A,0}$ and $x_{A,F}$ respectively) \cite{davis2022moving}. Additionally, since the present work only considers uniaxial compression, we utilize periodic boundary conditions in the $y$-direction when modeling a longitudinal shock wave. \subsection{Interatomic potential and material parameters} \label{Sec: Interatomic potential and material parameters} To calculate the integrand of the internal force density (Eq. 
\ref{Eq: Internal Force Density}), we use the modified Morse interatomic potential function. This potential only considers first nearest neighbor interactions and is given by the following expression \cite{macdonald1981thermodynamic}: \begin{equation} \label{Eq: Morse} \Pi(r_{ij}) = \frac{D_0}{2B-1}\left[\mathrm{e}^{-2 \alpha \sqrt{B} (r_{ij} - r_0)} - 2B\mathrm{e}^{-\alpha (r_{ij} - r_0) / \sqrt{B}}\right] \end{equation} where $r_{ij} = |\textbf{x}_{i} - \textbf{x}_{j}|$ is the distance between particles $i$ and $j$, and $r_0$ is the distance at which the potential reaches its minimum (defined as the close-packed neighbor spacing). We perform shock simulations with Cu and Al, and the parameters for these materials are given in Table \ref{Table: MorsePotentialParameters}. \begin{table}[h] \centering \caption{Material constants and Morse parameters of two different FCC metals \cite{macdonald1981thermodynamic}.} \label{Table: MorsePotentialParameters} \begin{tabular}{||c c c c c c c c||} \hline \textit{Element} & \textit{mass (u)} & \textit{$\rho_0$ (g/$cm^3$)} & $\Gamma_1$ & \textit{$r_0$ (\AA)} & \textit{$\alpha$ (\AA$^{-1}$)} & \textit{$D_0$ (eV)} & \textit{B} \\ \hline Cu & 63.55 & 8.96 & 1.97 & 2.5471 & 1.1857 & 0.5869 & 2.265 \\ Al & 26.98 & 2.70 & 2.17 & 2.8485 & 1.1611 & 0.3976 & 2.5 \\ \hline \end{tabular} \end{table} \subsection{Integration algorithm and thermostat} \label{Sec: Integration algorithm and thermostat} The CAC governing equation (Eq. \ref{Eq: Matrix Form of Governing Equation}) is a second-order ordinary differential equation in time, and we solve it using the velocity Verlet algorithm. The time step used in the integration algorithm is chosen to be $\Delta t = 0.001$ ps in order to minimize numerical error. To apply temperature to the domain, we use the Langevin thermostat -- a stochastic thermostat which adds a random force to the particle motion along with a damping term $\zeta$. In particular, we modify the velocity Verlet algorithm in the presence of the Langevin thermostat by performing the discretization used in LAMMPS \cite{schneider1978molecular}: \begin{align} \textbf{v}_{i} \left(t + \frac{\Delta t}{2} \right) &= \textbf{v}_{i}(t) - \frac{\Delta t}{2} \left[\frac{\nabla_{i} \Pi(t)}{m} + \zeta \textbf{v}_{i}(t) \right] + \sqrt{\frac{\Delta t k_B \theta \zeta}{m}}\tilde{\textbf{h}}_{i} \nonumber \\ \textbf{x}_{i}(t + \Delta t) &= \textbf{x}_{i}(t) + \textbf{v}_{i} \left(t + \frac{\Delta t}{2} \right) \Delta t \nonumber \\ \textbf{v}_{i} \left(t + \Delta t \right) &= \textbf{v}_{i} \left(t + \frac{\Delta t}{2} \right) - \frac{\Delta t}{2} \left[\frac{\nabla_{i} \Pi(t + \Delta t)}{m} + \zeta \textbf{v}_{i} \left(t + \frac{\Delta t}{2} \right) \right] + \sqrt{\frac{\Delta t k_B \theta \zeta}{m}}\tilde{\textbf{h}}_{i}. \end{align} Here, $\textbf{x}_{i}$ and $\textbf{v}_{i}$ denote the position and velocity of the ${i}^{th}$ particle, $m$ is the atomic mass, $k_{B}$ is Boltzmann's constant, and $\tilde{\textbf{h}}_{i}$ is a Gaussian random variable with a mean of zero and a variance of one. As per Langevin's requirements, we generate a different random variable for each particle during each velocity update. Since Langevin is local in nature, the target temperatures $\theta^+$ and $\theta^-$ are specified for each particle. For the compressive strains applied in this work, $\theta^+$ has an upper bound of $\sim$ 450 K.
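The discretization above maps directly onto code. The following sketch (written in our own notation; the in-house implementation may differ in its details) advances an array of particles by one step, and setting $\zeta = 0$ recovers plain velocity Verlet for undamped particles:
\begin{verbatim}
import numpy as np

def langevin_verlet_step(x, v, force, m, dt, zeta, kB, theta, rng):
    # x, v: (n, 2) positions and velocities; force(x) returns the
    # (n, 2) interatomic forces -grad(Pi); theta: (n,) per-particle
    # target temperatures (theta+ or theta-).
    sigma = np.sqrt(dt * kB * theta * zeta / m)[:, None]
    g = lambda: rng.standard_normal(v.shape)  # fresh noise each update
    v_half = v + 0.5 * dt * (force(x) / m - zeta * v) + sigma * g()
    x_new = x + v_half * dt
    v_new = (v_half + 0.5 * dt * (force(x_new) / m - zeta * v_half)
             + sigma * g())
    return x_new, v_new
\end{verbatim}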
\subsection{Shock parameters} \label{Sec: Shock parameters} In Table \ref{Table: ShockParameters}, we present the empirical Hugoniot shock parameters as well as the second, third, and fourth-order elastic constants (in a normal and Eulerian setting) for both Cu and Al. The Hugoniot parameters are obtained from \cite{marsh1980lasl}, the second and third-order elastic constants for Cu and Al are obtained from \cite{hiki1966anharmonicity} and \cite{thomas1968third} respectively, and the fourth-order elastic constants are obtained from \cite{clayton2014shock}. For these values, the temperature is assumed to be $295$ K, $C_0$ is given in $km/sec$, $S$ is unitless, and the elastic constants are given in GPa. The Hugoniot parameters are derived for a shock wave propagating through a bulk, polycrystalline material. Furthermore, the elastic constants represent the pure-mode directions such that a planar shock impact results in an exclusively longitudinal component (along the [100] direction) with no transmitted shear stress, and hence the one-dimensional analysis is valid. We use these parameters as initial input in our shock simulations and compare the results from the CAC model to analytical and empirical data in Sec. \ref{Sec: Elastic Anisotropy: Crystal Orientation Dependence on Shock Propagation Response}. \begin{table}[h] \centering \caption{Hugoniot and Eulerian shock parameters for Cu and Al ($\theta = 295$ K, $C_0$ in km/sec, and $C_{\alpha \beta}$ in GPa).} \label{Table: ShockParameters} \begin{tabular}{||c c c||} \hline \textit{Property} & \textit{Cu [100]} & \textit{Al [100]} \\ \hline $C_0$ & 3.94 & 5.33 \\ S & 1.49 & 1.34 \\ $C_{11}$ & 166 & 107 \\ $C_{111}$ & -1270 & -1080 \\ $\Hat{C}_{111}$ & 722 & 204 \\ $C_{1111}$ & 11900 & 25000 \\ $\Hat{C}_{1111}$ & 2000 & 10500 \\ \hline \end{tabular} \end{table} \section{CAC Method} \label{Sec: CAC Method} \subsection{Finite element implementation} \label{Sec: Finite element implementation} Here, we give a very brief overview of the finite element implementation of CAC, but more details can be found in \cite{xiong2009multiscale,deng2010coarse,xiong2011coarse,chen2019concurrent}. The mathematical foundation of CAC is Atomistic Field Theory (AFT), and the governing equations of AFT have a similar form to the balance laws of classical continuum mechanics. Exploiting the definitions of internal force density and kinetic temperature derived in \cite{chen2005nanoscale} and \cite{chen2006local}, we can recast the instantaneous balance equation of linear momentum as follows \cite{xiong2009multiscale}: \begin{equation} \label{Eq: FEM Governing Equation 2} \rho^{\alpha} \Ddot{\textbf{u}}^{\alpha}(\textbf{x}) = \textbf{f}_{int}^{\alpha}(\textbf{x}) + \textbf{f}^{\alpha}(\textbf{x}). \end{equation} In Eq. (\ref{Eq: FEM Governing Equation 2}), $\textbf{u}^{\alpha}(\textbf{x})$ is the displacement of the $\alpha^{th}$ atom in the unit cell, $\rho^{\alpha} = m^\alpha/\Delta V $ is the volumetric mass density, $m^{\alpha}$ is the mass of the $\alpha^{th}$ atom, $\Delta V$ is the volume of the unit cell, $\textbf{f}_{int}^{\alpha}(\textbf{x})$ is the internal force density, and $\textbf{f}^{\alpha}(\textbf{x})$ is the force density due to external forces and temperature. The two terms on the right-hand side of Eq.
(\ref{Eq: FEM Governing Equation 2}) are represented as follows: \begin{align} \textbf{f}_{int}^{\alpha}(\textbf{x}) &= \int_{\Omega(\textbf{x}')} \sum_{\beta = 1}^{N_a} \textbf{f} \left[\textbf{u}^{\alpha}(\textbf{x}) - \textbf{u}^{\beta}(\textbf{x}') \right] d\textbf{x}' \label{Eq: Internal Force Density} \\ \textbf{f}^{\alpha}(\textbf{x}) &= \textbf{f}_{ext}^{\alpha}(\textbf{x}) - \frac{m^{\alpha} k_B}{M \Delta V} \nabla_{\textbf{x}} \theta^{\alpha} \label{Eq: External Force Density} \end{align} where $\textbf{f}_{ext}^{\alpha}(\textbf{x})$ is the external force density, $M$ is the total mass of the atoms within a unit cell, and $\theta^{\alpha}$ is the kinetic temperature. We note that the internal force density can be obtained exclusively from the interatomic potential function since it is a nonlinear, nonlocal function of relative displacements between neighboring particles \cite{yang2014concurrent}. We employ the finite element method to calculate the numerical solution of Eq. (\ref{Eq: FEM Governing Equation 2}). We populate the domain with finite elements such that every element contains a collection of primitive unit cells. Each nodal location represents a unit cell which is itself made up of particles. As a result, CAC provides a two-level description of crystals and follows the solid state physics model whereby the structure is continuous at the lattice level but discrete at the atomic scale. We use interpolation within each element in the domain to approximate the displacement field as follows \cite{xiong2011coarse}: \begin{equation} \label{Eq: Approximate Displacement Field} \Hat{\textbf{u}}^{\alpha}(\textbf{x}) = \boldsymbol{\Phi}_{\xi}(\textbf{x}) \textbf{U}_{\xi}^{\alpha}. \end{equation} In Eq. (\ref{Eq: Approximate Displacement Field}), $\Hat{\textbf{u}}^{\alpha}(\textbf{x})$ is the displacement field for the $\alpha^{th}$ atom within a given element, $\boldsymbol{\Phi}_{\xi}(\textbf{x})$ is the shape function, and $\textbf{U}_{\xi}^{\alpha}$ is the displacement of the $\alpha^{th}$ atom within the $\xi^{th}$ element node. We let $\xi = 1, 2, ..., n$ where $n$ is the total number of nodes in the element (four in this work). Applying the method of weighted residuals, we obtain the weak form of the governing equation by multiplying Eq. (\ref{Eq: FEM Governing Equation 2}) with a weight function $\boldsymbol{\Phi}_{\eta}(\textbf{x})$ and integrating over the entire domain: \begin{equation} \label{Eq: Weak Form of Governing Equation 1} \int_{\Omega(\textbf{x})} \left[\rho^{\alpha} \boldsymbol{\Phi}_{\eta}(\textbf{x}) \Ddot{\textbf{u}}^{\alpha}(\textbf{x}) \right] d\textbf{x} = \int_{\Omega(\textbf{x})} \left[ \boldsymbol{\Phi}_{\eta}(\textbf{x}) \textbf{f}_{int}^{\alpha}(\textbf{x}) \right] d\textbf{x} + \int_{\Omega(\textbf{x})} \left[ \boldsymbol{\Phi}_{\eta}(\textbf{x}) \textbf{f}^{\alpha}(\textbf{x}) \right] d\textbf{x}. \end{equation} Specifically, the Galerkin method is used to obtain the above expression, so the weight function $\boldsymbol{\Phi}_{\eta}(\textbf{x})$ equals the shape function $\boldsymbol{\Phi}_{\xi}(\textbf{x})$ in this case. Substituting Eqs. (\ref{Eq: Internal Force Density}), (\ref{Eq: External Force Density}), and (\ref{Eq: Approximate Displacement Field}) into Eq. 
(\ref{Eq: Weak Form of Governing Equation 1}), we arrive at the weak form of the CAC governing equation which can be represented in matrix form as follows: \begin{equation} \label{Eq: Matrix Form of Governing Equation} \textbf{M}^{\alpha} \Ddot{\textbf{U}}_{\xi}^{\alpha} = \textbf{F}_{int}^{\alpha} + \textbf{F}^{\alpha} \end{equation} where \begin{align} \textbf{M}^{\alpha} &= \int_{\Omega(\textbf{x})} \left[\rho^{\alpha} \boldsymbol{\Phi}_{\eta}(\textbf{x}) \boldsymbol{\Phi}_{\xi}(\textbf{x}) \right] d\textbf{x} \label{Eq: Inertial term} \\ \textbf{F}_{int}^{\alpha} &= \int_{\Omega(\textbf{x})} \boldsymbol{\Phi}_{\eta}(\textbf{x}) \int_{\Omega(\textbf{x}')} \sum_{\beta = 1}^{N_a} \textbf{f} \left[\boldsymbol{\Phi}_{\xi}(\textbf{x}) \textbf{U}_{\xi}^{\alpha} - \boldsymbol{\Phi}_{\xi}(\textbf{x}') \textbf{U}_{\xi}^{\beta} \right] d\textbf{x}' d\textbf{x} \\ \textbf{F}^{\alpha} &= \int_{\Omega(\textbf{x})} \left[ \boldsymbol{\Phi}_{\eta}(\textbf{x}) \textbf{f}^{\alpha}(\textbf{x}) \right] d\textbf{x}. \end{align} In this work, we approximate the inertial term (Eq. \ref{Eq: Inertial term}) using the lumped mass matrix derived in \ref{App: Mass matrix}. Additionally, no external forces are applied, and temperature is incorporated via a thermostat as in \cite{xiong2014prediction} and \cite{chen2018passing}. The internal force density $\textbf{F}_{int}^{\alpha}$ is the most computationally expensive term, and we evaluate it numerically using Gaussian integration as discussed in \ref{App: Gaussian integration}. By using this finite element implementation of CAC, most of the degrees of freedom in the coarse-scaled regions are eliminated. For critical regions where atomistic behavior is required, the finest mesh is used such that each rhombus ``element" consists exclusively of four atoms with no additional lattice points. Thus, CAC uses AFT to produce a unified theoretical framework between the fine-scaled and coarse-scaled regions. A unique feature of CAC is that in the finite element implementation, element connectivity is not required because the nonlocal interatomic force field is the only constitutive relation \cite{xiong2011coarse}. This is similar to aspects of the cohesive zone model \cite{needleman1987continuum} and greatly simplifies the implementation of both the mass matrix and the force calculations. \subsection{Two-dimensional formulation} \label{Sec: Two-dimensional formulation} Rhombohedral elements are utilized within the CAC formulation to replicate the primitive unit cell of a monocrystalline lattice (FCC in the present work). A sketch of this can be seen in Fig. \ref{Fig:Primitive_Unit_Cell}, where we observe the primitive unit cell (blue lines) within the broader FCC crystal structure. \begin{figure}[htpb] \centering \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{PrimitiveUnitCell.JPG} \caption{} \label{Fig:Primitive_Unit_Cell} \end{subfigure} \begin{subfigure}{0.61\textwidth} \centering \includegraphics[width=0.5\textwidth]{2D_CAC_Element_1.PNG} \caption{} \label{Fig:RhombusElementSchematic} \end{subfigure} \caption{(a) Rhombohedral element constituting the primitive unit cell (blue lines) of an FCC lattice. The shaded region represents the two-dimensional rhombus element utilized in the present formulation.
(b) Schematic of the two-dimensional rhombus element.} \end{figure} Furthermore, the shaded region represents the two-dimensional atomic plane used in our formulation whereby rhombus elements are incorporated throughout the domain. Since the same constitutive relation is used both within and between elements, dislocations and cracks emerge naturally through the separation of finite elements \cite{xiong2011coarse}. This is a direct result of the CAC governing equations, and it allows such defects to pass smoothly between the atomistic and continuum regions without deforming individual elements. A schematic of the two-dimensional rhombus element can be seen in Fig. \ref{Fig:RhombusElementSchematic}. Here, the black circles represent the four nodes where the governing equations are applied, and the grey circles represent the lattice points which serve as nodal neighbors and thus aid in the force calculations. For monatomic crystals, each nodal location (unit cell) only contains one atom, and the positions of the lattice points are interpolated using Eq. (\ref{Eq: Approximate Displacement Field}) throughout the element. We emphasize that the lattice points are excluded from the Verlet algorithm. Finally, since no external forces are applied in this work, the governing equations from Sec. \ref{Sec: Finite element implementation} reduce to the following: \begin{equation} \label{Eq: Matrix Form of Governing Equation in 2D} \textbf{M} \Ddot{\textbf{U}} - \textbf{F}^{int} = \textbf{0} \end{equation} where \begin{equation} \label{Eq: Mass Matrix in 2D} \textbf{M} = \int_{\Omega(\textbf{x})} \left[\rho \boldsymbol{\Phi}(\textbf{x}) \boldsymbol{\Phi}(\textbf{x}) \right] d\textbf{x} \end{equation} \begin{equation} \label{Eq: Internal Force Density in 2D} \textbf{F}^{int} = \int_{\Omega(\textbf{x})} \boldsymbol{\Phi}(\textbf{x}) \int_{\Omega(\textbf{x}')} \sum_{j = 1}^{n_{\alpha}} \textbf{f} \left[\boldsymbol{\Phi}(\textbf{x}) \textbf{U}_i - \boldsymbol{\Phi}(\textbf{x}') \textbf{U}_j \right] d\textbf{x}' d\textbf{x} = \int_{\Omega(\textbf{x})} \boldsymbol{\Phi}(\textbf{x}) \textbf{f}^{int}(\textbf{x}) d\textbf{x}. \end{equation} In Eq. (\ref{Eq: Matrix Form of Governing Equation in 2D}), $\textbf{M}$ is the mass matrix, and \ref{App: Mass matrix} provides a full derivation of this term. In brief, we utilize the lumped mass matrix approach in the present formulation which effectively reduces $\textbf{M}$ to the following expression for each element: \begin{equation} \label{Eq: Reduced Mass Matrix} \textbf{M} = \frac{m N_{ppe}}{N_{npe}} \end{equation} where $m$ is the atomic mass, $N_{ppe}$ is the number of particles per element (including lattice points), and $N_{npe}$ is the number of nodes per element \cite{xu2015quasistatic}. The terms $\Ddot{\textbf{U}}$ and $\textbf{F}^{int}$ are vectors of the respective accelerations and internal forces for each atom/node in the lattice, and $n_{\alpha}$ represents the total number of neighbors of particle $i$ within a specified cutoff radius. Furthermore, the force $\textbf{f}^{int}(\textbf{x})$ on particle $i$ at position $\textbf{x}$ is obtained exclusively from the interatomic potential function through relative displacements of particles, and the corresponding net force is obtained through Gaussian integration (see \ref{App: Gaussian integration}). When calculating the force $\textbf{f}^{int}(\textbf{x})$ for a node in the coarse-scaled region, the surrounding lattice points are taken as neighbors.
The only difference in the fine-scaled force calculations is that the neighbors of atoms are other atoms rather than lattice points. \section{Shock Propagation Technique} \label{Sec: Shock Propagation Technique} \subsection{Shock initialization}\label{Sec: Shock initialization} For each simulation, the shock wave is characterized using either the Hugoniot (Sec. \ref{Sec: Hugoniot shock equations}) or Eulerian (Sec. \ref{Sec: Eulerian shock equations}) governing equations, and the shock front is achieved by dividing the grid from Fig. \ref{Fig:2D_CAC_Framework} into different regions as seen in Fig. \ref{Fig:2DCAC_ShockWave_Geometry}. The boundary particles within each continuum domain (red circles) constitute the \textit{thermostat regions} (TRs) and are categorized as ``damped'' atoms since they apply a constant temperature to the lattice through the Langevin thermostat. Furthermore, a small band of inner atoms at each A-C interface is also damped to ensure that the \textit{window region} (WR) made up of ``undamped'' atoms (blue circles) achieves the correct canonical ensemble \cite{davis2022moving}. We note that as in \cite{qu2005finite}, the nodes (black circles) are left undamped to prevent spurious behavior within each element. The shock wave front (SWF) originates at the center of the WR and travels to the right along the positive $x$-direction with a speed of $U_S$. Particles to the right of the SWF constitute the unshocked material while particles to the left constitute the shocked material. \begin{figure}[htpb] \centering \includegraphics[width=0.9\textwidth]{2DCAC_ShockWave_Geometry.JPG} \caption{CAC geometry used for shock wave simulations. Here, the red circles represent damped atoms, the blue circles represent undamped atoms, and the black circles represent nodes.} \label{Fig:2DCAC_ShockWave_Geometry} \end{figure} To initialize the shock, we assign a final strain $\epsilon^+$ to the shocked material and use either Eqs. (\ref{Eq: Continuity Equation}) and (\ref{Eq: Linear Law}) for the Hugoniot formulation or Eqs. (\ref{Eq: ClaytonEulerianParticleVelocity}) and (\ref{Eq: ClaytonEulerianShockVelocity}) for the Eulerian formulation to obtain the mean particle velocity $v^+$ and shock front velocity $U_S$. The Hugoniot parameters $C_0$ and $S$ as well as the elastic constants $C_{11}$, $\Hat{C}_{111}$, and $\Hat{C}_{1111}$ are initially assigned their literature values given in Table \ref{Table: ShockParameters}. The particle velocity $v^+$ represents the new equilibrium velocity for the shocked region, and the strain $\epsilon^+$ causes the lattice to compress uniaxially such that particles behind the SWF obey the Cauchy-Born rule. As a result, the shocked region achieves its final state and the SWF begins to propagate forward starting at the center of the WR. The temperature $\theta^+$ calculated from either Eq. (\ref{Eq: Shock Heat Equation}) or (\ref{Eq: ClaytonEulerianTemperature}) is applied to the shocked TR, and each TR is far enough away from the non-equilibrium SWF to be considered within a region of ``local'' equilibrium. Hence, we can legitimately apply the Langevin thermostat to the strained portion of the domain \cite{maillet2000uniaxial}. In this work, we overcome the runtime-limiting obstacle of boundary reflections present in traditional NEMD shock wave simulations by incorporating two moving window techniques into the multiscale framework.
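Before describing these two techniques, we sketch the Hugoniot branch of the initialization numerically. The sketch below assumes that Eq. (\ref{Eq: Linear Law}) is the standard linear relation $U_S = C_0 + S v^+$ and that Eq. (\ref{Eq: Continuity Equation}) reduces to $v^+ = \epsilon^+ U_S$ for uniaxial compression; the function name and the numerical values are illustrative rather than taken from the present implementation:

\begin{verbatim}
def hugoniot_initial_state(eps, C0, S):
    """Solve U_S = C0 + S*v and v = eps*U_S simultaneously for the
    shock front velocity U_S and the particle velocity v+ assigned
    to the shocked region."""
    Us = C0 / (1.0 - S * eps)  # closed-form solution of the two relations
    v = eps * Us               # new equilibrium velocity behind the front
    return Us, v

# Illustrative Cu-like Hugoniot parameters (C0 in m/s, S dimensionless)
Us, v_plus = hugoniot_initial_state(eps=0.05, C0=3.94e3, S=1.49)
\end{verbatim}

The Eulerian branch proceeds identically, with these two relations replaced by the higher-order expressions of Eqs. (\ref{Eq: ClaytonEulerianParticleVelocity}) and (\ref{Eq: ClaytonEulerianShockVelocity}).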
The first technique, known as the \textit{conveyor} method, draws inspiration from the moving boundary conditions used in \cite{holland1998ideal} and \cite{selinger2000dynamic} to model dynamic crack propagation as well as the atomic insertion scheme from \cite{Zhakhovskii1997} and \cite{zhakhovsky2011two} to model piston-driven shocks. The second technique, known as the \textit{coarsen-refine} method, has similarities to mesh refinement schemes used in finite element \cite{berger1989local,greco2015crack} as well as atomistic-continuum \cite{xu2016mesh,tembhekar2017automatic,amor2021adaptive} frameworks. Both techniques serve to track the propagating shock front over engineering length scales and time frames by eliminating shock-boundary reflections, and a description of each can be found in the following sections. \subsection{Conveyor method} \label{Sec: Conveyor method} Figure \ref{Fig:2DCAC_Conveyor_Method} provides a schematic of the conveyor technique for the two-dimensional CAC framework. This technique is similar to the scheme found in \cite{davis2022moving} for one dimension, but the higher-dimensional lattice introduces additional intricacies and complexities. After the SWF has traveled one lattice spacing ($a_{lat}$) along the positive $x$-direction from the center of the WR, the initial position, displacement, velocity, and acceleration of particles in the first two columns of the grid are set equal to the parameters of their rightmost neighbors within the same row. The neighbors may be either boundary atoms, nodes, or lattice points, but if they are lattice points, the Verlet parameters are first interpolated as discussed in Sec. \ref{Sec: Finite element implementation}. Effectively, the parameters of particles within the first two columns of the lattice are removed from the simulation as is noted in the figure by the leftmost arrow. \begin{figure}[htpb] \centering \includegraphics[width=0.9\textwidth]{Shock_Conveyor_2D.PNG} \caption{Schematic of the moving window \textit{conveyor} technique for the 2D CAC framework. The white circles represent removed particle locations while the gold/orange circles represent inserted particle locations.} \label{Fig:2DCAC_Conveyor_Method} \end{figure} This process continues throughout the entire domain from the beginning of the shocked region to the end of the unshocked region, and we note that only the initial positions of the lattice points are updated since their displacements, velocities, and accelerations are interpolated during the integration algorithm. Particles in the final column of the domain (denoted by the gold and orange circles in Fig. \ref{Fig:2DCAC_Conveyor_Method}) are given new initial $x$-positions which are one lattice spacing greater than their current initial $x$-positions, and their $y$-positions remain the same. Furthermore, their displacements, velocities, and accelerations are all set equal to zero, and local atomic energy fluctuations induced near $x_F$ are damped by the Langevin thermostat as in \cite{zhakhovsky2011two}. This conveyor mechanism occurs with a frequency of $\tau^{-1} = U_{S} / a_{lat}$, and if the simulated and analytical shock velocities are the same, the SWF will remain stationary at the center of the WR for the entire runtime. The resulting time resolution of $a_{lat}/U_S$ is thus optimized for the given shock propagation velocity, but higher time resolutions are achievable depending on the speed of the phenomenon in question.
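The bookkeeping behind one conveyor event can be summarized with the following minimal sketch, which assumes the per-particle Verlet parameters are stored as arrays indexed by (row, column); the array names are illustrative, and the Langevin damping of the inserted column as well as the interpolation for lattice points are omitted:

\begin{verbatim}
import numpy as np

def conveyor_shift(state, a_lat):
    """One conveyor event (fired with frequency U_S / a_lat): parameters
    flow one column to the left, and the rightmost column is re-seeded
    as fresh, unshocked material at rest."""
    for key in ("disp", "vel", "acc"):
        state[key] = np.roll(state[key], -1, axis=1)  # copy from right-hand neighbors
        state[key][:, -1] = 0.0                       # inserted particles start at rest
    state["x0"] = np.roll(state["x0"], -1, axis=1)    # initial x-positions shift as well
    state["x0"][:, -1] = state["x0"][:, -2] + a_lat   # one lattice spacing further out
    return state
\end{verbatim}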
\subsection{Coarsen-refine method} \label{Sec: Coarsen-refine method} A schematic of the coarsen-refine method can be seen in Fig. \ref{Fig:2DCAC_CoarsenRefine_Method}. Here, after the SWF has traveled a distance equal to half the sum of the element diagonal length ($e_{diag}$) and the lattice spacing, the moving window mechanism begins, whereby material in the shocked continuum region is coarsened and material in the unshocked continuum region is refined. In the shocked region, coarsening is achieved by transforming the relevant particles into nodes and lattice points such that new elements appear in the previous atomic locations. On the other hand, in the unshocked region, refinement takes place by changing nodes and lattice points into fine-scaled particles through both parameter re-assignment and linear interpolation -- similar to what is done with the conveyor technique. This procedure effectively advances the fine-scaled region to the new SWF location as seen in Fig. \ref{Fig:2DCAC_CoarsenRefine_Method}. \begin{figure}[htpb] \centering \includegraphics[width=0.9\textwidth]{Coarsen_Refine_2D_Schematic.JPG} \caption{Schematic of the moving window \textit{coarsen-refine} technique for the 2D CAC framework.} \label{Fig:2DCAC_CoarsenRefine_Method} \end{figure} After this process completes, undamped particles at the A-C interfaces in the shocked material are redefined as damped particles and vice versa for particles in the unshocked material. Furthermore, the mass matrix is updated to reflect the new mass distribution within the lattice. This technique occurs iteratively with a frequency of $\tau^{-1} = U_S / \left[\tfrac{1}{2} \left(e_{diag}+a_{lat} \right)\right]$, and the integer time counter $n$ is increased by one each time the mechanism terminates (as shown in Fig. \ref{Fig:2DCAC_CoarsenRefine_Method}). When utilizing the coarsen-refine method, the entire two-dimensional grid remains stationary and only the boundaries of the fine-scaled region are modified. As a result, most of the domain can be populated with finite elements while a comparatively small section of atoms tracks the propagating shock wave through the lattice. This technique thus reflects a deliberate balance between the overall efficiency and the overall accuracy of nonlinear shock wave modeling. \section{Elastic Anisotropy: Crystal Orientation Dependence of the Shock Propagation Response} \label{Sec: Elastic Anisotropy: Crystal Orientation Dependence on Shock Propagation Response} In this section, we elaborate on the shock velocity and longitudinal stress results obtained with both the Hugoniot and Eulerian formulations and discuss how they relate to the directional anisotropy of materials subject to shock impact. Recent NEMD works have studied shock propagation along different lattice directions of single crystals and observed a significant orientation dependence of the material's shock response \cite{germann2000orientation,bringa2004atomistic,lin2014effects,neogi2017shock}. This phenomenon has also been documented for elastic shock waves in small-scale, atomistic domains \cite{zimmerman2011elastic,davis2020one}. Interestingly, large-scale experimental studies have not shown the same orientation dependence of shock parameters \cite{chau2010shock}, but this may be due to the fact that bulk crystals naturally have more defects than can be feasibly represented using atomistic techniques \cite{lin2014effects}.
The present work provides a unique insight into this phenomenon because the CAC domain is modeled after the primitive unit cell of an FCC lattice. Hence, the shock travels along the [112] longitudinal direction, and the [110] direction is transverse to the direction of propagation. To the authors' knowledge, this is one of the first studies to analyze shock evolution along this particular orientation. \subsection{Simulation specifications} \label{Sec: Simulation specifications} The results in this section are obtained from shock wave simulations performed with the \textit{conveyor} moving window technique using the CAC domain described in Fig. \ref{Fig:2DCAC_ShockWave_Geometry}. For every simulation, the left and right coarse-scaled regions each contain $250$ particle columns for a total length of $250a_{lat}$, and each element diagonal has a length of $8a_{lat}$. Furthermore, the fine-scaled region contains $2500$ particle columns, and the length of each element diagonal is merely the lattice spacing ($a_{lat}$). Additionally, each atomistic TR band contains $20$ columns -- much wider than the force range to ensure the WR achieves the desired temperature. Simulations are conducted for compressive strains ($\epsilon^+$) ranging from $1\%$ to $9\%$ and $1\%$ to $8\%$ for Cu and Al respectively (see \ref{App: Stress-strain relations}), and the total runtime is $2$ ns. A velocity profile of the two-dimensional shocked lattice can be seen in Fig. \ref{Fig:2D_Shock_Conveyor_Simulation}a. Specifically, we track the SWF over time in MATLAB by taking a column average of the particle velocities as shown in Fig. \ref{Fig:2D_Shock_Conveyor_Simulation}b. \begin{figure}[htpb] \centering \begin{subfigure}{0.9\textwidth} \includegraphics[width=\textwidth]{2D_Shock_Conveyor.PNG} \caption{} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{Average_Velocity_Profile_of_Shock.png} \caption{} \end{subfigure} \caption{Velocity profiles of the propagating shock in the CAC framework. (a) SWF in the two-dimensional grid (not to scale); (b) SWF obtained from averaging the column velocities of the lattice.} \label{Fig:2D_Shock_Conveyor_Simulation} \end{figure} \subsection{Shock velocity results} \label{Sec: Shock velocity results} Shock velocity results obtained for both Hugoniot and Eulerian theory can be seen in Figs. \ref{Fig:Shock_Hugoniot_Conveyor_Velocity} and \ref{Fig:Shock_Eulerian_Conveyor_Velocity} respectively. Specifically, Fig. \ref{Fig:Shock_Hugoniot_Conveyor_Velocity} displays the shock velocity vs. particle velocity data (as well as the derived Hugoniot equations) of four different sets of simulations using both (a) Cu and (b) Al. Here, the blue line represents the polycrystalline Hugoniot calculated in \cite{marsh1980lasl}, and the green data points are the average velocity results for shocks propagating through the standard CAC domain. As a comparison, we also invert the lattice such that the [110] orientation lies along the $x$-direction and the [112] orientation lies along the $y$-direction; these results are given by the red data points. As in \ref{App: Stress-strain relations}, we performed stress vs. strain studies for this inverted lattice and found yielding to occur at 9\% strain for Cu and 8\% strain for Al, so we maintain $\epsilon^+$ values below these elastic limits when simulating shocks along the [110] direction.
Finally, we also present one-dimensional atomistic shock data obtained from \cite{davis2020one} for Cu and calculated in this work for Al. \begin{figure}[htpb] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{2D_CAC_Hugoniot_Shock_Velocities_Cu.png} \caption{} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{2D_CAC_Hugoniot_Shock_Velocities_Al.png} \caption{} \end{subfigure} \caption{Hugoniot shock wave results for both (a) Cu and (b) Al. The polycrystalline shock Hugoniot obtained from \cite{marsh1980lasl} is shown in blue. Two-dimensional CAC Hugoniot data obtained for shocks propagating along the [112] and [110] lattice directions are shown in green and red respectively. One-dimensional shock Hugoniots are given in orange. The Cu Hugoniot comes from \cite{davis2020one}, and the Al Hugoniot is calculated in the present work.} \label{Fig:Shock_Hugoniot_Conveyor_Velocity} \end{figure} \begin{figure}[htpb] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{2D_CAC_Eulerian_Shock_Velocities_Cu.png} \caption{} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{2D_CAC_Eulerian_Shock_Velocities_Al.png} \caption{} \end{subfigure} \caption{Eulerian shock results for both (a) Cu and (b) Al. The blue line represents velocities obtained from fourth-order Eulerian theory. Two-dimensional CAC data obtained for shocks propagating along the [112] and [110] lattice directions are shown in green and red respectively. One-dimensional CAC data obtained from \cite{davis2022moving} are in orange.} \label{Fig:Shock_Eulerian_Conveyor_Velocity} \end{figure} The data and associated Hugoniot equations in Fig. \ref{Fig:Shock_Hugoniot_Conveyor_Velocity} clearly show the dependence of a shock's propagation velocity on the given lattice orientation. In particular, both of the two-dimensional CAC Hugoniots have $C_0$ and $S$ values which are greater than those of the standard polycrystalline Hugoniot. This is most likely due to the fact that the FCC primitive unit cell is rhombohedral instead of cubic, so the entire CAC lattice is more compressed than a traditional structured FCC grid. This causes the particles in the domain to be more closely packed, which results in larger forces from the interatomic potential and hence higher shock velocities. As expected, the inverted CAC lattice produces slightly higher shock velocities than the lattice from Fig. \ref{Fig:2DCAC_ShockWave_Geometry} since the [110] lattice spacing is shorter than the [112] spacing. Finally, the one-dimensional shock velocities are greater than those from the two-dimensional simulations due to the lack of any transverse motion, which naturally dampens the shock speed. Instead, the 1D results are comparable to plane-plane collisions in a bulk lattice \cite{tsai1966shock}. We observe a similar phenomenon for the Eulerian results in Fig. \ref{Fig:Shock_Eulerian_Conveyor_Velocity} where we now plot average shock velocity vs. applied strain. Here, the green and red data points are from the same types of 2D simulations as those from Fig. \ref{Fig:Shock_Hugoniot_Conveyor_Velocity}. However, the blue line now represents the analytical results from fourth-order Eulerian theory, and the orange data points are 1D CAC shock results obtained from \cite{davis2022moving}.
As seen previously, the 1D shock velocities are slightly greater than the 2D velocities from the present study, and the inverted CAC lattice has a higher slope than the standard CAC lattice. For Cu, the shock velocities predicted at higher strains by Eulerian theory are indeed lower than the 2D and 1D CAC results, but this is not the case for Al. The reason for the anomalous results with Al is not entirely clear, but it may stem from the prevalence of aluminum alloys, whose altered density could affect the model's outcomes. Nonetheless, we observe qualitative compatibility between the Hugoniot and Eulerian formulations, which gives us confidence that the current CAC framework produces accurate results and can thus be reliably used to measure the response of materials to shock propagation along various lattice directions. \subsection{Longitudinal stress results} \label{Sec: Longitudinal stress results} To supplement the anisotropic shock velocity results from Sec. \ref{Sec: Shock velocity results}, we perform longitudinal stress vs. strain studies using the shocked data for both Cu and Al, and these results can be seen in Fig. \ref{Fig:Longitudinal_Shock_Stress}. Specifically, we calculate the time-averaged virial (thermodynamic) stress ($\sigma_{xx}$) in the shocked region using Eq. (\ref{Eq: Virial Stress}), and we relate the Cauchy stress ($P_{xx}$) to the virial stress as follows \cite{zimmerman2011elastic}: \begin{equation} \label{Eq: Cauchy Stress} P_{xx} = (1-\epsilon)\sigma_{xx} \end{equation} where we note that compressive stress/strain is considered positive. Fig. \ref{Fig:Longitudinal_Shock_Stress} shows the shock stress $P_{xx}$ normalized by the second-order elastic constant $C_{11}$ as a function of the applied strain. The data from Hugoniot and Eulerian theory were practically identical, so without loss of generality, we only exhibit the Eulerian results. The [100] second, third, and fourth-order Eulerian models are represented by the blue, orange, and green lines respectively, while the [112] and [110] shock stress data are represented by the purple circles and gold diamonds respectively. As in Sec. \ref{Sec: Shock velocity results}, we clearly observe the orientation dependence of the shock stress, as the CAC data is significantly higher than that predicted by the various Eulerian models for shocks along the [100] direction. Furthermore, the [110] CAC simulations produced shock stresses which were slightly higher than those from the [112] simulations. Again, this is primarily due to the higher compression velocities caused by the greater `compactness' of CAC domains. This anisotropic stress data is congruent with a previous work which analyzed elastic shocks along various lattice directions using a number of different potential functions \cite{zimmerman2011elastic}. \begin{figure}[htpb] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{Longitudinal_Shock_Stress_Cu.png} \caption{} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{Longitudinal_Shock_Stress_Al.png} \caption{} \end{subfigure} \caption{Longitudinal stress data for both (a) Cu and (b) Al. The blue, orange, and green lines represent the [100] results from 2nd, 3rd, and 4th-order Eulerian theory respectively.
The purple circles and gold diamonds represent the [112] and [110] CAC data respectively.} \label{Fig:Longitudinal_Shock_Stress} \end{figure} \section{Results with the Coarsen-Refine Method and Formulation Efficiency} \label{Sec: Results with the Coarsen-Refine Method and Formulation Efficiency} Without loss of generality, we only reference data from Eulerian theory in this section as both shock models gave similar quantitative results. \subsection{Coarsen-refine simulations} In Fig. \ref{Fig:Coarsen_Refine_2D_Simulation}, we present results from a shock wave simulation performed using the \textit{coarsen-refine} technique over $6$ ns. Here, we can observe the atomistic portion of the domain successfully follow the evolving shock front throughout the CAC framework with no spurious wave behavior at the A-C interfaces. Due to the elastic nature of the shock as discussed in Sec. \ref{Sec: Elastic shock waves}, no dislocations are present to the left of the wave front, but we do see the shocked material maintain the mean particle velocity of $v^+$ for the entire runtime. These results contrast with those obtained using the conveyor technique because now the SWF may travel through the entire CAC domain while staying within the fine-scaled region. Although previous work has used mesh refinement to study phenomena within both finite-element \cite{berger1989local} and multiscale \cite{xu2016mesh,tembhekar2017automatic,amor2021adaptive} schemes, utilizing simultaneous refine/coarsen techniques to study dynamic, high-temperature phenomena is still a challenging area of research \cite{davis2022moving}. Thus, the present formulation provides a novel means for tracking propagating shocks over long runtimes, and may be used to study even more complex lattice structures in the future, such as nanoscale composites or high-entropy alloys. \begin{figure}[htpb] \centering \includegraphics[width=0.9\textwidth]{Coarsen_Refine_2D.PNG} \caption{Shock simulation using the coarsen-refine moving window technique.} \label{Fig:Coarsen_Refine_2D_Simulation} \end{figure} \subsection{Shock structure and planarity} We now use the coarsen-refine simulations to analyze the shock front's spatial width over $5$ ns, and the results for $\epsilon^+ = -0.06$ can be seen in Fig. \ref{Fig:2D_Shock_Width}. As a comparison, we also show the 1D CAC results from \cite{davis2022moving}. Unlike the 1D data, the present work shows a clear steadiness in the shock wave behavior, as evidenced by the fact that the shock width remains constant throughout the simulation with very little deviation from the mean. We also do not observe a significant change in the shock front's planarity throughout the simulation's duration. Finally, similar results were found for both Cu and Al over the range of strains studied with the present formulation. Clearly, for shock waves modeled at the microscale, the ability of particles to oscillate transversely to the direction of shock propagation plays a large role in the overall steadiness of the wave. These results are similar to findings from previous NEMD studies which observed a change in shock structure and steadiness when transitioning from a 1D to a 3D regime \cite{holian1995atomistic}. In particular, the transition from unsteady to steady waves was due to the ``increase in coupling between vibrational excitations normal and transverse to the direction of shock wave propagation'' \cite{holian1979molecular}. Our work shows this for two dimensions as well.
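For concreteness, a width estimate of this kind can be extracted directly from the column-averaged velocity profile. The sketch below uses a 10\%--90\% velocity-rise criterion, which is our illustrative assumption rather than the exact definition employed in this work:

\begin{verbatim}
import numpy as np

def front_position_and_width(vx, v_plus, x):
    """Estimate the shock front location and spatial width from a 2D
    array of particle velocities vx (rows: transverse direction,
    columns: x), assuming shocked material (~v_plus) on the left and
    unshocked material (~0) on the right."""
    profile = vx.mean(axis=0)           # column average -> 1D profile v(x)
    hi, lo = 0.9 * v_plus, 0.1 * v_plus
    x_hi = x[np.argmax(profile < hi)]   # first column dropping below 90% of v+
    x_lo = x[np.argmax(profile < lo)]   # first column dropping below 10% of v+
    return 0.5 * (x_hi + x_lo), abs(x_lo - x_hi)
\end{verbatim}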
\begin{figure}[htpb] \centering \includegraphics[width=0.5\textwidth]{2D_Shock_Width.png} \caption{Spatial shock width over time. The blue and red circles represent the 1D CAC data from \cite{davis2022moving} for both Cu and Al respectively. The gold squares and purple diamonds represent the 2D CAC data for Cu and Al from the present work.} \label{Fig:2D_Shock_Width} \end{figure} \subsection{Framework speedup and efficiency} For the sake of completeness, we now present results for speedup/efficiency tests which compare the two-dimensional moving window CAC framework to equally-sized NEMD domains. The data from these two studies can be seen in Fig. \ref{Fig:Domain_Size_Speedup}. Specifically, in Fig. \ref{Fig:Domain_Size_Speedup}a, we maintain a constant ratio in the CAC lattice such that the fine-scaled region is always one-tenth the length of the entire grid, and we run simulations for increasing domain sizes. We observe the CAC vs. MD efficiency reach an asymptotic value of around 81\% (further increases in domain size did not significantly affect the speedup percentage). Next, in Fig. \ref{Fig:Domain_Size_Speedup}b, we keep the total lattice size constant and vary the extent of the coarse-scaled region from 0\% to 100\% of the total area. Clearly, as the percentage of the lattice that is coarse-scaled increases, the speedup does as well, and we note that this increase appears to be fairly linear. These studies demonstrate the utility of using the present CAC framework to enhance performance in large-scale simulations. \begin{figure}[htpb] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{2D_CAC_Domain_Size_Speedup.png} \caption{} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{2D_CAC_Continuum_Size_Speedup.png} \caption{} \end{subfigure} \caption{Efficiency of the CAC framework vs equally-sized MD domains. In (a), the total runtimes are compared for increasing system sizes. Here, the central fine-scaled region of the CAC lattice is always 1/10 the length of the entire grid. In (b), the simulation speedup is shown when the size of the domain remains constant, but the coarse-scaled region increases from 0\% to 100\% of the lattice.} \label{Fig:Domain_Size_Speedup} \end{figure} \section{Conclusion} \label{Sec: Conclusion} In this paper, we developed a dynamic moving window CAC framework to simulate shock wave propagation through a two-dimensional, single-crystal lattice. Specifically, we characterized the shock using both the linear Hugoniot \cite{davison2008fundamentals} and nonlinear Eulerian \cite{clayton2013nonlinear} shock equations to study the classic Riemann problem of a single discontinuity traveling through an infinite medium. The CAC multiscale formulation was utilized for its ability to seamlessly transition between the fine-scaled and coarse-scaled regions, and many verifications and analyses were conducted on the higher-dimensional system. We elaborated on the technique used to initialize the shock front in the lattice and described two moving window methods which were incorporated into the domain. These schemes provided a mechanism to study the evolution of the shock over very long simulation times by preventing non-physical wave reflections at the A-C interfaces. We performed many shock wave simulations within the CAC framework and used the moving window techniques to track the shock front through two different FCC materials: Cu and Al.
The unique lattice directions inherent to the CAC formulation provided us with the opportunity to study how directional anisotropies in single crystals can give rise to orientation-dependent shock velocities. We observed that longitudinal shocks traveling along the [112] and [110] directions of the CAC domain propagated at distinct velocities for a given strain and particle velocity. These shock velocities were also different from those predicted by polycrystalline Hugoniot and Eulerian analytical models as well as previous one-dimensional atomistic and multiscale data. From these results, we were able to derive new Hugoniot parameters for the CAC formulation, and longitudinal stress calculations further validated the observed anisotropic material response. Our data agreed qualitatively with the results from previous NEMD studies which identified this orientation-dependence of shock evolution in solids \cite{germann2000orientation,bringa2004atomistic,lin2014effects,neogi2017shock}. Next, in Sec. \ref{Sec: Results with the Coarsen-Refine Method and Formulation Efficiency}, we exhibited the capability and novelty of the present framework by using the coarsen-refine technique to track a propagating shock wave through the entire grid. By leveraging concepts from previous atomistic and finite element schemes as well as exploiting the unique qualities of the CAC formulation, the fine-scaled region could travel through the domain at the speed of the moving wave front, and we noted the significance of this for advancing non-equilibrium multiscale research. We utilized this technique to study the shock's structure and planarity over very long runtimes which are typically unattainable in traditional NEMD methods. Finally, we presented multiple plots comparing the efficiency of an NEMD system to an equally-sized CAC lattice. We observed that the present moving window multiscale scheme had significantly faster runtimes for various domain sizes -- a necessary quality for realistic and scalable atomistic-continuum models. The present work is innovative in its own right, but it also opens the door to more complex research involving the use of multiscale domains to simulate dynamic, nonlinear phenomena over engineering length scales. While we focused only on elastic shock waves in this work, we hope to expand this formulation to model elastic-plastic shocks \cite{lloyd2014simulation} in polycrystalline materials to study the role of grain boundaries in shock evolution. Additionally, recent works have used both atomistic \cite{shen2022uncovering,jiang2022molecular} as well as multiscale \cite{chu2022multiscale,elahi2022multiscale} methods to predict material behavior in medium-entropy and high-entropy alloys. This work provides a framework to study shock propagation through such materials. Furthermore, we would also like to utilize machine learning algorithms in this scheme to pass information from the mesoscale to the macroscale \cite{xiao2021machine}. Finally, we hope to incorporate a high-frequency wave passing technique that was first introduced in \cite{chen2018passing} and \cite{DAVIS2022111702} into the present formulation to study shock scattering and the role of scattered waves in subsequent material behavior. \section{Acknowledgments} \label{Sec: Acknowledgments} This material is based upon work supported by the National Science Foundation under Grant No. $1950488$. Financial support was also provided by the U.S.
Department of Defense through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program (F-$1656215698$). Simulations were performed using the Easley computing cluster at Auburn University. \bibliographystyle{ieeetr}
{ "timestamp": "2022-09-23T02:14:52", "yymm": "2209", "arxiv_id": "2209.10985", "language": "en", "url": "https://arxiv.org/abs/2209.10985" }
\section{Introduction} Magnetic Resonance Imaging (MRI) is the standard imaging modality for the diagnosis and follow-up of Multiple Sclerosis (MS). It allows a direct observation of brain lesions produced by the disease and provides information about the pathology stage or treatment efficiency. Deep Learning (DL) approaches, based on a trained U-Net-like neural network, are invaluable tools to automatically delineate MS lesions \cite{shoeibi2021applications}. Although powerful and versatile, these models provide segmentation maps that are typically opaque, with no indication regarding their certainty. This hinders full acceptance of DL models in clinical routine, for which uncertainty attached to the computerized results is essential for their interpretation and to avoid misleading predictions. A variety of methods have been proposed to quantify the uncertainty attached to deep neural networks \cite{abdar2021review}. Among them, the Monte Carlo (MC) dropout stands out as one of the simplest approaches, as it can be applied to any model trained with the dropout technique \cite{srivastava2014dropout}. Such a model can be interpreted as a Bayesian neural network, giving access to the interesting properties of these probabilistic models regarding quantification of their uncertainty \cite{gal2016dropout}. More specifically, at inference, for a given input, multiple stochastic forward passes are computed by keeping dropout activated, corresponding to empirical samples from the approximated predictive distribution. This produces a set of softmax probabilities that can further be used to compute uncertainty estimates. Applied to MRI segmentation, the MC dropout method produces uncertainty metrics for each voxel in the volume, resulting in so-called voxel-wise uncertainty maps \cite{sander2019towards,jungo2017towards,nair2020exploring}. The clinically-relevant information, however, is at a higher level, typically at the instance (lesion, tissue) level. Natural ways to obtain such instance-wise uncertainties, meaning the uncertainties attached to each connected component within the output segmentation, are through a \textit{post hoc} aggregation of voxel-wise uncertainty estimations. Existing approaches include computing the mean uncertainty of voxels belonging to the same class in the segmentation \cite{roy2019bayesian} (thus producing one uncertainty estimate per class, rather than per connected component). In the context of MS, lesion-wise uncertainty was also estimated using the logsum of the connected voxels' uncertainties \cite{nair2020exploring}. Using the mean implies that each voxel's uncertainty contributes equally to the overall instance score, while the use of the logsum assumes that connected voxels are conditionally independent, given that they belong to the same instance. These highly simplified assumptions may degrade the quality of instance uncertainty computation. To go further, a side-learner called MetaSeg has been proposed to predict the Intersection over Union (IoU) of each individual segmented instance with the ground truth \cite{rottmann2020prediction}. For this task, a Linear Regression Model is trained based on a series of features derived from a standard segmentation model's output probabilities. The predicted score is then used as a marker of instance uncertainty. Yet, the input features of MetaSeg consist of averaged voxel-wise metrics, leading to the same restrictions as the previously described \textit{post hoc} aggregation methods.
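To fix ideas, the two aggregation rules just mentioned amount to the following minimal sketch (numpy; the names are illustrative):

\begin{verbatim}
import numpy as np

def lesion_uncertainty_mean(u_voxels):
    """Mean rule: every voxel of the component contributes equally."""
    return float(np.mean(u_voxels))

def lesion_uncertainty_logsum(u_voxels, eps=1e-12):
    """Logsum rule: sum of log-uncertainties, i.e. treating connected
    voxels as conditionally independent. Since log-uncertainties are
    negative, components with fewer voxels accumulate fewer negative
    terms and therefore receive higher (more uncertain) scores."""
    return float(np.sum(np.log(np.asarray(u_voxels) + eps)))
\end{verbatim}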
Additionally, it has been proposed to train an auxiliary Graph (Convolutional) Neural Network (GCNN) using the outputs of an MC dropout U-Net (i.e. voxel-wise segmentation and uncertainty maps) to refine the predicted masks \cite{soberanis2020uncertainty}. This approach, however, remains at the voxel level and focuses on 2D segmentation tasks. In this work, we propose to build on the last two methods to overcome their respective limitations. Indeed, we implement a GCNN at the output of a trained MC dropout U-Net model. Using the predicted 3D segmentation outputs, each individual segmented lesion is modeled as a graph whose nodes are its interconnected voxels. Node features are determined by the input and output of the U-Net, comprising the voxel image intensities, the voxel predicted label, and voxel-wise uncertainty maps. We implement two alternative variants of the proposed GCNN, either classification or regression, to quantify lesion uncertainty. We test our framework on a 3D binary segmentation task on MS data. Results demonstrate the superiority of our approach compared to known methods. \section{Our Framework: Graph modeling for lesion uncertainty quantification} \textit{Overview:} Consider an input image $X$ and a trained MC dropout segmentation model $\mathcal{N}$ with parameters $W$ that produces a segmentation $Y=\mathcal{N}(X, W)$ and a set of voxel-wise uncertainty maps $U_i$ (e.g. entropy, variance, PCS). Our objective is to quantify the uncertainty of each instance (i.e.\ lesion) in $Y$. To do so, we propose to train an auxiliary GCNN to predict this uncertainty directly from $X$, $Y$, and $U_i$ (see Figure \ref{framework}). \begin{figure}[!htb] \centering \includegraphics[width=\textwidth]{fig-framework-miccai.pdf} \caption{Illustration of the proposed framework for learning lesion uncertainty from the outputs of a Monte Carlo dropout model. See the text for details of each block.} \label{framework} \end{figure} \subsection{Monte Carlo dropout model and voxel-wise uncertainty} We use a generic 3D U-Net \cite{cciccek20163d} for its simplicity and popularity within the field, although our method can be employed with any segmentation model trained with dropout. We add 3D dropout \cite{tompson2015efficient} with a rate of $p=0.2$ at the end of each encoding and decoding block. The model is trained on annotated datasets composed of pairs of images: (i) input T2-weighted FLAIR MRI sequences $X$ and (ii) associated ground truth MS lesion segmentations $Y$. At inference, dropout is kept activated and $T$ forward passes are made for a new input volume $x^*$. We chose $T=20$, as it allows an optimal trade-off between inference time and quality of uncertainty estimates \cite{orlando2019u2}. From this set of predictions, several well-known voxel-related uncertainty metrics are extracted (see Figure \ref{framework}, part A): the entropy \cite{gal2017deep}, the variance \cite{kendall2015bayesian}, and the Predicted Confidence Score (PCS) \cite{zhang2020towards}. \subsection{Graph dataset generation} \subsubsection{Inference on Validation Dataset and Connected Component Analysis.} After training, the MC dropout U-Net is used to generate segmentation and uncertainty maps on the set-aside validation set of images. These predictions are used to generate training data for the auxiliary GCNN.
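For concreteness, a minimal sketch of this map-generation step is given below (PyTorch; the variance is averaged over classes and the PCS is taken as the gap between the two largest mean softmax probabilities, which are our illustrative readings of these scores):

\begin{verbatim}
import torch

@torch.no_grad()
def mc_dropout_maps(model, x, T=20):
    """T stochastic forward passes with dropout active, yielding the
    predicted segmentation and voxel-wise uncertainty maps."""
    model.train()  # keeps dropout sampling active at inference (in practice,
                   # only the dropout layers should be switched to train mode)
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    mean_p = probs.mean(dim=0)                                  # (B, C, D, H, W)
    seg = mean_p.argmax(dim=1)                                  # predicted labels
    entropy = -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)  # predictive entropy
    variance = probs.var(dim=0).mean(dim=1)                     # class-averaged variance
    top2 = mean_p.topk(2, dim=1).values
    pcs = top2[:, 0] - top2[:, 1]                               # confidence gap
    return seg, entropy, variance, pcs
\end{verbatim}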
We use a Connected Component Analysis (CCA) to identify each lesion in the segmentation masks using 26-connectivity --- meaning that a lesion is defined by voxels that are interconnected by their faces, edges, or corners. For each lesion identified by CCA, we compute the Adjusted Intersection Over Union ($IoU_{adj}$) \cite{rottmann2020prediction} with the ground truth lesions (see Figure \ref{framework}, part B). This variant of the IoU is suited for brain-abnormality segmentation, where a connected component in the ground truth can be divided into several pieces in the predicted segmentation. Identified lesions can exhibit a wide range of shapes and sizes. To learn from these data, we must thus design a neural network that can be employed regardless of the shape and size of the input structure. GCNNs, which can be interpreted as a generalization of the classic convolutional networks to non-Euclidean and irregular data, are thus particularly suitable for this task. \subsubsection{From voxels to graphs.} We first slightly dilate each lesion mask to include surrounding voxels at the border between classes, which typically convey useful information about uncertainty. We then convert the dilated mask into a graph by representing its voxels by nodes and neighborhood relationships by edges. Each node is further defined by a set of $n+4$ features: (i) the intensity of its corresponding voxel in each of the $n$ input MRI sequences, (ii) its binarized label (1 for the observed lesion class and 0 for all other classes), and its 3 voxel-wise uncertainty estimates: (iii) entropy, (iv) variance and (v) PCS (see Figure \ref{framework}, part C). In agreement with the aforementioned 26-connectivity CCA, each node (i.e. voxel) is connected in the graph to its 26 nearest neighbors. \subsection{GCNN architecture and training} Here, we use a lightweight GCNN architecture composed of 2 consecutive Graph Convolutional layers with a hidden dimension of $h=64$, followed by a Linear layer (see Figure \ref{framework}, part D). The model is trained using the graph dataset generated from the validation images, composed of graphs (transformed connected components obtained from the segmentation model) along with their associated ground truth ($IoU_{adj}$). As in \cite{rottmann2020prediction}, we propose two versions of our model: \begin{itemize} \item In the classification approach ($\text{GCNN}_{\text{Classif}}$), the $IoU_{adj}$ labels are first binarized as follows: FP if $IoU_{adj}(graph) < \epsilon$, and TP if $IoU_{adj}(graph) \geq \epsilon$. $\epsilon$ is a hyperparameter that we set to $0.1$ in our experiments, so that lesions with an $IoU_{adj}$ very close to 0 are not wrongly considered as TP. The network is then trained using the Cross-Entropy Loss. At inference, structural uncertainty is quantified by the graph FP probability. \item In the regression approach ($\text{GCNN}_{\text{Reg}}$), the model is directly trained to predict the graph $\widehat{IoU}_{adj}$, using the MSE loss. At inference, we use $1-\widehat{IoU}_{adj}$ as the structural uncertainty score. \end{itemize} \section{Material and Method} \subsection{Data} We combine two open-source MS datasets: from the University Hospital of Ljubljana (MSLUB) \cite{lesjak2018novel} and from the MICCAI 2016 MS segmentation challenge (MSSEG 2016) \cite{commowick2021multiple}. We thus use 83 manually-annotated 3D T2-FLAIR sequences.
Images are resampled to a \unit{1}{\milli\meter} isotropic resolution with a $160\times 192 \times 160$ matrix size to focus on brain tissues, and intensities are normalized to zero mean and unit variance. We opt for a 4-fold cross-validation scheme due to the limited number of images. In each fold, we put aside $25\%$ of the images for testing. From the remaining images, we use $20\%$ for validation and $80\%$ to train the model. During evaluation, results are averaged over the 4 folds. Due to the limited number of images, we extensively use Data Augmentation to train our models, comprising flipping, rotation, contrast alteration, Gaussian noise and blurring. \subsection{Comparison with known approaches} To evaluate the relevance of our proposed $\text{GCNN}_{\text{Classif}}$ and $\text{GCNN}_{\text{Reg}}$ approaches, we implement in parallel known approaches to obtain instance uncertainty from the U-Net. We use the mean and logsum of the voxel-wise uncertainty of each lesion, with the 3 different types of uncertainty. We name these methods $\text{Entropy}_{\text{mean}}$, $\text{Variance}_{\text{mean}}$, $\text{PCS}_{\text{mean}}$, $\text{Entropy}_{\text{logsum}}$, $\text{Variance}_{\text{logsum}}$, and $\text{PCS}_{\text{logsum}}$. As pointed out in \cite{nair2020exploring}, using the logsum assigns a higher uncertainty to small-size lesions. This appears sub-optimal as small lesions could be segmented with high confidence, especially in the case of MS lesions. To verify this point, we implement a naive approach, named Size, which assigns each lesion an uncertainty inversely proportional to its size: denoting by $S$ the lesion size (the number of voxels composing it), its uncertainty is computed as $1/S$. Lastly, we implement an approach inspired from the MetaSeg framework \cite{rottmann2020prediction}. We extract a series of features from each connected component in the validation dataset, consisting of the mean entropy, variance and PCS, as well as the size of the lesion. We then train a Logistic Regression classifier from these 4 features to distinguish between True Positive (TP) and FP lesions ($\text{MetaSeg}_{\text{Classif}}$). Alternatively, we train a Linear Regression model to directly predict $\widehat{IoU}_{adj}$ ($\text{MetaSeg}_{\text{Reg}}$). We use the outputs of these models to obtain lesion uncertainty as described for the GCNN approach. \subsection{Evaluation Setting} For medical applications, the ideal uncertainty quantification should attribute a higher uncertainty to FP lesions than to TP lesions, to allow for proper interpretation and evaluation of the results. To evaluate this properly, we use Accuracy-Confidence curves \cite{lakshminarayanan2017simple}. Briefly, the principle is to set aside the $\tau\%$ most uncertain predicted lesions in the test dataset, and measure the performance of the model on the remaining lesions by counting the number of FP and TP lesions. The threshold $\tau$ varies between 0 (all lesions are kept) and 100 (all lesions are removed). By plotting the couples (FP, TP) at different thresholds, we obtain an Accuracy-Confidence curve and compute the AUC (Area Under the Curve) score reflecting the quality of the estimated lesion uncertainty. FP and TP counts are normalized to the range $[0, 1]$ by dividing by the counts obtained without filtering (at $\tau = 0$). This metric depends only on the ranking of uncertainties and is thus independent of each method's uncertainty range, ensuring a fair comparison.
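A minimal sketch of this filtering-and-integration procedure follows (numpy; per-lesion uncertainty scores and TP/FP labels are assumed to be given, and all names are illustrative):

\begin{verbatim}
import numpy as np

def accuracy_confidence_auc(uncertainty, is_tp):
    """Set aside the tau% most uncertain lesions for tau = 0..100, count
    the remaining TP/FP (normalized by the unfiltered counts), and
    integrate the resulting (FP, TP) curve."""
    order = np.argsort(uncertainty)              # most certain lesions first
    labels = np.asarray(is_tp, dtype=bool)[order]
    n = len(labels)
    tp0 = max(labels.sum(), 1)                   # unfiltered TP count
    fp0 = max((~labels).sum(), 1)                # unfiltered FP count
    fps, tps = [], []
    for tau in range(101):
        keep = n - int(np.floor(tau / 100 * n))  # lesions kept after filtering
        kept = labels[:keep]
        tps.append(kept.sum() / tp0)
        fps.append((~kept).sum() / fp0)
    fps, tps = np.array(fps)[::-1], np.array(tps)[::-1]  # FP axis increasing
    return float(np.trapz(tps, fps))
\end{verbatim}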
We additionally evaluate the segmentation performance of the U-Net on the test datasets using the Dice coefficient, as well as the total number of TP and FP lesions. Finally, for each method, we control the correlation between the estimated uncertainty and the lesion size using Spearman's rank correlation coefficient ($\rho$). \begin{table}[!htb] \centering \caption{U-Net segmentation performance on the MS dataset and number of TP and FP lesions for each fold.}\label{tab_seg} \begin{tabular}{|c|c|c|c|c|} \hline Fold & 0 & 1 & 2 & 3 \\ \hline Dice & 0.672 & 0.645 & 0.705 & 0.693 \\ \hline \# TP lesions & 829 & 597 & 715 & 871 \\ \hline \# FP lesions & 525 & 294 & 353 & 454 \\ \hline \end{tabular} \end{table} \subsection{Implementation Details} \subsubsection{3D Segmentation U-Net} Our segmentation framework was implemented using PyTorch \cite{NEURIPS2019_9015}. We opt for a patch-based approach to train the segmentation U-Net, meaning that the $160\times192\times160$ MRI volumes are split into 3D patches of $160\times 192 \times 32$, decreasing the memory cost of training. We use a batch size of 5. The U-Net is trained with a combination of the Dice \cite{milletari2016v} and Cross-Entropy loss, using the ADAM optimizer \cite{ADAM} with a learning rate of $10^{-4}$ until convergence. For the training of the segmentation models, a single NVIDIA T4 GPU was used. \subsubsection{Graph Neural Networks} We use the Deep Graph Library \cite{wang2019deep} to implement and train the GCNN models. The training procedure of our GCNN is standard: we use the ADAM optimizer with a learning rate of $10^{-2}$ at the start of training, progressively decreased to $10^{-5}$. Graphs are presented to the network in batches of 10 graphs. Due to the small size of the GCNN models, they were trained on CPU, which took a couple of minutes in our experiments. \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{workshop_version.png} \caption{Accuracy-Confidence curves for the different methods. The associated AUC scores are indicated in brackets in the graph legends.} \label{res_auc} \end{figure} \section{Results and Discussion} Accuracy-Confidence curves are presented in Figure \ref{res_auc} along with the corresponding AUC values. Segmentation performance and correlation coefficients are presented in Tables \ref{tab_seg} and \ref{tab_auc}. Experimental results show that both models of the proposed framework outperform the classical methods by a significant margin, and that their performances are similar, with a slight advantage for the classification version. The naive Size approach achieves the lowest performance. Similarly, the $logsum$ approaches, also strongly correlated with the lesion size, have poorer performance than the $mean$ counterparts. Not surprisingly, in the context of MS lesions, the lesion size is not a satisfying proxy for uncertainty as small lesions can be segmented with high confidence. In our experimental setting, $MetaSeg$ models do not outperform simpler methods. This is probably due to the overall simplicity of these models, which fail to fully learn the relationships between the different input features. Results show that our graph-based framework can be efficiently used to flag uncertain lesions, which are more likely to be False Positives. The classification variant slightly outperforms regression. We hypothesize that this is due to the increased difficulty of predicting the exact $IoU_{adj}$, compared to the binary classification setting.
One drawback of our approach is that it requires an additional validation set containing enough lesions (typically a few hundred) to allow GCNN training. However, as most DL pipelines rely on a set-aside validation set to control overfitting during training, these data can then be used for this purpose (as was the case in this work). The requirement is thus not prohibitive and only necessitates a sufficiently large validation set. Overall, our framework is computationally light as CCA is computed only once per MRI, followed by the graph generation step, which can be parallelized across lesions. Additionally, in the context of MS, most brain lesions are relatively small (less than 1000 voxels), which results in small graphs that are fast to generate. Finally, we use 26-connectivity, meaning that a voxel is only connected to its closest neighbors, which reduces the computational burden. Our approach enhances the binary voxel-wise predictions of the segmentation model with reliable and readable lesion-wise uncertainty estimates. In the classification setting, uncertainty is cast as the probability of a lesion being a false positive, which is a straightforward and intelligible definition. In a real-world clinical application, this may help the clinician examine the automated segmentation in the light of the model's confidence, hence allowing better interpretability of the provided results and more trustworthy usage of the algorithm. Future work will study the extension to multi-class segmentation, and the inclusion of additional features such as the global location of the lesion within the MRI volume. Indeed, for brain disorders such as MS, the location of the lesion within the brain conveys information concerning uncertainty, as false positives tend to be concentrated in specific brain regions. \begin{table}[h!] \centering \caption{Evaluation of uncertainty estimates (AUC values). $\rho$ denotes Spearman's rank correlation coefficient. }\label{tab_auc} \begin{tabular}{l|cc|} \cline{2-3} & \multicolumn{1}{c|}{AUC (\%)} & Spearman's $\rho$ \\ \hline \multicolumn{1}{|l|}{$\text{GCNN}_{\text{Classif}}$} & \multicolumn{1}{c|}{\textbf{87.32}} & -0.78 \\ \hline \multicolumn{1}{|l|}{$\text{GCNN}_{\text{Reg}}$} & \multicolumn{1}{c|}{87.10} & -0.77 \\ \hline \multicolumn{1}{|l|}{$\text{Entropy}_{\text{mean}}$} & \multicolumn{1}{c|}{83.80} & -0.42 \\ \hline \multicolumn{1}{|l|}{$\text{Entropy}_{\text{logsum}}$} & \multicolumn{1}{c|}{83.72} & -0.97 \\ \hline \multicolumn{1}{|l|}{$\text{Variance}_{\text{mean}}$} & \multicolumn{1}{c|}{83.14} & -0.44 \\ \hline \multicolumn{1}{|l|}{$\text{Variance}_{\text{logsum}}$} & \multicolumn{1}{c|}{82.99} & -0.99\\ \hline \multicolumn{1}{|l|}{$\text{PCS}_{\text{mean}}$} & \multicolumn{1}{c|}{83.79} & -0.44 \\ \hline \multicolumn{1}{|l|}{$\text{PCS}_{\text{logsum}}$} & \multicolumn{1}{c|}{83.88} & -0.98 \\ \hline \multicolumn{1}{|l|}{Size} & \multicolumn{1}{c|}{80.30} & -1.00 \\ \hline \multicolumn{1}{|l|}{$\text{MetaSeg}_{\text{Classif}}$} & \multicolumn{1}{c|}{83.10} & -0.76 \\ \hline \multicolumn{1}{|l|}{$\text{MetaSeg}_{\text{Reg}}$} & \multicolumn{1}{c|}{83.42} & -0.77 \\ \hline \end{tabular} \end{table} \section{Conclusion} This paper presents an innovative graph-based framework to quantify lesion-wise uncertainty. We demonstrate, with our approach, improvement of the predicted uncertainty, when compared to various known methods. The strength of our solution is its generic nature, making it compatible with any segmentation model trained with dropout.
\bibliographystyle{splncs04}
{ "timestamp": "2022-09-23T02:11:25", "yymm": "2209", "arxiv_id": "2209.10877", "language": "en", "url": "https://arxiv.org/abs/2209.10877" }
\section{Introduction} In \cite{dL92} Lascar showed, under the continuum hypothesis, the simplicity of the group of field automorphisms of the complex numbers $\mathbb C$ which fix pointwise the algebraic closure $\overline{\mathbb Q}$ of the prime field. Lascar himself wrote \cite[p. 249]{dL92} \emph{J'ai peine \`a imaginer que ce fait n'est pas d\'ej\`a connu} (I cannot imagine that this result is not already well-known). Lascar's proof had two main ingredients: firstly, every non-trivial automorphism of $\mathbb C$ is unbounded (see Definition \ref{D:bounded}), or moves maximally the unique non-algebraic type, in the terminology of Tent and Ziegler \cite{TZ13} on the simplicity of the group of isometries of the Urysohn space. Secondly, algebraic independence coincides with non-forking independence for the stable theory of the field $\mathbb C$, which eliminates imaginaries, so types over (real) algebraically closed subsets are stationary, and thus there is a canonical way to fuse partial elementary maps defined on independent subsets. Lascar's proof \cite{dL92} relied heavily on the fact that the group of automorphisms of a countable structure is naturally a Polish group, and hence many of his arguments had a topological flavor. In \cite{dL97} Lascar provided an algebraic and more direct proof of the simplicity of $\Aut(\mathbb C/\overline{\mathbb Q})$ which circumvented the need for topological arguments and the continuum hypothesis. His second proof relies on a clever use of the aforementioned ingredients via transfinite induction. In \cite[Example 3.14]{EGT16}, Lascar's first proof was adapted by Evans, Ghadernezad and Tent to show the simplicity of the group of automorphisms of a countable saturated differentially closed field fixing pointwise the differentially algebraic elements. Their proof used two results due to Konnerth: the extension of suitable partial elementary maps to global automorphisms of the saturated countable differentially closed field \cite[Lemma 2.4]{rK02} and the triviality of bounded automorphisms \cite[Proposition 2.9]{rK02}. Motivated by the existing work, we will present a unifying approach, in the spirit of Lascar's proof in \cite{dL97}, to the simplicity of the automorphism group of several fields with operators fixing the closure of the prime field with respect to a natural pregeometry. The description of bounded automorphisms was already generalized to several theories of fields with operators in \cite{BHMP17, fW20}. A natural algebraic example of a theory of fields with operators is the theory ACFA of difference closed fields, that is, the common theory of existentially closed difference fields. The theory ACFA is simple yet unstable, so non-forking independence still captures many of the desired properties of an independence relation. However, types over algebraically closed sets need not be stationary, even if the theory admits elimination of (hyper)-imaginaries. In \cite[Proposition 4.9]{CH99} a useful criterion for relative stationarity was exhibited for models of ACFA$_0$, that is, for difference closed fields of characteristic $0$. This criterion has been crucial in order to adapt Konnerth's proof to show the existence of global extensions of suitable partial elementary maps \cite[Lemma 2.4]{rK02} to ACFA$_0$ (see Proposition \ref{P:ACFA_ex}). The field of complex numbers $\mathbb C$ is the unique saturated model in its cardinality of the stable theory of algebraically closed fields of characteristic $0$.
Uncountably saturated models exist for stable theories, but this need no longer be the case for simple theories without additional set-theoretic assumptions. A weaker version of saturation in cardinality $\kappa>\aleph_0$ is the notion of a $\kappa$-prime model over a given set of parameters. The existence and uniqueness of a $\kappa$-prime model over all subsets have been shown by Shelah for superstable theories \cite[Theorem IV.4.18]{sSbook}. The second author \cite{zC19} showed the existence and uniqueness of a $\kappa$-prime model over certain subsets for the unstable theory ACFA$_0$. In this work, we will show in Theorem~\ref{T:main} the simplicity of the group of automorphisms fixing pointwise all non-generic elements over $\emptyset$ for a class of uncountable models, encompassing both uncountably saturated models (if they exist) as well as certain $\kappa$-prime models, for suitable theories of fields with operators.

\section{Preliminaries}\label{S:Prel}

In order to provide a unifying approach to the study of $\kappa$-prime models for several theories of fields with operators, we will work with a complete first-order theory $T$ in a language $\LL$, which we assume for the sake of the presentation to be countable, though this is not an actual obstacle as long as we take an uncountable cardinal $\kappa \ge |\LL|^+$. Furthermore, we impose that the $\LL$-theory $T$ is simple. We work inside a $\kappa$-saturated model $\UU$ of our theory $T$ with $\kappa$ uncountable.

Fix a group $G$ definable without parameters. Recall that an element $g$ in $G$ is generic over the subset $A$ if $$h\cdot g \mathop{\mathpalette\Ind{}} A\cup\{h\}$$ for all $h$ in $G$ with $g\mathop{\mathpalette\Ind{}}_A h$, where the symbol $\mathop{\mathpalette\Ind{}}$ denotes non-forking independence in the sense of the simple theory $T$. The inverse of a generic element $g$ is again generic, so we also have the independence $g\cdot h \mathop{\mathpalette\Ind{}} A\cup\{h\}$ whenever $g\mathop{\mathpalette\Ind{}}_A h$. The product of two generic elements which are independent over $A$ is again generic and independent of each factor over $A$. A type $\tp(g/A)$ in $G$ is generic if $g$ is generic over $A$. Every non-forking extension of a generic type is again generic. Whenever the underlying group is stable and connected, there exists a unique generic type. This remarkable feature was also shown for difference closed fields in \cite[Proposition 2.10]{CH99}. Since we will be mostly concerned with the group of automorphisms of certain fields with operators, we will impose the following condition:

\begin{hyp}\label{H:single_gen} The universe $\UU$ admits a group structure definable in $\LL$ without parameters. Furthermore, there is a unique generic type over every subset $A$ of $\UU$, given by the non-forking extension of the unique generic type $p$ over $\emptyset$: any two generic elements over $A$ have the same type over $A$. \end{hyp}

Since every non-forking extension of a generic type is again generic, Property (\ref{H:single_gen}) implies in particular that the generic type $p$ over $\emptyset$ is \emph{stationary}: any two realizations of the generic type which are both independent from the same subset $A$ of $\UU$ have the same type over $A$. Recall that a stationary type $q$ over $\emptyset$ is regular if it is orthogonal to every forking extension: if $a$ realizes the unique non-forking extension of $q$ to $B$ and the realization $c$ of $q$ forks with $B$, then $a\mathop{\mathpalette\Ind{}}_B c$.
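To orient the reader, here is the simplest instance of Property (\ref{H:single_gen}), a standard fact recorded only for illustration: in the theory of algebraically closed fields of characteristic $0$, with the additive group structure on $\UU$, an element is generic over a subset $A$ precisely when it is transcendental over the subfield generated by $A$, that is,
\[ g \text{ is generic over } A \quad \Longleftrightarrow \quad g \notin \acl(A).\]
Any two such elements are conjugate by an automorphism fixing $\acl(A)$ pointwise, so there is indeed a unique generic type over every subset $A$, and it is the non-forking extension of the type of a transcendental element over $\emptyset$.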
The presence of a regular type $q$ based over the empty set determines a closure operator \cite{eH85} on the set of realizations of $q$. This closure operator was later adapted in \cite[Section 3.5]{fWbook} to other collections of types. Whilst the general definition of the closure for an arbitrary family of types is somewhat difficult to grasp, it can be simplified in our context, as we will only consider the closure, within the ambient model $\UU$, taken with respect to the generic type $p$ of the underlying group. Since every element of the group can be written as the product of two generic elements, every type is thus analyzable with respect to the generic type. In our concrete case, the closure with respect to the (possibly non-regular) generic type $p$ over $\emptyset$ can be described as follows:

\begin{definition}\label{D:closure} Given a subset $A$, an element $b$ \emph{belongs to the closure} $\cl(A)$ \emph{of} $A$ if for every subset $C$ containing $A$ and every generic element $g$ over $C$, we have that $g$ remains generic over $C\cup\{b\}$. \end{definition}

Notice in particular that every element algebraic (in the model-theoretic sense) over $A$ belongs to $\cl(A)$. However, in most examples of fields with operators we are interested in, the closure of a set will be rather large; in particular $\cl(\emptyset)$, taken in our $\kappa$-saturated model $\UU$, will have cardinality at least $\kappa$. For example, in the case of differentially closed fields of characteristic $0$, every constant element $a$ (that is, with $\delta(a)=0$) belongs to $\cl(\emptyset)$. This notion of closure, despite being very general, already satisfies some desired properties:

\begin{remark}\label{R:closure}\textup{(}\cf\ \cite[Lemmata 3.5.3 \& 3.5.5]{fWbook}\textup{)}~ \begin{enumerate}[(a)] \item If $g$ is generic over $A$, then it remains so over $\cl(A)$. \item Given two subsets $B$ and $C$ of $\UU$ with a common subset $A$, \[ B\mathop{\mathpalette\Ind{}}_A C \quad \Longrightarrow \quad \cl(B) \mathop{\mathpalette\Ind{}}_{\cl(A)} \cl(C).\] \end{enumerate} \end{remark}

In a stable theory with weak elimination of imaginaries, types over real algebraically closed subsets of small cardinality are stationary. As shown in \cite[Theorem 5.3 \& Corollary B.11]{CH04} for difference closed fields, types of the form $\tp(a/B)$ are stationary whenever $B=\cl(B)\cap \acl(B,a)$, which motivates the following property.

\begin{hyp}\label{H:closed_stat} Types over relatively $\cl$-closed subsets are stationary: given subsets $A_1$, $A_2$ and $B$ of size strictly less than $\kappa$ with a common algebraically closed subset $C$ such that \[ \acl(A_1)\cap \cl(C)= C= \acl(A_2)\cap \cl(C),\] whenever $A_1\equiv_C A_2$ and \[ A_1\mathop{\mathpalette\Ind{}}_C B \text{ and } A_2\mathop{\mathpalette\Ind{}}_C B,\] then $A_1\equiv_B A_2$. \end{hyp}

The following property will be fundamental in order to extend partial automorphisms to arbitrary subsets in terms of a small chain of extensions obtained by successively adding generic elements.

\begin{hyp}\label{H:addgen} For every $\kappa$-saturated model $\UU$ of $T$ and every subset $A$ of $\UU$ of size strictly less than $\kappa$, each element $b$ of $\UU$ is contained in $\cl(A\cup C)$, where $C$ is a subset consisting of a sequence of independent realizations of the generic type over $A$. \end{hyp}

\begin{remark}\label{R:addgen} As the ambient theory $T$ is simple, if Property (\ref{H:addgen}) holds, then we can choose $C$ countable.
\end{remark}

\begin{proof} If $b$ belongs to $\cl(A\cup C)$, there is a countable subset $C_0 \subset C$ such that \[ b \mathop{\mathpalette\Ind{}}\limits_{A \cup C_0} A\cup C\] by the local character of non-forking independence in the simple countable theory $T$. By Remark \ref{R:closure}(b), we conclude that $b$ belongs to $\cl(A\cup C_0)$. \end{proof}

\begin{remark}\label{R:gen_reg_addgen} If the generic type is regular, then an element is generic over $A$ if and only if it does not belong to $\cl(A)$. In particular, Property (\ref{H:addgen}) always holds with $C$ a singleton. Moreover, if $\tp(g/B)$ is never isolated, whenever $B\supset A$ is arbitrary and $g$ is generic over $B$, then $\cl(A)$ is an elementary substructure of $\UU$. (This is always the case for a stable theory when the generic type over $\emptyset$ is non-isolated, by the Open Mapping theorem.) \end{remark}

\begin{proof} Clearly, a generic element over $A$ cannot lie in $\cl(A)$ by Remark \ref{R:closure}(a). For the other direction, assume that $b$ is not generic and choose $g$ generic over $A\cup\{b\}$. Write $b= g \cdot h$, where both $g$ and $h=g\inv \cdot b$ are generic over $A\cup\{b\}$. Now, the realization $h$ of the generic type is not independent from $g$ over $A$ (since $b$ is not generic). By Definition \ref{D:closure}, the element $h$ belongs to $\cl(A\cup\{g\})$: indeed, we need only show that $h$ is independent from every generic element $g'$ over any subset $C$ containing $A\cup\{g\}$. Note that $h$ forks with $C$ and thus $h$ is independent from the generic element $g'$ over $C$, by regularity of the generic type. We conclude that $b$ belongs to $\cl(A\cup\{g\})$ as well. Remark \ref{R:closure}(b) now yields that $b$ belongs to $\cl(A)$, as desired.

For the second statement, assume that the generic type is regular but not isolated. That $\cl(A)$ is an elementary substructure is a straightforward application of Tarski's test: consider a consistent formula $\varphi(x, b_1,\ldots, b_n)$, where each $b_i$ belongs to $\cl(A)$, and choose a realization $g$ of $\varphi(x, b_1,\ldots, b_n)$ in $\UU$. If $g$ already belongs to $\cl(A)$, then we are done. Otherwise, it is generic over $A$ and thus remains so over $B=A\cup\{b_1, \ldots, b_n\}$. As the generic type $\tp(g/B)$ is not isolated, there is some $c$ realizing $\varphi(x, b_1,\ldots, b_n)$ whose type differs from $\tp(g/B)$. By Property (\ref{H:single_gen}), the element $c$ is not generic over $B$ and thus it belongs to $\cl(B)\subseteq \cl(A)$, as desired. \end{proof}

\begin{definition}\label{D:atomic} The $\kappa$-saturated model $\UU$ is $\kappa$-\emph{atomic} over $A$ if for every finite tuple $b$ in $\UU$ the type $\tp(b/A)$ is \emph{$\kappa$-isolated}, that is, there is some subset $C$ of $A$ of cardinality strictly less than $\kappa$ such that $\tp(b/C)\vdash \tp(b/A)$. \end{definition}

\begin{hyp}\label{H:Konnerth} Every $\kappa$-saturated model $\UU$ of $T$ is $\kappa$-atomic over subsets of the form $K=\acl(\cl(A_1)\cup\cdots\cup\cl(A_n)\cup B)$, with $|A_i|,|B|<\kappa$ for $1\le i\le n$. \end{hyp}

The following remark is an immediate consequence of the atomicity and the saturation of the model $\UU$.
\begin{remark}\label{R:weaklyhom} Every $\kappa$-saturated model $\UU$ of a theory $T$ satisfying Property (\ref{H:Konnerth}) is weakly homogeneous with respect to the family of subsets of the form \[ K=\acl(\cl(A_1)\cup\cdots\cup\cl(A_n)\cup B),\] with $|A_i|,|B|<\kappa$ for $1\le i\le n$: given an element $c$ in $\UU$ and a partial elementary map $f: K\to \UU$, with $f(K) = \acl(\cl(f(A_1))\cup\cdots\cup\cl(f(A_n))\cup f(B))$, there is an extension of $f$ to a partial elementary map defined on $K\cup\{c\}$. \end{remark}

\begin{lemma}\label{L:stable_Konnerth}\textup{(}\cf\ \cite[Lemma 2.3]{rK02}\textup{)}~ Suppose that $\UU$ is stable with weak elimination of imaginaries and satisfies Property (\ref{H:single_gen}). Then $\UU$ satisfies Properties (\ref{H:closed_stat}) and (\ref{H:Konnerth}) with respect to the closure operator defined in Definition \ref{D:closure}. \end{lemma}

\begin{proof} It was already noticed right before stating Property (\ref{H:closed_stat}) that it always holds for stable theories with weak elimination of imaginaries. Thus, we need only show that Property (\ref{H:Konnerth}) holds. Let $c$ be a finite tuple of $\UU$. In order to show that $\tp(c/K)$ is $\kappa$-isolated, we must find some subset $E$ of $K$ of size strictly less than $\kappa$ such that $\tp(c/E)\vdash \tp(c/K)$. The local character of forking yields a subset $D$ of $K$ of cardinality bounded by $|\LL|<\kappa$ such that $c\mathop{\mathpalette\Ind{}}_{D} K$. Set now \[ E =\acl(A_1,\ldots, A_n, B, D).\] Since $D\subseteq E$, it follows that $c\mathop{\mathpalette\Ind{}}_E K$. We now show that $\tp(c/E)\vdash \tp(c/K)$, or equivalently, that $\tp(c/E)\vdash \tp(c/E, \eta)$ for all finite tuples $\eta$ in $K$. Notice that $\tp(c/E)$ is stationary, by weak elimination of imaginaries, so we need only show that for every realization $d$ in $\UU$ of $\tp(c/E)$ \[ d\mathop{\mathpalette\Ind{}}_E \eta.\] By saturation, there is a tuple $\eta'$ in $\UU$ such that $c\eta'\equiv_E d\eta$. It thus suffices to show that $c\mathop{\mathpalette\Ind{}}_E \eta'$. Now, the tuple $\eta$ is algebraic over $E\cup\{ \eta_i\}_{1\le i\le n}$, where each $\eta_i$ belongs to $\cl(A_i)$ for $1\le i \le n$. Hence, the tuple $\eta'$ is algebraic over $E\cup\{ \eta'_i\}_{1\le i\le n}$, where each $\eta'_i$ again belongs to $\cl(A_i)$ for $1\le i \le n$, since, for any subset $A$, the closure $\cl(A)$ is invariant under all automorphisms of $\UU$ fixing $A$. We deduce that $\eta'$ lies in $K$, and therefore $c\mathop{\mathpalette\Ind{}}_E \eta'$, as desired. \end{proof}

\begin{remark}\label{R:Premiers_Examples}~ Every stable connected group with weak elimination of imaginaries whose generic type is regular satisfies Properties (\ref{H:single_gen})-(\ref{H:Konnerth}). If the group is superstable of Lascar rank $\omega^\alpha$, then the closure operator equals \[ \cl(B)=\{a\in \UU \ | \ \mathrm{U}(a/B)<\omega^\alpha\} .\] In particular, both the theory of algebraically closed fields of arbitrary characteristic in the language of rings $\LL_\textrm{Rings}$ as well as the theory of differentially closed fields of characteristic $0$ with respect to $m$ commuting derivations in the language $\LL_\textrm{Rings}\cup\{\delta_1,\ldots, \delta_m\}$ satisfy Properties (\ref{H:single_gen})-(\ref{H:Konnerth}). \end{remark}

Notice that, whenever the superstable group has Lascar rank $\omega^\alpha$ with $\alpha\ne 0$, the closure of any (small) subset $B$ of the $\kappa$-saturated model $\UU$ has size at least $\kappa$.

\begin{proof} Property (\ref{H:single_gen}) follows since the group is connected.
Properties (\ref{H:closed_stat}) and (\ref{H:Konnerth}) follow from Lemma \ref{L:stable_Konnerth}, whilst Property (\ref{H:addgen}) follows from Remark \ref{R:gen_reg_addgen}. \end{proof}

In contrast to stable theories, types over models in simple theories need not be stationary. However, under some mild conditions it can be the case that the non-forking extension is unique. This was explored in \cite[Proposition 4.9]{CH99} for the theory ACFA$_0$ (or rather ACF$_0$A) of difference closed fields in characteristic $0$, using the concept of \emph{superficial co-stability}, see Lemma \ref{L:ACFA_4.9} below.

Before showing that Properties (\ref{H:single_gen})--(\ref{H:Konnerth}) hold for ACFA$_0$, let us recall some notation and results from \cite{CH99}: if $A$ is a subset of the $\kappa$-saturated difference closed field $\UU$, then $\scl{A}_\sigma$ denotes the inversive difference field generated by $A$, that is, the smallest difference subfield containing $A$ such that the restriction of the automorphism to $\scl{A}_\sigma$ is surjective. To avoid any possible confusion, as there will be several powers $\sigma^k$ of the generic automorphism $\sigma$ around, we will denote by $\acl_\sigma(A)$ the model-theoretic algebraic closure of $A$, that is, the smallest inversive difference field containing $A$ which is algebraically closed as a field. The type $\tp_\sigma(b/A)$ is uniquely determined by the isomorphism type over $A$ of the difference field $\acl_\sigma(A\cup\{b\})$ (see \cite[Corollary 1.5]{CH99}). Every completion of ACFA$_0$ is simple and non-forking independence can be described in algebraic terms: given two inversive difference fields $A$ and $B$ with a common difference subfield $C=\acl_\sigma(C)$, we have that $A\mathop{\mathpalette\Ind{}}_C B$ if and only if $A$ and $B$ are algebraically independent over $C$ \cite[Section 1.9]{CH99}.

By \cite[Corollary 1.13]{CH99}, the difference field $(\UU,\sigma^k)$ is again difference closed for $k\ne 0$ in $\Z$. In particular, we will denote by $\scl{A}_{\sigma^k}$, $\acl_{\sigma^k}(A)$ and $\tp_{\sigma^k}(b/A)$ the corresponding notions in the reduct $\UU[k]=(\UU,\sigma^k)$. Note that the closure operator in $(\UU,\sigma)$ can be described algebraically as \[ \cl_\sigma(B)=\{a\in \UU \ | \ \mathrm{tr.deg}(a, \sigma(a), \sigma^2(a),\ldots/\acl_\sigma(B))<\infty\}.\] Furthermore, if we assume that $B=\langle B\rangle_\sigma$, then for each $k\ne 0$ in $\Z$ \[ \mathrm{tr.deg}(a, \sigma^k(a), \sigma^{2k}(a),\ldots/B)<\infty \ \Leftrightarrow \ \mathrm{tr.deg}(a, \sigma(a), \sigma^2(a),\ldots/B)<\infty,\] so $\cl_{\sigma^k}(B)= \cl_\sigma(B)$.

A combination of the results obtained in \cite{CH99} yields an explicit description of the failure of uniqueness of the non-forking extension to a fixed set of parameters. For the proof of Proposition \ref{P:ACFA_ex}, we will only need an adaptation of \cite[Proposition 4.9 \& Lemma 2.8]{CH99}, which we state in the following lemma:

\begin{lemma}\cite[Proposition 4.9 \& Lemma 2.8]{CH99}\label{L:ACFA_4.9} Consider an algebraically closed subset $E$ and two tuples $c$ and $\eta$ in some ambient model $(\UU, \sigma)$ of ACFA of characteristic $0$ with $c\mathop{\mathpalette\Ind{}}_E \eta$. Suppose that \[ \scl{E\cup\{c\}}_{\sigma^k} \text{ and } \scl{E\cup\{\eta'\}}_{\sigma^k} \text{ are algebraically independent over }E,\] for every $k\ne 0$ in $\Z$ and every finite tuple $\eta'$ realizing the quantifier-free type $\mathrm{qftp}_{\sigma^k}(\eta/E)$.
Then the type $\tp_\sigma(c/E)$ implies $\tp_\sigma(c/E\cup\{\eta\})$. \end{lemma}

\begin{proof} Assume that $c$, $\eta$ and $E$ are as in the statement. We can apply \cite[Proposition 4.9]{CH99} and obtain that $\tp_\sigma(c/E)$ and $\tp_\sigma(\eta/E)$ are superficially co-stable: for every $d\mathop{\mathpalette\Ind{}}_E \eta$ realizing $\tp_\sigma(c/E)$, setting $K=\acl_\sigma(E\cup\{\eta\})$ and considering the unique extension $\sigma_F$ of $\sigma\restr{\acl_\sigma(E,d)}\otimes \sigma\restr{K}$ to the compositum field $F$ of $\acl_\sigma(E,d)$ and $K$, we have that $F$ has no proper finite Galois extension invariant under $\sigma_F$. Moreover, setting $k=1$ in our assumption, we deduce from the previous description of non-forking independence in the simple theory ACFA that $c$ is independent over $E$ from every realization $\eta'$ in $\UU$ of $\tp_\sigma(\eta/E)$, or equivalently, that every realization $d$ of $\tp_\sigma(c/E)$ is independent from $\eta$ over $E$.

A realization $d$ of $\tp_\sigma(c/E)$ yields an $E$-isomorphism of difference fields between $\acl_\sigma(E\cup\{d\})$ and $\acl_\sigma(E\cup\{c\})$ mapping $d$ to $c$. We need to show that this isomorphism of difference fields extends to a $K$-isomorphism of difference fields between $\acl_\sigma(K\cup\{d\})$ and $\acl_\sigma(K\cup\{c\})$. By superficial co-stability (since every $d$ realizing $\tp_\sigma(c/E)$ is automatically independent from $K$ over $E$), there is, by \cite[Lemma 2.8]{CH99}, a unique extension of the field automorphism $\sigma_F$ to the algebraic closure of $F$. We conclude that there is an isomorphism of difference fields over $K$ between the algebraic closure of $F$ and the difference field $\acl_\sigma(K\cup\{c\})$ mapping $d$ to $c$, so $d\equiv_K c$ for every realization $d$ of $\tp_\sigma(c/E)$, as desired. \end{proof}

\begin{prop}\label{P:ACFA_ex} Every completion $T$ of ACFA$_0$ is simple and satisfies Properties (\ref{H:single_gen})-(\ref{H:Konnerth}). \end{prop}

\begin{proof} The simplicity of every completion of ACFA already appears in \cite[Section 1.9]{CH99}. Property (\ref{H:single_gen}) was shown in \cite[Proposition 2.10]{CH99} and Property (\ref{H:closed_stat}) in \cite[Theorem 5.3 \& Corollary B.11]{CH04}. Property (\ref{H:addgen}) follows from Remark \ref{R:gen_reg_addgen} and the Lascar inequalities for supersimple theories, since the generic type in a difference closed field has monomial Lascar rank $\omega$.

We need only show that Property (\ref{H:Konnerth}) holds in every $\kappa$-saturated model $\UU$ of ACFA$_0$. As in the proof of Lemma \ref{L:stable_Konnerth}, set $E=\acl_\sigma(A_1,\ldots, A_n, B, D)$, where $D$ is a subset of $K=\acl_{\sigma}(\cl_\sigma(A_1)\cup\cdots\cup\cl_\sigma(A_n)\cup B)$ of cardinality bounded by $|\LL|<\kappa$ such that $c\mathop{\mathpalette\Ind{}}_{D} K$. Note that $c\mathop{\mathpalette\Ind{}}_E K$. The $\kappa$-atomicity will follow once we show that $\tp(c/E)\vdash \tp(c/K)$, or equivalently, that $\tp(c/E)\vdash \tp(c/E\cup\{\eta\})$ for every finite tuple $\eta$ in $K$. We may assume that $B=\scl{B}_\sigma$ and each $A_i=\acl_\sigma(A_i)$. Hence $K$ is the field algebraic closure of the (inversive difference) field generated by $\cl_\sigma(A_1)\cup\cdots\cup\cl_\sigma(A_n)\cup B$.
Possibly at the cost of enlarging $\eta$, we may also assume that $\eta=(\eta_0, \eta_1, \ldots ,\eta_n)$ with $\eta_0$ field algebraic over $E\cup\{ \eta_i\}_{1\le i\le n}$ and $\eta_i$ in $\cl_\sigma(A_i)$ for $1\le i \le n$, by the discussion before Lemma \ref{L:ACFA_4.9}. Therefore, we need only show by Lemma \ref{L:ACFA_4.9} that for every $k\ne 0$ in $\Z$ and every $\eta'$ in $\UU$ realizing the quantifier-free type $\mathrm{qftp}_{\sigma^k}(\eta/E)$, the fields $\scl{E,c}_{\sigma^k}$ and $\scl{E,\eta'}_{\sigma^k}$ are algebraically independent over $E$. Note that $\cl_\sigma(A_i)=\cl_{\sigma^k}(A_i)$, so $\eta_i$ belongs to $\cl_{\sigma^k}(A_i)$ for $1\le i\le n$. Any isomorphism of difference fields over $E$ in the structure $(\UU, \sigma^k)$ mapping the tuple $\eta$ to $\eta'=(\eta'_0, \eta'_1,\ldots,\eta'_n)$ yields that $\eta'_0$ is field algebraic over $E\cup\{ \eta'_i\}_{1\le i\le n}$ and that each $\eta'_i$ lies in $\cl_{\sigma^k}(A_i)$, since each of these properties is quantifier-free definable in the language of difference rings. Hence, the whole tuple $\eta'$ is field algebraic over the field generated by $E\cup\bigcup_{1\le i\le n} \cl_{\sigma}(A_i) \subseteq K$. The difference field $K$ is algebraically closed, so $\scl{E,\eta'}_{\sigma^k}\subseteq K$. Since $E$ was chosen so that $c\mathop{\mathpalette\Ind{}}_E K$, we deduce the desired independence between $\scl{E,c}_{\sigma^k}$ and $\scl{E,\eta'}_{\sigma^k}$ over $E$.\end{proof}

\begin{question}\label{Q:examples} The theory of separably closed fields of positive characteristic $p$ and finite degree of imperfection $e$ is stable \cite[Theorem 3]{cW79} and eliminates imaginaries \cite[Proposition 43]{fD88} in the language $\mathcal L= \LL_\textrm{Rings}\cup\{c_1,\ldots, c_e\} \cup\{\lambda_n(x)\}_{0\le n < p^e}$, where $\{c_1,\ldots, c_e\}$ denotes a $p$-basis and the $\lambda$-functions are taken with respect to that basis. This theory satisfies Properties (\ref{H:single_gen}), (\ref{H:closed_stat}) and (\ref{H:Konnerth}) with respect to the closure operator given by the unique generic type. We do not know whether this theory satisfies Property (\ref{H:addgen}). Note that the unique generic type is not regular if $e\ne 0$: indeed, given a generic element $g$ over a small set of parameters $A$, the sequence $\{\lambda_n(g)\}_{0\le n < p^e}$ is a sequence of independent realizations of the generic type. However, the element $g$ is no longer generic over $\lambda_0(g)$, whilst $\lambda_1(g)$ remains generic over $\lambda_0(g)$. Clearly $g$ and $\lambda_1(g)$ fork over $\lambda_0(g)$, since $\lambda_1(g)$ is definable over $g$.

Analogously, the theory $\mathcal D$-CF$_0$ of $\mathcal D$-closed fields of characteristic $0$ equipped with $n$~free derivations in the language $\mathcal L=\LL_\textrm{Rings}\cup\{\delta_1,\ldots, \delta_n\}$ introduced in \cite{MS14} is stable and eliminates imaginaries \cite[Corollary 5.7 and Theorem 5.12]{MS14}, so it satisfies Properties (\ref{H:single_gen}), (\ref{H:closed_stat}) and (\ref{H:Konnerth}) with respect to the closure operator given by the unique generic type. The same holds for any collection of $n$ free operators if the associated endomorphisms are trivial. We do not know whether the theory $\mathcal D$-CF$_0$ satisfies Property (\ref{H:addgen}). Note that the unique generic type is not regular whenever the number of free derivations is at least $2$, by exactly the same argument as for separably closed fields.
\end{question}

\section{Tame models}

{\bf For this section, we assume that the cardinal $\kappa$ is uncountable and that the countable simple $\LL$-theory $T$ satisfies Properties (\ref{H:single_gen})-(\ref{H:addgen}).}

In order to prove Theorem \ref{T:main}, we will need to restrict our focus to a particular class of $\kappa$-saturated models, which we will call \emph{tame} models.

\begin{definition}\label{D:tame_model} A $\kappa$-saturated model $\UU$ of the theory $T$ is \emph{$\kappa$-tame} if it satisfies the following conditions: \begin{enumerate}[(a)] \item There is a subset $A$ of $\UU$ of size $\kappa$ such that $\UU=\cl(A)$. \item Every partial elementary map $f: K\to \UU$ with $K=\acl(\cl(A_1)\cup\cdots \cup\cl(A_n)\cup B)$ and $f(K)= \acl(\cl(f(A_1))\cup\cdots\cup\cl(f(A_n))\cup f(B))$, such that all the $A_i$'s and $B$ have cardinality strictly less than $\kappa$, extends to a global automorphism of $\UU$. \end{enumerate} \end{definition}

By Property (\ref{H:addgen}) and saturation, it suffices to consider in condition (a) above a subset $A$ enumerating an independent sequence of realizations of the generic type. In particular, the model $\UU$ does not contain an independent sequence of realizations of the generic type of length $\kappa^+$: indeed, every generic element in $\UU$ must fork with $A$, which is witnessed by a finite tuple in $A$. There are $\kappa$ many finite tuples in $A$, but each tuple can only divide (by simplicity of $T$) with countably many elements of the independent sequence, which contradicts that the sequence has length $\kappa^+$.

If the generic type is regular, condition (a) is equivalent to \begin{enumerate} \item[(a')] The model $\UU$ does not contain an independent sequence of realizations of the generic type of length $\kappa^+$. \end{enumerate} Indeed, choose a subset $A$ enumerating a maximal independent sequence in $\UU$ of realizations of the generic type. The subset $A$ has size at least $\kappa$, by saturation, but it cannot exceed $\kappa$, by condition (a'), so it has size exactly $\kappa$. Notice now that no element in $\UU$ lies outside the closure of $A$, by regularity of the generic type and Remark \ref{R:gen_reg_addgen}.

Given a subset $K=\acl(\cl(A_1)\cup\cdots \cup\cl(A_n)\cup B)$ as above and a sequence $(c_i)_{i<\lambda}$ with $\lambda<\kappa$, the set $\acl(K, \{c_i\}_{i<\lambda})$ is again of the form considered in Definition \ref{D:tame_model}(b). Thus, the following remark follows immediately from Property (\ref{H:single_gen}).

\begin{remark}\label{R:tame_ext_dim} Consider a $\kappa$-tame model $\UU$ of the theory $T$ and a partial elementary map $f: K\to \UU$ as above. Given two sequences $(c_i)_{i<\lambda}$ and $(d_i)_{i<\lambda}$ of realizations of the unique generic type which are respectively independent over $K$ and over $f(K)$, with $\lambda<\kappa$, there is an extension of $f$ to a global automorphism mapping the sequence $(c_i)_{i<\lambda}$ to the sequence $(d_i)_{i<\lambda}$. In particular, the global automorphism maps $\cl(K, \{c_i\}_{i<\lambda})$ onto $\cl(f(K), \{d_i\}_{i<\lambda})$.

Using a standard chain argument of length $\kappa$, it is therefore possible to extend $f$ to a global automorphism of $\UU$ mapping a sequence of length $\kappa$ of independent realizations of the generic type over $K$ to a sequence of length $\kappa$ of independent realizations of the generic type over $f(K)$.
\end{remark}

\begin{lemma}\label{L:tame_ext_ind} Consider a $\kappa$-tame model $\UU$ of the theory $T$ and $\cl$-closed subsets $X$, $Y_1$ and $Y_2$ of $\UU$ with $X \subseteq Y_1 \cap Y_2$, each $\cl$-generated by a subset of size strictly less than $\kappa$. Let $g_1$ and $g_2$ be elementary automorphisms of $Y_1$ and $Y_2$, respectively, which agree on $X$ and satisfy $g_1(X)=g_2(X)= X$. If \[Y_1 \mathop{\mathpalette\Ind{}}_X Y_2,\] then there is a global automorphism of $\UU$ which extends both $g_1$ and $g_2$. \end{lemma}

\begin{proof} Set $Y_1 = \cl(A_1)$ for some subset $A_1$ of cardinality strictly less than $\kappa$. Note that $Y_1 = \cl(g_1(A_1))$, since $g_1$ is an elementary map of $Y_1$ onto itself. By tameness, there exists a global automorphism $\widehat{g_2}$ of $\UU$ extending $g_2$. Fixing an enumeration of $Y_1$, the subset $Y'_1 = \widehat{g_2}\inv(g_1(Y_1))$ has the same type as $Y_1$ over $X$, since $g_1$ and $g_2$ (and thus $\widehat{g_2}$) agree on $X$. Note that $Y'_1 \mathop{\mathpalette\Ind{}}_X Y_2$. By Property (\ref{H:closed_stat}), we have $Y'_1 \equiv_{Y_2} Y_1$, since $X$ is $\cl$-closed. Thus, there is an elementary map~$h$ on $\acl(Y_1 \cup Y_2)$ fixing $Y_2$ pointwise whose restriction to $Y_1$ coincides with $\widehat{g_2}\inv g_1$. Note that $Y'_1 = \cl(h(A_1))$, so we obtain by tameness a global automorphism $\widehat{h}$ of $\UU$ extending $h$. By construction, the automorphism $\widehat{g_2} \circ \widehat{h}$ extends both $g_1$ and $g_2$. \end{proof}

\begin{remark}\label{R:sat_tame} If the countable simple theory $T$ satisfies Properties (\ref{H:single_gen})-(\ref{H:Konnerth}) and has a saturated model $\UU$ of cardinality $\kappa$, then $\UU$ is $\kappa$-tame. Indeed, condition (a) of Definition \ref{D:tame_model} holds trivially, since the model $\UU$ has cardinality $\kappa$. The only thing left to show is the strong extension property for elementary maps $f: K\to \UU$ as above. A straightforward back-and-forth argument using Remark \ref{R:weaklyhom} shows that $f$ extends to a global automorphism. \end{remark}

The existence of saturated models of cardinality exactly $\kappa$ follows whenever the countable theory $T$ is $\kappa$-stable \cite{vH75, sSbook}.

\begin{cor} \label{C:stable_examples_tame} Each of the particular examples of theories of fields with operators listed in Remark \ref{R:Premiers_Examples} is $\omega$-stable and therefore has a $\kappa$-tame model for any uncountable $\kappa$, namely, the saturated model of cardinality $\kappa$. \end{cor}

For a general theory $T$, if $\kappa=\lambda^+$ for some cardinal $\lambda\ge |T|$ with $\lambda^+=2^\lambda$, or if $\kappa$ is strongly inaccessible, then there are saturated models of cardinality $\kappa$. However, these set-theoretic assumptions go beyond ZFC.

\section{Existence and uniqueness of $\kappa$-prime models}\label{S:Prime}

Shelah introduced the notion of $\kappa$-prime models \cite[Chapter IV]{sSbook} (see also \cite[Chapitre VI]{dLBook}) for an arbitrary stable theory, generalizing Morley's notion of prime models for an $\omega$-stable theory.

\begin{definition}\label{D:prime} The ambient $\kappa$-saturated model $\UU$ of $T$ is $\kappa$-\emph{prime} over the subset $A\subseteq \UU$ if it elementarily embeds over $A$ into every other $\kappa$-saturated model containing $A$. \end{definition}

Shelah showed the existence and uniqueness (up to isomorphism over $A$) of $\kappa$-prime models over arbitrary subsets $A$ (even if $A$ has size strictly larger than $\kappa$) whenever the countable theory is superstable.
They are exactly the $\kappa$-saturated, $\kappa$-atomic models over $A$ which contain no (non-constant) $A$-indiscernible sequence of length $\kappa^+$. There are also results for stable theories, but with some restrictions on the cardinal $\kappa$.

\begin{remark}\label{R:Exist} The existence and uniqueness of a $\kappa$-prime model over its closure $\cl(\emptyset)$ have been shown by the second author \cite{zC19} for the unstable theory ACFA: choose a $\kappa$-saturated model $\mathcal M$ of ACFA of characteristic $0$. Then $\kappa$-prime models over subfields $A$ containing $\cl_{\M}(\emptyset)$ exist and are unique up to isomorphism over $A$. As in the stable case, the $\kappa$-prime model $\UU$ of ACFA$_0$ over a subset $A$ containing $\cl_{\M}(\emptyset)$ is exactly the unique (up to isomorphism) $\kappa$-saturated, $\kappa$-atomic model over $A$ containing no (non-constant) $A$-indiscernible sequence of length $\kappa^+$. \end{remark}

\begin{prop}\label{P:prime_tame} Suppose that the countable simple theory $T$ satisfies Properties (\ref{H:single_gen})-(\ref{H:Konnerth}) and that the generic type is regular. Suppose furthermore that for every model $\M$ of $T$ and every subset $A$ of $\M$ containing $\cl_{\M}(\emptyset)$, there exists a unique $\kappa$-prime model over $A$, which is characterized by being $\kappa$-atomic over $A$ and containing no (non-constant) $A$-indiscernible sequence of length $\kappa^+$. Then for every model $\M$ of $T$, the $\kappa$-prime model $\UU$ over $\cl_\M(\emptyset)$ is $\kappa$-tame and $\cl_\UU(\emptyset)=\cl_\M(\emptyset)$. \end{prop}

\begin{proof} Let us first show that $\UU$ is the closure of a set of size $\kappa$. Since the generic type is regular, we need only verify the equivalent condition (a') discussed after Definition~\ref{D:tame_model}. Clearly, the $\kappa$-saturated model $\UU$ does not contain (non-constant) independent sequences of realizations of the generic type of length $\kappa^+$, since such a sequence is indiscernible by Property (\ref{H:single_gen}).

We now show the strong extension property for elementary maps $f: K\to \UU$ with \begin{itemize} \item $K=\acl(\cl_\UU(A_1)\cup\cdots \cup\cl_\UU(A_n)\cup B)$; \item $f(K)= \acl(\cl_\UU(f(A_1))\cup\cdots\cup\cl_\UU(f(A_n))\cup f(B))$;\end{itemize} where all the subsets $A_i$ and $B$ of $\UU$ have cardinality strictly less than $\kappa$. By Property~(\ref{H:Konnerth}), the model $\UU$ is $\kappa$-atomic over $K$ as well as over $f(K)$, and it contains no non-constant indiscernible sequence of length $\kappa^+$ over $K$ nor over $f(K)$. Hence, by uniqueness of $\kappa$-prime models, the partial isomorphism $f:K\to f(K)$ extends to an automorphism of $\UU$.

Since $\UU$ embeds into $\mathcal M$ over $\cl_{\M}(\emptyset)$, it follows immediately that $\cl_\UU(\emptyset)=\cl_\M(\emptyset)$. \end{proof}

\begin{notation} From now on, we will say that $\UU$ is $\kappa$-prime over its closure $\cl(\emptyset)$ if $\UU$ is $\kappa$-saturated and $\kappa$-prime over the subset $A=\cl_\UU(\emptyset)$. \end{notation}

The aforementioned characterizations of $\kappa$-prime models in terms of $\kappa$-atomicity and the non-existence of non-constant indiscernible sequences of length $\kappa^+$ yield immediately the following result.
\begin{cor}\label{C:Examples_tame} By Remark \ref{R:Exist} and the discussion above it, each of the particular examples of theories of fields with operators listed in Remark \ref{R:Premiers_Examples} and Proposition \ref{P:ACFA_ex} has a $\kappa$-tame model for any $\kappa>\aleph_0$, namely, the $\kappa$-prime model $\UU$ over the corresponding closure $\cl(\emptyset)$. \end{cor}

\section{Automorphisms of $\kappa$-tame models}

{\bf Henceforth, we suppose that the countable simple $\LL$-theory $T$ satisfies Properties (\ref{H:single_gen})-(\ref{H:addgen}) and has a $\kappa$-tame model $\UU$.}

\begin{notation} We will denote by $\clS$ the collection of subsets of $\UU$ of the form $\cl(A)$, where $A$ enumerates a sequence of independent realizations of the generic type of length strictly less than $\kappa$. Note that each member of $\clS$ is an algebraically closed substructure of $\UU$. \end{notation}

\begin{definition}\label{D:acceptable} An extension $X\subseteq Y$ with $X$ and $Y$ in $\clS$ is \emph{acceptable} if $Y=\cl(X \cup A)$, where $A$ enumerates a countably infinite sequence of independent realizations of the generic type over $X$. \end{definition}

Remark \ref{R:tame_ext_dim} yields immediately the following result:

\begin{lemma}\label{L:iso_accept} Given an elementary automorphism $f$ of $X$ in $\clS$ and two acceptable extensions $Y_1$ and $Y_2$ of $X$, there is an extension of $f$ to a partial elementary map sending $Y_1$ to $Y_2$. Thus, any two acceptable extensions $Y_1$ and $Y_2$ of $X$ are conjugate by an automorphism of $\UU$ fixing $X$ pointwise. \end{lemma}

\begin{remark}\label{R:iteration} Given $X$ in $\clS$, an element $b$ in $\UU$ as well as a countable collection $\mathcal F$ of global automorphisms of $\UU$ which each leave $X$ setwise invariant, there exists an acceptable extension $Y$ of $X$ in $\clS$ which contains $b$ and is setwise stable under each automorphism of $\mathcal F$. \end{remark}

\begin{proof} Without loss of generality, we may assume that $\mathcal F$ is a group. By induction, successively applying Property (\ref{H:addgen}), we construct an increasing sequence $(A_n)_{n\in \N}$ of sets such that: \begin{itemize} \item the set $A_0$ generates $X$ with respect to the closure operator $\cl$; \item the elements in $A_{n+1}\setminus A_n$ enumerate a countable independent sequence of realizations of the generic type over $A_n$ (and thus over $\cl(A_n)$); \item the element $b$ belongs to $\cl(A_1)$; \item for every $n$ in $\N$, every $\tau$ in $\mathcal F$ and every $a$ in $A_n$, the element $\tau(a)$ belongs to $\cl(A_{n+1})$. \end{itemize} Set now $Y=\cl\left(\bigcup_{n\in \N} A_n\right)$, which has all the desired properties. \end{proof}

We recall now \cite[Definition 2.14]{BHMP17} (or a slightly modified version thereof, see also \cite{fW20}).

\begin{definition}\label{D:bounded} An $\LL$-automorphism $\tau$ of $\UU$ is \emph{unbounded} if, whenever $A$ has cardinality strictly less than $\kappa$, there is a generic element $b$ in $\UU$ over $A$ (so $b$ does not belong to $\cl(A)$) such that $\tau(b)$ does not lie in $\cl(A, b)$. The $\LL$-automorphism $\tau$ is \emph{bounded} if it is not unbounded. \end{definition}

Recall that the Frobenius automorphism is bounded, whenever the underlying field has positive characteristic. Similarly, given a $\kappa$-saturated model $(K, \sigma)$ of ACFA, every power $\sigma^k$ of the generic automorphism $\sigma$ is bounded.
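For the reader's convenience, here is a one-line verification of these two facts (a sketch, in the notation of Section \ref{S:Prel}): for every subset $A$ and every element $b$,
\[ \mathrm{Frob}(b)=b^p\in\acl(A\cup\{b\})\subseteq \cl(A, b) \quad \text{ and } \quad \sigma^k(b)\in\scl{A\cup\{b\}}_{\sigma}\subseteq \cl_\sigma(A, b),\]
so neither automorphism can move any element, generic or not, outside the closure of $A\cup\{b\}$, and boundedness follows directly from Definition \ref{D:bounded}.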
It is immediate to see that the collection of bounded automorphisms of $\UU$ forms a normal subgroup of the automorphism group of $\UU$.

\begin{lemma}\label{L:imageInd} Let $\tau$ be an unbounded automorphism of $\UU$ fixing the closure $\cl(\emptyset)$ in $\UU$ pointwise. If $X$ in $\clS$ is setwise stable under the action of $\tau$, then there is an acceptable extension $Y$ of $X$ such that $Y$ and $\tau(Y)$ are independent over $X$. \end{lemma}

Notice that $\tau(Y)$ is again an acceptable extension of $X$.

\begin{proof} It suffices by Remark \ref{R:closure}(b) to inductively build an independent sequence $(a_n)_{n\in \N}$ of realizations of the generic type over $X$ such that for any $n \in \N$ we have \begin{multline*} a_n\mathop{\mathpalette\Ind{}}_X a_0,\tau(a_0),\ldots, a_{n-1}, \tau(a_{n-1}) \quad \text{ and } \\ \tau(a_n)\mathop{\mathpalette\Ind{}}_X a_0,\tau(a_0),\ldots, a_{n-1}, \tau(a_{n-1}), a_n.\end{multline*} Suppose the sequence has been constructed for $j<n$. The subset $X'=\cl(X, a_j, \tau(a_j): j<n)$ lies in $\clS$, and $\tau$ is unbounded, so applying Definition \ref{D:bounded} to the small subset $X'\cup \tau(X')$ we find an element $a_n$ in $\UU$ generic over $X'\cup\tau(X')$ such that $\tau(a_n)$ does not lie in $\cl(X', \tau(X'), a_n)$. In particular, the element $a_n$ is independent from $a_0,\tau(a_0),\ldots, a_{n-1}, \tau(a_{n-1})$ over $X$. Similarly, the element $\tau(a_n)$ is independent from $a_0,\tau(a_0),\ldots, a_{n-1}, \tau(a_{n-1}), a_n$ over $X$, as desired. \end{proof}

It follows from the above proof that every unbounded automorphism fixing pointwise the closure $\cl(\emptyset)$ \emph{moves maximally}, in the terminology of \cite{TZ13}, the unique generic type $p$ over any $X$ in $\clS$, by Property (\ref{H:single_gen}). Motivated by \cite{BHMP17,fW20}, we will now introduce the last property of interest for our purposes.

\begin{hyp}\label{H:nobounded} The only bounded automorphism of $\UU$ fixing pointwise the closure $\cl(\emptyset)$ is the identity. \end{hyp}

A key fact in the description of bounded automorphisms of fields with operators is that a non-generic definable additive subgroup of $\mathbb{G}_a$ is given by a system of linear equations on (words on) the operators. A straightforward adaptation of \cite[Th\'eor\`eme, pp. 957-958]{BHMP17} yields the following:

\begin{remark}\label{R:examples_nobounded} The particular examples of theories of fields with operators listed in Remark \ref{R:Premiers_Examples} and Proposition \ref{P:ACFA_ex} all satisfy Property (\ref{H:nobounded}). \end{remark}

We will now adapt, almost verbatim, Lascar's proof \cite{dL97} of the simplicity of the group of field automorphisms of $\mathbb C$ fixing $\Q^\text{alg}$ to our context. More precisely, we will show the simplicity of the group of automorphisms $\Aut(\UU/\cl(\emptyset))$, modulo the subgroup of bounded automorphisms. We fix an unbounded automorphism $\tau$ of $\UU$ fixing $\cl(\emptyset)$ pointwise. For automorphisms $f$ and $g$ of $\UU$, set \[\Psi(f,g)= \tau^{f}\circ (\tau\inv)^{f\circ g}=f\circ \tau \circ g\circ \tau\inv\circ g\inv\circ f\inv=[\tau, g]^f,\] where $\tau^f=f\circ \tau\circ f\inv$ and $[\tau,g]=\tau g\tau^{-1}g^{-1}$. Whenever $X$ in $\clS$ is stable under the action of $\tau$, we set \[\Psi_X(f,g)= (\tau\restr{X})^{f}\circ (\tau\restr{X}\inv)^{f\circ g} =\Psi(f,g)\restr{X} \] for $f$ and $g$ elementary automorphisms of $X$.
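The last equality in the definition of $\Psi(f,g)$ is a direct computation, obtained by inserting $f\inv\circ f$ in the middle of the commutator:
\[ [\tau,g]^f = f\circ \tau\circ g\circ \tau\inv \circ g\inv\circ f\inv = \big(f\circ\tau\circ f\inv\big)\circ\big((f\circ g)\circ\tau\inv\circ(f\circ g)\inv\big) = \tau^f\circ (\tau\inv)^{f\circ g}.\]
In particular, each $\Psi(f,g)$ is a product of two conjugates of $\tau$ and $\tau\inv$, which is exactly the form needed for the simplicity argument below.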
We will show in Theorem \ref{T:main} that every automorphism $\nu$ of $\UU$ fixing $\cl(\emptyset)$ can be written as \[\nu=\Psi(f_1,f_2) \circ \Psi(f_3,f_4)\inv,\] for suitable automorphisms $f_1,\ldots, f_4$. This will in particular show that the group of automorphisms of $\UU$ fixing $\cl(\emptyset)$ is simple modulo the normal subgroup of bounded automorphisms fixing $\cl(\emptyset)$. This will be achieved by a chain of approximations of $\nu$ on substructures of dimension strictly less than $\kappa$. If the partial isomorphism $g$ extends the partial isomorphism $f$, we will write $f\subset g$.

For the back-and-forth process describing $\nu$ as a product of conjugates of $\tau$ and $\tau\inv$, we will need the following central result.

\begin{prop}\label{P:central} Given $X$ in $\clS$ stable under the action of $\tau$, two elementary automorphisms $f$ and $g$ of $X$ and an acceptable extension $Y\supseteq X$ in $\clS$ equipped with an elementary automorphism $h$ of $Y$ extending $\Psi_X(f,g)$, there exist two automorphisms $f'$ and $g'$ of $\UU$ extending $f$ and $g$, respectively, such that \[ \Psi(f',g') \supset h \supset \Psi_X(f,g).\] \end{prop}

\begin{proof} By Lemma \ref{L:imageInd}, there exists an acceptable extension $Y_1$ of $X$ such that $Y_1$ and $Y_2=\tau(Y_1)$ are independent over $X$. By Lemma \ref{L:iso_accept}, there is an automorphism $f'$ of $\UU$ extending $f$ such that $f'$ maps $Y_2$ to $Y$. Consider now the elementary automorphism \[h_2 = f'^{-1}\circ h \circ f'\] of $Y_2$, which restricted to $X$ equals \[ f\inv \circ h\restr X \circ f= f\inv \circ\Psi_X(f,g) \circ f =\tau\restr X \circ (\tau\restr X \inv)^g.\] Again by Lemma \ref{L:iso_accept}, choose an elementary automorphism $g_2$ extending $g$ which maps $Y_2$ to itself.
The elementary automorphism \[ g_1= (h_2\circ g_2)^{\tau\inv}=\tau\inv\circ h_2\circ g_2\circ \tau\] of $Y_1$, restricted to $X$, equals \[ \tau\restr X \inv\circ h_2\restr X\circ g \circ \tau\restr X= (\tau\restr X \inv)^g \circ g\circ \tau\restr X = g.\] We include a diagram to facilitate the arrow-chasing:
\begin{figure}[h] \centering \resizebox{.8\textwidth}{!}{%
\begin{tikzpicture}
\draw(2,7) node (a0) [blue,ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={[blue]{\tiny $X$}}]{};
\draw(7,7) node (b0) [blue,ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={[blue]{\tiny $X$}}]{};
\draw[->,>=latex,blue] (a0) to[bend left=10] node[above] {\tiny $f\circ \tau_{\restr X} \circ g \circ\tau \restr X ^{-1} \circ g^{-1} \circ f^{-1}$} (b0);
\draw(2,7.2) node (a1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[label distance=-0.15cm,black]245:{\scriptsize $Y$}}]{};
\draw(7,7.2) node (b1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[label distance=-0.15cm,black]295:{\scriptsize $Y$}}]{};
\draw[->,>=latex,black] (a1.300) to[bend right=10] node[sloped,above] {\scriptsize $h$} (b1.240);
\draw(0,3.2) node (c1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[label distance=-0.2cm,black]115:{\scriptsize $Y_2$}}]{};
\draw[->,>=latex,black] (c1) to[bend left=20] node[sloped,above] {\scriptsize $f'$} (a1);
\draw(9,3.2) node (d1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[label distance=-0.2cm,black]65:{\scriptsize $Y_2$}}]{};
\draw[->,>=latex,black] (d1) to[bend right=20] node[sloped,above] {\scriptsize $f'$} (b1.-34);
\draw[->,>=latex,black,dashed] (c1.300) to[bend right=10] node[sloped,below] {\scriptsize $h_2$} (d1.240);
\draw(0,3) node (c0) [blue,ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={[blue]{\tiny $X$}}]{};
\draw(9,3) node (d0) [blue,ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={[blue]{\tiny $X$}}]{};
\draw[->,>=latex,blue,dashed] (c0) to node[below] {\tiny $\tau_{\restr X} \circ g \circ\tau \restr X ^{-1} \circ g^{-1}$} (d0);
\draw(2,4.2) node (e1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[black]{\scriptsize $Y_2$}}]{};
\draw[->,>=latex,black] (c1) to[bend left=20] node[sloped,above] {\scriptsize $g_2^{-1}$} (e1);
\draw(7,4.2) node (f1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[black]{\scriptsize $Y_1$}}]{};
\draw[->,>=latex,black] (f1) to[bend left=20] node[sloped,above] {\scriptsize $\tau$} (d1);
\draw(4.5,4.2) node (g1) [black,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[black]{\scriptsize $Y_1$}}]{};
\draw[->,>=latex,black] (e1) to node[above] {\scriptsize $\tau^{-1}$} (g1);
\draw[->,>=latex,black] (g1) to node[above] {\scriptsize $g_1$} (f1);
\end{tikzpicture} } \end{figure}
Lemma \ref{L:tame_ext_ind} now yields a common extension $g'$ to $\UU$ of the elementary automorphisms $g_1$ of $Y_1$ and $g_2$ of $Y_2$. We need only check now that $\Psi(f', g')$ extends $h$ when restricted to $Y$, or equivalently, that $f'^{-1}\circ \Psi(f', g') \circ f'$ extends $h_2$. By the definition of $\Psi(f',g')$, we have that \[ f'^{-1}\circ \Psi(f', g') \circ f' = \tau \circ (\tau\inv)^{g'},\] which extends $h_2 = \tau \circ g_1 \circ \tau\inv \circ g_2\inv$ on $Y_2$, as desired.
\end{proof}

The previous Proposition \ref{P:central} contains all the ingredients to tackle the successor stage of the back-and-forth construction required in the proof of Theorem \ref{T:main}.

\begin{prop}\label{P:succ} Let $\nu$ be an automorphism of $\UU$ and $X$ in $\clS$ which is stable under the action of both $\tau$ and $\nu$ such that \[\nu\restr X = \Psi_X(f_1,f_2)\circ \Psi_X(f_3, f_4)\inv\] for some elementary automorphisms $f_i$ of $X$, with $i=1,\ldots, 4$. For every element $a$ in $\UU$, there are: \begin{itemize} \item an acceptable extension $Y\supseteq X$ containing the element $a$ which is stable under both $\tau$ and $\nu$; \item elementary extensions $f'_i$ of $f_i$, for $i=1,\ldots, 4$, to $Y$, \end{itemize} such that \[\nu\restr{Y} = \Psi_Y(f'_1,f'_2)\circ \Psi_Y(f'_3, f'_4)\inv.\] \end{prop}

\begin{proof} By Remark \ref{R:iteration}, there is an acceptable extension $Y_1$ of $X$ containing $a$ and invariant under the action of both $\tau$ and $\nu$. By Lemma \ref{L:iso_accept}, choose two elementary automorphisms $f_{3,1}$ and $f_{4,1}$ of $Y_1$ extending respectively $f_3$ and $f_4$. Set $h_1=\nu\restr{Y_1}\circ \Psi_{Y_1}(f_{3,1}, f_{4,1})$ and notice that \[ h_1\supset \Psi_X(f_1,f_2).\] By Proposition \ref{P:central}, there are two global automorphisms $f_{1,1}$ and $f_{2,1}$ of $\UU$, extending $f_1$ and $f_2$ respectively, such that $\Psi(f_{1,1}, f_{2,1})$ extends $h_1$. Similarly, we find an acceptable extension $Y_2$ of $Y_1$ stable under the action of $\tau$, $\nu$, $f_{1,1}$ and $f_{2,1}$. Then, we denote by $f_{1,2}$ and $f_{2,2}$ the restrictions of $f_{1,1}$ and $f_{2,1}$ to $Y_2$. Notice that \[ \nu\restr{Y_2}\inv \circ \Psi_{Y_2}(f_{1,2}, f_{2,2})\supset \Psi_{Y_1}(f_{3, 1}, f_{4,1}).\] Iterating the above argument countably many times, we construct an increasing chain $Y_n$ of acceptable extensions (setting $Y_0=X$) and compatible elementary extensions $f_{1,2n}$ and $f_{2,2n}$ of $f_1=f_{1,0}$ and $f_2=f_{2,0}$ to $Y_{2n}$, as well as $f_{3,2n+1}$ and $f_{4,2n+1}$ of $f_3$ and $f_4$ to $Y_{2n+1}$, such that for all $n$ in $\N$, \[f_{1,2n+2} \supset f_{1,2n}, \quad f_{2,2n+2} \supset f_{2,2n}, \quad f_{3,2n+3} \supset f_{3,2n+1}, \quad f_{4,2n+3} \supset f_{4,2n+1},\] \[ \nu\restr{Y_{2n+1}} \circ \Psi_{Y_{2n+1}}(f_{3,2n+1}, f_{4,2n+1})\supset \Psi_{Y_{2n}}(f_{1, 2n}, f_{2,2n})\] and \[ \nu\restr{Y_{2n+2}}\inv \circ \Psi_{Y_{2n+2}}(f_{1,2n+2}, f_{2,2n+2})\supset \Psi_{Y_{2n+1}}(f_{3, 2n+1}, f_{4,2n+1}).\]
\begin{figure}[h] \centering \resizebox{0.8\textwidth}{!}{%
\begin{tikzpicture}
\draw(0,0) node (a0) [ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={{\tiny $X$}}]{};
\draw(7,0) node (b0) [ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={{\tiny $X$}}]{};
\draw(3.5,-5) node (c0) [ellipse, draw, inner xsep= 0.2cm, inner ysep= 0.4cm, label={{\tiny $X$}}]{};
\draw[->,>=latex] (a0.320) to node[above] {\tiny $\nu \restr X$} (b0.220);
\draw[->,>=latex] (c0) to node[sloped,above] {\tiny $\Psi_X(f_3,f_4)$} (a0);
\draw[->,>=latex] (c0) to node[sloped,above] {\tiny $\Psi_X(f_1,f_2)$} (b0);
\node[blue,xshift=-0.3cm] () [label={[xshift=-0.15cm, yshift=-0.3cm]85:{\color{blue} }}] at (a0.85) {\scriptsize $a$};
\node[blue,xshift=-0.3cm] () [label={[xshift=-0.15cm, yshift=-0.3cm]85:{\color{blue} }}] at (c0.85) {\scriptsize $a$} ;
\node[blue,xshift=-0.3cm] () [label={[xshift=-0.15cm, yshift=-0.3cm]85:{\color{blue} }}] at (b0.85) {\scriptsize $a$};
\draw(0,0.2) node (a1) [blue,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[blue]{\tiny $Y_1$}}]{};
\draw(3.5,-4.8) node (c1) [blue,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[blue]{\tiny $Y_1$}}]{};
\draw[->,>=latex,blue] (c1.180) to[bend left=10] node[sloped,above] {\tiny $\Psi_{Y_1}(f_{3,1},f_{4,1})$} (a1.270);
\draw(7,0.2) node (b1) [blue,ellipse, draw, inner xsep= 0.3cm, inner ysep= 0.6cm, label={[blue]{\tiny $Y_1$}}]{};
\draw[->,>=latex,blue] (a1.330) to[bend left=10] node[sloped,above] {\tiny $\nu \restr{Y_1}$} (b1.210);
\draw[->,>=latex,blue] (c1.0) to[bend right=10] node[sloped,above] {\tiny $h_1$} (b1.270);
\draw(7,0.45) node (b2) [red,ellipse, draw, inner xsep= 0.45cm, inner ysep= 0.9cm, label={[red]{\tiny $Y_2$}}]{};
\draw(3.5,-4.55) node (c2) [red,ellipse, draw, inner xsep= 0.45cm, inner ysep= 0.9cm, label={[red]{\tiny $Y_2$}}]{};
\draw[->,>=latex,red] (c2.320) to[bend right=20] node[sloped,above] {\tiny $\Psi_{Y_2}(f_{1,2},f_{2,2})$} (b2.290);
\draw(0,0.45) node (a2) [red,ellipse, draw, inner xsep= 0.45cm, inner ysep= 0.9cm, label={[red]{\tiny $Y_2$}}]{};
\draw[->,>=latex,red] (a2.340) to[bend left=20] node[sloped,above] {\tiny $\nu \restr{Y_2}$} (b2.200);
\draw[->,>=latex,red] (c2.220) to[bend left=20] node[sloped,above] {\tiny $h_2$} (a2.250);
\draw(0,0.65) node (a3) [purple,ellipse, draw, inner xsep= 0.55cm, inner ysep= 1.1cm, label={[purple]{\tiny $Y_3$}}]{};
\draw(3.5,-4.35) node (c3) [purple,ellipse, draw, inner xsep= 0.55cm, inner ysep= 1.1cm, label={[purple]{\tiny $Y_3$}}]{};
\draw[->,>=latex,purple] (c3.240) to[bend left=35] node[sloped,above] {\tiny $\Psi_{Y_3}(f_{3,3},f_{4,3})$} (a3.230);
\draw(7,0.65) node (b3) [purple,ellipse, draw, inner xsep= 0.55cm, inner ysep= 1.1cm, label={[purple]{\tiny $Y_3$}}]{};
\draw[->,>=latex,purple,dashed] (a3.350) to[bend left=30] (b3.190);
\draw[->,>=latex,purple,dashed] (c3.300) to[bend right=35] (b3.310);
\node[yshift=0.75cm] () at (a3.north) {$\vdots$};
\node[yshift=0.75cm] () at (b3.north) {$\vdots$};
\node[yshift=0.75cm] () at (c3.north) {$\vdots$};
\draw(0,1.2) node (a4) [ellipse, draw, inner xsep= 0.8cm, inner ysep= 1.6cm, label={{$Y$}}]{};
\draw(7,1.2) node (b4) [ellipse, draw, inner xsep= 0.8cm, inner ysep= 1.6cm, label={{$Y$}}]{};
\draw(3.5,-3.8) node (c4) [ellipse, draw, inner xsep= 0.8cm, inner ysep= 1.6cm, label={{ $Y$}}]{};
\draw[->,>=latex] (c4.260) to[bend left=40] node[sloped,below] {$\Psi_{Y}(f'_3,f'_4)$} (a4.200);
\draw[->,>=latex] (c4.280) to[bend right=40] node[sloped,below] {$\Psi_{Y}(f'_1,f'_2)$} (b4.340);
\draw[->,>=latex] (a4) to[bend left=20] node[sloped,above] {$\nu \restr Y$} (b4);
\end{tikzpicture} } \end{figure}
By construction of the chain, the subset $Y=\bigcup_{n\in \N} Y_n$ lies in $\clS$ and is an acceptable extension of $X$. For $1\le i\le 4$, denote by $f'_i$ the elementary automorphism of $Y$ given by the union of the $f_{i, k}$'s. By construction, we have that \[\nu\restr{Y} = \Psi_{Y}(f'_1,f'_2)\circ \Psi_{Y}(f'_3, f'_4)\inv ,\] as desired. \end{proof}

We now have all the ingredients to prove the simplicity, up to bounded automorphisms, of the group of automorphisms of $\UU$ fixing $\cl(\emptyset)$ pointwise.

\begin{theorem}\label{T:main} Consider a $\kappa$-tame model $\UU$ of a simple countable theory $T$ satisfying Properties (\ref{H:single_gen})-(\ref{H:addgen}). Fix an unbounded automorphism $\tau$ of $\UU$ fixing $\cl(\emptyset)$ pointwise. Every automorphism $\nu$ of $\UU$ fixing $\cl(\emptyset)$ pointwise can be written as the product of four conjugates of $\tau$ and $\tau\inv$.
In particular, the group $\Aut(\UU/\cl(\emptyset))$ of automorphisms of $\UU$ fixing $\cl(\emptyset)$ pointwise is simple modulo the normal subgroup of all bounded automorphisms fixing $\cl(\emptyset)$ pointwise. \end{theorem}

\begin{proof} By $\kappa$-tameness, write $\UU=\cl( \{a_\alpha\}_{\alpha<\kappa})$, where $(a_\alpha)_{\alpha<\kappa}$ is an independent sequence of realizations of the generic type. Given an automorphism $\nu$ of $\UU$ fixing $\cl(\emptyset)$, we construct recursively an increasing chain of subsets $X_\alpha$, for $\alpha<\kappa$, in $\clS$ such that the extension $X_\alpha\subseteq X_{\alpha+1}$ is acceptable and each $X_\alpha$ is stable under the action of $\tau$ and $\nu$ and $\cl$-generated by an independent sequence of length at most $\max\big(\aleph_0, |\alpha|\big)$ of realizations of the generic type, equipped with compatible elementary automorphisms $f_{i, \alpha}$ of $X_\alpha$, for $1\le i\le 4$, such that $a_\alpha$ lies in $X_{\alpha+1}$ and \[\nu\restr{X_\alpha} = \Psi_{X_\alpha}(f_{1, \alpha},f_{2,\alpha})\circ \Psi_{X_\alpha}(f_{3,\alpha}, f_{4,\alpha})\inv.\]

For the beginning of the recursion, set $X_0 = \cl(\emptyset)$ and $f_{i,0} = \mathrm{Id}_{X_0}$. Assume now that $X_\beta$ has already been constructed for $\beta<\alpha$. If $\alpha$ is a limit ordinal, the union $X_\alpha = \bigcup_{\beta <\alpha} X_\beta$ with $f_{i,\alpha} = \bigcup_{\beta <\alpha} f_{i,\beta}$, for $1\le i\le 4$, is $\cl$-generated by an independent sequence of length at most $\max\big(\aleph_0, |\alpha|\big)$, and $\nu$ restricted to $X_\alpha$ satisfies the above identity. If $\alpha$ is the successor of $\beta$, Proposition \ref{P:succ} applied to $X_\beta$ yields an acceptable extension $X_{\beta+1}$ of $X_\beta$ containing $a_\beta$ and elementary extensions $f_{i,\beta+1}$, as desired.

Finally, the union $\bigcup_{\alpha<\kappa} X_\alpha$ is $\cl$-closed, so it must equal $\UU$. By construction, the automorphism $\nu$ equals a product of four conjugates of $\tau$ and $\tau\inv$ globally on $\UU$, since the automorphisms at every step are compatible. \end{proof}

\begin{cor}\label{C:main} For each of the following theories of fields with operators: \begin{itemize} \item algebraically closed fields, with the closure operator given by the field-theoretic algebraic closure $\acl$; \item differentially closed fields of characteristic $0$ with finitely many commuting derivations, with the closure operator given by the elements which are not differentially transcendental; \item difference closed fields of characteristic $0$, with the closure operator given by the elements of transformal transcendence degree $0$; \end{itemize} the group of automorphisms of every uncountable model saturated in its cardinality (if it exists) fixing pointwise $\cl(\emptyset)$ is simple. More generally, given $\kappa>\aleph_0$, the group of automorphisms of the $\kappa$-prime model over its closure $\cl(\emptyset)$ fixing pointwise $\cl(\emptyset)$ is simple. \end{cor}

\begin{proof} By Remark \ref{R:Premiers_Examples}, Proposition \ref{P:ACFA_ex} and Remark \ref{R:examples_nobounded}, each of these theories satisfies Properties (\ref{H:single_gen})-(\ref{H:nobounded}). Furthermore, these theories have uncountable saturated models or $\kappa$-prime models, which are in both cases $\kappa$-tame by Remark \ref{R:sat_tame} \& Corollary \ref{C:Examples_tame}. The triviality of the subgroup of bounded automorphisms of such models, given in Remark \ref{R:examples_nobounded}, together with Theorem \ref{T:main} yields the desired result.
\end{proof}

\begin{question} Let $K$ be a saturated uncountable separably closed field of positive characteristic $p$ and finite degree of imperfection $e$. Is the automorphism group of $K$ fixing pointwise $\cl(\emptyset)$ simple? Recall that the theory of $K$ satisfies Properties (\ref{H:single_gen}), (\ref{H:closed_stat}) and (\ref{H:Konnerth}) by Question \ref{Q:examples}. However, we cannot apply Remark \ref{R:sat_tame} \& Lemma \ref{L:iso_accept}, since we do not know whether Property (\ref{H:addgen}) holds. \end{question}
{ "timestamp": "2022-09-23T02:11:49", "yymm": "2209", "arxiv_id": "2209.10891", "language": "en", "url": "https://arxiv.org/abs/2209.10891" }
\section{Introduction and main results} \subsection{Background} Let $\Omega$ be a bounded domain of $\mathbb{R}^d$, $d\geq 2$, with smooth boundary $\Gamma = \Gamma_0 \cup \Gamma_1$. We assume that $\Gamma_0$ and $\Gamma_1$ are relatively open non-empty subsets of $\Gamma$ that satisfy $\overline{\Gamma_0} \cap \overline{\Gamma_1} = \emptyset$. We consider the following feedback system: \begin{subequations} \label{eq:pde-bc} \begin{align} \label{eq:pure-wave} &\partial_{tt}u - \Delta u = 0 & &\mbox{in}~ \Omega \times (0, +\infty), \\ \label{eq:hbc} &\partial_{tt}u - \Delta_\Gamma u = - \partial_\nu u & &\mbox{on}~ \Gamma_0 \times (0, +\infty), \\ \label{eq:robin-feed} &\partial_\nu u + u = - \alpha \partial_t u & & \mbox{on}~ \Gamma_1 \times (0, + \infty), \end{align} \end{subequations} where $\alpha$ is a positive constant, $\Delta$ is the Laplacian, $\partial_\nu$ denotes the outward normal derivative, and $\Delta_\Gamma$ is the Laplace-Beltrami operator on $\Gamma$ for the metric inherited from $\mathbb{R}^d$ (see Subsection \ref{sec:prel} below). The general context of this work is the analysis of evolution equations with dynamic (or kinetic) boundary conditions. Those arise in physical models where the momentum of the boundary cannot be neglected, hence the second-order (in time) dynamics. An early example of such equations is given by \cite{liu1998spectral}, where energy decay of a two-dimensional (in space) acoustic flow is studied. In our case, the coupled wave equation \cref{eq:hbc} may model boundary oscillations that propagate in the tangential directions and are caused by in-domain displacements governed by the pure wave equation \cref{eq:pure-wave}. A few variations around the coupled equations \cref{eq:pure-wave}-\cref{eq:hbc} have been investigated in the literature, with \cref{eq:robin-feed} typically replaced by a zero Dirichlet boundary condition. \cite{vitillaro2017wave} deals with local and global well-posedness of \cref{eq:pure-wave}-\cref{eq:hbc} perturbed by nonlinear potentials and damping terms acting on the domain and the boundary. In \cite{graber_analicity}, \cref{eq:pure-wave}-\cref{eq:hbc} are supplied with boundary and/or in-domain Kelvin-Voigt damping, which adds a heat-like regularizing effect to the flow. The present article is more control-oriented and tackles the problem of boundary stabilization of \cref{eq:pure-wave}-\cref{eq:hbc} by means of a velocity feedback acting on $\Gamma_1$ only, as modeled by \cref{eq:robin-feed}. To the best of our knowledge, this problem has not been addressed before. Overall, what differentiates our work from the related literature is the combination of the two following technical challenges. \begin{enumerate} \item In presence of the Laplace-Beltrami term, the boundary condition \cref{eq:hbc} is a proper (hyperbolic) partial differential equation, as opposed to \cite{liu1998spectral} or the recent article by \cite{li_asymptotics} for instance, where no tangential derivatives appear in the dynamic boundary condition. \item Only the anticollocated boundary $\Gamma_1$ dissipates energy; in other words, from the point of view of the dynamic boundary $\Gamma_0$, the damping is indirect and has to somehow propagate across $\Omega$. This contrasts with all the aforementioned work, where damping acts in the interior and/or on the boundary subject to the second-order dynamics.
\end{enumerate} Inspired by the literature on coupled second-order equations and in particular \cite{liu_frequency_2007}, we carry out the stability analysis of the feedback system \cref{eq:pde-bc} in the frequency domain: we investigate purely imaginary eigenvalues (or rather, the lack thereof) and then aim at estimating the growth of the resolvent operator on the imaginary axis. By doing so, we are able to prove semi-uniform stability of system \cref{eq:pde-bc} and, under additional geometrical conditions, polynomial energy decay for solutions with smooth initial data. This is detailed in the next subsection. Finally, let us also mention \cite{alabau_indirect}, where polynomial stability is established for a class of abstract coupled second-order equations; however, this result does not apply to \cref{eq:pde-bc} due to the unboundedness of the corresponding coupling operator. In particular, the compact perturbation argument, which is often employed to prove that weakly damped systems of waves are \emph{not} uniformly stable, cannot be used, leaving the question of \emph{exponential} stability open. \textbf{Notation.} The norm of a given normed vector space $E$ is denoted by $\|\cdot \|_E$. The duality bracket $\langle \phi, x \rangle_{E}$ is used to write $\phi (x)$ for any vector $x$ in $E$ and continuous linear form $\phi$ in $E'$. If $E$ is a Hilbert space, then $(\cdot, \cdot)_E$ denotes the scalar product of $E$. If $E_1$ and $E_2$ are two Banach spaces, $\L(E_1, E_2)$ denotes the set of bounded linear operators from $E_1$ to $E_2$, which is a Banach space as well if equipped with the operator norm. Given a real number $s$, we denote by $H^s(\Omega)$ the (complex) Sobolev space of order $s$ on $\Omega$. The notation $\d x$ indicates the Lebesgue measure on $\mathbb{R}^d$; and $\d \sigma$ denotes the induced surface measure on $\Gamma$. Finally, $\mathcal{C}_c^\infty(\Omega)$ is the space of compactly supported and infinitely differentiable complex-valued functions on $\Omega$. In the proofs, $K$, $K'$, etc., stand for generic constants that do not depend on the variables of interest. \subsection{Main statements} We start by introducing the natural energy space $\H$ associated with the feedback system \cref{eq:pde-bc}. Let \begin{equation} H \triangleq L^2(\Omega) \times L^2(\Gamma_0) \end{equation} endowed with its product Hilbertian structure, and \begin{equation} V \triangleq \{ (u, \theta) \in H^1(\Omega) \times H^1(\Gamma_0) : u_{|\Gamma_0} = \theta \} \end{equation} equipped with a scalar product $(\cdot, \cdot)_V$ explicitly defined below in \cref{eq:scalar-product} and equivalent to that of ${H^1(\Omega)\times H^1(\Gamma_0)}$. The set $V$ is a Hilbert space as well (see Subsection \ref{sec:prel} below). Then, we define the product Hilbert space \begin{equation} \H \triangleq V \times H. \end{equation} Our first result concerns well-posedness in $\H$ and semi-uniform stability of the system governed by \cref{eq:pde-bc}. We start by recasting the boundary value problem \cref{eq:pde-bc} into a first-order evolution equation on $\H$ of the form $(\d /\d t)[u, v] + \mathcal{A} [u, v] = 0 $, where $\mathcal{A} : \mathcal{D}(\mathcal{A}) \to \H$ is an unbounded linear operator explicitly given below in \cref{eq:generator}. Solutions to \cref{eq:pde-bc} are understood in the usual linear semigroup sense: they are \emph{classical} solutions for initial data in the domain $\mathcal{D}(\mathcal{A})$ and \emph{mild} solutions for general initial data in $\H$.
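Before stating the main results, the dissipation mechanism can be illustrated in the simplest possible setting. The script below is a minimal finite-difference sketch (our own illustration, with ad hoc discretization choices, not part of the analysis) of the one-dimensional analogue of \cref{eq:pde-bc} on $(0,1)$: since the boundary reduces to two points, the Laplace-Beltrami term in \cref{eq:hbc} is absent and the dynamic boundary condition at $x=0$ becomes the oscillator $\partial_{tt}u(0,t) = u_x(0,t)$ (the outward normal there points in the $-x$ direction), while the damped Robin condition \cref{eq:robin-feed} is imposed at $x=1$. The natural energy, which includes the boundary terms $\frac{1}{2}|\partial_t u(0,t)|^2$ and $\frac{1}{2}|u(1,t)|^2$, is seen to decay.

\begin{verbatim}
# Illustration only: 1D analogue of the feedback system.
# u_tt = u_xx on (0,1); u_tt(0) = u_x(0) (dynamic boundary);
# u_x(1) + u(1) = -alpha*u_t(1) (damped Robin feedback).
import numpy as np
from scipy.integrate import solve_ivp

N, alpha = 200, 1.0
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

def rhs(t, y):
    u, v = y[:N + 1], y[N + 1:]              # v = u_t
    a = np.empty_like(u)
    a[1:N] = (u[2:] - 2.0 * u[1:N] + u[:N - 1]) / h**2  # interior wave
    a[0] = (u[1] - u[0]) / h                 # u_tt(0) = u_x(0)
    ghost = u[N - 1] - 2.0 * h * (u[N] + alpha * v[N])  # Robin ghost node
    a[N] = (ghost - 2.0 * u[N] + u[N - 1]) / h**2
    return np.concatenate([v, a])

y0 = np.concatenate([np.exp(-100.0 * (x - 0.5)**2), np.zeros(N + 1)])
sol = solve_ivp(rhs, (0.0, 20.0), y0, max_step=0.5 * h,
                t_eval=np.linspace(0.0, 20.0, 41))

def energy(y):
    u, v = y[:N + 1], y[N + 1:]
    ux = np.diff(u) / h
    return 0.5 * (h * (v[:-1]**2).sum() + h * (ux**2).sum()
                  + u[N]**2 + v[0]**2)       # boundary terms included

E = [energy(sol.y[:, k]) for k in range(sol.y.shape[1])]
print(f"E(0) = {E[0]:.4f}, E(20) = {E[-1]:.4f}")  # E decays
assert E[-1] < E[0]
\end{verbatim}

A direct computation in this toy setting gives $\frac{\d}{\d t}E(t) = -\alpha |\partial_t u(1,t)|^2 \leq 0$, which is the one-dimensional counterpart of the dissipativity established below.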
\begin{theorem} \label{th:wp} Solutions to \cref{eq:pde-bc} define a strongly continuous semigroup $\{ \S_t \}$ of linear contractions on the energy space $\H$, with maximal dissipative generator $- \mathcal{A}$. Furthermore, $\{ \S_t\}$ is semi-uniformly stable, i.e., $\{\S_t\}$ is bounded and \begin{equation} \label{eq:semi-uniform} \lim_{t \to + \infty} \|\S_t (\mathcal{A} + \Id)^{-1}\|_{\L(\H)} = 0. \end{equation} \end{theorem} The proof of Theorem \ref{th:wp} is given in Section \ref{sec:wp}. We digress for a moment to comment on the notion of \emph{semi-uniform} stability, which has been introduced in \cite{batty-non-uniform}. As the name suggests, it is a property that is intermediate between strong and uniform stability. Indeed, \cref{eq:semi-uniform} implies that $\{\S_t\}$ is strongly stable and that the decay of \emph{strong} solutions to \cref{eq:pde-bc} can be quantified as follows: \begin{equation} \label{eq:semi-uniform-bis} \|\S_t [u_0, v_0]\|_\H \leq K \|\S_t(\mathcal{A} + \Id)^{-1}\|_{\L(\H)} \|[u_0, v_0]\|_{\mathcal{D}(\mathcal{A})} \end{equation} for any initial data $[u_0, v_0]$ in $\mathcal{D}(\mathcal{A})$ equipped with the graph norm. For more details, the reader is referred to the survey article by \cite{chill-semi}. As an example, semi-uniform stability of a wave equation with spatially varying coefficients is investigated using spectral methods in \cite{jacob_stability}. Coming back to our contributions, under certain geometrical conditions, we are able to replace \cref{eq:semi-uniform-bis} with an explicit polynomial decay rate. \begin{theorem} \label{th:pdr} Assume there exists a real vector field $h$ in $\mathcal{C}^2(\overline{\Omega})^d$ that satisfies the following conditions: \begin{enumerate}[label={(\alph*)}] \item \label{it:jacobian} Denoting the Jacobian matrix of $h$ by $J_h \triangleq [\partial_j h_i]_{i j}$, there exists $\rho > 0$ such that \begin{equation} \re \int_\Omega [J_h f] \cdot \overline{f} \, \d x \geq \rho \|f\|^2_{L^2(\Omega)^d} \end{equation} for all $f$ in $L^2(\Omega)^d$; \item \label{it:gamma0} On $\Gamma_0$, $h$ is parallel to the unit outward normal $\nu$, i.e., $h = (h \cdot \nu) \nu$; also, $h \cdot \nu \leq 0$; \item \label{it:gamma1} On $\Gamma_1$, $(h \cdot \nu) \geq m$ for some $m > 0$. \end{enumerate} Then, the semigroup $\{ \S_t\}$ enjoys the following polynomial decay property: there exists $C > 0$ such that for any $[u_0, v_0]$ in $\mathcal{D} (\mathcal{A})$, for all $t\geq 0$, \begin{equation} \label{eq:pdr} \|\S_t[u_0, v_0]\|_\H \leq C t^{-{1/2}} \|[u_0, v_0]\|_{\mathcal{D}(\mathcal{A})}. \end{equation} \end{theorem} Theorem \ref{th:pdr} is proved in Section \ref{sec:pdr}. Most of its geometrical requirements are standard when it comes to differential multiplier analysis; we point out, however, that Item \ref{it:gamma0} is a stronger-than-usual assumption in that we use a vector field that is perpendicular to the boundary on $\Gamma_0$. Nevertheless, examples of such domains include ``donut-shaped'' sets $\Omega$ of the form $ \Omega = \{ x \in \mathbb{R}^d : k_0 < f(x) < k_1 \} $ where $f : \mathbb{R}^d \to \mathbb{R}$ is a smooth strictly convex function, and $k_0$ and $k_1$ are real numbers such that $k_0 < k_1$ with $k_0 > \inf_{x \in \mathbb{R}^d} f(x)$. In that case, $\Gamma_0$ and $\Gamma_1$ are the inverse image by $f$ of $\{k_0\}$ and $\{k_1\}$ respectively, and one can check the hypotheses of Theorem \ref{th:pdr} by letting $h = \nabla f$.
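For the annulus, the hypotheses can be made completely explicit: with $f(x) = \|x\|^2$ one has $h = \nabla f = 2x$ and $J_h = 2\,\Id$, so Item \ref{it:jacobian} holds with $\rho = 2$; on the inner boundary $\Gamma_0$ the outward normal is $\nu = -x/\|x\|$, so $h = (h\cdot\nu)\nu$ with $h\cdot\nu = -2\sqrt{k_0} \leq 0$; and on $\Gamma_1$, $h\cdot\nu = 2\sqrt{k_1} > 0$. The short script below (our own sanity check, not part of the proofs) verifies Items \ref{it:gamma0} and \ref{it:gamma1} numerically on sampled boundary points.

\begin{verbatim}
# Illustration only: check Items (b)-(c) of the polynomial-decay theorem
# for the annulus Omega = {k0 < |x|^2 < k1} in R^2, h = grad f, f = |x|^2.
import numpy as np

k0, k1 = 1.0, 4.0
h = lambda x: 2.0 * x                    # grad f; J_h = 2*Id, so rho = 2

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
for r, sign, name in [(np.sqrt(k0), -1.0, "Gamma0"),
                      (np.sqrt(k1), +1.0, "Gamma1")]:
    pts = r * np.stack([np.cos(t), np.sin(t)], axis=1)
    nu = sign * pts / r                  # outward normal w.r.t. Omega
    hn = (h(pts) * nu).sum(axis=1)       # h . nu on the boundary
    tang = h(pts) - hn[:, None] * nu     # tangential part of h
    print(name, "h.nu =", hn[0], " max|h - (h.nu)nu| =", np.abs(tang).max())
# Gamma0: h.nu = -2*sqrt(k0) <= 0 and h parallel to nu (tangential part 0);
# Gamma1: h.nu = 2*sqrt(k1) >= m > 0.
\end{verbatim}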
\subsection{Preliminaries and operator model} \label{sec:prel} In this subsection, we introduce additional definitions and notation that are needed in our analysis of system \cref{eq:pde-bc}. The boundary $\Gamma$ of the domain $\Omega$ is a compact and smooth embedded submanifold of the ambient Euclidean space $\mathbb{R}^d$. Recalling \cite[Chapitre 1, Section 7.3]{lions-problemes}, the Sobolev spaces $H^s(\Gamma)$ are modeled after $H^s(\mathbb{R}^{d-1})$ by means of partitions of unity subordinated to the covering of $\Gamma$ by charts. For each $x$ in $\Gamma$, we denote by $T_x(\Gamma)$ the tangent space at $x$, which we see as a $(d-1)$-dimensional subspace of $\mathbb{R}^d$. Given a smooth function $\varphi : \Gamma \to \mathbb{R}$, the total derivative of $\varphi$ at $x \in \Gamma$, which is a linear form on $T_x(\Gamma)$, is denoted by $\d \varphi(x)$ -- see for instance \cite[Chapter 1]{guillemin-topology}. As a submanifold, $\Gamma$ can be equipped with the canonical Riemannian metric $g$ inherited from $\mathbb{R}^d$: $ g_x(\gamma_1, \gamma_2) = \gamma_1 \cdot \gamma_2 $ for all $\gamma_1, \gamma_2 \in T_x(\Gamma)$ and $x \in \Gamma$, where $\cdot$ denotes the usual Euclidean inner product. The Riemannian measure associated with $g$ coincides with the induced hypersurface measure $\d \sigma$. The Riemannian gradient $\nabla_\Gamma \varphi$ of a smooth real-valued function $\varphi$ is defined as follows: $\nabla_\Gamma \varphi(x)$ is the unique element in $T_x(\Gamma)$ such that $\d \varphi(x) \gamma = \nabla_{\Gamma}\varphi(x) \cdot \gamma$ for all $\gamma$ in $T_x(\Gamma)$. Then, $\nabla_\Gamma \varphi$ is a smooth vector field on $\Gamma$. This definition extends to complex-valued $\varphi$ by linearity. Following \cite[Chapter 2]{taylor-pde}, the Laplace-Beltrami operator $\Delta_\Gamma$ is defined to be the second-order differential operator on $\Gamma$ satisfying $ \label{eq:def-laplace} -\int_{\Gamma} \Delta_\Gamma \varphi_1 \varphi_2 \, \d \sigma = \int_{\Gamma} \nabla_\Gamma \varphi_1 \cdot \nabla_\Gamma \varphi_2 \, \d \sigma $ for all smooth and compactly supported $\varphi_1$ and $\varphi_2$. One can then define $\nabla_\Gamma \theta$ and $\Delta_\Gamma \theta$ in the sense of distributions for any $\theta$ in (say) $L^2(\Gamma)$. Then, $H^1(\Gamma)$ is the set of all $\theta$ in $L^2(\Gamma)$ such that $\nabla_\Gamma \theta$ belongs to $L^2(\Gamma)^{d}$ (recall that here each $T_x(\Gamma)$ is a subspace of $\mathbb{R}^d$). Using the notation $\|x\|^2 \triangleq x\cdot \overline{x}$ for $x$ in $\mathbb{C}^d$, the norm given by $ \|\theta\|_{H^1(\Gamma)}^2 = \int_{\Gamma} |\theta|^2 + \| \nabla_\Gamma \theta \|^2 \, \d \sigma $ is equivalent to those built upon local charts. Likewise, $H^2(\Gamma)$ is the space of all $\theta$ in $L^2(\Gamma)$ such that $ - \Delta_\Gamma \theta$ belongs to $L^2(\Gamma)$. For more details, the reader is referred to \cite[Chapters 4 and 5]{taylor-pde}. From now on, we focus on the submanifold $\Gamma_0$. It follows from the assumption $\overline{\Gamma_0} \cap \overline{\Gamma_1} = \emptyset$ that $\Gamma_0$ is compact and has no boundary. Thus, the spaces $H^1_0(\Gamma_0)$ and $ H^1(\Gamma_0)$ coincide; and for any real $s$, $-\Delta_\Gamma$ extends as a bounded linear operator from $H^s(\Gamma_0)$ to $H^{s - 2}(\Gamma_0)$.
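As an elementary consistency check of these definitions (our own illustration, independent of the analysis), consider the simplest closed manifold, the unit circle: there the surface gradient reduces to the arc-length derivative and $\Delta_\Gamma$ to $\d^2/\d s^2$. The snippet below verifies the defining integration-by-parts identity of $\Delta_\Gamma$ on two smooth periodic test functions, using spectral differentiation.

\begin{verbatim}
# Illustration only: on the unit circle, verify the defining identity
#   -int (Lap_Gamma f) g ds = int (grad_Gamma f).(grad_Gamma g) ds
# via FFT-based differentiation of smooth periodic functions.
import numpy as np

n = 256
s = 2.0 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)         # integer wave numbers

def ds(f):                               # arc-length derivative d/ds
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

f = np.exp(np.cos(s))                    # smooth test functions
g = np.sin(3.0 * s) + 0.5 * np.cos(s)

lhs = -np.mean(ds(ds(f)) * g) * 2.0 * np.pi   # -int (f'') g ds
rhs = np.mean(ds(f) * ds(g)) * 2.0 * np.pi    #  int f' g' ds
print(abs(lhs - rhs))                    # ~1e-14: identity holds
\end{verbatim}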
Furthermore, we have the following Green formula on $\Gamma_0$: \begin{equation} \label{eq:div-manifold} \int_{\Gamma_0} \nabla_\Gamma \theta_1 \cdot \nabla_\Gamma \theta_2\, \d \sigma = - \int_{\Gamma_0} \Delta_\Gamma \theta_1 \, \theta_2 \, \d \sigma, \end{equation} for any $\theta_1$ in $H^2(\Gamma_0)$ and $\theta_2$ in $H^1(\Gamma_0)$. Finally, we recall that for sufficiently smooth $u$, say, $u \in H^2(\Omega)$, the vector field given by the \emph{tangential} derivatives of $u$ on $\Gamma_0$ coincides with the Riemannian gradient $\nabla_\Gamma u$ of the trace $u_{|\Gamma_0}$. This allows us to write \begin{equation} \label{eq:tan-riem} \|\nabla u \|^2 = |\partial_\nu u|^2 + \|\nabla_\Gamma u \|^2 \quad \mbox{a.e. on}~ \Gamma_0. \end{equation} Let us return to the spaces $H$ and $V$. One can prove that $V$ is closed in $H^1(\Omega) \times H^1(\Gamma_0)$, which makes it a Hilbert space if equipped with the inherited scalar product. In the sequel, we will rather use the following one: \begin{equation} \label{eq:scalar-product} (u_1, u_2)_V \triangleq \int_\Omega \nabla u_1 \cdot \nabla \overline{u_2} \, \d x + \int_{\Gamma_0} \nabla_\Gamma u_1 \cdot \nabla_\Gamma \overline{u_2} \, \d \sigma + \int_{\Gamma_1} u_1 \overline{u_2} \, \d \sigma. \end{equation} Using a standard indirect compactness argument, we see that the norm associated with \cref{eq:scalar-product} is equivalent to that of $H^1(\Omega) \times H^1(\Gamma_0)$. Note that we will frequently identify $V$ as a subspace of $H^1(\Omega)$ and drop the tuple notation. We can finally define the operator $\mathcal{A}$: let $W \triangleq [H^2(\Omega) \times H^2(\Gamma_0)] \cap V$, then \begin{subequations} \label{eq:generator} \begin{align} &\mathcal{D}(\mathcal{A}) \triangleq \{ [u, v] \in W \times V: \partial_\nu u + u = - \alpha v ~\mbox{on}~ \Gamma_1 \}, \\ &\mathcal{A}[u, v] \triangleq [- v, ( - \Delta u, - \Delta_\Gamma u + \partial_\nu u)]. \end{align} \end{subequations} \section{Well-posedness and semi-uniform stability} \label{sec:wp} To prove Theorem \ref{th:wp}, we first investigate properties of $\mathcal{A}$. \begin{proposition} \label{prop:max-mon} The unbounded operator $\mathcal{A}$ is maximal monotone. Furthermore, for any $\lambda > 0$, the resolvent $(\mathcal{A} + \lambda \Id)^{-1}$ is a compact operator on $\H$. \end{proposition} \begin{proof} The proof is split into several steps. \textbf{Step 1: Monotonicity.} Let $X = [u, v] \in \mathcal{D}(\mathcal{A})$. By performing a few integrations by parts, we obtain the following formula: \begin{equation} \label{eq:ax-x} (\mathcal{A} X, X)_\H = \alpha \int_{\Gamma_1} |v|^2 \, \d \sigma + 2\i \im (u, v)_V. \end{equation} Taking the real part of \cref{eq:ax-x} yields $\re (\mathcal{A} X, X)_\H \geq 0$. \textbf{Step 2: Variational equations.} Let $\lambda > 0$. Our goal is to prove that $\mathcal{A} + \lambda \Id$ is surjective. We will simultaneously prove that $(\mathcal{A} + \lambda \Id)^{-1}$ is well-defined and compact. Let $[f, g] \in \H$ with $g = (g_1, g_2) \in H$. We need to find $X = [u, v] \in \mathcal{D}(\mathcal{A})$ such that $\mathcal{A} X + \lambda X = [f, g]$, i.e., $- v + \lambda u = f$ and \begin{subequations} \label{eq:lambda-problem} \begin{align} \label{eq:u-v-g1} &- \Delta u + \lambda v = g_1 &&\mbox{in}~ \Omega, \\ \label{eq:u-v-g2} &- \Delta_\Gamma u + \partial_\nu u + \lambda v = g_2 &&\mbox{on}~\Gamma_0.
\end{align} \end{subequations} We infer from \cref{eq:u-v-g1}-\cref{eq:u-v-g2} that any solution $[u, v]$ must satisfy the following variational problem: \begin{multline} \label{eq:var-u} \int_\Omega \nabla u \cdot \nabla \overline{w} \, \d x + \int_{\Gamma_0} \nabla_\Gamma u \cdot \nabla_\Gamma \overline{w} \, \d \sigma + \int_{\Gamma_1} [u + \alpha v] \overline{w} \, \d \sigma \\ + \lambda (v, w)_H = (g, w)_H \quad \mbox{for all}~ w \in V. \end{multline} As usual for that kind of problem (see for instance \cite[Proof of Proposition 2.1]{liu_frequency_2007}), the existence of $[u, v] \in \H$ satisfying both $-v + \lambda u = f$ and \cref{eq:var-u} is proved by obtaining a variational equation in the $v$-variable only, and then using the Lax-Milgram theorem to find an appropriate $v \in V$, which in turn uniquely determines $u$. It remains to prove that $[u, v]$ belongs to $\mathcal{D}(\mathcal{A})$ and that \cref{eq:u-v-g1}-\cref{eq:u-v-g2} are satisfied in an $L^2$-sense. By evaluating \cref{eq:var-u} for test functions $w$ in $\mathcal{C}_c^\infty(\Omega)$, we obtain that the distribution $\Delta u$ is in fact a function in $L^2(\Omega)$, with \cref{eq:u-v-g1} satisfied a.e. in $\Omega$. Recall that $\partial_\nu u$ is then uniquely defined in $H^{-1/2}(\Gamma)$ by the formula \begin{equation} \label{eq:distrib-normal} \langle \partial_\nu u , \theta \rangle_{H^{1/2}(\Gamma)} = \int_{\Omega} \nabla u \cdot \nabla w \, \d x + \int_{\Omega} \Delta u \, w \, \d x \end{equation} for all $\theta \in H^{1/2}(\Gamma)$, where $w$ is any element in $H^1(\Omega)$ such that $w_{|\Gamma} = \theta$. Furthermore, \begin{equation} \|\partial_\nu u\|_{H^{-1/2}(\Gamma)} \leq K \{ \|\Delta u\|_{L^2(\Omega)} + \|u\|_{H^1(\Omega)}\}. \end{equation} Plugging \cref{eq:u-v-g1} and \cref{eq:distrib-normal} into \cref{eq:var-u} leads to another variational equation, from which we shall recover the boundary conditions satisfied by $u$: \begin{multline} \label{eq:var-gamma} \langle -\Delta_{\Gamma}u, \overline{w}_{|\Gamma_0} \rangle_{H^1(\Gamma_0)} + \langle \partial_\nu u, \overline{w}_{|\Gamma}\rangle_{H^{1/2}(\Gamma)} \\ + \int_{\Gamma_1} [u + \alpha v] \overline{w} \, \d \sigma = \int_{\Gamma_0} [g_2 - \lambda v] \overline{w} \, \d \sigma \quad \mbox{for all}~ w \in V. \end{multline} \textbf{Step 3: ``Decoupling'' the boundary conditions.} Since $\overline{\Gamma_0} \cap \overline{\Gamma_1} = \emptyset$, the indicator functions $\mathds{1}_{\Gamma_0}$ and $\mathds{1}_{\Gamma_1}$ are smooth. As a notable consequence, for any real $s \geq 0$, the extension map $\theta \mapsto \mathds{1}_{\Gamma_i} \theta$ belongs to $\L(H^s(\Gamma_i), H^s(\Gamma))$, $i \in \{0, 1\}$. In particular, given an arbitrary $\theta \in H^1(\Gamma_0)$, $\mathds{1}_{\Gamma_0} \theta$ is in $H^{1/2}(\Gamma)$, so that taking any continuous right-inverse of the trace provides an element $w \in V$ satisfying $w_{|\Gamma_0} = \theta$ and $w_{|\Gamma_1} = 0$. Evaluating \cref{eq:var-gamma} for such $w$ yields \begin{equation} \label{eq:var-gamma0} \langle -\Delta_{\Gamma}u, \theta \rangle_{H^1(\Gamma_0)} + \langle \partial_\nu u, \mathds{1}_{\Gamma_0} \theta \rangle_{H^{1/2}(\Gamma)} = \int_{\Gamma_0} [g_2 - \lambda v] \theta \, \d \sigma \end{equation} holding for arbitrary $\theta \in H^1(\Gamma_0)$. Again, the map $\theta \mapsto \mathds{1}_{\Gamma_0} \theta$ is in $\L(H^{1/2}(\Gamma_0), H^{1/2}(\Gamma))$; hence, it follows from \cref{eq:var-gamma0} that $- \Delta_{\Gamma}u$, which is \emph{a priori} defined in $H^{-1}(\Gamma_0)$, belongs in fact to $H^{-1/2}(\Gamma_0)$.
Then, elliptic regularity for the Laplace-Beltrami operator -- see, e.g., \cite{taylor-pde} -- yields $u_{|\Gamma_0} \in H^{3/2}(\Gamma_0)$ and \begin{multline} \label{eq:3/2-gamma0} \|u_{|\Gamma_0}\|_{H^{3/2}(\Gamma_0)} \leq K \{ \|\partial_\nu u\|_{H^{-1/2}(\Gamma)} + \|g_2- \lambda v\|_{L^2(\Gamma_0)}\} \\ \leq K' \{ \|\Delta u \|_{L^2(\Omega)} + \|u\|_{H^1(\Omega)} + \|g_2 - \lambda v\|_{L^2(\Gamma_0)}\} \\ \leq K' \{ \|g_1 - \lambda v\|_{L^2(\Omega)} + \|u\|_{H^1(\Omega)} + \|g_2- \lambda v\|_{L^2(\Gamma_0)}\}. \end{multline} To prove that $u \in H^2(\Omega)$, we start by picking a function $\rho \in \mathcal{C}^2(\overline{\Omega})$ such that $\rho = 1$ (resp. $\rho = 0$) in some open neighborhood $\Gamma_0^\varepsilon$ of $\Gamma_0$ (resp. $\Gamma_1^\varepsilon$ of $\Gamma_1$). We let $u^0 \triangleq \rho u$ and $u^{1} \triangleq (1- \rho) u$, so that $u = u^0 + u^1$. Then, $u^0$ belongs to $H^1(\Omega)$ and $\Delta u^0 = \rho \Delta u + u \Delta \rho + 2 \nabla u \cdot \nabla \rho \in L^2(\Omega)$. First, we have $u^0_{|\Gamma} = \mathds{1}_{\Gamma_0} u_{|\Gamma} \in H^{3/2}(\Gamma)$. Then, applying elliptic theory (\cite{lions-problemes,taylor-pde}), we get that $u^0 \in H^2(\Omega)$ together with the estimate \begin{multline} \label{eq:elliptic-u0} \|u^0\|_{H^2(\Omega)} \leq K \{ \|\Delta u^0\|_{L^2(\Omega)} + \|u_{|\Gamma_0}\|_{H^{3/2}(\Gamma_0)}\} \\ \leq K' \{ \|g_1 - \lambda v\|_{L^2(\Omega)} + \|u\|_{H^1(\Omega)}+ \|g_2- \lambda v\|_{L^2(\Gamma_0)}\}. \end{multline} Next, we look at $u^1$. Again, $\Delta u^1 \in L^2(\Omega)$, and $\partial_\nu u^1$ is well-defined in $H^{-1/2}(\Gamma)$ as well. We claim that the (distributional) normal derivative $\partial_\nu u^1$ satisfies, for any $\theta \in H^{1/2}(\Gamma)$, $ \langle \partial_\nu u^1, \theta \rangle_{H^{1/2}(\Gamma)} = \langle \partial_\nu u, \mathds{1}_{\Gamma_1} \theta \rangle_{H^{1/2}(\Gamma)}. $ This can be deduced from \cref{eq:distrib-normal} by constructing, given $\theta \in H^{1/2}(\Gamma)$, a function $w \in H^1(\Omega)$ satisfying $w_{|\Gamma} = \mathds{1}_{\Gamma_1} \theta$ and whose support is contained in $\Gamma_1^{\varepsilon}$ (where $u$ and $u^1$ coincide). On the other hand, using the same argument as for $\Gamma_0$, we can particularize \cref{eq:var-gamma} to elements $w \in V$ satisfying $w_{|\Gamma} = \mathds{1}_{\Gamma_1}\theta $ for any given arbitrary $\theta \in H^{1/2}(\Gamma)$. This leads to \begin{equation} \langle \partial_\nu u, \mathds{1}_{\Gamma_1} \theta \rangle_{H^{1/2}(\Gamma)} + \int_{\Gamma_1} [u + \alpha v] \theta \, \d \sigma = 0, \end{equation} which means that $\partial_\nu u^1$ is in $H^{1/2}(\Gamma)$, with $\partial_\nu u^1 = \mathds{1}_{\Gamma_1}[- u - \alpha v]$. Combined with $\Delta u^1 \in L^2(\Omega)$, elliptic theory yields that $u^1 \in H^2(\Omega)$, with \begin{equation} \label{eq:elliptic-u1} \|u^1\|_{H^2(\Omega)} \\ \leq K \{ \|g_1 - \lambda v\|_{L^2(\Omega)} + \|u\|_{H^1(\Omega)} + \|v\|_{H^1(\Omega)}\}. \end{equation} Then, $u = u^0 + u^1 \in H^2(\Omega)$ and $\partial_\nu u + u = - \alpha v$ on $\Gamma_1$. Going back to \cref{eq:var-gamma0}, we see that $\Delta_\Gamma u \in L^2(\Gamma_0)$ and thus $u_{|\Gamma_0} \in H^2(\Gamma_0)$. It is now proved that $[u, v] \in \mathcal{D}(\mathcal{A})$.
\textbf{Step 4: Compactness.} The following argument is standard: by substituting the identity $-v + \lambda u = f$ in order to rewrite the variational problem \cref{eq:var-u} in terms of $u$ only and letting $w = u$ in the resulting equation, one can obtain an estimate of the form $\|[u, v]\|_\H \leq K \|[f, g]\|_\H$. From there, we combine \cref{eq:3/2-gamma0}, \cref{eq:elliptic-u0}, and \cref{eq:elliptic-u1} to obtain \begin{equation} \label{eq:stronger-estimate} \|u\|_{H^2(\Omega) \times H^{3/2}(\Gamma_0)} + \|v\|_V \leq K \|[f, g]\|_\H. \end{equation} Since $H^2(\Omega) \times H^{3/2}(\Gamma_0)$ is compactly embedded into $H^1(\Omega) \times H^1(\Gamma_0)$, and $V$ is compactly embedded into $H$, \cref{eq:stronger-estimate} proves that $(\mathcal{A} + \lambda \Id)^{-1}$ is a compact operator. \end{proof} The next proposition is motivated by the spectral criterion for semi-uniform stability. \begin{proposition} \label{prop:spectrum} We have $ \spe(\mathcal{A}) \cap \i \mathbb{R} = \emptyset$. \end{proposition} \begin{proof} First, due to the compactness of $(\mathcal{A} + \lambda \Id)^{-1}$ for $\lambda > 0$, $\spe(\mathcal{A})$ consists only of eigenvalues. That being said, we now prove the result by contradiction. Suppose there exists $\lambda = \mathrm{i}\omega \in \mathrm{i}\mathbb{R}$ such that for some non-zero $X = [u, v] \in \mathcal{D}(\mathcal{A})$, $\mathcal{A} X = \i \omega X$. We start with the case $\omega = 0$. Then, $\mathcal{A}[u, v] = 0$, which means that $v = 0$ and $u$ solves the following boundary-value problem: \begin{subequations} \begin{align} \label{eq:harm} &- \Delta u = 0 &&\mbox{in}~ \Omega, \\ \label{eq:harm-gamma0} &- \Delta_\Gamma u = - \partial_\nu u &&\mbox{on}~ \Gamma_0, \\ \label{eq:harm-gamma1} & \partial_\nu u + u = 0 & &\mbox{on}~ \Gamma_1. \end{align} \end{subequations} We multiply \cref{eq:harm} by $\overline{u}$, integrate over $\Omega$, and use \cref{eq:harm-gamma0}-\cref{eq:harm-gamma1} along with Green formulas on $\Omega$ and $\Gamma_0$ to obtain $\|u\|^2_V = 0$; thus, $X = 0$. Now, in the case where $\omega$ is non-zero, we can write \begin{equation} \|X\|^2_\H = \frac{1}{\i \omega} (\mathcal{A} X, X)_\H. \end{equation} Recalling the identity \cref{eq:ax-x}, we have \begin{equation} \label{eq:norm-id} \|X\|^2_\H = \frac{\alpha}{\i \omega} \int_{\Gamma_1} |v|^2 \, \d \sigma - \frac{2}{\omega} \im (u, v)_V. \end{equation} Taking the imaginary part of \cref{eq:norm-id} yields $ v = 0 $ a.e. on $\Gamma_1$. On the other hand, $v = - \i \omega u$, and because $[u, v] \in \mathcal{D}(\mathcal{A})$, \begin{subequations} \begin{align} &- \Delta u - \omega^2u = 0 &&\mbox{in}~ \Omega, \\ & u = 0 && \mbox{on}~ \Gamma_1, \\ & \partial_\nu u = 0 & &\mbox{on}~ \Gamma_1. \end{align} \end{subequations} Furthermore, the differential operator $- \Delta - \omega^2 \Id$ is elliptic and has real analytic coefficients. Thus, we can apply the John-Holmgren uniqueness theorem on unique continuation across non-characteristic hypersurfaces to obtain that $u = 0$ in $\Omega$, which completes the proof. \end{proof} We now conclude the section. \begin{proof}[Proof of Theorem \ref{th:wp}] As a maximal dissipative operator, $-\mathcal{A}$ generates a strongly continuous semigroup of linear contractions on $\H$ by virtue of the Lumer-Phillips theorem. Furthermore, because $\spe(\mathcal{A}) \cap \i \mathbb{R} = \emptyset$, we can apply \cite[Theorem 1]{batty-non-uniform} to obtain the desired semi-uniform stability property \cref{eq:semi-uniform}.
\end{proof} \section{Resolvent estimate and polynomial decay rate} \label{sec:pdr} The main technical contribution of our paper is the following resolvent estimate. \begin{proposition} \label{prop:resolvent-estimate} Under the geometrical conditions of Theorem \ref{th:pdr}, we have \begin{equation} \label{eq:resolvent-estimate} \sup_{\omega \in \mathbb{R}, |\omega| \geq 1} \frac{1}{\omega^2} \left\| (\mathcal{A} + \i \omega \Id )^{-1} \right\|_{\L(\H)} < + \infty. \end{equation} \end{proposition} Assume for a moment that Proposition \ref{prop:resolvent-estimate} is established. \begin{proof}[Proof of Theorem \ref{th:pdr}] We recall that $\{\S_t\}$ is a bounded semigroup with generator $-\mathcal{A}$ that satisfies $\spe(\mathcal{A}) \cap \i \mathbb{R} = \emptyset$. Thus, we can apply \cite[Theorem 2.4]{borichev_polynomial} to deduce from \cref{eq:resolvent-estimate} that for some $C > 0$, \begin{equation} \|\S_t (\mathcal{A} + \Id)^{-1} \|_{\L(\H)} \leq C t^{-1/2}. \end{equation} The operator $(\mathcal{A} + \Id)^{-1}$ is an isomorphism between $\H$ and $\mathcal{D}(\mathcal{A})$ endowed with the graph norm, hence \cref{eq:pdr}. \end{proof} We now give the proof of the desired resolvent estimate. \begin{proof}[Proof of Proposition \ref{prop:resolvent-estimate}] We proceed by contradiction. Assume there exist sequences of real numbers $\omega_n$ with $|\omega_n| \to + \infty$ and vectors $X_n = [u_n, v_n] \in \mathcal{D}(\mathcal{A})$ with $\|X_n\|_\H = 1$ such that \begin{equation} \label{eq:cont-Xn} \omega_n^{2} \| \mathcal{A}X_n + \mathrm{i} \omega_n X_n \|_\H \to 0. \end{equation} By passing to a subsequence, we may assume that the $\omega_n$ all have the same sign; replacing each $X_n$ by its complex conjugate if needed (which changes $\omega_n$ into $-\omega_n$ in \cref{eq:cont-Xn}, since $\mathcal{A}$ has real coefficients), we may further assume that all $\omega_n$ are positive. We shall obtain a contradiction by proving that $\|X_n\|_\H \to 0$ as $n$ goes to $+ \infty$. The proof is split into several steps as it involves some back and forth between estimates on $\Omega$, $\Gamma_0$, and $\Gamma_1$. \textbf{Step 1: Obtaining Helmholtz-like equations.} We start by detailing \cref{eq:cont-Xn}: \begin{subequations} \label{eq:cont-Xn-detailed} \begin{align} \label{eq:v-u} &\omega_n^{2} (-v_n +\i \omega_n u_n) \to 0 &&\mbox{in}~H^1(\Omega), \\ \label{eq:v-u-gamma} &\omega_n^{2} (-v_n +\i \omega_n u_n) \to 0 &&\mbox{in}~H^1(\Gamma_0), \\ \label{eq:Dv-u} &\omega_n^{2} ( -\Delta u_n + \i \omega_n v_n) \to 0 & &\mbox{in}~ L^2(\Omega), \\ \label{eq:Dv-u-gamma} &\omega_n^{2}( - \Delta_\Gamma u_n + \partial_\nu u_n + \i \omega_n v_{n} ) \to 0 & & \mbox{in}~ L^2(\Gamma_0). \end{align} \end{subequations} Plugging \cref{eq:v-u} into \cref{eq:Dv-u} and \cref{eq:v-u-gamma} into \cref{eq:Dv-u-gamma} yields \begin{subequations} \label{eq:comp-vn} \begin{align} &\omega_n(-\Delta u_n - \omega_n^2 u_n) \to 0 &&\mbox{in}~L^2(\Omega),\\ &\omega_n(-\Delta_\Gamma u_n - \omega_n^2 u_n + \partial_\nu u_n) \to 0 &&\mbox{in}~ L^2(\Gamma_0).
\end{align} \end{subequations} Let us reformulate \cref{eq:comp-vn} as follows: there exist sequences $\{f_n\} \subset L^2(\Omega)$ and $\{g_n\} \subset L^2(\Gamma_0)$ such that \begin{subequations} \label{eq:helmhotz} \begin{align} \label{eq:wave} &- \Delta u_n - \omega_n^2 u_n = f_n &&\mbox{in}~ \Omega,\\ \label{eq:wave-gamma} &- \Delta_\Gamma u_n - \omega_n^2 u_n = - \partial_\nu u_n + g_n &&\mbox{on}~ \Gamma_0, \end{align} \end{subequations} with, using Landau notation, \begin{equation} \label{eq:est-rhs} \|f_n\|_{L^2(\Omega)} = o(\omega_n^{-1}) \quad \mbox{and}~ \|g_n\|_{L^2(\Gamma_0)} = o(\omega_n^{-1}). \end{equation} \textbf{Step 2: Estimate of the feedback term.} Coming back to \cref{eq:cont-Xn}, we have \begin{equation} \label{eq:X-F} \mathcal{A} X_n + \i \omega_n X_n = F_n, \end{equation} where $\{F_n \} \subset \H$ is such that $\|F_n\|_\H = o(\omega_n^{-2})$. Take the real part of the scalar product of \cref{eq:X-F} with $X_n$ to obtain \begin{equation} \label{eq:scalar} \re (\mathcal{A} X_n, X_n)_\H = \re (F_n, X_n)_\H = o(\omega_n^{-2}). \end{equation} Recalling the identity \cref{eq:ax-x}, it follows from \cref{eq:scalar} that \begin{equation} \label{eq:trace-vn} \int_{\Gamma_1} |v_n|^2 \, \mathrm{d} \sigma = o(\omega_n^{-2}). \end{equation} Estimate \cref{eq:trace-vn}, together with \cref{eq:v-u} and the continuity of the trace operator from $H^1(\Omega)$ to $L^2(\Gamma_1)$, yields $ \int_{\Gamma_1} |u_n|^2 \, \d \sigma = o(\omega_n^{-4})$. Furthermore, since each $X_n$ is in $\mathcal{D}(\mathcal{A})$, $\partial_\nu u_n + u_n = -\alpha v_n$ on $\Gamma_1$, and thus \begin{equation} \label{eq:est-feedback} \int_{\Gamma_1} |\partial_\nu u_n|^2 \, \d \sigma = o(\omega_n^{-2}). \end{equation} \textbf{Step 3: Estimate of the coupling term.} It is assumed that $\overline{\Gamma_0} \cap \overline{\Gamma_1} = \emptyset$. As a consequence, there exists a vector field $\tilde{h} \in \mathcal{C}^2(\overline{\Omega})^d$ such that $\tilde{h} = \nu$ on $\Gamma_0$ and $\tilde{h} = 0$ on $\Gamma_1$. Multiplying \cref{eq:wave} by $2\tilde{h}\cdot \nabla \overline{u_n}$ and integrating over $\Omega$ leads to the following classical trace identity: \begin{multline} \label{eq:trace} \int_{\Gamma_0} \omega_n^2 |u_n|^2 - \|\nabla u_n\|^2 \, \mathrm{d} \sigma = 2 \re \int_\Omega [J_{\tilde{h}}\nabla u_n] \cdot \nabla \overline{u_n} \, \d x - \re \int_{\Gamma_0} \partial_\nu u_n [2\tilde{h}\cdot \nabla \overline{u_n}] \, \d \sigma \\ - \re \int_\Omega f_n [2\tilde{h}\cdot \nabla \overline{u_n} ] \, \d x + \int_\Omega \{ \omega_n^2 |u_n|^2 - \| \nabla u_n\|^2 \} \operatorname{div} \tilde{h} \, \mathrm{d}x, \end{multline} where $J_{\tilde{h}} = [\partial_j \tilde{h}_i]_{ij}$ is the Jacobian matrix of $\tilde{h}$. For the construction of $\tilde{h}$ or computations leading to \cref{eq:trace}, the reader is referred to \cite[Lemmas 2.1 and 2.3]{komornik_exact_1994}. Furthermore, since $\tilde{h} = \nu$ on $\Gamma_0$ and $\nu \cdot \nabla \overline{u_n} = \partial_\nu \overline{u_n}$, \begin{equation} \label{eq:h-para} \re \int_{\Gamma_0} \partial_\nu u_n [2\tilde{h} \cdot \nabla \overline{u_n}] \, \d \sigma = 2 \int_{\Gamma_0} |\partial_\nu u_n|^2 \, \d \sigma. \end{equation} Now, recall that $\|X_n\|_\H = 1$.
In particular, $u_n$ and $v_n$ are bounded in $V$ and $H$ respectively, which implies: \begin{itemize} \item $\nabla u_n$ and $\nabla_\Gamma u_n$ are bounded in $L^2(\Omega)^d$ and $L^2(\Gamma_0)^{d-1}$ respectively; \item By \cref{eq:v-u}-\cref{eq:v-u-gamma}, $\omega_n u_n$ is bounded in $L^2(\Omega)$ and $\omega_n u_{n|\Gamma_0}$ is bounded in $L^2(\Gamma_0)$. \end{itemize} Therefore, it follows from \cref{eq:tan-riem}, \cref{eq:trace} and \cref{eq:h-para} that \begin{equation} \label{eq:est-coupling} \int_{\Gamma_0} |\partial_\nu u_n|^2 \, \mathrm{d}\sigma = O(1). \end{equation} \textbf{Step 4: Multiplier identity.} Let $\varepsilon > 0$, to be fixed later on, and define $ \mathcal{M}\overline{u_n} \triangleq 2h\cdot \nabla \overline{u_n} + (\dive h - \varepsilon) \overline{u_n} $, where the vector field $h$ is defined in the hypotheses of Theorem \ref{th:pdr} and $\dive$ stands for the divergence. The multiplier identity \begin{multline} \label{eq:wave-mult} 2 \operatorname{Re} \int_\Omega [J_h \nabla u_n] \cdot \nabla \overline{u_n} \, \mathrm{d}x + \varepsilon \int_\Omega \omega_n^2 |u_n|^2 - \|\nabla u_n\|^2 \, \mathrm{d}x \\ = (1 - \varepsilon) \operatorname{Re} \int_{\Gamma} \partial_\nu u_n \overline{u_n} \operatorname{div}h \, \mathrm{d}\sigma + \operatorname{Re} \int_{\Gamma} \partial_\nu u_n [2 h\cdot \nabla \overline{u_n}] \, \mathrm{d}\sigma \\ + \int_{\Gamma} (h\cdot \nu) \{ \omega_n^2 |u_n|^2 - \| \nabla u_n\|^2\} \, \mathrm{d}\sigma + \operatorname{Re} \int_\Omega f_n \mathcal{M}\overline{u_n} \, \mathrm{d}x \end{multline} is obtained in the standard way by multiplying \cref{eq:wave} by $\mathcal{M} \overline{u_n}$, integrating over $\Omega$, and performing a series of integrations by parts -- see, e.g., the proof of \cite[Theorem 4.1]{lasiecka_uniform_1992} for similar computations. Recalling Item \ref{it:jacobian} in the hypotheses, we choose $\varepsilon < 2\rho$ to obtain \begin{equation} \label{eq:wave-mult-bis} 0 \leq (2 \rho - \varepsilon) \int_{\Omega} \|\nabla u_n\|^2 \, \mathrm{d}x + \varepsilon \int_\Omega \omega_n^2 |u_n|^2 \, \mathrm{d}x \leq \mbox{Right-hand side of \cref{eq:wave-mult}.} \end{equation} In what follows, $\eta$ denotes an arbitrary number taken in $(0, 1)$. Since $\| \omega_n u_n \|_H = O(1)$, we have \begin{equation} \label{eq:est-un} \|u_n\|_{H} = o(\omega_n^{\eta - 1}). \end{equation} \textbf{Step 5: Estimates on the boundary.} We start with the integrals on $\Gamma_0$. First, $h\cdot \nu$ is smooth, so that $(h\cdot \nu) \overline{u_n}$ belongs to $H^1(\Gamma_0)$, with $ \nabla_\Gamma [(h\cdot\nu)\overline{u_n}] = (h\cdot\nu) \nabla_\Gamma \overline{u_n} + \overline{u_n} \nabla_\Gamma [h\cdot \nu] $. Thus, multiplying \cref{eq:wave-gamma} by $(h\cdot \nu) \overline{u_n}$, integrating over $\Gamma_0$, and using the Green formula \cref{eq:div-manifold} leads to \begin{multline} \label{eq:elliptic-gamma0} \int_{\Gamma_0} (h \cdot \nu) \{ \|\nabla_\Gamma u_n\|^2 - \omega_n^2 |u_n|^2\} \, \mathrm{d}\sigma = \int_{\Gamma_0} (h \cdot \nu) g_n \overline{u_n} \, \mathrm{d}\sigma \\ - \int_{\Gamma_0} (\nabla_\Gamma u_n \cdot \nabla_\Gamma [h\cdot \nu]) \overline{u_n} \, \d \sigma - \int_{\Gamma_0} (h \cdot \nu)\partial_\nu u_n \overline{u_n} \, \mathrm{d}\sigma.
\end{multline} Using a series of Cauchy-Schwarz inequalities, we deduce from \cref{eq:est-rhs}, \cref{eq:est-coupling}, \cref{eq:est-un}, and \cref{eq:elliptic-gamma0} that \begin{equation} \label{eq:est-gamma0} \int_{\Gamma_0} (h \cdot \nu) \{ \|\nabla_\Gamma u_n\|^2 - \omega_n^2 |u_n|^2\} \, \mathrm{d}\sigma = o(\omega_n^{\eta -1}). \end{equation} In view of \cref{eq:wave-mult}, we also note that because $h = (h\cdot \nu) \nu$ on $\Gamma_0$ (Item \ref{it:gamma0} in the hypotheses of Theorem \ref{th:pdr}), we have \begin{equation} \label{eq:tangential-vanish} \re \int_{\Gamma_0} \partial_\nu u_n [2h\cdot \nabla \overline{u_n}] \, \d \sigma = 2\int_{\Gamma_0} (h\cdot \nu) |\partial_\nu u_n|^2 \, \d \sigma. \end{equation} Now we deal with the integrals on $\Gamma_1$ that appear in \cref{eq:wave-mult}. Using the fact that $(h\cdot \nu) \geq m > 0$ on $\Gamma_1$ (Item \ref{it:gamma1}) together with the Cauchy-Schwarz and Young inequalities, we get \begin{multline} \label{eq:cs-young-gamma1} \int_{\Gamma_1} (h\cdot \nu) \{ \omega_n^2 |u_n|^2 - \| \nabla u_n\|^2\} \, \mathrm{d}\sigma + \operatorname{Re} \int_{\Gamma_1} \partial_\nu u_n [2 h\cdot \nabla \overline{u_n}] \, \mathrm{d}\sigma \\ \leq \int_{\Gamma_1} (h\cdot \nu) \omega_n^2 |u_n|^2 \, \d \sigma + \frac{1}{2m} \int_{\Gamma_1} |\partial_\nu u_n|^2 \, \d \sigma. \end{multline} \textbf{Step 6: Estimate of the interior energy.} Bearing in mind the sign conditions prescribed for $(h \cdot \nu)$ on each part of $\Gamma$, we combine \cref{eq:wave-mult}, \cref{eq:wave-mult-bis}, \cref{eq:tangential-vanish}, and \cref{eq:cs-young-gamma1} to obtain \begin{multline} \label{eq:wave-mult-ter} (2 \rho - \varepsilon) \int_{\Omega} \|\nabla u_n\|^2 \, \mathrm{d}x + \varepsilon \int_\Omega \omega_n^2 |u_n|^2 \, \mathrm{d}x \leq (1 - \varepsilon) \operatorname{Re} \int_{\Gamma} \partial_\nu u_n \overline{u_n} \operatorname{div}h \, \mathrm{d}\sigma \\ + \operatorname{Re} \int_\Omega f_n \mathcal{M}\overline{u_n} \, \mathrm{d}x + \int_{\Gamma_0} (h\cdot \nu) \{ \omega_n^2 |u_n|^2 - \| \nabla u_n\|^2\} \, \mathrm{d}\sigma \\ + \int_{\Gamma_1} (h\cdot \nu) \omega_n^2 |u_n|^2 \, \d \sigma + \frac{1}{2m} \int_{\Gamma_1} |\partial_\nu u_n|^2 \, \d \sigma. \end{multline} Next, we deduce from \cref{eq:wave-mult-ter} combined with \cref{eq:est-rhs}, \cref{eq:est-coupling}, \cref{eq:est-un}, and \cref{eq:est-gamma0} that \begin{equation} \label{eq:est-interior} \int_\Omega \|\nabla u_n\|^2 + \omega_n^2 |u_n|^2 \, \d x = o(\omega_n^{\eta - 1}). \end{equation} \textbf{Step 7: Refined estimate of the coupling term.} We can now use \cref{eq:est-interior} to improve our prior estimate \cref{eq:est-coupling}. We come back to \cref{eq:trace} and \cref{eq:h-para}: \begin{multline} \label{eq:id-ref} \int_{\Gamma_0} \omega_n^2 |u_n|^2 - \|\nabla_{\Gamma} u_n\|^2 + |\partial_\nu u_n|^2 \, \mathrm{d} \sigma = \int_\Omega \{ \omega_n^2 |u_n|^2 - \| \nabla u_n\|^2 \} \operatorname{div} \tilde{h} \, \mathrm{d}x \\ - \re \int_\Omega f_n [2\tilde{h}\cdot \nabla \overline{u_n} ] \, \d x + 2 \re \int_\Omega [J_{\tilde{h}}\nabla u_n] \cdot \nabla \overline{u_n} \, \d x. \end{multline} As in Step 5, we obtain another expression of the integral over $\Gamma_0$ of $\omega_n^2 |u_n|^2 - \|\nabla_\Gamma u_n\|^2$ by multiplying \cref{eq:wave-gamma} by $\overline{u_n}$ and integrating over $\Gamma_0$.
Then, \cref{eq:id-ref} yields \begin{multline} \label{eq:normal-der} \int_{\Gamma_0} |\partial_\nu u_n|^2 \, \d \sigma = \int_\Omega \{ \omega_n^2 |u_n|^2 - \| \nabla u_n\|^2 \} \operatorname{div} \tilde{h} \, \mathrm{d}x - \re \int_\Omega f_n [2\tilde{h}\cdot \nabla \overline{u_n} ] \, \d x \\ + 2 \re \int_\Omega [J_{\tilde{h}}\nabla u_n] \cdot \nabla \overline{u_n} \, \d x + \int_{\Gamma_0} g_n \overline{u_n} \, \d \sigma - \int_{\Gamma_0} \partial_\nu u_n \overline{u_n} \, \d \sigma. \end{multline} It follows from \cref{eq:est-rhs}, \cref{eq:est-un}, \cref{eq:est-interior}, and \cref{eq:normal-der} that \begin{equation} \label{eq:coupling-bis} \int_{\Gamma_0} |\partial_\nu u_n|^2 \, \d \sigma = o(\omega_n^{\eta - 1}). \end{equation} \textbf{Step 8: Conclusion.} In turn, we can now use \cref{eq:coupling-bis} to refine the estimate \cref{eq:est-interior} of the interior energy. More precisely, using the Cauchy-Schwarz inequality, we infer from \cref{eq:est-un} and \cref{eq:coupling-bis} that \begin{equation} \label{eq:refined-coupling} \re \int_{\Gamma_0} \partial_\nu u_n \overline{u_n} \dive h \, \d \sigma = o(\omega_n^{3(\eta - 1)/2}) \end{equation} for any $\eta \in (0, 1)$. We let $\eta = 1/3$; then, by plugging \cref{eq:refined-coupling} into \cref{eq:wave-mult-ter} we finally obtain (compare with \cref{eq:est-interior}) \begin{equation} \label{eq:est-interior-bis} \int_\Omega \|\nabla u_n\|^2 + \omega_n^2 |u_n|^2 \, \d x = o(\omega_n^{-1}). \end{equation} We are now in position to conclude. We recall the multiplicative trace inequality $\|w\|^2_{L^2(\Gamma_0)} \leq K \|w\|_{H^1(\Omega)} \|w\|_{L^2(\Omega)}$, which follows from the continuity of the trace operator on $H^s(\Omega)$ for $s > 1/2$ combined with linear interpolation between Sobolev spaces: \begin{equation} \label{eq:interp} \begin{aligned} \int_{\Gamma_0} \omega_n^2 |u_n|^2 \, \d \sigma& \leq K \omega_n^2 \|u_n\|_{H^{1}(\Omega)} \|u_n\|_{L^2(\Omega)} \\ & = K \omega_n \|u_n\|_{H^{1}(\Omega)} \| \omega_n u_n\|_{L^2(\Omega)}. \end{aligned} \end{equation} By \cref{eq:est-interior-bis}, we have \begin{equation} \|u_n\|_{H^1(\Omega)} = o(\omega_n^{-1/2}), \quad \| \omega_n u_n\|_{L^2(\Omega)} = o(\omega_n^{-1/2}). \end{equation} Therefore, \cref{eq:interp} yields \begin{equation} \int_{\Gamma_0} \omega_n^2 |u_n|^2 \, \d \sigma = o(1). \end{equation} In sum, after multiplying \cref{eq:wave-gamma} by $\overline{u_n}$, we finally obtain \begin{multline} \int_{\Omega} \omega_n^2 |u_n|^2 + \| \nabla u_n \|^2 \, \mathrm{d}x + \int_{\Gamma_0} \omega_n^2 |u_n|^2 + \|\nabla_\Gamma u_n\|^2 \, \mathrm{d}\sigma + \int_{\Gamma_1} |u_n|^2 \, \mathrm{d}\sigma = o(1), \end{multline} which contradicts $\|X_n\|_\H = 1$. \end{proof} \section*{Acknowledgements} This work has been partially supported by MIAI@Grenoble Alpes (ANR-19-P3IA-0003). \bibliographystyle{alpha}
{ "timestamp": "2022-09-23T02:11:18", "yymm": "2209", "arxiv_id": "2209.10872", "language": "en", "url": "https://arxiv.org/abs/2209.10872" }
\section{Introduction} Fermi polaron models correspond to a general class of quantum many-body problems in which a single impurity interacts with a bath of fermions. Historically, theoretical work in this area started with the analysis of models with infinitely heavy impurities, which exhibit the phenomenon of orthogonality catastrophe~\cite{anderson1967}. The latter is an observation of P. W. Anderson that even a weak impurity potential results in the creation of an infinite number of low-energy particle-hole excitations. Orthogonality catastrophe plays an important role in several areas of physics, including X-ray scattering \cite{ohtaka1990,mahan2000}, photoemission \cite{Anderson1969,Anderson1970,tanabe1985orthogonality}, transport in mesoscopic systems \cite{hentschel2005,nazarov2009quantum}, and radiofrequency (RF) and Rydberg spectroscopies in ultracold atoms \cite{schirotzek2009,zhang2012,Richard2016,Yuto2019}. Dynamics in polaronic systems becomes even richer when impurity particles are endowed with internal degrees of freedom. The simplest example is adding spin states to a localized impurity, corresponding to the Kondo model. This class of systems exhibits such striking phenomena as non-monotonic temperature dependence of resistivity in metals with magnetic impurities \cite{sarachik1964}, formation of heavy fermion materials \cite{hewson1993kondo}, and even emergence of non-Fermi liquid states \cite{gegenwart2008quantum,QCP2020}. Another way of enriching impurity dynamics is to make the impurities mobile. Models of mobile Fermi polarons were first considered in the context of He$^{4}$/He$^{3}$ mixtures, ions in the normal liquid of He$^{3}$, and diffusion of muons in metals \cite{dMuon1998}. In comparison to the infinitely heavy impurity models, a new feature of such systems comes from the finite recoil energy of the impurity particle. This appears as a constraint on the scattering processes of the bath fermions and raises the question of whether the states with and without the impurity-bath coupling are orthogonal to each other. For infinitely heavy impurities, we have the orthogonality catastrophe, which means that the two states are orthogonal, while for heavy but finite-mass impurities, the answer was argued to depend on dimensionality~\cite{rosch1999quantum}. In two- and three-dimensional systems, the two states are expected to have a finite overlap, which in turn implies a finite quasiparticle weight, whereas in one-dimensional systems, the quasiparticle weight can be proven to vanish~\cite{castella1993,Dolgirev2020}. It is interesting to note, however, that developing accurate theoretical models for describing properties of mobile impurities interacting with a Fermi bath remains a considerable theoretical challenge. Earlier studies of mobile Fermi polarons have been motivated by two primary considerations. On the one hand, they provided a concrete example of the emergence of friction in a purely quantum-mechanical system~\cite{astrakharchik2004motion,cherny2012theory}. On the other hand, the issue of quasiparticle weight was considered as a paradigmatic case study of the concept of quasiparticles in strongly interacting Fermi systems. Renewed interest in the study of Fermi polarons came with the progress of experiments in the field of ultracold atoms.
These systems make it possible to realize Fermi polarons with different mass ratios of the impurity and bath particles and tune impurity-bath interaction strength using magnetic Feshbach resonances~\cite{schirotzek2009,schmidt2011excitation,zhang2012,kohstall2012,koschorreck_attractive_2012,cetina2016,scazza2017,yan2019,ness2020observation}. The tunability of microscopic interactions brings a new feature of the interplay of few- and many-body aspects of the problem. In particular, Feshbach resonance itself corresponds to the appearance of a bound state in a two-body problem~\cite{chin2010}. An interesting question then is whether one finds a transition between molecular and polaronic ground states in a many-body system. In the former case, the impurity atom makes a bound state with one of the bath fermions, accompanied by the vanishing quasiparticle weight. In the polaronic case, the impurity interacts with many bath particles and forms a state that has finite quasiparticle weight. In three-dimensional systems, there is strong numerical \cite{prokofev2008a,prokofev2008b} and experimental \cite{schirotzek2009} evidence for the polaron-to-molecule transition. Notably, one finds that in the case of equal masses of impurity and bath particles, one can obtain a good description of many-body polaronic states by including only a single particle-hole excitation~\cite{chevy2006}. These so-called Chevy ansatz (CA) wave functions work surprisingly well even at unitarity when the scattering length diverges~\cite{combescot2007,combescot2008}. In two-dimensional (2D) systems, analysis based on CA suggested that the ground state should always be of the polaronic type~\cite{zollner_polarons_2011} (for equal masses of the impurity and bath fermions). However, analysis that extended CA to include two particle-hole excitations supported the existence of the polaron-to-molecule transition~\cite{parish_polaron-molecule_2011, Parish2013, cui2020, Cheng2021}. The most recent addition to the experimental platforms for exploring Fermi polarons utilizes excitons and electrons (or holes) in transition metal dichalcogenides (TMDs)~\cite{sidler2017}. In contrast to traditional Si and GaAs semiconductors, TMDs have a smaller dielectric constant and heavier electron mass, resulting in much stronger binding energy and smaller size of an exciton. For a broad range of electron densities used in experiments, the size of excitons is much smaller than a typical inter-electron distance. Hence excitons can be treated as impurities when analyzing their interaction with electrons. Furthermore, there is an effective Feshbach resonance between electrons and excitons, which manifests itself in the repulsive and attractive branches in the absorption spectra. These branches are strongly reminiscent of the Fermi polaron spectra measured in two-dimensional systems of ultracold fermions~\cite{sidler2017,efimkin2017,Christian2020,kuhlenkamp2022}.
It also guarantees that in the limit of infinitely heavy impurities, our solution reduces to the exact one based on the Functional Determinant Approach~\cite{levitov1996,knap2012,schmidt2018universal}. Additional motivation to employ the non-Gaussian states is that they capture remarkably well the physics of the 1D Fermi polaron~\cite{mcguire_interacting_1965,mcguire_interacting_1966,mathy2012quantum,knap2014quantum,gamayun2016time,gamayun2018impact,gamayun2020zero,Dolgirev2020}. The main advantage of our method is the possibility of analyzing non-equilibrium properties~\cite{knap2012,schmidt2012,parish2016quantum,schmidt2018universal,gamayun2018impact,liu2019variational,liu2020theory,adlong2020quasiparticle,burovski2021mobile} of polaronic systems, including various spectral functions. As part of our analysis, we introduce new characteristics of Fermi-polaron systems, namely the molecular residue and the molecular spectral function. These quantities provide a complementary perspective on the polaron-to-molecule transition. We discuss how they can be measured in experiments with ultracold atoms. This paper is organized as follows: In Sec.~\ref{sec: formalism}, we introduce the general non-Gaussian approach, which includes the ground-state optimization via the imaginary-time evolution, the study of the real-time dynamics by projecting the Schr\"{o}dinger equation on the variational manifold, and the linear-response analysis by linearizing the equations of motion around the ground-state configuration. In Sec.~\ref{sec: static properties}, a first-order polaron-to-molecule transition is identified, where both the ground-state energy and the single-particle residue are in excellent agreement with those from CA and diagrammatic Monte Carlo (DMC) calculations. Section~\ref{sec: dynamical properties} is dedicated to far-from-equilibrium dynamics of the polaronic system. There, we compute various spectral properties and introduce two types of RF spectroscopies to quantify the polaron-to-molecule transition. The spectral functions exhibit distinctive dynamical behaviors in the polaronic and molecular phases, such as long-lived oscillations between the repulsive and attractive polarons and fast relaxation to the molecular state from different initial states. Finally, the main results are briefly summarized in Sec.~\ref{sec: summary}. \section{\label{sec: formalism}Formalism} This section introduces the non-Gaussian formalism to study the Fermi polaron in two spatial dimensions. In the first subsection, we formulate the model of a single impurity in the 2D Fermi gas and apply the Lee-Low-Pines (LLP) transformation~\cite{LLP} that decouples the impurity from the fermionic degrees of freedom. We then, in the second subsection, introduce the non-Gaussian variational states, which allow us to investigate the ground-state properties and the real-time dynamics. Up to this stage, our framework closely follows that used in Ref.~\cite{Dolgirev2020} to study the one-dimensional Fermi polaron. The two-dimensional problem is much more challenging due to the large number of the involved degrees of freedom. Consequently, numerical simulations are limited to small system sizes. To overcome this issue, in the third subsection, we utilize the rotational symmetry of the problem, which in turn allows us to efficiently model even relatively large systems. The fourth subsection is dedicated to the linear-response theory within the formalism of the non-Gaussian states.
\subsection{Model} A single mobile impurity immersed in a 2D Fermi gas is described via the following microscopic Hamiltonian: \begin{align} H=H_{\mathrm{b}}+H_{\mathrm{imp}}+H_{\mathrm{int}}, \label{eqn:Ham_rs} \end{align} where \begin{equation} H_{\mathrm{b}}=-\frac{1}{2m_{\mathrm{b}}}\int d^{2}x \, \hat{\Psi}^{\dagger }(\mathbf{x})\nabla ^{2}\hat{\Psi}(\mathbf{x}) \end{equation} represents the kinetic energy of the fermionic bath. The kinetic energy of the impurity is given by: \begin{equation} H_{\mathrm{imp}}=-\frac{1}{2m_{\mathrm{imp}}}\int d^{2}x \, \hat{\Psi}_{\mathrm{imp}}^{\dagger }(\mathbf{x})\nabla ^{2}\hat{\Psi}_{\mathrm{imp}}(\mathbf{x}). \end{equation} The contact interaction term reads: \begin{equation} H_\mathrm{int}=g\int d^{2}x\, \hat{\Psi}^{\dagger }(\mathbf{x})\hat{\Psi}(\mathbf{x})\hat{\Psi}_{\mathrm{imp}}^{\dagger }(\mathbf{x})\hat{\Psi}_{\mathrm{imp}}(\mathbf{x}), \label{eqn:Hint_rs} \end{equation} where $\hat{\Psi}(\mathbf{x})$ and $\hat{\Psi}^{\dagger }(\mathbf{x})$ ($\hat{\Psi}_{\mathrm{imp}}(\mathbf{x})$ and $\hat{\Psi}_{\mathrm{imp}}^{\dagger }(\mathbf{x})$) denote the fermionic (impurity) annihilation and creation operators, respectively; they obey the fermionic anti-commutation relations. Here $m_{\mathrm{b}}$ and $m_{\mathrm{imp}}$ are the fermion and impurity masses, respectively. In the 2D gas, the attractive interaction strength $g$ is related to the 2D scattering length $a_{\mathrm{2D}}$ via the Lippmann-Schwinger equation: \begin{equation} \frac{1}{g}=-\frac{1}{L^2}\sum_{|\mathbf{k}|<k_{\Lambda }}\frac{1}{E_{B}+ k^2/(2m_{\mathrm{r}})}, \end{equation} where $E_{B}=1/(2m_{\mathrm{r}}a_{\mathrm{2D}}^{2})$ is the binding energy of the weakly bound diatomic molecule, $m_{\mathrm{r}}=m_{\mathrm{b}}m_{\mathrm{imp}}/(m_{\mathrm{b}}+m_{\mathrm{imp}})$ denotes the reduced mass, $L$ is the linear system size, and $k_{\Lambda }$ is the ultraviolet (UV) momentum cutoff. In momentum space, the Hamiltonian reads: \begin{align} H = &\sum_{\mathbf{k}}(\varepsilon _{\mathrm{b},\mathbf{k}}c_{\mathbf{k}}^{\dagger }c_{\mathbf{k}} + \varepsilon _{\mathrm{imp},\mathbf{k}}f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}}) \nonumber \\ &+\frac{g}{L^2}\sum_{\mathbf{k},\mathbf{p},\mathbf{q}}f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}+\mathbf{q}}c_{\mathbf{p}}^{\dagger }c_{\mathbf{p-q}}, \end{align} where $\varepsilon _{\sigma =\mathrm{b,imp},\mathbf{k}}=k^{2}/(2m_{\sigma })$ and $\mathbf{k}=2\pi (n_{x},n_{y})/L$ with integer $n_{x}$ and $n_{y}$. We turn to discuss the Lee-Low-Pines (LLP) transformation $U_{\mathrm{LLP}}=e^{-i\mathbf{Q}_{\mathrm{b}}\cdot \mathbf{X}_{\mathrm{imp}}}$, which allows one to simplify the model by eliminating the impurity degrees of freedom. This unitary transformation relies on the conservation of the total momentum, $[\mathbf{Q}_{\mathrm{tot}},H]=0$, where $\mathbf{Q}_{\mathrm{tot}}= \mathbf{Q}_{\mathrm{imp}} + \mathbf{Q}_{\mathrm{b}}$. Here $\mathbf{Q}_{\mathrm{imp}}=\sum_{\mathbf{k}}\mathbf{k}f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}}$ and $\mathbf{Q}_{\mathrm{b}}=\sum_{\mathbf{k}}\mathbf{k}c_{\mathbf{k}}^{\dagger }c_{\mathbf{k}}$ represent the momenta of the impurity and the Fermi bath, respectively. We also defined $\mathbf{X}_{\mathrm{imp}}=\int d^{2}x\,\mathbf{x}\,\hat{\Psi}_{\mathrm{imp}}^{\dagger }(\mathbf{x})\hat{\Psi}_{\mathrm{imp}}(\mathbf{x})$ to be the impurity position operator.
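Before turning to the LLP frame in detail, we note that the regularized coupling $g$ above is straightforward to evaluate on a finite momentum grid. The following sketch (our illustration, with arbitrary parameter values) computes $g$ for a given binding energy $E_B$ and cutoff $k_\Lambda$; the result is negative, i.e., the contact interaction is attractive.

\begin{verbatim}
# Illustration only: bare coupling g from the Lippmann-Schwinger relation
#   1/g = -(1/L^2) sum_{|k| < k_Lambda} 1/(E_B + k^2/(2 m_r)),
# evaluated on the finite momentum grid k = 2*pi*(nx, ny)/L.
import numpy as np

L, n = 40.0, 128                   # box size; grid points per direction
m_b = m_imp = 1.0
m_r = m_b * m_imp / (m_b + m_imp)  # reduced mass
E_B = 0.5                          # dimer binding energy (arbitrary units)
k_Lambda = np.pi * n / L           # UV cutoff tied to the grid

k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2

inv_g = -np.sum(1.0 / (E_B + k2[k2 < k_Lambda**2] / (2.0 * m_r))) / L**2
print("g =", 1.0 / inv_g)          # negative: attractive interaction
\end{verbatim}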
Physically, the LLP transformation simply encodes the fact that the impurity momentum $\mathbf{Q}_{\mathrm{imp}}=U_{\mathrm{LLP}}^{\dagger }\mathbf{Q}_{\mathrm{tot}}U_{\mathrm{LLP}}$ can be reconstructed from the total momentum $\mathbf{Q}_{\mathrm{tot}}$ and the net momentum $\mathbf{Q}_{\mathrm{b}}$ of the host fermions. Under the LLP transformation, the system is transformed into the co-moving frame of the impurity. The transformed Hamiltonian $H_{\mathrm{LLP}}=U_{\mathrm{LLP}}^{\dagger }HU_{\mathrm{LLP}}$ in the single-impurity subspace $\sum_{\mathbf{k}}f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}}=1$ then reads: \begin{align} H_{\mathrm{LLP}} =& \sum_{\mathbf{k}}(\varepsilon _{\mathrm{b},\mathbf{k}}c_{\mathbf{k}}^{\dagger }c_{\mathbf{k}}+\varepsilon _{\mathrm{imp},\mathbf{k-Q}_{\mathrm{b}}}f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}}) \nonumber \\ &+\frac{g}{L^2}\sum_{\mathbf{k}}f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}}\sum_{\mathbf{p},\mathbf{q}}c_{\mathbf{p}}^{\dagger }c_{\mathbf{q}}. \end{align} We note that in the transformed frame, $f_{\mathbf{k}}^{\dagger }f_{\mathbf{k}}$ commutes with $H_{\mathrm{LLP}}$; in other words, $\mathbf{Q}_{\mathrm{imp}}=\mathbf{K}_{0}$ becomes an integral of motion in the co-moving frame, so that the LLP Hamiltonian can be written as: \begin{equation} H_{\mathrm{LLP}}=\sum_{\mathbf{k}}\varepsilon _{\mathrm{b},\mathbf{k}}c_{\mathbf{k}}^{\dagger }c_{\mathbf{k}}+\frac{g}{L^2}\sum_{\mathbf{k},\mathbf{p}}c_{\mathbf{k}}^{\dagger }c_{\mathbf{p}}+\varepsilon _{\mathrm{imp},\mathbf{K}_{0}\mathbf{-Q}_{\mathrm{b}}}. \label{Decoupled Hamiltonian} \end{equation} The impurity degrees of freedom are thus eliminated at the price of introducing a non-local impurity-mediated interaction between the fermions, encoded in the third term of Eq.~\eqref{Decoupled Hamiltonian}. \subsection{\label{sec: variational method}Non-Gaussian variational approach} \label{subsec:NGstates} To study the polaron physics, both in and out of equilibrium, we employ the non-Gaussian family of variational wave functions. Specifically, guided by the LLP transformation, we write the many-body polaronic state in the laboratory frame as: \begin{align} |\Psi _{\mathbf{K}_{0}}\rangle =U_{\mathrm{LLP}}f_{\mathbf{K}_{0}}^{\dagger }|0\rangle_{\mathrm{imp}} \otimes |\Psi _{\mathrm{GS}}\rangle. \label{eqn:NGS_wf} \end{align} Implicit in Eq.~\eqref{eqn:NGS_wf} is that the state $|\Psi _{\mathrm{GS}}\rangle$ represents the fermionic wave function in the co-moving frame. We then choose $|\Psi _{\mathrm{GS}}\rangle$ to be Gaussian~\cite{shi_variational_2018,Dolgirev2020}: \begin{align} |\Psi _{\mathrm{GS}}\rangle =U_{\mathrm{GS}}|\mathrm{FS}\rangle =e^{-i\theta }e^{ic^{\dagger }\xi c}|\mathrm{FS}\rangle, \end{align} where $|\mathrm{FS}\rangle$ describes the Fermi sea set by the Fermi momentum $k_F$. At this stage, our variational parameters are the global phase $\theta$ and the Hermitian matrix $\xi$ written in the Dirac basis $c=\left( c_{\mathbf{k}_{1}},c_{\mathbf{k}_{2}},\ldots ,c_{\mathbf{k}_{N}}\right) ^{\mathrm{T}}$, with $N$ being the total number of fermionic degrees of freedom. We note that even though the wave function in the co-moving frame is factorizable between the impurity and the host fermions, it is highly entangled by $U_{\mathrm{LLP}}$ when expressed in the laboratory frame, cf. Eq.~\eqref{eqn:NGS_wf}. Any variational state applied to many-body problems represents some approximation.
Given that often there are no small parameters or exact solutions, it is crucial to test the validity of any such variational approach. For the 1D Fermi polaron, it was demonstrated in Ref.~\cite{Dolgirev2020} that the non-Gaussian states of the form~\eqref{eqn:NGS_wf} reproduce the exact Bethe ansatz results, both in and out of equilibrium. The validity of the non-Gaussian wave functions in the 2D polaron problem is the subject of the next sections. To optimize for the best variational approximation to the ground state, we employ imaginary-time evolution. For now, instead of $\theta$ and $\xi$, it is more convenient to work with the covariance matrix \begin{align} \Gamma _{ij}\equiv \langle \Psi _{\mathrm{GS}}|c_{i}^{\dagger }c_{j}|\Psi _{\mathrm{GS}}\rangle =U^{\ast }\Gamma _{\mathrm{FS}}U^{\mathrm{T}}, \end{align} where $\Gamma _{\mathrm{FS}}$ is the covariance matrix of the Fermi sea and $U = e^{i\xi}$. Then the projection of the imaginary-time evolution onto the tangential space of the variational manifold gives rise to~\cite{shi_variational_2018}: \begin{equation} \partial _{\tau }\Gamma =-\mathcal{H}_{\mathrm{MF}}^{\mathrm{T}}\Gamma -\Gamma \mathcal{H}_{\mathrm{MF}}^{\mathrm{T}}+2\Gamma \mathcal{H}_{\mathrm{MF}}^{\mathrm{T}}\Gamma. \end{equation} Here we employed Wick's theorem to derive the mean-field Hamiltonian \begin{align} (\mathcal{H}_{\mathrm{MF}})_{\mathbf{kp}} =& \left(\frac{k^{2}}{2m_{\mathrm{r}}}-\frac{\mathbf{K}_{0}\cdot \mathbf{k}}{m_{\mathrm{imp}}}\right)\delta _{\mathbf{kp}}+\frac{g}{L^2} \\ &+\frac{1}{m_{\mathrm{imp}}}\left(\left\langle \mathbf{Q}_{\mathrm{b}}\right\rangle_{\mathrm{GS}} \cdot \mathbf{k}\, \delta _{\mathbf{kp}}-\mathbf{k\cdot p}\left\langle c_{\mathbf{p}}^{\dagger }c_{\mathbf{k}}\right\rangle_{\mathrm{GS}} \right). \nonumber \end{align} For the imaginary-time evolution, the global phase $\theta $ can be chosen arbitrarily, and the variational energy \begin{align} E_{\mathbf{K}_{0}} =&\mathrm{Tr}(\mathcal{H}_{\mathrm{MF}}\Gamma ^{\mathrm{T}})+\frac{1}{2m_{\mathrm{imp}}}\Big(K_{0}^{2}+\left\langle \mathbf{Q}_{\mathrm{b}}\right\rangle^{2}_{\mathrm{GS}} \nonumber \\ &-\sum_{\mathbf{kp}}\mathbf{k\cdot p} \langle c_{\mathbf{k}}^{\dagger}c_{\mathbf{p}}\rangle _{\mathrm{GS}} \langle c_{\mathbf{p}}^{\dagger }c_{\mathbf{k}}\rangle_{\mathrm{GS}} \Big) \label{Energy function} \end{align} decreases monotonically and reaches its ground-state value in the limit $\tau \rightarrow \infty $. The real-time equations of motion are derived from Dirac's variational principle, with the result~\cite{Dolgirev2020}: \begin{eqnarray} \partial _{t}U &=&-i\mathcal{H}_{\mathrm{MF}}U, \\ \partial _{t}\theta &=&E_{\mathbf{K}_{0}}-\mathrm{Tr}\left( \mathcal{H}_{\mathrm{MF}}\Gamma ^{\mathrm{T}}\right) . \end{eqnarray} From this, one can obtain an equation solely for the covariance matrix: \begin{equation} i\partial _{t}\Gamma =\Gamma \mathcal{H}_{\mathrm{MF}}^{\mathrm{T}}-\mathcal{H}_{\mathrm{MF}}^{\mathrm{T}}\Gamma . \label{eq:real-time evolution of the Gamma matrix} \end{equation} This result can alternatively be derived by projecting the Schr\"{o}dinger equation onto the tangential space of the variational manifold~\cite{shi_variational_2018}. As a remark, we note that during either the imaginary-time or the real-time evolution, the total number of fermions is conserved: $d_{\tau ,t}N_{f} = d_{\tau ,t}\mathrm{Tr}(\Gamma )=0$.
This follows from the fact that, provided the initial state is pure, as encoded in $\Gamma^2 = \Gamma$, it remains pure under the evolution. \subsection{Rotational symmetry} Let us consider the situation with zero total momentum $\mathbf{K}_{0}=\mathbf{0}$, where the system is rotationally invariant. When performing the LLP transformation for this case, we work with continuous rather than discretized variables, as in Eqs.~\eqref{eqn:Ham_rs}-\eqref{eqn:Hint_rs}. The rotational symmetry implies that the covariance matrix $\Gamma _{\mathbf{pp}^{\prime }}$ (or any other observable) depends only on $p$, $p^{\prime }$, and the relative angle $\vartheta -\vartheta ^{\prime }$, which allows us to write: \begin{equation} \Gamma _{\mathbf{pp}^{\prime }}=\frac{1}{2\pi \delta _{p}\sqrt{pp^{\prime }}}\sum_{n}\Gamma _{pp^{\prime }}^{n}e^{-in(\vartheta -\vartheta ^{\prime })}. \end{equation} In this expression, the radial momenta $p$ and $p^{\prime }$ in each of the matrices $\Gamma _{pp^{\prime }}^{n}$ have been discretized with spacing $\delta _{p}$. Here, $\Gamma _{pp^{\prime }}^{n}$ is understood as the following covariance matrix: \begin{equation} \Gamma _{pp^{\prime }}^{n}=\langle \Psi _{\mathrm{GS}}|c_{p}^{n\dagger }c_{p^{\prime }}^{n}|\Psi _{\mathrm{GS}}\rangle , \end{equation} where $c_{p}^{n}\propto \int d\vartheta \,c_{\mathbf{p}}e^{-in\vartheta }$, satisfying $\{c_{p}^{n},c_{p^{\prime }}^{n\dagger }\}=\delta _{pp^{\prime }}$, is the annihilation operator in the angular momentum basis. The imaginary-time equations of motion now read: \begin{equation} \partial _{\tau }\Gamma ^{n}=-[\mathcal{H}^{n}]^{\mathrm{T}}\Gamma ^{n}-\Gamma ^{n}[\mathcal{H}^{n}]^{\mathrm{T}}+2\Gamma ^{n}[\mathcal{H}^{n}]^{\mathrm{T}}\Gamma ^{n}, \end{equation} where the mean-field Hamiltonian in the angular momentum channel $n$ is given by: \begin{align} \mathcal{H}_{pp^{\prime }}^{n}=& \frac{p^{2}}{2m_{\mathrm{r}}}\delta _{pp^{\prime }}+\frac{g\delta _{p}\sqrt{pp^{\prime }}}{2\pi }\delta _{n0} \nonumber \\ & -\frac{pp^{\prime }}{2m_{\mathrm{imp}}}\left( \Gamma _{p^{\prime }p}^{n+1}+\Gamma _{p^{\prime }p}^{n-1}\right) . \label{eqn H_n} \end{align} As encoded in the second term in Eq.~\eqref{eqn H_n}, the impurity induces a potential in the zero angular momentum channel only -- this is because we consider contact coupling. We note that eventually the distribution of fermions for $n\neq 0$ also becomes affected, via the inter-channel scattering described by the third term in Eq.~\eqref{eqn H_n}. The real-time evolution for the unitary $U^{n}=\exp \left( i\xi ^{n}\right) $ in the channel $n$ and the global phase $\theta $ read: \begin{eqnarray} \partial _{t}U^{n} &=&-i\mathcal{H}^{n}U^{n}, \\ \partial _{t}\theta &=&E_{0}-\sum_{n=-\infty }^{\infty }\mathrm{Tr}\left( \mathcal{H}^{n}\Gamma ^{n\mathrm{T}}\right) , \label{EOMn} \end{eqnarray} where the energy functional $E_{0}$ for $\mathbf{K}_{0}=\mathbf{0}$ depends on each $\Gamma ^{n}$ and is expressed as: \begin{align} E_{0}=& \sum_{p,n}\frac{p^{2}}{2m_{\mathrm{r}}}\Gamma _{pp}^{n}+\frac{g\delta _{p}}{2\pi }\sum_{p,p^{\prime }}\,\sqrt{pp^{\prime }}\,\Gamma _{pp^{\prime }}^{0} \nonumber \\ & -\frac{1}{2m_{\mathrm{imp}}}\sum_{p,p^{\prime },n}pp^{\prime }\,\Gamma _{pp^{\prime }}^{n}\Gamma _{p^{\prime }p}^{n+1}. \end{align} The main result of this subsection is that the initial two-dimensional problem reduces to simulating coupled one-dimensional ones, which dramatically facilitates the numerical analysis of even relatively large systems.
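To make the structure of these coupled channel equations concrete, the following minimal sketch (in Python; illustrative parameters, $\hbar = k_F = 1$, and a plain explicit-Euler integrator rather than our production scheme) relaxes the channel covariance matrices $\Gamma^n$ under Eq.~\eqref{eqn H_n}:
\begin{verbatim}
import numpy as np

m_b = m_imp = 1.0
m_r = m_b * m_imp / (m_b + m_imp)
g = -0.5                      # bare attractive coupling (illustrative)
dp, k_cut, k_F = 1.0 / 8.0, 5.0, 1.0
n_cut = 3                     # angular-momentum cutoff n_Lambda
dtau, n_steps = 2e-3, 2000

p = dp * np.arange(1, int(k_cut / dp) + 1)       # radial momenta
fermi_sea = np.diag((p < k_F).astype(float))     # Gamma^n of |FS>
G = {n: fermi_sea.copy() for n in range(-n_cut, n_cut + 1)}

def H_channel(n):
    """Mean-field Hamiltonian H^n; channels beyond the cutoff are
    frozen to the Fermi-sea covariance matrix."""
    H = np.diag(p**2 / (2.0 * m_r))
    if n == 0:                 # contact coupling acts in n = 0 only
        H += g * dp * np.sqrt(np.outer(p, p)) / (2.0 * np.pi)
    Gp = G.get(n + 1, fermi_sea)
    Gm = G.get(n - 1, fermi_sea)
    H -= np.outer(p, p) / (2.0 * m_imp) * (Gp + Gm).T
    return H

for _ in range(n_steps):
    Hs = {n: H_channel(n).T for n in G}   # freeze H^n within a step
    for n in G:
        G[n] += dtau * (-Hs[n] @ G[n] - G[n] @ Hs[n]
                        + 2.0 * G[n] @ Hs[n] @ G[n])

# channel particle numbers Tr(Gamma^n) stay (approximately) pinned
print({n: round(float(np.trace(G[n])), 3) for n in (-1, 0, 1)})
\end{verbatim}
The cubic term in the update is what preserves, in continuous imaginary time, the purity $\Gamma^2 = \Gamma$ and hence the channel particle numbers; with a finite Euler step this holds only up to $O(d\tau^2)$ errors. The cutoff handling anticipates the discussion that follows.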
In practice, we introduce a cutoff $n_{\Lambda }$ in angular momentum space such that the covariance matrix for $|n|>n_{\Lambda }$ is replaced by the expectation value for the filled Fermi sea, $\Gamma _{pp^{\prime }}^{n}=\delta _{pp^{\prime }}\theta \left( k_{F}-p\right) $. The value of $n_{\Lambda }$ is determined by the numerical convergence of the results. We remark that for the Gaussian state considered in this subsection, different angular momentum channels are decoupled, and the particle number of each channel is individually conserved, i.e., $d_{\tau ,t}N_{f}^{n}=d_{\tau ,t}\mathrm{Tr}(\Gamma ^{n})=0$. \subsection{Linear response formalism} \label{subsec_CM_analysis} One of the goals of this work is to provide a framework capable of computing observables relevant for both solid-state and ultracold-atom experiments. In this subsection, we focus on linear-response probes, which in turn require a careful analysis of the collective modes representing small-amplitude fluctuations on top of a (momentum-dependent) ground state. We remark that the fluctuation analysis within Gaussian states for a bosonic system has been proven to be equivalent to the generalized random phase approximation and has been successfully applied to reproduce the Goldstone zero-mode, naturally, without imposing the Hugenholtz-Pines condition~\cite{demler1996class, Guaita2019,shi2019trapped}. For the 1D Fermi polaron, collective modes turned out to be crucial for understanding even far-from-equilibrium properties~\cite{Dolgirev2020}. In the LLP frame, the particle-hole excitation spectrum can be analyzed by linearizing Eq.~\eqref{eq:real-time evolution of the Gamma matrix} around the ground-state configuration, characterized by $U_{g}=\mathrm{exp}(i\Xi _{g})$ and $\Gamma _{g}=U_{g}^{\ast }\Gamma _{\mathrm{FS}}U_{g}^{\mathrm{T}}$. We note that the unitary $U_{g}$ diagonalizes the mean-field Hamiltonian, $U_{g}^{\dagger }\mathcal{H}_{\mathrm{MF}}U_{g}=d_{g}$. Small fluctuations $\mathrm{\delta }\Xi$ are encoded in the fermionic wave function as: \begin{equation} e^{i\hat{c}^{\dagger }\Xi _{g}\,\hat{c}}e^{i\hat{c}^{\dagger }\mathrm{\delta }\Xi \hat{c}}|\mathrm{FS}\rangle, \end{equation} where the particle-hole generator $\mathrm{\delta }\Xi $ is an $N\times N$ Hermitian matrix ($N$ is the total number of single-particle modes in the fermionic system). The corresponding unitary matrix becomes $U=U_{g}e^{i\mathrm{\delta }\Xi }$. The gauge redundancy in $\mathrm{\delta }\Xi $ can be eliminated by retaining only those generators that produce a non-vanishing fluctuation of the covariance matrix, $\mathrm{\delta }\Gamma \equiv \Gamma -\Gamma _{g}\sim -iU_{g}^{\ast }[\mathrm{\delta }\Xi ^{\ast },\Gamma _{\mathrm{FS}}]U_{g}^{\mathrm{T}}$. Since the covariance matrix of the state $|\mathrm{FS}\rangle $ composed of $N_f$ fermions is $\Gamma_\mathrm{FS}=\left( \begin{array}{cc} \mathrm{I}_{N_f\times N_f} & 0 \\ 0 & 0 \end{array} \right) $, the condition $\mathrm{\delta }\Gamma \neq 0$ imposes the off-diagonal form $\mathrm{\delta }\Xi =\left( \begin{array}{cc} 0 & \mathrm{\delta }\xi \\ \mathrm{\delta }\xi ^{\dagger } & 0 \end{array} \right) $ with an $N_f\times (N-N_f)$ matrix $\mathrm{\delta }\xi$. In terms of $\mathrm{\delta }\xi$, the fluctuation of the covariance matrix reads \begin{equation} \mathrm{\delta }\Gamma =U_{g}^{\ast }\left( \begin{array}{cc} 0 & i\mathrm{\delta }\xi ^{\ast } \\ -i\mathrm{\delta }\xi ^{\mathrm{T}} & 0 \end{array} \right) U_{g}^{\mathrm{T}}.
\end{equation} Linearization of Eq.~\eqref{eq:real-time evolution of the Gamma matrix} results in \begin{equation} i\partial _{t}\mathrm{\delta }\Xi =[d_{g},\mathrm{\delta }\Xi ]-iU_{g}^{\dagger }\mathrm{\delta }\mathcal{H}U_{g}, \label{eq: linearized EOM of xi} \end{equation} where the fluctuation matrix $\mathrm{\delta }\mathcal{H}$ describing particle-hole interactions is given by: \begin{equation} \delta \mathcal{H}_{\mathbf{kp}}=\delta_{\mathbf{k} \mathbf{p}} \sum_{\mathbf{q}} \frac{\mathbf{k} \cdot \mathbf{q}}{m_{\mathrm{imp}}} \mathrm{\delta }\Gamma_{\mathbf{qq}}-\frac{\mathbf{k} \cdot \mathbf{p}}{m_{\mathrm{imp}}} \delta \Gamma_{\mathbf{pk}}. \end{equation} For $\mathbf{K}_{0}=\mathbf{0}$, following the preceding subsection, we write $\delta \mathcal{H}$ as: \begin{align} \mathrm{\delta }\mathcal{H}_{pn,p^{\prime }n^{\prime }} =&\frac{p\delta _{pp^{\prime }}}{2m_{\mathrm{imp}}}\sum_{q, m,\sigma =\pm 1}q\delta _{n^{\prime }n+\sigma }\mathrm{\delta }\Gamma _{qm+\sigma ,qm} \nonumber \\ & -\frac{pp^{\prime }}{2m_{\mathrm{imp}}}\sum_{\sigma =\pm 1}\mathrm{\delta }\Gamma _{p^{\prime }n^{\prime }+\sigma ,pn+\sigma }. \end{align} Equation~\eqref{eq: linearized EOM of xi} gives rise to a compact equation of motion, $i\partial _{t}v_{\mathrm{ph}}=\mathcal{M}v_{\mathrm{ph}}$, where $v_{\mathrm{ph}}=\left( \mathrm{\delta }\xi ,\mathrm{\delta }\xi ^{\ast }\right) ^{T}$. The spectrum of collective modes is given by the eigenvalues of $\mathcal{M}$; linear-response observables also require knowledge of the eigenvectors of $\mathcal{M}$. We finally remark that for $\mathbf{K}_{0}=\mathbf{0}$, one can write $\mathrm{\delta }\xi _{pn,p^{\prime }n^{\prime }}=\delta _{nn^{\prime }}\,\mathrm{\delta }\xi _{pp^{\prime }}^{|n|}$, which further facilitates numerical evaluations. An example of the analysis of collective modes is discussed in Appendix~\ref{appendix:collective modes}. \section{\label{sec: static properties}Ground-state properties} \begin{figure}[t!] \includegraphics[width=1\linewidth]{Fig_1.pdf} \caption{Polaron-to-molecule phase transition within the non-Gaussian variational states. (a) Polaron energy-momentum relation for various values of $(k_F a_{\mathrm{2D}})^{-1}$: as this parameter is increased, we observe that the minimum of the dispersion shifts from $\mathrm{K}_0 = 0$ to $\mathrm{K}_0 = k_F$, indicating a first-order polaron-to-molecule transition. Here $E_F = k_F^2/(2m_{\mathrm{b}})$ is the Fermi energy. (b) This transition is predicted to occur at around $(k_F a_{\mathrm{2D}})^{-1} = 1.37$. (c) The quasiparticle residue $\mathcal{Z}$ for $\mathrm{K}_0 = 0$ remains finite as the transition point is crossed; the residue for $\mathrm{K}_0 = k_F$ is close to zero. Parameters used: $m_{\mathrm{imp}}=5m_{\mathrm{b}}$, $k_{\Lambda }=5k_{F}$, and $\delta_{p}=2\pi /L= k_{F}/8$. } \label{fig 1} \end{figure} In this section, we primarily investigate the polaron-to-molecule phase transition. We begin by exploring the full polaron energy-momentum relation, allowing for arbitrary total momentum $\mathbf{K}_0$. As such, the system is, in general, not rotationally symmetric, and numerical simulations are computationally expensive.
To facilitate the computations, we consider, for now, rather heavy impurities, such as $m_{\mathrm{imp}}=5m_{\mathrm{b}}$, allowing us to choose a rather small UV cutoff $k_{\Lambda}$ because of the relatively small binding energy $E_B$ -- this energy decreases with increasing mass ratio $m_{\mathrm{imp}}/m_{\mathrm{b}}$~\cite{parish_polaron-molecule_2011}. If one is interested solely in the case $\mathrm{K}_0 = 0$, the polaronic properties can be efficiently studied for arbitrary mass ratios using rotational symmetry, as we discuss below. Figure~\ref{fig 1}(a) shows the polaron energy-momentum relation for various interaction strengths, as encoded in the dimensionless parameter $\left( k_{F}a _{\mathrm{2D}}\right) ^{-1}$. We note that this dispersion $E_{\mathbf{K}_{0}}$ depends only on the magnitude $\mathrm{K}_0$ (it does not depend on the direction of ${\mathbf{K}_{0}}$). Notably, for sufficiently strong interactions, the energy of the state at $\mathrm{K}_0 = k_F$ becomes smaller than that at $\mathrm{K}_0 = 0$, indicating a change in the nature of the ground state -- this change occurs at around $\left( k_{F}a _{\mathrm{2D}}\right) ^{-1} = 1.37$, as shown in Fig.~\ref{fig 1}(b). To better understand this transition, we now consider the quasiparticle residue defined as: \begin{align} \mathcal{Z} = \left\vert \langle \mathrm{FS}|f_{\mathbf{K}_{0}}|\Psi _{\mathbf{K}_{0}}\rangle \right\vert ^{2}. \end{align} This expression can be understood as the overlap between the non-interacting many-body state $f_{\mathbf{K}_{0}}^{\dagger }|\mathrm{FS}\rangle$ and the true ground state $|\Psi _{\mathbf{K}_{0}}\rangle$ with the impurity-bath interaction switched on. Within the non-Gaussian states, the polaron residue is given by~\cite{Dolgirev2020}: \begin{align} \mathcal{Z}=\left\vert \langle \mathrm{FS}|\Psi _{\mathrm{GS}}\rangle \right\vert^{2} =\det\left(\mathrm{I}_{N}+2 \Gamma_\mathrm{FS} \Gamma-\Gamma-\Gamma_\mathrm{FS}\right). \end{align} Figure~\ref{fig 1}(c) shows the quasiparticle residues at $\mathrm{K}_0 = 0$ and $\mathrm{K}_0 = k_F$ across the transition: while the former smoothly decreases with $\left( k_{F}a _{\mathrm{2D}}\right) ^{-1}$ and remains finite at the transition point, the latter is nearly zero. We remark that these results agree with the studies in Refs.~\cite{cui2020, Cheng2021}. \begin{figure}[t!] \includegraphics[width=1\linewidth]{Fig_2.pdf} \caption{ Comparison to existing methods. (a) Energies of the polaronic and molecular states as a function of $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}$ for the case of equal masses, $m_{\mathrm{imp}}=m_{\mathrm{b}}$. The polaron-to-molecule transition point within the NGS approach is around $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}=2.2$. For comparison, we also show the known results from DMC~\protect\cite{DMC2D} and the CA with one and two particle-hole excitations~\protect\cite{Parish2013}. Here $E_{\mathrm{FS}}$ is the energy of the filled Fermi sea. Parameters used: $k_{\Lambda }=20k_{F}$, $\delta_{p}= k_{F}/ 40$, and $n_{\Lambda }=8$. (b) Schematic of the polaronic state $\mathbf{K}_{0} = (0,0)$: the impurity (red ball) has on average zero momentum and is coupled to particle-hole excitations (shown with arrows) of the Fermi sea (fermions are shown as blue balls).
(c) Schematic of the molecular state $\mathbf{K}_{0} = (k_F,0)$: the mobile impurity with momentum around $(k_F,0)$ binds to the fermionic atom with momentum $(-k_F,0)$; the resulting molecule on average has zero momentum and weakly interacts with the rest of the Fermi sea.} \label{fig 2} \end{figure} These findings suggest the following physical picture. For weak and moderate interactions, the ground state is polaronic: it corresponds to the solution with $\mathrm{K}_0 = 0$ and has a finite quasiparticle weight [Fig.~\ref{fig 2}(b)]. For stronger interactions, the system exhibits a first-order phase transition into a molecular state, associated with the solution with $\mathrm{K}_0 = k_F$ and a vanishing quasiparticle residue $\mathcal{Z} = 0$. In this regime, we find that for $\mathbf{K}_{0}=(k_{F},0)$, the fermion occupation at $(-k_{F},0)$ is essentially zero, indicating that this fermion has been removed from the Fermi sea to form a bound state with the impurity particle, so that the resulting molecule has approximately zero net momentum [Fig.~\ref{fig 2}(c)]. We finally remark that if one limited the analysis to the $\mathrm{K}_0 =0$ sector alone, one would find, instead of an abrupt transition, a smooth crossover with a gradual suppression of the quasiparticle weight. When investigating the polaron-to-molecule transition for lighter impurities, such as $m_{\mathrm{imp}}=m_{\mathrm{b}}$, the binding energy $E_{B}$ becomes large, requiring a larger UV cutoff $k_\Lambda$ and making computations too expensive. We now argue that rotational symmetry can be naturally used to overcome this difficulty. The analysis of the polaronic state is immediately simplified because this state corresponds to $\mathbf{K}_0 = (0,0)$, where the system is already rotationally invariant. For the molecular state, we have $\mathbf{K}_{0}=(k_{F},0)$ and, thus, rotational symmetry is broken. To restore this symmetry in our variational ansatz, we employ the following method: instead of working with $N_f$ fermions in the sector $\mathbf{K}_{0}=(k_{F},0)$, we add one extra fermion and work in the sector $\mathbf{K}_{0}=(0,0)$. In other words, to describe the molecular state, from now on we will use the following variational wave function: \begin{align} |\Psi _{\mathrm{NGS}}^{N_f+1}\rangle =U_{\mathrm{LLP}}f_{\mathbf{K}_{0} = 0}^{\dagger }|\Psi _{\mathrm{GS}}^{N_f+1}\rangle, \end{align} where $|\Psi _{\mathrm{GS}}^{N_f+1}\rangle$ is chosen to be a Gaussian state of $N_f+1$ fermions. This insight comes from our previous observation that the impurity tends to bind one of the Fermi-surface fermions -- see Fig.~\ref{fig 2}(c). Thus, the newly introduced fermion fills the hole in the disturbed Fermi sea and makes the total momentum of the enlarged system zero. One can alternatively view the simplified molecular state in the spirit of Yosida's ansatz~\cite{Yosida1966}, where one has a Fermi sea of $N_f$ fermions, and the impurity and the extra bath fermion form a bound state with zero net momentum. In this molecule, the extra fermion must reside outside the Fermi sea, since the states below $k_F$ are Pauli-blocked, and the impurity, carrying the opposite momentum, lies outside the Fermi sea as well. We emphasize that our variational state goes beyond this simple ansatz because we take into account particle-hole excitations of the Fermi sea arising from the nonzero impurity-bath coupling.
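Both the quasiparticle residue introduced above and the molecular residues defined below reduce to determinants of the form $\det(\mathrm{I}_N+2\Gamma_1\Gamma_2-\Gamma_1-\Gamma_2)$ built from covariance matrices of pure Gaussian states. A minimal self-contained sketch (Python; the random particle-hole rotation serves only as an illustration) is:
\begin{verbatim}
import numpy as np

def residue(G1, G2):
    """|<Psi_1|Psi_2>|^2 for pure fermionic Gaussian states with
    covariance matrices G1, G2 (G_ij = <c_i^dag c_j>, G^2 = G)."""
    N = G1.shape[0]
    return np.real(np.linalg.det(np.eye(N) + 2.0 * G1 @ G2 - G1 - G2))

N, N_f = 8, 3
G_FS = np.diag([1.0] * N_f + [0.0] * (N - N_f))   # filled Fermi sea
print(residue(G_FS, G_FS))                        # identical states -> 1.0

# Gaussian state Gamma = U* Gamma_FS U^T with U = exp(i xi), xi Hermitian
rng = np.random.default_rng(0)
xi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
xi = (xi + xi.conj().T) / 2.0
lam, V = np.linalg.eigh(xi)
U = V @ np.diag(np.exp(1j * lam)) @ V.conj().T
G = U.conj() @ G_FS @ U.T
print(residue(G_FS, G))                           # some value in [0, 1]
\end{verbatim}
The same routine, fed with $\Gamma_{B,M}$ and the optimized $\Gamma$, evaluates the molecular residues $\mathcal{Z}_{B,M}$ discussed next.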
\begin{figure}[t] \includegraphics[width=1\linewidth]{Fig_3.pdf} \caption{Various residues and their convergence upon varying the UV cutoff $k_\Lambda$ and the linear size of the system, encoded in $\delta_p$. Solid lines denote the extrapolation to the continuum limit $k_{\Lambda }\rightarrow \infty $ and $\delta_{p}\rightarrow 0$. Parameters are the same as in Fig.~\protect\ref{fig 2}.} \label{fig 3} \end{figure} Figure~\ref{fig 2}(a) shows the energies of the polaronic and molecular states as functions of $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}$ for $m_{\mathrm{imp}}=m_{\mathrm{b}}$. We find that our non-Gaussian variational approach quantitatively agrees with the known results from the CA~\cite{Parish2013} and DMC calculations~\cite{DMC2D}. Our method is particularly accurate at capturing the molecular branch, confirming the validity of the simplified molecular ansatz $|\Psi _{\mathrm{NGS}}^{N_{f}+1}\rangle $. The polaron-to-molecule transition is predicted to occur around $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}=2.2$. Across the transition point, the polaronic residue $\mathcal{Z}$ remains finite [inset of Fig.~\ref{fig 3}(a)], but the molecular one is essentially zero [Fig.~\ref{fig 3}(a)]. An accurate analysis of the convergence of our results with the UV cutoff $k_{\Lambda }$ and the parameter $\delta _{p}$, which encodes the linear size of the system, indicates that indeed, in the limit $k_{\Lambda }\rightarrow \infty $ and $\delta _{p}\rightarrow 0$, the molecular residue approaches zero for $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}\gtrsim 2.2$ -- see the solid line in Fig.~\ref{fig 3}(a). For $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}<2.2$, the ground state is polaronic, and, as such, the molecular state corresponds to some excited state. Since the size of this molecule becomes larger as $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}$ is decreased, an accurate computation of the residue $\mathcal{Z}$ for small $\left( k_{F}a_{\mathrm{2D}}\right) ^{-1}$ requires a smaller infrared cutoff $\delta _{p}$. Finally, we finish this section by introducing two more ``molecular residues'' that help to characterize the molecular state better. Since the quasiparticle residue is close to one in the polaronic phase and vanishes in the molecular phase, the new molecular residues should display the opposite behavior. Motivated by this, we introduce the first one as $\mathcal{Z}_{M}=|\langle \Psi_{M}|\Psi _{\mathrm{NGS}}^{N_f+1}\rangle |^{2}$, where $|\Psi_{M}\rangle =\sum_{\left\vert \mathbf{k}\right\vert >k_{F}}\varphi _{\mathbf{k}}^{M}c_{\mathbf{k}}^{\dagger }f_{-\mathbf{k}}^{\dagger }\left\vert \mathrm{FS}\right\rangle$ encodes the Yosida ansatz. Optimization of the variational parameters gives $\varphi _{\mathbf{k}}^{M}\propto -1/(k^{2}+a_\mathrm{2D}^{-2}-k_{F}^{2})$. We define the second residue as $\mathcal{Z}_{B}=|\langle \Psi_{B}|\Psi _{\mathrm{NGS}}^{N_f+1}\rangle |^{2}$, where $|\Psi _{B}\rangle =\sum_{\left\vert \mathbf{k}\right\vert >k_{F}}\varphi _{\mathbf{k}}^{B}c_{\mathbf{k}}^{\dagger }f_{-\mathbf{k}}^{\dagger }\left\vert \mathrm{FS}\right\rangle $ with the parameters $\varphi _{\mathbf{k}}^{B}$ given by: \begin{equation} \varphi _{\mathbf{k}}^{B}=\frac{1}{ L\sqrt{\mathcal{N}}}\frac{1}{- 1/(2 m_{\mathrm{r}} a_X^2) -\varepsilon _{\mathrm{imp},\mathbf{k}}-\varepsilon _{\mathrm{b},\mathbf{k}}}. \label{B-type molecular wavefunction} \end{equation} Here, $\mathcal{N}=m_{\mathrm{r}}^{2}a_X^{2}/[\pi (1+k_{F}^{2}a_X^{2})]$ is the normalization constant.
The residue $\mathcal{Z}_{B}$ is useful because it can be measured in experiments with ultracold atoms, as we elaborate in the next section. We also postpone the discussion of the newly introduced length $a_{X}$ to the next section; here we only assume that it is much smaller than the Fermi wavelength and emphasize that $a_{X}$ is different from the scattering length $a_{\mathrm{2D}}$. We observe that the states $|\Psi_{B,M}\rangle$ can be written as $|\Psi_{B,M}\rangle =U_{\mathrm{LLP}}f_\mathbf{0}^{\dagger}|0\rangle_{\mathrm{imp}} \otimes|\bar{\Psi}_{B,M}\rangle$, with the Gaussian states $|\bar{\Psi}_{B,M}\rangle$ given by: \begin{equation} |\bar{\Psi}_{B,M}\rangle= \sum_{\left\vert \mathbf{k}\right\vert >k_{F}}\varphi^{B,M} _{\mathbf{k}}c_{\mathbf{k}}^{\dagger }\left\vert \mathrm{FS}\right\rangle. \label{eq: psibar B} \end{equation} The molecular residues $\mathcal{Z}_{B,M}$ are then computed through the corresponding covariance matrices $\Gamma_{B,M}$ as: \begin{align} \mathcal{Z}_{B,M}&=\left\vert \langle \bar{\Psi}_{B,M}|\Psi _{\mathrm{GS} }\rangle \right\vert^{2} \nonumber \\ &=\det\left(\,\mathrm{I}_{N}+2 \Gamma_{B,M}\Gamma-\Gamma-\Gamma_{B,M}\right). \end{align} Figure~\ref{fig 3}(b) shows the dependence of the two residues on $(k_F a_{\mathrm{2D}})^{-1}$; both approach one in the molecular phase. Physically, deep inside the molecular phase, the impurity and one bath fermion form a tight bare bound state, which in turn creates a scattering potential for the rest of the bath particles. Close to the phase transition, the bath fermions start to strongly affect the structure of the bound state, resulting, for instance, in a small overlap $\vert \langle \Psi_{B}|\Psi _{\mathrm{NGS}}^{N_f+1}\rangle \vert $. For the residue $\mathcal{Z}_M$, we find that even though the parameters $\varphi _{\mathbf{k}}^{M}$ are optimized, $\mathcal{Z}_{M}\sim 0.8$ remains smaller than one in the molecular phase. We attribute this deviation to the fact that the Yosida ansatz, in contrast to the non-Gaussian wave function, does not take into account the backreaction of the Fermi sea on the formation of the molecular bound state. \section{\label{sec: dynamical properties}Dynamical properties} \begin{figure}[t!] \includegraphics[width=1\linewidth]{Fig_schematic_v3.pdf} \caption{ Possible cold-atom setups. In the platform of panel (a), we assume that the impurity atom (red ball) has two hyperfine states, $\ket{f,\uparrow}$ and $\ket{f,\downarrow}$. Initially, the impurity is prepared in $\ket{f,\downarrow}$, which does not interact with the fermionic bath (represented by blue balls). By applying a weak RF-pulse, one drives a transition into $\ket{f,\uparrow}$, which interacts strongly with the Fermi sea. This protocol gives access to the polaronic spectral properties. The setup of panel (b) is different. Here we consider the initial state to be a Fermi sea of $c$-fermions and a BEC of $X$-molecules that are composed of atoms $f$ and $b$ (grey balls). Now, we assume that $\ket{c}$ and $\ket{b}$ are two hyperfine states, which are then coupled by a weak RF-pulse. Such an RF-pulse breaks a ``grey-red'' $X$-molecule and leaves behind one strongly interacting ``red'' $f$-fermion and an additional ``blue'' majority $c$-fermion. These ``red'' and additional ``blue'' fermions are created as a pair, with a wave function set by the wave function of the original $X$-molecule. As such, this protocol gives access to the molecular spectral properties.
} \label{schematic} \end{figure} Having established the reliability of the non-Gaussian approach for the ground-state properties of the 2D Fermi polaron, we move on to discuss dynamics. We remark that an accurate analysis of out-of-equilibrium properties represents one of the main advantages of our method compared to, for instance, DMC. Here, we first discuss possible cold-atom experiments that enable one to measure polaronic and molecular spectral functions, in particular, to probe the residues $\mathcal{Z}$ and $\mathcal{Z}_B$. We then analyze these polaronic and molecular spectral properties separately in the following subsections. \subsection{Cold-atom platforms} We begin by discussing the conventional experimental protocol for measuring polaronic spectral properties [Fig.~\ref{schematic}(a)]. We assume that the impurity atom has two hyperfine states, one of which, $\ket{f,\downarrow}$, is not coupled to the $c$-fermions, while the other, $\ket{f,\uparrow}$, interacts strongly with the bath. The system is initially prepared in $\ket{f,\downarrow}\otimes\ket{\rm FS}$, which is then driven into $|\Psi _{0}\rangle = \ket{f,\uparrow}\otimes\ket{\rm FS}$ by a weak RF-pulse. Then, Ramsey interferometry enables one to probe the dynamical overlap function $S(t)=e^{i E_\mathrm{FS} t}\langle \Psi _{0}|\exp (-iHt)|\Psi _{0}\rangle$, where $E_\mathrm{FS}$ is the total energy of the Fermi sea. The impurity spectral function $\mathcal{A}(\omega)$, also accessible with RF-spectroscopy, is given by: \begin{equation} \mathcal{A}(\omega)=\frac{1}{\pi}\mathrm{Re}\int_{0}^{\infty} dt\, e^{i\omega t} \, S(t). \end{equation} We remark that while we discuss here the setup probing correlations at zero total momentum, $\mathbf{K}_0 = \mathbf{0}$, it can be extended to explore $\mathbf{K}_0 \neq \mathbf{0}$ (see Ref.~\cite{Dolgirev2020} for a related discussion). The protocol to measure molecular properties is different [Fig.~\ref{schematic}(b)]. We now choose the initial state to be a Fermi sea of $c$-atoms and a BEC of $X$-molecules, composed of $\ket{f}$ and $\ket{b}$ atoms. In contrast to the previous setup, here $\ket{b}$ and $\ket{c}$ are assumed to be two hyperfine states. As we demonstrate in Appendix~\ref{Appendix: Molecular RF spectroscopy} by performing an adiabatic elimination of the $b$-fermions, a weak RF-pulse is then described via: \begin{equation} H_{\mathrm{RF}}=\frac{\Omega_{\mathrm{RF}} }{L}\sum_{|\mathbf{k}| \lesssim a_X^{-1}}\frac{e^{-i\omega t}}{E_{X}-\varepsilon _{\mathrm{imp},\mathbf{k}}-\varepsilon _{\mathrm{b},\mathbf{k}}}c_{\mathbf{k}}^{\dagger }f_{-\mathbf{k}}^{\dagger }+\mathrm{H.c.} \end{equation} Here $E_X = -1/(2m_{\mathrm{r}}a_X^{2})$ is the binding energy of an $X$-molecule; $a_X$ is the corresponding scattering length, assumed to be much smaller than the Fermi wavelength, $a_X k_F \ll 1$. The effective coupling $\Omega_{\mathrm{RF}}$ is proportional to $\sqrt{N_{X}}$ and to the intensity of the pulse, with $N_{X}$ being the total number of $X$-molecules. One can view such an RF-pulse as substituting a $b$-atom in a tightly-bound $X$-molecule with a $c$-atom. The corresponding dynamical overlap is given by: $S_{B}(t)=e^{i (E_\mathrm{FS}+E_F) t}\langle \Psi _{B}|\exp (-iHt)|\Psi _{B}\rangle$, where the state $|\Psi _{B}\rangle $ has been defined in the previous section, cf. Eq.~(\ref{B-type molecular wavefunction}).
The molecular spectral function $\mathcal{A}_B(\omega)$ is then defined as: \begin{align} \mathcal{A}_B(\omega) =\frac{1}{\pi}\text{Re}\int_{0}^{\infty }dt\, e^{i\omega t} \, S_B(t). \end{align} Having introduced all the relevant dynamical quantities, we turn to explore them in the next subsections. \begin{figure}[t!] \includegraphics[width=1\linewidth]{Fig_5.pdf} \caption{ Polaronic spectral properties. Left and right panels correspond to the cases where the ground state is polaronic, $(k_F a_{\mathrm{2D}})^{-1} = 1$, and molecular, $(k_F a_{\mathrm{2D}})^{-1} = 4$, respectively. Panels (c) and (f) display the evolution of the real-space density $\delta \rho (r,t)=\rho (r,t)-\rho (r,0)$ (in units of $k_F^2$). Parameters used: $m_{\mathrm{imp}}=m_{\mathrm{b}}$, $k_{\Lambda}=20k_F$, $\delta_p= k_F/40$, and $n_{\Lambda }=8$.} \label{fig 5} \end{figure} \subsection{Polaronic spectral properties} In the co-moving LLP frame, the dynamical overlap reads: $S(t)=\langle \mathrm{FS}|\bar{\Psi}_{\pmb{0}}(t)\rangle $, where $|\bar{\Psi}_{\pmb{0}}(t)\rangle =\exp (-iH_{\mathrm{LLP}}t)|\mathrm{FS}\rangle$ -- the latter state is obtained via the real-time equations of motion detailed in Sec.~\ref{sec: formalism}. Within the non-Gaussian states, $S(t)$ is computed through $\theta$ and $U$ as: \begin{equation} S(t)=e^{-i(\theta(t)-E_\mathrm{FS}t)} \det\left\{\mathrm{I} _N-[\mathrm{I}_N-U(t)] \Gamma_\mathrm{FS}^{\mathrm{T}}\right\}. \end{equation} We begin by considering $(k_{F}a_{\mathrm{2D}})^{-1}=1$, in which case the ground state is polaronic. Figure~\ref{fig 5}(a) shows the dynamics of Re$\,S(t)$ and $|S(t)|$, which display damped oscillatory behavior. At long times, $|S(t)|$ approaches a finite value, which is nothing but the quasiparticle residue $\mathcal{Z}$. The spectral function $\mathcal{A}(\omega)$ is shown in Fig.~\ref{fig 5}(b). We find that the ground-state energy, as determined by the position of the sharp peak in $\mathcal{A}(\omega)$ (attractive polaron), and the corresponding oscillator strength are in agreement with the energy and quasiparticle-residue calculations in Figs.~\ref{fig 2} and~\ref{fig 3}. The additional hump in $\mathcal{A}(\omega)$ for $\omega > 0$ (repulsive polaron) occurs because the initial state has a finite overlap with the continuum of particle-hole excitations -- the position and width of this hump determine the frequency and decay rate of $|S(t)|$ at initial times. Figure~\ref{fig 5}(c) shows the dynamics of the real-space fermionic density. Upon the abrupt creation of the impurity at $t = 0$, the density near the impurity initially exhibits pronounced oscillations but then reaches a steady state at longer times. We now consider the case $(k_{F}a_{\mathrm{2D}})^{-1}=4$, for which the ground state is molecular. To properly account for the finite-momentum nature of the molecular state, here we follow the previous section and consider $N_f+1$ fermions. Figure~\ref{fig 5}(e) shows the spectral function $\mathcal{A}(\omega)$, which displays two sharp peaks, at $\omega \sim 1.57 E_F$ and $\omega \sim -31.1E_F$, corresponding to the energies of the repulsive and attractive polarons, respectively, in agreement with the results of Ref.~\cite{schmidt2012}. There is an additional tiny peak at $\omega \sim -32.5E_F$, which emerges due to finite-size effects and vanishes in the thermodynamic limit.
As we discuss in the following subsection, this tiny peak corresponds to the energy of the molecular state. The time-dependent overlap function $S(t)$ exhibits long-lived oscillations, shown in Fig.~\ref{fig 5}(d), which can be understood as arising from quantum beating between the attractive and repulsive polarons. We note that the initial state $|\Psi _{0}(t=0)\rangle =|\mathrm{FS}\rangle $ has finite overlaps with both of these polaron branches. As $|\Psi _{0}(t)\rangle $ propagates in time, the parts of the wave function corresponding to the two polarons evolve with different energies; since the system never seems to relax locally to the molecular ground state [Fig.~\ref{fig 5}(d)], this results in long-lived oscillations of the fermionic density near the impurity, as illustrated in Fig.~\ref{fig 5}(f). \begin{figure}[t!] \includegraphics[width=1\linewidth]{Fig_6.pdf} \caption{ The same as in Fig.~\protect\ref{fig 5}, except that here we show the molecular spectral properties. Here we fix $(k_F a_X)^{-1} = 4$.} \label{fig 6} \end{figure} \subsection{Molecular spectral properties} Here we carry out an analysis similar to that of the preceding subsection, but now consider the experimental protocol of Fig.~\ref{schematic}(b), which enables one to probe the molecular spectral properties. In the numerical simulations, we consider the initial state $|\Psi_{B}(t=0)\rangle =\sum_{\left\vert \mathbf{k}\right\vert >k_{F}}\varphi _{\mathbf{k}}^{B}c_{\mathbf{k}}^{\dagger }f_{-\mathbf{k}}^{\dagger }\left\vert \mathrm{FS}\right\rangle$ in the lab frame, which describes a deeply bound molecule created on top of the Fermi sea. The corresponding dynamical overlap function in the co-moving frame becomes: $S_{B}(t)=\langle \bar{\Psi}_{B}(t=0)|\bar{\Psi}_{B}(t)\rangle $, where $|\bar{\Psi}_{B}(t)\rangle =\exp (-iH_{\mathrm{LLP}}t)|\bar{\Psi}_{B}(t=0)\rangle $, and $|\bar{\Psi}_{B}(t=0)\rangle =\sum_{\left\vert \mathbf{k}\right\vert >k_{F}}\varphi _{\mathbf{k}}^{B}c_{\mathbf{k}}^{\dagger }\left\vert \mathrm{FS}\right\rangle$ is the initial molecular state in the LLP frame, i.e., Eq.~\eqref{eq: psibar B}. Hereafter, for concreteness, we focus on the case $(k_{F} a_{X})^{-1}=4$. The dynamical overlap function $S_{B}(t)$ can be calculated analytically: \begin{equation} S_{B}(t)=e^{-i( \theta (t)-(E_{\mathrm{FS}}+E_{F})t) }\det \left\{ \mathrm{I}_{N}-[\mathrm{I}_{N}-U(t)]\Gamma _{B}^{\mathrm{T}}\right\} . \end{equation} Implicit in the discussion below is that the bath is composed of $N_{f}+1$ fermions. We first discuss the polaronic regime, $(k_{F}a_{\mathrm{2D}})^{-1}=1$ -- the results of our simulations are summarized in Fig.~\ref{fig 6} (left panels). We find that the frequency of the sharp peak in the molecular spectral function $\mathcal{A}_{B}(\omega )$ [Fig.~\ref{fig 6}(b)] is $\omega \sim -2.07E_{F}$, in good agreement with the ground-state energy $E_{\mathrm{GS}}\sim -2.09E_{F}$ of $N_{f}+1$ fermions in the total momentum sector $\mathbf{K}_{0}=\mathbf{0}$. Our results suggest that the initial tight bound state $|\bar{\Psi}_{B}(t=0)\rangle $ has a finite overlap $\sim 0.3$ with the ground state -- this is indicated by the steady-state value of $|S_{B}(t)|$ [Fig.~\ref{fig 6}(a)]. The long tail in $\mathcal{A}_{B}(\omega )$ also implies that the state $|\bar{\Psi}_{B}(t=0)\rangle $ has a substantial spectral weight associated with the continuum of particle-hole excitations of the Fermi sea.
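Numerically, the step from a sampled overlap to a spectral function is a damped half-line Fourier transform. A minimal sketch (Python; the two-branch toy overlap below merely mimics the beating between the attractive and repulsive polarons and is not output of our simulations):
\begin{verbatim}
import numpy as np

# A(omega) = (1/pi) Re int_0^inf dt e^{i omega t} S(t), with a small
# broadening eta regularizing the upper limit. Illustrative numbers.
t = np.linspace(0.0, 200.0, 4001)        # time grid, units of 1/E_F
Z_att, E_att = 0.6, -1.2                 # attractive-branch weight/energy
Z_rep, E_rep, gamma = 0.3, 0.8, 0.05     # repulsive branch, finite width
S = (Z_att * np.exp(-1j * E_att * t)
     + Z_rep * np.exp((-1j * E_rep - gamma) * t))

eta = 0.02
omega = np.linspace(-3.0, 3.0, 601)
kernel = np.exp(1j * np.outer(omega, t) - eta * t)
A = np.trapz(np.real(kernel * S), t, axis=1) / np.pi

print(omega[np.argmax(A)])   # tallest peak sits near E_att = -1.2
\end{verbatim}
Two sharp features then appear at the branch energies, with weights set by the respective residues -- precisely the structure read off from Figs.~\ref{fig 5} and~\ref{fig 6}.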
Deep in the molecular phase, $(k_{F}a_{\mathrm{2D}})^{-1}=4$ [right panels in Fig.~\ref{fig 6}], the initial two-body bound state on top of the undisturbed Fermi sea quickly relaxes to the true molecular ground state. The sharp peak in the spectral function $\mathcal{A}_{B}(\omega )$ [Fig.~\ref{fig 6}(e)] is at $\omega \sim -32.5E_{F}$, which is exactly the molecular ground-state energy. At long times, $|S_{B}(t)|\sim 0.8$ [Fig.~\ref{fig 6}(d)] -- this value agrees with the analysis of the molecular residues in Fig.~\ref{fig 3}(b). Finally, in both regimes, we find that the density profiles [Figs.~\ref{fig 6}(c) and~\ref{fig 6}(f)] show rapid relaxational dynamics. \section{\label{sec: summary}Summary and outlook} In this paper, we analyzed both the ground-state and the dynamical properties of Fermi polarons in two spatial dimensions using a new family of non-Gaussian variational wave functions. We showed that this class of states captures the polaron-to-molecule transition that emerges as one increases the attractive interaction strength. The energies of both the polaronic and molecular states, as well as the transition point, are in good agreement with the known Monte Carlo simulations. Our theory, in contrast to conventional numerical methods, enables efficient computation of the polaronic spectral functions, accessible with RF spectroscopy. In addition to the commonly discussed quasiparticle spectral function and residue, we introduced the molecular spectral function and residue, which help to better characterize the nature of the molecular state. We discussed how these molecular properties could be measured with RF-like experiments, where we proposed the initial state to contain a BEC of tightly bound molecules. While the analysis in our paper focused on systems of ultracold atoms, we expect that our results will be relevant for exciton-electron mixtures in transition-metal dichalcogenide (TMD) materials. In particular, we anticipate that the proposed experimental protocol for the molecular spectral properties can be realized in bilayer TMDs. Indeed, interlayer excitons are relatively long-lived and can be used to achieve BEC states. Terahertz pulses can then be used to convert these interlayer excitons into intralayer ones, demonstrate the existence of Feshbach resonances, and probe the molecular spectral function~\cite{kuhlenkamp2022,tang2021tuning}. \begin{acknowledgments} We thank M. Zvonarev, A. Salvador, A. Imamoglu, A. M\"{u}ller, K. Seetharam, I. Esterlis, C. Robens, M. Zwierlein, and R. Schmidt for stimulating discussions, M. M. Parish and J. Levinsen for sharing the data of Ref.~\cite{Parish2013}, and J. Ryckebusch and K. V. Houcke for sharing the data of Ref.~\cite{DMC2D}. T. S. is supported by the National Key Research and Development Program of China (Grant No. 2017YFA0718304) and by the NSFC (Grants No. 11974363, No. 12135018, and No. 12047503). P. D. and E. D. acknowledge support from ARO grant number W911NF-20-1-0163 and the Harvard/MIT CUA. \end{acknowledgments}
{ "timestamp": "2022-09-23T02:15:16", "yymm": "2209", "arxiv_id": "2209.10998", "language": "en", "url": "https://arxiv.org/abs/2209.10998" }
\section{Introduction} Dynamics of geophysical mass flows including landslides, avalanches and debris flows can be dominantly affected by the complex mechanical processes of erosion, entrainment and deposition (Huggel et al., 2005; Hungr et al., 2005; Santi et al., 2008; de Haas et al., 2016, 2020). Here, the terms landslides, avalanches, debris flows and mass flows are used as synonyms. As these multi-phase flows cascade down mountain slopes, sediment and fluid are entrained from the bed. In turn, these events can disproportionately increase their volumes and destructive potentials by several orders of magnitude and become exceptionally mobile (Evans et al., 2009; Theule et al., 2015; Somos-Valenzuela et al., 2016; Mergili et al., 2018; Liu et al., 2019). Erosion, entrainment and the associated flow bulking in landslide-prone areas and debris-flow torrents are a major concern for civil and environmental engineers and land-use planners, and require cost-intensive mitigation of the associated hazard. Mobility is among the most important features of an erosive landslide as it directly measures the threat posed by the landslide. Landslide mobility is associated with the erosion-induced excessive volume and material properties, and is characterized by an enormous impact force, exceptional travel distance and inundation area. Thus, the erosion-induced excessive volume is a key control on the flow dynamics, including the flow velocity, depth, travel distance and impact area, in turn affecting the number of fatalities (Huggel et al., 2005; Evans et al., 2009; Le and Pitman, 2009; Dowling and Santi, 2014). The spatially varying erosion rates and entrainment processes depend on the geomorphological, lithological and mechanical conditions (Berger et al., 2011; Iverson et al., 2011; Reid et al., 2011; McCoy et al., 2012; Dietrich and Krautblatter, 2019). A proper understanding of landslide erosion, entrainment and the resulting increase in mass is a basic requirement for an appropriate modelling of landslide motion and its impact, because the associated risk is directly related to the landslide momentum. However, as the mechanical controls of erosion and entrainment are not yet well understood, despite large efforts in recent years, the evolving volume, mobility and impact forces of landslides and debris flows are often quite improperly estimated (Dietrich and Krautblatter, 2019). \\[3mm] Recently, there has been a rapid increase in studies of erosion and entrainment at both laboratory (Egashira et al., 2001; Fraccarollo and Capart, 2002; Iverson et al., 2011; de Haas et al., 2016) and field scales (Cuomo et al., 2014, 2016; de Haas et al., 2020). Empirical (Takahashi and Kuang, 1986; Rickenmann et al., 2003; McDougall and Hungr, 2005; Chen et al., 2006; Le and Pitman, 2009) and mechanical (Fraccarollo and Capart, 2002; Iverson, 2012) erosion models have been developed. Although by nature mass transports are multi-phase phenomena (Pudasaini and Mergili, 2019), most erosion models only consider effectively single-phase, or at most quasi two-phase, flows (Fraccarollo and Capart, 2002; McDougall and Hungr, 2005; Armanini et al., 2009; Le and Pitman, 2009; Iverson, 2012). Different numerical models incorporating erosion have been proposed (Le and Pitman, 2009; Iverson and Ouyang, 2015; McDougall and Hungr, 2005; Christen et al., 2010; Frank et al., 2015).
However, the erosion rates presented and utilized therein are either not based on physical principles, or they are physically incomplete, because they do not consider all the complex mechanical interactions between the materials in the flow and the erodible bed. Moreover, these models are inconsistent, because they do not include the erosion-induced net momentum productions, and not all interactions across the erosion-interface are considered in a physically meaningful way (de Haas et al., 2020; Pudasaini and Fischer, 2020a). These physical shortcomings demand a comprehensive and complete description of the multi-phase erosion-entrainment process. This will become clearer in the model development section. \\[3mm] Erosion and deposition play an important role in mass transport and the evolution of the landscape (Huggel et al., 2005; Evans et al., 2009; Dietrich and Krautblatter, 2019; Mergili et al., 2020). However, our understanding of these processes falls far short of what is needed to apply them to real events. Because of the complexity of the terrain and the sporadic nature of landslide events, the time and cost demands for field measurements are high. Moreover, these measurements are only discrete. This limits the scope and utility (de Haas et al., 2020) of the available field data (Berger et al., 2011; Sch\"urch et al., 2011; McCoy et al., 2012; Theule et al., 2015; Dietrich and Krautblatter, 2019). Physics-based, advanced and comprehensive models (Pudasaini and Fischer, 2020a) and sophisticated numerical simulations (Pudasaini and Mergili, 2019) can overcome these limitations, aiming to facilitate a more complete understanding by investigating much wider aspects of the flow parameters, erosion, mobility and deposition (Pudasaini and Krautblatter, 2021; Mergili et al., 2020). \\[3mm] Pudasaini and Fischer (2020a) proposed a process-based erosion-deposition model for two-phase mass flows consisting of viscous fluid and solid particles, which, to a large extent, is capable of describing the complex erosive phenomena commonly observed in landslides, avalanches and debris flows. Their mechanical erosion-rate models proved that the effectively reduced friction force in erosion is equivalent to the momentum production. This shows that erosion can enhance the mass-flow mobility. The importance of the Pudasaini and Fischer (2020a) mechanical erosion model for two-phase mass flows is widely realized in simulations of real catastrophic multi-phase events (Li et al., 2019; Qiao et al., 2019; Shen et al., 2019; Liu and He, 2020; Mergili et al., 2020; Liu et al., 2021). These modelling approaches have clearly indicated the need for a mechanical erosion model to appropriately simulate the actual flow dynamics, run-out and deposition morphology based on the mechanical erosion rates and the erosion-induced momentum productions. \\[3mm] The Pudasaini and Fischer (2020a) two-phase model built a foundation for erosive mass flows by mechanically including the momentum production in the momentum balance equation. However, they did not include the inertia of the entrained mass, and they could not present a clear mechanical condition for when and how the mobility of an erosive landslide will be enhanced or reduced, and how to quantify it. Their model therefore remained incomplete.
Extending the Pudasaini and Fischer (2020a) model, Pudasaini and Krautblatter (2021) addressed the important issue of erosion-induced landslide mobility by explicitly deriving mechanical conditions for the mobility of erosive landslides in terms of the erosion velocity of the mobilized bed material. They mechanically explained how and when erosive landslides enhance or reduce their mobility. This was made possible by physically correctly considering the inertia and the momentum production of the erosive landslide. This model distinctly quantifies the mobility of an erosive landslide. Pudasaini and Krautblatter (2021) revealed that the erosion velocity determines the energy budget of an erosive landslide and provides an accurate description of mobility. They identified a novel mechanism of landslide propulsion providing the erosion-thrust to the landslide. They constructed the mobility scaling that precisely quantifies the contribution of erosion to landslide mobility. They also derived a set of dynamical equations in which the momentum balance correctly includes the erosion-induced change in inertia and the momentum production, together called the net momentum production. Their model constitutes a foundation for the physically meaningful simulation of landslide motion with erosion. However, their model is only for effectively single-phase, solid-type materials in the flow and in the bed. Also, the Pudasaini and Fischer (2020a) two-phase erosion model remained incomplete, because it does not contain all the natural interactions between phases across the erosion-interface, and it requires fundamental enhancements to make it physically fully consistent and complete, following the nature of erosive mass flows. This is what we achieve here with an innovative modelling approach. \\[3mm] Based on the Pudasaini and Fischer (2020a) and Pudasaini and Krautblatter (2021) models, here we develop a unified, consistent and complete mechanical erosion model for multi-phase mass flows. This is the first model to do so. There are three major aspects of this contribution in relation to erosive multi-phase mass flows. ($i$) To physically correctly establish the jumps in shear stresses and momentum fluxes across the erosion-interface between the landslide and the bed substrate, and to construct unified, comprehensive and consistent mechanical erosion rates for the solid and fluid phases. We propose multi-phase, mechanically-described interactive shear structures containing all the interactions between the different phases on either side of the erosion-interface between the flowing landslide and the erodible bed substrate. The sum of the solid and fluid erosion rates will be the total basal erosion rate. The new erosion-rate models are broad, well defined and well constrained. ($ii$) To construct extensive and complete erosion-induced net momentum productions for both the solid and fluid phases. This is based on the physically correctly described, novel, complex erosion velocities of the particles and fluid mobilized from the basal substrate, and on the unified erosion rates. ($iii$) To develop a realistic and comprehensive multi-phase mechanical erosion model that incorporates the novel, unified mechanical erosion rates, the extended erosion velocities and the advanced net momentum productions into the mass and momentum balance equations.
The new approach provides a complete description of the multi-phase erosive landslide by considering all the aspects associated with the erosion-induced momentum productions and by correctly handling the inertia while incorporating the net momentum productions. Moreover, the newly developed stress correction factor, erosive-shear-velocity, super-erosion-drift and erosion-matrix further highlight the importance of our modelling approach for complex erosive multi-phase mass flows. \section{Model development for a unified multi-phase mechanical erosion rate} The mechanical principle of erosion is based on the jump in shear stresses and the jump in momentum fluxes across the erosion interface between the landslide and the bed materials (Fraccarollo and Capart, 2002; Pudasaini and Fischer, 2020a). So, the major task here is to physically correctly construct these two jump quantities and relate them to a unified mechanical erosion rate for a multi-phase mass flow. First, however, we justify the need for the new modelling philosophy. Then, we proceed with constructing the unified erosion-rate model for multi-phase mass flows. \subsection{Necessity of unified and consistent multi-phase mechanical erosion rates\\ and net momentum productions} Pudasaini and Fischer (2020a) established a foundation for a two-phase mechanical erosion model for mass flows. However, their model still requires fundamental advancement to truly solve problems of erosive mass transport. A novel, unified multi-phase mechanical erosion model is required for several reasons: The sum of the solid and fluid erosion rates must equal the total erosion rate of the erodible bed substrate. All the interactions between the solids and fluids in the landslide and the bed substrate must be considered. All the shear resistances from the bed against all the applied shear stresses from the landslide must be mechanically explained, consistent and appropriate. We must completely model the essentially complex erosion velocities (the velocities of the eroded particles and fluid from the basal substrate) and produce the correct net momentum productions. We must finally construct a comprehensive and unified mechanical model for multi-phase mass flows. These points are explained below. To simplify the situation, consider two-phase materials as an acceptable representation of multi-phase flows (Pudasaini, 2012) in both the landslide and the bed, consisting of solid particles and viscous fluid. The derived model can, however, be extended to multi-phase erosive flows. \\[3mm] {\bf I. Consistency of the total basal erosion rate:} Out of the total eroded material from the bed, the solid and fluid fractions must be consistently incorporated into the solid and fluid components of the moving material. However, this is a challenging problem that cannot be fixed with the existing two-phase mechanical erosion model (Pudasaini and Fischer, 2020a). As the solid and fluid erosion rates in Pudasaini and Fischer (2020a), the only existing two-phase erosion model, are developed independently, they do not include the basal solid and fluid volume fractions as multipliers. So, in technical applications, the values of the solid and fluid erosion rates therein must be adjusted by choosing several parameters such that the solid and fluid fractions of the eroded material are added in a desired way to the solid and fluid components of the moving mass.
Due to this constraint, in real flow situations, the sum of these erosion rates does not realistically correspond to the eroded material from the bed. Moreover, as the fluid erosion rate in the Pudasaini and Fischer (2020a) model is a function of the fluid velocity, for fast flows, the fluid erosion rate can be unrealistically high, and in turn, the solid erosion rate can be unrealistically low. However, often, the selected model parameters cannot adjust the erosion rates to correspond to the material composition in the erodible bed. So, natural erosion rates that are in line with the events cannot be obtained from the independent solid and fluid erosion rates developed by Pudasaini and Fischer (2020a). This poses a great problem in the consistent and appropriate selection of the physical parameters appearing in their erosion rate models. This demands a unified erosion rate model such that the sum of the derived solid and fluid erosion rates automatically satisfies the required natural criterion of the total erosion rate without any need for adjustment. We will achieve this here by first developing the total mechanical erosion rate for the mixture, which can be uniquely and legitimately split into the solid and fluid erosion rates, which inherently contain the respective solid and fluid volume fractions of the bed material. Then, the total erosion rate is consistently and exactly obtained by summing up the solid and the fluid erosion rates, removing the great hurdle in the Pudasaini and Fischer (2020a) erosion rates. This is exactly what is needed in technical applications. \\[3mm] {\bf II. All interactions between solids and fluids in the landslide and bed substrate:} It is intuitively clear that there are four interactions between the solid and fluid phases in the landslide and the bed across the erosion-interface. These are: the solid-solid, solid-fluid, fluid-solid and fluid-fluid interactions. However, the Pudasaini and Fischer (2020a) two-phase erosion model only considers the direct phase-phase interactions, i.e., the solid-solid and fluid-fluid interactions between the landslide and the bed, but ignores the cross-phase interactions, i.e., the solid-fluid and fluid-solid interactions. These interactions, though quite natural, have not yet been recognized. This severely limits the applicability of existing models in simulating real multi-phase erosive events. We overcome this problem here by considering all phase-phase and cross-phase interactions. \\[3mm] {\bf III. Mechanically explained bed shear resistances against applied shear stresses from the landslide:} As there are four interactions between the solid and fluid phases, there must be four basal shear resistances against the four applied shear stresses from the landslide. As only solid-solid and fluid-fluid interactions are considered by Pudasaini and Fischer (2020a), there are two main shortcomings in their erosion rate modelling. First, the solid-solid shear stresses were modelled by applying the often used Coulomb-type frictional law. However, for the fluid-fluid interactions, the classical Chezy-type friction law was applied, which, as we will see later, is not appropriate. We eliminate this problem here by developing novel, mechanically appropriate fluid-fluid shear stress models. Second, the true solid-fluid and fluid-solid interactions were ignored by Pudasaini and Fischer (2020a). Nevertheless, these interactions are substantial and appear to be more complex than our present state of knowledge suggests.
So, based on the Chezy-type friction law, here, we present a largely novel mechanical shear stress structure for the solid-fluid interaction. \\[3mm] {\bf IV. Complex erosion velocities and correct multi-phase net momentum productions:} One of the most astonishing facts emerges here while dealing with the erosion velocities of the bed materials mobilized by the flow. As there are four phase-phase and cross-phase interactions between the solid and fluid materials in the landslide and the bed, the erosion velocities for both the solid and fluid take extensive and complex forms. Consider the erosion velocity of a solid particle at the bed. This solid particle is pushed (sheared) and then mobilized by both the solid and fluid in the flow. However, Pudasaini and Fischer (2020a) considered the mobilization of the basal solid particle only by the solid from the flow, but ignored the other component of mobilization of this solid particle by the fluid in the landslide (mixture). The same is true for the erosion velocity of the fluid molecule at the bed. With these novel realizations, we construct mechanically complex, comprehensive solid and fluid erosion velocities by considering all the mobilization components as applied by both the solid and fluid from the flow to the solid and fluid in the erodible bed (mixture). These have a major impact on correctly describing the net momentum productions, because the net momentum productions are functions of the erosion velocities. This is very important, because, as we will see later, the net momentum production entirely controls the dynamics, mobility, impact energy and the deposition morphology of the mass transport. Moreover, the Pudasaini and Krautblatter (2021) erosion-induced landslide mobility model is only for effectively single-phase flows and only considers the single-phase net momentum production. However, as real events are of multi-phase nature, we must consistently extend the single-phase net momentum production to multi-phase net momentum productions, which turn out to be quite elaborate. \\[3mm] {\bf V. Comprehensive multi-phase erosion model:} The limitations of the existing two-phase erosion model as exposed above in {\bf I} - {\bf IV} demand a consistent and complete multi-phase mechanical erosion model. This is achieved here by constructing novel, unified mechanical erosion rate and momentum production models and embedding them into the dynamical model equations (mass and momentum balances). This paves the way for the legitimate use of the developed erosion models in applications, a monumental step forward in complex multi-phase mass flow simulations. \subsection{Definition of variables and parameters} For simplicity, first we consider a two-phase landslide (flow material) and a bed morphology consisting of viscous fluids and solid particles of different physical, mechanical and geometrical properties. Later we discuss how the derived models can be extended to a multi-phase debris mixture and erodible bed substrate. The two-phase mixtures with solid and fluid phases are indicated by the subscripts $_s$ and $_f$. Let the physical and mechanical parameters and flow dynamical variables in the landslide mixture be denoted by the superscript $^m$ and those of the erodible material in the bed by the superscript $^b$, respectively. Let ${\bf u} = (u,v)$ be the flow velocity, where $u$ and $v$ are the downslope ($x$) and cross slope ($y$) components.
Furthermore, let $\alpha, \rho, u, \mu$ and $\tau$ denote the volume fraction, density, velocity, friction coefficient and the shear stress. Then, the mixture densities, velocities and the shear stresses on either side of the erosion interface are given by (Pudasaini and Fischer, 2020b): \begin{equation} \rho^m = \alpha_s^m \rho_s^m + \alpha_f^m \rho_f^m,\,\,\,\,\, \rho^m u^m = \alpha_s^m \rho_s^m u_s^m + \alpha_f^m \rho_f^m u_f^m,\,\,\,\,\, \tau^m = \tau_s^m + \tau_f^m, \label{Eqn_1} \end{equation} \begin{equation} \hspace{-1.8cm} \rho^b = \alpha_s^b \rho_s^b + \alpha_f^b \rho_f^b,\,\,\,\,\, \rho^b u^b = \alpha_s^b \rho_s^b u_s^b + \alpha_f^b \rho_f^b u_f^b,\,\,\,\,\, \tau^b = \tau_s^b + \tau_f^b, \label{Eqn_2} \end{equation} where the hold-ups $\alpha_s^m + \alpha_f^m =1$ and $\alpha_s^b + \alpha_f^b =1$ are satisfied. The quantities without subscripts are the total quantities associated with the mixtures, either in the flow $\left ( \rho^m, u^m, \tau^m\right )$ or in the bed $\left ( \rho^b, u^b, \tau^b\right )$. \\[3mm] To connect the velocities at the lower level (denoted by $_l$) of the flow (landslide) and the erodible substrate to the mean flow velocities, following Pudasaini and Fischer (2020a), we define the following extended relations, \begin{eqnarray} \begin{array}{lll} &&u_l^m = \lambda^m_l u^m, u^b = \lambda^b u^m; u_{s_l}^m = \lambda_{s_l}^m u_s^m, u_{f_l}^m = \lambda_{f_l}^m u_f^m; u_{{sf}_l}^m = \lambda_{{sf}_l}^m u_s^m, u_{{fs}_l}^m = \lambda_{{fs}_l}^m u_f^m;\\[3mm] &&u_{ss}^b = \lambda_{ss}^b u_s^m, u_{ff}^b = \lambda_{ff}^b u_f^m; u_{fs}^b = \lambda_{fs}^b u_f^m, u_{sf}^b = \lambda_{sf}^b u_s^m, \label{Eqn_5} \end{array} \end{eqnarray} where $\lambda_l^m , \lambda^b $; $\lambda^m_{s_l}, \lambda^m_{f_l}; \lambda_{{sf}_l}^m, \lambda_{{fs}_l}^m; \lambda_{ss}^b, \lambda_{ff}^b; \lambda_{fs}^b, \lambda_{sf}^b$ are called the erosion drifts, and their mechanical closures are determined at Section 3.1.3 by the erosion-drift equations. These erosion drifts are positive numbers, and are bounded from above by unity. The relations in (\ref{Eqn_5}) are needed while deriving the erosion (basal) velocities (see below), the shear stresses (Section 2.4), the erosion rates or mass productions (Section 3) and the momentum productions (Section 4). The origin and use of the double subscripts $\left ( ss, sf, fs, ff\right )$ for the phase interactions are explained below and at Section 2.3 and Section 2.4. In (\ref{Eqn_5}), $u_{{sf}_l}^m$ is the velocity of the solid at the base of the landslide as it is in contact with the fluid at the bed. A similar definition applies for $u_{{fs}_l}^m$. Although there are ten erosion drifts in (\ref{Eqn_5}), which we had to define for formal reasons, only a few of them are independent (and needed); these can be modelled or described relatively easily as functions of known relevant bed inertial numbers (associated effective density ratios, Pudasaini and Krautblatter, 2021), as we will see at Section 3.1.3. Because only very few of the erosion drifts are needed in the final model equations, and simple mechanical closures are developed for all of them, this poses no difficulty. Note that $\lambda_l^m , \lambda^b$ are associated with the total mixtures, and $\lambda^m_{s_l}, \lambda^m_{f_l}$, $\lambda_{{sf}_l}^m, \lambda_{{fs}_l}^m$, $\lambda_{ss}^b, \lambda_{ff}^b$ and $\lambda_{fs}^b, \lambda_{sf}^b$ are associated with the solid and fluid phases (separately, or in combination) in the landslide and bed. These will become clearer in due course.
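To make the bookkeeping of (\ref{Eqn_1}), (\ref{Eqn_2}) and (\ref{Eqn_5}) concrete, the following is a minimal numerical sketch in Python, assuming SI units and purely illustrative values; all function and variable names are hypothetical and introduced only here.
\begin{verbatim}
# Minimal sketch of the mixture relations and a drift relation;
# illustrative values only, SI units assumed.

def mixture_density(alpha_s, rho_s, rho_f):
    # rho = alpha_s*rho_s + alpha_f*rho_f, with hold-up alpha_f = 1 - alpha_s
    return alpha_s * rho_s + (1.0 - alpha_s) * rho_f

def mixture_velocity(alpha_s, rho_s, rho_f, u_s, u_f):
    # mass-weighted: rho*u = alpha_s*rho_s*u_s + alpha_f*rho_f*u_f
    rho = mixture_density(alpha_s, rho_s, rho_f)
    return (alpha_s * rho_s * u_s + (1.0 - alpha_s) * rho_f * u_f) / rho

rho_m = mixture_density(0.35, 2700.0, 1100.0)             # landslide mixture
u_m = mixture_velocity(0.35, 2700.0, 1100.0, 25.0, 30.0)  # ~27.2 m/s

lam_b = 0.5          # placeholder drift; its mechanical closure follows in
u_b = lam_b * u_m    # Section 3.1.3 via u^b = lambda^b * u^m
\end{verbatim}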
In Pudasaini and Fischer (2020a) only $\lambda^m_{s_l}, \lambda^m_{f_l}; \lambda_{ss}^b, \lambda_{ff}^b$ appear, and only in reduced form, without any cross-phase interactions across the erosion-interface, limiting the applicability of their model. \subsection{Solid and fluid erosion velocities} In (\ref{Eqn_1}), (\ref{Eqn_2}) and (\ref{Eqn_5}) all the velocities $u$ with the superscript $^b$ are the erosion velocities, i.e., they are the velocities of the eroded materials, or are associated with the erosion velocities of the mobilized materials from the bed. For mixture materials, however, the erosion velocities of the solid particle and the fluid molecule at the bed are complex. There are four different interactions between the particles and fluid in the flow and the bed. For this reason, we introduced four double subscripts for the erosion velocities, and similarly for the shear stresses and shear resistances (Section 2.4), namely the solid-solid $(ss)$, solid-fluid $(sf)$, fluid-solid $(fs)$, and fluid-fluid $(ff)$ interactions. This is so because the solid particle at the bed is sheared (pushed) by both the solid and fluid at the base of the flow (landslide). This means that the solid erosion velocity $u_s^b$ has two components. It is jointly induced by the pushing of the solid from the landslide, which we denote by $u_{ss}^b$, and by the pushing of the fluid from the landslide, which we denote by $u_{fs}^b$. Similarly, the fluid molecule at the bed is also sheared by both the solid and fluid at the base of the flow, which are denoted by the components $u^b_{sf}$ and $u^b_{ff}$. Moreover, the basal velocity components $u^b_{ss}$ and $u^b_{fs}$ for the solid are weighted by the solid and fluid fractions $\alpha_s^m$ and $\alpha_f^m$ in the flowing mixture, because the solid particles at the bed are pushed by both the solid and fluid at the base of the flow. This is an important realization. The same is true for the fluid erosion velocity components $u^b_{sf}$ and $u^b_{ff}$, as the fluid molecules at the bed are pushed by both the solid and fluid at the base of the flow. So, as invented here, the solid and fluid erosion velocities take their complex forms: \begin{eqnarray} \begin{array}{lll} \displaystyle{u_s^b = \alpha_s^m u_{ss}^b + \alpha_f^m u_{fs}^b = \alpha_s^m \lambda_{ss}^b u_s^m + \alpha_f^m \lambda_{fs}^b u_f^m},\\[3mm] \displaystyle{u_f^b = \alpha_s^m u_{sf}^b + \alpha_f^m u_{ff}^b = \alpha_s^m \lambda_{sf}^b u_s^m + \alpha_f^m \lambda_{ff}^b u_f^m}, \label{Eqn_1NNN} \end{array} \end{eqnarray} where the erosion drifts are employed from (\ref{Eqn_5}). New drift parameters (or functions) $\lambda_{fs}^b$ and $\lambda_{sf}^b$ appear in (\ref{Eqn_5}) and (\ref{Eqn_1NNN}). We call them the cross-phase erosion drifts. Similarly, $\lambda_{ss}^b$ and $\lambda_{ff}^b$ are called the direct phase-erosion drifts. Once the erosion drifts $\left ( \lambda_{ss}^b, \lambda_{fs}^b\right )$ and $\left ( \lambda_{sf}^b, \lambda_{ff}^b\right )$ are determined at Section 3.1.3, the basal erosion velocities in (\ref{Eqn_1NNN}) are closed, because the phase volume fractions $\left (\alpha_s^m, \alpha_f^m \right)$ and velocities $\left (u_s^m, u_f^m \right)$ are the state variables in the mass and momentum balance equations, which are presented at Section 6. \subsection{Shear stresses} As the erosion rate depends on the shear stress jump $\tau^m - \tau^b$ across the erosion-interface, first, we need to describe all the shear stresses and their jumps across the erosion-interface.
In the Pudasaini and Fischer (2020a) two-phase erosion model, only the direct solid-solid and fluid-fluid interactions between the landslide and bed materials were considered. This is incomplete and partly inconsistent. Incomplete because, in general, there are four phase-interactions across the erosion-interface. Inconsistent because the fluid-fluid interactions $(ff)$ must follow other mechanical principles, not the Chezy-type frictions. Moreover, there are solid-fluid $(sf)$ and fluid-solid $(fs)$ interactions which were not considered in the Pudasaini and Fischer (2020a) model. This will become clearer below. \\[3mm] As mentioned above, consider the two-phase mixture landslide $\left( ^m\right )$ and bed material $\left ( ^b \right)$, both consisting of solid ($s$) and fluid ($f$) phases. When both the landslide (flow) and the erodible bed are composed of two-phase (or multi-phase) materials of different physical properties, the shear stresses are non-trivial and can become very complicated. There are two types of interactions: solid from the flow interacting with the solid and fluid in the bed, and fluid from the flow interacting with the solid and fluid in the bed. These interactions are discussed below in detail. \\[3mm] {\bf I. Solid in the landslide interacting with solid and fluid in the bed:} The solid particles in the landslide (flow) are in contact either with the solid or with the fluid in the bed. This introduces two interactions: solid from the flow interacting with the solid in the bed ($ss$: given by $ \alpha_f^b\left [ \tau_{ss}^m - \tau_{ss}^b \right ] $ on the left hand side in (\ref{Eqn_1N}) below), and solid from the flow interacting with the fluid in the bed ($sf$: given by $ \alpha_s^b\left [ \tau_{sf}^m - \tau_{sf}^b \right ]$ on the left hand side in (\ref{Eqn_1N}) below). However, as the solid phase in the mixture interacts with both phases in the bed, its solid stress has to be uniquely distributed between the fluid $\left (\alpha_f^b\right )$ and the solid $\left (\alpha_s^b\right )$ phases in the bed, and these distributions must be mechanically consistent. This is the reason why these factors appear in (\ref{Eqn_1N}) below. This results in two different (solid-solid, solid-fluid) interactions \begin{equation} \alpha_f^b\left [ \tau_{ss}^m - \tau_{ss}^b \right ] + \alpha_s^b\left [ \tau_{sf}^m - \tau_{sf}^b \right ] \!=\!\alpha_f^b\left [ \tau_{ss}^m - \tau_{ss}^b \right ] + \alpha_s^b\left [ \tau_{ss}^m - \tau_{sf}^b \right ] \!=\! \left[\alpha_f^b + \alpha_{s}^b \right] \tau_{ss}^m - \alpha_f^b \tau_{ss}^b - \alpha_s^b \tau_{sf}^b \!=\! \tau_{ss}^m - \alpha_f^b \tau_{ss}^b - \alpha_s^b \tau_{sf}^b, \label{Eqn_1N} \end{equation} where the double suffix $ss$ in $\tau_{ss}^m$ means that the shear stress is applied by the solid in the landslide mixture (first $s$ in $ss$) to the solid in the bed (second $s$ in $ss$), and $ss$ in $\tau_{ss}^b$ means the shear resistance by the solid from the bed (second $s$ in $ss$) against the applied shear stress by the solid from the landslide (first $s$ in $ss$). Similarly, the suffix $sf$ in $\tau_{sf}^m$ means that the shear stress is applied by the solid in the landslide mixture (first $s$ in $sf$) to the fluid in the bed (second $f$ in $sf$), and the suffix $sf$ in $\tau_{sf}^b$ means the shear resistance by the fluid from the bed (second $f$ in $sf$) against the applied shear stress by the solid from the landslide (first $s$ in $sf$).
Since the solid material in the mixture always behaves as a solid whether it interacts with the solid or fluid in the bed, $\tau_{sf}^m = \tau_{ss}^m$. This has been employed in (\ref{Eqn_1N}). In the sequel, other similar double suffix notations have analogous meanings. So, the double suffix represents the type of interactions and motions between the flow and the bed. \\[3mm] {\bf II. Fluid in the landslide interacting with solid and fluid in the bed:} Fluid molecules in the flow interact either with the solid or with the fluid in the bed. This introduces two further interactions: fluid from the flow interacting with the solid in the bed ($fs$), and fluid from the flow interacting with the fluid in the bed ($ff$). Again, as the fluid phase in the mixture communicates with both phases in the bed, its fluid stress has to be distributed between the solid $\left (\alpha_s^b\right )$ and the fluid $\left (\alpha_f^b\right )$ phases in the bed. This results in two different additional (fluid-solid, fluid-fluid) interactions \begin{equation} \alpha_s^b\left [ \tau_{fs}^m - \tau_{fs}^b \right ] + \alpha_f^b\left [ {\tau}_{ff}^m - \tau_{ff}^b \right ] =\alpha_s^b\left [ \tau_{fs}^m - \tau_{ss}^b \right ] + \alpha_f^b\left [ {\tau}_{ff}^m - \tau_{ff}^b \right ]. \label{Eqn_2N} \end{equation} In (\ref{Eqn_2N}) too, on the left hand side, $\tau_{fs}^b$ indicates the shear resistance of the basal solid material against the applied fluid shear stress from the flow, $\tau_{fs}^m$. Similarly, $\tau_{ff}^b$ indicates the shear resistance of the basal fluid material against the applied fluid shear stress from the flow, $\tau_{ff}^m$. Since the basal resistance from the solid in the bed is the same whether it acts against the solid or the fluid from the mixture, $\tau_{fs}^b = \tau_{ss}^b$, which is applied on the right hand side of (\ref{Eqn_2N}). However, note that due to the responses from the essentially distinct solid and fluid materials, there are fundamental differences between (\ref{Eqn_1N}) and (\ref{Eqn_2N}). The way the fluid in the mixture applies shear stress to the solid in the bed $\left ( \tau_{fs}^m\right)$ differs from the way it applies shear stress to the fluid in the bed $\left ( \tau_{ff}^m\right)$. Due to this, unlike (\ref{Eqn_1N}), (\ref{Eqn_2N}) cannot be reduced further. These stresses need to be modelled separately and carefully. This will become clearer at Section 2.4.2 and Section 2.4.3. \\[3mm] In (\ref{Eqn_1N}) and (\ref{Eqn_2N}), the $+$ and $-$ signs, respectively, are associated with the applied shear stresses from the flowing mixture and the shear resistances from the erodible bed material. By adding (\ref{Eqn_1N}) and (\ref{Eqn_2N}) we obtain the net shear stress of the system (applied shear stresses from the mixture flow minus shear resistances from the bed): \begin{equation} \tau_{ss}^m - \alpha_f^b \tau_{ss}^b - \alpha_s^b \tau_{sf}^b +\alpha_s^b\left [ \tau_{fs}^m - \tau_{ss}^b \right ] + \alpha_f^b\left [ {\tau}_{ff}^m - \tau_{ff}^b \right ] = \left[\tau_{ss}^m + \alpha_s^b \tau_{fs}^m + \alpha_f^b {\tau}_{ff}^m\right ] - \left[\tau_{ss}^b + \alpha_s^b \tau_{sf}^b + \alpha_f^b \tau_{ff}^b \right ].
\label{Eqn_3N0} \end{equation} So, the jump in the shear stress $\tau^m - \tau^b$ (or the net shear stress) across the erosion-interface can be written as: \begin{equation} \tau^m - \tau^b = \left[\tau_{ss}^m + \alpha_s^b \tau_{fs}^m + \alpha_f^b {\tau}_{ff}^m\right ] - \left[\tau_{ss}^b + \alpha_s^b \tau_{sf}^b + \alpha_f^b \tau_{ff}^b \right ], \label{Eqn_3N} \end{equation} where $\tau^m$ and $\tau^b$ are the total shear stress applied by the flow and the shear resistance by the bed, respectively. The shear stress jump in (\ref{Eqn_3N}) can also be written as: \begin{equation} \tau^m - \tau^b = \left[\tau_{ss}^m - \tau_{ss}^b\right ] + \alpha_s^b\left[ \tau_{fs}^m - \tau_{sf}^b\right ] + \alpha_f^b\left[ \tau_{ff}^m - \tau_{ff}^b\right ]. \label{Eqn_3al} \end{equation} {\bf Importance of the shear stress jump in (\ref{Eqn_3N}):} There are several important aspects associated with the net shear stress jump of the system as seen in (\ref{Eqn_3N}). The uniqueness and legitimacy of the elegant cross-couplings in (\ref{Eqn_1N}) and (\ref{Eqn_2N}) are discussed at Section 2.5 and Section 5.1. \\[3mm] {\bf A.} The first terms on the right hand side in the square brackets in (\ref{Eqn_3N}), i.e., $\tau_{ss}^m$ and $\tau_{ss}^b$, are due to the shear stress applied by the solid in the mixture and the resistance from the solid in the bed, respectively. As discussed at Section 2.4.1, these terms satisfy the Coulomb-type frictional rheologies because they originate from solid-solid interactions. \\[3mm] {\bf B.} The second terms on the right hand side in the square brackets in (\ref{Eqn_3N}), i.e., $\alpha_s^b \tau_{fs}^m$ and $\alpha_s^b \tau_{sf}^b$, are the shear stress applied by the fluid in the landslide against the solid in the bed, and the shear resistance by the fluid in the bed against the applied shear stress by the solid from the landslide. As these terms emerge from the fluid-solid and solid-fluid interactions, they satisfy the Chezy-type frictional rheologies, but with some crucial amendments revealed here. Models for these shear stresses are presented at Section 2.4.2. \\[3mm] {\bf C.} However, the last terms on the right hand side in the square brackets in (\ref{Eqn_3N}), i.e., $\alpha_f^b {\tau}_{ff}^m$ and $\alpha_f^b \tau_{ff}^b$, are the fluid-fluid interactions between the fluid in the landslide (applied shear stress) and the fluid in the bed (shear resistance). These need to be described in a new way, as there exists no model for such interactions in erosive mass flows. Models for these shear stresses are presented at Section 2.4.3. \\[3mm] The major task lies in modelling each of the shear stresses in (\ref{Eqn_3N}), which we deal with below for all six components (the three shear stresses applied by the landslide material, i.e., the plus terms in the first square bracket, and the three shear resistances from the erodible bed, i.e., the minus terms in the second square bracket).
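As a minimal sketch, the assembly of the jump in (\ref{Eqn_3al}) can be expressed in Python; the six stress components enter as inputs (their mechanical closures follow in Sections 2.4.1-2.4.3), and all names are hypothetical.
\begin{verbatim}
# Shear stress jump across the erosion-interface, following the grouping
# of the second form of the jump; the component stresses are supplied by
# the closures developed in Sections 2.4.1-2.4.3.

def shear_stress_jump(tau_ss_m, tau_fs_m, tau_ff_m,
                      tau_ss_b, tau_sf_b, tau_ff_b, alpha_s_b):
    alpha_f_b = 1.0 - alpha_s_b   # hold-up of the bed mixture
    return ((tau_ss_m - tau_ss_b)
            + alpha_s_b * (tau_fs_m - tau_sf_b)
            + alpha_f_b * (tau_ff_m - tau_ff_b))
\end{verbatim}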
\subsubsection{Coulomb-type shear stresses for solid-solid interactions} Following the classical approaches (Pudasaini and Fischer, 2020a), the solid phase shear stresses in (\ref{Eqn_3N}) induced by the solid-solid interactions between the landslide and the erodible basal surface are modelled by the Coulomb-type shear stresses, and are given by \begin{eqnarray} \begin{array}{lll} \tau_{ss}^m = \left ( 1 - \gamma^m\right ) \rho_s^m g^z h\mu_s^m\alpha_s^m,\\[3mm] \tau_{ss}^b = \left ( 1 - \gamma^b\right ) \rho_s^b g^z h \mu_s^b\alpha_s^b, \label{Eqn_1NN} \end{array} \end{eqnarray} where $\gamma^m = \rho_f^m/\rho_s^m$ and $\gamma^b = \rho_f^b/\rho_s^b$ are the density ratios, and the terms $\left ( 1 - \gamma^m\right )$ and $\left ( 1 - \gamma^b\right )$ emerge due to the buoyancy reduced normal loads of the respective solid particles in the landslide and the bed mixtures. Moreover, $h$ is the total flow depth (solid plus fluid); $\mu_s^m = \tan\delta_s^m$ and $\mu_s^b = \tan\delta_s^b$ are the friction coefficients corresponding to the friction angles $\delta_s^m$ and $\delta_s^b$, and $g^z$ is the component of gravitational acceleration in the direction normal to the slope. Out of the six shear stresses in (\ref{Eqn_3N}), these Coulomb-type shear stresses for solid-solid interactions are the simplest to model. \subsubsection{Novel Chezy-type shear stresses for fluid-solid and solid-fluid interactions} Following the classical approaches (Fraccarollo and Capart, 2002), the fluid-solid and solid-fluid shear stresses in (\ref{Eqn_3N}) between the landslide and the erodible basal surface are described by the Chezy-type shear stresses (Pudasaini and Fischer, 2020a): \begin{eqnarray} \begin{array}{lll} \tau_{fs}^m = C_f^m \rho_f^m \left ( u_{f_l}^m\right )^2\alpha_f^m h/H = C_f^m \rho_f^m \left ( \lambda_{f_l}^m u_f^m\right )^2\alpha_f^m h/H, \\[3mm] \tau_{sf}^b = C_f^b \rho_f^b \left ( u_{sf}^b\right )^2\alpha_f^b h/H = C_f^b \rho_f^b \left ( \lambda_{sf}^b u_s^m\right )^2\alpha_f^b h/H, \label{Eqn_2NN} \end{array} \end{eqnarray} where $C_f^m$ and $C_f^b$ are the Chezy-friction coefficients (Chow, 1959), and the erosion drifts $\lambda_{f_l}^m$ and $\lambda_{sf}^b$ are defined at Section 2.2 and their mechanical closures are presented at Section 3.1.3. It is very important to observe that the fluid-type resistance from the bottom material against the shear stress from the solid in the landslide, i.e., $\tau_{sf}^b$, contains a cross-velocity expression $u_{sf}^b = \lambda_{sf}^b u_s^m$. As mentioned at Section 2.3, the fluid motion in the bed in this situation is partly determined by the solid motion at the base, not by the fluid motion at the base of the landslide. This is a special circumstance encountered and identified here due to the solid-fluid interactions, and not realized previously in erosive mass transports. It poses a challenge, yet equally unravels the physical situation associated with such an interaction. This is in contrast to the usual mechanism in $\tau_{fs}^m $, which includes the fluid motion at the base of the landslide material. This is natural. In this respect, the structure of $\tau_{sf}^b$ is a fundamentally new understanding over the previous fluid-type bed shear resistance against the solid-type shear stress from the landslide in Pudasaini and Fischer (2020a), in which simply $u_{ff}^b = \lambda_{ff}^b u_f^m$ was used, which, for the mixture interactions, is physically inappropriate. This is a notable aspect. A short numerical sketch of these two classes of stress closures follows.
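The following is a minimal sketch of the Coulomb-type closure (\ref{Eqn_1NN}) and the Chezy-type closure (\ref{Eqn_2NN}) in Python, assuming SI units; $H$ is taken here to be the reference depth appearing in the $h/H$ scaling of the original relations (an assumption for this sketch), and all names are hypothetical.
\begin{verbatim}
import math

def tau_coulomb(gamma, rho_s, g_z, h, delta_deg, alpha_s):
    # Coulomb-type: (1 - gamma) * rho_s * g^z * h * tan(delta) * alpha_s
    return (1.0 - gamma) * rho_s * g_z * h \
           * math.tan(math.radians(delta_deg)) * alpha_s

def tau_chezy(C_f, rho_f, lam, u, alpha_f, h, H):
    # Chezy-type: C_f * rho_f * (lam * u)**2 * alpha_f * h / H, where
    # lam * u is the drifted (basal) velocity entering the interaction
    return C_f * rho_f * (lam * u) ** 2 * alpha_f * h / H
\end{verbatim}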
Also, contrary to the classical Chezy-type law, because of the mixtures in our consideration, as seen in (\ref{Eqn_2N}) and (\ref{Eqn_2NN}), the bed solid and landslide fluid volume fractions both appear along with the shear stresses $\tau^m_{fs}$ and $\tau^b_{sf}$. \subsubsection{Novel shear stress descriptions for fluid-fluid interactions} Although there are some crucial innovative aspects in the Chezy-type shear stresses for fluid-solid and solid-fluid interactions in (\ref{Eqn_2NN}), structurally the Coulomb-type and Chezy-type shear stresses are known and have been applied previously for erosive mass flows (Fraccarollo and Capart, 2002; Pudasaini and Fischer, 2020a). However, for the erosive landslide, the fluid-fluid interactions are not yet known, and we need to construct new physically-based models. We assume that, momentarily before the erosion takes place, the fluid in the bed material is stationary, or that its motion is negligible compared to the motion of the fluid in the landslide (this restriction can be lifted). For shear stresses that act at the interface between the viscous fluid in the landslide and the viscous fluid in the bed, the pioneering work by Beavers and Joseph (1967) provides a basis for modelling such processes (Jones, 1973). We utilize this idea to model the shear stresses between the fluid in the landslide mixture (as in a porous medium) and the fluid in the mixture bed (also behaving as a porous medium) and vice versa: \begin{eqnarray} \begin{array}{lll} \displaystyle{{\tau}_{ff}^m = \eta_f^m \frac{\alpha_{_{BJ}}^b}{\sqrt{\mathcal K^b}} u_{f_l}^m = \eta_f^m \frac{\alpha_{_{BJ}}^b}{\sqrt{\mathcal K^b}} \lambda_{f_l}^m u_{f}^m = \eta_f^m \lambda_{f_l}^m\frac{\alpha_{_{BJ}}^b}{\sqrt{\mathcal K^b}} u_{f}^m,}\\[5mm] \displaystyle{\tau_{ff}^b = \eta_f^b \frac{\alpha_{_{BJ}}^m}{\sqrt{\mathcal K^m}}u_{ff}^b = \eta_f^b \frac{\alpha_{_{BJ}}^m}{\sqrt{\mathcal K^m}} \lambda_{ff}^b u_{f}^m = \eta_f^b \lambda_{ff}^b\frac{\alpha_{_{BJ}}^m}{\sqrt{\mathcal K^m}} u_{f}^m,} \label{Eqn_3NN} \end{array} \end{eqnarray} where ${\alpha_{_{BJ}}}$ are dimensionless numbers, with typical values in $[0.7, 4.0]$, that depend on the physical parameters characterizing the structure of the landslide and bed materials (Beavers and Joseph, 1967), but are independent of the fluid viscosities on either side of the interface, and $\left ({\mathcal K^m}, {\mathcal K^b}\right)$ are the permeabilities of the landslide and the bed materials, respectively, which are small values but, depending on the material type, can vary by several orders of magnitude. Permeabilities are assumed to be given quantities rather than unknown parameters. Furthermore, the erosion drifts $\lambda_{f_l}^m$ and $\lambda_{ff}^b$ are defined at Section 2.2 and are given at Section 3.1.3. The shear stresses in (\ref{Eqn_3NN}) are distinguished by the fluid viscosities and velocities on their respective sides, however with the permeability of the bed and the permeability of the landslide materials on opposite sides of the erosion-interface. Moreover, the shear stress $\tau_{ff}^b$ contains a component $u_{ff}^b$ of the bed fluid (erosion) velocity $u_f^b$ induced by the fluid velocity in the landslide $\left (u_{f}^m\right)$, which complicates the situation; the model for this has been developed in (\ref{Eqn_1NNN}) and applied in (\ref{Eqn_3NN}). \\[3mm] The relations in (\ref{Eqn_3NN}) are based on the hypothesis of a slip boundary condition and are developed with a boundary-layer approach (Nield, 2009).
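A minimal Python sketch of the Beavers-Joseph-type stresses in (\ref{Eqn_3NN}) follows, with hypothetical names and the crossed permeabilities made explicit.
\begin{verbatim}
import math

def tau_ff_m(eta_f_m, alpha_bj_b, K_b, lam_fl_m, u_f_m):
    # tau_ff^m = eta_f^m * (alpha_BJ^b / sqrt(K^b)) * lambda_{f_l}^m * u_f^m;
    # note the bed permeability K^b on the resistive side
    return eta_f_m * alpha_bj_b / math.sqrt(K_b) * lam_fl_m * u_f_m

def tau_ff_b(eta_f_b, alpha_bj_m, K_m, lam_ff_b, u_f_m):
    # tau_ff^b = eta_f^b * (alpha_BJ^m / sqrt(K^m)) * lambda_ff^b * u_f^m;
    # here the landslide permeability K^m appears
    return eta_f_b * alpha_bj_m / math.sqrt(K_m) * lam_ff_b * u_f_m
\end{verbatim}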
Also, note that in contrast to the Beavers-Joseph-type law, because of the mixtures in our consideration, as seen in (\ref{Eqn_3N}), the bed fluid volume fraction appears along with these shear stresses, and both sides of the interface contain solid particles, thus forming porous materials of different physical properties. As $\sqrt{\mathcal K^b}$ has the dimension of length, $\lambda_{f_l}^m \alpha_{_{BJ}}^b u_{f}^m/\sqrt{\mathcal K^b}$ can be perceived as a rate of strain, which, when multiplied with the respective viscosity $\eta_f^m$, results in the usual structure of the viscous shear stress at the landslide base: ${\tau}_{ff}^m \approx \eta_f^m \left [\partial u_{f_l}^m/\partial z\right ]_{z = b}$, where $z$ is the coordinate along the flow depth and $b$ indicates the base. The same applies to $\tau_{ff}^b$. This reveals the origin, and justifies the physical mechanisms, of the interfacial shear stresses in (\ref{Eqn_3NN}). \\[3mm] {\bf Simplified fluid-fluid interactions:} The expressions in (\ref{Eqn_3NN}) can be simplified further. Following Chandesris and Jamet (2006), if $\alpha_{_{BJ}}^b$ can be written as the square root of the ratio of the fluid viscosities in the bed and the landslide, $\alpha_{_{BJ}}^b = \sqrt{\eta_f^b/\eta_f^m}$, and similarly $\alpha_{_{BJ}}^m = \sqrt{\eta_f^m/\eta_f^b}$, then (\ref{Eqn_3NN}) can be reduced to yield: \begin{equation} {\tau}_{ff}^m = \eta_e^{P} \frac{\lambda_{f_l}^m u_{f}^m}{\sqrt{\mathcal K^b}}, \,\,\,\,\, \tau_{ff}^b = \eta_e^{P} \frac{\lambda_{ff}^b u_{f}^m}{\sqrt{\mathcal K^m}}, \label{Eqn_4NN} \end{equation} where the geometric mean $\eta_e^{P} = \sqrt{\eta_f^m \eta_f^b}$ is the (net) effective system viscosity at the erosion-interface, which we call the erosive product viscosity, or simply the P-viscosity. So, the fluid shear stresses at the erosion-interface depend on the common P-viscosity and on the respective fluid velocities together with the drifts on their own sides, but are inversely associated with the permeabilities on the opposite (resistive) sides of the landslide-bed interface. \\[3mm] To sum up, the jump in the shear stresses across the erosion-interface is given by (\ref{Eqn_3N}) together with the shear stress models developed in (\ref{Eqn_1NN})-(\ref{Eqn_3NN}). \subsection{Reduced net shear stresses} Depending on the composition of the flowing landslide and the bed mixture and on the significance of the relevant shear stresses, as we show here, there are eight different reduced (indicated by $_r$) net shear stresses $\left (\tau^{net}_r = \tau^m_r - \tau^b_r\right)$ of the system (shear stress jumps across the erosion-interface). This also indicates the complexity associated with the two-phase erosive landslide, which has not been recognized in landslide research, as such a large number of interactions was not identified previously. However, these need to be realized correctly while distributing the shear stresses among the respective components in both the flow and bed materials as the landslide interacts with the bed. This is important. These cases are analyzed below. \\[3mm] {\bf I. Solid-type materials:} If both the landslide and bed materials are only of solid-type (dry landslide entraining dry bed), then (\ref{Eqn_3N}) directly reduces to the simple expression: \begin{equation} \tau^m_r - \tau^b_r = \tau_{ss}^m - \tau_{ss}^b = \tau_{s}^m - \tau_{s}^b. \label{Eqn_4N} \end{equation} Since there are only solid materials, there are no cross-interactions, so we use $s$ for $ss$.
This recognition will also be applied below for those shear stresses when some or all cross-interactions can be ignored. \\[3mm] {\bf II. Fluid-type materials:} If both the flow and bed materials are only of fluid-type (fluid flow entraining fluid material, e.g., flood entraining river, lake or reservoir fluid), then (\ref{Eqn_3N}) reduces to the simple expression: \begin{equation} \tau^m_r - \tau^b_r = \alpha_f^b \left [ {\tau}_{ff}^m- \tau_{ff}^b \right ] = {\tau}_{ff}^m- \tau_{ff}^b = {\tau}_{f}^m- \tau_{f}^b, \label{Eqn_5N} \end{equation} because the absence of solid means that all other solid and cross-phase contributions vanish and $\alpha_f^b = 1$; here, $f$ is used instead of $ff$ as there are no cross-interactions. However, note that, in this situation, we should use the simple fluid shear stress, because there are no solid particles to form a matrix constituting a porous medium. Yet, in general, a purely fluid-fluid interaction is less likely to take place in erosive mass flows. \\[3mm] {\bf III. Solid-type landslide and fluid-type bed:} If the landslide is a solid-type material and the bed is a fluid-type material (dry landslide entraining river, lake or reservoir fluid), then the landslide solid stress does not need to be distributed but is only applied to the fluid in the bed. With these realizations, from (\ref{Eqn_1N}), we obtain: \begin{equation} \tau^m_r - \tau^b_r = {\tau}_{sf}^m- \tau_{sf}^b = {\tau}_{ss}^m-\tau_{sf}^b. \label{Eqn_5N_1} \end{equation} {\bf IV. Fluid-type landslide and solid-type bed:} If the landslide is a fluid-type material and the bed is a solid-type material (flood entraining soil, sand, or gravel from the bed), then the fluid stress does not need to be distributed but is only applied to the solid in the bed. With these realizations, from (\ref{Eqn_2N}), we obtain: \begin{equation} \tau^m_r - \tau^b_r = {\tau}_{fs}^m- \tau_{fs}^b = {\tau}_{fs}^m-\tau_{ss}^b. \label{Eqn_5N_2} \end{equation} {\bf V. Mixture landslide and solid-type bed:} If the landslide is a mixture material and the bed is a solid-type material (debris flow entraining soil, sand, or gravel from the bed), then the landslide shear stresses do not need to be distributed, and only the bed solid shear resistance should be distributed. With these realizations, from (\ref{Eqn_1N}) and (\ref{Eqn_2N}), we get: \begin{eqnarray} \begin{array}{lll} \tau^m_r - \tau^b_r \!\!\!&=&\!\!\!\left [ \tau_{ss}^m - \alpha_f^b\tau_{ss}^b \right ] + \left [ \tau_{fs}^m - \alpha_s^b\tau_{fs}^b \right ] \!=\!\left [ \tau_{ss}^m - \alpha_f^b\tau_{ss}^b \right ] + \left [ \tau_{fs}^m - \alpha_s^b\tau_{ss}^b \right ]\\[3mm] \!\!\!&=&\!\!\!\left [ \tau_{ss}^m + \tau_{fs}^m \right ]- \tau_{ss}^b \!=\!\left [ \tau_{ss}^m - \tau_{ss}^b \right ] + \tau_{fs}^m. \label{Eqn_6N} \end{array} \end{eqnarray} If there are no fluid-solid interactions, i.e., only a solid-type landslide and a solid-type bed, this can easily be understood as it directly reduces to $\left [ \tau_{ss}^m - \tau_{ss}^b \right ] = \left [ \tau_{s}^m - \tau_{s}^b \right ]$, which is (\ref{Eqn_4N}). If there is no solid but only fluid-type material in the landslide, then (\ref{Eqn_6N}) reduces instead to $ \left [\tau_{fs}^m - \tau_{ss}^b \right ]$, which is (\ref{Eqn_5N_2}). \\[3mm] {\bf VI.
Mixture landslide and fluid-type bed:} If the landslide is a mixture material and the bed is a fluid-type material (debris flow entraining river, lake or reservoir fluid), then, from (\ref{Eqn_1N}) and (\ref{Eqn_2N}), we have \begin{equation} \tau^m_r - \tau^b_r = \left [ \tau_{sf}^m - \alpha_s^b\tau_{sf}^b \right ] + \left [ {\tau}_{ff}^m - \alpha_f^b\tau_{ff}^b \right ] = \left [ \tau_{ss}^m - \alpha_s^b\tau_{sf}^b \right ] + \left [ {\tau}_{ff}^m - \alpha_f^b\tau_{ff}^b \right ] = \left [ \tau_{ss}^m + {\tau}_{ff}^m\right ] - \left [ \alpha_s^b\tau_{sf}^b + \alpha_f^b\tau_{ff}^b\right ]. \label{Eqn_7N} \end{equation} Unlike the same type of solid resistances from the bed in {\bf V}, here the fluid resistances from the bed $\tau_{sf}^b$ and $\tau_{ff}^b$ are incomparably different, as revealed in (\ref{Eqn_2NN}) and (\ref{Eqn_3NN}). So, the situation here cannot be simplified further. This is an important novel development. However, if there is no solid in the landslide, then $\alpha_f^b = 1$ and (\ref{Eqn_7N}) reduces to $\left [ \tau_{ff}^m - \tau_{ff}^b\right ] = \left [ \tau_{f}^m - \tau_{f}^b\right ]$, which is (\ref{Eqn_5N}). In another scenario, if the landslide only contains the solid material, then $\alpha_s^b = 1$ and (\ref{Eqn_7N}) takes the form $\left [\tau_{ss}^m - \tau_{sf}^b\right ]$, which is (\ref{Eqn_5N_1}). \\[3mm] {\bf VII. Solid-type landslide and mixture bed:} If the landslide is dry and the bed material is a mixture (dry landslide of soil, sand, or gravel entraining debris material from the slope or bed), then, from (\ref{Eqn_1N}), we have: \begin{equation} \tau^m_r - \tau^b_r = \left [ \alpha_f^b\tau_{ss}^m - \tau_{ss}^b \right ] + \left [ \alpha_s^b\tau_{sf}^m - \tau_{sf}^b \right ] = \left [ \alpha_f^b\tau_{ss}^m - \tau_{ss}^b \right ] + \left [ \alpha_s^b\tau_{ss}^m - \tau_{sf}^b \right ] = \tau_{ss}^m - \left [\tau_{ss}^b + \tau_{sf}^b\right ] = \left [\tau_{ss}^m - \tau_{ss}^b\right ] - \tau_{sf}^b. \label{Eqn_8N} \end{equation} Again, if there are no solid-fluid interactions (no fluid in the bed), this reduces to $\left [\tau_{ss}^m - \tau_{ss}^b\right ] = \left [\tau_{s}^m - \tau_{s}^b\right ]$, which is (\ref{Eqn_4N}). In another scenario, if the bed only contains fluid material, then (\ref{Eqn_8N}) takes the form $\left [\tau_{ss}^m - \tau_{sf}^b\right ] $, which is (\ref{Eqn_5N_1}). \\[3mm] {\bf VIII. Fluid-type landslide and mixture bed:} If the landslide is a fluid material and the bed is a mixture (flood entraining debris material from the slope or bed), then, from (\ref{Eqn_2N}), we have: \begin{eqnarray} \begin{array}{lll} \tau^m_r - \tau^b_r\!\!\! &=& \!\!\!\left [ \alpha_s^b\tau_{fs}^m - \tau_{fs}^b \right ] + \left [ \alpha_f^b {\tau}_{ff}^m - \tau_{ff}^b \right ] =\left [\alpha_s^b \tau_{fs}^m + \alpha_f^b {\tau}_{ff}^m \right ] - \left [\tau_{fs}^b + \tau_{ff}^b \right ]\\[3mm] \!\!\!&=&\!\!\! \left [\alpha_s^b \tau_{fs}^m + \alpha_f^b {\tau}_{ff}^m \right ] - \left [\tau_{ss}^b + \tau_{ff}^b \right ]. \label{Eqn_9N} \end{array} \end{eqnarray} Because of the complex fluid shear stresses, the stress jump mechanism here is quite different from that in {\bf VII}, and these shear stresses cannot be reduced any further. However, if there is no solid in the bed, then $\alpha_s^b = 0$, $\alpha_f^b = 1$ and (\ref{Eqn_9N}) reduces to $\left [\tau_{ff}^m - \tau_{ff}^b \right ]$ = $\left [\tau_{f}^m - \tau_{f}^b \right ]$, which is (\ref{Eqn_5N}).
In another scenario, if the bed only contains the solid material, then $\alpha_s^b = 1$ and (\ref{Eqn_9N}) takes the form $\left [\tau_{fs}^m - \tau_{ss}^b\right ] $, which is (\ref{Eqn_5N_2}). \\[3mm] The reduced net shear stresses {\bf I}$-${\bf VIII} clearly demonstrate the consistency and physical significance of the interfacial interactions (\ref{Eqn_1N}) and (\ref{Eqn_2N}) and the shear stress jumps in (\ref{Eqn_3N}). These aspects have been elaborated at Section 5.1. \subsection{Modelling basal substrate as an effectively single-phase material} For simplicity and convenience in applications, we can alternatively consider the basal substrate as an effectively single-phase (mixture) material in which the dynamics of the fluid component is not explicit, but enters only via the pore pressure of the fluid in the solid matrix. In this situation, commonly, a single solid-type shear stress (the total shear stress) is considered for the erodible bed as: \begin{equation} \tau^b = \left ( 1 - \frac{p_f^b}{p_T^b}\right ) \rho^b_B g^z h\mu^b_s, \label{Eqn_4_1} \end{equation} where $p_f^b$ and $p_T^b$ are the pore fluid pressure and the total pressure in the basal material and ${p_f^b}/{p_T^b}$ represents the corresponding pore pressure ratio (ratio between the basal pore fluid pressure and the total basal normal stress, Hungr, 1995; Pudasaini, 2012), and $\rho^b_B$ is the bulk density of the basal substrate. $p_T^b$ is given by $p_T^b = \alpha_s^b\rho_s^b g^z h + \alpha_f^b\rho_f^b g^z h$. However, depending on the dynamical state of the pore fluid pressure, $p_f^b$ varies between $\alpha_f^b\rho_f^b g^z h$ and $\alpha_s^b\rho_s^b g^z h + \alpha_f^b\rho_f^b g^z h$. So, there exists a parameter (or a function) $\Upsilon^b \in [0, 1]$ such that \begin{equation} p_f^b = \left ( 1- \Upsilon^b\right ) \alpha_f^b\rho_f^b g^z h+ \Upsilon^b \left ( \alpha_s^b\rho_s^b g^z h+ \alpha_f^b\rho_f^b g^z h\right). \label{Eqn_4p} \end{equation} We call $\Upsilon^b$ the effective pore pressure ratio. The use of this terminology is explained below. This expression automatically satisfies the natural condition that as $\Upsilon^b \to 0$ the fluid pressure approaches the usual hydrostatic pressure, and as $\Upsilon^b \to 1$ the fluid pressure approaches the total material load (pressure). As $\Upsilon^b$ covers the whole spectrum, it measures the deviation of the pore fluid pressure from the hydrostatic fluid pressure up to the complete liquefaction of the bed material. With simple algebra, (\ref{Eqn_4_1}) can be re-written as: \begin{equation} \tau^b = \left ( 1 - \Upsilon^b\right) {\mathcal S_c} \,\rho^b_B g^z h\mu^b_s, \label{Eqn_4} \end{equation} where \begin{equation} {\mathcal S_c} = \left[\frac{\rho_s^b\alpha_s^b}{\rho_s^b\alpha_s^b + \rho_f^b\alpha_f^b}\right], \label{Eqn_4_0} \end{equation} and ${\mathcal S_c} \in [0, 1]$. We call ${\mathcal S_c}$ the shear stress correction factor. As revealed below, the representation of $\tau^b$ in (\ref{Eqn_4}) has several crucial mechanical implications. \\[3mm] ($i$) When there is a substantial amount of fluid in the erodible bed substrate, simply using $ 1 - \Upsilon^b$ and not considering ${\mathcal S_c}$ significantly overestimates the basal shear stress. This indicates that the previous bulk basal shear stress models (e.g., Hungr, 1995; Pudasaini et al., 2005; Iverson and George, 2014; Iverson and Ouyang, 2015) for the mixture material are inappropriate. So, in fact, the basal material is substantially weaker than previously thought.
This is a fundamentally novel understanding with important implications in mass flow simulation, particularly when it comes to erosive flows. \begin{figure} \begin{center} \includegraphics[width=9cm]{ShearStressCorrection_Sc.eps} \includegraphics[width=9cm]{ShearStressWithWithout_Sc.eps} \end{center} \caption[]{(a) Shear stress correction factor $\mathcal S_c$ as given by (\ref{Eqn_4_0}), which increases with the solid volume fraction $\alpha_s^b$ in the bed material. (b) Incorrect and correct shear stresses that respectively neglect and include $\mathcal S_c$. The incorrect shear stress substantially overestimates the basal shear resistance, largely so for lower $\alpha_s^b$.} \label{Fig_1} \end{figure} \\[3mm] ($ii$) Another crucial aspect is the direct involvement of the solid particle concentration in the basal shear stress via $\mathcal S_c$, which was ignored by all the previous shear stress models, as those models only use the mixture bulk density $\rho^b_B$, but not the composite mixture density $\rho^b$. From the mixture perspective of the basal substrate, i.e., $\rho^b_B = \rho^b$, (\ref{Eqn_4}) reduces to \begin{equation} \tau^b = \left ( 1 - \Upsilon^b\right) \alpha_s^b\rho_s^b \,g^z h\mu^b_s. \label{Eqn_4_4} \end{equation} Mechanically, the form of (\ref{Eqn_4_4}) is important. Since the particle concentration is one of the main controlling dynamical factors in determining the shear resistance, those models (e.g., Hungr, 1995; Pudasaini et al., 2005; Iverson and George, 2014; Iverson and Ouyang, 2015) without its involvement cannot legitimately represent the true shear resistance. \\[3mm] To see the dynamical effect of the factor ${\mathcal S_c}$ in the basal shear resistance $\tau^b$, consider some usual parameters (Mergili et al., 2020; Pudasaini and Fischer, 2020a; Pudasaini and Krautblatter, 2021): $\rho_s^b = 2700, \rho_f^b = 1100, h = 5, \delta_s^b = 20^\circ, \zeta = 45^\circ, g = 9.81$, where $\mu_s^b = \tan\delta_s^b$ is the friction coefficient, $\zeta$ is the slope angle, $g^z = g \cos\zeta$, and $g$ is the gravitational acceleration. Furthermore, we consider $\Upsilon^b = 0.5$. As shown in Fig. \ref{Fig_1}a, the expression in (\ref{Eqn_4}) reveals that the shear resistance decreases non-linearly with decreasing solid fraction via ${\mathcal S_c}$; it tends to vanish as the solid fraction becomes negligibly small, and it takes its maximum value as the solid fraction approaches unity. Only this limiting value could be considered by all the existing effectively single-phase shear resistance models (e.g., Hungr, 1995; Pudasaini et al., 2005; Iverson and George, 2014; Iverson and Ouyang, 2015), which is less realistic for the mixture substrate. For a particularly representative value of the basal solid fraction, $\alpha_s^b = 0.55$, we obtain $\mathcal S_c = 0.75$. This means that the existing shear resistance models typically overestimate $\tau^b$ by about 25\%, which is substantial. The overestimation can increase further if the basal material is till-rich, for which the true solid density can decrease to $\rho_s^b = 2000$ and the shear stress reduces by about 30\%. This can result in a completely different scenario as (erosive) debris avalanches travel long distances. This demonstrates the mechanical and dynamical importance of the new shear resistance model in (\ref{Eqn_4}) and (\ref{Eqn_4_4}). Similarly, Fig. \ref{Fig_1}b shows that the classical shear stress, which neglects ${\mathcal S_c}$, substantially overestimates the basal shear resistance, mainly for lower $\alpha_s^b$.
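The quoted numbers can be reproduced with a minimal numerical sketch of (\ref{Eqn_4_0}) and (\ref{Eqn_4}) in Python, using the parameter values listed above (SI units assumed) and the mixture perspective $\rho^b_B = \rho^b$; all names are hypothetical.
\begin{verbatim}
import math

rho_s_b, rho_f_b = 2700.0, 1100.0
h, delta_b, zeta, g, Upsilon_b = 5.0, 20.0, 45.0, 9.81, 0.5
g_z = g * math.cos(math.radians(zeta))
mu_s_b = math.tan(math.radians(delta_b))

def S_c(alpha_s_b):
    # S_c = rho_s^b alpha_s^b / (rho_s^b alpha_s^b + rho_f^b alpha_f^b)
    a_f = 1.0 - alpha_s_b
    return rho_s_b * alpha_s_b / (rho_s_b * alpha_s_b + rho_f_b * a_f)

def tau_b(alpha_s_b):
    # corrected basal shear resistance with rho_B^b taken as rho^b
    rho_b = rho_s_b * alpha_s_b + rho_f_b * (1.0 - alpha_s_b)
    return (1.0 - Upsilon_b) * S_c(alpha_s_b) * rho_b * g_z * h * mu_s_b

print(S_c(0.55))   # 0.75: neglecting S_c overestimates tau^b by about 25%
print(tau_b(0.55)) # corrected basal shear resistance in Pa
\end{verbatim}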
\\[3mm] $(iii)$ The shear stress given by (\ref{Eqn_4}) or (\ref{Eqn_4_4}) includes both the pore pressure ratio and its dependency on the particle concentration. We have presented the first shear stress model that formally includes these important mechanical aspects. \\[3mm] {\bf The effective pore pressure ratio:} Following its definition from (\ref{Eqn_4p}), $\Upsilon^b$ can be explicitly written as: \begin{equation} \Upsilon^b = \frac{p_f^b - \alpha_f^b\rho_f^b g^z h}{\alpha_s^b\rho_s^b g^z h}, \label{Eqn_4_33} \end{equation} which says that $\Upsilon^b$ is the measure of the actual pore fluid pressure in excess of the hydrostatic fluid pressure (relative to the solid load in the bed). The structure (form) of $\Upsilon^b$ in (\ref{Eqn_4_33}) justifies the terminology used above for $\Upsilon^b$, the effective pore pressure ratio. \section{Erosion rates} \subsection{The total erosion rate} The erosion rate (or the rate of mass production) is determined by the jump in the shear stresses and the jump in the momentum fluxes across the landslide-bed interface. We consider the jump in the shear stresses, $\tau^m - \tau^b $, from (\ref{Eqn_3N}) together with all the stress components as modelled in (\ref{Eqn_1NN})-(\ref{Eqn_3NN}) (alternatively, (\ref{Eqn_6N}) with (\ref{Eqn_4_4}), $\left [\tau^m_{ss} + \tau^m_{fs}\right] - \tau^b_{ss}$, $\tau^b = \tau^b_{ss}$), and the jump in the momentum fluxes, $\rho^m u_l^m - \rho^b u^b$, across the erosion-interface. Then, following Pudasaini and Fischer (2020a), the total erosion-rate $E$ of the system is obtained by: \begin{equation} E = \frac{\tau^m - \tau^b}{ \rho^m u_l^m - \rho^b u^b } = \frac{\tau^m - \tau^b}{\left ( \rho^m\lambda^m_l - \rho^b\lambda^b \right ) u^m }, \label{Eqn_6} \end{equation} where the mixture densities $\rho^m, \rho^b$ are given by (\ref{Eqn_1}) and (\ref{Eqn_2}), respectively, and the drift relations for $u_l^m$ and $u^b$ have been employed from (\ref{Eqn_5}), with their descriptions at Section 3.1.3. We note that a negative $E$ corresponds to deposition. \\[3mm] Now, following Pudasaini and Fischer (2020a), we consider the shear velocity $u^*$ of the system, which is given by the square root of the ratio between the net shear stress of the system and the relevant net density across the erosion-interface, and reads: \begin{equation} \displaystyle{u^* = \frac{\sqrt{\tau^m - \tau^b }} {\sqrt{\left ( \rho^m\lambda^m_l - \rho^b\lambda^b \right )}}}. \label{Eqn_6_sv} \end{equation} The shear velocity is proportional to the flow velocity $u^m$. So, with the proportionality factor $\tilde \nu$, we can define a relationship as $u^m = \tilde \nu u^*$. With this, the erosion rate $E$ yields \begin{equation} E = \frac{\sqrt{\tau^m - \tau^b }} {\sqrt{\nu \left ( \rho^m\lambda^m_l - \rho^b\lambda^b \right )}}, \label{Eqn_7} \end{equation} where, for simplicity, $\nu = \tilde \nu^2$ is set. The total (overall) erosion rate $E$ in (\ref{Eqn_7}) is, in fact, in a very compact form. When the involved total shear stresses $\tau^m$, $\tau^b$ and the mixture densities $\rho^m, \rho^b$ are expressed explicitly, this expression becomes extensive. The number of involved model (physical) parameters, their values and closures are presented at Section 5.4.
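As a minimal sketch (hypothetical names; the stress jump is assumed to have been assembled from the component models of Section 2.4, and the illustrative values are chosen here only for demonstration), the total erosion rate (\ref{Eqn_7}) and the shear velocity (\ref{Eqn_6_sv}) read:
\begin{verbatim}
import math

def shear_velocity(tau_jump, rho_m, lam_l_m, rho_b, lam_b):
    # u* = sqrt((tau^m - tau^b) / (rho^m lambda_l^m - rho^b lambda^b))
    return math.sqrt(tau_jump / (rho_m * lam_l_m - rho_b * lam_b))

def erosion_rate(tau_jump, rho_m, lam_l_m, rho_b, lam_b, nu):
    # E = sqrt((tau^m - tau^b) / (nu (rho^m lambda_l^m - rho^b lambda^b)));
    # written for a positive jump (erosion); a negative E means deposition
    return math.sqrt(tau_jump / (nu * (rho_m * lam_l_m - rho_b * lam_b)))

# Illustrative call; nu = 225 anticipates the closure for the shear
# velocity discussed in the next subsection (nu in (100, 400)).
E = erosion_rate(tau_jump=2.0e3, rho_m=1660.0, lam_l_m=1.0,
                 rho_b=1300.0, lam_b=0.56, nu=225.0)
\end{verbatim}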
\subsubsection{Erosive-shear-velocity} By comparing (\ref{Eqn_6}) with (\ref{Eqn_6_sv}), we obtain an expression for $u^*$. This leads to the definition of a new shear velocity, which we call the erosive-shear-velocity, written as $u_{_{{\mathcal S}_E}}$, which takes the form: \begin{equation} \displaystyle{u_{_{{\mathcal S}_E}} = \mathcal P_{_{E_u}}\, u^m}, \label{Eqn_6_sv1} \end{equation} where $\mathcal P_{_{E_u}} = \sqrt{{E}/{u^m}}$ is the dynamical proportionality factor between the flow velocity and the erosive-shear-velocity. The erosive-shear-velocity $u_{_{{\mathcal S}_E}}$ is the square root of the momentum production induced by the flow velocity and the erosion rate $\left (\sqrt{u^m E}\right)$. $u_{_{{\mathcal S}_E}}$ increases with the flow velocity $u^m$ (but can be directly related to the erosion velocity $u^b$ with the relationship $u^b = \lambda^b u^m$) and the erosion rate $E$. So, the erosive-shear-velocity is primarily induced by erosion, and vanishes for non-erosive flows. It is important to note that the erosive-shear-velocity is proportional to the flow velocity, and its proportionality factor $\mathcal P_{_{E_u}}$ varies as a function of the erosion rate and inversely with the flow velocity, i.e., $\sqrt{{E}/{u^m}}$. As the erosion rate can be relatively small and the flow velocity can be relatively large, the erosive-shear-velocity is expected to be substantially smaller than the shear velocity. Yet, as is clear from their definitions, the erosive-shear-velocity is fundamentally different from the classical shear velocity as it includes the erosion rate. The novel mechanism and understanding of the erosive-shear-velocity is revealed here with our dynamical modelling approach. \subsubsection{Closure for the shear velocity} Usually, the shear velocity is about 5\% to 10\% of the mean flow velocity, and thus $1/\sqrt{\nu}\in \left ( 0.05, 0.1 \right)$. So, for simplicity, we can accordingly take a suitable value of $\nu$ in the range $(100, 400)$. Otherwise, we follow Pudasaini and Fischer (2020a) for an analytical closure relation for $\nu$. \subsubsection{Closures for erosion drifts} Erosion drifts are essential quantities as they provide crucial information about the erosion velocities, which play a central role in explaining the erosion rate and the net momentum production, and the associated excess energy that controls the mobility of erosive mass transports (Pudasaini and Krautblatter, 2021). So, now we construct different erosion drift equations providing mechanical closures for all the erosion drifts appearing in the process of model development (erosion rates and net momentum productions). \\[3mm] {\bf I. Total erosion drift, solid-solid and fluid-fluid direct phase erosion drifts:} Considering the balance between the effective reduced net frictional stress, $\left ( \tau^m - \tau^b\right )/\rho^m$, where $\rho^m$ emerges due to the mass factor, and the momentum production in erosion, which is related to (\ref{Eqn_6}), $u^b E = \lambda^b u^m E = \lambda^b \left ( \tau^m - \tau^b\right )/\left ( \rho^m\lambda^m_l - \rho^b\lambda^b\right )$ (Pudasaini and Fischer, 2020a), we obtain the erosion drift equation for the total mixture system as: \begin{equation} \lambda^m_l = \left ( 1 + \frac{\rho^b}{\rho^m}\right)\lambda^b. \label{Eqn_8} \end{equation} This is a compact, strong and general drift equation, where $N^I = {\rho^b}/{\rho^m}$ is the associated (erosional) bed inertial number, the ratio between the effective mass in the bed and that in the flow (Pudasaini and Krautblatter, 2021).
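A minimal sketch of the closure implied by (\ref{Eqn_8}) follows, anticipating that $\lambda^m_l$ is set to unity in the depth-averaged frame (as discussed under {\bf B} below); the illustrative values are those used for Fig. \ref{Fig_2}.
\begin{verbatim}
def total_erosion_drift(rho_b, rho_m, lam_l_m=1.0):
    # lambda^b = lambda_l^m / (1 + N^I), with bed inertial number
    # N^I = rho^b / rho^m
    return lam_l_m / (1.0 + rho_b / rho_m)

rho_m = 0.35 * 2700.0 + 0.65 * 1100.0       # landslide mixture density
for alpha_f_b in (0.2, 0.4, 0.6, 0.8):      # weaker bed: more basal fluid
    rho_b = (1.0 - alpha_f_b) * 1600.0 + alpha_f_b * 1000.0
    print(alpha_f_b, total_erosion_drift(rho_b, rho_m))  # lambda^b grows
\end{verbatim}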
\begin{figure} \begin{center} \includegraphics[width=9cm]{ErosionDrift_Total.eps} \end{center} \caption[]{The total erosion drift $\lambda^b$ given by (\ref{Eqn_8}) as a function of the basal fluid volume fraction $\alpha_f^b$, showing the increase in $\lambda^b$ with the weakened basal material.} \label{Fig_2} \end{figure} We analyze each of the erosion drifts (for simplicity) by locally keeping the associated mass of the landslide unchanged but weakening the basal materials. Otherwise, we can also reverse the situation, or consider general settings and analyze the erosion drifts. As we will see later, one of the main aspects is to demonstrate the significant to dominant influence of the cross-erosion drifts in the erosion dynamics; for this, we consider a fluid-dominated landslide, akin to hyperconcentrated flows. Without loss of generality, this is achieved by choosing $\alpha_s^m = 0.35$ (so $\alpha_f^m = 0.65$). Moreover, following the literature (Mergili et al., 2020; Pudasaini and Krautblatter, 2021), the other physical parameters used are: $\rho_s^m = 2700, \rho_f^m = 1100; \rho_s^b = 1600, \rho_f^b = 1000; u_s^m = 25, u_f^m = 30$. With these, $\rho_s^m\alpha_s^m$ and $\rho_f^m\alpha_f^m$ are assumed not to change much locally; in general situations, however, they are variable. \\[3mm] Figure \ref{Fig_2} displays the dynamics of the total erosion drift $\lambda^b$ as a function of the basal fluid volume fraction $\alpha_f^b$. As $\alpha_f^b$ increases, the bed material becomes weaker. This results in increased values of the total erosion drift, which in turn leads to an increase in erosion velocity. This is how the drift explains the erosion mechanism. There are two important features associated with the drift equation (\ref{Eqn_8}). \\[3mm] {\bf A. Solid-solid and fluid-fluid direct-phase-erosion drifts:} In the limit, (\ref{Eqn_8}) reduces to the solid only (solid-solid) and fluid only (fluid-fluid) erosion drift equations as constructed by Pudasaini and Fischer (2020a): \begin{equation} \lambda_{s_l}^m = \left ( 1 + \frac{\rho_s^b\alpha_s^b}{\rho_s^m\alpha_s^m}\right)\lambda_{ss}^b, \label{Eqn_9} \end{equation} and \begin{equation} \lambda_{f_l}^m = \left ( 1 + \frac{\rho_f^b\alpha_f^b}{\rho_f^m\alpha_f^m}\right)\lambda_{ff}^b, \label{Eqn_10} \end{equation} respectively. In these drift equations, $N_{ss}^I = {\rho_s^b\alpha_s^b}/{\rho_s^m\alpha_s^m}$ and $N_{ff}^I = {\rho_f^b\alpha_f^b}/{\rho_f^m\alpha_f^m}$ are the corresponding bed inertial numbers. \\[3mm] {\bf B. Closures:} As the velocity shearing in the depth-averaged modelling frame is generally ignored, $\lambda^m_l$ in (\ref{Eqn_8}) can be set to unity (Pudasaini and Fischer, 2020a). So, since $\rho^m$ and $\rho^b$ are known from the flow configuration, the drift equation (\ref{Eqn_8}) provides a closure for $\lambda^b$. This also applies to (\ref{Eqn_9}) and (\ref{Eqn_10}), providing the closures for $\lambda_{ss}^b$ and $\lambda_{ff}^b$. \\[3mm] {\bf II. Solid-fluid and fluid-solid cross-erosion-drifts:} We also need to develop closure relations for the cross-drifts $\lambda_{sf}^b$ and $\lambda_{fs}^b$, whose structures are not yet known. With an elegant procedure, we first construct the closure for $\lambda_{sf}^b$. For this, consider the fluid erosion rate $E_f$. It can be formally decomposed into the components induced by the fluid-fluid $\left (E_{ff}\right)$ and the solid-fluid $\left (E_{sf}\right)$ interactions between the landslide and the bed.
As the cross-erosion drift $\lambda_{sf}^b$ is associated with the solid-fluid interaction, we need to consider the cross-contribution $E_{sf}$. Following the structure of the erosion rate from Section 3.1, we can write $E_{sf}$ in terms of its corresponding jump in the shear stresses, $\alpha_s^b \left [ \tau_{sf}^m - \tau_{sf}^b\right ]$, and the jump in the momentum fluxes, $\left [ \alpha_s^m\rho_s^m u_{{sf}_l}^m - \alpha_f^b\rho_f^b u_{{sf}}^b\right ]$, and, applying the relevant drifts for $u_{{sf}_l}^m$ and $u_{{sf}}^b$ from (\ref{Eqn_5}), obtain
\begin{equation}
\displaystyle{E_{sf} = \frac{\alpha_s^b \left [ \tau_{sf}^m - \tau_{sf}^b\right ]}{\left [ \alpha_s^m\rho_s^m u_{{sf}_l}^m - \alpha_f^b\rho_f^b u_{{sf}}^b\right ]} = \frac{\alpha_s^b \left [ \tau_{sf}^m - \tau_{sf}^b\right ]}{\left [ \alpha_s^m\rho_s^m \lambda_{{sf}_l}^m - \alpha_f^b\rho_f^b \lambda_{{sf}}^b\right ]u_s^m}. }
\label{Eqn_10_sf_1}
\end{equation}
So, since $u_{sf}^b = \lambda_{sf}^b u_s^m$, the corresponding momentum production $\left ( u_{sf}^bE_{sf} \right )$ takes the form:
\begin{equation}
\displaystyle{u_{sf}^bE_{sf} = \frac{\alpha_s^b \left [ \tau_{sf}^m - \tau_{sf}^b\right ] \lambda_{sf}^b}{\left [ \alpha_s^m\rho_s^m \lambda_{{sf}_l}^m - \alpha_f^b\rho_f^b \lambda_{{sf}}^b\right ]}. }
\label{Eqn_10_sf_2}
\end{equation}
Moreover, the effectively reduced net frictional stress (dissipation) for the solid-fluid interaction is given by
\begin{equation}
\displaystyle{\frac{\alpha_s^b \left [ \tau_{sf}^m - \tau_{sf}^b\right ]}{\alpha_s^m\rho_s^m}. }
\label{Eqn_10_sf_3}
\end{equation}
Since (\ref{Eqn_10_sf_2}) and (\ref{Eqn_10_sf_3}) are equivalent (Pudasaini and Fischer, 2020a), by comparing these expressions we finally obtain the closure relationship between $\lambda_{{sf}_l}^m$ and $\lambda_{{sf}}^b$:
\begin{equation}
\displaystyle{\lambda_{{sf}_l}^m = \left ( 1 + \frac{\rho_f^b\alpha_f^b}{\rho_s^m\alpha_s^m}\right )\lambda_{{sf}}^b, }
\label{Eqn_10_sf}
\end{equation}
in which $N_{sf}^I = {\rho_f^b\alpha_f^b}/{\rho_s^m\alpha_s^m}$ is the associated bed inertial number. Although (\ref{Eqn_10_sf}) involved some mechanical derivation (as seen above), physically it is a great achievement. The point is that, once derived formally and known explicitly, it can also be extracted directly from the compact drift equation (\ref{Eqn_8}) by carefully and consistently taking the cross terms $\rho_f^b\alpha_f^b$ from $\rho^b$ and $\rho_s^m\alpha_s^m$ from $\rho^m$, and correspondingly identifying $\lambda^b$ with $\lambda_{{sf}}^b$ and $\lambda^m_l$ with $\lambda_{{sf}_l}^m$. This converts (\ref{Eqn_8}) into (\ref{Eqn_10_sf}).
\\[3mm]
However, note that $E_{sf}$ as considered in (\ref{Eqn_10_sf_1}) serves only the purpose of constructing the closure relation (\ref{Eqn_10_sf}), not that of obtaining a contribution to the fluid erosion rate $E_{f}$. Due to their consistency and importance in practical applications (as explained in Section 2.1 and Section 5.2), the solid and fluid erosion rates $E_{s}$ and $E_{f}$ are derived in Section 3.4 from the total erosion rate $E$ in (\ref{Eqn_7}).
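For transparency, note that with the depth-averaged closure $\lambda_{{sf}_l}^m = 1$ (closure {\bf B} above), (\ref{Eqn_10_sf}) immediately yields the cross-drift in closed form:
\begin{equation*}
\lambda_{sf}^b = \frac{\lambda_{{sf}_l}^m}{1 + N_{sf}^I}
= \frac{\rho_s^m\alpha_s^m}{\rho_s^m\alpha_s^m + \rho_f^b\alpha_f^b}.
\end{equation*}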
\\[3mm]
Exactly in the same manner, we can construct the closure relationship between $\lambda_{{fs}_l}^m$ and $\lambda_{{fs}}^b$, either formally as in (\ref{Eqn_10_sf}), or by careful and consistent extraction from (\ref{Eqn_8}), to yield:
\begin{equation}
\displaystyle{\lambda_{{fs}_l}^m = \left ( 1 + \frac{\rho_s^b\alpha_s^b}{\rho_f^m\alpha_f^m}\right )\lambda_{{fs}}^b, }
\label{Eqn_10_fs}
\end{equation}
where $N_{fs}^I = {\rho_s^b\alpha_s^b}/{\rho_f^m\alpha_f^m}$ is the respective bed inertial number. With this, all the drifts in (\ref{Eqn_5}) are known fully and mechanically, without any fit parameter.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{ErosionDrift_ss_fs.eps}
\includegraphics[width=9cm]{ErosionDrift_sf_fs.eps}
\end{center}
\caption[]{Left: the solid-solid ($ss$) and fluid-solid ($fs$) erosion drifts associated with the erosion velocity of the solid particle in the bed, given by (\ref{Eqn_9}) and (\ref{Eqn_10_fs}). Right: the fluid-fluid ($ff$) and solid-fluid ($sf$) erosion drifts associated with the erosion velocity of the fluid in the bed, given by (\ref{Eqn_10}) and (\ref{Eqn_10_sf}).}
\label{Fig_3}
\end{figure}
\\[3mm]
It is crucial to understand the dynamics of the phase-phase and cross-phase drifts in accordance with how they are influenced by the changing composition of the bed material, which controls the erosion velocities. Figure \ref{Fig_3} shows the phase-phase ($ss$ and $ff$) and cross-phase ($fs$ and $sf$) erosion drifts, $\left (\lambda_{ss}^b, \lambda_{ff}^b\right)$ and $\left (\lambda_{fs}^b, \lambda_{sf}^b\right)$, respectively. As the erosion velocity of the solid particle at the bed, $u_s^b$, is a composite function of the solid-solid ($ss$) and fluid-solid ($fs$) erosion drifts, these are plotted together. Similarly, as the erosion velocity of the fluid molecule at the bed, $u_f^b$, is a composite function of the fluid-fluid ($ff$) and solid-fluid ($sf$) erosion drifts, these are also plotted together. However, these sets of erosion drifts are described as functions of the fluid volume fraction in the bed $\left (\alpha_f^b\right)$ and the solid volume fraction in the bed $\left (\alpha_s^b\right)$, respectively. This is logical and legitimate: as the fluid volume fraction in the bed increases, there are fewer solid particles in the bed. Then, both for the solid and the fluid in the landslide, it becomes easier to mobilize the solid in the bed. This is why both $\lambda_{ss}^b$ and $\lambda_{fs}^b$ increase with $\alpha_f^b$, which is quite natural. A similar analysis applies to the mobilization of the basal fluid by the solid and the fluid from the landslide. Figure \ref{Fig_3} clearly manifests that the cross-drifts $\lambda_{fs}^b$ and $\lambda_{sf}^b$ may even be of the same order of magnitude as the phase-drifts $\lambda_{ss}^b$ and $\lambda_{ff}^b$. These important aspects could not be considered previously.
\\[3mm]
The drifts and cross-drifts $\lambda_{ss}^b$ and $\lambda_{fs}^b$ are employed to describe the solid erosion velocity $u_s^b$ in (\ref{Eqn_1NNN}). However, they appear together with $\alpha_s^m$ and $\alpha_f^m$. So, it is more logical to define the effective phase-drifts and effective cross-drifts as $\Lambda_{ss}^b = \alpha_s^m\lambda_{ss}^b$ and $\Lambda_{fs}^b = \alpha_f^m\lambda_{fs}^b$. The results are presented in Fig. \ref{Fig_4}. The advantages here are twofold.
First, the figure shows that the effective cross-drift $\Lambda_{fs}^b$ may even strongly dominate the effective phase-drift $\Lambda_{ss}^b$, and that the difference between them increases steadily. Second, their sum is bounded from above by unity (since, with the above closures, each drift is bounded by unity and $\alpha_s^m + \alpha_f^m = 1$). This is appealing, as together they constitute the velocity of the mobilized solid particle from the bed. An analogous analysis applies between the effective cross-drift $\Lambda_{sf}^b$ and the phase-drift $\Lambda_{ff}^b$ associated with the mobilization of the fluid molecule from the bed.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{ErosionDriftEffective_ss_fs.eps}
\includegraphics[width=9cm]{ErosionDriftEffective_sf_ff.eps}
\end{center}
\caption[]{Left: the solid-solid ($ss$) and fluid-solid ($fs$) effective erosion drifts associated with the erosion velocity of the solid particle from the bed. Right: the fluid-fluid ($ff$) and solid-fluid ($sf$) effective erosion drifts associated with the erosion velocity of the fluid from the bed.}
\label{Fig_4}
\end{figure}
\\[3mm]
{\bf III. The super-erosion-drift equation:} It is instructive to observe some important aspects of the erosion drift equations derived above. First, the cross-drifts (\ref{Eqn_10_sf}) and (\ref{Eqn_10_fs}) can now be directly extracted from the compact total drift relationship (\ref{Eqn_8}). For this reason, we call (\ref{Eqn_8}) the super-erosion-drift-equation (or simply the S-drift). Remarkably, the S-drift contains all the necessary information for all the drift factors: in fact, all five needed drift factors can be obtained from the single drift equation (\ref{Eqn_8}). Second, the cross-drifts (\ref{Eqn_10_sf}) and (\ref{Eqn_10_fs}) are symmetric with respect to the solid-fluid and fluid-solid cross-phase interactions across the erosion interface. Third, these properties could be expected, but they also signify the strength of the total erosion drift equation (\ref{Eqn_8}), the consistency of all the derivations for the drifts, and the mechanical equivalence between the erosion-induced momentum production and the reduced frictional strength for erosional mass flows.
\subsection{Bed inertial numbers and mobility}
The shear resistances of the bed materials (in total or component-wise, direct or cross-phase) against the applied shear stresses from the landslide are described by the bed inertial numbers $N^I, N^I_{ss}, N^I_{ff}, N^I_{sf}, N^I_{fs}$ in (\ref{Eqn_8}), (\ref{Eqn_9}), (\ref{Eqn_10}), (\ref{Eqn_10_sf}), (\ref{Eqn_10_fs}), respectively. As these numbers decrease, the bed materials become weaker against the applied shear. This elevates the values of the corresponding drifts $\lambda^b, \lambda^b_{ss}, \lambda^b_{ff}, \lambda^b_{sf}, \lambda^b_{fs}$, resulting in increased erosion velocities. This effectively means that the mass flow mobility is associated with the bed inertial numbers. Depending on the respective inertial number, each erosion drift has its own special dynamics. However, the structures of the cross-inertial-numbers $N^I_{sf}$ and $N^I_{fs}$ tell us that it is relatively difficult for the fluid in the landslide to mobilize the grains in the bed, but relatively easy for the grains in the landslide to mobilize the fluid in the bed. This is intuitively clear from the perspective of material strength, but is revealed here by the mechanical closures for the cross-erosion drifts.
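These closures, and the single-equation extraction property of the S-drift, can be made concrete with a minimal numerical sketch, assuming the depth-averaged closures (all landslide-side drifts set to unity) and the parameter values used above; the bed fractions are assumed for illustration.
\begin{verbatim}
# All five erosion drifts from the single S-drift equation (Eqn_8):
# with the depth-averaged closure (landslide-side drift = 1),
# lambda = 1 / (1 + bed_mass / flow_mass) for the appropriate
# (phase-phase or cross-phase) effective masses.
def drift(flow_mass, bed_mass):
    return 1.0 / (1.0 + bed_mass / flow_mass)

a_s_m, a_f_m = 0.35, 0.65            # landslide fractions (as in the text)
a_s_b, a_f_b = 0.35, 0.65            # bed fractions (assumed for illustration)
m_s, m_f = 2700.0 * a_s_m, 1100.0 * a_f_m   # effective flow masses
b_s, b_f = 1600.0 * a_s_b, 1000.0 * a_f_b   # effective bed masses

lam_b  = drift(m_s + m_f, b_s + b_f)  # total drift (Eqn_8)
lam_ss = drift(m_s, b_s)              # solid-solid (Eqn_9)
lam_ff = drift(m_f, b_f)              # fluid-fluid (Eqn_10)
lam_sf = drift(m_s, b_f)              # solid-fluid cross (Eqn_10_sf)
lam_fs = drift(m_f, b_s)              # fluid-solid cross (Eqn_10_fs)

# Composite erosion velocities (Eqn_1NNN):
u_s_m, u_f_m = 25.0, 30.0             # phase flow velocities (m/s)
u_s_b = a_s_m * lam_ss * u_s_m + a_f_m * lam_fs * u_f_m
u_f_b = a_s_m * lam_sf * u_s_m + a_f_m * lam_ff * u_f_m
print(lam_b, lam_ss, lam_ff, lam_sf, lam_fs)
print(u_s_b, u_f_b)
\end{verbatim}
For these values, the cross-phase terms contribute the larger share of the composite erosion velocities, in line with Figs. \ref{Fig_3} and \ref{Fig_4}.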
In this respect, it is important to carefully analyze the erosion velocities in relation to the involved erosion drifts, but also in relation to the volume fractions of the solid and fluid in the landslide and their respective velocities. We focus on this below.
\subsection{Importance of different components in erosion velocities}
From (\ref{Eqn_1NNN}), by considering the solid erosion velocity $u_s^b = \alpha_s^m \lambda_{ss}^b u_s^m + \alpha_f^m \lambda_{fs}^b u_f^m$, we see that there are complex and composite contributions to the erosion velocity from the fractions of the materials, their velocities, and the drifts. Usually, $\lambda_{ss}^b$ can be substantially greater than $\lambda_{fs}^b$. However, depending on the volume fractions and velocities, the component $\alpha_f^m \lambda_{fs}^b u_f^m$ may still play an important to dominant role in comparison with the other component, $\alpha_s^m \lambda_{ss}^b u_s^m$, in $u_s^b$. Examples include the mobilization of light and loose material (e.g., till) from the bed by a fluid-dominated debris flood with higher fluid velocity. So, the contribution of the mobilization of the bed solid by the fluid in the landslide cannot simply be ignored, yet it was disregarded previously. We have shown that it must be determined dynamically by all the contributing factors: the solid and fluid fractions, their velocities, and the drifts. The same applies to the fluid erosion velocity $u_f^b$. This has huge implications for the erosion-induced momentum productions, which are dealt with in Section 4.2. As momentum productions play a decisive role in the dynamics, mobility, destructive power, run-out and deposition morphology of mass flows (explained in Section 7), we must describe the momentum productions mechanically correctly with respect to the erosion velocities. This sheds light on the importance of cross-mobility, a very crucial aspect which was not realized before and is considered here for the first time.
\\[3mm]
Figure \ref{Fig_5} shows the (strong) dominance of the cross-phase contributions over the phase contributions in the solid and fluid erosion velocities. The cross-phase contributions are considered here for the first time, revealing that they are essential for correctly describing erosive mass transports.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{ErosionVelocitySolid.eps}
\includegraphics[width=9cm]{ErosionVelocityFluid.eps}
\end{center}
\caption[]{Contributions of different components to the solid erosion velocity $u_s^b$ (left) and the fluid erosion velocity $u_f^b$ (right) in (\ref{Eqn_1NNN}), together with the phase- and cross-drifts and the solid and fluid volume fractions in the landslide.}
\label{Fig_5}
\end{figure}
\subsection{Solid and fluid erosion rates}
With the relations derived in Section 3.1, the total erosion rate $E$ in (\ref{Eqn_7}) is mechanically fully described. As the basal substrate, composed of the solid and fluid volume fractions $\alpha_s^b$ and $\alpha_f^b$, is entrained by the flow at the rate $E$, the amounts $\alpha_s^b E$ and $\alpha_f^b E$ are respectively added to the solid and fluid components of the sliding mixture. Since $\alpha_s^b + \alpha_f^b =1$, this facilitates constructing the solid erosion rate $\left ( E_s\right)$ and the fluid erosion rate $\left ( E_f\right)$ by mechanically splitting the total erosion rate into the solid and fluid erosion rates $\left (E = E_s + E_f\right)$ as:
\begin{equation}
E_s = \alpha_s^b E,\,\,\, E_f = \alpha_f^b E.
\label{Eqn_11}
\end{equation}
For this reason, we call $E$ in (\ref{Eqn_7}) the unified erosion rate for mixture mass flows. This completes the derivation of the unified mechanical erosion rate models for the solid and fluid components in two-phase mass flows. The physical parameters and dynamical variables involved in the erosion rates are explained in Section 5.4.
\section{Momentum productions for solid and fluid phases}
When the erosion-induced (rates of) produced solid and fluid masses $E_s$ and $E_f$ are combined with the erosion velocities, the (rates of) momentum productions are obtained. There are two possibilities. Either we consider the total erosion velocity $u^b$ and construct the produced solid and fluid momenta as $u^b E_s$ and $u^b E_f$, or we construct the produced solid and fluid momenta $u_s^b E_s$ and $u_f^b E_f$ by utilizing the solid and fluid erosion velocities $u_s^b$ and $u_f^b$, respectively. The first choice is appropriate if, at the time of erosion, both the solid particle and the fluid molecule from the bed move with similar velocities. This is relatively simple. However, if the eroded solid particle and fluid molecule move with substantially different velocities, then the second choice is preferable, as it better describes the momentum productions. Below, we consider both of them as two alternative mechanical methods for the erosion-induced (rates of) momentum productions.
\subsection{In terms of the total erosion velocity}
First, we construct the momentum productions in terms of the down-slope ($x$-directional) component $u^b$ of the total velocity ${\bf u}^b$ of the eroded material, where ${\bf u}^b = \left (u^b, v^b \right)$. With the solid and fluid erosion rates presented in (\ref{Eqn_11}), the solid and fluid momentum productions in the down-slope direction, $\mathcal M_{x_s}$ and $\mathcal M_{x_f}$, that enter the solid and the fluid momentum equations (Section 6, equations (\ref{Model_Final})) are $u^b E_s$ and $u^b E_f$, respectively:
\begin{equation}
\mathcal M_{x_s} = u^b E_s = \lambda^b u^m E_s = \lambda^b u^m\alpha_s^b E = \lambda^b\alpha_s^b u^m E,
\label{Eqn_14u}
\end{equation}
\begin{equation}
\mathcal M_{x_f} = u^b E_f = \lambda^b u^m E_f= \lambda^b u^m\alpha_f^b E = \lambda^b \alpha_f^b u^m E,
\label{Eqn_15u}
\end{equation}
where the erosion drift relation $u^b = \lambda^b u^m$ has been employed from (\ref{Eqn_5}). These momentum productions explicitly depend on four aspects of the flow: ($i$) the erosion drift $\left (\lambda^b \right)$ characterizing the erosion velocity, ($ii$) the volume fractions of solid and fluid in the erodible bed $\left (\alpha^b_{s}, \alpha^b_{f}\right)$, ($iii$) the flow velocity $\left(u^m\right)$, and ($iv$) the total erosion rate of the system $(E)$.
\\[3mm]
Similarly, the solid and fluid momentum productions $\mathcal M_{y_s}$ and $\mathcal M_{y_f}$ in the cross-slope ($y$) direction can be written, respectively, by consistently replacing $u$ by $v$ and $x$ by $y$ in (\ref{Eqn_14u})-(\ref{Eqn_15u}):
\begin{equation}
\mathcal M_{y_s} = v^b E_s = \lambda^b v^m E_s = \lambda^b v^m\alpha_s^b E = \lambda^b\alpha_s^b v^m E,
\label{Eqn_14v}
\end{equation}
\begin{equation}
\mathcal M_{y_f} = v^b E_f = \lambda^b v^m E_f= \lambda^b v^m\alpha_f^b E = \lambda^b \alpha_f^b v^m E.
\label{Eqn_15v}
\end{equation}
As in (\ref{Eqn_14u})-(\ref{Eqn_15u}), these momentum productions $\mathcal M_{y_s}$ and $\mathcal M_{y_f}$ analogously depend on the four aspects characterizing the flow mechanical properties.
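A minimal sketch assembling (\ref{Eqn_14u})-(\ref{Eqn_15u}) together with the split (\ref{Eqn_11}) is given below; the values of $\lambda^b$, $\alpha_s^b$, $u^m$ and $E$ are illustrative assumptions (the drift taken from the sketch of Section 3.1.3).
\begin{verbatim}
# Momentum productions via the total erosion velocity (Eqn_14u)-(Eqn_15u):
# M_xs = lambda^b * alpha_s^b * u^m * E,  M_xf = lambda^b * alpha_f^b * u^m * E.
lam_b = 0.58        # total erosion drift (e.g., from the drift sketch above)
alpha_s_b = 0.35    # bed solid fraction (assumed)
u_m = 28.0          # mixture flow velocity (m/s), assumed
E = 0.25            # total erosion rate (m/s)

E_s = alpha_s_b * E             # solid erosion rate (Eqn_11)
E_f = (1.0 - alpha_s_b) * E     # fluid erosion rate (Eqn_11)
M_xs = lam_b * u_m * E_s        # solid momentum production
M_xf = lam_b * u_m * E_f        # fluid momentum production
print(M_xs, M_xf, M_xs + M_xf)  # total equals lambda^b * u^m * E
\end{verbatim}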
\subsection{In terms of the solid and fluid erosion velocities}
Next, we construct the $x$-directional solid and fluid momentum productions in terms of the solid and fluid erosion velocities. With the velocities of the eroded solid particles and fluid molecules, $u_s^b$ and $u_f^b$ from (\ref{Eqn_1NNN}), the solid and fluid momentum productions $\mathcal M_{x_s}$ and $\mathcal M_{x_f}$ that enter the $x$-directional solid and fluid momentum equations (Section 6) are $u_s^b E_s$ and $u_f^b E_f$, respectively. The momentum productions then take the forms:
\begin{equation}
\mathcal M_{x_s} = u_s^b E_s = \left[\alpha_s^m \lambda_{ss}^b u_s^m + \alpha_f^m \lambda_{fs}^b u_f^m\right] E_s = \left[\alpha_s^m \lambda_{ss}^b u_s^m + \alpha_f^m \lambda_{fs}^b u_f^m\right] \alpha_s^b\, E,
\label{Eqn_12u}
\end{equation}
\begin{equation}
\mathcal M_{x_f} = u_f^b E_f = \left[ \alpha_s^m \lambda_{sf}^b u_s^m + \alpha_f^m \lambda_{ff}^b u_f^m\right] E_f = \left[ \alpha_s^m \lambda_{sf}^b u_s^m + \alpha_f^m \lambda_{ff}^b u_f^m\right] \alpha_f^b\, E.
\label{Eqn_13u}
\end{equation}
It is important to note that these momentum productions explicitly depend on five aspects of the flow. These are: ($i$) the erosion drifts $\left (\lambda^b_{ss}, \lambda^b_{fs}; \lambda^b_{sf}, \lambda^b_{ff}\right)$ characterizing the erosion velocities, ($ii$) the volume fractions of solid and fluid in the landslide $\left (\alpha^m_{s}, \alpha^m_{f}\right)$, ($iii$) the volume fractions of solid and fluid in the erodible bed $\left (\alpha^b_{s}, \alpha^b_{f}\right)$, ($iv$) the $x$-directional solid and fluid velocities in the flow $\left(u^m_{s}, u^m_{f}\right)$, and ($v$) the total erosion rate of the system $(E)$. Note, however, that $\lambda^b_{sf}$ and $\lambda^b_{ff}$ are already contained in $E$. Moreover, the appearance of $\left (\alpha^m_{s}, \alpha^m_{f}\right)$ in (\ref{Eqn_12u})-(\ref{Eqn_13u}) is explicit, which was not the case in (\ref{Eqn_14u})-(\ref{Eqn_15u}). So, these momentum productions are broader than those in (\ref{Eqn_14u})-(\ref{Eqn_15u}).
\\[3mm]
By neglecting the solid-fluid and fluid-solid interactions $\left (\lambda_{sf}^b = 0, \lambda_{fs}^b = 0\right)$, and considering only the solid-solid and fluid-fluid contacts across the erosion interface (so that the factors $\alpha^m_s, \alpha^m_f$ do not appear, and $\lambda_{ss}^b, \lambda_{ff}^b$ become $\lambda_{s}^b, \lambda_{f}^b$), (\ref{Eqn_12u})-(\ref{Eqn_13u}) reduce to the solid and fluid momentum productions in Pudasaini and Fischer (2020a), which, however, are largely incomplete.
\\[3mm]
The major role the erosion velocities play is in constituting the momentum productions, which, in turn, rule the entire dynamics of erosive mass flows. Field measurements have shown that the erosion rate $E$ can vary in the range 0.002--0.8\,m\,s$^{-1}$, but can also exceed 1.0\,m\,s$^{-1}$ (Berger et al., 2011; Iverson et al., 2011; McCoy et al., 2012). So, for demonstrative purposes, we take a plausible value, $E = 0.25$\,m\,s$^{-1}$. Figure \ref{Fig_6} displays the complete net momentum production for solid, $2 \mathcal M_{x_s}$, which includes both contributions to the solid erosion velocity (mobilization by the solid and by the fluid from the landslide), and the incomplete net momentum production for solid, $2\mathcal M_{x_{ss}}$, which only includes the contribution from mobilization by the solid from the landslide. A similar analysis holds for the fluid net momentum productions.
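The difference can be quantified with a short sketch comparing the complete and incomplete net solid momentum productions after (\ref{Eqn_12u}); the drift values are taken from the earlier drift sketch and all other values are illustrative assumptions.
\begin{verbatim}
# Complete vs. incomplete net solid momentum production, after (Eqn_12u):
# complete:   2*M_xs  = 2*[a_s^m*lam_ss*u_s^m + a_f^m*lam_fs*u_f^m]*a_s^b*E
# incomplete: 2*M_xss = 2*[a_s^m*lam_ss*u_s^m]*a_s^b*E   (cross term dropped)
a_s_m, a_f_m = 0.35, 0.65
lam_ss, lam_fs = 0.63, 0.56     # drifts (e.g., from the drift sketch, assumed)
u_s_m, u_f_m = 25.0, 30.0       # phase flow velocities (m/s)
a_s_b, E = 0.35, 0.25           # bed solid fraction, erosion rate (m/s)

M_xs  = (a_s_m * lam_ss * u_s_m + a_f_m * lam_fs * u_f_m) * a_s_b * E
M_xss = (a_s_m * lam_ss * u_s_m) * a_s_b * E
print(2 * M_xs, 2 * M_xss)      # the cross-phase term dominates here
\end{verbatim}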
These results (Fig. \ref{Fig_6}) reveal the need to include both the phase- and cross-phase momentum productions, as the differences are substantial. This implies that the erosion-induced net momentum productions can be close to the gravity loads on the landslide. So, erosive mass transports must include the complete descriptions of the erosion velocities and their full involvement in the net momentum productions.
\\[3mm]
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{NetMomentumProduction_Solid_a.eps}
\includegraphics[width=9cm]{NetMomentumProduction_Fluid_a.eps}
\end{center}
\caption[]{Left: the net momentum production for solid, including both the phase- and cross-phase erosion velocity $\left (2 \mathcal M_{x_s}\right)$, and neglecting the cross-phase erosion velocity but only including the solid-solid phase erosion velocity $\left (2\mathcal M_{x_{ss}}\right)$, as given by (\ref{Eqn_12u}). Right: the net momentum production for fluid, including both the phase- and cross-phase erosion velocity $\left(2 \mathcal M_{x_f}\right)$, and neglecting the cross-phase erosion velocity but only including the fluid-fluid phase erosion velocity $\left(2\mathcal M_{x_{ff}}\right)$, as given by (\ref{Eqn_13u}).}
\label{Fig_6}
\end{figure}
Although in general less likely, in special situations the difference between the solid and fluid phase velocities in the landslide may be negligible, i.e., $u_s^m \approx u_f^m =: u^m$. Even then, the solid and fluid in the landslide may mobilize the solid in the bed differently, so that $\lambda_{ss}^b \neq \lambda_{fs}^b$. Only if the solid and fluid in the landslide mobilize the solid in the bed similarly do we have $\lambda_{ss}^b \approx \lambda_{fs}^b =: \lambda^b$. In this situation, (\ref{Eqn_12u}) becomes $\left[\alpha_s^m + \alpha_f^m\right]\lambda^b u^m \alpha_s^b E = \lambda^b\alpha_s^b u^m E$ (since $\alpha_s^m + \alpha_f^m = 1$), which is (\ref{Eqn_14u}). A similar analysis applies to (\ref{Eqn_13u}). This demonstrates the wide spectrum and the physical consistency of the momentum productions in (\ref{Eqn_12u})-(\ref{Eqn_13u}) associated with the solid and fluid erosion velocities.
\\[3mm]
Similarly, the $y$-directional momentum productions (in terms of the corresponding solid and fluid erosion velocities $v^b_s, v^b_f$) are obtained by consistently replacing $u$ by $v$ and $x$ by $y$ in (\ref{Eqn_12u})-(\ref{Eqn_13u}):
\begin{equation}
\mathcal M_{y_s} = v_s^b E_s = \left[\alpha_s^m \lambda_{ss}^b v_s^m + \alpha_f^m \lambda_{fs}^b v_f^m\right] E_s = \left[\alpha_s^m \lambda_{ss}^b v_s^m + \alpha_f^m \lambda_{fs}^b v_f^m\right] \alpha_s^b\, E,
\label{Eqn_12v}
\end{equation}
\begin{equation}
\mathcal M_{y_f} = v_f^b E_f = \left[ \alpha_s^m \lambda_{sf}^b v_s^m + \alpha_f^m \lambda_{ff}^b v_f^m\right] E_f = \left[ \alpha_s^m \lambda_{sf}^b v_s^m + \alpha_f^m \lambda_{ff}^b v_f^m\right] \alpha_f^b\, E.
\label{Eqn_13v}
\end{equation}
As in (\ref{Eqn_12u})-(\ref{Eqn_13u}), these momentum productions also explicitly depend on the five aspects of the flow: ($i$) the erosion drifts $\left (\lambda^b_{ss}, \lambda^b_{fs}; \lambda^b_{sf}, \lambda^b_{ff}\right)$ characterizing the erosion velocities, ($ii$) the volume fractions of solid and fluid in the landslide $\left (\alpha^m_{s}, \alpha^m_{f}\right)$, ($iii$) the volume fractions of solid and fluid in the erodible bed $\left (\alpha^b_{s}, \alpha^b_{f}\right)$, ($iv$) the $y$-directional solid and fluid velocities in the flow $\left(v^m_{s}, v^m_{f}\right)$, and ($v$) the total erosion rate of the system $(E)$. Again, $\lambda^b_{sf}$ and $\lambda^b_{ff}$ are already contained in $E$.
In total, there are five erosion drifts in (\ref{Eqn_12u})-(\ref{Eqn_13v}), including those in $E$, that are needed in application, namely, $\lambda^b; \lambda_{ss}^b, \lambda_{fs}^b, \lambda_{sf}^b, \lambda_{ff}^b$, as explained in Section 3.1.3; all of these are explicitly known via the bed inertial numbers $N^I, N_{ss}^I, N_{fs}^I, N_{sf}^I, N_{ff}^I$.
\\[3mm]
By comparing (\ref{Eqn_14u})-(\ref{Eqn_15u}) with (\ref{Eqn_12u})-(\ref{Eqn_13u}), it is interesting to observe that the complex phase erosion velocities $\left[\alpha_s^m \lambda_{ss}^b u_s^m + \alpha_f^m \lambda_{fs}^b u_f^m\right]$ and $\left[ \alpha_s^m \lambda_{sf}^b u_s^m + \alpha_f^m \lambda_{ff}^b u_f^m\right]$ are intrinsically related to the structurally simple total erosion velocity $\lambda^b u^m$. A similar relation holds between (\ref{Eqn_14v})-(\ref{Eqn_15v}) and (\ref{Eqn_12v})-(\ref{Eqn_13v}). The momentum productions (\ref{Eqn_14u})-(\ref{Eqn_15v}) are structurally simpler than (\ref{Eqn_12u})-(\ref{Eqn_13v}). However, whether (\ref{Eqn_14u})-(\ref{Eqn_15v}) or (\ref{Eqn_12u})-(\ref{Eqn_13v}) are more appropriate should be verified by applications to erosive laboratory flows and real events.
\subsection{The erosion matrix}
The structure of the erosion velocities in (\ref{Eqn_12u})-(\ref{Eqn_13u}) can be written in compact matrix-vector form:
\begin{equation}
{\boldsymbol{S}}_{_E}^b {\boldsymbol{A}}^m {\bf u}^m = {\bf u}^b,
\label{Eqn_SE}
\end{equation}
where
\begin{equation}
{\boldsymbol{S}}_{_E}^b =
\begin{bmatrix}
\lambda_{ss}^b & \lambda_{fs}^b \\[2mm]
\lambda_{sf}^b & \lambda_{ff}^b
\end{bmatrix}
, \,\,
\boldsymbol{A}^m =
\begin{bmatrix}
\alpha_s^m & 0 \\[2mm]
0 & \alpha_f^m
\end{bmatrix}
, \,\,
{\bf u}^m =
\begin{bmatrix}
u_s^m \\[2mm]
u_f^m
\end{bmatrix}
, \,\,
{\bf u}^b =
\begin{bmatrix}
u_s^b \\[2mm]
u_f^b
\end{bmatrix},
\label{Eqn_SE1}
\end{equation}
are, respectively, the matrix of the (phase-phase and cross-phase) erosion drifts, the diagonal matrix of the volume fractions in the landslide, the vector of the flow velocities in the landslide, and the vector of the erosion velocities at the bed. We call ${\boldsymbol{S}}_{_E}^b$ the (system) erosion matrix; it is introduced here for the first time. Note that ${\boldsymbol{S}}_{_E}^b$ and $\boldsymbol{ A}^m$ can be combined to obtain another matrix, ${\boldsymbol{ S}}_{_{E_e}}^b = {\boldsymbol{ S}}_{_E}^b \boldsymbol{ A}^m$, which we call the effective erosion matrix. The erosion matrix has some interesting and important properties. For vanishing off-diagonal elements, ${\boldsymbol{ S}}_{_E}^b$ degenerates to the simple erosion velocities in Pudasaini and Fischer (2020a), without the cross-phase interactions, which is incomplete. The determinant of ${\boldsymbol{ S}}_{_E}^b$ shows that ${\boldsymbol{ S}}_{_E}^b$ is primarily ruled by the solid and fluid masses in the flow, $\left (\rho_s^m\alpha_s^m\right)$ and $\left (\rho_f^m\alpha_f^m\right)$, appearing in its numerator. This means that erosion is essentially driven by the flow dynamics, which is readily understood. However, the intensity of erosion depends on the strength of the resisting bed material, as the determinant contains the factor $1/\left[\left (\rho_s^m\alpha_s^m + \rho_s^b\alpha_s^b\right) \left (\rho_f^m\alpha_f^m + \rho_f^b\alpha_f^b\right)\right] - 1/\left[\left (\rho_s^m\alpha_s^m + \rho_f^b\alpha_f^b\right) \left (\rho_f^m\alpha_f^m + \rho_s^b\alpha_s^b\right)\right]$.
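Indeed, with the depth-averaged closures of Section 3.1.3 (all landslide-side drifts set to unity), and writing $m_s = \rho_s^m\alpha_s^m$, $m_f = \rho_f^m\alpha_f^m$, $b_s = \rho_s^b\alpha_s^b$, $b_f = \rho_f^b\alpha_f^b$ for brevity, a direct computation gives
\begin{equation*}
\mbox{det}\, {\boldsymbol{S}}_{_E}^b = \lambda_{ss}^b\lambda_{ff}^b - \lambda_{fs}^b\lambda_{sf}^b
= \frac{m_s\, m_f \left ( m_s - m_f\right ) \left ( b_s - b_f\right )}
{\left ( m_s + b_s\right )\left ( m_f + b_f\right )\left ( m_s + b_f\right )\left ( m_f + b_s\right )}.
\end{equation*}
The numerator is ruled by the flow masses $m_s m_f$, and the determinant vanishes only if $m_s = m_f$ or $b_s = b_f$.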
Since, for mixture flows, $\left (\rho_s^m\alpha_s^m\right)$ and $\left (\rho_f^m\alpha_f^m\right)$ are finite positive values, and $\left (\rho_s^m\alpha_s^m - \rho_f^m\alpha_f^m\right)$ and $\left (\rho_s^b\alpha_s^b - \rho_f^b\alpha_f^b\right)$ deviate substantially from zero, ${\boldsymbol{ S}}_{_E}^b$ is non-singular and well defined, as is $\boldsymbol{ A}^m$. This means that the system (\ref{Eqn_SE}) is invertible, allowing one to reconstruct the flow velocities of the landslide from the knowledge of the erosion velocities, a novel perception in erosive mass transports. Moreover, this also tells us that the erosion intensity increases as the solid and fluid phase masses in the flow and in the bed deviate from one another. Hence, ${\boldsymbol{ S}}_{_E}^b$ characterizes the erosion mechanism of the system. The erosion matrix formally maps the flow dynamics (the volume fractions and velocities of the flow) to the erosion velocities at the bed. This shows that, as the erosion velocity ${\bf u}^b$ governs the system, the process of erosion is jointly determined by the erosion matrix, the volume fractions of the flow, and the flow velocities. So, we have presented the first systematic, compact and complete description of the dynamics and mechanics of the erosion process.
\section{Essence of unified mechanical erosion rates and\\ net momentum productions}
In form, the erosion rate models developed here are similar to those already presented in Pudasaini and Fischer (2020a), which offers the basic mechanical foundation. We have utilized three important mechanical aspects from Pudasaini and Fischer (2020a) in constructing the new erosion rate models: the jump in the shear stresses and the momentum fluxes across the erosion-interface, the shear velocity, and the erosion drifts. However, there are substantial differences between the erosion rate models in Pudasaini and Fischer (2020a) and the ones developed here. Moreover, we have presented more extensive and complete multi-phase momentum productions than those in Pudasaini and Fischer (2020a). We discuss several crucial properties of the new modelling framework.
\subsection{Novelty and significance of the new approaches for shear stresses and interactions}
The general situation of the stress jump given by (\ref{Eqn_3N}), and the eight different scenarios (\ref{Eqn_4N})-(\ref{Eqn_9N}), demonstrate its richness, its spectrum of applicability, and the urgency of the new unified, consistent and comprehensive multi-phase erosion model presented here. The reductions in Section 2.5 also signify that the interacting stress structures in (\ref{Eqn_1N}) and (\ref{Eqn_2N}) are mechanically valid. Many of these aspects could not be described by any of the existing mass flow models. Professionals and engineers will find the model structure (\ref{Eqn_3N}) and its reductions (\ref{Eqn_4N})-(\ref{Eqn_9N}) intuitive and useful in solving applied mass flow simulations associated with erosive events.
\\[3mm]
In the Pudasaini and Fischer (2020a) two-phase mechanical erosion model, the fluid-solid ($fs$) and solid-fluid ($sf$) interaction shear stresses (\ref{Eqn_2NN}) do not exist as cross-shear stresses; rather, they were directly used as $\tau_{ff}^m$ and $\tau_{ff}^b$ with the classical Chezy-frictions, which are physically inconsistent for the true cross-phase interactions. Moreover, in Pudasaini and Fischer (2020a), the fluid-fluid ($ff$) shear stresses (\ref{Eqn_3NN}) do not exist at all.
We have made a fundamental advancement in modelling multi-phase erosive mass transport, with entirely novel and innovative mechanical ideas for the shear stresses, covering all possible interactions and providing natural shear stress descriptions for these interactions. There are several important aspects of the new unified modelling of multi-phase erosive mass transport.
\\[3mm]
{\bf I.} As shown in (\ref{Eqn_1N}) and (\ref{Eqn_2N}), there are four fundamentally different types of interactions between the solid and fluid in the landslide and the solid and fluid in the bed material. These are: the solid-solid interaction $\alpha_f^b\left [ \tau_{ss}^m - \tau_{ss}^b \right ]$, which means that the solid in the landslide applies shear stress to the solid in the bed, which is resisted by the solid material of the bed. Similar situations apply to the other interactions: the solid-fluid $\alpha_s^b\left [ \tau_{sf}^m - \tau_{sf}^b \right ]$, fluid-solid $\alpha_s^b\left [ \tau_{fs}^m - \tau_{fs}^b \right ]$, and fluid-fluid $\alpha_f^b\left [ \tau_{ff}^m - \tau_{ff}^b \right ]$ interactions. The only existing two-phase erosion model is that of Pudasaini and Fischer (2020a). However, they only consider the solid-solid $\left [ \tau_{ss}^m - \tau_{ss}^b \right ] = \left [ \tau_{s}^m - \tau_{s}^b \right ]$ and fluid-fluid $\left [ \tau_{ff}^m - \tau_{ff}^b \right ] = \left [ \tau_{f}^m - \tau_{f}^b \right ]$ interactions, respectively, for which only the single suffixes $s$ and $f$ are used, as there are no cross-phase interactions. These are just the direct solid-solid and fluid-fluid interactions. Moreover, therein, fluid-fluid interactions are modelled with the classical Chezy-type frictions, which, as stated in Section 2.4.3, are mechanically inconsistent.
\\[3mm]
{\bf II.} Here, for the first time, we introduced the mechanically important inter-phase solid-fluid $\alpha_s^b\left [ \tau_{sf}^m - \tau_{sf}^b \right ]$ (second element on the left-hand side of (\ref{Eqn_1N})) and fluid-solid $\alpha_s^b\left [ \tau_{fs}^m - \tau_{fs}^b \right ]$ (first element on the left-hand side of (\ref{Eqn_2N})) interactions across the erosion-interface.
\\[3mm]
{\bf III.} The shear applied by the fluid from the flow to the solid at the bed, $\tau_{fs}^m$ in (\ref{Eqn_2NN}), follows the usual Chezy-type friction with the fluid velocity at the base of the landslide. However, surprisingly, we revealed that the shear resistance by the fluid at the bed against the applied shear from the solid of the flow, $\tau_{sf}^b$ in (\ref{Eqn_2NN}), does not directly follow the usual Chezy-type friction with the fluid velocity at the bed; rather, it is associated with the solid velocity at the base of the landslide. This is so because this type of basal shear stress is induced by the shear load of the solid particles from the landslide applied to the fluid at the bed. This also led to a challenge in constructing the cross-erosion drift $\lambda_{sf}^b$, which, as discussed in Section 2.4.2, required a different type of erosion drift relationship between the fluid in the bed and the solid in the landslide. As revealed in Section 3.1.3, this is a novel understanding.
\\[3mm]
{\bf IV.} A further astonishing fact discovered here concerns the fluid-fluid $\alpha_f^b\left [ \tau_{ff}^m - \tau_{ff}^b \right ]$ interaction.
One might simply think to directly apply the classical Chezy-type relation as in Pudasaini and Fischer (2020a) (which, as discussed above, is not fully consistent), or to simply use viscous fluid shear stresses for this interaction. However, the situation of multi-phase erosive mass transport is different and requires physically meaningful, complex mechanical interfacial shear stresses. The point is that these are fluid-fluid interactions, so Chezy-type friction is not applicable. Moreover, the fluids in the landslide and in the basal material are not free fluids that can directly interact with each other and follow the simple rule of viscous shear stress. These fluids are contained within the matrices of the solid particles in the landslide and in the bed material. So, in addition to its own viscosity and velocity, the shear stress of the fluid in the landslide is influenced by the geometrical and mechanical properties of the basal material, particularly the permeability of the bed. The same applies to the shear resistance of the fluid in the erodible bed. As shown in (\ref{Eqn_3NN}), this required novel descriptions of the fluid-fluid interactions at the interface.
\\[3mm]
These four aspects clearly manifest the physical novelty and significance of the modelling approach for the shear stresses and interactions between different materials across the landslide-bed interface, presented here for complex multi-phase erosive mass transports, for which such descriptions were hitherto lacking.
\subsection{Advantages of the unified mechanical erosion rate model}
One of the main purposes of this contribution is to construct unified and physically consistent mechanical erosion rates for multi-phase mass flows. For several reasons, the novel unified multi-phase mechanical erosion models (\ref{Eqn_7}) or (\ref{Eqn_11}) are required. The total basal erosion rate $E$ in (\ref{Eqn_7}) is the consistent and exact sum of the solid and fluid erosion rates $E_s$ and $E_f$ in (\ref{Eqn_11}), i.e., $E = \alpha_s^b E + \alpha_f^b E$ (since $\alpha_s^b + \alpha_f^b = 1$). This is crucial. We have presented the first such mechanical model for mixture mass flows. These erosion rates inherently contain the solid and fluid fractions of the bed material, $\alpha_s^b$ and $\alpha_f^b$, respectively. The solid and fluid erosion rates $E_s = \alpha_s^b E$ and $E_f = \alpha_f^b E$ consistently take the solid and fluid fractions $\alpha_s^b$ and $\alpha_f^b$ from the bed and accordingly supply them to the flow. Thus, from the total eroded material, the fractions $\alpha_s^b$ and $\alpha_f^b$ are persistently incorporated into the solid and fluid components of the moving material. This was not possible with the existing multi-phase erosion models (Pudasaini and Fischer, 2020a). It could only be achieved partially, but not consistently, by manually adjusting several parameters, including the solid and fluid erosion drifts, the Chezy friction coefficients, and the shear velocity factor. So, in the existing models, the sum of the solid and fluid erosion rates does not realistically correspond to the eroded material from the bed. Moreover, for fast flows, the fluid erosion rate can be unrealistically higher than the solid erosion rate, which poses a great problem for the consistent selection of the parameters in the erosion rates of the Pudasaini and Fischer (2020a) model. Thus, natural erosion rates that correspond to the events cannot be obtained from the solid and fluid erosion rates developed by Pudasaini and Fischer (2020a).
Here, we have addressed this outstanding problem in erosive mass transport, as required by practitioners, with a unified erosion rate model such that the sum of the derived solid and fluid erosion rates automatically satisfies this natural criterion without any adjustment.
\\[3mm]
The previous explicit erosion rate models can be applied to situations where both the flowing mixture and the base material include sufficient amounts of the solid and fluid phases. However, they fail in certain situations, e.g., if the flowing material is almost a fluid, in which case solid material from the bed cannot be entrained into the flow. The newly developed unified erosion rate model overcomes this limitation.
\\[3mm]
The new unified erosion rate models include all interactions between the solid and fluid phases in the landslide and the bed across the erosion interface. Each of these interactions is mechanically and dynamically important. However, not all of them were recognized previously, severely limiting the applicability of the existing erosion models. We considered all phase-phase and cross-phase interactions across the flow-bed interface. We removed several shortcomings inherent in the existing erosion rate modelling (Pudasaini and Fischer, 2020a), where the fluid-fluid interactions were not described in a mechanically appropriate way. Moreover, the true solid-fluid and fluid-solid interactions were ignored previously; these complex interactions were beyond the previous understanding. We solved these problems by presenting novel mechanical shear stress models for the solid-fluid and fluid-fluid interactions.
\\[3mm]
Surprisingly, the erosion velocities for both the solid and the fluid turned out to have extended, composite structures. For the first time, we revealed the complex but real mechanical situation in which the solid particle at the bed is pushed and then mobilized by both the solid and the fluid in the flow. The same applies to the erosion velocity of the fluid molecule at the bed. These realizations are crucial; with them, we presented mechanically complex, extended erosion velocity (Section 2.3) and mobility (Section 7) models incorporating all the interactions between the particles and fluids across the erosion-interface. We constructed novel, unified and complete mechanical mass and momentum production rates (Section 3.4 and Section 4) and embedded them into the dynamical model equations (mass and momentum balances, Section 6). This overcomes the severe limitations of the existing erosion models, and opens a wide spectrum of possibilities for real applications in complex mass flow simulations.
\subsection{Wide applicability of the new model}
One of the most important aspects of the new erosion rate model presented here is that it can be applied to any flow situation and bed morphology, irrespective of the number of contributing components in the flow and in the bed.
As discussed extensively in Section 2.4 and Section 2.5, the new erosion rate models can be applied to nine different scenarios: ($i$) a sliding debris mixture entraining a different debris mixture from the bed, ($ii$) a dry landslide entraining a dry bed, ($iii$) a fluid flow entraining fluid material from the bed - river, lake or reservoir fluid, ($iv$) a dry landslide entraining river, lake or reservoir fluid, ($v$) a flood entraining soil, sand or gravel from the bed, ($vi$) a debris flow entraining soil, sand or gravel from the bed, ($vii$) a debris flow entraining fluid material from the bed - river, lake or reservoir fluid, ($viii$) a dry landslide entraining soil, sand or gravel from the bed, and ($ix$) a flood entraining a debris mixture from the bed. Such a wide spectrum of erosion modelling is presented here for the first time with the unified modelling approach. The new model is fully mechanical. So, the model presents a great opportunity for solving different applied, technical, engineering and geomorphological problems.
\subsection{Parameters in erosion rates and net momentum productions}
{\bf Physical parameters and dynamical variables involved in erosion rates:} Multi-phase mass flows and the associated erosion phenomena are characterized by distinct physical and mechanical parameters and flow dynamical variables. The erosion rate models (\ref{Eqn_11}) contain such parameters and variables from the flowing mixture and the erodible basal substrate, and also induced quantities such as the proportionality factor in the shear velocity and the erosion drifts, which are expressed in terms of the stated parameters and variables. The physical and mechanical parameters are the densities, friction coefficients and Chezy-type friction coefficients. The dynamical variables are the volume fractions and the flow depth. Additional parameters are the local slope angle and the typical flow depth ($H$); depending on the choice of the basal shear stress, the effective pore pressure ratio may also enter the list of parameters. All these parameters are either measurable, or can be obtained from the literature, or are obtained from the mechanical closures derived above. The flow dynamical quantities are obtained directly from the model simulations. The full set of dynamical model equations is presented in Section 6. So, the mechanical erosion rate models in (\ref{Eqn_11}) are well defined and well constrained.
\\[3mm]
In general, there are six quantities that need to be closed in (\ref{Eqn_7}). These are $\nu; \lambda^m_l, \lambda^b$; $\lambda_{f_l}^m$; $\lambda_{ff}^b$ and $\lambda_{sf}^b$, which are the quantities and parameters related to the shear velocity and the erosion drifts. These quantities will be needed only if the solid and fluid erosion rates are derived explicitly for the solid and fluid phases. However, if we use (\ref{Eqn_4_4}) instead of (\ref{Eqn_3N}), then we need to parameterize $\Upsilon^b$ instead of $\lambda_{ff}^b$ and $\lambda_{sf}^b$. As shown in Section 3.1.3, except for $\Upsilon^b$, all these quantities can be closed or parameterized relatively easily. However, note that a legitimate mechanical closure for $\Upsilon^b$ is still an open question in mass flows (Iverson, 2012; Ouyang et al., 2015). So, the real multi-phase interactive shear structure (\ref{Eqn_3N}) is mechanically superior to, and dynamically more flexible than, the effectively single-phase reduced structure in (\ref{Eqn_4_4}) for the basal shear resistance.
Also, $\alpha_{_{BJ}}^m$, $\alpha_{_{BJ}}^b$; $C^m_f$, $C^b_f$ and $H$ are other physical parameters in $E$. However, $\alpha_{_{BJ}}^m$, $\alpha_{_{BJ}}^b$ can be modelled as explained in (\ref{Eqn_4NN}). Similarly, the values of $C^m_f$, $C^b_f$ can be obtained from the literature (Fraccarollo and Capart, 2002; Pudasaini and Fischer, 2020a). Moreover, the value of $H$ can be selected as a typical flow depth for a given event. So, the values of the parameters $\nu$; $C_f^m, C_f^b$; $H$, $\alpha_{_{BJ}}^m$, $\alpha_{_{BJ}}^b$, and $\mathcal K^m, \mathcal K^b$ can either be found in the literature or be closed as discussed above. Only $\mathcal K^m$ and $\mathcal K^b$ are new modelling parameters in the present model development. Otherwise, all the parameters in the erosion rates $\left (E_s, E_f\right )$ and the momentum productions $(\mathcal M)$ are already in Pudasaini and Fischer (2020a). However, the new models are mechanically much stronger, more consistent and more comprehensive than the existing ones.
\\[3mm]
In total, there are five erosion drifts in the momentum productions (\ref{Eqn_14u})-(\ref{Eqn_15v}), including those in $E$, namely, $\lambda^b, \lambda_{l}^m, \lambda_{f_{l}}^m, \lambda_{sf}^b, \lambda_{ff}^b$. In application, we only need $\lambda^b, \lambda_{sf}^b, \lambda_{ff}^b$, as the variation of the flow velocities through the depth can be neglected. However, if (\ref{Eqn_6N}) and (\ref{Eqn_4_4}) are used instead of (\ref{Eqn_3N}), then we only need $\lambda^b$ and $\Upsilon^b$, further reducing the erosion drifts by two, but, as explained before, without a clear mechanical closure for $\Upsilon^b$. If we use the momentum productions (\ref{Eqn_12u})-(\ref{Eqn_13v}), we effectively have five erosion drifts: $\lambda^b, \lambda_{sf}^b, \lambda_{ff}^b; \lambda_{fs}^b, \lambda_{ss}^b$, where $\lambda_{fs}^b, \lambda_{ss}^b$ emerge due to the composite erosion velocities. As explained earlier, in any situation, all these erosion drifts are mechanically closed.
\\[3mm]
We mention that one cannot expect well-defined mechanical models without the involvement of a number of intrinsic physical parameters characterizing the dynamics and the complexity of natural events. However, a parameter-fitting approach based on empirical models cannot be expected to meaningfully represent the complex natural phenomenon of erosive mass transport. With mechanically explained models, such as the ones developed here, one understands a wide spectrum of natural processes taking place in a deterministic way, which is far beyond the reach of empirical models.
\subsection{Recovering the existing model}
In the limits, we recover the solid and fluid erosion rates in Pudasaini and Fischer (2020a) by respectively neglecting the fluid contribution and the solid contribution from the unified erosion rate model developed in (\ref{Eqn_7}). However, the erosion rates in Pudasaini and Fischer (2020a) could not be directly generalized to obtain the unified erosion rate (\ref{Eqn_7}). This was made possible here by considering total stresses, mixture densities, the shear velocity and the erosion drift equations for the mixtures, and other aspects, as discussed in Section 2--Section 4.
\subsection{Extension to multi-phase erosion rates}
From the derivations and structures of the models developed above, it is straightforward to observe that our method can be directly extended to derive the erosion rates for multi-phase mass flows, consisting of any number of constituent solid particles and viscous fluid phases in the flow material and the bed substrate. A particular example is three-phase mass flows consisting of coarse solid, fine solid and fluid phases (Pudasaini and Mergili, 2019). All the properties of the multi-phase erosion rate (mass production) models explained above also apply to the multi-phase momentum productions.
\section{Full dynamical model with unified erosion rates and\\ net momentum productions}
In two-phase debris mixtures, the phases are characterized by different material properties. The fluid phase is characterized by its material density $\rho_{f}$, viscosity $\eta_{f}$ and isotropic stress distribution, whereas the solid phase is characterized by its material density $\rho_{s}$, the internal friction angle $\phi$, the basal friction angle $\delta$, an anisotropic stress distribution, and the lateral earth pressure coefficient $K$. The subscripts $s$ and $f$ represent the solid and the fluid phases, respectively, with the depth-averaged velocity components for fluid $\textbf{u}_{f}$ = ($u_{f}$, $v_{f}$) and for solid $\textbf{u}_{s}$ = ($u_{s}$, $v_{s}$) in the down-slope $(x)$ and the cross-slope $(y)$ directions. The total flow depth $h$ and the solid volume fraction $\alpha_s$ (similarly the fluid volume fraction $\alpha_f = 1- \alpha_s$) are functions of space and time. Note that, except for the mass and momentum production rates, all other variables and parameters in the mass and momentum balance equations and other related terms, including forces, are for the landslide mixture. However, in what follows, for notational convenience, the superscript $^m$ indicating the mixture quantities has been omitted.
\\[3mm]
The solid and fluid mass balance equations for the landslide (Pudasaini, 2012), including the mass productions (erosion rates), together with the evolution equation for the basal morphology, are given by
\begin{eqnarray}
\begin{array}{lll}
\displaystyle{\Pd{}{t}{\left( \alpha_s h\right)} + \frac{\partial}{\partial x}{\left( \alpha_s h u_s\right)} + \frac{\partial}{\partial y}{\left( \alpha_s h v_s\right)}=E_s}, \\[5mm]
\displaystyle{\Pd{}{t}{\left( \alpha_f h\right)} + \frac{\partial}{\partial x}{\left( \alpha_f h u_f\right)} + \frac{\partial}{\partial y}{\left( \alpha_f h v_f\right)}=E_f,} \\[5mm]
\displaystyle{\Pd{b}{t}= -E; \,\,\,\,\,\,\, E = E_s + E_f,}
\end{array}
\label{Model_Final_Mass}
\end{eqnarray}
where $b = b(x,y; t)$ is the basal topography that evolves in space $(x, y)$ and time $(t)$, $E_s$, $E_f$ are the solid and the fluid erosion-rates, and $E$ is the total erosion-rate, as given by (\ref{Eqn_11}) and (\ref{Eqn_7}), respectively. This model can be used for a partially or fully saturated erodible basal substrate, or for a substrate that is not erodible ($E = 0$). When the basal substrate is erodible, the solid fraction of $E$, i.e., $E_s$, enters the solid mass balance as the solid mass production. So does the fluid fraction of $E$, i.e., $E_f$, which enters the fluid mass balance as the fluid mass production.
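A deliberately simplified 1-D sketch of how the erosion sources enter the mass balances (\ref{Model_Final_Mass}) is given below; it is not the full model, merely an illustration with prescribed constant phase velocities, a fixed erosion rate, and a hypothetical first-order upwind discretization.
\begin{verbatim}
import numpy as np

# 1-D illustration: solid/fluid mass balances with erosion sources
# E_s = alpha_s^b * E, E_f = alpha_f^b * E, and bed evolution db/dt = -E.
nx, dx, dt = 200, 1.0, 0.01
x = np.arange(nx) * dx
hs = np.where(x < 50.0, 0.35 * 2.0, 0.0)   # alpha_s * h (solid "depth")
hf = np.where(x < 50.0, 0.65 * 2.0, 0.0)   # alpha_f * h (fluid "depth")
b = np.zeros(nx)                           # basal topography
us, uf = 25.0, 30.0                        # prescribed phase velocities (m/s)
alpha_s_b, E0 = 0.35, 0.25                 # bed solid fraction, erosion rate (m/s)

def div_upwind(q, u):
    """First-order upwind d(q u)/dx for u > 0 (periodic wrap at the ends)."""
    qu = q * u
    return (qu - np.roll(qu, 1)) / dx

for _ in range(100):
    flowing = (hs + hf) > 1e-6             # erode only beneath the moving mass
    E_s = np.where(flowing, alpha_s_b * E0, 0.0)
    E_f = np.where(flowing, (1.0 - alpha_s_b) * E0, 0.0)
    hs += dt * (-div_upwind(hs, us) + E_s)
    hf += dt * (-div_upwind(hf, uf) + E_f)
    b  -= dt * (E_s + E_f)                 # db/dt = -E
\end{verbatim}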
\\[3mm]
Similarly, the momentum conservation equations for the solid and fluid phases, in the down-slope ($x$) and cross-slope ($y$) directions, respectively, are:
\begin{eqnarray}
\begin{array}{lll}
\resizebox{.935\hsize}{!}{$\displaystyle{\Pd{}{t}\biggl [ \alpha_s h \left( u_s \!-\! \gamma \mathcal C\left( u_f\! -\!u_s \right) \right) \biggr ] \!+\!\Pd{}{x}\biggl [ \alpha_s h \left( u_s^2 \!-\! \gamma \mathcal C\left( u_f^2 \!-\!u_s^2 \right)\!+\! \beta_{x_s} \frac{h}{2}\right) \biggr ] \!+\!\Pd{}{y}\biggl[ \alpha_s h \left( u_sv_s \!-\! \gamma \mathcal C\left( u_fv_f \!-\!u_sv_s \right) \right) \biggr ]} \displaystyle{= h\mathcal S_{x_s} \!+\! 2 \mathcal M_{x_s}}$},\\[5mm]
\resizebox{.935\hsize}{!}{$\displaystyle{\Pd{}{t}\biggl [ \alpha_s h \left( v_s \!-\! \gamma \mathcal C\left( v_f \!-\!v_s \right) \right) \biggr ] \!+\!\Pd{}{x}\biggl [ \alpha_s h \left( u_sv_s \!-\! \gamma \mathcal C\left( u_fv_f \!-\!u_sv_s \right)\rb \biggr ] \!+\!\Pd{}{y}\left[ \alpha_s h \left( v_s^2 \!-\! \gamma \mathcal C\left( v_f^2 \!-\!v_s^2\right)\!+\! \beta_{y_s} \frac{h}{2} \right) \right ]} \displaystyle{= h\mathcal S_{y_s}\!+\! 2 \mathcal M_{y_s}}$},\\[5mm]
\resizebox{.935\hsize}{!}{$ \displaystyle{\Pd{}{t}\left [ \alpha_f h \left( u_f \!+\! \frac{\alpha_s }{\alpha_f}\mathcal C\left( u_f \!-\!u_s \right) \right) \right ] \!+\!\Pd{}{x}\left [ \alpha_f h \left( u_f^2 \!+\! \frac{\alpha_s }{\alpha_f}\mathcal C\left( u_f^2 \!-\!u_s^2 \right) \!+\! \beta_{x_f} \frac{h}{2}\right) \right ] \!+\!\Pd{}{y}\left[ \alpha_f h \left( u_fv_f \!+\! \frac{\alpha_s}{\alpha_f}\mathcal C\left( u_fv_f \!-\!u_sv_s \right) \right) \right ] = h\mathcal S_{x_f}\!+\! 2 \mathcal M_{x_f}}$},\\[5mm]
\resizebox{.935\hsize}{!}{$ \displaystyle{\Pd{}{t}\left [ \alpha_f h \left( v_f \!+\! \frac{\alpha_s }{\alpha_f}\mathcal C\left( v_f \!-\!v_s \right) \right) \right ] \! +\!\Pd{}{x}\left [ \alpha_f h \left( u_fv_f \!+\! \frac{\alpha_s }{\alpha_f}\mathcal C\left( u_fv_f \!-\!u_sv_s \right)\rb \right ] \!+\!\Pd{}{y}\left[ \alpha_f h \left( v_f^2 \!+\! \frac{\alpha_s}{\alpha_f}\mathcal C\left( v_f^2 \!-\!v_s^2 \right) \!+\! \beta_{y_f} \frac{h}{2}\right) \right ] = h\mathcal S_{y_f}\!+\! 2 \mathcal M_{y_f}}$},
\end{array}
\label{Model_Final}
\end{eqnarray}
where $\mathcal S$ are the source terms (discussed below), and the momentum productions $\mathcal M$ are given by (\ref{Eqn_14u})-(\ref{Eqn_15v}) or (\ref{Eqn_12u})-(\ref{Eqn_13v}), respectively.
\\[3mm]
These solid and fluid momentum equations are rigorously derived (Pudasaini, 2012) and include the solid and fluid momentum production terms, as modelled in Pudasaini and Krautblatter (2021), the second terms on the right-hand sides. Following Pudasaini and Krautblatter (2021), the momentum balance equations (\ref{Model_Final}) correctly include the erosion-induced change in inertia and the momentum production of the system via the terms $2 \mathcal M$, which are the net momentum productions. Importantly, our present approach provides a complete description of the full dynamical model equations for multi-phase erosive landslides in conservative form, considering all the aspects associated with the erosion-induced reduced friction (the momentum production) and the correct handling of the inertia of the system.
One important aspect of these momentum production terms is that the velocities of the just-eroded solid and fluid particles at the bottom, $u^b, v^b$, or $u_s^b, v_s^b; u_f^b, v_f^b$, appearing in the net momentum productions (\ref{Eqn_14u})-(\ref{Eqn_15v}) or (\ref{Eqn_12u})-(\ref{Eqn_13v}), are different from the depth-averaged (mean) velocities, $u, v$, or $u_s, v_s; u_f, v_f$, that appear in the inertial (convective) parts, and also the source terms, of the mass and momentum equations.
\\[3mm]
In (\ref{Model_Final}), the source terms (written in dimensional form, Pudasaini and Mergili, 2019) are as follows:
\begin{eqnarray}
\resizebox{.935\hsize}{!}{$ \mathcal S_{x_s}\! = \alpha_s\left [g^x \!- \frac{u_s}{|{\bf u}_s|}\tan\delta g^z(1-\gamma) \!- g^z(1-\gamma)\Pd{b}{x}\right ] \!-\! \gamma\alpha_s g^z\!\left [ \Pd{h}{x} + \Pd{b}{x}\right ] \!+ C_{DG} \left( u_f - u_s \right){ |{\bf u}_f - {\bf u}_s|}^{\jmath-1} \!-C_{DV}^{x_s} u_s|{\bf u}_s| \alpha_s,$}
\label{Model_Final_ss}\\[5mm]
\resizebox{.935\hsize}{!}{$\mathcal S_{y_s} \!= \alpha_s\left [ g^y \!- \frac{v_s}{|{\bf u}_s|}\tan\delta g^z(1-\gamma) \!- g^z(1-\gamma)\Pd{b}{y}\right ] \!-\! \gamma \alpha_s g^z\!\left [ \Pd{h}{y} + \Pd{b}{y}\right ] \!+ C_{DG} \left( v_f - v_s \right){ |{\bf u}_f - {\bf u}_s|}^{\jmath-1} \!-C_{DV}^{y_s} v_s|{\bf u}_s|\alpha_s,$}
\label{Model_Final_s}
\end{eqnarray}
\vspace{-3mm}
\begin{eqnarray}
\begin{array}{lll}
\displaystyle{\mathcal S_{x_f} = \alpha_f\biggl [g^x - \biggl [-\frac{1}{2}p_{b_f}\frac{h}{\alpha_f}\Pd{\alpha_f}{x} + p_{b_f}\Pd{b}{x} -\left \{ 2\frac{\partial}{\partial x}\left ( \nu_f\frac{\partial u_f}{\partial x}\right ) +\frac{\partial}{\partial y}\left ( \nu_f\frac{\partial v_f}{\partial x}\right ) +\frac{\partial}{\partial y}\left ( \nu_f\frac{\partial u_f}{\partial y}\right ) - \nu_f\frac{\chi u_f}{h^2} \right \}} \\[5mm]
+\displaystyle{ \frac{\mathcal A}{\alpha_f}\left \{ 2\frac{\partial}{\partial x}\left ( \nu_f \frac{\partial \alpha_s}{\partial x}\left( u_f-u_s\right )\right) +\frac{\partial}{\partial y}\left ( \nu_f \left(\frac{\partial \alpha_s}{\partial x}\left( v_f-v_s\right ) +\frac{\partial \alpha_s}{\partial y}\left( u_f-u_s\right ) \right)\right) \right\} -\frac{\mathcal A}{\alpha_f}\frac{\xi\alpha_s\nu_f}{h^2}\left ( u_f -u_s\right) \biggr]\biggr ]}\\[5mm]
-\displaystyle{\frac{1}{\gamma}C_{DG}\left( u_f - u_s \right){ |{\bf u}_f - {\bf u}_s|}^{\jmath-1} -C_{DV}^{x_f} u_f|{\bf u}_f| \alpha_f},
\end{array}
\label{Model_Final_fx}
\end{eqnarray}
\vspace{-3mm}
\begin{eqnarray}
\begin{array}{lll}
\displaystyle{\mathcal S_{y_f} = \alpha_f\biggl [g^y - \biggl [-\frac{1}{2}p_{b_f}\frac{h}{\alpha_f}\Pd{\alpha_f}{y} + p_{b_f}\Pd{b}{y} -\left \{ 2\frac{\partial}{\partial y}\left ( \nu_f\frac{\partial v_f}{\partial y}\right ) +\frac{\partial}{\partial x}\left ( \nu_f\frac{\partial u_f}{\partial y}\right ) +\frac{\partial}{\partial x}\left ( \nu_f\frac{\partial v_f}{\partial x}\right ) - \nu_f\frac{\chi v_f}{h^2} \right \}} \\[5mm]
+\displaystyle{ \frac{\mathcal A}{\alpha_f}\left \{ 2\frac{\partial}{\partial y}\left ( \nu_f \frac{\partial \alpha_s}{\partial y}\left( v_f-v_s\right )\right) +\frac{\partial}{\partial x}\left ( \nu_f \left(\frac{\partial \alpha_s}{\partial y}\left( u_f-u_s\right ) +\frac{\partial \alpha_s}{\partial x}\left( v_f-v_s\right ) \right)\right) \right\} -\frac{\mathcal A}{\alpha_f}\frac{\xi\alpha_s\nu_f}{h^2}\left ( v_f -v_s\right) \biggr]\biggr ]}\\[5mm]
-\displaystyle{\frac{1}{\gamma}C_{DG}\left( v_f - v_s \right){ |{\bf u}_f -
{\bf u}_s|}^{\jmath-1} -C_{DV}^{y_f} v_f|{\bf u}_f| \alpha_f}. \end{array} \label{Model_Final_fy} \end{eqnarray} The pressures and other parameters involved in the above model equations are as follows: \begin{eqnarray} \begin{array}{lll} \displaystyle{ \beta_{x_s} = K_x g^z(1-\gamma), \,\,\,\, \beta_{y_s} = K_y g^z(1-\gamma),\,\,\, \beta_{x_f} = \beta_{y_f} = g^z,\,\,\, p_{b_f} = g^z, \,\,\, p_{b_s} = (1-\gamma)p_{b_f},}\\[5mm] \displaystyle{C_{DG} = \frac{\alpha_s \alpha_f(1-\gamma)g}{\left [\mathcal U_T\{{\cal P}\mathcal F(Re_p) + (1-{\cal P})\mathcal G(Re_p)\} + {\mathcal S}_P\right ]^{\jmath}},\,\,\,\, \mathcal F = \frac{\gamma}{180}\left(\frac{\alpha_f}{\alpha_s} \right)^3 Re_p, \,\,\,\, \mathcal G= \alpha_f^{M(Re_p) -1},} \\[5mm] \displaystyle{\gamma =\frac{\rho_f}{\rho_s},\, Re_p = \frac{\rho_f d~ \mathcal U_T}{\eta_f},\, \nu_f = \frac{\eta_f}{\rho_f},\, \alpha_f = 1-\alpha_s,\, \mathcal A = \mathcal A(\alpha_f).} \end{array} \label{Model_Final_parameters} \end{eqnarray} Equations (\ref{Model_Final_Mass}) are the depth-averaged mass balances for the solid and fluid phases, respectively, and (\ref{Model_Final}) are the depth-averaged momentum balances for the solid (first two equations) and the fluid (last two equations) in the $x$- and $y$-directions, respectively. All equations and expressions are written in dimensional form. \\[3mm] In the above equations (\ref{Model_Final_Mass})-(\ref{Model_Final}), $x$, $y$ and $z$ are the locally orthogonal coordinates in the down-slope, cross-slope and flow normal directions, and $g^x$, $g^y$, $g^z$ are the respective components of gravitational acceleration. $\mu =\tan\delta$ is the basal friction coefficient and $C_{DG}$ is the generalized drag coefficient. Simple linear (laminar-type, at low velocity) or quadratic (turbulent-type, at high velocity) drag is associated with ${\jmath} = 1$ or $2$, respectively. $\mathcal{U}_{T}$ is the terminal velocity of a particle and $\mathcal{P}\in [0,1]$ is a parameter, or a function (Pudasaini, 2020), which combines the solid-like ($\mathcal{G}$) and fluid-like ($\mathcal{F}$) drag contributions to flow resistance. $p_{b_{f}}$ and $p_{b_{s}}$ are the effective fluid and solid pressures. $\gamma$ is the density ratio, $\nu_f$ is the kinematic viscosity of the fluid, $\mathcal{C}$ is the virtual mass coefficient (kinetic energy of the fluid phase induced by solid particles; Pudasaini, 2019), $M$ is a function of the particle Reynolds number $Re_p$, $\chi$ includes vertical shearing of the fluid velocity, $\xi$ takes into account different distributions of $\alpha_s$, and $\mathcal{A}$ is the mobility of the fluid at the interface between the solid and fluid in the flow. $C_{DV}$ are the viscous drag coefficients (Pudasaini and Hutter, 2007), akin to Chezy friction, that can also include the high intensity frontal ambient drag (Pudasaini and Fischer, 2020a). \\[3mm] The physically based, fully analytical, and well-bounded two-phase virtual mass force coefficient $\mathcal C$ (Pudasaini, 2019) is given by: $\mathcal C = \frac{\mathcal N_{vm}^0\left ( \ell\, + \,\alpha_s^q\right) - 1}{\left(\alpha_f/\alpha_s\right) +\, \gamma}$, where $\mathcal N_{vm}^0$ is the virtual mass number, and $\ell$ and $q$ are numerical parameters. This model covers any distribution of the dispersive phase (dilute to dense distribution of the solid particles). As justified in Pudasaini (2019), physically relevant values of these parameters are $\mathcal N_{vm}^0 = 10.0$, $\ell = 0.12$, $q = 1$.
This virtual mass force is general and evolves automatically as a function of the solid volume fraction. \\[3mm] $\mathcal S_P = \left ( \frac{P}{\alpha_s} + \frac{1 - P}{\alpha_f}\right ){\mathcal K}$ is called the smoothing function, where ${\mathcal K}$ is determined by the corresponding mixture mass flux per unit mixture density, typically 10 ms$^{-1}$ (Pudasaini, 2020). $P = \alpha_s^n$ is a function of the solid volume fraction $\alpha_s$, where $n$ is a positive number close to 1. \\[3mm] The evolution of the basal topography ${\partial b}/{\partial t}= -E$ in (\ref{Model_Final_Mass}) due to erosion and deposition is explicitly included in the model. With this, the basal change directly influences the source terms in (\ref{Model_Final_ss})-(\ref{Model_Final_fy}) by accounting for changes that are associated with the driving and resisting forces in the net force balance. This is very important for geophysical mass flows, which are mainly driven by gravity and slope changes, i.e., the respective components of gravitational acceleration, frictions, the basal and hydraulic pressure gradients, and the buoyancy-induced terms. \\[3mm] In the derivation of the model in this section, as in Pudasaini and Fischer (2020a), we have assumed that the solid obeys the Coulomb law at the base of the flow, and also in the bed when dealing with the erosion rate. However, at very large particle concentrations, the Coulomb rheology could be generalized by applying rate-dependent particle stresses, such as the phenomenological models based on the $\mu(I)$ rheology (dry confined flows by Jop et al., 2005; water-immersed grains by Cassar et al., 2005; confined submarine avalanches by Doppler et al., 2007), or even more complex pressure- and rate-dependent Coulomb-viscoplastic rheologies (Domnik et al., 2013; Pudasaini and Mergili, 2019). Moreover, fundamental approaches based on kinetic theory, which show that the particle stresses are rate-dependent, might produce appreciable results for erosion associated with large particle concentrations (Berzi and Fraccarollo, 2015). So, the models presented here can be further extended and generalized to situations in which the stresses are rate-dependent, which might better describe the erosion phenomena at large particle concentrations. \\[3mm] As discussed in Pudasaini and Fischer (2020a), in general, the above model can be applied to a fluid of any viscosity, where the effective viscosity increases with the particle concentration (Takahashi, 2007; Pudasaini, 2012; Pudasaini and Mergili, 2019). We have applied the Coulomb stress for the particle, or solid, phase. However, we note that, for laminar flows, the particle stresses can scale with the fluid viscosity, as in viscous suspensions (Ness and Sun, 2015). For more complex situations with turbulence, e.g., turbulence in the presence of particles and the contribution of the particle fluctuations to the fluid viscosity, we refer to Berzi and Fraccarollo (2015), who showed that the turbulent viscosity is a decreasing function of the local volume fraction of particles. Structurally, the mass and momentum equations (\ref{Model_Final_Mass}) and (\ref{Model_Final}) are the same as in Pudasaini and Fischer (2020a). However, there are fundamental differences between the two, owing to the different erosion rate models applied in Pudasaini and Fischer (2020a) and here, and to the new net momentum productions.
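\\[3mm] As a quick numerical illustration (our sketch; the density ratio is an assumed typical value, while the remaining parameter values are those quoted above), the virtual mass coefficient $\mathcal C$ and the smoothing function $\mathcal S_P$ can be evaluated as functions of the solid volume fraction:
\begin{verbatim}
# Virtual mass coefficient C (Pudasaini, 2019) and smoothing function S_P
# (Pudasaini, 2020). gamma = rho_f/rho_s is an assumed typical density ratio.
N_vm0, ell, q_exp = 10.0, 0.12, 1.0   # N_vm^0, l, q as quoted above
gamma = 1100.0 / 2700.0               # assumed fluid and solid densities
K, n = 10.0, 1.0                      # K in m/s; P = alpha_s**n

def virtual_mass(alpha_s):
    alpha_f = 1.0 - alpha_s
    return (N_vm0 * (ell + alpha_s**q_exp) - 1.0) / (alpha_f / alpha_s + gamma)

def smoothing(alpha_s):
    alpha_f = 1.0 - alpha_s
    P = alpha_s**n
    return (P / alpha_s + (1.0 - P) / alpha_f) * K

for a_s in (0.10, 0.30, 0.50, 0.65):
    print(f"alpha_s={a_s:.2f}  C={virtual_mass(a_s):.3f}  S_P={smoothing(a_s):.3f}")
\end{verbatim}
For these values, $\mathcal C$ remains well-bounded as $\alpha_s$ ranges from dilute to dense distributions.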
\section{Erosion-induced net momentum production and flow mobility} Pudasaini and Krautblatter (2021) presented a mechanical condition for when, how and how much energy erosive landslides gain or lose. They pioneered a mechanical model for the energy budget of erosive landslides that controls enhanced or reduced mobility, and made a breakthrough in correctly determining the landslide mobility. The erosion velocity, which regulates the energy budget, determines the enhanced or reduced mobility. With their energy generator, they offered the first-ever mechanical quantification of erosional energy and a precise description of mobility. They demonstrated that erosion and entrainment are different processes. Landslides gain energy and enhance mobility if the erosion velocity $\left(u^b\right)$ exceeds the entrainment velocity $\left( u - u^b\right)$. The dynamical equations presented in Pudasaini and Krautblatter (2021) correctly include the erosion-induced net momentum production. However, the Pudasaini and Krautblatter (2021) erosion-induced landslide mobility model is for an effectively single-phase bulk mixture, which we have extended here and applied in (\ref{Model_Final_Mass})-(\ref{Model_Final}) to true two-phase (multi-phase) mass flows, with a unified mechanical model for the erosion rates and for the rates of mass and momentum productions in (\ref{Eqn_11}) and (\ref{Eqn_14u})-(\ref{Eqn_13v}). The mixture model presented here is thus both more realistic and more comprehensive than that in Pudasaini and Krautblatter (2021), due to the extended structure of the erosion velocities, the unified mechanical erosion rates, and the associated advanced net momentum productions in (\ref{Eqn_14u})-(\ref{Eqn_13v}). \\[3mm] It is important to note that, in (\ref{Model_Final}), of the erosion-induced net momentum productions $2 \mathcal M$, one $\mathcal M$ emerges from the momentum production derived from the effectively reduced friction, while the other $\mathcal M$ originates from the correct treatment of the inertia of the entrained mass. Pudasaini and Krautblatter (2021) introduced these crucial aspects. Mechanically and dynamically, this makes a huge difference, and thus constitutes a great advancement in simulating landslides with erosion. However, as explained above, the structure and scope of $\mathcal M$ here is much more extensive than that in Pudasaini and Krautblatter (2021). \\[3mm] Pudasaini and Krautblatter (2021) proved that, if the erosion velocity is greater than one-half of the flow velocity, the mobility is enhanced. For the momentum productions in (\ref{Eqn_14u})-(\ref{Eqn_15v}), this is $u^b > u^m/2$, or equivalently $\lambda^b > 1/2$. In other words, the landslide gains energy to enhance its mobility if the eroded material is easily entrainable, i.e., with an entrainment velocity lower than the erosion velocity. Otherwise, the mobility of the mass flow will be reduced, even for erosive events. For the momentum productions in (\ref{Eqn_12u})-(\ref{Eqn_13v}), in principle, these conditions remain valid in terms of the erosion velocities in connection with the flow velocities and the erosion drifts. However, the erosion velocities and the erosion drifts need to be restructured.
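\\[3mm] Schematically (our illustration, with arbitrary numbers), the basic mobility criterion can be coded as follows in terms of the erosion drift $\lambda^b = u^b/u^m$:
\begin{verbatim}
# Mobility criterion of Pudasaini & Krautblatter (2021): mobility is enhanced
# when the erosion velocity exceeds half the flow velocity, i.e. lambda_b > 1/2.
def mobility(u_m, u_b):
    lam_b = u_b / u_m              # erosion drift lambda^b
    if lam_b > 0.5:
        return "enhanced"          # erosion velocity exceeds entrainment velocity
    if lam_b < 0.5:
        return "reduced"
    return "unchanged"

print(mobility(u_m=10.0, u_b=7.0))   # enhanced: u_b > u_m - u_b
print(mobility(u_m=10.0, u_b=3.0))   # reduced, even though the flow is erosive
\end{verbatim}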
The erosion velocities $u_s^b$ and $u_f^b$ in (\ref{Eqn_12u})-(\ref{Eqn_13u}) can be re-written in convenient forms as: \begin{equation} u_s^b = \left[\alpha_s^m \lambda_{ss}^b u_s^m + \alpha_f^m \lambda_{fs}^b u_f^m\right] = \alpha_s^m\left[ \lambda_{ss}^b\left\{ 1 + \frac{\alpha_f^m \lambda_{fs}^b u_f^m}{\alpha_s^m \lambda_{ss}^b u_s^m}\right\}\right]u_s^m = \alpha_s^m \hat{\lambda}_{s}^b u_s^m = \Lambda_{s}^b u_s^m, \label{Eqn_ubs_alt} \end{equation} \begin{equation} u_f^b = \left[\alpha_f^m \lambda_{ff}^b u_f^m + \alpha_s^m \lambda_{sf}^b u_s^m\right] = \alpha_f^m\left[ \lambda_{ff}^b\left\{ 1 + \frac{\alpha_s^m \lambda_{sf}^b u_s^m}{\alpha_f^m \lambda_{ff}^b u_f^m}\right\}\right]u_f^m = \alpha_f^m \hat{\lambda}_{f}^b u_f^m = \Lambda_{f}^b u_f^m. \label{Eqn_ubf_alt} \end{equation} In these representations, the solid and fluid mobilities will be enhanced if $u_s^b > u_s^m/2, u_f^b > u_f^m/2$, or $\Lambda_{s}^b > 1/2$ and $\Lambda_{f}^b > 1/2$, which can be considered component-wise or together. Otherwise, the mobility will be reduced if $\Lambda_{s}^b < 1/2$ and $\Lambda_{f}^b < 1/2$, and remains unchanged if $\Lambda_{s}^b = 1/2$ and $\Lambda_{f}^b = 1/2$, even for erosive mass flows. This way, we have presented extended conditions for the erosion-induced net momentum production and flow mobility as required for mixture mass flows. \\[3mm] The effective composite solid and fluid erosion drifts appearing in connection with the composite erosion velocities for solid and fluid are presented in Fig. \ref{Fig_7}. These composite erosion drifts characterize the solid and fluid erosion velocities in the same way as the single-phase-type erosion drifts. However, depending on the chosen solid and fluid volume fractions in the landslide and other physical quantities across the erosion interface, these erosion drifts are structurally and mechanically different, and may also be higher than the other phase- and cross-phase erosion drifts, which are bounded from above by unity. This is surprising. \begin{figure} \begin{center} \includegraphics[width=9cm]{EffectiveErosionDrift_bs_a.eps} \includegraphics[width=9cm]{EffectiveErosionDrift_bf_a.eps} \end{center} \caption[]{The effective composite erosion drifts as given by (\ref{Eqn_ubs_alt}) and (\ref{Eqn_ubf_alt}) for the composite solid and fluid erosion velocities $u_s^b$ and $u_f^b$, respectively.} \label{Fig_7} \end{figure} \section{Discussions} {\bf I. Complex phase- and cross-phase interactions:} The astonishing fact that emerged here while dealing with the erosion velocities of the mobilized bed materials is that there are four fundamentally different types of phase-phase and cross-phase interactions across the erosion interface between the solid and fluid materials in the landslide and the bed: the direct solid-solid and fluid-fluid interactions, and the cross solid-fluid and fluid-solid interactions. Because the existing erosion models consider only the direct phase interactions, cross-phase interactions have not been recognized yet; this limits their applicability, as those models are incomplete and partly inconsistent. The fluid-fluid interactions are not described appropriately, as they must follow some special mechanical principles which are newly recognized here. Similarly, the true solid-fluid and fluid-solid interactions were previously ignored, although these interactions are substantial and appear to be quite complex. \\[3mm] {\bf II. 
Interactive phase- and cross-phase shear stresses:} When both the landslide and the erodible bed are composed of two-phase materials of different physical properties, the interface shear stresses between them become very complicated. In previous erosion models, the fluid-solid and solid-fluid cross-shear stresses do not exist; instead, they were directly treated as fluid-fluid interactions, which is physically inconsistent. Moreover, properly formulated fluid-fluid shear stresses do not exist there at all. Here, we made a fundamental advancement in modelling multi-phase erosive mass transport with entirely innovative and appropriate mechanical shear stress models for all the solid-fluid, fluid-solid and fluid-fluid interactions across the erosion interface. We introduced mechanically important inter-phase solid-fluid and fluid-solid interactions. We revealed that the shear resistance offered by the fluid at the bed against the applied shear from the solid in the flow is associated with the solid velocity in the landslide, and not with the fluid velocity at the bed. This is a novel understanding. \\[3mm] The solid-solid interactions are modelled with the Coulomb-type frictional law. However, we presented elegant cross-couplings that are unique and legitimate. The solid-fluid interactions (the fluid-type bed shear resistance against the solid-type shear stress from the landslide) satisfy the Chezy-type frictional rheology, but with some crucial amendments. Such a special treatment is introduced here for erosive mass transports. An astonishing fact we discovered concerns the description of the fluid-fluid interactions at the interface. Previously, fluid-fluid interactions were modelled with the classical Chezy-type friction, which is mechanically inconsistent. However, these fluids are contained inside the matrices of the solid particles in the landslide and in the bed. As the fluid-fluid interactions were not yet known for the erosive landslide, following a boundary layer approach, we constructed new physically-based models. We described the fluid shear stresses across the interface by assuming both the landslide and the bed to be porous media. These shear stresses are distinguished by the viscosities, velocities, and permeabilities on the opposite sides of the erosion interface. The fluid resistances from the bed against the solid and fluid shear stresses from the landslide are entirely different. This is an important novel development. The physical mechanisms of all the interfacial shear stresses are justified. Depending on the composition of the flowing landslide and the bed mixture and the significance of the relevant shear stresses, as recognized here, there are, remarkably, nine different shear stress jumps across the erosion interface. These aspects clearly manifest the physical novelty and significance of our modelling approach for the interactive shear stresses between different materials across the landslide-bed interface, presented here for the first time for complex multi-phase erosive mass transports. This indicates the complexity associated with the two-phase erosive landslide, and the novelty and essence of our approach. \\[3mm] {\bf III. Composite erosion velocities:} Surprisingly, the extended erosion velocities for both the solid particles and the fluid molecules take mechanically extensive and complex forms. We revealed the real mechanical situation: the solid and fluid at the bed are mobilized by both the solid and the fluid in the flow. 
With crucial novel realizations, we constructed comprehensive solid and fluid erosion velocities by considering all the interactions and mobilization components induced by both the solid and fluid from the flow on the solid and fluid in the erodible bed. There are composite contributions to the erosion velocities from the solid and fluid fractions of the flow, their velocities, and the drifts. The structures of the erosion velocities clearly indicate that the process of erosion is dominated by the erosion velocities rather than by the flow velocities themselves, which is a key finding. These erosion velocities have huge implications for correctly describing the erosion-induced net momentum productions, because the net momentum production entirely controls the dynamics, mobility, impact energy and deposition morphology of the mass transport. \\[3mm] {\bf IV. Unified, extensive and consistent erosion rates:} Previously, the true solid-fluid and fluid-solid interactions were ignored and not described in a mechanically appropriate way, severely limiting the applicability of the existing erosion models. Natural erosion rates that correspond to the events could not be obtained from the existing solid and fluid erosion rates. We solved these problems by presenting novel mechanical shear stress models for the solid-fluid and fluid-fluid interactions. As erosion rates play a central role in erosive mass transports, we focused on constructing novel, unified and physically consistent mechanical erosion rates for multi-phase mass flows. The erosion rate is determined by the jump in the shear stresses and the jump in the momentum fluxes across the landslide-bed interface. The new unified erosion rate models include all mechanically and dynamically important interactions between the solid and fluid phases across the erosion interface. The total basal erosion rate is extensive, compact, and mechanically fully described. The total erosion rate is the exact sum of the solid and fluid erosion rates, automatically satisfying the natural criterion as required by practitioners. These erosion rates consistently take the solid and fluid fractions from the bed and supply them accordingly to the flow. This is crucial. We recover the solid and fluid erosion rates of previous models. Our method can be directly extended to erosive multi-phase mass flows consisting of any number of solid particle and viscous fluid phases in the landslide and the bed substrate. Importantly, we invented a novel erosive-shear-velocity, primarily induced by the erosion rate, which vanishes for non-erosive flows. \\[3mm] {\bf V. Super-erosion-drift, phase- and cross-phase drifts:} As essential quantities, erosion drifts provide crucial information about the erosion velocities, which play a central role in explaining erosion rates and net momentum productions. In turn, net momentum productions control the mobility of erosive mass transports. We constructed different erosion drift equations providing mechanical closures for all the erosion drifts. We presented a compact and general super-erosion-drift equation. In the appropriate limits, it reduces to the solid-solid and fluid-fluid erosion drifts. With elegant procedures, we also constructed closures for the solid-fluid and fluid-solid cross-erosion-drifts. All drifts are known mechanically. We proved that, as the super-erosion-drift contains all the necessary information, essentially all the phase- and cross-phase drifts can be directly extracted from it. 
The cross-drifts are symmetric with respect to the solid-fluid and fluid-solid cross-phase interactions. These properties signify the strength of the super-erosion-drift, the consistency of all the drift relations, and the equivalence between the reduced frictional forces and the momentum productions. \\[3mm] {\bf VI. Complete net momentum productions and flow mobility:} A crucial aspect considered here for the first time is that, since momentum productions play a decisive role in the dynamics, mobility, destructive power and deposition morphology, they must be described in a mechanically correct way in terms of the complex and compact erosion velocities and erosion rates. We constructed the erosion-induced solid and fluid momentum productions in terms of the total erosion velocities, or in terms of the solid and fluid erosion velocities. These momentum productions explicitly depend on several aspects of the flow: the erosion drifts, the volume fractions of solid and fluid in the landslide and the erodible bed, the solid and fluid velocities in the flow, and the total erosion rate of the system. With the mechanical closures for the cross-erosion drifts, we revealed that it is relatively difficult for the fluid in the landslide to mobilize the grains in the bed, but it is relatively easy for the grains in the landslide to mobilize the fluid in the bed. However, our scrutiny shows that the erosion velocities are determined collectively by the involved erosion drifts, the volume fractions of solid and fluid in the landslide, and their respective velocities. The newly constructed momentum productions reduce to the previously known solid-only and fluid-only momentum productions; the latter, however, are incomplete. This demonstrates the wide scope and the physically fully consistent modelling of momentum productions in our approach, associated with the solid and fluid erosion velocities. This sheds light on the importance of the composite erosion velocities, the unified erosion rates and the extensive net momentum productions for solid and fluid. The existing erosion-induced landslide mobility model is only for an effectively single-phase bulk mixture, which we have extended here to multi-phase mass flows with a unified mechanical model for the rates of mass and momentum productions. The erosion-induced net momentum productions here are much more extensive than those in the existing model, and they adequately describe the flow mobility of mixture mass flows. \\[3mm] {\bf VII. Erosion-matrix and process of erosion:} We invented the erosion-matrix characterizing the erosion mechanism of the landslide. With the erosion-matrix, we have presented the first systematic, compact and complete description of the mechanical process of erosion. As the erosion velocity governs the system, the process of erosion is jointly determined by the erosion-matrix, the volume fractions of solid and fluid in the flow, and the flow velocities. For vanishing off-diagonal elements of the erosion-matrix, the system degenerates to the previously known simple erosion velocities without the cross-phase interactions, which are incomplete. \section{Summary} There are three major outcomes of this contribution in relation to erosive multi-phase mass flows. First, we physically correctly established the jumps in shear stresses and momentum fluxes across the erosion interface between the landslide and the bed substrate, and with these, we constructed unified, comprehensive and consistent mechanical erosion rates for the solid and fluid phases. 
The general structure of these jumps demonstrates the richness, relevance and broad spectrum of applicability of the new unified multi-phase erosion model. The constructed shear resistances from the bed against all the applied shear stresses from the landslide are consistent and appropriate. The proposed multi-phase interactive shear structures are mechanically explained; they are physically superior to, and dynamically more flexible and wider in scope than, the existing effectively single-phase shear structures, as many of the interactions considered here could not be described by existing models. In our approach, the sum of the solid and fluid erosion rates turns out to be the total basal erosion rate. The new erosion rate models are well defined and well constrained, and can be applied to any flow situation and bed morphology, irrespective of the number of components in the flow and the bed, and of their interactions. Such broad erosion modelling is presented here for the first time with our unified modelling approach. Second, we constructed extensive and complete net momentum productions for both the solid and fluid phases, for which we completely and physically correctly described the essentially complex composite erosion velocities of the particles and fluid mobilized from the basal substrate, together with the unified erosion rates. The mass and momentum productions include all the interactions between the solids and fluids in the landslide and the bed substrate, and are consistent and mechanically much stronger and wider in scope than the existing models. Third, a general framework of the mass and momentum balance equations has been presented. We pioneered the stress correction factor, the erosive-shear-velocity, the super-erosion-drift, and the erosion-matrix. These greatly enhance our understanding by inherently characterizing the complex erosion processes in multi-phase mass flows. Finally, we developed a realistic and comprehensive multi-phase mechanical erosion model by embedding the novel unified mechanical erosion rates, the extended erosion velocities and the advanced net momentum productions into the mass and momentum balance equations. This removes a great hurdle in existing erosion modelling, and opens a wide spectrum of possibilities for real applications. Our approach provides a complete description of the full multi-phase erosive landslide in conservative form by considering all the aspects associated with the erosion-induced momentum productions and the correct handling of the inertia of the system via the net momentum productions. The mechanically-explained models developed here cover a vast range of natural processes in a deterministic way, which is far beyond the reach of empirical models. This paves the way for legitimate applications of the developed erosion model to complex multi-phase mass flows. Professionals and engineers may thus find the model intuitive and useful in solving applied, technical, engineering and geomorphological problems associated with erosive mass flow events.
{ "timestamp": "2022-09-23T02:11:38", "yymm": "2209", "arxiv_id": "2209.10880", "language": "en", "url": "https://arxiv.org/abs/2209.10880" }
\section*{Introduction} Correlation functions play a significant role in conformal field theories (CFT), and their functional form can easily be obtained by methods in coordinate space rather than in momentum space. Although these methods are very powerful, they apply when the correlation functions are evaluated at separated points. On the other hand, anomalies arise in coordinate space at short distances, and thus from configurations of a correlator in which some points coalesce. The coordinate space approach then provides limited information on the origin of the anomaly, except for telling us that it is a short-distance effect. One of the main reasons for studying CFT correlation functions directly in momentum space is to see the effects of anomalies more directly. When a theory is classically conformal invariant, its energy-momentum tensor has a vanishing trace. If the quantum theory has an anomaly, the trace of the expectation value of the energy-momentum tensor in a metric ($g_{\mu\nu}$) background develops a non-zero value. This phenomenon is essentially related to divergences in the quantum theory that break the conformal invariance once the theory is renormalized. In particular, two counterterms are needed in $d=4$ in the renormalization of correlators with multiple stress-energy tensors, responsible for the generation of the anomaly: $E$ and $C^2$, the Euler-Poincar\`e density and the Weyl tensor squared, respectively, which find application in the $\braket{TTT\dots}$ ($n$-graviton) vertex \cite{Coriano:2017mux, Coriano:2021nvn,Coriano:2022vrz}. For other correlators, such as the $\braket{TJJ}$, the renormalization of the vector 2-point function $\braket{JJ}$ is sufficient to generate a finite correlator, which is reflected in the $F^2$ term of the anomaly functional, with $F$ the field strength of the gauge field (see \cite{Coriano:2018bbe,Coriano:2018zdo} for related studies). \section*{Anomalies as light cone processes} The perturbative realization of CFT correlators allows us to handle far simpler expressions by analyzing an ordinary Feynman expansion. The analysis of such correlators in momentum space provides additional information on the emergence of the conformal anomaly. Indeed, the appearance of the anomaly can be described by the emergence of massless effective scalar degrees of freedom in the 3-point functions containing insertions of stress-energy tensors, which can be interpreted as light-cone interactions. As discussed in \cite{Giannotti:2008cv,Armillis:2010qk,Armillis:2010ru,Coriano:2020ees,Coriano:2012wp}, this phenomenon points towards an interpretation of the origin of the conformal anomaly as mediated by correlated pairs of fermions/scalars, emerging from the spectral representation of a given perturbative correlator. Such interactions also play a role in the context of Weyl semimetals, with the paired electrons (representing the massless pole) interacting with the lattice of such materials \cite{Chernodub:2019tsx,Chernodub:2017jcp}. It is worth mentioning that both conformal and chiral anomalies play a key role in this phenomenon \cite{Chernodub:2021nff}. These interactions are associated with renormalization and are not related to a specific parametrization of the tensor correlators. The proof can be illustrated most directly in the case of the $TJJ$ correlator in QED. A similar analysis can be performed for QCD, even if it is more involved. 
\section*{Decomposition of correlators} We briefly review the method to decompose any $n$-point function involving tensorial operators, first presented for the case of three-point functions \cite{bms}. This method is based on the reconstruction of the full $n$-point function involving stress-energy tensors, currents, and scalar operators starting from the expression of its transverse-traceless part only. We will present the decomposition of the $TJJ$ and $TTJJ$ correlation functions directly in momentum space. Defining the projectors \begin{equation} \pi^{\mu}_{\alpha}=\delta^{\mu}_\alpha-\frac{p^\mu p_\alpha}{p^2}, \qquad \Pi^{\mu\nu}_{\alpha\beta}=\pi^{(\mu}_\alpha\pi^{\nu)}_\beta-\frac{1}{d-1}\pi^{\mu\nu}\pi_{\alpha\beta}, \end{equation} with the properties \begin{equation} p_{i\mu_i}\,\pi^{\mu_i\nu_i}(p_i)=0,\quad p_{i\mu_i}\,\Pi^{\mu_i\nu_i}_{\alpha_i\beta_i}(p_i)=0,\qquad \delta_{\mu_i\nu_i}\,\Pi^{\mu_i\nu_i}_{\alpha_i\beta_i}(p_i)=0, \end{equation} we consider the decomposition of the energy-momentum tensor $T^{\mu\nu}$ and the current $J^\mu$ as \begin{align} &j^{\mu}(p)=\pi^{\mu}_{\alpha}(p)\,J^{\alpha}(p), && j_{loc}^{\mu}(p)=\frac{p^{\mu}p_\alpha}{p^2}J^{\alpha}(p),\notag\\ & t^{\mu\nu}(p)=\Pi^{\mu\nu}_{\alpha\beta}(p)\,T^{\alpha\beta}(p), && t_{loc}^{\mu\nu}(p)=\left(I^{\mu\nu}_{\alpha\beta}+\frac{1}{d-1}\pi^{\mu\nu}\delta_{\alpha\beta}\right)T^{\alpha\beta}(p), \label{proj}\\ &&&\hspace{-4.9cm} I^{\mu\nu}_{\alpha\beta}=\frac{p_\beta}{p^2}\left[2p^{(\mu}\delta^{\nu)}_\alpha-\frac{p_\alpha}{d-1}\left(\delta^{\mu\nu}+(d-2)\frac{p^\mu p^\nu}{p^2}\right)\right]\notag. \end{align} The decomposition of the operators $T^{\mu\nu}=t^{\mu\nu}_{loc}+t^{\mu\nu}$ and $J^\mu=j^\mu+j_{loc}^\mu$ allows one to split any correlation function into a sum of correlators containing $j^\mu$, $j^\mu_{loc}$, $t^{\mu \nu}$ and $t^{\mu \nu}_{loc}$. However, as shown in \cite{bms, Bzowski:2017poo,Coriano:2020ees}, by using the conservation Ward identities, which relate $n$-point functions to lower-point ones, it is possible to completely fix the longitudinal parts, i.e. those terms containing at least one $t_{loc}$ or $j_{loc}$. Therefore, the only term to be studied in order to reconstruct the entire correlator is the transverse-traceless part, consisting only of the operators $t^{\mu \nu}$ and $j^\mu$. \\ The transverse-traceless part, as we will show, can be expressed in terms of a number of minimal tensor structures and form factors. Furthermore, due to the presence of dimension-dependent tensor degeneracies, the number of independent tensor structures contributing to the decomposition can be properly reduced. \section*{$TJJ$ reconstruction} In this section we give the decomposition of the correlator $TJJ$; in particular, by using \eqref{proj}, we obtain \begin{align*} \braket{T^{\mu_1\nu_1}\,J^{\mu_2}\,J^{\mu_3}}&=\braket{t^{\mu_1\nu_1}\,j^{\mu_2}\,j^{\mu_3}}+\braket{T^{\mu_1\nu_1}\,J^{\mu_2}\,j_{loc}^{\mu_3}}+\braket{T^{\mu_1\nu_1}\,j_{loc}^{\mu_2}\,J^{\mu_3}}+\braket{t_{loc}^{\mu_1\nu_1}\,J^{\mu_2}\,J^{\mu_3}}\notag\\ &\quad-\braket{T^{\mu_1\nu_1}\,j_{loc}^{\mu_2}\,j_{loc}^{\mu_3}}-\braket{t_{loc}^{\mu_1\nu_1}\,j_{loc}^{\mu_2}\,J^{\mu_3}}- \braket{t_{loc}^{\mu_1\nu_1}\,J^{\mu_2}\,j_{loc}^{\mu_3}}+\braket{t_{loc}^{\mu_1\nu_1}\,j_{loc}^{\mu_2}\,j_{loc}^{\mu_3}}. \end{align*} It is important to emphasise that all the terms except the first one can be rewritten as two-point functions via Ward identities. 
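As a cross-check of \eqref{proj} (a minimal sketch of ours, written in $d=4$ with an auxiliary Euclidean metric so that upper and lower indices can be identified, and assuming the \texttt{sympy} library is available), the transversality of $\pi$ and the tracelessness of $\Pi$ can be verified symbolically:
\begin{verbatim}
import sympy as sp

d = 4
p = sp.Matrix(sp.symbols('p0:4'))
psq = sum(x**2 for x in p)                          # p^2 (Euclidean metric)
pi = sp.Matrix(d, d, lambda m, a: sp.eye(d)[m, a] - p[m]*p[a]/psq)

def Pi(mu, nu, al, be):
    # Pi^{mu nu}_{al be} = pi^(mu_al pi^nu)_be - pi^{mu nu} pi_{al be}/(d-1)
    sym = sp.Rational(1, 2)*(pi[mu, al]*pi[nu, be] + pi[nu, al]*pi[mu, be])
    return sym - pi[mu, nu]*pi[al, be]/(d - 1)

# transversality: p_mu pi^{mu alpha} = 0 for every alpha
assert all(sp.simplify(sum(p[m]*pi[m, a] for m in range(d))) == 0
           for a in range(d))
# tracelessness: delta_{mu nu} Pi^{mu nu}_{alpha beta} = 0 for all alpha, beta
assert all(sp.simplify(sum(Pi(m, m, a, b) for m in range(d))) == 0
           for a in range(d) for b in range(d))
print("pi is transverse and Pi is traceless")
\end{verbatim}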
The explicit form of the transverse-traceless part $\braket{t^{\mu_1\nu_1}\,j^{\mu_2}\,j^{\mu_3}}$ is \begin{equation} \braket{ t^{\mu_1 \nu_1} (p_1)j^{\mu_2} (p_2) j^{\mu_3} (p_3) } =\Pi^{\mu_1 \nu_1}_{\alpha_1 \beta_1} (p_1) \pi^{\mu_2}_{\alpha_2} (p_2) \pi^{\mu_3}_{\alpha_3} (p_3) X^{\alpha_1 \beta_1 \alpha_2\alpha_3},\label{tjj} \end{equation} where $X^{\alpha_1\dots\alpha_3}$ is a general rank-four tensor built from products of metric tensors and momenta with the appropriate choice of indices. As a consequence of the projectors in \eqref{tjj}, $X^{\alpha_1\dots\alpha_3}$ cannot be constructed using $\delta^{\alpha_1\beta_1}$, nor $p_i^{\alpha_i}$, $i=1,\dots,3$. In addition, the conservation of the total momentum \begin{equation} p_1^{\alpha_i} + p_2^{\alpha_i} + p_3^{\alpha_i}= 0 , \end{equation} allows selecting for each index $\alpha_i$ a pair of momenta to be used in the general construction of $X$. The choice of the independent momenta of the expansion can be different for each set of contracted tensor indices. One can choose \begin{equation} \begin{split} &\{\alpha_1,\beta_1\}\leftrightarrow p_1,p_2,\\ &\{\alpha_2\}\leftrightarrow p_2,p_3\,,\\ &\{\alpha_3\}\leftrightarrow p_3,p_1\,, \end{split}\label{choicemom} \end{equation} as the basis of the expansion for each set of indices shown above. The linear dependence of one momentum, for instance $p_3$, which we will impose at a later stage, is not in contradiction with this choice, which allows one to reduce the number of form factors, due to the presence of a single $t$ projector for each external momentum. As far as the metric tensors are concerned, the only non-vanishing terms appearing in $X^{\alpha_1\dots\alpha_3}$ are \begin{align} \delta^{\alpha_1\alpha_2},\ \delta^{\alpha_1\alpha_3},\ \delta^{\alpha_2\alpha_3},\label{choicemetr} \end{align} together with the terms obtained by the exchange $\alpha_1\leftrightarrow\beta_1$. To construct the transverse-traceless part, we must use these tensors to build all possible rank-four tensors. Still, we must keep in mind that, due to the symmetries of the correlator, the form factors associated with structures linked by a $2 \leftrightarrow 3$ transformation are not independent. Then the transverse-traceless part is written as \begin{align} \label{tjjdecomposition} & \langle t^{\mu_1\nu_1}(p_1)j^{\mu_2}(p_2)j^{\mu_3}(p_3)\rangle =\notag\\ &= {\Pi}^{\mu_1\nu_1}_{\alpha_1\beta_1}(p_1)\,{\pi}^{\mu_2}_{\alpha_2}(p_2)\,{\pi}^{\mu_3}_{\alpha_3}(p_3)\, \big( A_1\ p_2^{\alpha_1}p_2^{\beta_1}p_3^{\alpha_2}p_1^{\alpha_3} + A_2\ \delta^{\alpha_2\alpha_3} p_2^{\alpha_1}p_2^{\beta_1} \notag \\ &\quad+A_3\ \delta^{\alpha_1\alpha_2}p_2^{\beta_1}p_1^{\alpha_3} + A_3(p_2\leftrightarrow p_3)\delta^{\alpha_1\alpha_3}p_2^{\beta_1}p_3^{\alpha_2} + A_4\ \delta^{\alpha_1\alpha_3}\delta^{\alpha_2\beta_1}\big). \end{align} It is worth mentioning that the form factors $A_i$ are fixed by conformal invariance, and in particular by the CWIs \begin{equation*} K^\kappa\braket{T^{\mu_1\nu_1}\,J^{\mu_2}\,J^{\mu_3}}=0,\qquad D\braket{T^{\mu_1\nu_1}\,J^{\mu_2}\,J^{\mu_3}}=0, \end{equation*} where $D$ and $K^\kappa$ are the dilatation and the special conformal operators, respectively \cite{Coriano:2013jba, bms, Bzowski:2015pba, Coriano:2019sth}. \section*{$TJJ$ perturbative realization} A perturbative calculation of the $TJJ$ correlator has been performed in two different realizations of QED, namely with complex scalars and with fermions \cite{Coriano:2018bbe}. 
The actions considered are given by \begin{align} S_{s} &=\int d^d x \, \sqrt{-g} \, \left( \, \partial^\mu \phi^\dagger \, \partial_\mu \phi + ie \, A^\mu \, (\partial_\mu \phi^\dagger \, \phi - \phi^\dagger \, \partial_\mu \phi)\, \, + \, \,e^2 A^\mu A_\mu \, \phi^\dagger \phi \, \, + \, \, \chi R \, \phi^\dagger \phi \right),\\ S_{f} &= \int d^d x \, V \, \left(\, {i \over 2} \, ( \, \bar \psi \, \gamma^\lambda \, {\partial}_\lambda \psi \, - \, \partial_\lambda \bar \psi \, \gamma^\lambda \, \psi)\, \, - \, \, e \, \bar \psi \, \gamma^\lambda A_\lambda \, \psi \, \, - \, \, {i \over 4} \, \omega_{\mu a b } \, V^\mu_c\, \bar \psi \gamma^{abc} \psi \, \right) \end{align} where $V^\mu_a$ is the vielbein and $\omega_{\mu a b}$ is the spin connection. The correlator $TJJ$ is obtained from the sum of the Feynman diagrams that can be constructed; as an example, we show some of them in Fig. \ref{fig1}. \begin{figure}[h!] \centering \includegraphics[scale=.18]{images/diagrams/TTJ/Fermion/Triangle2F.pdf}\,\raisebox{0.85cm}{$+$}\, \includegraphics[scale=.18]{images/diagrams/TTJ/Fermion/Bubble1F.pdf}\,\raisebox{0.8cm}{$+ \cdots$} \caption{One-loop diagrams for the $TJJ$ in QED. \label{fig1}} \end{figure} From the calculation of the Feynman diagrams, the form factors in the decomposition can be written explicitly in terms of the master integrals $B_0$ and $C_0$. In $d=4$, this correlator manifests UV divergences that can be renormalized by adding the gauge and Weyl invariant counterterm \begin{equation} S_{ct}=-\frac{c}{\epsilon} \, \int d^d x\sqrt{-g} \ F_{\mu \nu} F^{\mu \nu}\, . \end{equation} The renormalization procedure of the correlator induces a breaking of the conformal invariance, manifested in the presence of an anomaly pole. The correlator, which was classically conformal invariant, acquires, after renormalization, a trace anomaly contribution. The effective action related to the anomaly contribution, at first order in the perturbation $h$, is \begin{equation} S_{pole} = -\frac{e^2}{36 \pi^2} \, \int d^4 x \, d^4 y \, \, \Big(\square h (x) - \partial_\mu \partial_\nu h^{\mu \nu} (x) \Big) \, \square^{-1}_{xy} \, F_{\alpha \beta} (x) F^{\alpha \beta} (y). \end{equation} \section*{Dimensional dependent degeneracies} The number of independent tensorial structures corresponds to the number of independent form factors that characterize the transverse-traceless part. At first sight, it seems evident that different tensorial sectors (i.e., terms involving only metric tensors $\delta$ and terms involving both $\delta$ and momenta $p_i$) are not connected under symmetry transformations, and that we can consider each sector individually when counting the number of independent form factors. \\ However, this is the case only when we work in general dimension $d$. If we are interested in a specific dimension, there may be identities relating terms of different sectors. One way to analyze such identities is by using Lovelock's double antisymmetrization method \cite{lovelock, edgar}. \\ The main point of this analysis is to consider a $(k,l)$-rank tensor $S$ in $d$ dimensions, with $k,l < d$. Then the relation \begin{equation} \label{lovelock1} S_{[\alpha_1 \dots \alpha_k}^{\quad \beta_1 \dots \beta_l} \, \, \delta_{\alpha_{k+1} }^{\beta_{l+1} } \, \cdots \, \delta_{\alpha_{k+m} ]}^{\beta_{l+m}} = 0 \qquad \qquad \text{for } \, \, k+m>d \, \, , \end{equation} is trivially satisfied, and further contractions of these identities may lead to non-trivial relations. 
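For instance (a small numerical sketch of ours), \eqref{lovelock1} can be checked directly by antisymmetrizing a generic tensor over $k+m=4>d=3$ lower indices:
\begin{verbatim}
import itertools
import math
import numpy as np

def parity(perm):
    # sign of a permutation, from its inversion count
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def antisym(T, axes):
    # antisymmetrize the tensor T over the listed (lower-index) axes
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(len(axes))):
        order = list(range(T.ndim))
        for ax, k in zip(axes, perm):
            order[ax] = axes[k]
        out = out + parity(perm) * np.transpose(T, order)
    return out / math.factorial(len(axes))

d = 3
S = np.random.default_rng(1).normal(size=(d, d, d))  # generic S_{a1 a2}^{b1}
I = np.eye(d)
# X = S_{a1 a2}^{b1} delta_{a3}^{b2} delta_{a4}^{b3}; lower indices on axes 0..3
X = np.einsum('abe,cf,dg->abcdefg', S, I, I)
A = antisym(X, axes=[0, 1, 2, 3])                    # k + m = 4 > d = 3
print(abs(A).max())                                  # vanishes up to rounding
\end{verbatim}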
\\ This result can be reformulated in a different way \cite{lovelock}. Let $T$ be an antisymmetric traceless $(k,l)$-rank tensor; then Lovelock's theorem states that \begin{equation} \label{lovelock2} T_{[\alpha_1 \dots \alpha_k}^{\qquad [\beta_1 \dots \beta_l} \, \, \delta_{\alpha_{k+1} }^{\beta_{l+1} } \, \cdots \, \delta_{\alpha_{k+m} ]}^{\beta_{l+m}]} = 0, \qquad \qquad \text{for } \, \, m \geq d+1-(k+l) \, \, . \end{equation} This relation may seem less intuitive, but it rests on the same principle as \eqref{lovelock1}. Since the number of indices exceeds the dimensionality, two indices will either be equal on the same level or be contracted. In the latter case, since the tensor is traceless, the identity is satisfied. Lovelock's theorem ensures that any contraction of \eqref{lovelock2} results only in trivial identities. \\ This theorem provides a systematic source of tensor identities depending on the dimension considered. Indeed, given the dimension $d$, we build an antisymmetric traceless $(k,l)$-rank tensor, and we obtain a family of tensor identities using a double antisymmetrization in the form of \eqref{lovelock2}. For example, we could construct a Riemann-like tensor using $p_1$ and $p_2$ \begin{equation} R_{\alpha_1 \alpha_2}^{\quad \beta_1 \beta_2} = {p_1}_{[ \alpha_1} {p_2}_{\alpha_2 ]} \, \, {p_1}^{[ \beta_1} {p_2}^{\beta_2 ]} \, \, . \end{equation} The antisymmetrization of this tensor with two metric tensors defines the traceless tensor $W$ \begin{equation} W_{\alpha_1 \alpha_2}^{\quad \beta_1 \beta_2}=R_{\alpha_3 \alpha_4}^{\quad [\alpha_3 \alpha_4}\delta_{\alpha_1}^{\beta_1}\delta_{\alpha_2}^{\beta_2]} \end{equation} which has the same properties as the Weyl tensor. In $d\le3$ this tensor vanishes identically, i.e. \begin{equation} W_{\alpha_1 \alpha_2}^{\quad \beta_1 \beta_2}=0,\quad d\le3, \end{equation} as a specific case of \eqref{lovelock2} with $k=l=2$, which in $d=3$ implies $m=0$, as pointed out in \cite{Bzowski:2017poo}. \\ Thus, depending on the specific dimension, different values of $(k,l)$ produce different identities among tensor structures, and these may reduce the number of independent form factors. \section*{$TTJJ$ reconstruction and perturbative realization} The general decomposition presented above can be extended to the study of higher-point functions. In the case of $\braket{TTJJ}$, the conservation of the total momentum does not constrain the momenta as strongly as in the three-point case, and the symmetry of the correlator is now twofold, since both the $(1\leftrightarrow2)$ and the $(3\leftrightarrow4)$ permutation symmetries have to be considered. The choice of the independent momenta in the transverse-traceless part proceeds as in \eqref{choicemom}, but for the four-point function two independent momenta are associated with each index, as pointed out in \cite{Coriano:2021nvn,Coriano:2019nkw}. More details on the decomposition into a minimal set of independent form factors will be presented in \cite{ttjj}. \\ The perturbative realization of the $\braket{TTJJ}$ can be obtained analogously to the case of the three-point function, by calculating the corresponding Feynman diagrams. In both cases, when $d=4$, the renormalization procedure, due to the presence of divergences, breaks the conformal invariance, which is reflected in the appearance of massless anomaly poles. We will present the form of the anomaly effective action for the $\braket{TTJJ}$ correlator and study its implications \cite{ttjj}. In addition, tensor degeneracies in the $\braket{TTJJ}$ decomposition will be investigated in detail. 
\section*{Conclusions} We have briefly presented the general method to reconstruct 3-point correlation functions using the decomposition into a longitudinal part and a transverse-traceless one. This method can be extended straightforwardly to the case of 4-point functions. The transverse-traceless part of the correlator can be written in terms of independent form factors and tensor structures. The number of independent form factors is related to the number of independent tensorial structures, which depends on the specific dimension chosen. Indeed, we have illustrated the case where tensor identities have to be considered in specific dimensions; such constraints change the number of independent form factors in those dimensions. Finally, in $d=4$, we have mentioned how the renormalization procedure of correlators involving energy-momentum tensors leads to the presence of anomalous massless poles, and we have identified the anomaly effective action for the $TJJ$. The tensor identities presented in this paper can play a crucial role in identifying the anomalous part of the $TTJJ$ in $d=4$. \section*{Acknowledgement} M. M. M. is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 818066) and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Cluster of Excellence). The work of R.T. is supported by INFN iniziativa specifica QFT-HEP.
{ "timestamp": "2022-09-23T02:13:03", "yymm": "2209", "arxiv_id": "2209.10932", "language": "en", "url": "https://arxiv.org/abs/2209.10932" }
\section{Introduction} \subsection{Background} Let \(\PG(2, q^2)\) denote the Desarguesian projective plane over the finite field with $q^2$ elements, $\F_{q^2}$, where $q$ is a prime power. A \textit{unital} \(U\) in \(\PG(2, q^2)\) is a set of \(q^{3} + 1\) points such that every line of \(\PG(2, q^{2})\) meets \(U\) in 1 or \(q + 1\) points. The \textit{classical} or \textit{Hermitian} unital, usually denoted by $\mathcal{H}(2,q^2)$, arises by taking the absolute points of a non-degenerate Hermitian polarity. Each point $P$ not lying on a unital $U$ lies on \(q + 1\) tangent lines to \(U\); the \(q + 1\) points of \(U\) whose tangent lines contain \(P\) are called the \textit{feet} of \(P\), and are denoted by \(\tau_P(U)\). It is well-known that \(\PG(2, q^2)\) can be modelled using a Desarguesian line spread of \(\PG(3, q)\) embedded in \(\PG(4, q)\) via the \textit{Andr\'{e}/Bruck-Bose (ABB)} construction. A wide class of unitals in $\PG(2,q^2)$, called \textit{Buekenhout unitals}, arises as follows from the ABB construction: starting in \(\PG(4, q)\), fixing a hyperplane \(\Sigma\) and a Desarguesian spread of \(\Sigma\), we take any ovoidal cone \(\mathcal{C}\) such that \(\mathcal{C} \cap \Sigma\) is a spread line of $\Sigma$. Then in \(\PG(2, q^2)\), \(\mathcal{C}\) gives rise to a unital \(U\). If the base of \(\mathcal{C}\) is an elliptic quadric, the unital is called a \textit{Buekenhout-Metz unital}. The family of Buekenhout-Metz unitals contains the Hermitian unitals, but there are many non-equivalent Buekenhout-Metz unitals (see \cite{Baker1992}, \cite{Ebert1992}). If \(q = 2^{2e + 1}\), \(e \geq 1\), and the base of \(\mathcal{C}\) is a Tits ovoid, the unital is called a \textit{Buekenhout-Tits unital}. For more information on unitals and their constructions, see~\cite{barwick09_unital_projec_planes}. Unitals can be characterised by the combinatorial properties of their feet. It is easy to see that for the classical unital $\mathcal{H}(2,q^2)$, the feet of a point not on the unital are always collinear. Thas~\cite{Thas1992} showed the converse, namely, that a unital \(U\) is classical if and only if, for all points not on $U$, the feet are collinear. This was improved by Aguglia and Ebert \cite{MR1888419}, who showed that a unital \(U\) is classical if and only if there exist two tangent lines \(\ell_{1}, \ell_{2}\) such that for all points \(P \in (\ell_1 \cup \ell_2) \setminus U\) the feet of \(P\) are collinear. It is known (see e.g. \cite{barwick09_unital_projec_planes}) that if \(U\) is a non-classical Buekenhout-Metz unital, the feet of a point \(P \notin U\) are collinear if and only if \(P \in \ell_{\infty}\). Furthermore, it is shown in \cite{abarz_metz_feet} that if \(U\) is a Buekenhout-Metz unital, a line meets the feet of a point \(P \notin \ell_{\infty}\) in either $0$, $1$, $2$, or $4$ points. Ebert~\cite{ebert97_bueken_tits_unital} showed that for a Buekenhout-Tits unital, the feet of \(P \notin U\) are collinear if and only if \(P \in \ell_{\infty}\). It is then natural to ask how a line may meet the feet of a point \(P \notin \ell_{\infty}\) for Buekenhout-Tits unitals. We will answer this question in Theorem \ref{thm:bt-unital-line-meet-feet}. Many characterisations of unitals make use of their stabilisers in $\PGL$, resp. $\PTL$. In~\cite{cossidente2000group} it is shown that a unital is classical if its stabiliser contains a cyclic group of order \(q^{2} - q + 1\). 
Several other characterisations of unitals by their stabiliser group are listed in~\cite{barwick09_unital_projec_planes}. In~\cite{ebert97_bueken_tits_unital}, Ebert determined the stabiliser of a Buekenhout-Tits unital in \(\PGL(3, q^{2})\) (see Result \ref{result:ebert-stab}). We will extend this work in this paper. \subsection{Summary of this paper} In this paper we present three main results: \begin{enumerate} \item We show that all Buekenhout-Tits unitals are projectively equivalent (see Theorem \ref{cor:proj-equivalence}). This addresses an open problem in~\cite{barwick09_unital_projec_planes}, and is alluded to in~\cite{feng19_exist_onan_config_ovoid_bueken} (see Remark~\ref{rem:equivalence}). \item We describe the full stabiliser group of a Buekenhout-Tits unital in \(\PTL(3, q^{2})\) (see Theorem \ref{thm:bt-unital-pgammal-stab}). Ebert~\cite{ebert97_bueken_tits_unital} only provides a description of the stabiliser of the Buekenhout-Tits unital in \(\PGL\) (Result~\ref{result:ebert-stab}). The stabiliser of the classical unital in \(\PTL(3, q^2)\) is \(\mathrm{P\Gamma U}(3, q^2)\), and the stabiliser of a Buekenhout-Metz unital in \(\PTL(3, q^{2})\) is described in~\cite{Ebert1992} for \(q\) even and in~\cite{Baker1992} for \(q\) odd. \item We show that if \(U\) is a Buekenhout-Tits unital, then a line \(\ell\) meets the feet of a point \(P \notin (\ell_{\infty} \cup U)\) in at most 4 points. Moreover, there exist a point \(P\) and a line \(\ell\) such that the feet of \(P\) meet \(\ell\) in exactly three points (see Theorem \ref{thm:bt-unital-line-meet-feet}). This highlights a difference between Buekenhout-Metz unitals and Buekenhout-Tits unitals. It also solves an open problem posed by Aguglia and Ebert~\cite{MR1888419} and later listed in~\cite{barwick09_unital_projec_planes}. \end{enumerate} \subsection{Coordinates for a Buekenhout-Tits unital}\label{sec:preliminaries} In~\cite{ebert97_bueken_tits_unital}, Ebert derives coordinates for a Buekenhout-Tits unital $\U$ in \(\PG(2, q^{2})\), $q=2^{2e+1}$. Pick \(\epsilon \in \mathbb{F}_{q^2}\) such that \(\epsilon^q = \epsilon + 1\), and \(\epsilon^2 = \epsilon + \delta\) for some \(1 \neq \delta \in \mathbb{F}_q\) with absolute trace equal to one. Then the following set of points in \(\PG(2, q^2)\) is a Buekenhout-Tits unital, \begin{equation} \label{eq:bt-definition} \U = \{(0, 0, 1)\} \cup \{P_{r, s, t} = (1, s + t \epsilon, r + (s^{\sigma + 2} + t^{\sigma} + st)\epsilon)\,|\, r, s, t \in \mathbb{F}_{q}\}, \end{equation} where \(\sigma = 2^{e + 1}\) has the property that $\sigma^2$ induces the automorphism $x\mapsto x^2$ of $\F_{q}$. In addition, it can be verified that \(\sigma + 1\), \(\sigma + 2\), \(\sigma - 1\), and \(\sigma - 2\) all induce permutations of \(\mathbb{F}_q\), with inverses induced by \(\sigma - 1\), \(1 - \sigma/2\), \(\sigma + 1\) and \(-(\sigma/2 + 1)\), respectively. The following result describes the group of projectivities (that is, elements of $\PGL(3,q^2)$) stabilising \(\U\). \begin{result}\cite[Theorem 4]{ebert97_bueken_tits_unital}\label{result:ebert-stab} Let \(G=\PGL(3,q^2)_{\U}\), $q=2^{2e+1}$, be the group of projectivities stabilising the Buekenhout-Tits unital \(\U\). 
Then \(G\) is an abelian group of order \(q^{2}\), consisting of the projectivities induced by the matrices \begin{equation} \label{eq:Muv} M_{u,v} = \begin{bmatrix} 1 & u \epsilon & v + u^{\sigma} \epsilon \\ 0 & 1 & u + u \epsilon \\ 0 & 0 & 1 \end{bmatrix}, \qquad u,v \in \mathbb{F}_{q}, \end{equation} where $\sigma=2^{e+1}$ and matrices act on the homogeneous coordinates of points by multiplication from the right. \end{result} \section{On the Projective Equivalence of Buekenhout-Tits Unitals} In this section, we show that all Buekenhout-Tits unitals are projectively equivalent to the unital $\U$ given in equation~\eqref{eq:bt-definition}. \begin{remark}\label{rem:equivalence} The authors of~\cite{feng19_exist_onan_config_ovoid_bueken} give this result without proof and state that it can be derived using the same techniques employed by Ebert in~\cite{ebert97_bueken_tits_unital}. Ebert, however, lists the equivalence of Buekenhout-Tits unitals as an open problem in~\cite{barwick09_unital_projec_planes}, which appeared about ten years after his original paper~\cite{ebert97_bueken_tits_unital}. \end{remark} It is easy to see that the Buekenhout-Tits unital $\U$ is tangent to the line \(\ell_{\infty} : x = 0\) at the point \(P_{\infty} = (0, 0, 1)\). From the ABB construction it follows that \(P_{\infty}\) has the following property with respect to $\U$. \begin{property}\label{prop:subline-property} Given any unital \(U\), a point \(P \in U\) has Property~\ref{prop:subline-property} if all secant lines through \(P\) meet \(U\) in Baer sublines. \end{property} It is shown in~\cite{Barwick2001} that if two different points of \(U\) have Property~\ref{prop:subline-property}, then $U$ is classical. Hence, the point $P_\infty$ is the unique point of $\U$ admitting this property. We will count all Buekenhout-Tits unitals tangent to \(\ell_{\infty}\) at the point \(P_{\infty}\) having Property~\ref{prop:subline-property}. \begin{lemma}\label{thm:unitals-equivalent-in-plane} There are \(q^{4} {(q^2 - 1)}^2\) unitals projectively equivalent to \(\U\) in \(\PG(2, q^2)\) tangent to \(\ell_{\infty}\, :\,x = 0\), and containing the point \(P_{\infty} = (0, 0, 1)\) with Property~\ref{prop:subline-property}. \end{lemma} \begin{proof} First note that any projectivity mapping $\U$ to a unital tangent to $\ell_\infty$ at $P_\infty$ is necessarily contained in the group \(H\) of projectivities fixing the line \(\ell_{\infty}\) and the point \(P_{\infty}\). The elements of $H$ are induced by all matrices of the following form, \begin{equation*} \begin{bmatrix} 1 & x_{12} & x_{13} \\ 0 & x_{22} & x_{23} \\ 0 & 0 & x_{33} \end{bmatrix}, \end{equation*} where \(x_{22} x_{33} \neq 0\) and matrices act on homogeneous coordinates by multiplication on the right. It follows that \(|H|={(q^2 - 1)}^2 q^6\). Furthermore, from the description of \(G=\PGL(3,q^2)_{\U}\) in Result~\ref{result:ebert-stab}, we know that \(H_{\U}=G\), and hence, \(H_{\U}\) has order \(q^{2}\). By the orbit-stabiliser theorem, we find that there are \({(q^2 - 1)}^2 q^4\) unitals in the orbit of \(\U\) under \(H\). \end{proof} Consider \(\PG(2, q^{2})\) modelled using the ABB construction with fixed hyperplane \(H_{\infty}\). Let \(p_{\infty}\) be the spread line corresponding to \(P_{\infty}\). Then any Buekenhout-Tits unital \(U\) tangent to \(\ell_{\infty}\) at \(P_{\infty}\) with Property~\ref{prop:subline-property} corresponds uniquely to an ovoidal cone \(\mathcal{C}\) meeting \(H_{\infty}\) at \(p_{\infty}\). 
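Before counting such cones, we note that the smallest case of \eqref{eq:bt-definition} is easy to verify by brute force (a computational sketch of ours, assuming $q = 8$, i.e. $e = 1$ and $\sigma = 4$; it builds $\mathbb{F}_{64}=\mathbb{F}_{8}(\epsilon)$ with $\epsilon^2=\epsilon+\delta$ and checks that every line of $\PG(2,64)$ meets the $q^3+1=513$ points of $\U$ in $1$ or $q+1=9$ points; it runs in a few seconds):
\begin{verbatim}
def gf8_mul(a, b):                    # GF(8) = GF(2)[x]/(x^3 + x + 1)
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(4, 2, -1):         # reduce modulo x^3 + x + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

# delta != 1 of absolute trace one, so x^2 + x + delta is irreducible over GF(8)
delta = next(d for d in range(2, 8) if d ^ gf8_pow(d, 2) ^ gf8_pow(d, 4) == 1)

# GF(64) = GF(8)(eps), eps^2 = eps + delta; a + b*eps is packed as a | b << 3
def mul64(x, y):
    a, b, c, d = x & 7, x >> 3, y & 7, y >> 3
    bd = gf8_mul(b, d)
    return (gf8_mul(a, c) ^ gf8_mul(bd, delta)) | \
           (gf8_mul(a, d) ^ gf8_mul(b, c) ^ bd) << 3

MUL = [[mul64(x, y) for y in range(64)] for x in range(64)]

sigma = 4                             # sigma = 2^(e+1) with e = 1, q = 8
U = [(0, 0, 1)]                       # the point (0, 0, 1)
for r in range(8):
    for s in range(8):
        for t in range(8):
            w = gf8_pow(s, sigma + 2) ^ gf8_pow(t, sigma) ^ gf8_mul(s, t)
            U.append((1, s | t << 3, r | w << 3))
assert len(U) == 8**3 + 1             # q^3 + 1 = 513 points

lines = ([(1, y, z) for y in range(64) for z in range(64)]
         + [(0, 1, z) for z in range(64)] + [(0, 0, 1)])   # 64^2 + 64 + 1 lines
for l0, l1, l2 in lines:
    hits = sum((MUL[l0][x] ^ MUL[l1][y] ^ MUL[l2][z]) == 0 for x, y, z in U)
    assert hits in (1, 9), ((l0, l1, l2), hits)
print("every line meets U in 1 or 9 points, so U is a unital of PG(2, 64)")
\end{verbatim}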
\begin{lemma}\label{lem:numbercones} There are \(q^{4}{(q^{2} - 1)}^{2}\) ovoidal cones \(\mathcal{C}\) in \(\PG(4, q)\) with base a Tits ovoid, such that \(\mathcal{C}\) meets \(H_{\infty}\) in the spread element \(p_{\infty}\). \end{lemma} \begin{proof} Let \(V\) be a point on the line \(p_{\infty}\), and \(H \neq H_{\infty}\) a hyperplane not containing \(V\). Then, \(H\) meets \(H_{\infty}\) in a plane containing a point \(R \in p_{\infty} \setminus \{V\}\). Any ovoidal cone \(\mathcal{C}\) with vertex \(V\) and base a Tits ovoid, such that \(\mathcal{C}\) meets \(H_\infty\) precisely in \(p_\infty\), meets \(H\) in a Tits ovoid tangent to \(H \cap H_{\infty}\) at the point \(R\). We will count all cones of this form, for all \(V \in p_{\infty}\). Consider the pairs \((\Pi, \mathcal{O})\) of planes \(\Pi\) and Tits ovoids \(\mathcal{O}\), where \(\Pi, \mathcal{O} \subset H\) and \(\Pi\) is tangent to \(\mathcal{O}\). On the one hand, there are \(|\PGL(4, q)|/|\mathcal{O}_{\PGL(4, q)}| = {(q + 1)}^2 q^4 {(q - 1)}^2 {(q^2 + q + 1)}\) Tits ovoids in \(\PG(3, q)\), and each has \(q^{2} + 1\) tangent planes. On the other hand, \(\PGL(4, q)\) is transitive on hyperplanes of \(\PG(3, q)\), so each plane is tangent to the same number of Tits ovoids. It thus follows that there are \[\frac{(q + 1)^2q^4{(q-1)}^2(q^2+q+1)(q^2+1) }{q^3+q^2+q+1}= {(q-1)}^{2}q^4(q+1)(q^2+q+1)\] Tits ovoids tangent to \(H \cap H_{\infty}\) contained in \(H\). Furthermore, since \({\PGL(4, q)}_{H \cap H_{\infty}}\) is transitive on points of \(H \cap H_{\infty}\), each point of \(H \cap H_{\infty}\) is contained in the same number of Tits ovoids \(\mathcal{O}\), so it follows that the number of Tits ovoids tangent to \(H \cap H_{\infty}\) at \(R = p_{\infty} \cap H\) is \({(q - 1)}^2q^4(q + 1)\). Hence, there is the same number of ovoidal cones with base a Tits ovoid, vertex \(V\), and meeting \(H_{\infty}\) at \(p_{\infty}\). As the choice of \(V\) was arbitrary, and there are \(q+1\) points on \(p_\infty\), there are \({(q^2 - 1)}^2q^4\) ovoidal cones with base a Tits ovoid meeting \(H_{\infty}\) at \(p_{\infty}\). \end{proof} \begin{theorem}\label{cor:proj-equivalence} All Buekenhout-Tits unitals in \(\PG(2, q^{2})\) are \(\PGL\)-equivalent. \end{theorem} \begin{proof} From Lemmas \ref{thm:unitals-equivalent-in-plane} and \ref{lem:numbercones}, we see that the number of ovoidal cones with base a Tits ovoid meeting \(H_{\infty}\) at \(p_{\infty}\) is equal to the number of Buekenhout-Tits unitals that are \(\PGL\)-equivalent to \(\U\) and tangent to \(\ell_{\infty}\) at \(P_{\infty}\) with Property~\ref{prop:subline-property}. The result follows. \end{proof} \begin{corollary}\label{cor:bt-stab-equiv} Let \(U\) be a Buekenhout-Tits unital; then the projectivity group stabilising \(U\) is isomorphic to the group \(G\) of Result~\ref{result:ebert-stab}. \end{corollary} Having shown that all Buekenhout-Tits unitals are projectively equivalent, we may use \(\U\) to verify statements about general Buekenhout-Tits unitals. \section{On the Stabiliser of the Buekenhout-Tits Unital}\label{sec:stab-buek-tits} We now describe the stabiliser of the Buekenhout-Tits unital \(\U\) in \(\PTL(3, q^{2})\). \begin{lemma}\label{lem:G-matrix-mult} Let \(M_{u, v}, M_{s, t}\) be matrices inducing collineations of \(G\) as defined in Result~\ref{result:ebert-stab}; then \(M_{u, v} M_{s, t} = M_{u + s, t + v + su \delta}\). 
\end{lemma} \begin{proof} Using equation~\eqref{eq:Muv} and the relation \(\epsilon^{2} = \epsilon + \delta\), we find \begin{align} M_{u, v} M_{s, t} = \begin{bmatrix} 1 & (s + u)\epsilon & (t + v + su \delta) + {(s + u)}^{\sigma}\epsilon \\ 0 & 1 & (u + s) + (u + s)\epsilon \\ 0 & 0 & 1 \end{bmatrix}. \end{align} Thus, we have \(M_{u,v} M_{s, t} = M_{u + s, t + v + su \delta}\). \end{proof} \begin{corollary}\label{cor:G-matrix-order} The order of any collineation of \(G\) induced by a matrix \(M_{u, v}\) as defined in Result~\ref{result:ebert-stab} is four if and only if \(u \neq 0\), and two if and only if \(u = 0\) and \(v \neq 0\). \end{corollary} \begin{proof} Firstly note that \(M_{0, 0} = I\). Direct calculation shows that \(M_{u,v}^2=M_{0,u^2\delta}\), \(M_{u,v}^3=M_{u,v+u^2\delta}\) and \(M_{u,v}^4=M_{0,0}\). Hence, if \(u \neq 0\), then \(M_{u,v}^{2} \neq M_{0,0}\) and \(M_{u,v}\) has order four, while if \(u = 0\) and \(v \neq 0\), then \(M_{0,v} \neq M_{0,0}\) and \(M_{0,v}^{2} = M_{0,0}\), so \(M_{0,v}\) has order two. \end{proof} \begin{corollary}\label{cor:bt-homography-group-simple} The stabiliser group \(G\) as defined in Result~\ref{result:ebert-stab} is isomorphic to \({(C_{4})}^{2e + 1}\). \end{corollary} \begin{proof} Recall from Result~\ref{result:ebert-stab} that $|G|=q^2=2^{4e+2}$. From Corollary~\ref{cor:G-matrix-order}, we have that \(G \cong {(C_{4})}^{k}{(C_{2})}^{l}\) for some integers \(k, l\) such that \(2^{2k + l} = |G| = 2^{4e + 2}\), and hence, \begin{align}l = 2(2e + 1 - k). \label{constraints} \end{align} Furthermore, we see that the number of elements of order four in $G$ is \(q^{2} - q\), as they correspond to all matrices $M_{u,v}$ with $u,v\in \F_q$ and $u\neq 0$. The number of elements of order four in a group isomorphic to \({(C_{4})}^{k}{(C_{2})}^{l}\) is \((4^{k} - 2^{k})2^{l}\). Thus, \begin{equation} (4^{k} - 2^{k})2^{l} = 4^{2e + 1} - 2^{2e + 1}. \end{equation} Using \eqref{constraints}, we find that \(k = 2e + 1\), and therefore \(G \cong {(C_{4})}^{2e + 1}\). \end{proof} \begin{theorem}\label{thm:bt-unital-pgammal-stab} Let \(U\) be a Buekenhout-Tits unital in $\PG(2,q^2)$, $q=2^{2e+1}$; then the stabiliser group of \(U\) in \(\PTL(3, q^{2})\) is the order \(q^{2}(4e + 2)\) group \(GK\), where \(K\) is a cyclic subgroup of order \(16e + 8\) generated by \begin{equation} \psi :\mathbf{x}\mapsto \mathbf{x}^{2} \begin{bmatrix} 1 & 1 & \epsilon \\ 0 & \delta^{\sigma/2}(1 + \epsilon) & \delta^{\sigma/2}(1 + \epsilon) \\ 0 & 0 & \delta^{\sigma+1} \end{bmatrix}. \end{equation} (Here, $\mathbf{x}$ denotes the row vector containing the three homogeneous coordinates of a point, and $\mathbf{x}^2$ denotes its elementwise square.) \end{theorem} \begin{proof} Since all Buekenhout-Tits unitals are \(\PGL\)-equivalent by Theorem \ref{cor:proj-equivalence}, and \(\PGL(3, q^2) \triangleleft \PTL(3, q^2)\), the \(\PGL\)- and \(\PTL\)-orbits of \(U\) coincide, so that \begin{equation} \frac{|\PGL(3, q^2)|}{|\PGL{(3, q^2)}_U|} = \frac{|\PTL(3, q^2)|}{|\PTL{(3, q^2)}_{U}|}. \end{equation} As \(|\PTL(3, q^2)| = (4e + 2)\,|\PGL(3, q^2)|\), the group \(\PTL(3, q^2)_U\) must have order \(q^{2}(4e + 2)\). Direct calculation shows that \(\psi\) stabilises $\U$. We have \(\psi^{4e + 2} \in G\) as \(\mathbf{x}^{2^{4e + 2}} = \mathbf{x}^{q^{2}} = \mathbf{x}\). Hence, \(|\psi| = (4e + 2)|\psi^{4e + 2}|\). From Corollary~\ref{cor:G-matrix-order}, it follows that \(|\psi^{4e + 2}| \in \{ 1, 2, 4\}\), with \(|\psi^{4e + 2}| = 4\) if and only if \(\psi^{4e + 2}\) is induced by \(M_{u, v}\) for some \(u \neq 0\). Hence, \(|\psi^{4e + 2}| = 4\) if and only if \(\psi^{4e + 2}(0, 1, 0) \neq (0, 1, 0)\), as \((0, 1, 0)M_{u, v} = (0,1,u + u \epsilon)\). Consider the point \((0, 1, z)\) for some arbitrary \(z \in \mathbb{F}_q\).
Direct calculation shows that \(\psi(0, 1, z) = (0, 1, 1 + \mu z^2)\), where \(\mu = \frac{\delta^{\sigma + 1}}{\delta^{\sigma/2}(1 + \epsilon)} = \delta^{\sigma/2}\epsilon\). Thus, \begin{equation} \psi^{k}(0, 1, z) = (0, 1, \sum_{i=0}^{k - 1}\mu^{2^{i} - 1} + zg(z)) \end{equation} for some polynomial \(g(z)\) depending on \(k\). If \(z = 0\) and \(k = 4e + 2\) we thus find: \begin{align} \psi^{4e + 2}(0, 1, 0) & = (0, 1, \sum_{i=0}^{4e + 1}\mu^{2^{i} - 1}) \\ & = (0, 1, \frac{\trace(\mu)}{\mu}). \end{align} Recall that $\epsilon^q=\epsilon+1$, so $\trace_{\mathbb{F}_{q^2}/\mathbb{F}_{q}}(\epsilon)=1$. We have $\trace_{\mathbb{F}_{q^2}/\mathbb{F}_{2}}(\delta^{\sigma/2}\epsilon)=\trace_{\mathbb{F}_{q}/\mathbb{F}_{2}}(\trace_{\mathbb{F}_{q^2}/\mathbb{F}_{q}}(\delta^{\sigma/2}\epsilon))=\trace_{\mathbb{F}_{q}/\mathbb{F}_{2}}(\delta^{\sigma/2}\trace_{\mathbb{F}_{q^2}/\mathbb{F}_{q}}(\epsilon))=\trace_{\mathbb{F}_{q}/\mathbb{F}_{2}}(\delta^{\sigma/2}) = \trace_{\mathbb{F}_{q}/\mathbb{F}_{2}}(\delta) = 1,$ where the last two equalities hold since $\sigma/2$ is a power of the Frobenius automorphism and $\trace_{\mathbb{F}_{q}/\mathbb{F}_{2}}(\delta)=1$. Hence, \(\psi^{4e + 2}(0, 1, 0) \neq (0, 1, 0)\), so \(|\psi^{4e + 2}| = 4\) and \(|\psi| = 16e + 8\). Let \(K = \langle\psi\rangle\); since \(K \cap G = \langle\psi^{4e + 2}\rangle\) has order \(4\), it follows that \(|GK| = q^{2}(4e + 2)\) and thus \(GK = \PTL{(3, q^{2})}_U\). \end{proof} \section{On the Feet of the Buekenhout-Tits Unital} The feet of the Buekenhout-Tits unital $\U$ were first described by Ebert in~\cite{ebert97_bueken_tits_unital}. He shows that the feet of a point \(P = (1, y_1 + y_2\epsilon, z_1 + z_2\epsilon)\) are the following set of points: \begin{multline} \label{feetex} \tau_P(\U) = \{(1, s + t\epsilon, s^2 + t^2\delta + st + y_1s + y_1t + y_2\delta{t} + z_1 + (s^{\sigma + 2} + t^{\sigma} + st)\epsilon)\\|\,s, t \in \mathbb{F}_q,\, s^{\sigma + 2} + t^{\sigma} + st = y_2s + y_1t + z_2\}. \end{multline} If the line \(\ell\) has equation \(\alpha x + y = 0\), where \(\alpha \in \mathbb{F}_{q^2}\), Ebert shows that \(|\ell \cap \tau_P(\U)| \leq 1\). Otherwise, \(\ell\) has equation \((a_1 + a_2\epsilon)x + (b_1 + b_2\epsilon)y + z = 0\), with $a_1,a_2,b_1,b_2\in \F_q$, and Ebert shows that \(\ell\) meets \(\tau_P(\U)\) in the points \(P_{r, s, t} \in \U\), where \(r, s, t \in \F_q\) satisfy \begin{align} s^2 + \delta t^2 + st + (y_1 + b_1) s + (y_1 + y_2 \delta + b_2 \delta) t + z_1 + a_1 & = 0, \label{eq:sys-1} \\ s^{\sigma + 2} + t^{\sigma} + st & = b_2 s + (b_1 + b_2) t + a_2,\label{eq:sys-2} \\ y_{2} s + y_{1} t + z_{2} & = b_{2} s + (b_{1} + b_{2})t + a_{2}. \label{eq:sys-3} \end{align} We will show that for all choices of points \(P \notin (\ell_{\infty} \cup \U)\) and lines \(\ell\), \(|\tau_{P}(\U) \cap \ell| \leq 4\). \begin{lemma}\label{lem:orb-reps} Let \(G\) be the group of projectivities stabilising the Buekenhout-Tits unital as described in Result~\ref{result:ebert-stab}. Then, the \(q^2 - q\) points \(\{P_{a, b} = (1, a, b \epsilon) \,|\, a,b \in \mathbb{F}_q,\, b \neq a^{\sigma + 2}\}\) are representatives of \(q^2 - q\) distinct point orbits of size \(q^2\) under \(G\). \end{lemma} \begin{proof} Suppose there exists a collineation of \(G\) induced by a matrix \(M_{u, v}\) such that \(P_{a, b} M_{u, v} = P_{c, d}\). Then, \begin{equation*} \left( 1, a, b \epsilon \right) \begin{bmatrix} 1 & u \epsilon & v + u^{\sigma} \epsilon \\ 0 & 1 & u + u \epsilon \\ 0 & 0 & 1 \end{bmatrix} = \left( 1, c, d \epsilon \right). \end{equation*} However, it is clear that \(P_{a, b}M_{u, v} = \left( 1, a + u \epsilon, v + u^{\sigma}\epsilon + a \left( u + u \epsilon \right) + b\epsilon \right)\), so \(a + u \epsilon = c\). Therefore, \(a = c\) and \(u = 0\).
If \(u = 0\), then \(v + b\epsilon = d \epsilon\), so \(v = 0\) and \(b = d\). Hence, \(P_{a, b} = P_{c, d}\), only the identity of \(G\) fixes \(P_{a, b}\), and the lemma follows. \end{proof} There are \(q^4 - q^3 = q^{2} (q^2 - q)\) points of \(\PG(2, q^2)\) not on \(\ell_{\infty}\) or \(\U\). By Lemma~\ref{lem:orb-reps}, each of these points lies in the orbit of a point of the form \((1, a, b \epsilon)\). Therefore, in order to study the feet of a point $P$, we may assume that the point \(P = (1, y_1 + y_2\epsilon, z_1 + z_2\epsilon)\) has \(y_2 = z_1 = 0\). The following lemma shows that the feet of a point $P = (1, y_1, z_2\epsilon)$ meet almost all lines in at most $2$ points. \begin{lemma}\label{lem:easy-case} Let $\ell:\alpha x+\beta y + z = 0$ be a line in $\PG(2,q^2)$, where \(\alpha= a_{1} + a_{2} \epsilon\), \(\beta = b_{1} + b_{2} \epsilon\) and \(a_{1}, a_{2}, b_{1}, b_{2} \in \mathbb{F}_{q}\). Let \(P = (1, y_{1}, z_{2} \epsilon)\), with \(y_{1}, z_{2} \in \mathbb{F}_{q}\) such that \(z_{2} \neq y_{1}^{\sigma + 2}\), so that \(P \notin \U\). Unless \(b_{2} = 0\), \(y_{1} = b_{1}\) and \(a_{2} = z_{2}\), we have \(|\tau_P(\U) \cap \ell| \leq 2\). \end{lemma} \begin{proof} From the description given in \eqref{feetex}, we see that the points \(P_{r, s, t} \in \tau_{P}(\U)\) satisfy \(s^{\sigma + 2} + t^{\sigma} + st = y_{1}t + z_{2}\), and this equation has $q+1$ solutions. Substituting this into equations~\eqref{eq:sys-1}--\eqref{eq:sys-3}, the points \(P_{r, s, t} \in \tau_P(\U) \cap \ell\) have \(s, t\) satisfying \begin{align} s^{2} + \delta t^{2} + st + \left( y_{1} + b_{1} \right)s + \left( y_{1} + b_{2} \delta \right) t + a_{1} & = 0 \label{eq:sys-1-easy-case} \\ b_{2} s + \left( y_{1} + b_{1} + b_{2} \right)t + a_{2} + z_{2} & = 0 \label{eq:sys-2-easy-case} \\ s^{\sigma + 2} + t^{\sigma} + st + y_{1}t + z_{2} & = 0. \label{eq:sys-3-easy-case} \end{align} Recall that the points $(1,s,t,s^{\sigma + 2}+t^\sigma+ st)$, where $s,t\in\F_q$, are the $q^2$ affine points of a Tits ovoid. Hence, \eqref{eq:sys-3-easy-case} represents an affine section of a Tits ovoid. Since it has $q+1$ points, it is an oval projectively equivalent to the translation oval \(\mathcal{D}_{\sigma} = \left\{(1, t, t^{\sigma})\,|\, t \in \mathbb{F}_q\right\}\). Unless \(b_{2} = 0\) and \(y_{1} = b_{1}\), equation~\eqref{eq:sys-2-easy-case} represents a line in \(\AG(2, q)\) which meets the oval \eqref{eq:sys-3-easy-case} in at most two points, so we have at most two solutions to the system. If \(b_{2} = 0\), \(y_{1} = b_{1}\), and \(a_{2} \neq z_{2}\), then equation~(\ref{eq:sys-2-easy-case}) has no solutions. \end{proof} \begin{remark}\label{rem:barwick-remark} Lemma~\ref{lem:easy-case} is a refinement of~\cite[Theorem 4.33]{barwick09_unital_projec_planes}, where Barwick and Ebert rework Ebert's earlier proof in~\cite{ebert97_bueken_tits_unital} that the feet of a point \(P \notin (\ell_{\infty} \cup \U)\) are not collinear. This reworked proof asserts that the feet cannot be collinear because the line given by equation~\eqref{eq:sys-2-easy-case} and the conic from equation~\eqref{eq:sys-1-easy-case} cannot have \(q + 1\) common solutions. However, this argument is not complete, and leaves an interesting case to examine when equation~\eqref{eq:sys-2-easy-case} vanishes. Ebert's original proof in~\cite{ebert97_bueken_tits_unital} does not contain this error, instead arguing that equations~\eqref{eq:sys-1-easy-case} and~\eqref{eq:sys-3-easy-case} cannot have \(q + 1\) common solutions.
\end{remark} It follows from Lemma~\ref{lem:easy-case} that the feet of a point \(P \notin (\ell_{\infty} \cup \U)\) form a set of \(q + 1\) points such that every line, except possibly the lines of a set of \(q\) concurrent lines, meets \(\tau_P(\U)\) in at most two points. Hence, assume now that \(b_{2} = 0\), \(y_1 = b_1\) and \(a_2 = z_2\). In this case, equation~\eqref{eq:sys-2-easy-case} vanishes. The system describing \(\ell \cap \tau_P(\U)\) is thus \begin{align} s^{2} + \delta t^{2} + st & = y_{1} t + a_{1}\label{eq:sys-1-simple} \\ s^{\sigma + 2} + t^{\sigma} + st & = y_{1} t + z_{2}\label{eq:sys-2-simple}. \end{align} The lines that produce these cases are the lines with dual coordinates \([a_{1} + z_{2}\epsilon, y_{1}, 1]\). These lines are concurrent at the point \((0, 1, y_{1})\), which lies on \(\ell_{\infty}\). We will show in Corollary \ref{cor:rough-bound} that these latter lines meet \(\tau_P(\U)\) in at most four points. We require the following lemma, which adapts arguments found in~\cite[Lemma 2.1]{ceria21_mds}. \begin{lemma}\label{lem:nucleus-argument} Let \(\mathcal{O}\) be a translation oval in \(\PG(2, q)\) projectively equivalent to \(\mathcal{D}_{\sigma}\), and let \(\mathcal{C}\) be a non-degenerate conic. If the nucleus of \(\mathcal{O}\) is also the nucleus of \(\mathcal{C}\), then \(|\mathcal{O} \cap \mathcal{C}| \leq 4\). \end{lemma} \begin{proof} Without loss of generality we may take \(\mathcal{O} = \mathcal{D}_{\sigma}\), so that the nucleus of \(\mathcal{O}\) is \(N = (0, 1, 0)\). If \(N\) is also the nucleus of \(\mathcal{C}\), then \(\mathcal{C}\) is a conic of the following form, \begin{equation}\label{eq:conic-equation} a_1 x^2 + a_2 y^2 + a_3 z^2 + x z = 0, \end{equation} for some \(a_1, a_2, a_3 \in \mathbb{F}_q\) with \(a_{2} \neq 0\). Suppose that \((0, 0, 1) \notin \mathcal{C}\). Then \(a_3 \neq 0\), and the point \((1, t, t^{\sigma}) \in \mathcal{O}\) lies on \(\mathcal{C}\) if and only if \(t\) satisfies \begin{equation}\label{eq:sigmas} a_1 + a_2 t^2 + a_3 t^{2 \sigma} + t^\sigma = 0, \end{equation} that is \begin{equation} 0 = {\left( a_1 + a_2 t^{2} + a_3 t^{2 \sigma} + t^{\sigma} \right)}^{\sigma/2} = a_{1}^{\sigma/2} + a_2^{\sigma/2} t^\sigma + a_3^{\sigma/2} t^2 + t. \end{equation} Therefore, \begin{equation} t^\sigma = {\left( \frac{a_3}{a_2} \right)}^{\sigma/2} t^{2} + \frac{1}{a_2^{\sigma/2}} t + {\left(\frac{a_1}{a_2}\right)}^{\sigma/2} \end{equation} and substituting into equation \eqref{eq:sigmas}, we find that this equation has at most four solutions. If instead \((0, 0, 1) \in \mathcal{C}\), then \(a_3 = 0\) and arguing as above we find that equation \eqref{eq:sigmas} has at most two solutions, so \(|\mathcal{O} \cap \mathcal{C}| \leq 3\). \end{proof} \begin{corollary}\label{cor:rough-bound} The feet of a point \(P \notin \left( \ell_{\infty} \cup \U \right)\) meet a line \(\ell\) in at most four points. \end{corollary} \begin{proof}From Lemma \ref{lem:easy-case}, we may restrict ourselves to the case $b_2=0$, $y_1=b_1$, $a_2=z_2$, which means that the points \(P_{r, s, t} \in \tau_P(\U) \cap \ell\) have \(s, t\) satisfying \begin{align} s^{2} + \delta t^{2} + st & = y_{1} t + a_{1}\label{eq:conic} \\ s^{\sigma + 2} + t^{\sigma} + st & = y_{1} t + z_{2}\label{eq:oval}, \end{align} where equation \eqref{eq:conic} represents a conic $\mathcal{C}$, and equation \eqref{eq:oval} represents an oval $\mathcal{O}$ in \(\AG(2, q)\). If the conic is degenerate, it consists of at most two lines, each meeting the oval in at most two points, so the oval and the conic have at most four points in common. So we may assume that the conic is non-degenerate.
The nucleus of $\mathcal{C}$ is \(N = (y_1, 0, 1)\). We now show that \(N\) is the nucleus of the oval $\mathcal{O}$. The line \(t = 0\) goes through $N$ and meets the oval~\eqref{eq:oval} when \(s^{\sigma + 2} = z_{2}\), which has one solution as \(x \mapsto x^{\sigma + 2}\) is a permutation of \(\mathbb{F}_{q}\). The line \(s + y_{1} = 0\) through $N$ meets the oval~\eqref{eq:oval} when \(t^{\sigma} = y_{1}^{\sigma + 2} + z_{2}\), which has one solution for $t$. Therefore, \(N\) is the nucleus, as it is the intersection of two tangent lines to the oval. It now follows from Lemma~\ref{lem:nucleus-argument} that equations~\eqref{eq:conic} and~\eqref{eq:oval} have at most four common solutions. \end{proof} We now show the existence of a point \(P \notin (\U \cup \ell_{\infty})\) and a line \(\ell\) such that \(|\ell \cap \tau_{P}(\U)| = 3\), and demonstrate that our bound is sharp. \begin{lemma}\label{lem:oval-parameterisation} Let \(y_{1} = 0\), then the points of the oval given by equation~\eqref{eq:oval} are \begin{equation} \left\{P_{u} = \left(\frac{z_{2}^{1 - \sigma/2}u^{\sigma}}{1 + u + u^{\sigma}}, \frac{z_{2}^{\sigma/2}(1 + u^{\sigma})}{1 + u + u^{\sigma}}\right)\,\middle|\,u \in \mathbb{F}_q \right\} \cup \left\{\left(z_{2}^{1 - \sigma/2}, z_{2}^{\sigma/2}\right)\right\}. \end{equation} \end{lemma} \begin{proof} If \(y_{1} = 0\), then equation~\eqref{eq:oval} reduces to \begin{equation} \label{eq:oval-y1-zero} s^{\sigma + 2} + t^{\sigma} + st + z_{2} = 0. \end{equation} Using the properties of \(\sigma\) described in Section~\ref{sec:preliminaries}, one can show the point \((z_2^{1-\sigma/2}, z_{2}^{\sigma/2})\) satisfies equation~\eqref{eq:oval-y1-zero}. Furthermore, the points \(\overline{P_u} = (z_{2}^{1 - \sigma/2}u^{\sigma}, z_{2}^{\sigma/2}(1 + u^{\sigma}), 1 + u + u^{\sigma})\), where \(u \in \mathbb{F}_q\), are projective points satisfying the following homogeneous equation \begin{equation} x^{\sigma + 2} + y^{\sigma}z^{2} + xyz^{\sigma} + z_{2}z^{\sigma + 2} = 0. \end{equation} Because \(\trace(u + u^{\sigma}) = 0\), and \(\trace(1) = 1\) when \(q = 2^{2e + 1}\), we have \(u^{\sigma} + u + 1 \neq 0\) for all \(u \in \mathbb{F}_q\). Thus, normalising so \(z = 1\), the points \(\overline{P_u}\) have the form \((s, t, 1)\) where \(s\) and \(t\) satisfy equation~\eqref{eq:oval-y1-zero}. \end{proof} \begin{corollary}\label{cor:oval-polynomial} Let \(y_{1} = 0\) and consider the points \(P_u\) as described in Lemma~\ref{lem:oval-parameterisation}. A point \(P_{u}\) lies on the conic given by equation~\eqref{eq:conic} if and only if \(u\) is a root of the following polynomial: \begin{equation}\label{final} a_{1}^{\sigma/2}u^{\sigma} + (z_{2}^{\sigma - 1} + \delta^{\sigma/2}z_{2} + z_{2}^{\sigma/2} + a_{1}^{\sigma/2})u^{2} + z_{2}^{\sigma/2}u + \delta^{\sigma/2} z_{2} + a_{1}^{\sigma/2}. \end{equation} \end{corollary} \begin{proof} By substituting \(P_u\) into equation~\eqref{eq:conic} and multiplying by \({(1 + u + u^{\sigma})}^{2}\), we have \begin{equation} \label{eq:oval-polynomial} (z_{2}^{2 - \sigma} + \delta z_{2}^{\sigma} + z_{2} + a_{1})u^{2\sigma} + z_{2}u^{\sigma} + a_{1}u^{2} + (\delta z_{2}^{\sigma} + a_{1}) = 0. \end{equation} Raising both sides of equation~\eqref{eq:oval-polynomial} to the power of \(\sigma/2\) yields our result. \end{proof} \begin{theorem}\label{thm:bt-unital-line-meet-feet} Let \(U\) be a Buekenhout-Tits unital in \(\PG(2, q^{2})\). The feet of a point \(P \notin (\ell_{\infty} \cup U)\) meet a line \(\ell\) in at most four points.
Moreover, there exist a line \(\ell\) and a point \(P\) such that \(|\ell \cap \tau_{P}(U)| = k\) for each \(k \in \{0, 1, 2, 3, 4\}\). \end{theorem} \begin{proof} By Theorem~\ref{cor:proj-equivalence} we may assume that \(U = \U\). The first part of the statement is Corollary~\ref{cor:rough-bound}. Let \(P = (1, y_1, z_2\epsilon)\). All lines through \(P\) meet \(\tau_P(U)\) in at most one point by definition, so it is clear that there exist lines \(\ell\) such that \(|\ell \cap \tau_P(U)|\) is zero or one. Because the points of \(\tau_P(U)\) are not collinear, there exists a pair of points \(Q, R \in \tau_P(U)\) such that the line \(QR\) does not contain \((0, 1, y_1)\). Hence, the line \(QR\) meets \(\tau_P(U)\) in precisely two points by Lemma~\ref{lem:easy-case}. Now consider a line \(\ell\) with equation \((\delta + \epsilon)x + z = 0\) and let $P$ be the point $(1,0,\epsilon)$ (that is, \(a_1=\delta, a_2=1, b_1=b_2=y_{1} = 0, z_{2} = 1\)). The number of points of \(\ell \cap \tau_P(U)\) is the same as the number of solutions to equations~\eqref{eq:sys-1-simple} and \eqref{eq:sys-2-simple}. By Lemma~\ref{lem:oval-parameterisation} the points \(P_u\) satisfying equation~\eqref{eq:sys-2-simple} lie on the conic~\eqref{eq:sys-1-simple} when \begin{equation} \delta^{\sigma/2}u^{\sigma} + u = u(\delta^{\sigma/2}u^{\sigma - 1} + 1) = 0, \end{equation} which has two roots as \(x \mapsto x^{\sigma - 1}\) is a permutation of \(\mathbb{F}_q\). It can also be shown that \((z_2^{1-\sigma/2}, z_2^{\sigma/2}) = (1, 1)\) satisfies both equations. Hence, the intersection of the feet of the point \((1, 0, \epsilon)\) and \(\ell\) has exactly three points. Finally, consider the point $P = (1,0,\frac{1}{\delta^\sigma}\epsilon)$ and the line $\ell$ with dual coordinates $[\frac{1}{\delta}+\frac{1}{\delta^2}\epsilon,0,1]$. By Corollary \ref{cor:oval-polynomial}, the number of feet of $P$ on the line $\ell$ is the number of roots of the polynomial \eqref{final}, where $a_1=\frac{1}{\delta}$ and $z_2=\frac{1}{\delta^\sigma}$, which is \begin{equation}\label{final2}\frac{1}{\delta^{\sigma/2}}u^{\sigma} + (\frac{1}{\delta^{2-\sigma}}+\frac{1}{\delta})u^2+\frac{1}{\delta}u=0. \end{equation} Since the left-hand side of equation~\eqref{final2} is an $\F_2$-linearised polynomial, its roots form an $\F_2$-vector space; as there are at most $4$ roots, equation~\eqref{final2} has $1, 2,$ or $4$ roots. We will show that, under the condition $\trace(\delta)=1$, it has four roots. Multiplying equation~\eqref{final2} by $\delta$ yields $\delta^{1-\sigma/2}u^\sigma+(\delta^{\sigma-1}+1)u^2+u=0$, and now substituting $a=\delta^{\sigma-1}+1$ gives \begin{equation}\label{h1}(a^{\sigma/2}+1)u^\sigma+au^2+u=0.\end{equation} We find that \(u = 0\) and \(u = \frac{1}{a^{1 + \sigma/2}}\) are solutions to equation~\eqref{h1}. Now consider \begin{equation} \label{eq:h2} u^{\sigma} + au^2 + 1 = 0. \end{equation} Any solution \(u \in \F_q\) to equation~\eqref{eq:h2} also satisfies $(u^{\sigma} + au^2 + 1)^{\sigma / 2} + u^{\sigma} + a u^2 + 1 = 0$, which is precisely equation~\eqref{h1} (using \(u^{\sigma^{2}/2} = u^{q} = u\)). Multiplying equation~\eqref{eq:h2} by $a^{\sigma+1}$, we find $(a^{\sigma/2+1}u)^\sigma+(a^{\sigma/2+1}u)^2+a^{\sigma+1}=0$, and letting \(z = (a^{\sigma/2 + 1}u)^2\), \begin{equation}\label{eq:transform-eq} z^{\sigma/2}+z+a^{\sigma+1}=0,\end{equation} which is known (see \cite{menichetti}) to have solutions if and only if $\trace(a^{\sigma+1})=0$.
As \(z = 0\) and \(z = 1\) are not solutions of equation~\eqref{eq:transform-eq}, no solutions of equation~\eqref{eq:transform-eq} correspond to the solutions \(u = 0\) or \(u = \frac{1}{a^{1 + \sigma/2}}\) of equation~\eqref{final2}. Furthermore, recall that equation~\eqref{final2} has $1, 2$ or $4$ solutions and that we have assumed that $\trace(\delta)=1$. Since $\delta^{\sigma-1}=a+1$, it follows that $\delta=(a+1)^{\sigma+1}$ and, as $\trace(a^{\sigma})=\trace(a)$, $\trace(\delta)=\trace(a^{\sigma+1}+a^\sigma+a+1)=\trace(a^{\sigma+1})+\trace(1)=\trace(a^{\sigma+1})+1$. Hence, the conditions $\trace(\delta)=1$ and $\trace(a^{\sigma+1})=0$ are equivalent, so equation~\eqref{eq:transform-eq} has a solution, yielding a root of equation~\eqref{final2} distinct from \(u = 0\) and \(u = \frac{1}{a^{1 + \sigma/2}}\); equation~\eqref{final2} therefore has more than two roots, and so it has exactly four. \end{proof} \printbibliography% \noindent {\bf Address of the authors:}\\ Jake Faulkner \texttt{jake.faulkner@pg.canterbury.ac.nz}\\ \noindent Geertrui Van de Voorde \texttt{geertrui.vandevoorde@canterbury.ac.nz}\\ \noindent School of Mathematics and Statistics\\ University of Canterbury\\ Private bag 4800\\ 8140 Christchurch\\ New Zealand \end{document}
{ "timestamp": "2022-09-23T02:10:57", "yymm": "2209", "arxiv_id": "2209.10863", "language": "en", "url": "https://arxiv.org/abs/2209.10863" }
\section{Introduction} Cherenkov radiation is light produced by charged particles when they pass through an optically transparent medium at speeds exceeding the speed of light in that medium~\cite{CRdefinition}. It was first observed experimentally in 1937 by Pavel Cherenkov~\cite{Cherenkov1937}. He shared the 1958 Nobel Prize in Physics with Ilya Frank and Igor Tamm, who developed a theoretical model of the phenomenon~\cite{Tamm1939}. The model was improved by Ginzburg and Frank to show the emission originated from dielectric material regions parallel to the particle motion \cite{Ginzburg1947}. The radiation emission has since been calculated using electromagnetic field eigenvalues~\cite{Linhart1955} and Di Francia expansions~\cite{Ulrich1966}. More recent work has described the generation of Cherenkov diffraction radiation (ChDR) in different scenarios~\cite{Harryman2020, Lasocha2020}.\\ In the last few years, the existence of ChDR has been proven experimentally~\cite{Kieffer2018}. ChDR can be emitted if an ultrarelativistic charged particle moves in the vicinity of a dielectric medium~\cite{Alves2019}. The atoms of the medium get polarized by the electric field of the ultrarelativistic charged particle, oscillate, and thereby emit light~\cite{Bobb2018} at a characteristic Cherenkov angle $\cos(\theta)=\frac{1}{\beta n}$, where $\beta = v/c$ is the particle speed in units of the speed of light and $n$ is the refractive index of the material. For fused silica $(n=1.46)$ and ultrarelativistic particles $(\beta \approx 1)$ the angle is approximately $\SI{46.8}{\degree}$~\cite{Lefevre2018}.\\ ChDR has been proposed as a method for non-invasive beam diagnostics, as the particles do not physically interact with the radiator~\cite{Kieffer2020}. Beam position and bunch length monitors exploiting ChDR emission have been trialled successfully~\cite{Alves2019, Curcio2020}. In this article we present the results of placing a dielectric radiator in the vicinity of a particle beam at the DESY II Test Beam Facility and measuring the emission rates of photons under different conditions\footnote{All experiments were conducted by high school students under expert guidance as part of the Beamline for Schools (BL4S) competition 2020. BL4S is a worldwide competition offered by CERN since 2014 that provides high school students with the opportunity to conduct their own experiments at a state-of-the-art particle accelerator~\cite{Arce-Larreta2021}. In the years 2019 - 2021, BL4S was co-organized by DESY and held mostly at their facilities in Hamburg due to the Long Shutdown 2 at CERN~\cite{Aretz2020}.}. We focus on a comparison of the emissions from electrons and positrons in the same setup. To our knowledge this has not been done before, as previous experiments were conducted on circular colliders where electrons and positrons travel in opposite directions~\cite{Kieffer2020}. \section{Methods} \subsection{Experimental setup} The DESY II Test Beam Facility offers positron and electron beams with selectable momenta from $\SI{1}{\GeV\per\c}$ to $\SI{6}{\GeV\per\c}$~\cite{Diener2018}. A maximum particle rate of $\SI{10}{\kilo\hertz}$ is reached at around $\SI{2}{\GeV\per\c}$~\cite{Aretz2020}. The Test Beam is generated by double conversion of the DESY II synchrotron beam~\cite{Diener2018}. Bremsstrahlung is produced from $\SI{7}{\micro\metre}$ carbon primary targets held inside the synchrotron beam. The Bremsstrahlung then creates electron-positron pairs on a secondary metal target.
The particles subsequently pass through a dipole magnet, which allows selection of particle type and momentum. A $\SI{10}{\mm} \times \SI{20}{\mm}$ collimator narrows the beam before it traverses the experimental setup.\\ The experimental setup (see Fig.~\ref{fig:ExpSetup}) comprises a beam telescope consisting of six silicon pixel detectors~\cite{Telescope2016} that are permanently installed at DESY, a photomultiplier tube (PMT) and a fused silica radiator. The beam telescope features a high resolution of the order of a few micrometers and a low material budget, which enables the reconstruction of particle tracks in the given momentum range and thus an estimate of the particle's distance to the radiator. It is used in a configuration (see Table \ref{tab:DetectorPositions}) with three detector planes each before and behind the radiator. In addition, a pair of scintillators is utilised as input to the trigger system. The centerpiece of the experiment, the radiator and the PMT, can be seen in Figure~\ref{fig:Radiator setup}. The PMT (ET Enterprises 9813QKB) was operated at $\SI{1650}{\V}$ for all experiments.\\ The radiator is positioned partially inside the beam, such that the center of the beam spot is located at the edge of the radiator. Thus, the majority of the particles pass in close proximity to the radiator. Inevitably, a significant fraction of particles traverses the radiator, leading to emission of non-diffraction Cherenkov radiation. Using the track information obtained via the beam telescope, events of this type can be identified. To reduce contamination from ambient light, PMT and radiator were placed in an aluminum box, painted black on the inside (see Fig.~\ref{fig:Radiator setup}). The box was placed on linear motion stages for an alignment transverse to the beam, while the radiator itself was mounted on a rotation stage for an angular alignment parallel to the beam. Beam windows covered with black tape were added to reduce the material budget while maintaining the blocking of ambient light.\\ The PMT can be further equipped with polarization filters in order to study radiation polarization. ChDR is polarized, as it arises from the fields of charged particles inducing dynamic polarization currents at the air-radiator interface~\cite{Shevelev2014}. The angular distribution is determined by the spatial arrangement of particle beam and radiator~\cite{Shevelev2014}. \subsection{Radiator} Right-angled trapezoid prism radiators made of high purity fused silica (SiO$_{2}$) were obtained from Heraeus~\cite{Heraeus} and CERN. The dimensions were $\SI{15}{\cm} \times \SI{1.5}{\cm} \times \SI{1}{\cm}$ and $\SI{5}{\cm} \times \SI{1}{\cm} \times \SI{0.5}{\cm}$, respectively. The prism geometry of the radiators allows ChDR generated over the entire length of the radiator to reach the PMT. Since a significant fraction of the light undergoes internal reflection, it reaches the wedge-shaped end of the radiator (see Fig.~\ref{fig:SketchRadiator}). A reflective coating was applied to the surface angled at $\SI{21.8}{\degree}$, so that the radiation exits the radiator perpendicular to the opposite surface. To determine the QDC signal baseline, a small piece of aluminum foil, blocking the exiting light from entering the PMT, was temporarily applied over this area.
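The two angles governing this geometry can be reproduced with a short calculation (a minimal Python sketch of our own, for illustration only, not part of the experimental toolchain):
\begin{verbatim}
import math

n = 1.46      # refractive index of fused silica
beta = 1.0    # ultrarelativistic particles, v ~ c

# Characteristic Cherenkov angle: cos(theta) = 1/(beta*n)
theta_ch = math.degrees(math.acos(1.0 / (beta * n)))

# Critical angle for internal reflection: sin(theta_c) = 1/n
theta_crit = math.degrees(math.asin(1.0 / n))

print(f"Cherenkov angle: {theta_ch:.1f} deg")    # ~46.8 deg
print(f"Critical angle:  {theta_crit:.1f} deg")  # ~43.2 deg
\end{verbatim}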
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure1.png} \caption{Sketch of the experimental setup at the DESY II Test Beam Facility.} \label{fig:ExpSetup} \end{figure} \subsection{Triggering \& Data Acquisition} Two assemblies, each consisting of a scintillator, a light guide and a PMT, were used for triggering purposes. These scintillators were powered and their signals interpreted by the Trigger Logic Unit (TLU)~\cite{TLU2019}, which performs coincidence detection on discriminated input signals with a programmable threshold. The TLU in turn forms a particle trigger signal and performs a trigger-busy handshake with all detectors, inhibiting any further trigger signals from being distributed while any detector is indicating a busy signal. In consequence of a trigger, the telescope data is recorded, storing the data as well as the trigger numbers to disk. The PMT signal is digitized via a $\SI{12}{\bit}$ charge-to-digital converter (QDC) of the type CAEN V965~\cite{CAEN}. For this, an integration window is created through a pulse generator, initiated by the trigger signal and with a width that was empirically determined to cover the full duration of all PMT pulses. The QDC raises a busy signal while the integration is in process. The data acquisition is controlled via the EUDAQ2 framework~\cite{EUDAQ}. This software enables the initialization, configuration and control of the telescope, the QDC, the scintillators, the TLU, and the motion and rotation stages via dedicated configuration files. It furthermore features so-called producers, which have the task of writing the data to disk. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure2.png} \caption{Sketch of ChDR emission in a long dielectric radiator based on a design previously described for non-invasive beam diagnostics~\cite{Alves2019}.} \label{fig:SketchRadiator} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure3.png} \caption{Picture of the radiator mounted on a movable stage next to the PMT inside a black painted aluminum box.} \label{fig:Radiator setup} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure4.png} \caption{Picture of a fused silica radiator used in the experiments with the geometry described in Figure \ref{fig:SketchRadiator}.} \label{fig:Radiator} \end{figure} \subsection{Data Analysis} The data was analysed using ROOT~\cite{ROOT} and PyROOT in Jupyter notebooks~\cite{Python}. The particle hits were clustered and tracked using the Corryvreckan library~\cite{Dannheim2021}. The position of the particles at the level of the radiator was calculated by reconstructing the pathway of the particle with straight lines fitted through the clusters before and after the radiator. The particle tracks were then used to calculate the impact parameter (the distance of the particle from the radiator surface). Positive values indicate a position inside the radiator, negative values indicate a position outside of the radiator. The position of the radiator edge was determined using a Material Budget Image (MBI) (see Fig.~\ref{fig:MBI79}), representing a two-dimensional mapping of the amount of material traversed by relativistic charged particles~\cite{Jansen2018}. This was achieved by fitting a straight line through the points on the MBI where the value of the kink angle reached the midpoint between the minimum and the maximum value.\\ Next, the pulse amplitude measured from the PMT was plotted as a function of the impact parameter.
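The impact-parameter determination can be sketched in a few lines of Python (a simplified illustration with hypothetical array names and geometry; the actual analysis relies on the Corryvreckan tracking output):
\begin{verbatim}
import numpy as np

def impact_parameter(z_planes, x_hits, z_radiator, x_edge):
    """Fit a straight line through the telescope clusters of one
    track and return its signed distance from the radiator edge
    (sign convention assumed: positive inside the radiator)."""
    slope, intercept = np.polyfit(z_planes, x_hits, 1)
    x_at_radiator = slope * z_radiator + intercept
    return x_at_radiator - x_edge

# Hypothetical track: cluster x positions (mm) on three planes
z = np.array([0.0, 150.0, 300.0])   # plane positions along the beam
x = np.array([4.95, 5.00, 5.06])    # measured cluster positions
print(impact_parameter(z, x, z_radiator=450.0, x_edge=5.2))
\end{verbatim}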
Events with impact parameters greater than $\SI{1}{\mm}$ or smaller than $\SI{-1.2}{\mm}$ were excluded from the analysis, as they were considered to be too far away from the air-radiator interface. The graph obtained was then modelled by an exponential function~\cite{Kieffer2018} of type $a+b\cdot\mathrm{e}^{cx}$ over an interval of length $\SI{1}{\mm}$ ending at the previously determined radiator edge (see Fig.~\ref{fig:Electronfit}). The integral of the exponential function in this interval was used as a single number to compare the total ChDR emission under different experimental conditions. Finally, the arbitrary QDC output unit was converted to photons using a calibration run with a controlled light emission (see Section~\ref{sec:QDC output}). We then performed a linear regression on the photon generation as a function of particle momentum using R (see Fig.~\ref{fig:Integrals}). The obtained regression line coefficients were compared using analysis of covariance (ANCOVA). The $\SI{6}{\GeV\per\c}$ electron measurement was excluded from the analysis as, after tracking, there was an insufficient number of data points with accurate tracks (see Fig.~\ref{fig:Electrons}). This is due to scattering being observed more frequently at high particle momenta. The experiment with $\SI{6}{\GeV\per\c}$ positrons, however, included enough data.\\ \begin{figure} \centering \includegraphics[width= 0.95\textwidth]{Figure5.png} \caption{X-Y projection material budget image of the beam profile kink angle. Both the radiator and the mounting structure are clearly distinguishable from the background.} \label{fig:MBI79} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{Figure6.png} \caption{Centered X projection of the MBI with the determined boundary of the radiator. Non-diffraction Cherenkov radiation is produced inside the radiator that is located between $x=\SI{0}{mm}$ and $x=\SI{10}{\mm}$. For $x$ greater than $\SI{10}{\mm}$, non-diffraction Cherenkov radiation is produced from an interaction between the beam and the mounting stage. For $x$ smaller than $\SI{0}{\mm}$, only ChDR is observed.} \label{fig:Projection79} \end{figure} \begin{figure} \centering \includegraphics[width= 0.95\textwidth]{Figure7.png} \caption{Exponential fit of the photon emission as a function of impact parameter using $\SI{3}{\GeV\per\c}$ electrons in the interval $[-1,0]$~mm.} \label{fig:Electronfit} \end{figure} \subsection{QDC output calibration} \label{sec:QDC output} A pulsed green LED next to the PMT was used to calibrate the QDC output. When the LED is pulsed such that only few photons reach the PMT, the number of photons impinging on the PMT per pulse can be assumed to follow a Poisson distribution. By adjusting the distance of the LED to the PMT, the voltage of the pulse applied to the LED, and the pulse duration, the number of photons reaching the PMT can be controlled. In this setup, the data acquisition is triggered by every LED pulse. Fewer than 1 in 10 events were observed to have a signal pulse from the PMT, suggesting the Poisson mean is less than $0.1$. Under these circumstances, the probability of having more than one photon is small; therefore, the few pulses from the PMT must predominantly stem from single-photon events.
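This single-photon argument can be quantified with a short calculation (our own illustration, under the stated Poisson assumption):
\begin{verbatim}
import math

# Fewer than 1 in 10 triggers show a PMT pulse:
# P(n >= 1) = 1 - exp(-lam) < 0.1  =>  lam < -ln(0.9)
lam = -math.log(0.9)
print(f"Poisson mean bound: {lam:.3f}")        # ~0.105

# Fraction of detected pulses containing more than one photon
p_ge1 = 1.0 - math.exp(-lam)                   # P(n >= 1)
p_ge2 = p_ge1 - lam * math.exp(-lam)           # P(n >= 2)
print(f"P(n>1 | n>=1) = {p_ge2 / p_ge1:.3f}")  # ~0.05
\end{verbatim}
Even at the upper bound on the Poisson mean, only about 5\% of the detected pulses would contain more than one photon.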
By fitting a superposition of two Gaussian functions to the single-photon peak in the signal and to the bias current pedestal (an offset arising from the measurement itself; see Fig.~\ref{fig:QDCPhoton}), an estimate of the unit conversion factor can be obtained. The conversion factor measured from the pedestal peak was found to be $\SI{7.2 \pm 3.8}{\QDCunits \per \photon}$. The uncertainty of the conversion factor was calculated using Gaussian error propagation and the width of each summand. \section{Results} Comparing the experiments with and without aluminum foil (see Fig.~\ref{fig:ElectronFoil}) suggests a significant increase in photon emission due to Cherenkov radiation from an interaction between the charged particles and the radiator. Tracking the particles allows accurate discrimination between photons generated from non-diffraction Cherenkov radiation and Cherenkov diffraction radiation. We are therefore confident that we have detected ChDR. This is supported by our observation of a linear increase in light emission between $\SI{1}{\GeV\per\c}$ and $\SI{6}{\GeV\per\c}$ for positrons and $\SI{1}{\GeV\per\c}$ and $\SI{5}{\GeV\per\c}$ for electrons when comparing the values of the integral of the exponential fit (see Fig.~\ref{fig:Integrals}).\\ Figure~\ref{fig:Integrals} also suggests electron ChDR emission is more dependent on particle momentum than that of positrons, resulting in a greater slope. To validate this, we performed an ANCOVA test on the regression lines. We found a significant difference between electrons and positrons after adjustment for particle momentum ($p=0.000862$). This is supported by visual differences in the photon emission rates as a function of $x$ position (see Figs.~\ref{fig:Electrons} and~\ref{fig:Positrons}).\\ To further characterise the radiation, we measured the photon generation for various orientations of a polarization filter placed over the PMT. The results from these data indicate that ChDR generated in our setup has a higher horizontal than vertical polarization component, both at $\SI{3}{\GeV\per\c}$ and $\SI{5}{\GeV\per\c}$ (see Table~\ref{tab:polarisation}). Higher emissions of vertically polarized photons for a radiator placed above the beam have been reported in the literature~\cite{Kieffer2020}. As the radiator in our experiment was placed on the side of the beam, our results agree with this observation.\\ We also evaluated a short radiator from CERN in addition to the Heraeus radiator we used for the main experiments. The dimensions were $\SI{5}{\cm} \times \SI{1}{\cm}\times \SI{0.5}{\cm}$ and $\SI{15}{\cm} \times \SI{1.5}{\cm} \times \SI{1}{\cm}$, respectively. Previous experiments suggested a linear increase in light emission for longer radiators~\cite{Alves2019}. We found an increase in light emission (see Table~\ref{tab:positronselectrons}) for the larger radiator, but this was higher than the tripling the theory predicts for a radiator with three times the length. The deviation may be due to differences in thickness, width or manufacturing of the radiators. \begin{figure} \centering \includegraphics[width=\textwidth]{Figure8.png} \caption{Exponential fit of the photon emission as a function of impact parameter using $\SI{5}{\GeV\per\c}$ electrons with (bottom) and without (top) aluminum foil on the radiator.
Blocking the photon exit point on the radiator reduces both non-diffraction Cherenkov radiation and ChDR detection to negligible levels.} \label{fig:ElectronFoil} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{Figure9.png} \caption{Values of the integral of the exponential fits as a function of beam momentum.} \label{fig:Integrals} \end{figure} \begin{table} \centering \begin{tabular}[h]{l|c|c} Polarisation & $\SI{3}{\GeV\per\c}$ & $\SI{5}{\GeV\per\c}$ \\ \hline Vertical & 0.39 & 0.40 \\ Vertical +45° ccw & 0.77 & 0.88 \\ Horizontal & 3.12 & 3.78 \\ Horizontal +45° ccw & 1.54 & 1.90 \end{tabular} \caption{ChDR emission in number of photons for $\SI{3}{\GeV\per\c}$ and $\SI{5}{\GeV\per\c}$ electrons using different orientations of the polarisation filters.} \label{tab:polarisation} \end{table} \begin{table} \centering \begin{tabular}[h]{l|c|c} Beam & Small radiator & Large radiator \\ \hline $\SI{2}{\GeV\per\c}$ $e^-$ & 1.99 & 21.80 \\ $\SI{3}{\GeV\per\c}$ $e^-$ & 2.35 & 28.19 \\ $\SI{2}{\GeV\per\c}$ $e^+$ & 2.15 & 16.28 \\ $\SI{3}{\GeV\per\c}$ $e^+$ & 2.40 & 20.16 \end{tabular} \caption{ChDR emission in number of photons for $\SI{2}{\GeV\per\c}$ and $\SI{3}{\GeV\per\c}$ electrons and positrons using two different radiators.} \label{tab:positronselectrons} \end{table} \section{Conclusion} We show that ChDR emission increases linearly with particle momentum between $\SI{1}{\GeV\per\c}$ and $\SI{5}{\GeV\per\c}$ for both positrons and electrons. Unlike previous experiments on circular colliders, we measured the emission from both particle types in the same setup. We report a significantly higher increase in ChDR emission rates as a function of particle momentum for electrons compared to positrons. To our knowledge, differences in the emission rates of electrons and positrons have not been reported before for ChDR or non-diffraction Cherenkov radiation. Further experiments to investigate this possible difference are needed. Our results also indicate that ChDR may be useful for monitoring the momenta of particle beams, as the light emission is a linear function of the particle momentum for both positrons and electrons. \section{Acknowledgements} The students among the authors would like to thank their teachers Mr. Seidemann and Mr. Irmer for taking them to CERN and BESSY II and sharing their passion for physics with them. They would also like to thank Sarah Aretz and Margherita Boselli for organizing the competition as well as all volunteers from DESY and CERN for supporting the data analysis. The students are thankful for financial support by the CERN \& Society Foundation, the Wilhelm and Else Heraeus Foundation, the Arconic Foundation, AMGEN, and the Ernest Solvay Fund, managed by the King Baudouin Foundation. They would also like to express their gratitude towards CERN and DESY for organising BL4S. The provision of radiators by Thibaut Lefèvre of CERN and the Heraeus Group is gratefully acknowledged. The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF). \section{Declaration of Competing Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \section{Data availability} Data will be made available on request. \newpage
{ "timestamp": "2022-09-23T02:13:11", "yymm": "2209", "arxiv_id": "2209.10937", "language": "en", "url": "https://arxiv.org/abs/2209.10937" }
\section{Introduction} In the last two decades, time-domain astronomy has become increasingly efficient, thanks to the ability of the surveys to scan the entire visible sky (near) daily. We can cite the Catalina Real-Time Transient Survey \citep{catalina}, PanSTARRS-1 \citep{panstarrs}, ASAS-SN \citep{Asas} and ATLAS \citep{Atlas}. A more recent survey is the Zwicky Transient Facility \citep[ZTF,][]{ztfbellm, ztfgraham}, successor of the Palomar Transient Facility \citep{PTF}, and using a \SI{47}{deg^{2}} camera. With such equipment, ZTF detects $\mathcal{O}(10^2)$ transients of interest every night, instrumental artifacts and previously known sources excluded, with a typical $5\sigma$ $r$-band AB magnitude limit of $20.5$. Among them, 10 to 15 are new objects that have just appeared and become bright enough to be detected. Once the photometric detection is triggered, ZTF relays the alert to the Spectral Energy Distribution machine \citep[SEDm,][]{SEDm}, an Integral Field Spectrograph (IFS) designed and built to spectroscopically type transients brighter than $\sim 19.5$~mag, and operating on the Palomar 60-inch telescope. The core of the SEDm is a Micro-Lenslet Array (MLA) covering $28\arcsec\times 28\arcsec$, subdivided into $52 \times 45$ hexagonal spaxels, combined with a multi-band ($ugri$) field acquisition camera, used for positioning and guiding. Currently, the automated pipeline routinely used for IFS data reduction and supernova (SN) spectrum extraction is \pkg{pysedm} \citep{pySEDm}. Since this pipeline intrinsically assumes the target is an isolated point source, it cannot properly handle the situation where the transient is close to its host galaxy core. As a matter of fact, since August 2018, $\sim 30\%$ of the observed SNe show some severe host contamination which significantly decreases the confidence level of the classification, and $\sim 10\%$ are just unusable. This situation has various undesirable effects. From a mere statistical point of view, discarding SNe with too strong a host contamination reduces the type~Ia SN (SN~Ia) sample by 10 to 20\%, which weakens the strength of the Hubble diagram anchor at low redshift. Furthermore, the misclassification of SNe~Ia could induce a significant bias in the cosmological analysis \citep[e.g.][]{jonesSNcontam}. Finally, a more subtle effect is related to the galactic environment bias, which would be caused by selecting out host-contaminated SNe \citep{Rigault2013}. In the past years, numerous studies have shown that the SN~Ia standardized luminosity is tightly correlated with the environment properties. \citet{rigault15, lssf_rigault20} showed that, after standardization for light curve shape and color, SNe~Ia having a large local specific Star Formation Rate are fainter by $0.16\pm 0.03$~mag. Other tracers, like host galaxy stellar mass \citep{kelly10, sullivan10, childress13, Betoule2014} or just host morphology \citep{morph20}, show the same correlation between SN~Ia luminosity and their environment. Recently, \citet{briday} have shown that all these tracers are compatible with two SN~Ia populations differing in standardized magnitude by at least $0.12 \pm 0.01$~mag. Some developments have been made to improve the robustness of the point source extraction by estimating the faintest iso-magnitude contour separating the galaxy and the SN \citep{contsep}; however, this is not yet optimal in most problematic situations, i.e.
when the SN is faint or located near the host core: it only brings a marginal $1.7\%$ improvement in classification accuracy over the standard \pkg{pysedm} analysis. One could think of handling the host contamination by interpolating the galaxy area under the transient from the external parts of the FoV. Unfortunately, there are several reasons for not using such a method, beyond the mere signal-to-noise issue. First, the seeing makes the SN spread over the galaxy structure: as much as the host light contaminates the SN flux, the reverse is also true, and it is not clear how far from the SN position one could consider the galaxy flux to be free of the point source signal. Furthermore, the host spatial structure under the SN extent -- linear, concave or convex -- is not known \textit{a priori}, especially in a strongly structured region such as the galaxy core, which would prevent a clean and robust interpolation. Finally, an interpolation would assume that the host spectral features are spatially uniform under the SN extent, which again is usually not the case, especially close to the galaxy core. In order to improve the final SN~Ia sample in numerous ways, we present in this paper \pkg{HyperGal}\xspace\footnote{The code is available online at {\url{https://github.com/JeremyLezmy/HyperGal}}.}, a scene modeler specifically designed to handle the strong host contamination case, through a detailed hyperspectral galaxy modeling, complemented by a smooth background component and a point-source transient. The algorithm concept is based on two ideas: first, public wide-field multi-band photometric surveys can provide reference information on the host galaxy before the transient event; second, the required host galaxy cube (two spatial dimensions and one spectral) can be estimated from pure photometric observations using a dedicated SED fitter as a physically motivated spectral interpolator. The resulting hyperspectral host model can then be projected in the observable space of the SEDm, taking into account all observational effects: relative geometry between the photometric pixels (px) and the IFS spaxels (spx), spatial (Point Spread Function, PSF) and spectral (Line Spread Function, LSF) Impulse Response Functions (IRF) of the SEDm, Atmospheric Differential Refraction (ADR), sky background and additional diffused light. Sec.~\ref{sec:pipeline} describes the \pkg{HyperGal}\xspace pipeline, and validation tests on realistic simulations are presented in Sec.~\ref{sec:validation} to estimate the accuracy of the SN extraction as well as of the SN typing itself, since this is what the SEDm is designed for. We also show the improvement with respect to an isolated-source extractor such as \pkg{pysedm}. A discussion of some hypotheses and possible future improvements can be found in Sec.~\ref{sec:discuss}. \section{\pkg{HyperGal}\xspace pipeline} \label{sec:pipeline} This section presents the different processing steps from the required input to the transient spectrum extraction (Fig.~\ref{fig:dag}). SN ZTF20aamifit, at a redshift of $z=0.045$ as measured from the strong H$\alpha$ line in the host spectrum, will systematically be used for illustration; it was observed with the SEDm on February 17, 2020, at airmass~1.7 in poor seeing conditions ($2\farcs 4$ FWHM). The SN is $\sim2\farcs8$ away from its host galaxy core, close enough not to be considered isolated (see Fig.~\ref{fig:cutout}).
\begin{figure*} \centering \includegraphics[width=\textwidth]{DAG_hypergal_paper_sec.pdf} \caption{Main processing steps of the \pkg{HyperGal}\xspace pipeline, and the sections where they are detailed.} \label{fig:dag} \end{figure*} \subsection{Inputs} \label{sec:inputs} Three main inputs are necessary to \pkg{HyperGal}\xspace: the SEDm cube to be analysed, the archival photometric thumbnails, and the redshift of the target. The SEDm IFS $(x, y, \lambda)$ cube of the scene is built from the 2D raw spectroscopic exposures with \pkg{pysedm} \citep[Sec.~2]{pySEDm}. It includes all the components -- transient point source, spatially and spectrally structured host galaxy, night sky background and spatially smooth diffused light -- to be handled by the scene modeler (Fig.~\ref{fig:e3dsedm}). \begin{figure} \centering \includegraphics[width=\linewidth]{e3dcube_ZTF20aamifit.png} \caption{SEDm cube from the observation of ZTF20aamifit. The \emph{left panel} shows the spectra, whose color corresponds to the selected spaxels in the \emph{right panel} (white image of the spectrally integrated cube). The \emph{red cross} shows the SN position.} \label{fig:e3dsedm} \end{figure} The archival multi-band photometric images of the transient environment, acquired \emph{before} the SN explosion, are obtained from the PanSTARRS-1 (PS1) $3\pi$ Steradian survey \citep{ChambersPanstarrs} in all $grizy$ bands, and queried at the SN location through the Image Cutout Server% \footnote{\url{https://ps1images.stsci.edu/cgi-bin/ps1cutouts}}. PS1 is chosen for its sky coverage compatible with ZTF (north of declination \SI{-30}{deg}). Figure~\ref{fig:cutout} shows an RGB image of the ZTF20aamifit host galaxy, through the PS1 $grz$ bands. \begin{figure} \centering \includegraphics[width=\linewidth]{ps_cutouts_ZTF20aamifitzoom.pdf} \caption{RGB image of the host galaxy of SN ZTF20aamifit, constructed from the PS1 $grz$ cutouts. The red cross shows the position of the SN detected by ZTF. The $x$- and $y$-axes are in native PS1 pixels, $0\farcs 25$ aside. The white dashed box will be used as boundaries in Fig.~\ref{fig:cig_spatial_check} and~\ref{fig:cig_spectral_check}.} \label{fig:cutout} \end{figure} An analysis of spatially structured scenes (harboring 3 or more well-resolved objects in the SEDm FoV) provides a precise estimation of the scale ratio between SEDm and PS1 pixel sizes of $2.230 \pm 0.003$, which, for a PS1 px scale of $0\farcs 25$, corresponds to an effective SEDm spaxel size of $0\farcs 558$. Once measured, this SEDm scale is fixed in the pipeline. To save computation time for the SED fit and the spatial projection step, PS1 images are first spatially rebinned $2\times2$. The third input is the host galaxy redshift, required by the SED-based interpolation of the photometric images. Around 50\% of the targets observed by the SEDm have a host galaxy spectroscopic redshift known beforehand \citep{ztfspecred}; for the others, a redshift is a priori estimated from a preliminary transient spectrum extraction, using the transient spectral features and the possible presence of emission lines from the host galaxy. While it would be theoretically possible to assess the host redshift directly during the scene modeling, we have not yet tried to implement this feature (see Sec.~\ref{sec:discuss}). Furthermore, the consequence of an inaccurate input redshift has not been studied for this analysis. \subsection{SED fit} \label{sec:sedfit} The SED fit aims to generate an effective hyperspectral -- i.e.
full 3D $(x,y,\lambda)$ -- host model from the $grizy$ PS1 broadband images. During the process, each photometric pixel is treated independently, so that the resulting spaxel in the output cube gets its own spectrum. At the end of this process, this cube is still independent of the SEDm observation details (impulse responses, atmospheric effects, etc.). It is important to note that the SED fitter is not used here to derive accurate and spatially resolved physical parameters of the host galaxy, but rather to build a physically plausible spectral interpolation compatible with the broadband archival images. The software used for this step is \pkg{cigale}\footnote{Version 2020, \href{https://cigale.lam.fr}{https://cigale.lam.fr}} \citep{Burgarella2005, Noll2009, cigale}. It is based on a progressive computation, successively using modules each describing a single component of the SED. The set of all parameters tested by \pkg{cigale} is shown in Table~\ref{tab:cigaleparams}. \subsubsection{Star Formation History and population} The time-evolution of the Star Formation Rate (SFR) is described by the Star Formation History (SFH) through the \texttt{sfhdelayed} module. Our SFH scenario includes two components, a delayed SFR and a late burst: \begin{equation} \label{eq:sfhdelayed} \text{SFR}(t) = \text{SFR}_{\text{delayed}}(t) + \text{SFR}_{\text{burst}}(t). \end{equation} Both terms involve a decreasing exponential, \begin{align} \label{eq:sfh} \text{SFR}_{\text{delayed}}(t) &\propto \left(t/\tau_{\text{main}}^{2}\right) e^{-t/\tau_{\text{main}}} \\ \text{SFR}_{\text{burst}}(t) &\propto e^{-(t-t_{0})/\tau_{\text{burst}}} \quad\text{for}\; t > t_{0},\;0\;\text{otherwise}. \end{align} The amplitude of the late starburst is fixed by the parameter $f_{\text{burst}}$, defined as the ratio between the stellar mass formed during this event and the total stellar mass. The SFH is applied, with the Initial Mass Function (IMF) from \citet{Chabrier2003}, to the stellar population model from \citet{bc03}, used through the \texttt{bc03} module. \subsubsection{Nebular emission} The light emitted in the Lyman continuum by the heaviest stars ionizes the gas in the galaxy. This physical process generates significant radiative emission, both in the continuum and in spectral lines. This SED component is described by the \texttt{nebular} module, based on \citet{Inoue2011}. The model is effectively parameterized by the metallicity $Z$ (the same as in the stellar population model \texttt{bc03}) and the ionization parameter $\log(U)$. \subsubsection{Dust extinction} Dust in the galaxy absorbs the radiation at short wavelengths, especially from the UV to the near-IR; this energy is then re-emitted in the mid- to far-IR. As \pkg{HyperGal}\xspace primarily targets sources at redshift $z < 0.1$ in the optical domain, the extinction effect is properly considered through the dust attenuation module \texttt{dustatt\_modified\_CF00} from \citet{dustatt}. This approach considers two stellar populations: the young stars ($< \SI{e7}{years}$) still reside in their Birth Cloud (BC), while the old ones are considered as already dispersed in the InterStellar Medium (ISM). Attenuation is therefore treated differently: for the young population, both ISM and BC are considered, while for the old population, only the ISM is considered.
In both cases, the attenuation $A_{\lambda}$ is modeled by a power law, normalized by the $V$-band attenuation: \begin{equation} A_{\lambda}^{k} = A_{V}^{k}\left(\frac{\lambda}{\lambda_V}\right)^{n_{k}} \quad k=\text{BC or ISM}, \end{equation} with $\lambda_{V} = \SI{0.5}{\micro\meter}$. The balance between the ISM and the total (ISM plus BC) $V$-band attenuation is parameterized through $\mu = A_{V}^{\text{ISM}}/(A_{V}^{\text{ISM}} + A_{V}^{\text{BC}})$, a free parameter allowing more flexibility and a better estimate of the H$\alpha$ emission lines \citep{Battisti2016, Buat2018, Malek2018, Chevallard2019}. The power-law slope for the ISM is fixed at $n_{\text{ISM}} = -0.7$ following \citet{dustatt}, and the slope for the BC at $n_{\text{BC}} = -1.3$ as advocated in \citet{Cunha2008MAGPHYS}. For completeness, the \texttt{dale2014} module is used for the dust emission \citep{dale2014}; however, this complex component has no significant impact in our spectral domain. \begin{table*} \centering \caption{Modules and input parameters used with \pkg{cigale}.} \label{tab:cigaleparams} \begin{tabular}{lcc} \toprule \textbf{Parameters} & \textbf{Symbol} & \textbf{Tested values}\\ \midrule \multicolumn{3}{l}{\textbf{Star Formation History (SFH)}} \\ e-folding time of the main stellar population & $\tau_{\text{main}}$ (Gyr) & 1, 3, 5 \\ e-folding time of the late starburst population & $\tau_{\text{burst}}$ (Gyr) & 10 \\ age of the main stellar population & $\text{age}_{\text{main}}$ (Gyr) & 1, 2, 4, 8, 10, 12 \\ age of the late starburst & $\text{age}_{\text{burst}}$ (Myr) & 10, 40, 70 \\ mass fraction of the late starburst & $f_{\text{burst}}$ & 0, \num{e-3}, \num{e-2}, \num{e-1}, \num{2e-1} \\ \hline \multicolumn{3}{l}{\textbf{Stellar population} } \\ Metallicity & $Z$ & \num{e-4}, \num{4e-4}, \num{4e-3}, \\ && \num{8e-3}, \num{2e-2}, \num{5e-2} \\ \hline \multicolumn{3}{l}{\textbf{Nebular emission}} \\ Ionisation parameter& $\log(U)$& $-4$, $-3$, $-2$, $-1$ \\ \hline \multicolumn{3}{l}{\textbf{Dust attenuation}} \\ InterStellar Medium attenuation in $V$ & $A_{V}^{\text{ISM}}$ & 0, 0.3, 0.7, 1, 1.3, 1.7, 2 \\ $A_{V}^{\text{ISM}}/(A_{V}^{\text{ISM}}+A_{V}^{\text{BC}})$ & $\mu$ & 0.1, 0.3, 0.7, 1 \\ BC power-law slope & $n_{\text{BC}}$ & $-1.3$ \\ ISM power-law slope & $n_{\text{ISM}}$ & $-0.7$ \\ \bottomrule \end{tabular} \end{table*} \subsubsection{From SED fit to hyperspectral galaxy model} \pkg{cigale} is run using the PS1 filter transmission curves from \cite[see Fig.~\ref{fig:cig_spectral_check}]{ps1photo} on photometric pixels for which the Signal-to-Noise Ratio (SNR) is above 3 in all 5 bands. Otherwise, the output flux is set to 0 at all wavelengths: such pixels presumably belong to the sky or diffuse backgrounds, and cannot be properly modeled by the SED fitter. For all fitted pixels, \pkg{cigale} returns a spectrum over an extended wavelength domain (from far-UV to radio), with an inhomogeneous spectral sampling between 1 and 5~\AA/px. All spectra are rebinned at the SEDm spectral sampling of $\sim 26$~\AA/px and truncated to the $[3700, 9300]$~\AA{} range, resulting in 220~monochromatic slices. The broadband flux from the SED fit is compared to the input photometric measurements in Fig.~\ref{fig:cig_spatial_check}, which shows, for each PS1 band and pixel, the pull (i.e.
the model residual normalized by the error on the data) and the relative RMS averaged over the 5 bands: \begin{equation} \label{eq:RMS} \text{RMS} = \sqrt{\frac{1}{5}\sum_{\lambda=grizy} \left( \frac{f_{\lambda}-\tilde{f}_{\lambda}}{f_{\lambda}} \right)^2} \end{equation} where $f_{\lambda}$ denotes the data and $\tilde{f}_{\lambda}$ the predicted value. The average RMS is generally lower than 3\% in the core of the galaxy, but can reach $\sim 10\%$ in the outer parts. However, as the PS1 observations are 2 to 3 magnitudes deeper than the SEDm ones \citep{ChambersPanstarrs}, relatively poorly fitted pixels far away from the host core have a marginal flux impact relative to the SEDm background, and do not significantly affect the transient spectrum in the scene model. \begin{figure} \centering \includegraphics[width=\linewidth]{cigale_pullrms_ZTF20aamifit.pdf} \caption{From \emph{left} to \emph{right} and \emph{top} to \emph{bottom:} map of the pull for the $grizy$ broadband images from \pkg{cigale} outputs, and spectral relative RMS over the 5 reference host images. Only pixels with SNR $> 3$ in all $grizy$ bands are considered (see Sec.~\ref{sec:sedfit}).} \label{fig:cig_spatial_check} \end{figure} \subsection{SEDm Impulse Response Functions} \label{ssec:irf} The ``intrinsic'' hyperspectral galaxy model obtained from the SED fit now has to be projected into the SEDm observation space, including the spectro-spatial IRFs. This section first presents the spectral component, i.e. the Line Spread Function (LSF), then the spatial component, i.e. the Point Spread Function (PSF). \subsubsection{Spectral IRF (LSF)} \label{sssec:lsf} The output spectra from \pkg{cigale} have a spectral resolution of $\sim 3$~\AA{} in the wavelength range 3200 to 9500~\AA{} (i.e. a median resolving power of $\mathcal{R} = \lambda/\Delta\lambda \sim 2000$, \citealt{bc03}), $20\times$ the nearly constant SEDm resolution ($\mathcal{R} \sim 100$, \citealt{SEDm}). The full SEDm LSF is therefore a very good approximation of the differential spectral IRF between \pkg{cigale} and the SEDm. To characterize the SEDm LSF, we use the intermediate line fits of the wavelength solution derived from arc-lamp observations (Cd, Hg, and Xe) \citep[Sec.~2.1.2]{pySEDm}. Each emission line is fitted by a single Gaussian profile over a 3rd-order polynomial continuum. From a study of the wavelength calibration of 65 nights between 2018 and 2022, the LSF standard deviation $\sigma_{LSF}$ turns out to be stationary (no evidence of evolution with time), fairly homogeneous in the FoV, but chromatic (as expected). Figure~\ref{fig:lsf_sedm} shows the chromatic evolution of the standard deviation, and the quadratic polynomial model adjusted to it. \begin{figure} \centering \includegraphics[width=\linewidth]{LSF.pdf} \caption{LSF standard deviation $\sigma_{LSF}$ as a function of wavelength, from the wavelength calibration of 65 nights between 2018 and 2022. Each violin corresponds to an emission line in the arc-lamp spectra (color legend).} \label{fig:lsf_sedm} \end{figure} To adapt the \pkg{cigale} spectra to the SEDm resolution, the spectra of the hyperspectral galaxy model are convolved with the chromatic Gaussian LSF. An illustration of the result is shown in Fig.~\ref{fig:cig_spectral_check}. \begin{figure*} \includegraphics[width=\textwidth]{intcube_ZTF20aamifitlogscale.png} \caption{Hyperspectral galaxy model of ZTF20aamifit host galaxy, after projection in the SEDm observation space (including LSF).
The \emph{green circles} correspond to the spatially integrated flux from PS1 cutouts, the \emph{black diamonds} to the same quantities as fitted by \pkg{cigale}. The 5 shaded curves show the transmission of the $grizy$ PS1 filters. The red and blue spectra on the left correspond to the spectra integrated in selected regions of the same color in the model cube on the right; the black spectrum is the spectrum integrated over the full FoV.} \label{fig:cig_spectral_check} \end{figure*} \subsubsection{Spatial IRF (PSF)} \label{sssec:psf} SNe are effectively point sources, and are therefore solely described in the FoV by the SEDm PSF (and its amplitude). \pkg{HyperGal}\xspace uses a bisymmetric PSF model, whose radial profile is the sum of a Gaussian $\mathcal{N}(r; \sigma)$ for the core, and a Moffat $\mathcal{M}(r; \alpha, \beta)$ for the wings \citep{Buton2013, Rubin22}: \begin{equation} \mathcal{P}(r; \alpha, \sigma, \beta, \eta) = \eta\times\mathcal{N}(r;\sigma) + \mathcal{M}(r;\alpha,\beta), \end{equation} where $r$ is an elliptical radius: \begin{equation} \label{eq:ellipticity} r^{2} = (x-x_{0})^{2} + \mathcal{A}(y-y_{0})^{2} + 2 \mathcal{B}(x-x_{0})\times(y-y_{0}) \end{equation} with $(x_{0}, y_{0})$ the coordinates of the point source. Parameters $\mathcal{A}$ and $\mathcal{B}$ simultaneously describe the flattening and the orientation of the PSF. The 4 shape parameters $(\alpha, \beta, \sigma, \eta)$, which could be ill-constrained in the low-SNR regime if adjusted independently, are correlated by fixed relationships. The PSF model was tested on 148 isolated standard stars, observed in 2021 with the SEDm, and we settled on the following model. The constrained PSF only has 2 free shape parameters: $\alpha$ (Moffat radius) and $\eta$ (relative normalization of the Gaussian), while the two other parameters are expressed as linear functions of $\alpha$: \begin{align} \beta &= \beta(\alpha) = \beta_0 + \beta_1 \alpha \\ \sigma &= \sigma(\alpha) = \sigma_0 + \sigma_1 \alpha \end{align} where $\beta_0 = 1.53$, $\beta_1 = 0.22$, $\sigma_0 = 0.42$ and $\sigma_1 = 0.39$ were determined from the training star sample. The chromaticity of $\alpha(\lambda)$ is set as a power-law function: \begin{equation} \alpha(\lambda) = \alpha_{\ensuremath{\text{ref}}}\left(\frac{\lambda}{\lambda_{\ensuremath{\text{ref}}}}\right)^{\rho} \end{equation} where the normalization $\alpha_{\ensuremath{\text{ref}}}$ and the index $\rho$ are free parameters, and $\lambda_{\ensuremath{\text{ref}}} \equiv 6000$~\AA. Parameters $\eta$, $\mathcal{A}$ and $\mathcal{B}$ do not exhibit strong chromaticity, and are therefore considered constant. Finally, the SEDm PSF of a given observation is fully described by 5 independent parameters: $\alpha_{\ensuremath{\text{ref}}}$, $\rho$, $\eta$, $\mathcal{A}$ and $\mathcal{B}$. \subsubsection{Differential PSF between PS1 and SEDm} \label{sssec:relpsf} The original hyperspectral galaxy model is derived from PS1 photometric exposures, obtained under different seeing conditions than the SEDm observations: the median seeing is $\sim 1\farcs7$ for the SEDm \citep{SEDm}, and $\sim 1\farcs2$ for the PS1 images \citep{ps1pix}. As the exact PSF profile is less critical for extended objects such as the host galaxy, we chose to model the differential PSF between PS1 and SEDm as a single bisymmetric Gaussian kernel, with free ellipticity and position angle. The hyperspectral model is thus convolved with this differential PSF before the spatial projection.
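For illustration, this constrained PSF parameterization can be sketched in a few lines of \texttt{python}; this is a simplified sketch rather than the actual \pkg{HyperGal}\xspace implementation (the normalization constants of the Gaussian and Moffat components are omitted, as the overall amplitudes are adjusted in the fit, and the elliptical radius $r$ of Eq.~\eqref{eq:ellipticity} is assumed to be precomputed):
\begin{verbatim}
import numpy as np

# Shape relations calibrated on the standard-star sample:
BETA0, BETA1 = 1.53, 0.22     # beta(alpha)  = BETA0  + BETA1  * alpha
SIGMA0, SIGMA1 = 0.42, 0.39   # sigma(alpha) = SIGMA0 + SIGMA1 * alpha

def alpha_chromatic(alpha_ref, rho, lam, lam_ref=6000.0):
    """Moffat radius alpha(lambda) as a power law of wavelength."""
    return alpha_ref * (lam / lam_ref) ** rho

def psf_profile(r, alpha, eta):
    """Radial PSF: eta * Gaussian core + Moffat wings.

    Normalization constants are omitted here; in the fit they are
    absorbed into the component amplitudes.
    """
    beta = BETA0 + BETA1 * alpha
    sigma = SIGMA0 + SIGMA1 * alpha
    gauss = np.exp(-0.5 * (r / sigma) ** 2)
    moffat = (1.0 + (r / alpha) ** 2) ** (-beta)
    return eta * gauss + moffat
\end{verbatim}
In this form, only $\alpha_{\ensuremath{\text{ref}}}$, $\rho$ and $\eta$ remain free among the shape parameters, together with the ellipticity terms $\mathcal{A}$ and $\mathcal{B}$ entering $r$.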
\subsection{Scene modeling} \label{ssec:scene} The two main elements are now at hand to build the scene model: \begin{itemize} \item a hyperspectral host galaxy model, and the (differential) spectral and spatial IRFs to match it to the SEDm observations, \item a chromatic PSF model for the transient point source. \end{itemize} The last component to complete the scene is the night-sky and diffuse-light background, modeled with a 2D 2nd-order polynomial at each wavelength. The non-uniform terms handle a strong diffuse-light component, clearly visible at the edges of the SEDm FoV and spectral range. Overall, the background component is described by 6~parameters, $b_{0}$, $b_{x}$, $b_{y}$, $b_{xy}$, $b_{xx}$ and $b_{yy}$. We now describe the progressive method used to adjust this scene model to the observed SEDm cube, and the detailed spatial projection procedure used to match the two cubes. \subsubsection{General method} \label{sssec:method} We first consider $N \ll 220$ \emph{meta}slices of the SEDm cubes, i.e. slices summed over a restricted wavelength domain, small enough to be considered roughly achromatic, but large enough to increase the SNR and significantly speed up the computation time. The scene is projected and fitted on all metaslices independently (the so-called ``2D fit'', Sec.~\ref{sec:fitter}), which results in a set of $N \times m$ parameters; some are nuisance parameters (e.g. background and component amplitudes), while others are key scene parameters, such as the point source position and the PSF shape parameters. From this set of parameters evaluated at $N$ wavelengths, specific chromatic models are used to fix all shape and position quantities (the ``1D fit''), for which the full spectral resolution is not required. Ultimately, \pkg{HyperGal}\xspace performs a final linear ``3D'' fit of the different component amplitudes over all monochromatic slices, providing the total scene model cube at the original SEDm spectral sampling. The pipeline uses by default $N=6$~metaslices linearly sampled between 5000 and 8500~\AA. This spectral range is where the SEDm efficiency is higher than 70\% \citep{SEDm}, and is wide enough to constrain the chromatic parameters well, especially the ADR (see Sec.~\ref{sec:chrom_fit}). The pipeline was tested with different numbers of metaslices, but no significant difference was noticed in the results. \pkg{HyperGal}\xspace was extensively optimized with the parallel computing library \pkg{DASK}\footnote{\url{https://www.dask.org}} \citep{dask}, a dynamic task scheduler working on single desktop machines as well as on many-node clusters. \pkg{DASK} optimizes the pipeline by analyzing the (minimal) interdependencies between all computation tasks and building an optimal parallelized workflow to be submitted and run on an arbitrary number of available workers (in our case, we use 10 nodes at the IN2P3 Computing Center\footnote{\url{https://cc.in2p3.fr/}}). \subsubsection{Spatial projection} \label{sec:projection} The spatial projection of the hyperspectral galaxy model (matched to the SEDm spectral and spatial IRFs) is made by successively projecting each (meta)slice, taking into account the relative geometry and size between the PS1-derived model (square, $0\farcs50$ on a side) and SEDm (hexagonal, $0\farcs558$) spaxels. The projection is done according to a spatial anchor, a reference position in the sky assumed to be known in both (meta)slices.
The chosen anchor is the transient position, derived from the ZTF survey astrometry and located at the center of the queried PS1 images (and therefore at the center of the hyperspectral model). In the SEDm cube, this position is initially guessed from the astrometric solution of the SEDm Rainbow Camera \citep{SEDm, pySEDm}, but cannot be strictly fixed: the (chromatic) SEDm anchor position $(x_{0}, y_{0})$ is free in the fitting process of each metaslice. The projection is made by geometrically overlapping the two polygonal spaxel grids, with the anchor position as a reference; this is effectively equivalent to a nearest-neighbor interpolation scheme. These computations are done using \pkg{shapely}\footnote{\url{https://github.com/Toblerity/Shapely}} \citep{shapely} and \pkg{geopandas}\footnote{\url{https://github.com/geopandas/geopandas}} \citep{geopandas}. The model cube, on which the PS1/SEDm differential PSF and the SEDm LSF have been applied, is now projected into the SEDm observation space, over the SEDm spaxel grid. \subsubsection{Metaslice (2D) fit} \label{sec:fitter} As already mentioned, all components of the scene are first independently fitted on the $N$ metaslices. The free parameters per metaslice are: \begin{itemize} \item the SN position $(x_{0}, y_{0})$ in the SEDm FoV, used as an anchor position for the spatial projection; \item the SN PSF parameters ($\alpha$, $\eta$, $\mathcal{A}$, $\mathcal{B}$); \item the PS1/SEDm differential PSF parameters ($\sigma_G$, $\mathcal{A}_G$, $\mathcal{B}_G$); \item the amplitudes of the SN ($I$) and host ($G$) components; \item the background coefficients ($b_{0}, b_{x}, b_{y}, b_{xy}, b_{xx}, b_{yy}$). \end{itemize} We use \pkg{iminuit}\footnote{\url{https://github.com/scikit-hep/iminuit}} \citep{minuit, iminuit2} to minimize a weighted $\chi^2$ for each metaslice independently: \begin{equation} \chi^2 = \sum\limits_{i}\left(\frac{y_i - \tilde{y}_i}{\sigma_{i}}\right)^2, \end{equation} where $i$ runs over the spaxels of the metaslice, $y$ and $\tilde{y}$ are the data and model fluxes respectively, and $\sigma$ the error on the data. Fig.~\ref{fig:projection} illustrates the projection of one metaslice of the hyperspectral galaxy model onto the SEDm space. The fitted scene on this metaslice shows a spatial RMS between the model and the data of 2.6\%. Although indicative of the overall scene model accuracy, a low RMS does not necessarily imply a clean separation of the different components (e.g. when the transient lies on top of a sharp host galaxy core). Extraction accuracy is directly evaluated from simulated SN spectra in Sec.~\ref{sec:validation}. \begin{figure*} \includegraphics[width=\textwidth]{fitted_metaslice_ZTF20aamifit.png} \caption{Fit result for the $[6167,6755]$~\AA{} metaslice of the ZTF20aamifit cube. \emph{From left to right:} metaslice from the original (transient-free) hyperspectral model with MLA footprint overplotted, projected fitted scene (host + background + SN), SEDm observations, and relative model residuals.} \label{fig:projection} \end{figure*} \subsubsection{Chromatic (1D) fit} \label{sec:chrom_fit} Once the fit is performed independently over all $N$ metaslices, a set of $N$ chromatic estimates of the $m$ parameters is at hand and is used to assess their (smooth) chromatic evolution (except for the component amplitudes and background parameters, which are nuisance parameters at this point). The chromaticity of the full Gaussian + Moffat PSF is modeled as detailed in Sec.~\ref{sssec:psf}.
The chromaticity of the width of the 2D Gaussian modeling the differential PSF between PS1 and the SEDm is adjusted by a similar power law, \begin{equation} \sigma_G(\lambda) = \sigma_{\ensuremath{\text{ref}}} \left(\frac{\lambda}{\lambda_{\ensuremath{\text{ref}}}}\right)^{\rho_G} \end{equation} where $\rho_G$ and $\sigma_{\ensuremath{\text{ref}}}$ are adjusted on the $N$ metaslice estimates obtained previously, and $\lambda_{\ensuremath{\text{ref}}} \equiv 6000$~\AA; the shape parameters $\mathcal{A}_G$ and $\mathcal{B}_G$ are considered constant and equal to their (inverse-variance weighted) mean values over the $N$ metaslices. The effective anchor location in the SEDm FoV is systematically wavelength-dependent, due to the chromatic refraction of light through the atmosphere (ADR). Given the $N$ positions of the SN in the different metaslices, an effective 4-parameter ADR model can be fitted to track the chromatic offsets in the FoV: \begin{equation} \label{eq:adr} \begin{bmatrix} x_0(\lambda) \\ y_0(\lambda) \end{bmatrix} = \begin{bmatrix} x_{\ensuremath{\text{ref}}} \\ y_{\ensuremath{\text{ref}}} \end{bmatrix} - \frac{1}{2}\left( \frac{1}{n^{2}(\lambda)} - \frac{1}{n^{2}(\lambda_{\ensuremath{\text{ref}}})}\right) \times \tan(d_{z}) \begin{bmatrix} \sin\theta \\ \cos\theta \end{bmatrix} \end{equation} with ($\theta$, $z$, $x_{\ensuremath{\text{ref}}}$, $y_{\ensuremath{\text{ref}}}$) the fitted parameters, where $\theta$ is the parallactic angle, $z$ the airmass and $d_z=\arccos{z^{-1}}$ the zenith distance in the plane-parallel atmosphere approximation, and $(x_{\ensuremath{\text{ref}}}, y_{\ensuremath{\text{ref}}})$ the reference position at the reference wavelength $\lambda_{\ensuremath{\text{ref}}} \equiv 6000$~\AA. The index of refraction $n(\lambda)$ of air is computed using the Edlén equation from \citet{refractindex}\footnote{\url{https://emtoolbox.nist.gov/Wavelength/Documentation.asp}}, which takes into account the atmospheric pressure, temperature, and relative humidity, as provided for each exposure by the SEDm Telescope Control System. Figure~\ref{fig:output_adr} illustrates the ADR effect, a drift of the metaslice anchor position with wavelength, and the ADR model, at an effective airmass of $\sim 2.0$. \begin{figure} \centering \includegraphics[width=\linewidth]{output_adr_ZTF20aamifit2.pdf} \caption{SN positions as a function of wavelength, and the effective ADR fit. \emph{Top panel}: relative offsets with respect to reference position at reference wavelength along each axis; filled points correspond to the observed offsets, and open circles to the predictions of the ADR model. \emph{Bottom panel}: relative offsets in the $(x,y)$ plane. The color encodes the central wavelength of the metaslices.} \label{fig:output_adr} \end{figure} \subsubsection{Final (3D) fit} \label{sec:3Dfit} Once all PSF and ADR chromatic models are available from the 2D+1D metaslice adjustments, the scene morphological parameters are considered known and fixed at each wavelength: the point source position $(x_{0}, y_{0})$ and PSF parameters ($\alpha$, $\eta$, $\mathcal{A}$, $\mathcal{B}$), as well as the PS1/SEDm differential PSF parameters ($\sigma_G$, $\mathcal{A}_G$, $\mathcal{B}_G$). This allows us to perform a final 3D linear fit over all monochromatic slices, where only the scaling amplitudes of the different scene components -- namely the host galaxy $\{G\}$, the SN $\{I\}$ and the background polynomial components $\{b_{0}, b_{x}, b_{y}, b_{xy}, b_{xx}, b_{yy}\}$ -- are left free per slice. The total scene is then reconstructed at full spectral resolution.
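For illustration, the ADR model of Eq.~\eqref{eq:adr} reduces to the following \texttt{python} sketch (ours, not the actual implementation; \texttt{n\_air} stands for any implementation of the Edlén equation with the night's pressure, temperature and humidity):
\begin{verbatim}
import numpy as np

def adr_offset(lam, n_air, x_ref, y_ref, theta, airmass, lam_ref=6000.0):
    """Chromatic anchor position from the 4-parameter ADR model.

    `n_air` is assumed to be a callable returning the air refractive
    index n(lambda).
    """
    dz = np.arccos(1.0 / airmass)  # zenith distance (plane-parallel)
    shift = -0.5 * (n_air(lam) ** -2 - n_air(lam_ref) ** -2) * np.tan(dz)
    return x_ref + shift * np.sin(theta), y_ref + shift * np.cos(theta)
\end{verbatim}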
Although $G(\lambda)$ is primarily used to recover the flux calibration mismatch between PS1 and SEDm, this normalization parameter can interfere in a non-trivial way with the position and intensity of the emission lines in the hyperspectral galaxy model. This effect might help to handle a slightly incorrect input redshift used in the SED fitting step, especially under the assumption of a uniform spatial distribution of the lines. As this effect has not been analysed in depth, we return to it in Sec.~\ref{sec:discuss}. Fig.~\ref{fig:output_global} presents the white image (spectral integral) of the final \pkg{HyperGal}\xspace scene model for SN ZTF20aamifit. The quality of the fit is evaluated from the pull map, showing no evidence of structured residuals. The spectral relative RMS map indicates an accuracy of $\sim 4\%$ at the SN and host core locations, and 6 to 7\% where only the background is significant. \begin{figure} \centering \includegraphics[width=\linewidth]{output_global_ZTF20aamifit2_nohost.png} \caption{Full scene model for ZTF20aamifit. \emph{Top panel}: integrated SEDm and \pkg{HyperGal}\xspace-modeled cubes; the red cross indicates the adjusted point source position at 6000~\AA. \emph{Bottom panel}: spectral pull and spectral relative RMS. No galaxy- or SN-related structured residual is visible in the pull map, and the spectral RMS indicates an accuracy of $\sim 4\%$ at the host and SN locations.} \label{fig:output_global} \end{figure} \subsection{Component extraction} \label{sec:source_extract} The strength of the \pkg{HyperGal}\xspace pipeline is the simultaneous fit of the 3 scene components: the host galaxy, the transient point source, and the background. The main quantity of interest is of course the SN spectrum (i.e. the vector of the point source amplitudes $I(\lambda)$, see Fig.~\ref{fig:outputsnspec}), but one can also selectively subtract individual components to assess the quality of the scene model. \begin{figure} \centering \includegraphics[width=\linewidth]{output_sn_spectra_ZTF20aamifit_wsky_wpysedm_v2.pdf} \caption{SN ZTF20aamifit spectrum -- as extracted by \pkg{HyperGal}\xspace (\emph{black}) and \pkg{pysedm} (\emph{blue}) -- and uniform sky spectrum (coefficient $b_0(\lambda)$, \emph{red}). Flux unit $f_{\lambda}$ stands for femto-\si{erg.cm^{-2}.s^{-1}.\AA^{-1}}.} \label{fig:outputsnspec} \end{figure} \subsubsection{Host galaxy integrated spectrum} \label{sec:hg_outputs} The host contribution can be isolated in the SEDm cube by subtracting the SN and the background components (see Fig.~\ref{fig:output_host}). To further compute an integrated host spectrum, a large elliptical aperture is defined around the host with the \pkg{SEP} package \citep{sep, sep2} from the PS1 images. This aperture is then projected in the SEDm cube, using the respective World Coordinate Systems. Note that the ADR is neglected in the process, as it rarely induces a deviation of more than one or two spaxels in the FoV, and has barely any impact on the host spectrum integrated over a large aperture. The integrated host spectrum is shown in Fig.~\ref{fig:output_host}, with the expected positions of some major emission lines at the input redshift (independently of the host spectrum). This procedure highlights the consistency between the input redshift used for the hyperspectral galaxy modeling and the extracted integrated spectrum.
In the future, it could be considered to consistently estimate the host redshift directly from such an integrated spectrum during the scene modeling (see Sec.~\ref{sec:discuss}). \begin{figure*} \includegraphics[width=\textwidth]{output_host_ZTF20aamifit2.png} \caption{ZTF20aamifit host galaxy, isolated from the SEDm data cube. \emph{Left panel:} isolated host galaxy component in the SEDm cube, after subtraction of both the SN and background models. \emph{Right panel:} host spectrum integrated over the selected spaxels; the main spectral features are marked for the input redshift $z=0.045$.} \label{fig:output_host} \end{figure*} \subsubsection{Point source radial profile} Similarly, the point source contribution can be isolated in the SEDm cube by subtracting both the host and background models, as shown in Fig.~\ref{fig:output_sn_ifu} for the $[6167,6755]$~\AA{} metaslice of the ZTF20aamifit cube. This closer look at the point source contribution allows us to check the accuracy of the PSF profile in each metaslice. The fact that the profile smoothly tends to 0 means that the background was correctly modeled by \pkg{HyperGal}\xspace; also, the absence of outliers in the data points indicates that there is no evidence of residual host contamination in the profile, as also seen in the isolated SN image. \begin{figure} \centering \includegraphics[width=\linewidth]{output_sn_profile_ZTF20aamifit2_meta_prof_v2.png} \caption{SN ZTF20aamifit, isolated from the SEDm data cube. \emph{Left panel}: isolated SN component in the SEDm $[6167,6755]$~\AA{} metaslice, after subtraction of both the host and background models; the \emph{red cross} indicates the fitted SN location, and \emph{contours} show the elliptical iso-radius at 3 and 5~spx for observations (\emph{black solid lines}) and model (\emph{red dashed lines}). \emph{Right panel}: PSF profile for the same metaslice, as a function of the elliptical radius. The data points refer to the isolated SN on the left panel, the \emph{red curve} corresponds to the PSF profile (without the background), the \emph{blue} and the \emph{green curves} to the Moffat and the Gaussian components respectively. The Gaussian component is particularly weak because of the poor seeing conditions.} \label{fig:output_sn_ifu} \end{figure} \subsection{SN classification} \pkg{HyperGal}\xspace being primarily designed for transient spectral classification, an automated typing procedure is included in the pipeline, based on the SuperNova IDentification code \citep[\pkg{SNID},][]{snid}. The typing is performed over the 4000 to 8000~\AA{} spectral range, which includes the most discriminating spectral features for redshifts $z \lesssim 0.1$. This domain also corresponds to the one where the SEDm CCD quantum efficiency is over~60\%. The quality of the \pkg{SNID} classification is quantified by the r$lap$ parameter, measuring the strength of the correlation between the input and template spectra. According to \citet{snid}, an r$lap \geq 5$ indicates a high confidence in the classification, without considering any prior on the redshift or the phase of the SN. Figure~\ref{fig:output_snid_typing} presents the \pkg{SNID} typing of ZTF20aamifit using its \pkg{HyperGal}\xspace-extracted spectrum. The best match has an r$lap = 27$, which leaves no doubt about its classification as an SN~Ia. In comparison, the \pkg{pysedm}-extracted spectrum (see Fig.~\ref{fig:outputsnspec}) is also typed as an SN~Ia but with a significantly lower confidence (r$lap = 9$).
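For illustration, such an r$lap$-based decision rule can be sketched as follows; this is a simplified sketch rather than the actual pipeline code, anticipating the stricter acceptance criteria adopted for the validation in Sec.~\ref{sec:validation}:
\begin{verbatim}
from collections import Counter

def snid_decision(matches, rlap_min=5.0, top_n=10, agree_frac=0.5):
    """Sketch of an rlap-based typing decision.

    `matches`: (type, rlap) pairs sorted by decreasing rlap, as
    obtained from an SNID run. The stricter settings used for the
    validation are rlap_min=6 plus the top-10 type-agreement check.
    """
    if not matches or matches[0][1] < rlap_min:
        return "uncertain"
    best_type = matches[0][0]
    top_types = [t for t, _ in matches[:top_n]]
    if Counter(top_types)[best_type] < agree_frac * len(top_types):
        return "uncertain"
    return best_type
\end{verbatim}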
\begin{figure*} \includegraphics[width=\textwidth]{ZTF20aamifit_snid_typing_fullspec.pdf} \caption{\pkg{SNID} typing of the ZTF20aamifit \pkg{HyperGal}\xspace spectrum. \emph{Left panel}: input spectrum (in grey) and best model from \pkg{SNID} (in blue). \emph{Right panel}: distribution in the (redshift, phase) plane of the 30 best matches with an r$lap > 5$ (all being normal SNe~Ia in this case). The input redshift of the galaxy ($z=0.045$) is indicated with the horizontal grey line. The best model, with a very high r$lap = 27$, classifies ZTF20aamifit as an SN~Ia at redshift ${z=0.046}$ and phase $p=+5.6$~days.} \label{fig:output_snid_typing} \end{figure*} \section{\pkg{HyperGal}\xspace validation} \label{sec:validation} The \pkg{HyperGal}\xspace pipeline is validated with a set of simulations, in order to quantify the accuracy of the extracted SN spectra as a function of various observational conditions, and the ability to spectrally classify the transient. In this section, we first present the simulation process, before performing some statistical analysis on the spectral accuracy, followed by the typing efficiency. For comparison, the SNe are also extracted with a method similar to \pkg{pysedm} \citep{pySEDm}, i.e. a plain PSF extraction of a supposedly isolated source (not accounting for the background galaxy), but using the same PSF and diffuse background models as \pkg{HyperGal}\xspace for consistency. \subsection{Simulated sample} During a short shutdown of the main ZTF camera, the SEDm was free to observe a few galaxies which hosted SNe at least one year earlier. These observed host cubes are therefore naturally in the SEDm space for which \pkg{HyperGal}\xspace is designed; 10 different hosts with various morphologies were acquired at different locations in the IFU and with an airmass ranging from $1.01$ to $2.04$. This allows us to cover a large variety of observation conditions, from the ideal case to the poorest conditions. An artificial point source, whose spectrum and type are known a priori, is then added to these cubes. To mimic SEDm spectra as much as possible, we use spectra of well-isolated transients observed with the SEDm and successfully classified by \pkg{SNID} with a very high r$lap$. For the SNe~Ia (the most numerous to be observed), 70 spectra are selected with r$lap > 25$ for the best model and r$lap > 15$ for the first 30 models. Similarly, 7 SNe~II spectra with r$lap > 12$ are selected. For the more rarely observed SNe~Ic and SNe~Ib ($\sim 5\%$ of observations), only one spectrum of each was chosen, but with high classification confidence (r$lap \sim 22$ for the~Ib and r$lap \sim 13$ for the~Ic). To increase the SNR, each of these spectra is then slightly smoothed using a Savitzky-Golay filter (3rd-order polynomial over a window of 5 pixels), in order to keep the spectral structures intact. While building the simulated sample, the different SN types are distributed to follow the observed fractions \citep{ztfspecred}, with 80\% of SNe~Ia, 15\% of SNe~II, 2.5\% of SNe~Ib, and 2.5\% of SNe~Ic. For further analysis, Ib and Ic will be studied jointly as SNe~Ibc. A marginalization over the phase of the SNe~Ia is applied, based on the DR1 statistics from the ZTF SN~Ia group \citep{Dhawan2022}. Knowing the phase of the 70 SN~Ia input spectra used for the simulation, we draw the SN templates to follow the observed distribution of phases, modeled as a Gaussian distribution centered on $-3$~days with a standard deviation of 4~days.
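For illustration, the corresponding draw of the simulated sample composition can be sketched as follows (a simplified sketch; variable names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
TYPES = np.array(["Ia", "II", "Ib", "Ic"])
FRACS = [0.80, 0.15, 0.025, 0.025]   # observed type fractions

def draw_sample(n_cubes=5000):
    """Draw SN types and, for the SNe Ia, phases from the observed
    N(-3 d, 4 d) distribution (NaN for non-Ia types)."""
    types = rng.choice(TYPES, size=n_cubes, p=FRACS)
    phases = np.where(types == "Ia",
                      rng.normal(-3.0, 4.0, size=n_cubes), np.nan)
    return types, phases
\end{verbatim}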
Concerning the PSF, the profile is assumed to follow the model presented in Sec.~\ref{sssec:psf}. To faithfully represent the seeing diversity of the observations, the chromatic radial profile parameters are drawn from the joint distribution built from $\sim 2000$~standard stars, thus taking into account the latent correlations between parameters. Finally, 2 extra parameters -- which we consider the most likely to impact the \pkg{HyperGal}\xspace robustness -- are introduced in the simulations: the contrast $c$ between the transient and the local background, and the distance $d$ between the target and the host. The latter aims to cover all observed cases, from exact overlap between the point source and the host ($d \approx 0$) to the limit of an unstructured background ($d \gg$ host core size). The host center is identified by matching the WCS solution from the SEDm cube and the underlying photometric images from PS1. The distance $d$ is drawn from a uniform distribution between 0 and $5\farcs6 \equiv 10$~spx. As the SEDm mostly observes well-centered point sources, the simulated SN is placed within 12~spx from the center of the FoV, or at least towards the MLA center if the host is on the edge. The contrast $c$ is defined by $c = S/(S+B) \in[0,1]$, where $S$ is the transient signal, and $B$ is the total (sky and host) background, both spectrally integrated over the equivalent $r$ band of ZTF. For a random $c$ drawn from a uniform distribution in $[0, 1]$, the background signal $B$ is first estimated at the simulated SN location, by successively integrating spatially the pure host cube weighted by the chromatic PSF profile, then spectrally over the ZTF $r$ band. Once $B$ is known, the SN spectrum is scaled so that the $r$-band integral is $S = cB / (1 -c)$. Finally, the simulated SN contribution to the cube variance is added to that of the host galaxy, under the hypothesis of pure photon noise, using the flux solution of the host cube. Ultimately, the 5000~simulated cubes are built, covering a large range of observation conditions, host galaxy morphologies and positions in the FoV, transient locations, spectral types, and SNRs. The \pkg{HyperGal}\xspace pipeline and the standard point source extraction are then used to estimate the resulting SN spectra. \subsection{Extraction accuracy} \label{ssec:rmscontrast} The SEDm is designed and used for the spectral classification of transients. Thus, beyond pure absolute spectro-photometric flux accuracy, what is important is the capacity of \pkg{HyperGal}\xspace to extract the spectral features allowing a proper classification, independently of the absolute flux level or even the large-scale continuum shape. Consequently, the \pkg{HyperGal}\xspace performance is evaluated on continuum-normalized transient spectra in the $[4000,8000]$~\AA\ wavelength range, as in \pkg{SNID}. The continuum is fitted as a 5th-order polynomial over the wavelength range slightly extended by 100~\AA{} at each end, to avoid unwanted boundary effects.
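For illustration, this normalization and the comparison metric defined next amount to the following sketch (ours, with simplified edge handling; not the actual pipeline code):
\begin{verbatim}
import numpy as np

def continuum_normalize(wave, flux, order=5, margin=100.0,
                        wmin=4000.0, wmax=8000.0):
    """Divide out a polynomial continuum fitted on a slightly
    extended wavelength range to avoid boundary effects."""
    fit = (wave >= wmin - margin) & (wave <= wmax + margin)
    coefs = np.polyfit(wave[fit], flux[fit], order)
    out = (wave >= wmin) & (wave <= wmax)
    return wave[out], flux[out] / np.polyval(coefs, wave[out])

def relative_rms(f_data, f_model):
    """Wavelength-averaged relative RMS between two spectra."""
    return np.sqrt(np.mean(((f_data - f_model) / f_data) ** 2))
\end{verbatim}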
The spectral comparison between the simulation input and the \pkg{HyperGal}\xspace/standard method output spectra is then systematically performed on continuum-normalized spectra, and is quantified using a wavelength-averaged relative RMS similar to Eq.~(\ref{eq:RMS}): \begin{equation} \label{eq:RMS2} \text{RMS} = \sqrt{\frac{1}{N}\sum_{\lambda} \left( \frac{f_{\lambda}-\tilde{f}_{\lambda}}{f_{\lambda}} \right)^2} \end{equation} where $N$ refers to the number of monochromatic slices between $[4000,8000]$~\AA, $f_{\lambda}$ denotes the data and $\tilde{f}_{\lambda}$ the predicted value. The distance $d$ is found to have no influence on the spectral accuracy of \pkg{HyperGal}\xspace, with an absolute correlation coefficient lower than 0.2. On the other hand, Fig.~\ref{fig:rmscontinuumdivide} shows the correlation between the spectral relative RMS and the contrast $c$ for both extraction methods on continuum-normalized spectra. The results are marginalized over all SN types, as the extraction accuracy is supposedly independent of the spectral shape. \begin{figure*} \centering \includegraphics[width=\textwidth]{simu_rms_contrast_continuum_divided_nobottom_nowhis.pdf} \caption{Distribution, as a function of the contrast, of the spectral relative RMS between simulation input spectra and extracted spectra, averaged over the $[4000, 8000]$~\AA{} domain. In the boxes, the 3 levels represent the 3 quartiles (25\%, median, and 75\%). Each bin includes the same number of simulations, as the contrast $c$ is uniformly distributed in $[0, 1]$.} \label{fig:rmscontinuumdivide} \end{figure*} Both methods obtain an RMS greater than 20\% for $c<0.2$, suggesting that spectral classification at such low contrast will be difficult. Yet, the standard method seems to be more accurate than \pkg{HyperGal}\xspace at extremely low contrast ($c<0.1$); this actually appears to be an artifact of the continuum normalization. At very low contrast, neither method can reasonably disentangle the SN from the background; however, by effectively mixing SN and host signal, the standard point source extracted spectrum has a higher SNR (although it is less accurate), and the continuum normalization is less prone to fail catastrophically, contrary to the spectrum consistent with 0 extracted by \pkg{HyperGal}\xspace. \pkg{HyperGal}\xspace starts to stand out for $0.2 < c < 0.3$ with a median RMS around 10\%, and the RMS then decreases steadily, below 10\% for $c > 0.3$, 5\% for $c > 0.5$, and 1\% for $c > 0.8$. Compared to the standard extraction method, \pkg{HyperGal}\xspace shows a median improvement of $\sim 50\%$ for $0.2 < c < 0.6$, and gradually returns to a median improvement of $\sim 20\%$ up to the highest contrasts. Since the continuum normalization removes the effects of absolute scaling and color terms on the spectral RMS, the improvement exclusively relates to the contamination of the SN spectrum by the host galaxy spectral features. This demonstrates the effectiveness of \pkg{HyperGal}\xspace at drastically reducing this host contamination. \subsection{Distribution of contrast in the observations} Before turning to the classification efficiency, the contrast distribution in the SEDm observations is estimated, as a reference against which to compare our results.
Rather than using \pkg{HyperGal}\xspace on observations made with the SEDm (as was actually done for the forthcoming ZTF Cosmology SN~Ia Data Release 2, \citealt{dr2rigault}) -- this would amount to evaluating the pipeline with itself -- the contrast $c = S/(S+B)$ is estimated from photometric images of the same DR2 sample, made up of about 3000~SNe~Ia. For each SN, its signal $S$ in the PS1 $r$ band at the date of the SEDm observation is estimated from the SALT2 fit \citep{Guysalt2005, Guysalt22007, Betoule2014} of its light curve. We chose the PS1 $r$ band, in practice very similar to the ZTF one, because only images from this survey were available at the time of the study. On the other hand, the host contribution to the background $B_{gal}$ is estimated from the integrated flux within a radius of $2\arcsec$ around the SN. As PS1 images are already sky-subtracted, an additional sky background $B_{sky}$ has to be added for a fair comparison with the simulations. Two different values are used: a fiducial value $m_{sky} = 20$~mag, approximately corresponding to the magnitude depth of the SEDm, and a more conservative value $m_{sky} = 21$~mag. The sky background being largely negligible compared to the galactic one, its exact value essentially alters the high-contrast values: for an SN isolated from its host galaxy, the contrast would systematically increase as the sky background tends to~0. Figure~\ref{fig:contrastdist} displays the cumulative distribution of the contrast for the DR2. The median contrast of this distribution is $c=0.58$ for $m_{sky} = 20$~mag and $c=0.63$ for $m_{sky} = 21$~mag. For both sky levels, less than 1\% of observations have a contrast $c < 0.1$, and only 7\% have $c < 0.2$. At the high-contrast end, 2 to 5\% of the observations have $c > 0.9$ depending on the adopted sky magnitude. Almost 95\% of observations have a contrast $0.1 \leq c \leq 0.9$, and slightly less than 90\% have $0.2 \leq c \leq 0.9$. \begin{figure} \centering \includegraphics[width=\linewidth]{contrastDR2_multisky_cum_paper.pdf} \caption{Cumulative contrast distribution estimated from $\sim 3000$ SNe~Ia observed with the SEDm. Since only $B_{gal}$ is estimated from PS1 images, an additional $B_{sky}$ is estimated using two different sky levels, $m_{sky} = 20$ (\emph{blue}) for a realistic value, and $m_{sky} = 21$ (\emph{red}) for a conservative value.} \label{fig:contrastdist} \end{figure} According to the results of Sec.~\ref{ssec:rmscontrast}, one can therefore assess the spectral accuracy of \pkg{HyperGal}\xspace on the DR2 sample -- using the spectral relative RMS (Eq.~\ref{eq:RMS2}) as an indicator -- to be of the order of 10\%, 5\% and 2\% for 80\%, 60\% and 20\% of the observations respectively. In comparison, the standard extraction method reaches these levels for 60\%, 45\% and 15\% of the observations. \subsection{Typing efficiency} As mentioned earlier, the most important validation result in the context of the SEDm is the efficiency of \pkg{HyperGal}\xspace at spectrally classifying the target SN. The test on the simulated cubes is performed using the same classifier as in ZTF, i.e. \pkg{SNID}; the confidence criteria required for the classification are however slightly stricter, as we regularly identified false positives (i.e. SNe erroneously classified as Ia) in the current \pkg{pysedm} pipeline.
The minimum r$lap$ is set to r$lap_{\min} = 6$ (rather than 5) for the best-fit model; furthermore, at least 50\% of the top-10 models have to be of the same type as the best one to confirm a classification. If one of these criteria is not met, the spectrum is classified as ``uncertain''. Figure~\ref{fig:typingresult} shows the typing efficiency of \pkg{HyperGal}\xspace, and the improvement with respect to the standard extraction method without host modeling. Contrary to the previous RMS analysis, results are presented for each SN type, since the spectral signatures differ between SN types. \begin{figure*} \centering \includegraphics[width=\linewidth]{typingimprove_contrast.pdf} \caption{Typing efficiency on the validation simulations. \emph{Top panel}: rate of successful classification with \pkg{HyperGal}\xspace for each type of SN at different contrast levels. Results for $c > 0.6$ are aggregated as the results vary very little. \emph{Bottom panel}: improvement in typing compared to the standard extraction method.} \label{fig:typingresult} \end{figure*} As anticipated in Sec.~\ref{ssec:rmscontrast}, both methods are definitely not reliable for contrasts below 0.1. SNe~Ia are more easily classified, due to the quantity and strength of the features in their spectra: the typing success is 71\% for SNe~Ia for $0.1 \leq c \leq 0.2$ ($\sim 7\%$ of real observations); types~Ibc and~II on the other hand are correctly classified with success rates of 23\% and 35\% respectively. For $0.2 < c < 0.3$, the typing success reaches more than 96\% for SNe~Ia, 77\% for~Ibc and 51\% for SNe~II. More than 99\% of SNe~Ia are correctly classified with $c > 0.3$, and more than 95\% of all SNe for $c > 0.4$. With $\sim 84\%$ of observations having a contrast $c > 0.3$, $\sim 9\%$ with $0.2 < c < 0.3$ and $\sim 7\%$ with $0.1 < c < 0.2$, one can conclude that \pkg{HyperGal}\xspace can successfully classify nearly 95\% of all SNe~Ia observed by the SEDm. For a contrast $c \gtrsim 0.2$ (which represents more than 90\% of the real observations), nearly 99\% of SNe~Ia are properly classified. The improvement brought by \pkg{HyperGal}\xspace over the standard extraction method is obvious, with a sweet spot in $0.1 < c < 0.6$: this will result in more than 30\% of additional SNe correctly classified. The main spectral feature of SNe~II being the H$\alpha$ emission line, usually highly contaminated by the host galaxy, \pkg{HyperGal}\xspace allows a significant improvement for this particular type, from 15\% to 37\% of additional correctly classified SNe~II in the $0.1 < c < 0.6$ range; for SNe~Ibc, the difference only appears for $c > 0.2$, with similar gains between 13\% and 31\%. SNe~Ia having a lot of strong and easily identified spectral features, the boost over the standard method is slightly less manifest, but stays highly significant, from 30\% of additional correctly classified SNe~Ia for $0.1 < c < 0.2$ to 5\% when $0.5 < c < 0.6$. For $c > 0.6$, when the SN clearly stands out from the galaxy, the difference between the two methods becomes marginal regardless of the SN type. Taking into account the contrast distribution of the observations, \pkg{HyperGal}\xspace should significantly improve the classification of SNe~Ia in nearly 50\% of the observations (the other half being also properly classified by the standard extraction method).
As 50\% of the observations have $0.1 < c < 0.6$, \pkg{HyperGal}\xspace will allow the correct classification of almost 20\% more SNe~Ia in this interval, corresponding to 10\% of all SNe~Ia classifiable with the SEDm. Assuming a similar contrast distribution for all SN types, \pkg{HyperGal}\xspace will classify 14\% additional SNe~II and 11\% additional SNe~Ibc. To probe the critical contamination of the SN~Ia sample by core-collapse SNe, the False Positive Rate (FPR) for SNe~Ia is examined. Figure~\ref{fig:falspositive} shows that \pkg{HyperGal}\xspace has a significantly lower FPR than the standard method. Excluding the unrealistically low contrast cases ($c < 0.1$), \pkg{HyperGal}\xspace shows a progressive decrease in FPR from 8\% to 1\% for contrasts rising from 0.1 to 0.6 (the FPR is null beyond that); in comparison, the standard method oscillates between 6 and 9\% in the same contrast range. As a conclusion, the \pkg{HyperGal}\xspace FPR is on average less than 5\% for contrasts between 0.1 and 0.6 ($\sim 50\%$ of the observations), and less than 2\% for $c > 0.1$ (more than 99\% of all observations); this is half that of the standard extraction method. \begin{figure} \centering \includegraphics[width=\linewidth]{falsepositivesIa.pdf} \caption{False-Positive Rate in SN~Ia classification for both extraction methods as a function of contrast.} \label{fig:falspositive} \end{figure} \section{Discussion} \label{sec:discuss} We now discuss some limitations of the current \pkg{HyperGal}\xspace implementation, and possible future developments. Regarding the validation methodology, we acknowledge some simplifications with respect to actual observations. For instance, the true distance distribution between the SN and its host was not explicitly modeled, i.e. this parameter was marginalized uniformly between 0 and $5 \farcs 6$. As \pkg{HyperGal}\xspace is a full-scene modeler which properly handles this parameter and therefore shows little sensitivity to it (Sec.~\ref{ssec:rmscontrast}), this approximation does not impact its results; this is not true for the single point-source method, which critically depends on the transient-host distance. Overall, we think the validation approximations actually tend to minimize the improvement of \pkg{HyperGal}\xspace with respect to the standard method. Undoubtedly, the most limiting constraint of \pkg{HyperGal}\xspace is the need for an external redshift measurement of the host galaxy, a prior needed by the SED fitter used as a physically-motivated host galaxy spectral interpolator, and of critical importance for the treatment of emission lines. In practice, this is not so much of an issue: in the current ZTF sample, about 50\% of SN hosts already have a spectral redshift, mostly from SDSS surveys \citep{ztfspecred}, with a precision of $\sigma_z\sim\num{e-5}$ for $z<0.1$ \citep{Bolton2012}; the remaining 50\% of SNe have a redshift deduced from a preliminary extraction of the SN spectrum, either from low-resolution spectral features in the SN spectrum ($\sim 40\%$) or from emission lines of the host galaxy contaminating the SN spectrum ($\sim 10\%$). In both cases, the redshift is estimated by \pkg{SNID} with a precision of $\sigma_z\sim\num{5e-3}$ \citep{ztfspecred}. Furthermore, 95\% of ZTF SN hosts are brighter than 20~mag, allowing other surveys such as the Dark Energy Spectroscopic Instrument (DESI) Bright Galaxy Survey \citep{DESI} to systematically provide a large fraction of spectral redshifts in the future.
A slightly incorrect input redshift (encoded as a wavelength offset of the emission line positions in the hyperspectral galaxy model), as well as an approximate SED fit of the emission line fluxes (marginally constrained by broadband photometric observations), are corrected to first order by the monochromatic galaxy amplitudes $G(\lambda)$ during the ultimate 3D fit. Primarily introduced to recover the flux calibration mismatch between PS1 and SEDm, this normalization parameter actually interferes in a non-trivial way with the position and intensity of emission lines in the brightest parts of the scene, so as to minimize the residuals between the hyperspectral model (fixed at this stage of the procedure) and the SEDm observations. This particular effect, which depends on the relative distribution of the stellar and gaseous components in the host, has not been studied extensively for \pkg{HyperGal}\xspace, but we note it is efficient at disentangling host spectral features from the SN spectrum even with a sub-optimal input redshift and/or emission line fluxes. However, it effectively precludes the use of the residual host component for any \emph{a posteriori} measurements, e.g. redshift or local measurement of the H$\alpha$ flux, yet crucial for the local environment studies mentioned earlier \citep[e.g.][]{lssf_rigault20}. One could think of including a consistent redshift estimate directly in the \pkg{HyperGal}\xspace procedure, at the level of the hyperspectral model (to minimize artificial fluctuations of $G(\lambda)$), but also at the level of the SN spectral typing (to reach a redshift consensus between the host and the SN). This would imply including the intensive SED fit and/or the SN typing procedure in the minimization loop, which is computationally costly in either case. Another major \pkg{HyperGal}\xspace development would be to use the SEDm cube, a rich and faithful observation of the host galaxy at the position of the transient, as an additional hyperspectral constraint in the SED fitting process. Both developments would push the concept of an SED fitter merely used as a spectral interpolator to its limit. It would then probably be preferable to switch to other, more efficient methods such as physics-enabled deep learning \citep{2021AJ....162..275B}. \section{Conclusion} This paper presents \pkg{HyperGal}\xspace, a fully automated scene modeler for transient typing with the SEDm \citep{SEDm}. The core of this pipeline is based on the use of archival photometric observations of the host galaxy, taken before the SN explosion. Using the physical processes at play within galaxies, as encoded in the SED fitter \pkg{cigale}, the spectral properties of the host are modeled, adjusted, and scaled appropriately to create a hyperspectral model of the host galaxy. This 3D intrinsic model is then convolved with the spectro-spatial instrumental responses of the SEDm, and projected into the space of the observations. A full scene model, including the structured host galaxy, the point source transient and a smooth background, is finally produced to match the SEDm observations, allowing the extraction of the SN spectrum from a highly contaminated environment. The pipeline is validated on a large set of realistic simulated SEDm observations, covering a wide variety of observation conditions (airmass, seeing and PSF parameters), scene details (host morphology, distance to the host, host/SN contrast) and transient types.
The contrast distribution is estimated from about 3000 observed SNe~Ia of the forthcoming ZTF Cosmology SN~Ia DR2 \citep{dr2rigault}. The transient spectra in the 5000 simulations are then extracted with \pkg{HyperGal}\xspace and compared to those from the historical point-source method, which ignores the structured host component. The most important results concern the \pkg{HyperGal}\xspace efficiency in spectroscopically typing SNe, a key objective of the SEDm instrument. The full scene modeler shows an ability to correctly classify $\sim 95\%$ of the observed SNe~Ia under a realistic contrast distribution. For a contrast $c \gtrsim 0.2$ (more than 90\% of the observations), nearly 99\% of the SNe~Ia are correctly classified. Compared to the standard extraction method, \pkg{HyperGal}\xspace correctly classifies nearly 20\% more SNe~Ia between $0.1 < c < 0.6$, representing $\sim 50\%$ of the observation conditions. The false positive rate for \pkg{HyperGal}\xspace is less than 5\% for contrasts between 0.1 and 0.6, and less than 2\% for $c > 0.1$ ($> 99\%$ of the observations); this is half that of the standard extraction method. \pkg{HyperGal}\xspace has demonstrated its ability to extract and classify the spectrum of an SN even in the presence of strong contamination from its host galaxy. The improvement compared to the standard method is significant: this will noticeably improve the statistics of the SN~Ia sample for the ZTF survey while reducing a potential environmental bias, and will ultimately impact the precision of the cosmological analyses. \begin{acknowledgements} This project has received funding from the Project IDEX-LYON at the University of Lyon under the Investments for the Future Program (ANR-16-IDEX-0005), and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n°759194 - USNAC). The SED Machine is based upon work supported by the National Science Foundation under Grant No. 1106171. Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW. This research made use of \pkg{python} \citep{python3}, \pkg{astropy} \citep{Astropy, Astropy2}, \pkg{matplotlib} \citep{matplotlib}, \pkg{numpy} \citep{numpy1, numpy2}, \pkg{scipy} \citep{scipy1, scipy2}. We thank their developers for maintaining them and making them freely available. \end{acknowledgements} \bibliographystyle{aa}
{ "timestamp": "2022-09-23T02:11:40", "yymm": "2209", "arxiv_id": "2209.10882", "language": "en", "url": "https://arxiv.org/abs/2209.10882" }
\section{Introduction} The field of gravitational wave astronomy, while relatively new, has the potential to make exciting contributions to many areas of astrophysics. The first gravitational wave event, GW150914, was detected in 2015 and was generated by a binary black hole merger~\citep{Abbott2016:GW150914}. Since this first detection, the Laser Interferometer Gravitational-Wave Observatory (LIGO)~\citep{LIGO2015:AdvLIG} and Virgo~\citep{Virg2015:AdVScnIntGrWDt} gravitational wave observatories have made almost 100 confirmed detections~\citep{Abbott2021:GWTC2}. The KAGRA detector \citep{Akutsu2019:KAGRA} also operated during the later portion of the O3b observing run \citep{Akutusu2020:KAGRA_overview}. Gravitational wave detection allows for multi-messenger astronomy if an event is simultaneously observed in both electromagnetic and gravitational wave bands. This was successfully achieved in 2017 with the detection of GW170817, a gravitational wave event generated by a binary neutron star merger, which was also independently observed as the gamma ray burst GRB170817a across the electromagnetic spectrum~\citep{Abbott2017:GW170817, Abbott2017:GW170817a}. Gravitational wave and multi-messenger astronomy will likely be the source of many future discoveries as gravitational wave detector sensitivity increases and more detectors come online. Gravitational wave astronomy has the potential to advance our understanding of neutron stars and the physics of matter at extreme densities. Gravitational waves from neutron star mergers are detectable by current ground-based observatories, but only for a short time at the end of their life cycle when their stellar structure is tidally deformed. Continuous gravitational waves are long-lived, quasi-monochromatic gravitational waves emitted by isolated spinning neutron stars that are deformed asymmetrically about their rotation axes. The deformation may be caused by a number of mechanisms, including the neutron star's magnetic field~\citep{ZimmSzed1979:GrvWRtPrRBSMAppPl, BonaGour1996:GrvWPlEmMgFIDst}, magnetically-confined mountains~\citep{MelaPayn2005:GrvRdAcMlPMgnCM}, or electron capture gradients~\citep{UshoEtAl2000:DfrAcNtSCGrvWEm}. The stellar structure is expected to be in a long-lived stable equilibrium, and continuous gravitational waves (hereafter abbreviated to ``continuous waves'') could provide insights into this ground state of nuclear matter complementary to observations of binary neutron star mergers. While continuous waves have not yet been detected, prospects for a first detection continue to improve with more sensitive gravitational wave detectors. In addition, the data analysis techniques used to search for continuous wave signals continue to be refined; for reviews see \citet{Riles2013, Riles2017, Tenorio2021}. Searches for continuous waves cover a wide variety of parameters and include: known radio and X-ray pulsars~\citep{LIGOEtAl2020:GrvCnsEqElMlP, LIGOEtAl2021:DBSpLCnsGrvWEnYPPJ0, LIGOEtAl2021:CnsLODGrvEmDRGlPPJ0, LIGOEtAl2022:SCnGrvW20AcMllXPlOLD, LIGOEtAl2022:NrSCnLngTrGrvWKPLTOR, LIGOEtAl2022:SrGrvWKPTHrmSTLIObR}, likely neutron stars in supernova remnants~\citep{LIGOEtAl2021:SrCntGrvWYSpRETObRALV, LIGOEtAl2022:SEOLDCntGrvWCsVJSpRm}, and all-sky surveys for undiscovered neutron stars~\citep{LIGOVirg2021:ASEOLDCntGrvSgUnNtSBS, LIGOEtAl2021:AlSCntGrvWIsNtSEOLD, LIGOEtAl2022:AlSrGrvWEmsSBCASpBHLOD, CovaEtAl2022:CnsRMnMllNtSBSy}. 
In this paper, we study what macroscopic properties of neutron stars might be inferred using continuous waves. \citet{Sieniawska2021} have previously studied this question under the assumption that the neutron star loses rotational kinetic energy purely through gravitational wave radiation. Here we consider a more general model where energy is also lost through electromagnetic radiation, assuming the star possesses a dipole magnetic field. The population dynamics of neutron stars losing energy through both electromagnetic and gravitational radiation have previously been studied by \citet{Palomba2005, Knispel2008, Wade2012, Cieslar2021, ReedEtAl2021:MdGlNSPplUCnGrvS}. This paper is an initial attempt at studying the parameter estimation problem for such systems. This paper is organised as follows. Section~\ref{sec:background} introduces background information on continuous waves and their detection. Section~\ref{sec:framework} introduces the theoretical framework used to infer properties of the neutron star. Section~\ref{sec:MC} discusses how Monte Carlo simulations are used to estimate the errors of the estimated properties. Section~\ref{sec:results} presents the results of the Monte Carlo simulations. Section~\ref{sec:assumptions} considers some of the caveats and assumptions, and Section~\ref{sec:summary} summarises the results. \section{Background} \label{sec:background} This section presents background information relevant to the paper. Subsections \ref{sec:CW_signal} and \ref{sec:error_theory} introduce the basics of the signal model and the parameter estimation techniques, respectively, for continuous wave searches. \subsection{Continuous wave signal model} \label{sec:CW_signal} A continuous wave induces a strain $h(t)$ in a gravitational wave detector given by~\citep{Jaranowski1998}: \begin{equation} h(t) = \sum_{i=1}^4 \mathcal{A}_i h_i(t; \vec\lambda) \,. \label{eq:hoft} \end{equation} The four amplitudes $\{\mathcal{A}_i\}$ are functions of: the characteristic strain amplitude $h_0$, the inclination angle $\iota$ of the neutron star's angular momentum to the line of sight, a polarisation angle $\psi$ which fixes the principal axes of the two gravitational wave polarisations (``plus'' and ``cross''), and an arbitrary phase $\phi_0$ at a reference time $t_0$. Additional parameters, represented by $\vec\lambda$ in Eq.~\eqref{eq:hoft}, modify the phase of the signal; they include the star's sky position and, if the star is in a binary system, its orbital parameters. The characteristic strain amplitude $h_0$ of a continuous wave signal is~\citep{Jaranowski1998}: \begin{gather} h_0 = \frac{4\pi^2G}{c^4}\frac{I_{zz} \epsilon f^2}{r} \,, \label{eq:h0} \end{gather} where $r$ is the distance to the neutron star, $f$ is the gravitational wave frequency, $G$ is the gravitational constant, and $c$ is the speed of light. We model the neutron star as a tri-axial rotor~\citep{ZimmSzed1979:GrvWRtPrRBSMAppPl} with principal moments of inertia $(I_{xx}, I_{yy}, I_{zz})$, where the $z$ axis points along the star's rotation axis; the equatorial ellipticity $\epsilon = |I_{xx} - I_{yy}| / I_{zz}$ characterises the degree of non-axisymmetric deformation of the star. For a tri-axial rotor the gravitational wave frequency is conventionally assumed to be twice the star's rotational frequency~\citep{VanD2005:GrvWSpNnxFPrNtS, Sieniawska2021}. The radiation of rotational kinetic energy away from the neutron star via continuous waves, and possibly electromagnetic radiation, causes the star to spin down.
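For orientation, plugging fiducial values into Eq.~\eqref{eq:h0} -- an illustrative choice of $I_{zz} = 10^{38}$~kg\,m$^2$, $\epsilon = 10^{-6}$, $f = 100$~Hz, and $r = 1$~kpc, not drawn from this work -- gives $h_0 \sim 10^{-26}$:
\begin{verbatim}
import numpy as np

G, C, KPC = 6.674e-11, 2.998e8, 3.086e19   # SI units; 1 kpc in m

def h0(izz, eps, f_gw, r):
    """Characteristic strain amplitude (equation above)."""
    return 4 * np.pi**2 * G / C**4 * izz * eps * f_gw**2 / r

print(f"{h0(1e38, 1e-6, 100.0, KPC):.1e}")   # ~1.1e-26
\end{verbatim}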
We model the evolution of the gravitational wave frequency as a second-order Taylor expansion~\citep{Jaranowski1998}:
\begin{gather}
f(t) = f + \dot{f} t + \frac{1}{2}\ddot{f} t^2 \, ,
\label{eq:signal_f}
\end{gather}
where $f$ is the gravitational wave frequency, $\dot{f}$ and $\ddot{f}$ are its first and second time derivatives respectively, and all three parameters are defined at $t=t_0$. These parameters enter into Eq.~\eqref{eq:hoft} as phase parameters represented by $\vec\lambda$.

A useful quantification of the spin-down behaviour of a neutron star is its braking index~\citep{MancEtAl1985:ScMsrPlBrkIn}:
\begin{gather}
n = \frac{f\ddot{f}}{\dot{f}^2} \, .
\label{eq:braking_index}
\end{gather}
If a neutron star is spinning down purely through the emission of gravitational waves from a (mass-type) quadrupole moment, as given by Eq.~\eqref{eq:h0}, its braking index is $n=5$; alternatively, if the neutron star spins down only through electromagnetic radiation, its braking index is $n=3$~\citep{OstrGunn1969:NtrPlsITh}, but cf.\ \citet{Mela1997:SpObRtCrrOMgn}. A third possibility, which we do not consider in this paper, is the emission of gravitational waves from a current-type quadrupole moment due to $r$-modes~\citep{Ande1998:NClUnsMdRtRltS, LindEtAl1998:GrvRdInsHYnNtS}, for which one has $n=7$.

\subsection{Continuous wave parameter estimation}\label{sec:error_theory}

Detection of a continuous wave signal would measure its amplitude and phase to some degree of uncertainty, assuming that the true continuous wave signal does not deviate appreciably from the model described in Section~\ref{sec:CW_signal}. Bayesian inference is widely regarded as a robust method of inferring parameters of a signal model given a data-set and assumed priors on the parameters; for its application to continuous waves see \citet{DupuWoan2005:ByEstPlPrGrvWD, PitkEtAl2017:NSmCTrSrCntGrvWP}. As an initial attempt to study the errors in the parameters measured by a continuous wave detection, we instead adopt a simpler approach using the Fisher information matrix. While this approach is commonly used \citep{Sieniawska2021, Jaranowski1999}, the Fisher information matrix is strictly valid only in the case of high signal-to-noise ratios (a criterion for which is detailed in \citealt{Vallisneri2008}), which may not necessarily be the case for a first continuous wave detection. Further discussion of the weaknesses of the Fisher information matrix approach, such as the possibility of singular or ill-conditioned Fisher information matrices, is given in \citet{Vallisneri2008}. Notwithstanding these concerns, we use the Fisher information matrix because of its relative computational simplicity to arrive at a quantitative picture of parameter inference for continuous wave signals.

We now outline how Fisher information matrices can be used to approximate the error of the continuous wave parameters. Data analysis techniques that seek to identify continuous waves often quantify how closely an observed signal matches a template of possible signals. An intuitive picture of how the Fisher information matrix works is that it quantifies the maximal possible ``distance'' in parameter space between a true signal and the ``nearest'' template. That is, since the template bank forms a ``grid'' (which may not be uniform) in the parameter space, the maximal error of the parameter measurements comes from the size of the ``gaps'' in the template bank.
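Before outlining the Fisher information matrix calculation, the braking index itself is simple enough to illustrate numerically. A minimal Python sketch (all values are illustrative assumptions) inverts and then re-evaluates Eq.~\eqref{eq:braking_index}:
\begin{verbatim}
def braking_index(f, fdot, fddot):
    # Braking index, Eq. (braking_index): n = f * fddot / fdot^2
    return f * fddot / fdot**2

f, fdot = 1000.0, -1e-9           # Hz, Hz/s (illustrative)
for n_true in (3.0, 4.0, 5.0):    # pure dipole, mixed, pure quadrupole
    fddot = n_true * fdot**2 / f  # invert Eq. (braking_index)
    print(n_true, braking_index(f, fdot, fddot))
\end{verbatim}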
As will become clear in Section~\ref{sec:framework}, we are particularly interested in estimating errors in the three parameters $f$, $\dot{f}$, and $\ddot{f}$ of Eq.~\eqref{eq:signal_f} which govern the gravitational wave frequency evolution. To construct the Fisher information matrix for these parameters, we start with the phase of the continuous wave signal:
\begin{align}
\phi_{\text{spin}}(t) &= 2 \pi \int_0^{t} f(t') dt'\, , \label{eq:GW phase temp}\\
&= 2\pi \left[f t + \frac{1}{2}\dot{f} t^2 + \frac{1}{6} \ddot{f} t^3 \right] \, , \label{eq:GW phase}
\end{align}
where Eq.~\eqref{eq:signal_f} has been substituted into Eq.~\eqref{eq:GW phase temp}.

We next compute the parameter-space metric~\citep{BalaEtAl1996:GrvWClsBDStMCEsPr,Owen1996:STmGrvWInsBnCTmS} which quantifies the notion of ``distance'' between the true signal and a template. It is necessary to first define the time average operator:
\begin{gather}
\Big \langle x(t) \Big \rangle = \frac{1}{T} \int_{-T/2}^{T/2} x(t) dt \, ,
\end{gather}
where $x(t)$ is an arbitrary function, and $T$ is the time span of the gravitational wave observation. The parameter-space metric over $f$, $\dot{f}$, and $\ddot{f}$ is then given by~\citep{BradEtAl1998:SrcPrdSrLI,Prix2007}
\begin{gather}
\label{eq:template metric}
g_{i j} = \Bigg\langle \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(i)}} \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(j)}} \Bigg\rangle - \Bigg \langle \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(i)}} \Bigg \rangle \Bigg \langle \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(j)}} \Bigg \rangle \, ,
\end{gather}
with $i, j \in \{0, 1, 2\}$, $f^{(0)} = f$, $f^{(1)} = \dot{f}$, and $f^{(2)} = \ddot{f}$.

The covariance matrix for $f$, $\dot{f}$, and $\ddot{f}$ is given by the inverse Fisher information matrix $\Gamma^{ij}$~\citep{Vallisneri2008}, which, in turn, is defined in terms of the parameter-space metric~\citep{Prix2007}:
\begin{align}
\label{eq:Fisher matrix}
\Sigma(f, \dot{f}, \ddot{f}) &= \Gamma^{ij} \\
&= \frac{g^{ij}}{\rho^2} \,.
\end{align}
Here $\rho^2$ is the squared signal-to-noise ratio assuming an optimal match between the true signal and the best-fit template. For simplicity, in this paper we assume an expression for $\rho^2$ averaged over $\cos\iota$, $\psi$, and sky position~\citep{Prix2011}:
\begin{align}
\rho^2 &= \frac{4}{25}\frac{h_0^2 T}{S_h(f)} \, , \label{eq:rho2_using_h0} \\
&= \frac{4}{25}\frac{T}{\mathcal{D}^2} \, , \label{eq:rho2_using_depth}
\end{align}
where $S_h$ is the (single-sided) power spectral density of the strain noise in the gravitational wave detector, and Eq.~\eqref{eq:rho2_using_depth} defines the ``sensitivity depth''~\citep{BehnEtAl2015:PstMtUSCnGrvSGlC, DreiEtAl2018:FAcSnsEsCntSr}:
\begin{equation}
\mathcal{D} = \frac{ \sqrt{S_h(f)} }{ h_0 } \,.
\label{eq:sens-depth}
\end{equation}
We assume, again for simplicity, that the gravitational wave detector network is operational at 100\% duty cycle; in practice duty cycles of $\gtrsim 70\%$ are achieved for current detectors, but this is expected to improve over time~\citep{KAGREtAl2020:PrObLclGrvTrALAVK}.

Evaluating Eq.~\eqref{eq:Fisher matrix} using Eqs.~\eqref{eq:GW phase} and~\eqref{eq:template metric} gives the covariance matrix:
\begin{align}
\Sigma(f, \dot{f}, \ddot{f}) &= \frac{ \mathcal{D}^2 }{ \pi^2 }
\begin{pmatrix}
\frac{ 1875 }{ 16 T^3 } & 0 & -\frac{ 7875 }{ 2 T^5 } \\
0 & \frac{ 1125 }{ T^5} & 0 \\
-\frac{ 7875 }{ 2 T^5 } & 0 & \frac{ 157500 }{ T^7 }
\end{pmatrix} \, .
\label{eq:spindown_cov}
\end{align}

Of the four amplitude parameters $h_0$, $\cos\iota$, $\psi$, and $\phi_0$, only $h_0$ is potentially interesting for inferring neutron star properties, being a function of $I_{zz}$ and $\epsilon$ [Eq.~\eqref{eq:h0}]. The error in the $h_0$ measurement may be derived from the parameter-space metric over the amplitude parameters $\{\mathcal{A}_i\}$~\citep{Prix2007}, as outlined in \citet[Section 3.2]{Prix2011}. Averaging over the sky position of the neutron star~\citep[Eq.~122]{Prix2011}, as well as over $\psi$, gives:
\begin{align}
\label{eq:h0_error}
\sigma(h_0) &= \frac{ a \mathcal{D} h_0 }{ \sqrt{T} } \frac{ \sqrt{ b + \xi^2 } }{ 1 - \xi^2 } \,, \\
\xi &\equiv \cos\iota \,, \\
a &= 2 \sqrt{\frac{6}{301}\left(344 - 43\sqrt{2} - 8\sqrt{86}\right)} \approx 4.08 \,, \\
b &= \frac{43 \left(8 - 8\sqrt{2} - \sqrt{86}\right)}{43\sqrt{2} + 8\sqrt{86} - 344} \approx 2.59 \,.
\end{align}
Note that Eq.~\eqref{eq:h0_error} becomes infinite at $\xi = \pm 1$, due to a singularity in the coordinate transform between $\{\mathcal{A}_i\}$ and $\{h_0, \xi, \psi, \phi_0\}$; for this reason Eq.~\eqref{eq:h0_error} cannot be analytically averaged over $\xi$ with a prior range that includes $\pm 1$.

\section{Parameter estimation framework}
\label{sec:framework}

This section develops a framework for inferring three neutron star properties: its principal moment of inertia ($I_{zz}$), its ellipticity ($\epsilon$), and the component of the magnetic dipole moment perpendicular to its rotation axis ($m_p$, hereafter abbreviated to ``perpendicular magnetic moment''). It assumes that the neutron star is losing rotational kinetic energy (and hence spinning down) through both magnetic dipole radiation and gravitational wave (mass-type) quadrupolar radiation, and that no other mechanisms dissipate energy from the neutron star. This framework relies on the detection of a continuous wave signal to measure the frequency and spin-down parameters ($f$, $\dot{f}$, and $\ddot{f}$), and the characteristic strain amplitude ($h_0$). It also assumes that a measurement of the distance to the neutron star ($r$) is available.

Balancing the spin-down power with the luminosity of electromagnetic and gravitational radiation gives:
\begin{gather}
\left(\frac{dE}{dt}\right)_{\text{EM}} + \left(\frac{dE}{dt}\right)_{\text{GW}} = -\left(\frac{dE}{dt}\right)_{\text{rot}} \, .
\label{eq:energy_balance}
\end{gather}
The ellipticity of a neutron star is conventionally assumed to be relatively small~\citep{Sieniawska2021}, so the star is very close to spherical. The rotational kinetic energy of the star is then taken to be that of a rotating sphere~\citep{WettEtAl2008:SrGrvWvCssLI}:
\begin{gather}
\left(\frac{dE}{dt}\right)_{\text{rot}} = \pi^2 I_{zz} f \dot{f} \, .
\label{eq:rot_energy}
\end{gather}
The luminosity of a rotating magnetic dipole is~\citep{OstrGunn1969:NtrPlsITh, CondRans2016:EssRdAst}
\begin{gather}
\left(\frac{dE}{dt}\right)_{\text{EM}} = \frac{2m_p^2}{3c^3 \mu_0} ( \pi f )^4 \, ,
\label{eq:EM_energy}
\end{gather}
where $\mu_0$ is the vacuum permeability. Note that this is given in terms of the gravitational wave frequency, which is twice the rotational frequency, as discussed in Section~\ref{sec:CW_signal}. The gravitational wave luminosity of a (mass-type) quadrupole is~\citep{OstrGunn1969:NtrPlsITh, BlanEtAl2001:GrvRdThLgPrp}
\begin{gather}
\left( \frac{dE}{dt}\right)_{\text{GW}} = \frac{32G}{5c^5}I_{zz}^2 \epsilon^2 (\pi f)^6 \, .
\label{eq:GW_energy}
\end{gather}
In order to simplify the expressions, we introduce the constants
\begin{gather}
K_{\text{EM}} = \frac{2 \pi^2}{3c^3 \mu_0} \, , \qquad K_{\text{GW}} = \frac{32 G \pi^4}{5c^5} \, .
\end{gather}
We then substitute Eqs.~\eqref{eq:rot_energy}~--~\eqref{eq:GW_energy} into Eq.~\eqref{eq:energy_balance} and rearrange to get:
\begin{gather}
\dot{f} = -\frac{K_{\text{EM}} m_p^2 f^3}{I_{zz}} - K_{\text{GW}} I_{zz} \epsilon^2 f^5 \, .
\label{eq:simul_fd}
\end{gather}
Differentiating Eq.~\eqref{eq:simul_fd} with respect to time gives:
\begin{gather}
\ddot{f} = -\frac{3 K_{\text{EM}} m_p^2 f^2 \dot{f}}{I_{zz}} - 5 K_{\text{GW}} I_{zz} \epsilon^2 f^4 \dot{f} \, .
\label{eq:simul_fdd}
\end{gather}
Given that $\ddot{f}$ is measured as a separate parameter of the continuous wave signal model [Eq.~\eqref{eq:signal_f}], Eq.~\eqref{eq:simul_fdd} provides an additional constraint independent of Eq.~\eqref{eq:simul_fd}.

Equations~\eqref{eq:simul_fd} and~\eqref{eq:simul_fdd} depend on three unknowns: $I_{zz}$, $\epsilon$, and $m_p$. With the addition of Eq.~\eqref{eq:h0}, which also depends on $I_{zz}$ and $\epsilon$, we have three equations constraining the three unknowns, which may now be solved for:
\begin{align}
I_{zz} &= \frac{ K_{\text{GW}} c^8 r^2 h_0^2 f }{ 8\pi^4 G^2 \dot{f} ( 3 - n )} \, , \label{eq:Izz} \\
\epsilon &= \frac{ 2\pi^2 G \dot{f} ( 3 - n ) }{ K_{\text{GW}}c^4 r h_0 f^3 } \, , \label{eq:epsilon} \\
m_p &= \frac{ c^4 r h_0 }{ 4\pi^2 G f } \sqrt{ \frac{ K_{\text{GW}} ( n - 5 ) }{ K_{\text{EM}}(3 - n ) } } \, , \label{eq:mp}
\end{align}
where $n$ is the braking index of Eq.~\eqref{eq:braking_index}. Equations~\eqref{eq:Izz}~--~\eqref{eq:mp} remain valid provided that $3 < n < 5$, which is consistent with the power balance assumed by Eq.~\eqref{eq:energy_balance}. As discussed in Section~\ref{sec:CW_signal}, braking indices of 3 or 5 correspond to pure electromagnetic or gravitational wave radiation respectively; in either case, Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp} are no longer applicable. A combination of electromagnetic and gravitational wave radiation yields a braking index between 3 and 5; the loss of rotational kinetic energy through \emph{both} gravitational wave \emph{and} electromagnetic radiation is therefore a fundamental requirement of the framework outlined here.

\citet{Sieniawska2021} show that, for a neutron star only emitting continuous waves and not electromagnetic radiation, degeneracies prevent direct inference of the neutron star properties without a measurement of $r$, which is unlikely to be measurable without an electromagnetic counterpart.

Few other techniques exist to directly measure $I_{zz}$. \citet{Damour1988} propose a method which requires higher-order relativistic corrections to the periastron advance to be measurable, which is possible only for very rapidly-spinning binary pulsars. To date the method has only been applicable to the double pulsar system PSR~J0737$-$3039~\citep{Bejger2005, WorlEtAl2008:NclCnsMmInNtS, SteiEtAl2015:UNtSObsDtCThMITDfr, MiaoEtAl2022:MmInPJ07LIGNI}. Note that~\citet{MiaoEtAl2022:MmInPJ07LIGNI} assumes a neutron star equation of state, whereas the framework derived here does not. Other methods rely on separate measurements of the neutron star mass and radius through either electromagnetic observations and/or detection of gravitational waves from binary neutron star mergers~\citep{SteiEtAl2015:UNtSObsDtCThMITDfr, MiaoEtAl2022:MmInPJ07LIGNI}.
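The closed-form solutions of Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp} are straightforward to evaluate numerically. The following Python sketch (SI units throughout; the inputs are illustrative assumptions, and this is not the code used for the simulations in this paper) inverts the measured signal parameters for the three stellar properties:
\begin{verbatim}
import numpy as np

G, c, mu0 = 6.674e-11, 2.998e8, 4e-7 * np.pi
K_EM = 2 * np.pi**2 / (3 * c**3 * mu0)
K_GW = 32 * G * np.pi**4 / (5 * c**5)

def infer_properties(f, fdot, fddot, h0, r):
    # Invert Eqs. (Izz)--(mp); valid only for 3 < n < 5
    n = f * fddot / fdot**2
    assert 3.0 < n < 5.0, "framework requires mixed EM/GW spin-down"
    Izz = (K_GW * c**8 * r**2 * h0**2 * f
           / (8 * np.pi**4 * G**2 * fdot * (3 - n)))
    eps = (2 * np.pi**2 * G * fdot * (3 - n)
           / (K_GW * c**4 * r * h0 * f**3))
    mp = (c**4 * r * h0 / (4 * np.pi**2 * G * f)
          * np.sqrt(K_GW * (n - 5) / (K_EM * (3 - n))))
    return Izz, eps, mp

# Illustrative signal: f = 1000 Hz, fdot = -1e-9 Hz/s, n = 4,
# h0 = 8.1e-25, r = 1 kpc
f, fdot = 1000.0, -1e-9
fddot = 4.0 * fdot**2 / f
print(infer_properties(f, fdot, fddot, 8.1e-25, 3.086e19))
# -> Izz ~ 2e38 kg m^2, eps ~ 3.8e-7, mp ~ 2.3e19 T m^3
\end{verbatim}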
It is difficult, however, to measure both properties simultaneously for one and the same neutron star~\citep{MillEtAl2019:PJ0MRdNDImpPrpNtSM}. No method exists for directly measuring $\epsilon$ other than through a continuous wave detection. While $m_p$ (or equivalently the surface magnetic field strength $B$) is routinely inferred by assuming pure magnetic dipole radiation from known pulsars~\citep{Kram2005:Pls}, a measurement of $m_p$ from a mixed electromagnetic/gravitational wave pulsar would be of interest, as it would provide an independent verification of the existing measurements or provide insight into neutron stars with different energy loss mechanisms.

The errors in the inferred neutron star properties ($I_{zz}$, $\epsilon$, $m_p$) have the following dependencies:
\begin{itemize}
\item The errors of the inferred properties ($\Delta I_{zz}$, $\Delta \epsilon$, and $\Delta m_p$) depend on $\Delta f$, $\Delta \dot{f}$, $\Delta \ddot{f}$, $\Delta h_0$, and $\Delta r$ [Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}];
\item The errors of the spin-down parameters ($\Delta f$, $\Delta \dot{f}$, $\Delta \ddot{f}$) depend on $T$ and $\mathcal{D}$ [Eq.~\eqref{eq:Fisher matrix}];
\item The error $\Delta h_0$ depends on $T$, $\mathcal{D}$, $h_0$ [or equivalently $S_h$; Eq.~\eqref{eq:sens-depth}], and $\xi$ [Eq.~\eqref{eq:h0_error}];
\item The error $\Delta r$ is independent of the other parameters.
\end{itemize}
Therefore, we see that the errors in $I_{zz}$, $\epsilon$, and $m_p$ depend entirely on: the observation time ($T$); the strength of the continuous wave signal relative to the detector noise ($\mathcal{D}$, $h_0$); the ratio of gravitational wave ``plus'' and ``cross'' polarisations ($\xi$); and the uncertainty in the distance to the star ($\Delta r$).

An estimate of the relative errors in $I_{zz}$, $\epsilon$, and $m_p$, and their dependence on the parameters $\Lambda = \{ f, \dot{f}, \ddot{f}, h_0, r \}$, may be arrived at through differential error analysis~\citep{BenkEtAl2018:EPrpCMAApAdDsdCn}:
\begin{equation}
\label{eq:rel-err-dea}
\frac{ \sigma(I_{zz})^2 }{ I_{zz}^2 } = \frac{1}{ I_{zz}^2 } \sum_{x, y \in \Lambda} \left(\frac{ \partial I_{zz} }{ \partial x }\right) \left(\frac{ \partial I_{zz} }{ \partial y }\right) \begin{cases} \sigma(x)^2 & x = y \,, \\ \Sigma(x, y) & x \ne y \,, \end{cases}
\end{equation}
and similarly for $\sigma(\epsilon)^2 / \epsilon^2$ and $\sigma(m_p)^2 / m_p^2$, where $\sigma(x)$ is the standard deviation in the quantity $x$ and $\Sigma(x,y)$ is the covariance between the quantities $x$ and $y$. This analysis yields, to third order in $1/T$:
\begin{align}
\label{eq:Izz-rel-err-dea}
\frac{ \sigma(I_{zz})^2 }{ I_{zz}^2 } &= \frac{ 4 \sigma(r)^2 }{ r^2 } + \frac{ 4 \sigma(h_0)^2 }{ h_0^2 } + \frac{16875 \mathcal{D}^2}{16 \pi ^2 f^2 (n-3)^2 T^3} \,, \\
\label{eq:eps-rel-err-dea}
\frac{ \sigma(\epsilon)^2 }{ \epsilon^2 } &= \frac{ \sigma(r)^2 }{ r^2 } + \frac{ \sigma(h_0)^2 }{ h_0^2 } + \frac{1875 \mathcal{D}^2 (9 - 2n)^2}{16 \pi ^2 f^2 (n-3)^2 T^3} \,, \\
\label{eq:mp-rel-err-dea}
\frac{ \sigma(m_p)^2 }{ m_p^2 } &= \frac{ \sigma(r)^2 }{ r^2 } + \frac{ \sigma(h_0)^2 }{ h_0^2 } + \frac{1875 \mathcal{D}^2 (n^2 - 9n + 15)^2}{16 \pi ^2 f^2 (n-5)^2 (n-3)^2 T^3} \, .
\end{align}
The leading-order terms of the relative errors in $I_{zz}$, $\epsilon$, and $m_p$ are the relative errors in $h_0$ and $r$.
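The $T$ dependence of these expressions is easy to explore numerically; a brief sketch (illustrative parameters, with $\sigma(h_0)/h_0$ taken from Eq.~\eqref{eq:h0_error} at $\xi = 0$) evaluates Eq.~\eqref{eq:Izz-rel-err-dea}:
\begin{verbatim}
import numpy as np

YEAR = 3.156e7  # seconds per year

def rel_err_Izz(T, D, f, n, sig_r_rel, xi=0.0):
    # Eq. (Izz-rel-err-dea); sigma(h0)/h0 from Eq. (h0_error)
    a, b = 4.08, 2.59
    sig_h0_rel = a * D * np.sqrt(b + xi**2) / ((1 - xi**2) * np.sqrt(T))
    var = (4 * sig_r_rel**2 + 4 * sig_h0_rel**2
           + 16875 * D**2 / (16 * np.pi**2 * f**2 * (n - 3)**2 * T**3))
    return np.sqrt(var)

# Illustrative: D = 30 Hz^(-1/2), f = 1000 Hz, n = 4, sigma(r)/r = 0.2
for T_yr in (0.5, 1.0, 2.0, 4.0):
    print(T_yr, rel_err_Izz(T_yr * YEAR, 30.0, 1000.0, 4.0, 0.2))
# -> decreases towards the distance-dominated floor 2 sigma(r)/r = 0.4
\end{verbatim}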
Note that $\sigma (r) / r$ is independent of $T$, $\sigma(h_0) / h_0$ scales with $T^{-1/2}$ [Eq.~\eqref{eq:h0_error}], and the remaining terms in Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea} scale with $T^{-3}$ or smaller. Since the distance error $\sigma(r) / r$ is assumed to be constant, in the limit of $T \to \infty$ the relative errors asymptote to:
\begin{gather}
\lim_{T\to \infty}\frac{\sigma(I_{zz})}{I_{zz}} = \frac{2\sigma(r)}{r} \,, \label{eq:Izz_limiting_err} \\
\lim_{T\to \infty}\frac{\sigma(\epsilon)}{\epsilon} = \frac{\sigma(r)}{r} \,, \label{eq:epsilon_limiting_err} \\
\lim_{T\to \infty}\frac{\sigma(m_p)}{m_p} = \frac{\sigma(r)}{r} \,. \label{eq:mp_limiting_err}
\end{gather}
The asymptotic error in $I_{zz}$ is twice that of the other properties because $I_{zz} \propto r^2$ [Eq.~\eqref{eq:Izz}], whereas for the other two properties $\epsilon \propto r^{-1}$ and $m_p \propto r$ [Eqs.~\eqref{eq:epsilon}~--~\eqref{eq:mp}].

\section{Monte Carlo simulations}
\label{sec:MC}

The framework presented in Section~\ref{sec:framework} shows that it is possible to infer three neutron star properties using a continuous wave detection. In this section, we describe how Monte Carlo simulations were used to quantify the accuracy with which these properties may be inferred from a detection.

The inference relies on five parameters ($f$, $\dot{f}$, $n$, $h_0$, $r$) [Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}], and the errors of the inference depend on four additional parameters ($T$, $\mathcal{D}$, $\xi$, $\Delta r$) as well as $h_0$. In our simulations we choose to input values of $I_{zz}$ instead of $h_0$, through rearrangement of Eq.~\eqref{eq:h0}. A chosen value of $h_0$ would directly fix the detectability of the continuous wave signal, and could only be viewed as an optimistic or pessimistic case; choices of $I_{zz}$, in comparison, relate only to the neutron star's internal physics. While larger values of $I_{zz}$ implicitly lead to a louder continuous wave signal, the loudness also depends on the other neutron star parameters, so $I_{zz}$ does not relate as directly to the signal detectability.

The signal from a neutron star is simulated as the set of input values for the nine parameters ($f^{\text{in}}$, $\dot{f}^{\text{in}}$, $n^{\text{in}}$, $I_{zz}^{\text{in}}$, $r^{\text{in}}$, $T^{\text{in}}$, $\mathcal{D}^{\text{in}}$, $\xi^{\text{in}}$, $\Delta r^{\text{in}}$). The properties of the neutron star emitter $(I_{zz}^{\text{in}}, \epsilon^{\text{in}}, m_p^{\text{in}})$ can then be calculated using Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}. Measurement errors $(\delta f, \delta \dot{f}, \delta \ddot{f})$ in the simulated $f^{\text{in}}$, $\dot{f}^{\text{in}}$, and $\ddot{f}^{\text{in}}$ (via $n^{\text{in}}$) are drawn from a multivariate normal distribution with covariance matrix given by Eq.~\eqref{eq:spindown_cov}; the measurement error $\delta h_0$ in $h_0^{\text{in}}$ is drawn from a normal distribution with standard deviation given by Eq.~\eqref{eq:h0_error}. All other covariances between the parameters $(f^{\text{in}}, \dot{f}^{\text{in}}, \ddot{f}^{\text{in}}, h_0^{\text{in}})$ are assumed to be zero. The measured parameters of the continuous wave signal are then
\begin{equation}
\label{eq:MC-output-parameters}
\begin{aligned}
f^{\text{out}} &= f^{\text{in}} + \delta f \,, & \dot{f}^{\text{out}} &= \dot{f}^{\text{in}} + \delta \dot{f} \,, \\
\ddot{f}^{\text{out}} &= \ddot{f}^{\text{in}} + \delta \ddot{f} \,, & h_0^{\text{out}} &= h_0^{\text{in}} + \delta h_0 \,.
\end{aligned}
\end{equation}
Substitution of $(f^{\text{out}}, \dot{f}^{\text{out}}, \ddot{f}^{\text{out}}, h_0^{\text{out}})$ into Eqs.~\eqref{eq:h0},~\eqref{eq:braking_index} and~\eqref{eq:Izz}--\eqref{eq:mp} gives the inferred neutron star properties $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$, which may then be compared to $(I_{zz}^{\text{in}}, \epsilon^{\text{in}}, m_p^{\text{in}})$. We repeat this process for $10^6$ samples. Below we describe the Monte Carlo procedure in further detail.

\subsection{Choice of input parameters}
\label{sec:MC_params}

Nine variables control the outputs of the Monte Carlo simulations: $f$, $\dot{f}$, $n$, $I_{zz}$, $r$, $T$, $\mathcal{D}$, $\xi$, $\Delta r$.

We consider an observation time in the range $T = 0.5$--$4$ years. One can expect gravitational wave detector observing runs to last at least a year~\citep{KAGREtAl2020:PrObLclGrvTrALAVK}. A continuous wave signal detected in a year-long observing run may then be followed up in future and/or archival data.

The neutron star distance $r^{\text{in}}$ is fixed to $\SI{1}{kpc}$ for simplicity. Such a distance is within the range where all-sky continuous wave surveys are sensitive to neutron stars with ellipticities $\epsilon \gtrsim 10^{-6}$ and emitting at frequencies $f \gtrsim \SI{100}{Hz}$~\citep{LIGOEtAl2021:AlSCntGrvWIsNtSEOLD}. Given that the neutron star properties depend on the product $r h_0$ [Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}], a choice of a smaller (larger) distance would be equivalent to simulating a larger (smaller) $h_0$. The fractional uncertainty in $r$ is chosen to be $\sigma(r) / r = 20\%$. While radio pulsar distances (inferred through dispersion measures) exhibit appreciable variation and are susceptible to biases~\citep{Verbiest2012}, a typical measurement uncertainty of $\sim 20\%$ is not unreasonable~\citep{Taylor1993, Yao2017}, and indeed is expected to be readily achievable with next-generation radio telescopes~\citep{Smits2011}.

We explore a range of sensitivity depths $\mathcal{D} = 30$--$\SI{150}{Hz^{-1/2}}$. The lower end of the range is consistent with the sensitivities typical of all-sky continuous wave surveys for isolated neutron stars~\citep[Table~I]{DreiEtAl2018:FAcSnsEsCntSr}; given the wide parameter space of $f$, $\dot{f}$, and sky position these searches must cover, their sensitivities are typically lower than targeted continuous wave searches. The upper end of the range is a conservative choice for searches targeting known pulsars~\citep[Table~V]{DreiEtAl2018:FAcSnsEsCntSr}; these searches cover a much smaller parameter space around the pulsar, and can afford the computational cost of performing an optimal matched filter analysis to maximise sensitivity. The range in $\mathcal{D}$ represents two possible scenarios for a first continuous wave detection. A continuous wave candidate initially found in an all-sky survey (with $\mathcal{D} \sim \SI{30}{Hz^{-1/2}}$) would be followed up with more sensitive analyses, increasing its signal-to-noise ratio significantly and yielding a strongly-detected signal. On the other hand, given that searches for continuous waves from known pulsars already employ the most sensitive methods (and hence have $\mathcal{D} \gtrsim \SI{150}{Hz^{-1/2}}$), any signal may initially only be marginally detectable until more sensitive data becomes available.
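For the observation times and sensitivity depths just discussed, the error model of Section~\ref{sec:error_theory} can be evaluated directly. A minimal sketch (illustrative values; the standard deviations are read off the diagonal of Eq.~\eqref{eq:spindown_cov}):
\begin{verbatim}
import numpy as np

YEAR = 3.156e7  # seconds per year

def spindown_sigmas(T, D):
    # Diagonal of the covariance matrix, Eq. (spindown_cov)
    sig_f     = np.sqrt(1875.0 / 16.0) * D / (np.pi * T**1.5)
    sig_fdot  = np.sqrt(1125.0)        * D / (np.pi * T**2.5)
    sig_fddot = np.sqrt(157500.0)      * D / (np.pi * T**3.5)
    return sig_f, sig_fdot, sig_fddot

# Illustrative: a 1-year observation at D = 30 Hz^(-1/2)
print(spindown_sigmas(1.0 * YEAR, 30.0))
# sigma(fddot) scales as D / T^(7/2); doubling T shrinks it ~11x
\end{verbatim}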
We draw the moment of inertia from the widely accepted range for neutron stars of $I_{zz}^{\text{in}} \in [1, 3]{\times}10^{38} \, \si{kg.m^2}$~\citep{MolnOstg1985:ClcMMmInrNtSt, Bejger2005, WorlEtAl2008:NclCnsMmInNtS, KramEtAl2021:StrGrvTsDbPl, MiaoEtAl2022:MmInPJ07LIGNI}. Ranges for $\epsilon$ and $m_p$ are less well constrained; estimates for $\epsilon$ range from $\sim 10^{-11}$~\citep{BonaGour1996:GrvWPlEmMgFIDst} to $\sim 10^{-4}$~\citep{Owen2005:MxElDfrCmSEEqtS}. Based on observations of radio pulsars and magnetars, the surface magnetic field strength $B = m_p / R^3$ (where $R$ is the neutron star radius) may range from $\sim 10^{8}$ to $\sim 10^{15}$~Gauss~\citep{Reis2001:MgnFlNtStOvr}. Certain values of $(\epsilon, m_p)$ drawn from these ranges represent neutron stars which spin down within timescales of seconds to days, and which would therefore be impossible to detect as continuous wave sources. To exclude such regions of the $\epsilon$--$m_p$ space, we instead draw values of $f^{\text{in}}$ and $\dot{f}^{\text{in}}$ from ranges which are typical of parameter spaces for all-sky continuous wave surveys~\citep{LIGOEtAl2021:AlSCntGrvWIsNtSEOLD}:
\begin{gather}
f^{\text{in}} \in [50, 2000]~\si{Hz} \,, \qquad \dot{f}^{\text{in}} \in [-10^{-8}, -10^{-12}]~\si{Hz.s^{-1}} \,.
\end{gather}
A braking index is also drawn from $n^{\text{in}} \in (3, 5)$, which is used to compute $\ddot{f}^{\text{in}}$ via Eq.~\eqref{eq:braking_index}, and ($f^{\text{in}}$, $\dot{f}^{\text{in}}$, $\ddot{f}^{\text{in}}$) are then used to compute $\epsilon^{\text{in}}$ and $m_p^{\text{in}}$ via Eqs.~\eqref{eq:epsilon} and~\eqref{eq:mp}. Having fixed $r^{\text{in}}$, chosen $I_{zz}^{\text{in}}$ and $f^{\text{in}}$, and chosen $\epsilon^{\text{in}}$ via $(f^{\text{in}}, \dot{f}^{\text{in}}, \ddot{f}^{\text{in}})$, $h_0^{\text{in}}$ may now be calculated via Eq.~\eqref{eq:h0}. A choice of $\mathcal{D}$ then fixes $S_h$ via Eq.~\eqref{eq:sens-depth}.

In this paper, we do not assume a specific gravitational wave detector configuration (e.g.\ by setting $S_h$ to the noise power spectral density of a current or future detector). Instead, we assume that the sensitivity to continuous waves is calibrated by $r$ (the distance out to which signals could be detected) and $\mathcal{D}$ (how deeply the data analysis method can dig into the noise to extract weak signals). More sensitive gravitational wave detectors will increase the distances $r$ at which continuous wave signals may be detected, while improved data analysis methods will increase our sensitivity to signals, allowing $\mathcal{D}$ to increase.

The cosine of the inclination angle is strictly bounded: $|\xi| \le 1$. As noted in Section~\ref{sec:error_theory}, however, the error in $h_0$ [Eq.~\eqref{eq:h0_error}] becomes infinite at $|\xi| = 1$ due to a coordinate singularity. This is a limitation of the analytic Fisher information matrix approach to error estimation adopted in this paper. That said, the likelihood of sampling a value of $|\xi| \approx 1$ is negligible. The use of median and percentile differences to compare input and output parameters (Section~\ref{sec:norm-relat-errors}) also guards against degraded Monte Carlo samples where $|\xi|$ approaches 1. An alternative approach would have been to assume a particular inclination angle, e.g.\ $\xi = 0$~\citep[cf.][]{Sieniawska2021}.
\subsection{Computation of output parameters}
\label{sec:comp-outp-param}

Having selected the input parameters, output parameters $(f^{\text{out}}, \dot{f}^{\text{out}}, \ddot{f}^{\text{out}}, h_0^{\text{out}})$ are computed via Eq.~\eqref{eq:MC-output-parameters}. An output braking index $n^{\text{out}}$ may then be computed via Eq.~\eqref{eq:braking_index}.

Computation of $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ requires $3 < n^{\text{out}} < 5$; this is not guaranteed, and can be violated if $n^{\text{in}} \approx 3$ or $n^{\text{in}} \approx 5$ and the errors $\delta f$, $\delta \dot{f}$, and/or $\delta \ddot{f}$ are also large. Where $3 < n^{\text{out}} < 5$ is not satisfied, the Monte Carlo sample is simply discarded. At shorter $T$ ($\lesssim 0.5$ years), a sizeable fraction ($\gtrsim 80\%$) of the samples must be discarded. This fraction decreases with longer $T$, and often becomes a negligible effect ($\lesssim 1\%$) once $T \gtrsim 1$~year, but depends on the exact parameters of the simulation. While this limitation may impede inference of the properties of a neutron star which is emitting almost purely electromagnetic or gravitational radiation ($n^{\text{in}} \approx 3$ or $\approx 5$ respectively), it is unlikely to be an impediment where an appreciable fraction of the star's rotational kinetic energy is radiated through both mechanisms.

\subsection{Comparison of inputs and outputs}
\label{sec:norm-relat-errors}

The Monte Carlo simulations described above result in pairs of input and output neutron star properties $(I_{zz}^{\text{in}}, I_{zz}^{\text{out}}) \in \mathcal{MC}^{I_{zz}}$, $(\epsilon^{\text{in}}, \epsilon^{\text{out}}) \in \mathcal{MC}^{\epsilon}$, and $(m_p^{\text{in}}, m_p^{\text{out}}) \in \mathcal{MC}^{m_p}$, where $\mathcal{MC}$ denotes the results of the simulations for a particular property. We quantify the agreement between input and output properties using the median relative error over each set:
\begin{equation}
\label{eq:median-relative-error}
\mathcal{E}(I_{zz}) \equiv \median \Bigg\{\, \frac{ | I_{zz}^{\text{out}} - I_{zz}^{\text{in}} | }{ I_{zz}^{\text{in}} } \,\Bigg|\, (I_{zz}^{\text{in}}, I_{zz}^{\text{out}}) \in \mathcal{MC}^{I_{zz}} \,\Bigg\} \,,
\end{equation}
and similarly for $\mathcal{E}(\epsilon)$ and $\mathcal{E}(m_p)$.

From the differential error analysis of Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea} it is expected that, as $T$ increases, $\mathcal{E}$ will asymptote to a value determined by the error in the distance $r$. We therefore define normalised relative errors which asymptote to unity in the limit of $T \to \infty$:
\begin{gather}
\label{eq:norm-median-relative-error}
\bar{\mathcal{E}}(I_{zz}) = \frac{ \mathcal{E}(I_{zz}) }{ 2 \mathcal{E}(r) } \,, \\
\bar{\mathcal{E}}(\epsilon) = \frac{ \mathcal{E}(\epsilon) }{ \mathcal{E}(r) } \,, \\
\bar{\mathcal{E}}(m_p) = \frac{ \mathcal{E}(m_p) }{ \mathcal{E}(r) } \,,
\end{gather}
where $\mathcal{E}(r)$ is the median relative error in $r$. Note that $\mathcal{E}(I_{zz})$ is normalised by $2 \mathcal{E}(r)$ due to the quadratic dependence of $I_{zz}$ on $r$; see Section~\ref{sec:framework} and Eqs.~\eqref{eq:Izz} and~\eqref{eq:Izz_limiting_err}.

We have assumed (Section~\ref{sec:MC_params}) a relative error in $r$ of 20\%, i.e.\ samples of $(r^{\text{out}} - r^{\text{in}}) / r^{\text{in}}$ are drawn from a normal distribution $\mathcal{N}(0, 0.2)$ with mean zero and standard deviation $0.2$.
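A condensed sketch of the Monte Carlo procedure described above (Python; simplified to a single fixed input signal rather than the full parameter ranges of Section~\ref{sec:MC_params}, and tracking only the braking index for brevity):
\begin{verbatim}
import numpy as np

YEAR = 3.156e7
rng  = np.random.default_rng(0)

# Fixed illustrative input signal
f_in, fdot_in, n_in = 1000.0, -1e-9, 4.0
fddot_in = n_in * fdot_in**2 / f_in
D, T = 30.0, 1.0 * YEAR

# Spin-down covariance matrix, Eq. (spindown_cov)
C = (D**2 / np.pi**2) * np.array(
    [[1875/(16*T**3),  0.0,        -7875/(2*T**5)],
     [0.0,             1125/T**5,   0.0          ],
     [-7875/(2*T**5),  0.0,         157500/T**7  ]])

N = 100_000
df, dfd, dfdd = rng.multivariate_normal(np.zeros(3), C, size=N).T
f_out, fdot_out, fddot_out = f_in + df, fdot_in + dfd, fddot_in + dfdd
n_out = f_out * fddot_out / fdot_out**2

ok = (n_out > 3.0) & (n_out < 5.0)  # discard invalid samples
rel_err = np.abs(n_out[ok] - n_in) / n_in
print(f"kept {ok.mean():.1%}, median |dn|/n = {np.median(rel_err):.4f}")
\end{verbatim}
In the full simulations, the output properties $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ would then follow from the retained samples via Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}, together with $h_0^{\text{out}}$ and a perturbed distance $r^{\text{out}}$.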
Drawing $\sim 10^8$ samples from this distribution gives:
\begin{align}
\mathcal{E}(r) &= \median \Bigg\{\, \frac{ | r^{\text{out}} - r^{\text{in}} | }{ r^{\text{in}} } \,\Bigg|\, \frac{ r^{\text{out}} - r^{\text{in}} }{ r^{\text{in}} } \sim \mathcal{N}(0, 0.2) \,\Bigg\} \\
&\approx 0.135 \,,
\end{align}
consistent with the analytic result that the median of $|X|$ for $X \sim \mathcal{N}(0, \sigma)$ is $\approx 0.6745\sigma$. We therefore expect $\mathcal{E}(I_{zz})$ to asymptote to $\sim 27\%$, and $\mathcal{E}(\epsilon)$ and $\mathcal{E}(m_p)$ to asymptote to $\sim 14\%$, at sufficiently large $T$.

\section{Results}
\label{sec:results}

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/Single_NS.png}
\caption{ Convergence of $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ to $(I_{zz}^{\text{in}}, \epsilon^{\text{in}}, m_p^{\text{in}})$ as a function of observation time $T$. Here $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $n^{\text{in}} = 4$, $f^{\text{in}} = \SI{1000}{Hz}$, $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, which implies $h_0 = \num{8.1e-25}$, $\epsilon = \num{3.8e-7}$, and $m_p = \SI{2.3e19}{T.m^3}$. The input values (dashed lines) are plotted against the median, 16th, and 84th percentiles for $10^6$ output value samples. }
\label{fig:single_NS}
\end{figure}

Figure~\ref{fig:single_NS} illustrates how the errors in the inferred neutron star properties scale with observation time. Here the inputs are fixed to the representative values $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $\epsilon^{\text{in}} = \num{3.8e-7}$, and $m_p^{\text{in}} = \SI{2.3e19}{T.m^3}$, and output values $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ are simulated for different $T$, assuming a sensitivity depth $\mathcal{D} = \SI{30}{Hz^{-1/2}}$. As expected, the errors of the inferred parameters decrease with increasing observation time. For $T \gtrsim \SI{2}{years}$ the errors in $I_{zz}$, $\epsilon$, and $m_p$ asymptote to the error due to $r$, consistent with Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea}. We neglect the possibility that the error in distance may be improved over time if better models of the galactic electron density distribution become available.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/Stability_time_vs_depth.png}
\caption{ $I_{zz}$ stability time versus sensitivity depth $\mathcal{D}$ for braking indices $n^{\text{in}} = 3.01, 3.1, 4.99$. Here $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, and $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, with values for $h_0$, $\epsilon$, and $m_p$ implied by Eqs.~\eqref{eq:h0},~\eqref{eq:epsilon}, and~\eqref{eq:mp} respectively. Plotted are a subsample of the results from $10^6$ samples (light-coloured dots) and best-fit curves (dark-coloured lines). }
\label{fig:sensitivity_depth}
\end{figure}

We define the ``stability time'' for $I_{zz}$, $\epsilon$, and $m_p$ as the time required for the normalised relative errors $\bar{\mathcal{E}}$ of each property to reach $1.1$, i.e.\ to come within 10\% of the asymptotic distance error [see Eq.~\eqref{eq:norm-median-relative-error}]. Figure~\ref{fig:sensitivity_depth} plots the stability time for $I_{zz}$ as a function of $n$ and $\mathcal{D}$ for signals with $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, and $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$.
We see that, for continuous wave signals at $\mathcal{D} \sim \SI{30}{Hz^{-1/2}}$ initially detected in an all-sky survey, the asymptotic error in $I_{zz}$ is approached after a few years of observing with a fully-coherent follow-up search, which would include analysing both archival and future data. For continuous waves detected from known pulsars, where $\mathcal{D} \approx \SI{150}{Hz^{-1/2}}$, the asymptotic errors in $I_{zz}$ are not approached until the star is observed for $T \approx \SI{20}{years}$, which is an unrealistic time span to consider. Note, however, that the definition of ``stability time'' here assumes the detector sensitivity $S_h$ remains constant; in reality $S_h$ is likely to decrease over time~\citep{KAGREtAl2020:PrObLclGrvTrALAVK}, particularly if third-generation gravitational wave detectors are constructed~\citep{BailEtAl2021:GrvPhAst2020}. Such improvements would decrease the sensitivity depth $\mathcal{D}$ of a detected signal, such that the inferred parameters would converge to the asymptotic distance error faster than suggested in Figure~\ref{fig:sensitivity_depth}.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/Rel_err_vs_n.png}
\caption{ Normalised relative errors $\bar{\mathcal{E}}$ as a function of braking index $n^{\text{in}}$. Here $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, with values for $h_0$, $\epsilon$, and $m_p$ implied by Eqs.~\eqref{eq:h0},~\eqref{eq:epsilon}, and~\eqref{eq:mp} respectively. Plotted are a subsample of the results from $10^6$ samples (light-coloured dots) and best-fit curves (dark-coloured lines). }
\label{fig:braking_index}
\end{figure}

Figure~\ref{fig:braking_index} plots normalised relative errors in $I_{zz}$, $\epsilon$, and $m_p$ as functions of $n$ for signals with $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$. The neutron star properties $I_{zz}$ and $\epsilon$ are best estimated where the star is losing almost all energy in gravitational waves ($n \approx 5$), as expected. On the other hand, the electromagnetic property $m_p$ is best estimated where the star is losing its energy through both electromagnetic and gravitational radiation ($n \approx 4$). When energy loss through electromagnetic radiation is more dominant ($n \approx 3$), the errors for all three properties are larger than in the $n \approx 4$ case. This is consistent with Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea}, and arises because the continuous wave observation cannot measure the spin-down parameters accurately when the neutron star only weakly emits continuous waves. However, note that these results are based on the techniques described in Section~\ref{sec:framework}. It may be possible for electromagnetic astronomers to use alternative techniques to measure $m_p$ with lower errors for neutron stars with certain braking indices.

\begin{figure*}
\includegraphics[width=\textwidth]{Figures/Heatmap.png}
\caption{ Normalised relative errors (top to bottom rows) $\bar{\mathcal{E}}(I_{zz})$, $\bar{\mathcal{E}}(\epsilon)$, $\bar{\mathcal{E}}(m_p)$, for braking indices (left to right columns) $n = 3.01$, $3.1$, $4.99$, as functions of $f$ and $\dot{f}$. Plotted are the median errors over $I_{zz}$ and $\xi$, and for $T = 1$~year and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$.
The different colours represent the values of the normalised relative error, and the white areas indicate where $\bar{\mathcal{E}} \ge 30.0$ and/or where $3 < n^{\text{out}} < 5$ no longer holds. A total of $10^6$ samples were used in the plots. }
\label{fig:heatmap}
\end{figure*}

Figure~\ref{fig:heatmap} plots normalised relative errors $\bar{\mathcal{E}}(I_{zz})$, $\bar{\mathcal{E}}(\epsilon)$, and $\bar{\mathcal{E}}(m_p)$ as functions of $n$, $f$, and $\dot{f}$, taking the median errors over the sampled ranges of $I_{zz}$ and $\xi$ given in Section~\ref{sec:MC_params}. We assume $T = 1$~year and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, which is relevant to a continuous wave signal detected in an all-sky continuous wave survey. The errors in all three neutron star properties are smallest at the highest spin-down rates ($\dot{f} \approx \SI{-e-8}{Hz.s^{-1}}$), where the rate of rotational kinetic energy loss from the star is highest, and at the lowest frequencies ($f \approx \SI{50}{Hz}$). Once $|\dot{f}| \lesssim \SI{e-11}{Hz.s^{-1}}$, the errors are sufficiently large that $n$ cannot be reliably measured, the restriction $3 < n^{\text{out}} < 5$ is no longer satisfied, and most Monte Carlo samples must be discarded (see Section~\ref{sec:comp-outp-param}). For each heatmap, the error as a function of $|\dot{f}|$ increases more rapidly for lower $f$ than for higher $f$, consistent with the $f^{-2}$ dependence of the $T^{-3}$ terms in Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea}.

Figure~\ref{fig:heatmap} suggests that normalised relative errors $\bar{\mathcal{E}} \lesssim 1.2$ are achievable over much of the $f$--$\dot{f}$ parameter space typically searched for continuous waves, and particularly for rapidly spinning-down sources ($|\dot{f}| \gtrsim \SI{e-9}{Hz.s^{-1}}$). This implies errors in $I_{zz}$ of $\sim 32\%$, and errors in $\epsilon$ and $m_p$ of $\sim 16\%$. Given that models of non-axisymmetrically deformed neutron stars~\citep{BonaGour1996:GrvWPlEmMgFIDst, UshoEtAl2000:DfrAcNtSCGrvWEm, Cutl2002:GrvWvNtSLTrdBFl, Owen2005:MxElDfrCmSEEqtS, PaynMela2006:FrSGrvRGHydOsMgCMANS, HaskEtAl2008:MdlMgnDfNtSt, VigeMela2009:IEDtcGrvRdMgCMAcNS, WettEtAl2010:SnMgnCnMnAccNS, PriyEtAl2011:QdrMMgnCnMnAcNSEES} typically predict $\epsilon$ only to an order of magnitude, the error in $\epsilon$ should be sufficient to test such models. A $\sim 30\%$ error in $I_{zz}$ is of similar magnitude to the uncertainties of existing measurements of $I_{zz}$ for PSR~J0737$-$3039A: \citet{MiaoEtAl2022:MmInPJ07LIGNI} found errors of $\sim 10$~--~$20\%$ after assuming an equation of state; without that assumption the errors in $I_{zz}$ increase by a factor of $\sim 4$. In comparison, no explicit assumptions regarding the neutron star equation of state are required for the framework of Section~\ref{sec:framework} or the results presented in Section~\ref{sec:results}. Estimates of $m_p$ using this framework could also be compared to existing estimates for known pulsars, and serve as an independent verification of such measurements.

\section{Assumptions}
\label{sec:assumptions}

This section elaborates on some of the assumptions made in this paper.

We assume that continuous waves will eventually be detectable by contemporary and/or future gravitational wave detectors. This remains uncertain. The lowest estimates of $\epsilon \sim 10^{-11}$, from magnetic field distortions~\citep{BonaGour1996:GrvWPlEmMgFIDst}, are small enough that only a few detections, at best, may be expected in the next generation of detectors~\citep{Pitk2011:PrObCntGrvWKPl}.
Stars with stronger magnetic fields ($B \sim 10^{15}$~Gauss) lead to larger ellipticities $\epsilon \gtrsim 10^{-6}$~\citep{Cutl2002:GrvWvNtSLTrdBFl, HaskEtAl2008:MdlMgnDfNtSt}, which are more likely detectable by the current generation of gravitational wave detectors. It is also possible that the internal magnetic fields of neutron stars could be stronger than their surface fields \citep{Lasky2015, Bransgrove2017}. Only a small fraction of the known pulsars are likely to be detectable, particularly if the fraction of rotational kinetic energy emitted in gravitational waves is small~\citep{Pitk2011:PrObCntGrvWKPl}. That said, the $\mathcal{O}(10^3)$ known pulsars may not be representative of the $\mathcal{O}(10^8)$ population of galactic neutron stars~\citep{Palomba2005, Knispel2008, Wade2012, Cieslar2021, ReedEtAl2021:MdGlNSPplUCnGrvS}, which could include a sub-population of strong gravitational wave emitters or ``gravitars''.

We assume that Eq.~\eqref{eq:energy_balance} is a reasonable starting point for modelling the energy radiated by neutron stars. It is generally assumed that electromagnetic radiation from known pulsars is predominantly dipolar, that neutron stars are triaxial rotors, and that continuous wave radiation would be predominantly quadrupolar~\citep{OstrGunn1969:NtrPlsITh}. These assumptions predict $3 < n < 5$, which is at odds with measured braking indices from radio pulsars, which span orders of magnitude outside this range~\citep{JohnGall1999:PlBrkInRvs, ZhanXie2012:WDBrkInPlSRnM100Mll, Lower2021}. Modified models for pulsar emission have been proposed to explain the observed braking indices~\citep{AlleHorv1997:ImpCnsObBrIYPSp, Mela1997:SpObRtCrrOMgn, XuQiao2001:PlBrkInTEmsMd, AlvaCarr2004:MnpPlSpn, YueEtAl2007:WCBrkIndTUAbNtPln, HamiEtAl2015:BrkInIslPl}, including the addition of gravitational waves \citep{deArEtAl2016:GrvWvPlMsrBrI, ChisEtAl2018:AnlAppStPlSp}. On the other hand, accurate phase-connected measurement of the second time derivative of the rotation frequency, needed to compute $n$, is challenging~\citep[cf.][]{JohnGall1999:PlBrkInRvs}. Existing measurements of $n$ are generally dominated by timing noise~\citep{HobbEtAl2004:LngTmObs374Pl, HobbEtAl2010:AnlTmIrr366Pl}, with some possible exceptions~\citep{ArchEtAl2016:HgBrkInPls,LaskEtAl2017:BrInMllMgn}.

Prospects for an accurate determination of $n$ may be improved by a continuous wave detection. Since gravitational wave detectors are omni-directional, gravitational wave data is recorded at a much higher duty cycle ($\gtrsim 70\%$; \citealt{KAGREtAl2020:PrObLclGrvTrALAVK}) than typical pulsar observing cadences (e.g.\ $\lesssim 35$~hours/year, or $\sim 0.4\%$; \citealt{Lam2018:OpPTAObsCSnLwGrvS}). Although $\ddot{f}$ would not be resolved in all-sky continuous wave surveys, which sacrifice phase resolution in favour of reduced computational cost, a candidate from such a survey would then be followed up using a fully phase-coherent search in a restricted parameter space around the candidate. Such a search would be computationally inexpensive, and would be able to resolve $\ddot{f}$ to a resolution $\sim \mathcal{D} / T^{7/2}$ [cf.\ Eq.~\eqref{eq:spindown_cov}].

Pulse emission from radio pulsars is subject to various noise sources~\citep{deKoAnze1993:SmAnlPrNBnXPls, ArchEtAl2008:RNsAnmXPlTmRsd, LentEtAl2016:SNSysStPrFIntPTADRl, GoncEtAl2021:IdnMtgNSrPrPlTDS}. Individual pulses from radio pulsars are highly variable, and achieve a stable pulse profile once averaged over many cycles~\citep{Kram2005:Pls}.
It remains to be seen whether detected continuous wave signals will suffer from comparable noise sources~\citep{AshtEtAl2015:ETNTrNrrChSCnGrvWP, Suvorova2016, Myers2021a, Myers2021b}. Gravitational waves, being weakly interacting, are not perturbed by matter along the line of sight to the star, unless the signal is lensed~\citep{BiesHari2021:GrvLnCnGrvWv}. Furthermore, unlike electromagnetic emission, which arises from the outer surface and plasma of the star where a small fraction of the neutron star mass is located, gravitational wave emission arises from the rotating mass quadrupole. Physical processes within the star would therefore need considerable energy to perturb the star's rotation, and hence the continuous wave signal, in order to achieve a level of noisiness comparable to the timing noise observed in radio pulsars. Superfluid vortices within the star's interior are suspected of being responsible for glitches~\citep{EysdMela2008:GrvRdtPlGl, WarsMela2011:GrsMdPlGlt, HoEtAl2015:PnSprMsrMsUPGl, HaskEtAl2020:TrPnSprNtSPGRcv, LIGOEtAl2021:CnsLODGrvEmDRGlPPJ0}, which do perturb the star's rotation and may affect the detectability of continuous waves~\citep{AshtEtAl2018:SmcGlCntSrMt}. Glitches, however, are observed as discrete events even in prolifically glitching pulsars~\citep{HoEtAl2020:RtBGltNTmGlPJ05}, and the extent to which they could constitute a persistent noise source in detected continuous wave signals is unknown \citep{Yim2022}. Should a continuous wave detection yield a braking index $n \notin [3, 5]$, this might represent stronger evidence for new physics than current radio pulsar observations.

Finally, we assume that the neutron star also emits electromagnetic radiation, and that a measurement of its distance can be obtained. Neutron stars are expected to possess magnetic fields~\citep{Reis2001:MgnFlNtStOvr} and will therefore (provided that the field is not symmetric about the star's rotation axis) emit electromagnetic radiation. Continuous waves may first be detected either from a known pulsar, or as a gravitational-wave-only candidate from an all-sky survey; in either case, observations over $T \gtrsim 1$~year would give the sky position of the source to sub-arcsecond resolution~\citep{Riles2013, Riles2017}. This would facilitate further electromagnetic observations to either detect an electromagnetic counterpart, or else refine the properties of one already known. Other methods exist to measure stellar distances in the absence of a radio pulsar detection. Parallax may be used to determine the distances to nearby neutron stars~\citep{Seto2005:GrvWAstRRtNSEsTDs,WaltEtAl2010:RvsPrIsNtSRJ18UHI}, while distances to neutron stars in supernova remnants may be inferred through observation of the radial velocities of the surrounding ejecta~\citep{ReedEtAl1995:ThrStCssSpRISpS}; these methods yield uncertainties comparable to those of radio pulsar distances.

\section{Summary}
\label{sec:summary}

This paper presents a first analysis of what properties may be inferred from a neutron star radiating both electromagnetic radiation and detectable continuous gravitational waves. We develop a simple Fisher information-based parameter estimation framework, which gives estimates of the uncertainties for the stellar moment of inertia $I_{zz}$, the equatorial ellipticity $\epsilon$, and the component of the magnetic dipole moment perpendicular to the rotation axis, $m_p$.
This framework does not assume a particular neutron star equation of state, and only requires a detection of continuous waves and a measured distance to the star. Monte Carlo simulations over a parameter space of gravitational wave frequency and its derivatives, typical of that covered by all-sky continuous wave surveys, demonstrate that the relative errors in $I_{zz}$, $\epsilon$, and $m_p$ asymptote to 14--27\%, assuming a 20\% error in distance. The observation time required to reach these limits may be as little as a few years for a strong continuous wave signal detected in an all-sky survey; for weaker signals, such as those potentially associated with known pulsars, longer observations may be required. We also find that the errors of the inferred parameters tend to be smaller when the braking index is close to $n \approx 4$, when $f$ is smaller, and when $|\dot{f}|$ is larger.

Future work could extend the assumed neutron star energy loss model of Eq.~\eqref{eq:energy_balance} to include a more complex model of the neutron star magnetic field, e.g.~\citet{LaskMela2013:TlTMgFNtSTGrvWSgn}. Recasting the parameter inference in a Bayesian framework would be advantageous for many reasons, including the avoidance of coordinate singularities present in the Fisher matrix approach, and the use of prior information from other gravitational wave and electromagnetic observations of neutron stars.

\section*{Acknowledgements}

We thank Lucy Strang, Lilli Sun, Matthew Bailes, and Ryan Shannon for helpful discussions. This research is supported by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav) through project number CE170100004.

\section*{Data Availability}

The data underlying this article will be shared on reasonable request to the corresponding author(s).

\bibliographystyle{mnras}
Gravitational waves from neutron star mergers are detectable by current ground-based observatories, but only for a short time at the end of their life cycle when their stellar structure is tidally deformed. Continuous gravitational waves are long-lived, quasi-monochromatic gravitational waves emitted by isolated spinning neutron stars that are deformed asymmetrically about their rotation axes. The deformation may be caused by a number of mechanisms, including the neutron star's magnetic field~\citep{ZimmSzed1979:GrvWRtPrRBSMAppPl, BonaGour1996:GrvWPlEmMgFIDst}, magnetically-confined mountains~\citep{MelaPayn2005:GrvRdAcMlPMgnCM}, or electron capture gradients~\citep{UshoEtAl2000:DfrAcNtSCGrvWEm}. The stellar structure is expected to be in a long-lived stable equilibrium, and continuous gravitational waves (hereafter abbreviated to ``continuous waves'') could provide insights into this ground state of nuclear matter complementary to observations of binary neutron star mergers. While continuous waves have not yet been detected, prospects for a first detection continue to improve with more sensitive gravitational wave detectors. In addition, the data analysis techniques used to search for continuous wave signals continue to be refined; for reviews see \citet{Riles2013, Riles2017, Tenorio2021}. Searches for continuous waves cover a wide variety of parameters and include: known radio and X-ray pulsars~\citep{LIGOEtAl2020:GrvCnsEqElMlP, LIGOEtAl2021:DBSpLCnsGrvWEnYPPJ0, LIGOEtAl2021:CnsLODGrvEmDRGlPPJ0, LIGOEtAl2022:SCnGrvW20AcMllXPlOLD, LIGOEtAl2022:NrSCnLngTrGrvWKPLTOR, LIGOEtAl2022:SrGrvWKPTHrmSTLIObR}, likely neutron stars in supernova remnants~\citep{LIGOEtAl2021:SrCntGrvWYSpRETObRALV, LIGOEtAl2022:SEOLDCntGrvWCsVJSpRm}, and all-sky surveys for undiscovered neutron stars~\citep{LIGOVirg2021:ASEOLDCntGrvSgUnNtSBS, LIGOEtAl2021:AlSCntGrvWIsNtSEOLD, LIGOEtAl2022:AlSrGrvWEmsSBCASpBHLOD, CovaEtAl2022:CnsRMnMllNtSBSy}. In this paper, we study what macroscopic properties of neutron stars might be inferred using continuous waves. \citet{Sieniawska2021} have previously studied this question under the assumption that the neutron star loses rotational kinetic energy purely through gravitational wave radiation. Here we consider a more general model where energy is also lost through electromagnetic radiation, assuming the star possesses a dipole magnetic field. The population dynamics of neutron stars losing energy through both electromagnetic and gravitational radiation have previously been studied by \citet{Palomba2005, Knispel2008, Wade2012, Cieslar2021, ReedEtAl2021:MdGlNSPplUCnGrvS}. This paper is an initial attempt at studying the parameter estimation problem for such systems. This paper is organised as follows. Section~\ref{sec:background} introduces background information on continuous waves and their detection. Section~\ref{sec:framework} introduces the theoretical framework used to infer properties of the neutron star. Section~\ref{sec:MC} discusses how Monte Carlo simulations are used to estimate the errors of the estimated properties. Section~\ref{sec:results} presents the results of the Monte Carlo simulations. Section~\ref{sec:assumptions} considers some of the caveats and assumptions, and Section~\ref{sec:summary} summarises the results. \section{Background} \label{sec:background} This section presents background information relevant to the paper. 
Subsections \ref{sec:CW_signal} and \ref{sec:error_theory} introduce the basics of the signal model and parameter estimation techniques respectively for continuous wave searches.. \subsection{Continuous wave signal model} \label{sec:CW_signal} A continuous wave induces a strain $h(t)$ in a gravitational wave detector given by~\citep{Jaranowski1998}: \begin{equation} h(t) = \sum_{i=1}^4 \mathcal{A}_i h_i(t; \vec\lambda) \,. \label{eq:hoft} \end{equation} The four amplitudes $\{\mathcal{A}_i\}$ are functions of: the characteristic strain amplitude $h_0$, the inclination angle $\iota$ of the neutron star's angular momentum to the line of sight, a polarisation angle $\psi$ which fixes the principal axes of the two gravitational wave polarisations (``plus'' and ``cross''), and an arbitrary phase $\phi_0$ at a reference time $t_0$. Additional parameters, represented by $\vec\lambda$ in Eq.~\eqref{eq:hoft}, modify the phase of the signal; they include the star's sky position and, if the star is in a binary system, its orbital parameters. The characteristic strain amplitude $h_0$ of a continuous wave signal is~\citep{Jaranowski1998}: \begin{gather} h_0 = \frac{4\pi^2G}{c^4}\frac{I_{zz} \epsilon f^2}{r} \,, \label{eq:h0} \end{gather} where $r$ is the distance to the neutron star, $f$ is the gravitational wave frequency, $G$ is the gravitational constant, and $c$ is the speed of light. We model the neutron star as a tri-axial rotor~\citep{ZimmSzed1979:GrvWRtPrRBSMAppPl} with principal moments of inertia $(I_{xx}, I_{yy}, I_{zz})$, where the $z$ axis points along the star's symmetry rotation axis; the equatorial ellipticity $\epsilon = |I_{xx} - I_{yy}| / I_{zz}$ characterises the degree of non-axisymmetrical deformation of the star. For a tri-axial rotor the gravitational wave frequency is conventionally assumed to be twice the star's rotational frequency~\citep{VanD2005:GrvWSpNnxFPrNtS, Sieniawska2021}. The radiation of rotational kinetic energy away from the neutron star via continuous waves, and possibly electromagnetic radiation, causes the star to spin down. We model the evolution of the gravitational wave frequency as a second-order Taylor expansion~\citep{Jaranowski1998}: \begin{gather} f(t) = f + \dot{f} t + \frac{1}{2}\ddot{f} t^2 \, , \label{eq:signal_f} \end{gather} where $f$ is the gravitational wave frequency, and $\dot{f}$ and $\ddot{f}$ its first and second time derivatives respectively, and all three parameters are defined at $t=t_0$. These parameters enter into Eq.~\eqref{eq:hoft} as phase parameters represented by $\vec\lambda$. A useful quantification of the spin-down behaviour of a neutron star is its braking index~\citep{MancEtAl1985:ScMsrPlBrkIn}: \begin{gather} n = \frac{f\ddot{f}}{\dot{f}^2} \, . \label{eq:braking_index} \end{gather} If a neutron star is spinning down purely through the emission of gravitational waves from a (mass-type) quadrupole moment, as given by Eq.~\eqref{eq:h0}, its braking index is $n=5$; alternately, if the neutron star spins down only through electromagnetic radiation, its braking index is $n=3$~\citep{OstrGunn1969:NtrPlsITh} but cf. \citep{Mela1997:SpObRtCrrOMgn}. A third possibility, which we do not consider in this paper, is the emission of gravitational waves from a current-type quadrupole moment due to $r$-modes~\citep{Ande1998:NClUnsMdRtRltS, LindEtAl1998:GrvRdInsHYnNtS}, for which one has $n=7$. 
\subsection{Continuous wave parameter estimation}\label{sec:error_theory} Detection of a continuous wave signal would measure its amplitude and phase parameters to some degree of uncertainty, assuming that the true continuous wave signal does not deviate appreciably from the model described in Section~\ref{sec:CW_signal}. Bayesian inference is widely regarded as a robust method of inferring the parameters of a signal model given a data-set and assumed priors on the parameters; for its application to continuous waves see \citet{DupuWoan2005:ByEstPlPrGrvWD, PitkEtAl2017:NSmCTrSrCntGrvWP}. As an initial attempt to study the errors in the parameters measured by a continuous wave detection, we instead adopt a simpler approach using the Fisher information matrix. While this approach is commonly used \citep{Sieniawska2021, Jaranowski1999}, the Fisher information matrix is strictly valid only in the case of high signal-to-noise ratios (a criterion for which is detailed in \citealt{Vallisneri2008}), which may not necessarily be the case for a first continuous wave detection. Further weaknesses of the Fisher information matrix approach, such as the possibility of singular or ill-conditioned Fisher information matrices, are outlined in \citet{Vallisneri2008}. Notwithstanding these concerns, we use the Fisher information matrix because its relative computational simplicity lets us arrive at a quantitative picture of parameter inference for continuous wave signals. We now outline how Fisher information matrices can be used to approximate the errors of the continuous wave parameters. Data analysis techniques that seek to identify continuous waves often quantify how closely an observed signal matches a template of possible signals. An intuitive picture of how the Fisher information matrix works is that it quantifies the maximal possible ``distance'' in the parameter space that a true signal could be from the ``nearest'' template. That is, since the template bank forms a ``grid'' (which may not be uniform) in the parameter space, the maximal error of the parameter measurements comes from the size of the ``gaps'' in the template bank. As will become clear in Section~\ref{sec:framework}, we are particularly interested in estimating errors in the three parameters $f$, $\dot{f}$, and $\ddot{f}$ of Eq.~\eqref{eq:signal_f} which govern the gravitational wave frequency evolution. To construct the Fisher information matrix for these parameters, we start with the phase of the continuous wave signal: \begin{align} \phi_{\text{spin}}(t) &= 2 \pi \int_0^{t} f(t') dt'\, , \label{eq:GW phase temp}\\ &= 2\pi \left[f t + \frac{1}{2}\dot{f} t^2 + \frac{1}{6} \ddot{f} t^3 \right] \, , \label{eq:GW phase} \end{align} where Eq.~\eqref{eq:signal_f} has been substituted into Eq.~\eqref{eq:GW phase temp}. We next compute the parameter-space metric~\citep{BalaEtAl1996:GrvWClsBDStMCEsPr,Owen1996:STmGrvWInsBnCTmS} which quantifies the notion of ``distance'' between the true signal and a template. It is necessary to first define the time average operator: \begin{gather} \Big \langle x(t) \Big \rangle = \frac{1}{T} \int_{-T/2}^{T/2} x(t) dt \, , \end{gather} where $x(t)$ is an arbitrary function, and $T$ is the time span of the gravitational wave observation.
The parameter-space metric over $f$, $\dot{f}$, and $\ddot{f}$ is then given by~\citep{BradEtAl1998:SrcPrdSrLI,Prix2007} \begin{gather} \label{eq:template metric} g_{i j} = \Bigg\langle \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(i)}} \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(j)}} \Bigg\rangle - \Bigg \langle \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(i)}} \Bigg \rangle \Bigg \langle \frac{\partial \phi_{\text{spin}}(t)}{\partial f^{(j)}} \Bigg \rangle \, , \end{gather} with $i, j \in \{0, 1, 2\}$, $f^{(0)} = f$, $f^{(1)} = \dot{f}$, and $f^{(2)} = \ddot{f}$. The covariance matrix for $f$, $\dot{f}$, and $\ddot{f}$ is given by the inverse Fisher information matrix $\Gamma^{ij}$~\citep{Vallisneri2008}, which, in turn, is defined in terms of the parameter-space metric~\citep{Prix2007}: \begin{align} \label{eq:Fisher matrix} \Sigma(f, \dot{f}, \ddot{f}) &= \Gamma^{ij} \\ &= \frac{g^{ij}}{\rho^2} \,. \end{align} Here $\rho^2$ is the signal-to-noise ratio assuming an optimal match between the true signal and the best-fit template. For simplicity, in this paper we assume an expression for $\rho^2$ averaged over $\cos\iota$, $\psi$, and sky position~\citep{Prix2011}: \begin{align} \rho^2 &= \frac{4}{25}\frac{h_0^2 T}{S_h(f)} \, , \label{eq:rho2_using_h0} \\ &= \frac{4}{25}\frac{T}{\mathcal{D}^2} \, , \label{eq:rho2_using_depth} \end{align} where $S_h$ is the (single-sided) power spectral density of the strain noise in the gravitational wave detector, and Eq.~\eqref{eq:rho2_using_depth} defines the ``sensitivity depth''~\citep{BehnEtAl2015:PstMtUSCnGrvSGlC, DreiEtAl2018:FAcSnsEsCntSr}: \begin{equation} \mathcal{D} = \frac{ \sqrt{S_h(f)} }{ h_0 } \, . \label{eq:sens-depth} \end{equation} We assume, again for simplicity, that the gravitational wave detector network is operational at 100\% duty cycle; in practice duty cycles of $\gtrsim 70\%$ are achieved for current detectors, but this is expected to improve over time~\citep{KAGREtAl2020:PrObLclGrvTrALAVK}. Evaluating Eq.~\eqref{eq:Fisher matrix} using Eqs.~\eqref{eq:GW phase} and~\eqref{eq:template metric} gives the covariance matrix: \begin{align} \Sigma(f, \dot{f}, \ddot{f}) &= \frac{ \mathcal{D}^2 }{ \pi^2 } \begin{pmatrix} \frac{ 1875 }{ 16 T^3 } & 0 & -\frac{ 7875 }{ 2 T^5 } \\ 0 & \frac{ 1125 }{ T^5} & 0 \\ -\frac{ 7875 }{ 2 T^5 } & 0 & \frac{ 157500 }{ T^7 } \end{pmatrix} \, . \label{eq:spindown_cov} \end{align} Now considering the four amplitude parameters $h_0$, $\cos\iota$, $\psi$, $\phi_0$, only $h_0$ is potentially interesting for inferring neutron star properties, being a function of $I_{zz}$ and $\epsilon$ [Eq.~\eqref{eq:h0}]. The error in the $h_0$ measurement may be derived from the parameter-space metric over the amplitude parameters $\{\mathcal{A}_i\}$~\citep{Prix2007}, as outlined in \citet[Section 3.2]{Prix2011}. Averaging over the sky position of the neutron star~\citep[Eq.~122]{Prix2011}, as well as over $\psi$, gives: \begin{align} \label{eq:h0_error} \sigma(h_0) &= \frac{ a \mathcal{D} h_0 }{ \sqrt{T} } \frac{ \sqrt{ b + \xi^2 } }{ 1 - \xi^2 } \,, \\ \xi &\equiv \cos\iota \,, \\ a &= 2 \sqrt{\frac{6}{301}\left(344 - 43\sqrt{2} - 8\sqrt{86}\right)} \approx 4.08 \,, \\ b &= \frac{43 \left(8 - 8\sqrt{2} - \sqrt{86}\right)}{43\sqrt{2} + 8\sqrt{86} - 344} \approx 2.59 \,.
\end{align} Note that Eq.~\eqref{eq:h0_error} becomes infinite at $\xi = \pm 1$, due to a singularity in the coordinate transform between $\{\mathcal{A}_i\}$ and $\{h_0, \xi, \psi, \phi_0\}$; for this reason Eq.~\eqref{eq:h0_error} cannot be analytically averaged over $\xi$ with a prior range that includes $\pm 1$. \section{Parameter estimation framework} \label{sec:framework} This section develops a framework for inferring three neutron star properties: its principal moment of inertia ($I_{zz}$), its ellipticity ($\epsilon$), and the component of the magnetic dipole moment perpendicular to its rotation axis ($m_p$, hereafter abbreviated to ``perpendicular magnetic moment''). It assumes that the neutron star is losing rotational kinetic energy (and hence spinning down) through both magnetic dipole radiation and gravitational wave (mass-type) quadrupolar radiation, and that no other mechanisms dissipate energy from the neutron star. This framework relies on the detection of a continuous wave signal to measure the frequency and spin-down parameters ($f$, $\dot{f}$, and $\ddot{f}$), and the characteristic strain amplitude ($h_0$). It also assumes that a measurement of the distance to the neutron star ($r$) is available. Balancing the spin-down power with the luminosity of electromagnetic and gravitational radiation gives: \begin{gather} \left(\frac{dE}{dt}\right)_{\text{EM}} + \left(\frac{dE}{dt}\right)_{\text{GW}} = -\left(\frac{dE}{dt}\right)_{\text{rot}} \, . \label{eq:energy_balance} \end{gather} The ellipticity of a neutron star is conventionally assumed to be relatively small~\citep{Sieniawska2021}, so the star is very close to spherical. The rotational kinetic energy of the star is then taken to be that of a rotating sphere~\citep{WettEtAl2008:SrGrvWvCssLI}: \begin{gather} \Bigg(\frac{dE}{dt}\Bigg)_{\text{rot}} = \pi^2 I_{zz} f \dot{f} \, . \label{eq:rot_energy} \end{gather} The luminosity of a rotating magnetic dipole is~\citep{OstrGunn1969:NtrPlsITh, CondRans2016:EssRdAst} \begin{gather} \left(\frac{dE}{dt}\right)_{\text{EM}} = \frac{2m_p^2}{3c^3 \mu_0} ( \pi f )^4 \, , \label{eq:EM_energy} \end{gather} where $\mu_0$ is the vacuum permeability. Note that this is given in terms of the gravitational wave frequency, which is twice the rotational frequency, as discussed in Section~\ref{sec:CW_signal}. The gravitational wave luminosity of a (mass-type) quadrupole is~\citep{OstrGunn1969:NtrPlsITh, BlanEtAl2001:GrvRdThLgPrp} \begin{gather} \Bigg( \frac{dE}{dt}\Bigg)_{\text{GW}} = \frac{32G}{5c^5}I_{zz}^2 \epsilon^2 (\pi f)^6 \, . \label{eq:GW_energy} \end{gather} In order to simplify the expressions, we introduce the constants \begin{gather} K_{\text{EM}} = \frac{2 \pi^2}{3c^3 \mu_0} \, , \qquad K_{\text{GW}} = \frac{32 G \pi^4}{5c^5} \, . \end{gather} We then substitute Eqs.~\eqref{eq:rot_energy}~--~\eqref{eq:GW_energy} into Eq.~\eqref{eq:energy_balance} and rearrange to get: \begin{gather} \dot{f} = -\frac{K_{\text{EM}} m_p^2 f^3}{I_{zz}} - K_{\text{GW}} I_{zz} \epsilon^2 f^5 \, . \label{eq:simul_fd} \end{gather} Differentiating Eq.~\eqref{eq:simul_fd} with respect to time gives: \begin{gather} \ddot{f} = -\frac{3 K_{\text{EM}} m_p^2 f^2 \dot{f}}{I_{zz}} - 5 K_{\text{GW}} I_{zz} \epsilon^2 f^4 \dot{f} \, . \label{eq:simul_fdd} \end{gather} Given that $\ddot{f}$ is measured as a separate parameter of the continuous wave signal model [Eq.~\eqref{eq:signal_f}], Eq.~\eqref{eq:simul_fdd} provides an additional constraint independent of Eq.~\eqref{eq:simul_fd}.
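As a numerical sanity check of Eqs.~\eqref{eq:simul_fd} and~\eqref{eq:simul_fdd}, the forward spin-down model can be evaluated directly. The Python sketch below is illustrative only; the fiducial values are assumptions matching the example signal of Section~\ref{sec:results}, and the resulting braking index lies strictly between 3 and 5, as expected for mixed emission:
\begin{verbatim}
import math

G, c = 6.674e-11, 2.998e8    # SI units
mu0 = 4e-7 * math.pi         # vacuum permeability [N A^-2]
K_EM = 2 * math.pi**2 / (3 * c**3 * mu0)
K_GW = 32 * G * math.pi**4 / (5 * c**5)

def spindown(I_zz, eps, m_p, f):
    # Eqs. (simul_fd) and (simul_fdd)
    fdot = -K_EM * m_p**2 * f**3 / I_zz - K_GW * I_zz * eps**2 * f**5
    fddot = (-3 * K_EM * m_p**2 * f**2 / I_zz
             - 5 * K_GW * I_zz * eps**2 * f**4) * fdot
    return fdot, fddot

# Assumed fiducial emitter (values from the example in the Results
# section): I_zz = 2e38 kg m^2, eps = 3.8e-7, m_p = 2.3e19 T m^3.
fdot, fddot = spindown(2e38, 3.8e-7, 2.3e19, 1000.0)
print(fdot, 1000.0 * fddot / fdot**2)  # fdot ~ -1e-9 Hz/s, n ~ 4
\end{verbatim}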
Equations~\eqref{eq:simul_fd} and~\eqref{eq:simul_fdd} depend on three unknowns: $I_{zz}$, $\epsilon$, and $m_p$. With the addition of Eq.~\eqref{eq:h0}, which also depends on $I_{zz}$ and $\epsilon$, we have three equations constraining three unknowns, which may now be solved for: \begin{align} I_{zz} &= \frac{ K_{\text{GW}} c^8 r^2 h_0^2 f }{ 8\pi^4 G^2 \dot{f} ( 3 - n )} \, , \label{eq:Izz} \\ \epsilon &= \frac{ 2\pi^2 G \dot{f} ( 3 - n ) }{ K_{\text{GW}}c^4 r h_0 f^3 } \, , \label{eq:epsilon} \\ m_p &= \frac{ c^4 r h_0 }{ 4\pi^2 G f } \sqrt{ \frac{ K_{\text{GW}} ( n - 5 ) }{ K_{\text{EM}}(3 - n ) } } \, , \label{eq:mp} \end{align} where $n$ is the braking index of Eq.~\eqref{eq:braking_index}. Equations~\eqref{eq:Izz}~--~\eqref{eq:mp} remain valid provided that $3 < n < 5$, which is consistent with the power balance assumed by Eq.~\eqref{eq:energy_balance}. As discussed in Section~\ref{sec:CW_signal}, braking indices of 3 or 5 correspond to pure electromagnetic or gravitational wave radiation respectively; in either case, Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp} are no longer applicable. A combination of electromagnetic and gravitational wave radiation yields a braking index between 3 and 5; the loss of rotational kinetic energy through \emph{both} gravitational wave \emph{and} electromagnetic radiation is therefore a fundamental requirement of the framework outlined here. \citet{Sieniawska2021} show that, for a neutron star only emitting continuous waves and not electromagnetic radiation, degeneracies prevent direct inference of the neutron star properties without a measurement of $r$, which is unlikely to be measurable without an electromagnetic counterpart. Few other techniques exist to directly measure $I_{zz}$. \citet{Damour1988} propose a method which requires higher-order relativistic corrections to the periastron advance to be measurable, which is possible only for very rapidly-spinning binary pulsars. To date the method has only been applicable to the double pulsar system PSR~J0737$-$3039~\citep{Bejger2005, WorlEtAl2008:NclCnsMmInNtS, SteiEtAl2015:UNtSObsDtCThMITDfr, MiaoEtAl2022:MmInPJ07LIGNI}. Note that~\citet{MiaoEtAl2022:MmInPJ07LIGNI} assumes a neutron star equation of state, whereas the framework derived here does not. Other methods rely on separate measurements of the neutron star mass and radius through either electromagnetic observations and/or detection of gravitational waves from binary neutron star mergers~\citep{SteiEtAl2015:UNtSObsDtCThMITDfr, MiaoEtAl2022:MmInPJ07LIGNI}. It is difficult, however, to measure both properties simultaneously for one and the same neutron star~\citep{MillEtAl2019:PJ0MRdNDImpPrpNtSM}. No method exists for directly measuring $\epsilon$ other than through a continuous wave detection. While $m_p$ (or equivalently the surface magnetic field strength $B$) is routinely inferred by assuming pure magnetic dipole radiation from known pulsars~\citep{Kram2005:Pls}, a measurement of $m_p$ from a mixed electromagnetic/gravitational wave pulsar would be of interest, as it would provide an independent verification of the existing measurements or provide insight into neutron stars with different energy loss mechanisms.
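A minimal Python sketch of the inversion of Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp} is given below; this is our illustration, and the numbers in the usage example are assumptions consistent with the fiducial signal of Section~\ref{sec:results}, which round-trip to the input properties:
\begin{verbatim}
import math

G, c = 6.674e-11, 2.998e8
mu0 = 4e-7 * math.pi
K_EM = 2 * math.pi**2 / (3 * c**3 * mu0)
K_GW = 32 * G * math.pi**4 / (5 * c**5)
KPC = 3.086e19  # [m]

def infer_properties(f, fdot, fddot, h0, r):
    # Eqs. (Izz)--(mp); valid only for braking indices 3 < n < 5.
    n = f * fddot / fdot**2
    if not 3.0 < n < 5.0:
        raise ValueError("braking index outside (3, 5)")
    I_zz = (K_GW * c**8 * r**2 * h0**2 * f
            / (8 * math.pi**4 * G**2 * fdot * (3.0 - n)))
    eps = (2 * math.pi**2 * G * fdot * (3.0 - n)
           / (K_GW * c**4 * r * h0 * f**3))
    m_p = (c**4 * r * h0 / (4 * math.pi**2 * G * f)
           * math.sqrt(K_GW * (n - 5.0) / (K_EM * (3.0 - n))))
    return I_zz, eps, m_p

# n = 4 implies fddot = n * fdot^2 / f = 4e-21 Hz/s^2 here.
print(infer_properties(1000.0, -1e-9, 4e-21, 8.1e-25, KPC))
# -> approximately (2.0e38 kg m^2, 3.8e-7, 2.3e19 T m^3)
\end{verbatim}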
The errors in the inferred neutron star properties ($I_{zz}$, $\epsilon$, $m_p$) have the following dependencies: \begin{itemize} \item The errors of the inferred properties ($\Delta I_{zz}$, $\Delta \epsilon$, and $\Delta m_p$) depend on $\Delta f$, $\Delta \dot{f}$, $\Delta \ddot{f}$, $\Delta h_0$, and $\Delta r$ [Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}]; \item The errors of the spin-down parameters ($\Delta f$, $\Delta \dot{f}$, $\Delta \ddot{f}$) depend on $T$ and $\mathcal{D}$ [Eq.~\eqref{eq:Fisher matrix}]; \item The error $\Delta h_0$ depends on $T$, $\mathcal{D}$, and $h_0$ [or equivalently $S_h$; Eq.~\eqref{eq:sens-depth}] and $\xi$ [Eq.~\eqref{eq:h0_error}]; \item The error $\Delta r$ is independent of the other parameters. \end{itemize} Therefore, we see that the errors in $I_{zz}$, $\epsilon$, $m_p$ depend entirely on: the observation time ($T$); the strength of the continuous wave signal relative to the detector noise ($\mathcal{D}$, $h_0$); the ratio of gravitational wave ``plus'' and ``cross'' polarisations ($\xi$); and the uncertainty in the distance to the star ($\Delta r$). An estimate of the relative errors in $I_{zz}$, $\epsilon$, and $m_p$ and their dependence on the parameters $\Lambda = \{ f, \dot{f}, \ddot{f}, h_0, r \}$ may be arrived at through differential error analysis~\citep{BenkEtAl2018:EPrpCMAApAdDsdCn}: \begin{equation} \label{eq:rel-err-dea} \frac{ \sigma(I_{zz})^2 }{ I_{zz}^2 } = \frac{1}{ I_{zz}^2 } \sum_{x, y \in \Lambda} \left(\frac{ \partial I_{zz} }{ \partial x }\right) \left(\frac{ \partial I_{zz} }{ \partial y }\right) \begin{cases} \sigma(x)^2 & x = y \,, \\ \Sigma(x, y) & x \ne y \,, \end{cases} \end{equation} and similarly for $\sigma(\epsilon)^2 / \epsilon^2$ and $\sigma(m_p)^2 / m_p^2$, where $\sigma(x)$ is the standard deviation of the quantity $x$ and $\Sigma(x,y)$ is the covariance between the quantities $x$ and $y$. This analysis yields, to third order in $1/T$: \begin{align} \label{eq:Izz-rel-err-dea} \frac{ \sigma(I_{zz})^2 }{ I_{zz}^2 } &= \frac{ 4 \sigma(r)^2 }{ r^2 } + \frac{ 4 \sigma(h_0)^2 }{ h_0^2 } + \frac{16875 \mathcal{D}^2}{16 \pi ^2 f^2 (n-3)^2 T^3} \,, \\ \label{eq:eps-rel-err-dea} \frac{ \sigma(\epsilon)^2 }{ \epsilon^2 } &= \frac{ \sigma(r)^2 }{ r^2 } + \frac{ \sigma(h_0)^2 }{ h_0^2 } + \frac{1875 \mathcal{D}^2 (9 - 2n)^2}{16 \pi ^2 f^2 (n-3)^2 T^3} \,, \\ \label{eq:mp-rel-err-dea} \frac{ \sigma(m_p)^2 }{ m_p^2 } &= \frac{ \sigma(r)^2 }{ r^2 } + \frac{ \sigma(h_0)^2 }{ h_0^2 } + \frac{1875 \mathcal{D}^2 (n^2 - 9n + 15)^2}{16 \pi ^2 f^2 (n-5)^2 (n-3)^2 T^3} \, . \end{align} The leading-order terms of the relative errors in $I_{zz}$, $\epsilon$, and $m_p$ are the relative errors in $h_0$ and $r$. Note that $\sigma (r) / r$ is independent of $T$, $\sigma(h_0) / h_0$ scales with $T^{-1/2}$ [Eq.~\eqref{eq:h0_error}], and the remaining terms in Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea} scale with $T^{-3}$ or smaller.
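To make these relative magnitudes concrete, the truncated $I_{zz}$ error budget can be tabulated against $T$. The Python sketch below is illustrative only, with assumed values $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, $f = \SI{1000}{Hz}$, $n = 4$, $\xi = 0$, and $\sigma(r)/r = 0.2$:
\begin{verbatim}
import math

A_H0, B_H0 = 4.08, 2.59   # constants a, b of Eq. (h0_error)
YEAR = 3.156e7            # one year [s]

def rel_err_Izz(sr_r, D, f, n, T, xi=0.0):
    # Relative error sigma(h0)/h0 from Eq. (h0_error):
    sh0_h0 = (A_H0 * D * math.sqrt(B_H0 + xi**2)
              / ((1 - xi**2) * math.sqrt(T)))
    # T^-3 spin-down term of Eq. (Izz-rel-err-dea):
    spin = 16875 * D**2 / (16 * math.pi**2 * f**2 * (n - 3)**2 * T**3)
    return math.sqrt(4 * sr_r**2 + 4 * sh0_h0**2 + spin)

for T_yr in (0.5, 1.0, 2.0, 4.0):
    print(T_yr, rel_err_Izz(0.2, 30.0, 1000.0, 4.0, T_yr * YEAR))
# The distance term 2*sigma(r)/r = 0.4 dominates beyond T ~ 1 year.
\end{verbatim}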
Since the distance error $\sigma(r) / r$ is assumed to be constant, in the limit of $T \to \infty$ the relative errors asymptote to: \begin{gather} \lim_{T\to \infty}\frac{\sigma(I_{zz})}{I_{zz}} = \frac{2\sigma(r)}{r} \, , \label{eq:Izz_limiting_err} \\ \lim_{T\to \infty}\frac{\sigma(\epsilon)}{\epsilon} = \frac{\sigma(r)}{r} \, , \label{eq:epsilon_limiting_err} \\ \lim_{T\to \infty}\frac{\sigma(m_p)}{m_p} = \frac{\sigma(r)}{r} \, . \label{eq:mp_limiting_err} \end{gather} The asymptotic error in $I_{zz}$ is twice that of the other properties because $I_{zz} \propto r^2$ [Eq.~\eqref{eq:Izz}], whereas for the other two parameters $\epsilon \propto r^{-1}$ and $m_p \propto r$ [Eqs.~\eqref{eq:epsilon}~--~\eqref{eq:mp}]. \section{Monte Carlo simulations} \label{sec:MC} The framework presented in Section~\ref{sec:framework} shows that it is possible to infer three neutron star properties using a continuous wave detection. In this section, we describe how Monte Carlo simulations were used to quantify the accuracy to which these properties may be inferred from a detection. The inference relies on five parameters ($f$, $\dot{f}$, $n$, $h_0$, $r$) [Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}] and the errors of the inference depend on four additional parameters ($T$, $\mathcal{D}$, $\xi$, $\Delta r$) as well as $h_0$. In our simulations we choose to input values of $I_{zz}$ instead of $h_0$, through rearrangement of Eq.~\eqref{eq:h0}. Had we input $h_0$ directly, any particular choice could be viewed as an optimistic or pessimistic case for the detectability of the continuous wave signal. In comparison, choices of $I_{zz}$ relate only to the neutron star's internal physics. While larger values of $I_{zz}$ implicitly lead to a louder continuous wave signal, this also depends on the other neutron star parameters, and so does not relate as directly to the signal detectability. The signal from a neutron star is simulated as the set of input values for the nine parameters ($f^{\text{in}}$, $\dot{f}^{\text{in}}$, $n^{\text{in}}$, $I_{zz}^{\text{in}}$, $r^{\text{in}}$, $T^{\text{in}}$, $\mathcal{D}^{\text{in}}$, $\xi^{\text{in}}$, $\Delta r ^{\text{in}}$). The properties of the neutron star emitter $(I_{zz}^{\text{in}}, \epsilon^{\text{in}}, m_p^{\text{in}})$ can then be calculated using Eqs.~\eqref{eq:Izz}--\eqref{eq:mp}. Measurement errors $(\delta f, \delta \dot{f}, \delta \ddot{f})$ in the simulated $f^{\text{in}}, \dot{f}^{\text{in}},$ and $\ddot{f}^{\text{in}}$ (via $n^{\text{in}}$) are drawn from a multivariate normal distribution with covariance matrix given by Eq.~\eqref{eq:Fisher matrix}; the measurement error $\delta h_0$ in $h_0^{\text{in}}$ is drawn from a normal distribution with standard deviation given by Eq.~\eqref{eq:h0_error}. All other covariances between the parameters $(f^{\text{in}}, \dot{f}^{\text{in}}, \ddot{f}^{\text{in}}, h_0^{\text{in}})$ are assumed to be zero. The measured parameters of the continuous wave signal are then \begin{equation} \label{eq:MC-output-parameters} \begin{aligned} f^{\text{out}} &= f^{\text{in}} + \delta f \,, & \dot{f}^{\text{out}} &= \dot{f}^{\text{in}} + \delta \dot{f} \,, \\ \ddot{f}^{\text{out}} &= \ddot{f}^{\text{in}} + \delta \ddot{f} \,, & h_0^{\text{out}} &= h_0^{\text{in}} + \delta h_0 \,.
\end{aligned} \end{equation} Substitution of $(f^{\text{out}}, \dot{f}^{\text{out}}, \ddot{f}^{\text{out}}, h_0^{\text{out}})$ into Eqs.~\eqref{eq:h0},~\eqref{eq:braking_index} and~\eqref{eq:Izz}--\eqref{eq:mp} gives the inferred neutron star properties $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$, which may then be compared to $(I_{zz}^{\text{in}}, \epsilon^{\text{in}}, m_p^{\text{in}})$. We repeat this process for $10^6$ samples. Below we describe the Monte Carlo procedure in further detail. \subsection{Choice of input parameters} \label{sec:MC_params} Nine variables control the outputs of the Monte Carlo simulations: $f$, $\dot{f}$, $n$, $I_{zz}$, $r$, $T$, $\mathcal{D}$, $\xi$, $\Delta r$. We consider an observation time in the range $T = 0.5$--$4$~years. One can expect gravitational wave detector observing runs to last at least a year~\citep{KAGREtAl2020:PrObLclGrvTrALAVK}. A continuous wave signal detected in a year-long observing run may then be followed up in future and/or archival data. The neutron star distance $r^{\text{in}}$ is fixed to $\SI{1}{kpc}$ for simplicity. Such a distance is within the range where all-sky continuous wave surveys are sensitive to neutron stars with ellipticities $\epsilon \gtrsim 10^{-6}$ and emitting at frequencies $f \gtrsim \SI{100}{Hz}$~\citep{LIGOEtAl2021:AlSCntGrvWIsNtSEOLD}. Given that the neutron star properties depend on the product $r h_0$ [Eqs.~\eqref{eq:Izz}~--~\eqref{eq:mp}], a choice of a smaller (larger) distance would be equivalent to simulating a larger (smaller) $h_0$. The fractional uncertainty in $r$ is chosen to be $\sigma(r) / r = 20\%$. While radio pulsar distances (inferred through dispersion measures) exhibit appreciable scatter in their uncertainties and are susceptible to biases~\citep{Verbiest2012}, a typical measurement uncertainty of $\sim 20\%$ is not unreasonable~\citep{Taylor1993, Yao2017}, and indeed is expected to be readily achievable with next-generation radio telescopes~\citep{Smits2011}. We explore a range of sensitivity depths $\mathcal{D} = 30$--$\SI{150}{Hz^{-1/2}}$. The lower end of the range is consistent with the sensitivities typical of all-sky continuous wave surveys for isolated neutron stars~\citep[Table~I]{DreiEtAl2018:FAcSnsEsCntSr}; given the wide parameter space of $f$, $\dot{f}$, and sky position these searches must cover, their sensitivities are typically lower than targeted continuous wave searches. The upper end of the range is a conservative choice for searches targeting known pulsars~\citep[Table~V]{DreiEtAl2018:FAcSnsEsCntSr}; these searches cover a much smaller parameter space around the pulsar, and can afford the computational cost of performing an optimal matched filter analysis to maximise sensitivity. The range in $\mathcal{D}$ represents two possible scenarios for a first continuous wave detection. A continuous wave candidate initially found in an all-sky survey (with $\mathcal{D} \sim \SI{30}{Hz^{-1/2}}$) would be followed up with more sensitive analyses, significantly increasing its signal-to-noise ratio and yielding a strongly-detected signal. On the other hand, given that searches for continuous waves from known pulsars already employ the most sensitive methods (and hence have $\mathcal{D} \gtrsim \SI{150}{Hz^{-1/2}}$), any signal may initially only be marginally detectable until more sensitive data becomes available.
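These two scenarios can be translated into rough sky- and orientation-averaged signal-to-noise ratios via Eq.~\eqref{eq:rho2_using_depth}; the short Python sketch below assumes $T = 1$~year purely for illustration:
\begin{verbatim}
import math

YEAR = 3.156e7   # one year [s]

def snr(depth, T):
    # rho = sqrt((4/25) T / D^2), from Eq. (rho2_using_depth)
    return math.sqrt(4.0 * T / (25.0 * depth**2))

for D in (30.0, 150.0):   # sensitivity depth [Hz^-1/2]
    print(D, snr(D, YEAR))
# D = 30  -> rho ~ 75: a strongly-detected all-sky candidate
# D = 150 -> rho ~ 15: a marginal known-pulsar detection
\end{verbatim}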
We draw the moment of inertia from the widely accepted range for neutron stars of $I_{zz}^{\text{in}} \in [1, 3]{\times}10^{38} \, \si{kg.m^2}$~\citep{MolnOstg1985:ClcMMmInrNtSt, Bejger2005, WorlEtAl2008:NclCnsMmInNtS, KramEtAl2021:StrGrvTsDbPl, MiaoEtAl2022:MmInPJ07LIGNI}. Ranges for $\epsilon$ and $m_p$ are less well constrained; estimates for $\epsilon$ range from $\sim 10^{-11}$~\citep{BonaGour1996:GrvWPlEmMgFIDst} to $\sim 10^{-4}$~\citep{Owen2005:MxElDfrCmSEEqtS}. Based on observations of radio pulsars and magnetars, the surface magnetic field strength $B = m_p / R^3$ (where $R$ is the neutron star radius) may range from $\sim 10^{8}$ to $\sim 10^{15}$~Gauss~\citep{Reis2001:MgnFlNtStOvr}. Certain values of $(\epsilon, m_p)$ drawn from these ranges represent neutron stars which spin down within timescales of seconds to days, which would be impossible to detect as continuous wave sources. To exclude such regions of the $\epsilon$--$m_p$ space, we instead draw values of $f^{\text{in}}$ and $\dot{f}^{\text{in}}$ from ranges which are typical of parameter spaces for all-sky continuous wave surveys~\citep{LIGOEtAl2021:AlSCntGrvWIsNtSEOLD}: \begin{gather} f^{\text{in}} \in [50, 2000]~\si{Hz} \,, \qquad \dot{f}^{\text{in}} \in [-10^{-8}, -10^{-12}]~\si{Hz.s^{-1}} \,. \end{gather} A braking index is also drawn from $n^{\text{in}} \in (3, 5)$, which is used to compute $\ddot{f}^{\text{in}}$ via Eq.~\eqref{eq:braking_index}, and ($f^{\text{in}}$, $\dot{f}^{\text{in}}$, $\ddot{f}^{\text{in}}$) are then used to compute $\epsilon^{\text{in}}$ and $m_p^{\text{in}}$ via Eqs.~\eqref{eq:epsilon} and~\eqref{eq:mp}. Having fixed $r^{\text{in}}$, chosen $I_{zz}^{\text{in}}$ and $f^{\text{in}}$, and chosen $\epsilon^{\text{in}}$ via $(f^{\text{in}}, \dot{f}^{\text{in}}, \ddot{f}^{\text{in}})$, $h_0^{\text{in}}$ may now be calculated via Eq.~\eqref{eq:h0}. A choice of $\mathcal{D}$ then fixes $S_h$ via Eq.~\eqref{eq:sens-depth}. In this paper, we do not assume a specific gravitational wave detector configuration (e.g.\ by setting $S_h$ to the noise power spectral density of a current or future detector). Instead, we assume that the sensitivity to continuous waves is calibrated by $r$ (the distance out to which signals could be detected) and $\mathcal{D}$ (how deeply the data analysis method can dig into the data to extract weak signals). More sensitive gravitational wave detectors will increase the distances $r$ at which continuous wave signals may be detected, while improved data analysis methods will increase our sensitivity to signals, allowing $\mathcal{D}$ to increase. The cosine of the inclination angle is strictly bounded: $|\xi| \le 1$. As noted in Section~\ref{sec:error_theory}, however, the error in $h_0$ [Eq.~\eqref{eq:h0_error}] becomes infinite at $|\xi| = 1$ due to a coordinate singularity. This is a limitation of the analytic Fisher information matrix approach to error estimation adopted in this paper. That said, the likelihood of sampling a value of $|\xi| \approx 1$ is negligible. The use of median and percentile differences to compare input and output parameters (Section~\ref{sec:norm-relat-errors}) also guards against degraded Monte Carlo samples where $|\xi|$ approaches 1. An alternative approach would have been to assume a particular inclination angle, e.g.\ $\xi = 0$~\citep[cf.][]{Sieniawska2021}.
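The input draws described in this subsection can be sketched in a few lines of Python with NumPy. This is our illustration only; in particular, the log-uniform measure for $\dot{f}^{\text{in}}$ and the uniform measures for the other parameters are assumptions, since the sampling measure is not specified above. The emitter properties follow from the energy balance, which is equivalent to Eqs.~\eqref{eq:epsilon} and~\eqref{eq:mp} once Eq.~\eqref{eq:h0} is substituted:
\begin{verbatim}
import numpy as np

G, c = 6.674e-11, 2.998e8
mu0 = 4e-7 * np.pi
K_EM = 2 * np.pi**2 / (3 * c**3 * mu0)
K_GW = 32 * G * np.pi**4 / (5 * c**5)
r_in = 3.086e19                    # 1 kpc [m]

rng = np.random.default_rng(seed=1)
N = 10**6

f_in    = rng.uniform(50.0, 2000.0, N)      # [Hz]
fdot_in = -10**rng.uniform(-12.0, -8.0, N)  # [Hz/s], log-uniform (assumed)
n_in    = rng.uniform(3.0, 5.0, N)          # braking index
Izz_in  = rng.uniform(1e38, 3e38, N)        # [kg m^2]
xi_in   = rng.uniform(-1.0, 1.0, N)         # cos(iota)

fddot_in = n_in * fdot_in**2 / f_in         # Eq. (braking_index)
# Emitter properties from the energy balance:
eps_in = np.sqrt(fdot_in * (3.0 - n_in) / (2.0 * K_GW * Izz_in * f_in**5))
mp_in  = np.sqrt(Izz_in * fdot_in * (n_in - 5.0) / (2.0 * K_EM * f_in**3))
h0_in  = 4 * np.pi**2 * G * Izz_in * eps_in * f_in**2 / (c**4 * r_in)
\end{verbatim}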
\subsection{Computation of output parameters} \label{sec:comp-outp-param} Having selected the input parameters, output parameters $(f^{\text{out}}, \dot{f}^{\text{out}}, \ddot{f}^{\text{out}}, h_0^{\text{out}})$ are computed via Eq.~\eqref{eq:MC-output-parameters}. An output braking index $n^{\text{out}}$ may then be computed via Eq.~\eqref{eq:braking_index}. Computation of $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ requires $3 < n^{\text{out}} < 5$; this is not guaranteed, and can be violated if $n^{\text{in}} \approx 3$ or $n^{\text{in}} \approx 5$ and the errors $\Delta f$, $\Delta \dot{f}$, and/or $\Delta \ddot{f}$ are also large. Where $3 < n^{\text{out}} < 5$ is not satisfied, the Monte Carlo sample is simply discarded. At shorter $T$ ($\lessapprox 0.5$ years), a sizeable fraction ($\gtrapprox 80\%$) of the samples must be discarded. This fraction decreases with longer $T$, and often becomes negligible ($\lessapprox 1\%$) once $T \gtrsim 1$~year, but depends on the exact parameters of the simulation. While this limitation may impede inference of the properties of a neutron star which is emitting almost purely electromagnetic or gravitational radiation ($n^{\text{in}} \approx 3$ or $\approx 5$ respectively), it is unlikely to be an impediment where an appreciable fraction of the star's rotational kinetic energy is radiated through both mechanisms. \subsection{Comparison of inputs and outputs} \label{sec:norm-relat-errors} The Monte Carlo simulations described above result in pairs of input and output neutron star properties $(I_{zz}^{\text{in}}, I_{zz}^{\text{out}}) \in \mathcal{MC}^{I_{zz}}$, $(\epsilon^{\text{in}}, \epsilon^{\text{out}}) \in \mathcal{MC}^{\epsilon}$, and $(m_p^{\text{in}}, m_p^{\text{out}}) \in \mathcal{MC}^{m_p}$, where $\mathcal{MC}$ denotes the results of the simulations for a particular property. We quantify the agreement between input and output properties using the median relative error over each set: \begin{equation} \label{eq:median-relative-error} \mathcal{E}(I_{zz}) \equiv \median \Bigg\{\, \frac{ | I_{zz}^{\text{out}} - I_{zz}^{\text{in}} | }{ I_{zz}^{\text{in}} } \,\Bigg|\, (I_{zz}^{\text{in}}, I_{zz}^{\text{out}}) \in \mathcal{MC}^{I_{zz}} \,\Bigg\} \,, \end{equation} and similarly for $\mathcal{E}(\epsilon)$ and $\mathcal{E}(m_p)$. From the differential error analysis of Eqs.~\eqref{eq:Izz-rel-err-dea}--\eqref{eq:mp-rel-err-dea} it is expected that, as $T$ increases, $\mathcal{E}$ will asymptote to a value determined by the error in the distance $r$. We therefore define normalised relative errors which asymptote to unity in the limit of $T \to \infty$: \begin{gather} \label{eq:norm-median-relative-error} \bar{\mathcal{E}}(I_{zz}) = \frac{ \mathcal{E}(I_{zz}) }{ 2 \mathcal{E}(r) } \, , \\ \bar{\mathcal{E}}(\epsilon) = \frac{ \mathcal{E}(\epsilon) }{ \mathcal{E}(r) } \, , \\ \bar{\mathcal{E}}(m_p) = \frac{ \mathcal{E}(m_p) }{ \mathcal{E}(r) } \,, \end{gather} where $\mathcal{E}(r)$ is the median relative error for $r$. Note that $\mathcal{E}(I_{zz})$ is normalised by $2 \mathcal{E}(r)$ due to the quadratic dependence of $I_{zz}$ on $r$; see Section~\ref{sec:framework} and Eqs.~\eqref{eq:Izz} and~\eqref{eq:Izz_limiting_err}. We have assumed (Section~\ref{sec:MC_params}) a relative error in $r$ of 20\%, i.e.\ samples of $(r^{\text{out}} - r^{\text{in}}) / r^{\text{in}}$ are drawn from a normal distribution $\mathcal{N}(0, 0.2)$ with mean zero and standard deviation $0.2$.
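This median can be checked in a couple of lines (a sketch using NumPy; analytically, the median of $|X|$ for $X \sim \mathcal{N}(0, 0.2)$ is $0.2\,\Phi^{-1}(0.75) \approx 0.135$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.normal(loc=0.0, scale=0.2, size=10**7)
print(np.median(np.abs(x)))   # ~ 0.135
\end{verbatim}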
Drawing $\sim 10^8$ samples from this distribution gives: \begin{align} \mathcal{E}(r) &= \median \Bigg\{\, \frac{ | r^{\text{out}} - r^{\text{in}} | }{ r^{\text{in}} } \,\Bigg|\, \frac{ r^{\text{out}} - r^{\text{in}} }{ r^{\text{in}} } \sim \mathcal{N}(0, 0.2) \,\Bigg\} \\ &\approx 0.135 \,. \end{align} We therefore expect $\mathcal{E}(I_{zz})$ to asymptote to $\sim 27\%$, and $\mathcal{E}(\epsilon)$ and $\mathcal{E}(m_p)$ to asymptote to $\sim 14\%$, at sufficiently large $T$. \section{Results} \label{sec:results} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Single_NS.png} \caption{ Convergence of $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ to $(I_{zz}^{\text{in}}, \epsilon^{\text{in}}, m_p^{\text{in}})$ as a function of observation time $T$. Here $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $n^{\text{in}} = 4$, $f^{\text{in}} = \SI{1000}{Hz}$, $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, which implies $h_0 = \num{8.1e-25}$, $\epsilon = \num{3.8e-7}$, and $m_p = \SI{2.3e19}{T.m^3}$. The input values (dashed lines) are plotted against the median, 16th, and 84th percentiles for $10^6$ output value samples. } \label{fig:single_NS} \end{figure} Figure~\ref{fig:single_NS} illustrates how the errors in the inferred neutron star properties scale with observation time. Here the inputs are fixed to the representative values $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $\epsilon^{\text{in}} = \num{1.2e-7}$, $m_p^{\text{in}} = \SI{7.2e18}{T.m^3}$, and output values $(I_{zz}^{\text{out}}, \epsilon^{\text{out}}, m_p^{\text{out}})$ are simulated for different $T$, assuming a sensitivity depth $\mathcal{D} = \SI{30}{Hz^{-1/2}}$. As expected, the errors of the inferred parameters decrease with increasing observation time. For $T \gtrsim \SI{2}{years}$ the errors in $I_{zz}$, $\epsilon$, and $m_p$ asymptote to the error due to $r$, consistent with Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea}. We neglect the possibility that the error in distance may be improved over time if better models of the galactic electron density distribution become available. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Stability_time_vs_depth.png} \caption{ $I_{zz}$ stability time versus sensitivity depth $\mathcal{D}$ for braking indices $n^{\text{in}} = 3.01, 3.1, 4.99$. Here $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, and $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, with values for $h_0$, $\epsilon$, and $m_p$ implied by Eqs.~\eqref{eq:h0},~\eqref{eq:epsilon}, and~\eqref{eq:mp} respectively. Plotted are a subsampling of the results from $10^6$ samples (light-coloured dots) and best-fit curves (dark-coloured lines). } \label{fig:sensitivity_depth} \end{figure} We define the ``stability time'' for $I_{zz}$, $\epsilon$, and $m_p$ as the time required for the normalised relative error $\bar{\mathcal{E}}$ of each property to reach $1.1$, i.e.\ to come within 10\% of the asymptotic distance error [see Eq.~\eqref{eq:norm-median-relative-error}]. Figure~\ref{fig:sensitivity_depth} plots the stability time for $I_{zz}$ as a function of $n$ and $\mathcal{D}$ for signals with $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, and $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$.
We see that, for continuous wave signals at $\mathcal{D} \sim \SI{30}{Hz^{-1/2}}$ initially detected in an all-sky survey, the asymptotic error in $I_{zz}$ is approached after a few years of observation with a fully-coherent follow-up search, which would include analysing both archival and future data. For continuous waves detected from known pulsars, where $\mathcal{D} \approx \SI{150}{Hz^{-1/2}}$, the asymptotic errors in $I_{zz}$ are not approached until the star has been observed for $T \approx \SI{20}{years}$, which is an unrealistically long time span. Note, however, that the definition of ``stability time'' here assumes the detector sensitivity $S_h$ remains constant; in reality $S_h$ is likely to decrease over time~\citep{KAGREtAl2020:PrObLclGrvTrALAVK}, particularly if third-generation gravitational wave detectors are constructed~\citep{BailEtAl2021:GrvPhAst2020}. Such improvements would decrease the sensitivity depth $\mathcal{D}$ of a detected signal, such that the inferred parameters would converge to the asymptotic distance error faster than suggested in Figure~\ref{fig:sensitivity_depth}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Rel_err_vs_n.png} \caption{ Normalised relative errors $\bar{\mathcal{E}}$ as a function of braking index $n^{\text{in}}$. Here $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, with values for $h_0$, $\epsilon$, and $m_p$ implied by Eqs.~\eqref{eq:h0},~\eqref{eq:epsilon}, and~\eqref{eq:mp} respectively. Plotted are a subsampling of the results from $10^6$ samples (light-coloured dots) and best-fit curves (dark-coloured lines). } \label{fig:braking_index} \end{figure} Figure~\ref{fig:braking_index} plots normalised relative errors in $I_{zz}$, $\epsilon$, and $m_p$ as functions of $n$ for signals with $I_{zz}^{\text{in}} = \SI{2e38}{kg.m^2}$, $f^{\text{in}} = \SI{1000}{Hz}$, $\dot{f}^{\text{in}} = \SI{-1e-9}{Hz.s^{-1}}$, and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$. The neutron star properties $I_{zz}$ and $\epsilon$ are best estimated where the star is losing almost all of its energy through gravitational waves ($n \approx 5$), as expected. On the other hand, the electromagnetic property $m_p$ is best estimated where the star is losing its energy through both electromagnetic and gravitational radiation ($n \approx 4$). When energy loss through electromagnetic radiation is more dominant ($n \approx 3$), the errors for all three properties are larger than in the $n \approx 4$ case. This is consistent with Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea}, and arises because the continuous wave observation cannot measure the spin-down parameters accurately when the neutron star only weakly emits continuous waves. However, note that these results are based on the techniques described in Section~\ref{sec:framework}. It may be possible for electromagnetic astronomers to use alternative techniques to measure $m_p$ with lower errors for neutron stars with certain braking indices. \begin{figure*} \includegraphics[width=\textwidth]{Figures/Heatmap.png} \caption{ Normalised relative errors (top to bottom rows) $\bar{\mathcal{E}}(I_{zz})$, $\bar{\mathcal{E}}(\epsilon)$, $\bar{\mathcal{E}}(m_p)$, for braking indices (left to right columns) $n = 3.01$, $3.1$, $4.99$, as functions of $f$ and $\dot{f}$. Plotted are the median errors over $I_{zz}$ and $\xi$, for $T = 1$~year and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$.
The different colours represent the values of the normalised relative error, and the white areas indicate where $\bar{\mathcal{E}} \ge 30.0$ and/or where $3 < n^{\text{out}} < 5$ no longer holds. A total of $10^6$ samples were used in the plots. } \label{fig:heatmap} \end{figure*} Figure~\ref{fig:heatmap} plots normalised relative errors $\bar{\mathcal{E}}(I_{zz})$, $\bar{\mathcal{E}}(\epsilon)$, and $\bar{\mathcal{E}}(m_p)$ as functions of $n$, $f$, and $\dot{f}$, taking the median errors over the sampled ranges of $I_{zz}$ and $\xi$ given in Section~\ref{sec:MC_params}. We assume $T = 1$~year and $\mathcal{D} = \SI{30}{Hz^{-1/2}}$, which is relevant to a continuous wave signal detected in an all-sky continuous wave survey. The errors in all three neutron star properties are smallest at the highest spin-down rates ($\dot{f} \approx \SI{-e-8}{Hz.s^{-1}}$), where the rate of rotational kinetic energy loss from the star is highest, and at the lowest frequencies ($f\approx \SI{50}{Hz}$). Once $|\dot{f}| \lessapprox \SI{e-11}{Hz.s^{-1}}$, the errors are sufficiently large that $n$ cannot be reliably measured, the restriction $3 < n^{\text{out}} < 5$ is no longer satisfied, and most Monte Carlo samples must be discarded (see Section~\ref{sec:comp-outp-param}). For each heatmap, the error as a function of $|\dot{f}|$ increases more rapidly for lower $f$ than for higher $f$, consistent with the $f^{-2}$ dependence of the $T^{-3}$ terms in Eqs.~\eqref{eq:Izz-rel-err-dea}~--~\eqref{eq:mp-rel-err-dea}. Figure~\ref{fig:heatmap} suggests that normalised relative errors $\bar{\mathcal{E}} \lesssim 1.2$ are achievable over much of the $f$--$\dot{f}$ parameter space typically searched for continuous waves, and particularly for rapidly spinning-down sources ($|\dot{f}| \gtrapprox \SI{e-9}{Hz.s^{-1}}$). This implies errors in $I_{zz}$ of $\sim 32\%$, and errors in $\epsilon$ and $m_p$ of $\sim 16\%$. Given that models of non-axisymmetrically deformed neutron stars~\citep{BonaGour1996:GrvWPlEmMgFIDst, UshoEtAl2000:DfrAcNtSCGrvWEm, Cutl2002:GrvWvNtSLTrdBFl, Owen2005:MxElDfrCmSEEqtS, PaynMela2006:FrSGrvRGHydOsMgCMANS, HaskEtAl2008:MdlMgnDfNtSt, VigeMela2009:IEDtcGrvRdMgCMAcNS, WettEtAl2010:SnMgnCnMnAccNS, PriyEtAl2011:QdrMMgnCnMnAcNSEES} typically predict $\epsilon$ only to an order of magnitude, the error in $\epsilon$ should be sufficient to test such models. A $\sim 30\%$ error in $I_{zz}$ is of similar magnitude to the uncertainties in existing measurements of $I_{zz}$ for PSR~J0737$-$3039A: \citet{MiaoEtAl2022:MmInPJ07LIGNI} found errors of $\sim 10$~--~$20\%$ after assuming an equation of state; without that assumption the errors in $I_{zz}$ increase by a factor of $\sim 4$. In comparison, no explicit assumptions regarding the neutron star equation of state are required for the framework of Section~\ref{sec:framework} or the results presented in Section~\ref{sec:results}. Estimates of $m_p$ using this framework could also be compared to existing estimates for known pulsars and serve as an independent verification of such measurements. \section{Assumptions} \label{sec:assumptions} This section elaborates on some of the assumptions made in this paper. We assume that continuous waves will eventually be detectable by contemporary and/or future gravitational wave detectors. This remains uncertain. The smallest predicted ellipticities, $\epsilon \sim 10^{-11}$ from magnetic field distortions~\citep{BonaGour1996:GrvWPlEmMgFIDst}, are small enough that only a few detections, at best, may be expected in the next generation of detectors~\citep{Pitk2011:PrObCntGrvWKPl}.
Stars with stronger magnetic fields ($B \sim 10^{15}$~Gauss) lead to larger ellipticities $\epsilon \gtrsim 10^{-6}$~\citep{Cutl2002:GrvWvNtSLTrdBFl, HaskEtAl2008:MdlMgnDfNtSt}, which are more likely to be detectable by the current generation of gravitational wave detectors. It is also possible that the internal magnetic fields of neutron stars could be stronger than their surface fields \citep{Lasky2015, Bransgrove2017}. Only a small fraction of the known pulsars are likely to be detectable, particularly if the fraction of rotational kinetic energy emitted in gravitational waves is small~\citep{Pitk2011:PrObCntGrvWKPl}. That said, the $\mathcal{O}(10^3)$ known pulsars may not be representative of the $\mathcal{O}(10^8)$ population of galactic neutron stars~\citep{Palomba2005, Knispel2008, Wade2012, Cieslar2021, ReedEtAl2021:MdGlNSPplUCnGrvS}, which could include a sub-population of strong gravitational wave emitters or ``gravitars''. We assume that Eq.~\eqref{eq:energy_balance} is a reasonable starting point for modelling the energy radiated by neutron stars. It is generally assumed that electromagnetic radiation from known pulsars is predominantly dipolar, that neutron stars are triaxial rotors, and that continuous wave radiation would be predominantly quadrupolar~\citep{OstrGunn1969:NtrPlsITh}. These assumptions predict $3 < n < 5$, which is at odds with braking indices measured from radio pulsars, which span orders of magnitude outside this range~\citep{JohnGall1999:PlBrkInRvs, ZhanXie2012:WDBrkIndPlSRnM100Mll, Lower2021}. Modified models for pulsar emission have been proposed to explain the observed braking indices~\citep{AlleHorv1997:ImpCnsObBrIYPSp, Mela1997:SpObRtCrrOMgn, XuQiao2001:PlBrkInTEmsMd, AlvaCarr2004:MnpPlSpn, YueEtAl2007:WCBrkIndTUAbNtPln, HamiEtAl2015:BrkInIslPl}, including the addition of gravitational waves \citep{deArEtAl2016:GrvWvPlMsrBrI, ChisEtAl2018:AnlAppStPlSp}. On the other hand, accurate phase-connected measurement of the second time derivative of the rotation frequency, needed to compute $n$, is challenging~\citep[cf.][]{JohnGall1999:PlBrkInRvs}. Existing measurements of $n$ are generally dominated by timing noise~\citep{HobbEtAl2004:LngTmObs374Pl, HobbEtAl2010:AnlTmIrr366Pl}, with some possible exceptions~\citep{ArchEtAl2016:HgBrkInPls,LaskEtAl2017:BrInMllMgn}. Prospects for an accurate determination of $n$ may be improved by a continuous wave detection. Since gravitational wave detectors are omni-directional, gravitational wave data is recorded at a much higher duty cycle ($\gtrsim 70\%$; \citealt{KAGREtAl2020:PrObLclGrvTrALAVK}) than typical pulsar observing cadences (e.g.\ $\lesssim 35$~hours per year, or $\sim 0.4\%$; \citealt{Lam2018:OpPTAObsCSnLwGrvS}). Although $\ddot{f}$ would not be resolved in all-sky continuous wave surveys, which sacrifice phase resolution in favour of reduced computational cost, a candidate from such a survey would then be followed up using a fully phase-coherent search in a restricted parameter space around the candidate. Such a search would be computationally inexpensive, and would be able to resolve $\ddot{f}$ to a resolution $\sim \mathcal{D} / T^{7/2}$ [cf.\ Eq.~\eqref{eq:spindown_cov}]. Pulse emission from radio pulsars is subject to various noise sources~\citep{deKoAnze1993:SmAnlPrNBnXPls, ArchEtAl2008:RNsAnmXPlTmRsd, LentEtAl2016:SNSysStPrFIntPTADRl, GoncEtAl2021:IdnMtgNSrPrPlTDS}. Individual pulses from radio pulsars are highly variable, and achieve a stable pulse profile only once averaged over many cycles~\citep{Kram2005:Pls}.
It remains to be seen whether detected continuous wave signals will suffer from comparable noise sources~\citep{AshtEtAl2015:ETNTrNrrChSCnGrvWP, Suvorova2016, Myers2021a, Myers2021b}. Gravitational waves, being weakly interacting, are not perturbed by matter along the line of sight to the star, unless the signal is lensed~\citep{BiesHari2021:GrvLnCnGrvWv}. Furthermore, unlike electromagnetic emission, which arises from the outer surface and plasma of the star where only a small fraction of the neutron star mass is located, gravitational wave emission arises from the rotating mass quadrupole. Physical processes within the star would therefore need considerable energy to perturb the star's rotation, and hence the continuous wave signal, to a level of noisiness comparable to the timing noise observed in radio pulsars. Superfluid vortices within the star's interior are suspected of being responsible for glitches~\citep{EysdMela2008:GrvRdtPlGl, WarsMela2011:GrsMdPlGlt, HoEtAl2015:PnSprMsrMsUPGl, HaskEtAl2020:TrPnSprNtSPGRcv, LIGOEtAl2021:CnsLODGrvEmDRGlPPJ0}, which do perturb the star's rotation and may affect the detectability of continuous waves~\citep{AshtEtAl2018:SmcGlCntSrMt}. Glitches, however, are observed as discrete events even in prolifically glitching pulsars~\citep{HoEtAl2020:RtBGltNTmGlPJ05}, and the extent to which they could constitute a persistent noise source in detected continuous wave signals is unknown \citep{Yim2022}. Should a continuous wave detection yield a braking index $n \notin [3, 5]$, this might represent stronger evidence for new physics than current radio pulsar observations. Finally, we assume that the neutron star also emits electromagnetic radiation, and that a measurement of its distance can be obtained. Neutron stars are expected to possess magnetic fields~\citep{Reis2001:MgnFlNtStOvr} and will therefore (provided that the field is not symmetric about the star's rotation axis) emit electromagnetic radiation. Continuous waves may first be detected either from a known pulsar, or as a gravitational-wave-only candidate from an all-sky survey; in either case, observations over $T \gtrsim 1$~year would give the sky position of the source to sub-arcsecond resolution~\citep{Riles2013, Riles2017}. This would facilitate further electromagnetic observations to either detect an electromagnetic counterpart, or else refine the properties of one already known. Other methods exist to measure stellar distances in the absence of a radio pulsar detection. Parallax may be used to determine the distances to nearby neutron stars~\citep{Seto2005:GrvWAstRRtNSEsTDs,WaltEtAl2010:RvsPrIsNtSRJ18UHI}, while distances to neutron stars in supernova remnants may be inferred through observation of the radial velocities of the surrounding ejecta~\citep{ReedEtAl1995:ThrStCssSpRISpS}; these methods yield uncertainties comparable to those of radio pulsar distances. \section{Summary} \label{sec:summary} This paper presents a first analysis of what properties may be inferred from a neutron star radiating both electromagnetically and in detectable continuous gravitational waves. We develop a simple Fisher-information-based parameter estimation framework, which gives estimates of the uncertainties in the stellar moment of inertia $I_{zz}$, the equatorial ellipticity $\epsilon$, and the component of the magnetic dipole moment perpendicular to the rotation axis, $m_p$.
This framework does not assume a particular neutron star equation of state, and only requires a detection of continuous waves and a measurable distance to the star. Monte Carlo simulations over a parameter space of gravitational wave frequency and its derivatives, typical of that covered by all-sky continuous wave surveys, demonstrate that the relative errors in $I_{zz}$, $\epsilon$, and $m_p$ asymptote to 14--27\%, assuming a 20\% error in distance. The observation time required to reach these limits may be as little as a few years for a strong continuous wave signal detected in an all-sky survey; for weaker signals, such as those potentially associated with known pulsars, longer observations may be required. We also find that the errors of the inferred parameters tend to be smaller when the braking index is close to $n \approx 4$, when $f$ is smaller, and when $|\dot{f}|$ is larger. Future work could extend the assumed neutron star energy loss model of Eq.~\eqref{eq:energy_balance} to include a more complex model of the neutron star magnetic field, e.g.~\citet{LaskMela2013:TlTMgFNtSTGrvWSgn}. Recasting the parameter inference in a Bayesian framework would be advantageous for many reasons, including the avoidance of the coordinate singularities present in the Fisher matrix approach, and the use of prior information from other gravitational wave and electromagnetic observations of neutron stars. \section*{Acknowledgements} We thank Lucy Strang, Lilli Sun, Matthew Bailes, and Ryan Shannon for helpful discussions. This research is supported by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav) through project number CE170100004. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author(s). \bibliographystyle{mnras}
{ "timestamp": "2022-09-23T02:14:47", "yymm": "2209", "arxiv_id": "2209.10981", "language": "en", "url": "https://arxiv.org/abs/2209.10981" }
\section{Introduction} \label{sec:intro} Classical scalar fields coupled to out-of-equilibrium quantum matter play an important role in various settings in cosmology. Some key examples include non-perturbative particle production processes during the reheating after inflation, via a parametric resonance~\cite{Kofman:1994rk,kofman:1997yn,Greene:1997fu,Braden:2010wd,Berges:2002cz} or via spinodal instability~\cite{Calzetta:1989bj,Guth:1985ya,Weinberg:1987vp,Dufaux:2006ee,Felder:2000hj,Felder:2001kt,Bassett:1997az,Markkanen:2015xuw,Fairbairn:2018bsw}, as well as the processes leading to the electroweak baryogenesis~\cite{Cline:2000nw,Kainulainen:2001cn,Kainulainen:2002th,Cline:2013gha,Cline:2017qpe,Cline:2020jre,Konstandin:2013caa,Kainulainen:2021oqs} and the leptogenesis mechanism~\cite{Buchmuller:2000nd,Beneke:2010dz,Anisimov:2010dk,Dev:2017trv,DeSimone:2007gkc,Garny:2009qn,Garbrecht:2011aw,Garny:2011hg,Dev:2017wwc,Jukkala:2021sku}. Finding a complete solution of such problems often requires non-perturbative methods and non-equilibrium quantum field theory. In particular, in the resonant particle production case the newly created quanta may significantly affect the evolution of the system~\cite{Boyanovsky:1992vi,Boyanovsky:1993pf,Baacke:2001zt,Arrizabalaga:2004iw,Arrizabalaga:2005tf,Kainulainen:2021eki}. In this work we study tachyonic dark matter production during the reheating epoch in a setup proposed in~\cite{Markkanen:2015xuw,Fairbairn:2018bsw}. Non-minimally coupled scalar fields may undergo a tachyonic instability, or spinodal decomposition, when the effective mass term $\xi R \chi^2$ periodically takes negative values, driven by the oscillating Ricci scalar $R$ during reheating. In~\cite{Markkanen:2015xuw} it was shown that for stable scalar fields with sufficiently weak couplings to visible matter the tachyonic particle production induced by the curvature coupling produces adiabatic dark matter, whose abundance can be made to agree with the observed value over a wide range of coupling values. The results of~\cite{Markkanen:2015xuw}, and later of~\cite{Fairbairn:2018bsw}, are based on perturbative studies of the particle production similar to those applied to the so-called tachyonic reheating in~\cite{Dufaux:2006ee,Felder:2000hj,Felder:2001kt}. In~\cite{Figueroa:2021iwm}, the dynamics of non-minimally coupled scalars were studied using classical lattice simulations, but most of the numerical results shown in that work apply to the case without scalar self-interactions. Here we revisit the tachyonic dark matter production of~\cite{Fairbairn:2018bsw}, applying a fully non-perturbative 2PI-approach using methods introduced in~\cite{Kainulainen:2021eki} (for earlier work see~\cite{Herranen:2008hi,Herranen:2008hu,Herranen:2008di,Herranen:2010mh,Fidler:2011yq}). The 2PI-framework is a powerful tool for studying dynamical non-equilibrium problems. It results in evolution equations which naturally include the backreaction from out-of-equilibrium modes on the evolution of the one-point function. We derive the renormalized 2PI equations of motion in an on-shell scheme in terms of physical parameters in the lowest non-trivial loop approximation. We then solve for the coupled dynamics of the one- and two-point functions of the scalar field and investigate the momentum structure of the two-point function. We identify the non-perturbative processes of parametric resonance and spinodal instability taking place during the reheating stage.
The efficiency of these processes is found to sensitively depend on the parameters of the theory, such as the spectator self-interaction strength and the inflaton decay rate. Also, the tachyonic and subsequent parametric processes may be coupled in a very intricate way. We note that the methods and their numerical implementation discussed here are not limited to the particular example at hand, but similar techniques can also be carried over to more general setups. This paper is organized as follows. In \cref{sec:model} we introduce the model and in \cref{sec:2PI} we derive the renormalized 2PI equations of motion in the comoving frame in the Hartree approximation. In \cref{sec:moments} we recast the equation for the two-point function into a form of moment equations in the mixed representation. In \cref{sec:results} we apply the numerical approach introduced in~\cite{Kainulainen:2021eki} to the physical setup of~\cite{Fairbairn:2018bsw}, which included backreaction but assumed adiabatic expansion for the mode functions and some further technical approximations. Finally, \cref{sec:conclusions} contains our conclusions and outlook. \section{The model} \label{sec:model} Following~\cite{Markkanen:2015xuw,Fairbairn:2018bsw}, we study a $\mathbb{Z}_{2}$-symmetric scalar singlet model where the singlet $\chi$ has no couplings to other matter fields. The singlet action is given by \begin{equation} \mathcal{S}_{\chi} = \int \mathrm{d}^4x\,\sqrt{-g}\biggl[ \frac{1}{2}(\nabla^{\mu}\chi)(\nabla_{\mu}\chi) - \frac{1}{2} m^2\chi^2 + \frac{\xi}{2}R\chi^2-\frac{\lambda}{4}\chi^4 \biggr]. \label{chiaction} \end{equation} We use the particle physics convention for the metric signature: $\mathrm{d}s^2 = \mathrm{d}t^2 - a^2\mathrm{d}{\bm{x}}^2$. We will assume that the singlet is energetically subdominant during inflation and reheating, $\rho_{\chi} \ll 3H^2 M_{\rm P}^2$, and treat it as a test field in a classical background space-time, whose evolution is determined by the inflaton field $\phi$. It should be noted that the non-minimal coupling $\xi R\chi^2$ of the field $\chi$ quantized in a classical curved space-time acquires radiative corrections already at the one-loop level in the presence of the self-interaction \cite{Buchbinder:1992rb}. Therefore, although $\xi$ can be renormalized to zero at any given scale, it cannot be made to vanish on all scales. Rescaling the field $\chi$ by the scale factor, \begin{equation} \sigma \equiv a(t) \chi , \end{equation} and switching to the conformal time \(\eta\) defined through $a \mathrm{d}\eta \equiv \mathrm{d} t$, we can recast the action \cref{chiaction} for the $\chi$-field into an effectively flat space form: \begin{equation} \label{eq:sigma_action} \mathcal{S}_{\sigma}=\int \mathrm{d}\eta \, \mathrm{d}^3\bm{x}\left[\frac{1}{2}(\partial_{\eta}\sigma)^2-\frac{1}{2}(\nabla\sigma)^2 -\frac{1}{2} m_{\rm eff}^2(\eta) \sigma^2-\frac{\lambda}{4}\sigma^4\right], \end{equation} where the time-dependent effective mass term is defined as \begin{equation} m^2_{\rm eff}(\eta) \equiv a^2(\eta)\bigg[m^2 - \bigg( \xi-\frac{1}{6} \bigg)R(\eta)\bigg]. \label{eq:effective_mass} \end{equation} This action is the starting point for our derivation of the coupled evolution equations for the one- and two-point functions of the $\chi$-field. \paragraph{Equation of motion for the inflaton and the scale factor.} We will treat the inflaton at the classical level and assume a quadratic inflaton potential.
Because we wish to study the $\chi$-evolution beyond the decay of the inflaton, we also add a coupling between the inflaton and a radiation component. The radiation energy density is set to zero before the end of inflation. Moreover, we will treat $\chi$ as a test field, so that the Hubble rate and the evolution of the Ricci scalar are determined solely by the inflaton and the radiation component. We then have \begin{equation} \begin{aligned} \ddot{\phi} + 3 H\dot{\phi}+\Gamma \dot\phi+m_{\phi}^2\phi & = 0 ,\\ \dot \rho_{\rm rad} + 4H \rho_{\rm rad} &= \Gamma \dot \phi^2 , \label{eq:inflaton_equations} \end{aligned} \end{equation} where the dots denote differentiation with respect to the cosmic time $t$. The above equations are solved together with the Friedmann equation $\dot a/a = H$, where the Hubble rate is given by \begin{equation} H = \frac{1}{\sqrt{6}M_{\rm P}}\left( \dot\phi^2+ m_\phi^2 \phi^2 + 2\rho_{\rm rad}\right)^{1/2}, \label{eq:friedman} \end{equation} with \(M_{\mathrm{P}}\) being the reduced Planck mass. The time-dependent Ricci scalar in this setup is given by \begin{equation} R = \frac{1}{ M_{\rm P}^2}\left(\dot\phi^2-2 m_\phi^2 \phi^2 \right), \label{eq:Ricci} \end{equation} as the conformally invariant radiation component gives no contribution at the classical level. These equations can be solved independently of the equations of motion for the spectator field. In the latter the scale factor $a$ and the Ricci scalar $R$ then appear as external functions that source the non-trivial behaviour of the $\chi$-field. \section{The renormalized 2PI equations of motion} \label{sec:2PI} In this section we derive the renormalized equations of motion for the mean $\sigma$-field and its two-point function corresponding to the action~\cref{eq:sigma_action}, using the 2PI effective action technique of non-equilibrium quantum field theory~\cite{Cornwall:1974vz,Berges:2004yj}. The generic form of the 2PI effective action of a scalar field is \begin{equation} \Gamma_{\rm 2PI} [\bar\sigma, \Delta_\sigma] = \mathcal{S}[\bar\sigma] - \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\bigl[ \ln (\Delta_\sigma)\bigr] + \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\bigl[\Delta_{0\sigma}^{-1}\Delta_\sigma\bigr] + \Gamma_2[\bar\sigma, \Delta_\sigma], \label{2PI-effective-action} \end{equation} where $\mathcal{S}$ is the classical action, $\bar\sigma(x)$ is the classical field and $\Delta_\sigma(x,y)$ is the connected two-point function of the scaled $\sigma$-field and the trace contains integration over the Keldysh contour $\mathcal{C}$~\cite{Keldysh:1964ud} and summation over possible field indices. The classical, real-time inverse propagator is \begin{equation} \mathrm{i}\Delta_{0\sigma,ab}^{-1}(x,y;\bar\sigma) = - \Big[ \dalembert_x + m^2_{\rm eff}(\eta) + 3\lambda \bar\sigma_a^2\Big]\delta^{(4)}(x-y)\delta_{ab}, \label{eq:free-inverse-propagator} \end{equation} where $\dalembert_x = \partial_\eta^2 -\partial_{\bm x}^2$ and $a,b \in \{1, 2\}$ are the time path indices of the Keldysh contour. The interaction term $\Gamma_2[\bar\sigma,\Delta_\sigma]$ consists of all two-particle irreducible vacuum graphs with lines corresponding to the full propagator $\Delta_\sigma$ and interactions derived from the shifted Lagrangian density $\mathcal{L}[\sigma \rightarrow \bar\sigma + \sigma_q]$, where $\sigma_q$ is the quantum fluctuation around the classical field configuration $\bar \sigma$. 
The equations of motion of the one- and two-point functions are then obtained as the stationary conditions of the 2PI effective action: \begin{equation} \frac{\delta \Gamma_{\rm 2PI}}{\delta \bar\sigma_a}=0 \qquad {\rm and} \qquad \frac{\delta \Gamma_{\rm 2PI}}{\delta \Delta_\sigma^{ab}}=0. \label{eq:stationarity} \end{equation} We restrict our attention to the lowest non-trivial order in the 2PI-expansion, called the Hartree approximation. In this case the interaction term is just \begin{equation} \Gamma_2[\bar\sigma, \Delta_\sigma] \equiv -\frac{3\lambda}{4} \int \mathrm{d}\eta \, \mathrm{d}^3 \bm{x} \, \Delta_\sigma^2(x,x). \end{equation} The non-renormalized equations of motion then become \begin{subequations} \label{eq:eom-for-12pt-fununren} \begin{align} \biggl[ \dalembert_x + m_{\rm eff}^2(\eta) + \lambda\bar\sigma^2(x) + 3 \lambda\Delta_\sigma(x,x)\biggr] \bar\sigma(x) &= 0 , \label{eq:eom-for-1pt-fun} \\ \biggl[ \dalembert_x + m_{\rm eff}^2(\eta) + 3\lambda \bar\sigma^2(x) + 3 \lambda\Delta_\sigma(x,x) \biggr]\mathrm{i}\Delta_\sigma^{ab}(x,y) &= a\delta^{ab} \delta^{(4)}(x-y). \label{eq:eom-for-2pt-fun} \end{align} \end{subequations} In particular, the bare local correlation function $\Delta_\sigma(x,x)$ is a divergent quantity and equations~\cref{eq:eom-for-12pt-fununren} clearly need to be renormalized. We shall now show how this can be done in the 2PI-context, generalizing the derivation of~\cite{Kainulainen:2021eki} to a non-static space-time. \subsection{Renormalization} \label{sec:renormalization} A systematic renormalization in the 2PI-context was developed in~\cite{Berges:2005hc}, but we shall follow an equivalent, more intuitive method introduced in~\cite{Fejos:2007ec} and extended to curved space-time in~\cite{Arai:2012sh} (see also~\cite{Pilaftsis:2017enx,Pilaftsis:2013xna}). A crucial difference between the 1PI- and the 2PI-cases is that in the latter an infinite number of counterterms and loop diagrams get resummed and mix at high orders in the perturbative expansion. This introduces a number of sub-divergences that may depend on finite temperature or even on the out-of-equilibrium quantum corrections, and it gives rise to auxiliary $n$-point functions, where some or all of the external field lines are replaced by internal propagators. Each auxiliary function needs a new renormalization condition, but the final equations of motion are independent of the particular choices. We shall closely follow the treatment of~\cite{Kainulainen:2021eki}, extending it to the case of non-zero curvature. The renormalized quantities are defined from the bare ones through \begin{equation} \begin{alignedat}{2} \sigma &\equiv Z^{1/2}_\idx{2} \sigma_{\rm R}, \hspace{5em} \Delta_\sigma &&\equiv Z_\idx{0} \Delta_{\rm R}, \\ m_\idx{i}^2 &\equiv m^2_{{\rm R}\idx{i}} + \delta m^2_\idx{i}, \hspace{2.85em} \lambda_\idx{i} &&\equiv \lRidx{i} + \delta\lambda^\idx{i}, \hspace{3.5em} \xi_\idx{i} \equiv \xi_\mathrm{R}^\idx{i} + \delta \xi^\idx{i}. \label{eq:ren-quant} \end{alignedat} \end{equation} The index enclosed in parentheses tells how many lines in the vertex function corresponding to the coupling or mass parameter in question are associated with external fields, as explained in~\cite{Fejos:2007ec,Kainulainen:2021eki}. Note that both the bare and the renormalized couplings in general are different for different $i$, as we shall see below. 
We then define accordingly: \begin{subequations} \label{eq:effcts} \begin{align} \delta_\lambda^\idx{0} &\equiv Z^2_\idx{0} \big(\lRidx{0} + \delta \lambda^\idx{0}\big) - \lRidx{0},\\ \delta_\lambda^\idx{2} &\equiv Z_\idx{0}Z_\idx{2} \big(\lRidx{2}+ \delta \lambda^\idx{2}\big) - \lRidx{2},\\ \delta_\lambda^\idx{4} &\equiv Z^2_\idx{2} \big(\lRidx{4}+ \delta \lambda^\idx{4}\big) - \lRidx{4},\\ \delta_m^\idx{i} &\equiv Z_\idx{i} \bigl(m^2_{{\rm R}\idx{i}} + \delta m^2_\idx{i}\bigr) - m^2_{{\rm R}\idx{i}},\\ \delta_{\xi}^\idx{i} &\equiv Z_\idx{i} \bigl(\xi_{\mathrm{R}}^{\idx{i}} - \sfrac{1}{6} + \delta \xi^{\idx{i}} \bigr) - \xi_{\mathrm{R}}^{\idx{i}} + \sfrac{1}{6} . \end{align} \end{subequations} Given these definitions we can write the unrenormalized equations of motion in terms of the renormalized quantities as follows: \begin{subequations} \label{eq:eom-for-12pt-fun} \begin{align} \begin{split} \biggl[ Z_\idx{2} \dalembert_x &+ a^2\Bigl(m^2_{{\rm R}\idx{2}} + \delta_m^\idx{2}\Bigr) - a^2 \Big(\xi_\mathrm{R}^\idx{2} - \sfrac{1}{6} + \delta_{\xi}^\idx{2} \Bigr)R \\ &+ 3\Bigl(\lRidx{4} + \sfrac{1}{3}\delta_{\lambda}^\idx{4}\Bigr)\sigma_{\mathrm{R}}^2 + 3\Bigl(\lRidx{2} + \delta_{\lambda}^\idx{2}\Bigr)\Delta_{\mathrm{R}}(x,x)\biggr] \sigma_{\mathrm{R}}(x) = 2\lambda_{\rm R}^\idx{4}\sigma^3_{\rm R} \, , \end{split} \label{eq:eom-for-1pt-funR} \\ \begin{split} \biggl[ Z_\idx{0} \dalembert_x &+ a^2\Bigl(m^2_{{\rm R}\idx{0}} + \delta_m^\idx{0} \Bigr) - a^2 \Big( \xi_{\mathrm{R}}^{{\idx{0}}} - \sfrac{1}{6} + \delta_\xi^\idx{0} \Big)R \\ &+ 3\Bigl(\lRidx{2} + \delta_{\lambda}^\idx{2}\Bigr)\sigma_{\mathrm{R}}^2 + 3\Bigl(\lRidx{0} + \delta_{\lambda}^\idx{0} \Bigr)\Delta_{\mathrm{R}}(x,x)\biggr]\mathrm{i}\Delta_{\mathrm{R}}^{bc}(x,y) =b\delta^{bc} \delta^{(4)}(x-y). \end{split} \label{eq:eom-for-2pt-funR} \end{align} \end{subequations} Here and in what follows we drop the bar when referring to the classical field $\sigma_{\rm R}$. \paragraph{Renormalization conditions.} To proceed, we must now define the renormalization conditions. We start by setting on-shell conditions for the auxiliary two-point function $\Delta^{11}_{\rm R}$ at a vanishing external vacuum expectation value, $\sigma_{\mathrm{R}} = v_{\mathrm{R}} = 0$, and some finite $R = R_0$, along with the requirement that the quantum corrections vanish at the minimum of the effective action: \begin{equation} \mathrm{i}\bigl(\Delta^{11}_{\rm R}\bigr)^{{-1}}\bigg|_{\stackrel{\scriptstyle\sigma_{\mathrm{R}}=0}{R=R_0}} \equiv k^2 - a^2m_{\mathrm{\Delta}}^2, \quad \frac{\rm d}{{\rm d}k^2}\,\mathrm{i}\bigl(\Delta^{11}_{\rm R}\bigr)^{-1}\bigg|_{\stackrel{\scriptstyle\sigma_{\mathrm{R}}=0}{R=R_0}} \equiv 1 \quad {\rm and} \quad \frac{\delta\Gamma_{\rm 2PI}}{\delta \sigma_{\mathrm{R}}}\bigg|_{\stackrel{\scriptstyle\sigma_{\mathrm{R}}=0}{R=R_0}} \equiv 0 \label{eq:intermediate-ren-conditions} \end{equation} Note that we are using the comoving units, so $k$ is also the comoving 4-momentum. These conditions imply that $Z_\idx{0} = 1$. Furthermore, one finds $Z_\idx{2} = 1$ in the Hartree approximation, when the renormalization is performed at $\sigma_{\mathrm{R}} = 0$~\cite{Kainulainen:2021eki}. As a result, one can set also $\smash{m^2_{\mathrm{\Delta}} = m^2_{\rm ph}}$, where $m_{\rm ph}$ refers to the usual mass parameter defined at the off-shell momentum $p^2=0$. 
The renormalization conditions~\cref{eq:intermediate-ren-conditions}, together with the equation of motion~\cref{eq:eom-for-2pt-funR}, then give \begin{equation} m^2_{{\rm R}\idx{0}} + \delta_m^\idx{0} - \Big(\xi_\mathrm{R}^\idx{0} - \sfrac{1}{6} + \delta_{\xi}^\idx{0} \Big) R_0 + 3\Bigl(\lRidx{0} + \delta_{\lambda}^\idx{0}\Bigr)a^{-2}\Delta_\mathrm{R} = m^2_{\mathrm{ph}}. \label{eq:eom-GR} \end{equation} Here $\Delta_\mathrm{R}$ is computed at the renormalization point. The $a^{-2}$-factor multiplying $\Delta_{\rm R}$ arises from the scaling of the field $\sigma$. In physical units it is absorbed into the correlation function. In the Hartree approximation we can renormalize $\lambda_{\rm R}^\idx{0}$ and $\lambda_{\rm R}^\idx{2}$ similarly, by setting \begin{equation} \delta_\lambda^\idx{0} \equiv \delta_\lambda^\idx{2}. \label{eq:rencons-1} \end{equation} From $Z_\idx{0,2}= 1$ it then follows that $\lambda_{\rm R}^\idx{0} = \lambda_{\rm R}^\idx{2}$. So, both bare and renormalized couplings can be chosen equal for these vertex functions. Next we set the bare mass parameters $m_\idx{i}^2$ and the $\xi_\idx{i}$-parameters equal for $i \in \{0,2\}$, which gives \begin{equation} m^2_{{\rm R}\idx{0}} + \delta_m^\idx{0} = m^2_{{\rm R}\idx{2}} + \delta_m^\idx{2} \qquad {\rm and} \qquad \xi_{\rm R}^\idx{0}+\delta_\xi^\idx{0} = \xi_{\rm R}^\idx{2} + \delta_\xi^\idx{2}, \label{eq:rencons-2} \end{equation} and we finally define \begin{equation} \lRidx{4} + \sfrac{1}{3}\delta_\lambda^\idx{4} \equiv \lRidx{0} + \delta_\lambda^\idx{0}. \label{eq:rencons-3} \end{equation} This condition ensures that the renormalized effective potential has the same first derivative as the tree-level potential for a finite $\sigma_{\mathrm R}$ (for more details, see~\cite{Kainulainen:2021eki}). Note that the bare coupling $\lambda_\idx{4}$ is then different from $\lambda_{\idx{0,2}}$, but this has no consequence for the renormalized low-energy theory. Finally, we could relate $\xi_{\mathrm{R}}^\idx{0}$ to a physical mass measured in a background with a non-zero $R$, but we simply define it as an $\overline{\rm MS}$-parameter instead. \paragraph{Cancellation of the sub-divergences.} Next we impose the conditions on the cancellation of the sub-divergences~\cite{Fejos:2007ec}. To this end we must work out the primitive divergence in the local correlation function, which in the Hartree approximation is given just by the momentum integral over the renormalized correlator $\mathrm{i}\Delta^{11}_{\rm R}$ defined in the conditions~\cref{eq:intermediate-ren-conditions}: \begin{align} \Delta_{\mathrm{R}} &= Q^\epsilon \int \frac{{\rm d}^dp}{(2\uppi )^d}\,\Delta_{\rm R}^{11} (p) = -\frac{a^2m_{\mathrm{ph}}^2}{16\uppi^2}\biggl[\frac{2}{\overline{\epsilon}} + 1 - \ln\biggl(\frac{a^2m^2_{\mathrm{ph}}}{Q^2}\biggr)\biggr] \nonumber \\ &\equiv a^2m_{\mathrm{ph}}^2 \Delta_{\overline\epsilon} + \Delta_{\rm F0}\bigl(am_{\rm ph},Q\bigr), \label{eq:division-R} \end{align} where $\Delta_{\overline\epsilon} \equiv -1/\bigl(8\uppi^2\overline\epsilon\bigr)$ and $Q$ is the comoving momentum scale used for the $\overline{\rm MS}$-re\-nor\-mal\-i\-za\-tion. 
Substituting this expression back into equation~\cref{eq:eom-GR} and requiring that the finite and divergent parts cancel separately, we find the following two equations: \begin{align} m_{\rm ph}^2 &\equiv m_{{\rm R}\idx{0}}^2 - \Bigl(\xi_\mathrm{R}^\idx{0}-\sfrac{1}{6}\Bigr) R_0 + 3\lRidx{0}a^{-2}\Delta_{\rm F0}, \label{eq:tree-level-def} \\[.2em] 0 &= \delta_m^\idx{0} - R_0\delta_{\xi}^\idx{0} + 3\delta_\lambda^\idx{0}a^{-2}\Delta_{\rm F0} + 3\Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr)m_{\rm ph}^2\Delta_{\overline\epsilon}. \label{eq:loop-level-eq} \end{align} Using equation~\cref{eq:tree-level-def} one can rewrite equation~\cref{eq:loop-level-eq} as \begin{equation} \begin{split} \delta_m^\idx{0} + 3m^2_{{\rm R}\idx{0}}\Big(\lRidx{0}+\delta_\lambda^\idx{0}\Big)\Delta_{\overline\epsilon} &+ 3\Big[\delta_\lambda^\idx{0} + 3\Big(\lRidx{0} +\delta_\lambda^\idx{0}\Big)\lRidx{0}\Delta_{\overline\epsilon} \Big] a^{-2}\Delta_{\rm F0} \\ &- \Big[ \delta_{\xi}^\idx{0} + 3\Bigl(\xi_\mathrm{R}^\idx{0}-\sfrac{1}{6}\Bigr) \Big(\lRidx{0}+\delta_\lambda^\idx{0}\Big)\Delta_{\overline\epsilon} \Big] R_0 = 0. \label{eq:loop-level-rearr} \end{split} \end{equation} This equation can hold for arbitrary $R_0$ and $\Delta_{\rm F0}$ only if the coefficients multiplying each of these terms vanish separately. This gives us three constraints between the counterterms: \begin{subequations} \label{eq:last-ren-eqs2} \begin{align} \delta_m^\idx{0} + 3m^2_{{\rm R}\idx{0}}\Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr) \Delta_{\overline\epsilon} &= 0, \label{eq:last-ren-eqs-12} \\[.2em] \delta_\lambda^\idx{0} + 3\Bigl(\lRidx{0} +\delta_\lambda^\idx{0}\Bigr)\lRidx{0} \Delta_{\overline\epsilon} &= 0, \\[.2em] \delta_{\xi}^\idx{0} + 3\Bigl(\xi_{\mathrm{R}}^\idx{0}-\sfrac{1}{6}\Bigr) \Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr)\Delta_{\overline\epsilon} &= 0. \label{eq:last-ren-eqs-22} \end{align} \end{subequations} From these we find the explicit expressions for the counterterms $\delta_\lambda^\idx{0}$, $\delta_m^\idx{0}$ and $\delta_{\xi}^\idx{0}$: \begin{equation} \delta_\lambda^\idx{0} = - \frac{3\bigl(\lRidx{0}\bigr)^2\Delta_{\overline\epsilon}}{1+3\lRidx{0}\Delta_{\overline\epsilon}}, \hspace{1.5em} \delta_m^\idx{0} = -\frac{3m^2_{{\rm R}\idx{0}}\lRidx{0}\Delta_{\overline\epsilon}}{1+3\lRidx{0} \Delta_{\overline\epsilon}},\hspace{1.5em} \delta_\xi^\idx{0} = -\frac{3\bigl(\xi_{\mathrm{R}}^\idx{0}-\sfrac{1}{6}\bigr)\lRidx{0} \Delta_{\overline\epsilon}}{1+3\lRidx{0}\Delta_{\overline\epsilon}} \, . \label{eq:auxiliary-cts} \end{equation} The running of the renormalized parameters now follows from requiring that the corresponding bare parameters are constants: $\partial_Q\bigl[Q^\epsilon\bigl(\lRidx{0} + \delta_\lambda^\idx{0}\bigr)\bigr] = 0$, $\partial_Q\bigl[Q^\epsilon\bigl( m^2_{{\rm R}\idx{0}} + \delta_m^\idx{0}\bigr)\bigr] = 0$ and $\smash{\partial_Q\bigl[Q^\epsilon\bigl( \xi^{\idx{0}}_{{\rm R}}\! - \sfrac{1}{6} +\delta_\xi^\idx{0}\bigr)\bigr] = 0}$. 
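The counterterm algebra above is mechanical and easy to check with computer algebra. The following short sympy sketch (ours, purely for illustration) verifies that the expressions~\cref{eq:auxiliary-cts} solve the three constraints~\cref{eq:last-ren-eqs2} identically:
\begin{verbatim}
from sympy import symbols, Rational, simplify

lam, m2, xi, De = symbols('lambda_R0 m2_R0 xi_R0 Delta_eps')
dlam = -3*lam**2*De / (1 + 3*lam*De)                      # delta_lambda^(0)
dm   = -3*m2*lam*De / (1 + 3*lam*De)                      # delta_m^(0)
dxi  = -3*(xi - Rational(1, 6))*lam*De / (1 + 3*lam*De)   # delta_xi^(0)

# The three sub-divergence constraints; each should simplify to zero:
print(simplify(dlam + 3*(lam + dlam)*lam*De))
print(simplify(dm + 3*m2*(lam + dlam)*De))
print(simplify(dxi + 3*(xi - Rational(1, 6))*(lam + dlam)*De))
\end{verbatim}
All three expressions indeed simplify to zero, confirming the cancellation.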
For the running of $\lRidx{0}$ and $\xi_{\mathrm R}^\idx{0}$ one then finds \begin{equation} \lRidx{0}(Q) = \frac{\lambda^\idx{0}_{{\rm R0}}} {1+\frac{3\lambda^\idx{0}_{{\rm R0}}}{8\uppi^2} \ln\Bigl(\frac{Q_0}{Q}\Bigr)} \qquad {\rm and} \qquad \xi_{\rm R}^\idx{0}(Q) - \sfrac{1}{6} = \frac{\xi_{\rm R0}^\idx{0}-\sfrac{1}{6}} {1+\frac{3\lambda^\idx{0}_{{\rm R0}}}{8\uppi^2} \ln\Bigl(\frac{Q_0}{Q}\Bigr)}, \label{eq:running-parameters} \end{equation} where $\lambda^\idx{0}_{{\rm R0}}\equiv\lambda^\idx{0}_{{\rm R}}(Q_0)$ and $\xi^\idx{0}_{{\rm R0}}\equiv \xi^\idx{0}_{{\rm R}}(Q_0)$ and our previous choices imply that $\lRidx{2}=\lRidx{0}$. The running of the mass terms is analogous to the running of the couplings~\cite{Kainulainen:2021eki}. On the other hand, the coupling $\lRidx{4}$ does not run at all. Indeed, $\lRidx{4}$ remains finite because of the condition $\delta_\lambda^\idx{4} = 3\delta_\lambda^\idx{0}$ up to finite terms, which implies that $\partial_Q\lRidx{4} = 0$. \paragraph{Renormalized equations of motion.} Next we show that the full evolution equations~\cref{eq:eom-for-12pt-fun} get renormalized by the counterterms we have defined. We begin by defining a finite effective mass term, which includes general corrections from $R$, $\sigma_{\rm R}$ and $\Delta_{\rm F}$, as follows: \begin{equation} M_{\rm eff}^2(\sigma_{\mathrm{R}},\Delta_{\rm F}) \equiv a^2\Big[m^2_{{\rm R}\idx{0}} - \Big(\xi_{\mathrm{R}}^\idx{0}-\sfrac{1}{6}\Big)R\Big] + 3\lRidx{0}\Bigl(\sigma_{\rm R}^2 + \Delta_{\rm F}\Bigr). \label{eq:effective-mass-full} \end{equation} The finite part $\Delta_{\rm F}$ of the local correlation function $\Delta_{\rm R}$ is defined similarly to equation~\cref{eq:division-R}: \begin{equation} \Delta_{\mathrm{R}} \equiv M_{\rm eff}^2(\sigma_{\mathrm{R}},\Delta_{\rm F}) \Delta_{\overline\epsilon} + \Delta_{\rm F}. \label{eq:division-full} \end{equation} We furthermore split $\Delta_{\rm F} \equiv \Delta_{\rm F0}(M_{\rm eff},Q) + \delta \Delta_{\rm F}$, where $\Delta_{\rm F0}$ was defined in equation~\cref{eq:division-R} and $\delta \Delta_{\rm F}$ represents the remaining non-equilibrium fluctuations. Using this expression, the equation of motion for the two-point function becomes \begin{equation} \begin{split} \phantom{H} \biggl[\dalembert_x + M_{\rm eff}^2 &+ a^2\left(\delta_m^\idx{0} - R\delta_{\xi}^\idx{0}\right) + 3\delta_\lambda^\idx{0}\Bigl( \sigma_{\rm R}^2 + \Delta_{\rm F}\Bigr) \\ &+ 3\Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr)M_{\rm eff}^2 \Delta_{\overline\epsilon}\biggr]\mathrm{i}\Delta^{bc}_{\rm R}(x,y) = b\delta^{bc}\delta^{(4)}(x-y). \label{eq:loop-level-eq-full} \end{split} \end{equation} Using the definition~\cref{eq:effective-mass-full} again in the term proportional to $\Delta_{\overline\epsilon}$, we can write equation~\cref{eq:loop-level-eq-full} as \begin{equation} \begin{split} \phantom{H} \biggl\{\dalembert_x + M_{\rm eff}^2 &-a^2 \left[\delta_{\xi}^\idx{0} + 3\Bigl(\xi_{\mathrm{R}}^\idx{0}-\sfrac{1}{6}\Bigr) \Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr)\Delta_{\overline\epsilon} \right] R \\ &+ 3\Big[ \delta_\lambda^\idx{0} + 3\Bigl(\lRidx{0} +\delta_\lambda^\idx{0}\Bigr)\lRidx{0} \Delta_{\overline\epsilon} \Big] \Bigl( \sigma_{\rm R}^2 + \Delta_{\rm F}\Bigr) \\ &+ a^2\Big[\delta_m^\idx{0} + 3m^2_{{\rm R}\idx{0}}\Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr) \Delta_{\overline\epsilon}\Big] \biggr\}\mathrm{i}\Delta^{bc}_{\rm R}(x,y) = b\delta^{bc}\delta^{(4)}(x-y). 
\label{eq:loop-level-eq-full2} \end{split} \end{equation} The renormalization conditions~\cref{eq:last-ren-eqs2} set all the terms in the square brackets to zero, leaving behind only the finite mass term $M_{\rm eff}^2$. It should be appreciated how the {\em constant} counterterms cancel infinities that depend on the dynamical variables $\sigma_{\rm R}$, $R$ and $\Delta_{\rm F}$. Similar manipulations, crucially dependent on the definition~\cref{eq:rencons-3}, can be performed on the equation~\cref{eq:eom-for-1pt-funR} for the one-point function. Our final equations then become \begin{subequations} \label{eq:ds_eoms} \begin{align} \Big[ \dalembert_x + M_{\rm eff}^2(\sigma_{\mathrm{R}},\Delta_{\rm F}) \Big] \sigma_{\mathrm{R}} &= 2\lRidx{4}\sigma_{\rm R}^3, \label{eq:eom-for-phi-R} \\ \phantom{H} \Big[ \dalembert_x + M_{\rm eff}^2(\sigma_{\mathrm{R}},\Delta_{\rm F}) \Big]\mathrm{i}\Delta_{\rm R}^{bc}(x,y) &= b\delta^{bc}\delta^{(4)}(x-y). \label{eq:eom-for-delta-R} \end{align} \end{subequations} Let us finally point out that these equations are independent of the renormalization scale $Q$ used for the auxiliary renormalization conditions: one can show that $\partial_Q (M_{\rm eff}^2)=0$ using the gap equation~\cref{eq:effective-mass-full} together with the running equations~\cref{eq:running-parameters}. \paragraph{Physical parameters.} We have now renormalized our equations of motion, but we still have not related our parameters to observable quantities. We address this for completeness, even though none of the parameters in the problem are directly observable. We start by specifying the Hartree-corrected effective potential in the limit of constant curvature, consistent with our renormalization conditions. The calculation is identical to the one given in~\cite{Kainulainen:2021eki} and we only quote the final result, first found in~\cite{AmelinoCamelia:1992nc}: \begin{equation} V_{\rm H}(\sigma_{\rm R}) = -\frac{\lRidx{4}}{2}\sigma^4_{\rm R} + \frac{\overbar m^4(\sigma_{\rm R})}{12\lRidx{0}} -\frac{\overbar m^4(\sigma_{\rm R})}{64\uppi^2}\biggl[\ln\biggl(\frac{\overbar m^2(\sigma_{\rm R})}{Q^2}\biggr) - \frac{1}{2} \biggr], \label{2PI-effective-potential-final} \end{equation} where $\overbar m^2$ is the solution to equation~\cref{eq:effective-mass-full} for $R=R_0$ and $\Delta_{\rm F} = \Delta_{\rm F0}\bigl(\overbar m^2\bigr)$. Now, differentiating the effective potential twice, we find \begin{equation} \Gamma_{\rm 1PI}^\idx{2}\bigl(p^2 = 0, \sigma_{\mathrm{R}}\bigr) \; = \; \frac{\partial^2 V_{\rm H}(\sigma_{\mathrm{R}})}{\partial \sigma_{\rm R}^2} = \overbar m^2(\sigma_{\rm R}) + 6\Bigl[ \lRidx{0}\bigl(\overbar m^2(\sigma_{\rm R})\bigr) - \lRidx{4} \Bigr]\sigma_{\rm R}^2. \label{eq:second-derivative-of-potential} \end{equation} Because $\smash{\overbar m^2(0) \equiv a^2m_{\mathrm{ph}}^2}$, we see that the mass parameter $m_{\mathrm{ph}}$ of the auxiliary propagator equals the value of the full two-point function $\smash{\Gamma_{\rm 1PI}^\idx{2}\bigl(p^2= 0,\sigma_{\rm R}=0\bigr)}$. Equation~\cref{eq:second-derivative-of-potential} also suggests that it is natural to define $\lRidx{0}(m_\mathrm{\rm ph}) \equiv \lRidx{4}$. Finally, one can easily show that $\lambda^\idx{4}_{\rm R}$ coincides with the four-point function measured at zero momentum: \begin{equation} \lambda_{\mathrm{R}} \; \equiv \;\Gamma_{\rm 1PI}^\idx{4}(p_i=0,\sigma_{\rm R}=0) \;= \; \frac{1}{6}\frac{\partial^4V_{\rm H}(\sigma_{\rm R})}{\partial \sigma_{\rm R}^4}\bigg|_{\sigma_{\rm R}=0} = \lRidx{4}. 
\label{eq:effective-coupling} \end{equation} The mass $m_{\rm ph}$ and the coupling $\lambda_{\rm R}$ can be related to an on-shell mass and a four-point function in the physical region without further reference to the 2PI-methods. Finally, we define the parameter $\xi^{\idx{0}}_{\rm R}$ as the $\overline{\rm MS}$-parameter at scale $m_{\rm ph}$: $\bar \xi_{\rm R} \equiv \xi^{\idx{0}}_{\rm R}(m_{\rm ph})$. These considerations now uniquely define all the parameters in our model. \section{Wigner-space and moment equations} \label{sec:moments} The direct numerical implementation of equations~\cref{eq:ds_eoms} would be very difficult, and we shall use the phase space picture instead. To this end we define the Wigner transform of a generic function of two variables $\mathcal{O}(u,v)$ as follows: \begin{equation} \label{eq:wigner} \mathcal{O}(k,X) \equiv \int\mathrm{d}^4r\,\mathrm{e}^{\mathrm{i}k \cdot r}\,\mathcal{O}\left(X+\frac{r}{2}, X-\frac{r}{2}\right), \end{equation} where $r = u-v$ and $X = \frac{1}{2}(u+v)$ are the relative and average coordinates, respectively. For the homogeneous and isotropic system relevant here, the transformation with respect to spatial coordinates reduces to the ordinary Fourier transformation. In this case the equation~\cref{eq:eom-for-delta-R} for the two-point function in Wigner-space becomes just \begin{equation} \label{eq:sd_wigner_des} \left[\frac{1}{4}\partial_{\eta}^2-k^2-\mathrm{i}k_0 \partial_{\eta} + {{M_{\mathrm{eff}}^2}}\bigl(\eta-\sfrac{\mathrm{i}}{2}\partial_{k_0}\bigr)\right]\mathrm{i}\Delta^{bc}_{\bm k}(k_0,\eta) = b\delta^{bc}, \end{equation} where we denoted $M_{\rm eff}^2(\sigma_{\mathrm{R}},\Delta_{\rm F}) \equiv M_{\mathrm{eff}}^2(\eta)$. To study the dynamics of the coupled system of the one- and two-point functions it suffices to concentrate on any of the four components of the propagator $\Delta^{ab}$. We choose to work with $\Delta^{+-} = \Delta^<$ and define its $n$\textsuperscript{th} moment as \begin{equation} \label{eq:moment} \rho_{n{\bm k}} \equiv \int\frac{\mathrm{d}k_0}{2\uppi}\,k_0^n\,\Delta^<_{\bm{k}}(k_0,\eta). \end{equation} Integrating equation~\cref{eq:sd_wigner_des} over $k_0$, weighted by $1$ and by $k_0$, and taking real and imaginary parts of the resulting equations, one finds a closed set of equations for the three lowest moments with $n \in \{0,1,2\}$~\cite{Herranen:2010mh,Kainulainen:2021eki}.~The equation for $\rho_{1{\bm k}}$ is simple: $\partial_\eta\rho_{1{\bm k}}=0$, which implies that $\rho_{1{\bm k}}$ is a constant. In addition we observe that the quantity \begin{equation} \label{eq:X_stab} X_{\bm k} \equiv 2\rho_{0{\bm k}} \rho_{2{\bm k}} - \Bigl( |\bm{k}|^2 + M^2_{\mathrm{eff}} \Bigr) \rho_{0{\bm k}}^2 - \sfrac{1}{4}\left(\partial_{\eta} \rho_{0{\bm k}}\right)^2 \end{equation} is conserved in our setup: $\partial_{\eta} X_{\bm k} = 0$. This is no longer true in an interacting system~\cite{Herranen:2010mh,Kainulainen:2021eki}, but even then using $X_{\bm k}$ as a variable instead of $\rho_{2{\bm k}}$ leads to numerically more stable equations. 
In the end we then have the following equations for the homogeneous field $\sigma_{\rm R}$ and the moments $\rho_{n{\bm k}}$: \begin{equation} \begin{split} \Bigl(\partial_{\eta}^2 + {M_{\mathrm{eff}}^2}\Bigr)\sigma_{\rm R} &= 2\lambda_{\rm R}\sigma^3_{\rm R}, \\[.2em] \Bigl(\sfrac{1}{4}\partial_{\eta}^2 + |\bm{k}|^2 + {M_{\mathrm{eff}}^2} \Bigr)\rho_{0{\bm k}} &= \rho_{2{\bm k}}, \label{eq:finaleom_desc_stab} \end{split} \end{equation} where $\rho_{2{\bm k}}$ is evaluated using equation~\cref{eq:X_stab}. The non-trivial nature of the evolution equations is hidden in the gap equation~\cref{eq:effective-mass-full}, which couples all the variables. Using the moments and the fact that $M_\mathrm{eff}^2$ is actually $Q$-independent, we can write the gap equation directly in terms of our chosen physical parameters, choosing $Q=am_{\rm ph}$: \begin{equation} \begin{split} M_{\rm eff}^2 = a^2m^2_{\rm ph} &- a^2\Big(\bar \xi_{\mathrm{R}}-\sfrac{1}{6}\Big)(R-R_0) + 3\lambda_{\rm R}\sigma_{\rm R}^2 + 3\lambda_{\rm R}\int_{\bm k} \nolimits \Biggl( \rho_{0\bm{k}} - \frac{\Theta_{\bm k}}{2\omega_{{\bm k}}} \Biggr) \\ &+ \frac{3\lambda_{\rm R}}{16\uppi^2} \left[M_{\mathrm{eff}}^2 \ln\left(\frac{M^2_\mathrm{eff}}{a^2m_{\rm ph}^2}\right) - M_{\mathrm{eff}}^2 + a^2m_{\rm ph}^2 \right], \label{eq:effective-mass-full_wigner} \end{split} \end{equation} where we defined $\int_{\bm k} \equiv \frac{1}{2\uppi^2}\int_0^{\infty} {\rm d}|{\bm k}| |{\bm k}|^2$, $\Theta_{\bm k} \equiv \theta\bigl(\omega_{\bm k}^2(\eta)\bigr)$, $\omega_{\bm k}^2 \equiv |{\bm k}|^2 + M_{\rm eff}^2$, $\bar\xi_{\mathrm{R}} \equiv \xi_{\rm R}^\idx{0}(m_{\rm ph})$ and $R_0$ is the background Ricci scalar at the renormalization point.\footnote{To get to equation~\cref{eq:effective-mass-full_wigner} one uses for example the relation $\smash{m}^2_{\mathrm{R}\idx{0}} = m^2_{\mathrm{ph}}\bigl(1 + \frac{3\lambda_{\mathrm{R}}}{16\uppi^2}\bigr) + \bigl(\bar \xi_{\mathrm{R}}-\sfrac{1}{6}\bigr)R_0$, which can be derived from equation~\cref{eq:tree-level-def} and the running equations for the mass and the couplings.}~We assume that renormalization is performed in a background with no curvature and set $R_0=0$ here. Finally, we define the particle number density and the quantum coherence functions in terms of the moments as follows~\cite{Herranen:2010mh,Kainulainen:2021eki}: \begin{subequations} \label{f-rho_HOM} \begin{align} n_{\bm{k}} &\equiv \frac{1}{\omega_{\bm k}}\rho_{2\bm{k}} + \rho_{1\bm{k}}, \label{f-rho_HOM-n}\\ \overbar{n}_{\bm{k}} &\equiv \frac{1}{\omega_{\bm k}}\rho_{2\bm{k}} - \rho_{1\bm{k}} - 1, \label{f-rho_HOM-nbar}\\ f^{c\pm}_{\bm k} &\equiv \omega_{\bm k}\rho_{0\bm{k}} - \frac{1}{\omega_{\bm k}}\rho_{2\bm{k}} \pm \frac{\mathrm{i}}{2}\partial_\eta\rho_{0\bm{k}}. \label{f-rho_HOM-fc} \end{align} \end{subequations} We will denote the momentum-integrated versions of these functions by $n \equiv \int_{\bm k} n_{\bm k}$ and $f^c \equiv \int_{\bm k} |f^{c\pm}_{\bm k}|$. In our case of a real field with no collisions, $\rho_{1 \bm k} = -1/2$ throughout, so that $n_{\bm{k}}$ and $\overbar{n}_{\bm{k}}$ actually coincide. The functions $f^{c\pm}_{\bm k}$ in turn measure the degree of quantum coherence, or squeezing, between particle-antiparticle pairs with opposite 3-momenta~\cite{Fidler:2011yq}, and particle production can only take place when $f^{c\pm}_{\bm k} \neq 0$. 
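To make the structure of this coupled system concrete, the following minimal Python sketch (an illustration written for this presentation, not the production code behind the results below) evolves a single comoving mode of equations~\cref{eq:finaleom_desc_stab}, eliminating $\rho_{2{\bm k}}$ through the conserved quantity~\cref{eq:X_stab} and starting from free vacuum data $\rho_{0\bm k} = 1/(2\omega_{\bm k})$, $\partial_\eta\rho_{0\bm k} = 0$, for which $X_{\bm k} = 1/4$. In place of the self-consistent gap equation~\cref{eq:effective-mass-full_wigner}, which couples all modes, it uses a hand-prescribed, decaying oscillatory $M^2_{\rm eff}(\eta)$ that mimics the curvature term:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

kk = 1.0        # comoving momentum |k| (toy value)
X  = 0.25       # conserved X_k fixed by the vacuum initial data

def M2_eff(eta):                 # toy effective mass: decaying oscillation
    return 4.0*np.cos(2.0*eta) / (1.0 + 0.1*eta)**3

def rho2(eta, r0, dr0):          # rho_2k from the conserved X_k
    w2 = kk**2 + M2_eff(eta)
    return (X + w2*r0**2 + 0.25*dr0**2) / (2.0*r0)

def rhs(eta, y):                 # (1/4) rho0'' + (|k|^2 + M^2) rho0 = rho2
    r0, dr0 = y
    return [dr0, 4.0*(rho2(eta, r0, dr0) - (kk**2 + M2_eff(eta))*r0)]

w0  = np.sqrt(kk**2 + M2_eff(0.0))
sol = solve_ivp(rhs, (0.0, 40.0), [1.0/(2.0*w0), 0.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

for eta in (0.0, 20.0, 40.0):    # n_k = rho_2/omega - 1/2  (rho_1 = -1/2)
    r0, dr0 = sol.sol(eta)
    w = np.sqrt(kk**2 + M2_eff(eta))   # real at these evaluation times
    print(f"eta = {eta:5.1f}   n_k = {rho2(eta, r0, dr0)/w - 0.5:.3e}")
\end{verbatim}
The early spinodal windows with $|{\bm k}|^2 + M^2_{\rm eff} < 0$ generate a non-zero particle number out of the vacuum, in direct analogy with the tachyonic bursts discussed in the next section.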
The unique vacuum, corresponding to a state with neither particles nor coherence, can then be defined as \begin{equation} \rho_{0\bm k}^{\rm vac} \equiv \frac{\Theta_{\bm k}}{2\omega_{\bm k}}, \qquad \partial_\eta\rho_{0\bm k}^{\rm vac}\equiv0, \qquad \rho_{1\bm k}^{\rm vac} \equiv -\frac{1}{2} \quad \mathrm{and} \quad \rho_{2\bm k}^{\rm vac} \equiv \frac{\omega_{\bm k}}{2}\Theta_{\bm k}. \label{eq:non-coherent-vacuum} \end{equation} The Heaviside theta function $\Theta_{\bm k}$ ensures that no spinodal modes are included in the vacuum. Finally, we define the non-equilibrium fluctuations in the moments as $\delta\rho_{n \bm k} \equiv \rho_{n \bm k} - \rho_{n\bm k}^{\rm vac}$. \section{Results} \label{sec:results} We numerically solve the equations~\cref{eq:finaleom_desc_stab} and~\cref{eq:effective-mass-full_wigner}, following the methods of~\cite{Kainulainen:2021eki}. We focus on a setup where the energy density of $\sigma$ stays negligible compared to the total energy density, $\rho_{\sigma}\ll 3 H^2 M_{\rm P}^2$, during the entire simulation time. The scale factor $a$ and the Ricci scalar $R$ are therefore entirely set by the inflaton and its decay products via equations~\cref{eq:inflaton_equations}, and they appear as externally given functions in equations~\cref{eq:finaleom_desc_stab,eq:effective-mass-full_wigner}. We choose $m_{\phi} = 1.5 \times 10^{13}$ GeV and set slow-roll initial conditions with $\phi_{\rm in} = 15 M_{\rm P}$ in the inflaton sector. In the spectator sector we set $m_{\rm ph} = 150$ GeV, initialize the two-point function $\Delta_{\rm R, in}$ by giving the Minkowski vacuum values~\cref{eq:non-coherent-vacuum} for the moments, and give a small non-zero initial value for the one-point function $\sigma_{\rm R, in}$. In the following, we denote by $\eta_0$ the moment when $\epsilon_{\rm H}\equiv -\dot{H}/H^2=1$ holds for the first time. Our main results are summarized in the figures of this section. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{dDMxi50.pdf} \caption{The two-point function $\delta\Delta_{\mathrm{F}}$ (left panel) and the effective mass function \(M^2_{\mathrm{eff}}\) (right panel). The results are shown for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$, $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma = 0$.} \label{fig:variance1} \end{figure} \paragraph{Case I: {$\bm{\bar{\xi}_{\mathrm{R}} = 50, \Gamma = 0}$.}} We will first discuss a case with a non-minimal coupling $\bar{\xi}_{\mathrm{R}} = 50$ and a non-interacting inflaton, $\Gamma = 0$, where the results can be directly compared with those obtained in~\cite{Fairbairn:2018bsw}.~The left panel in figure~\cref{fig:variance1} shows the time evolution of the fluctuation part of the contact limit of the comoving two-point function $\langle\sigma_{\rm R}^2\rangle$, $\delta \Delta_{\mathrm F} \equiv \Delta_{\rm F} - \Delta_{\mathrm{F}0}$. The right panel shows the effective mass function $M_{\rm eff}^2$ given by equation \cref{eq:effective-mass-full_wigner}. In both panels the self-coupling is given the values $\lambda_{\rm R} = 10^{-7}$ (blue lines), $10^{-4}$ (red lines) and $10^{-1}$ (orange lines). 
There are three components of different origin contributing to the effective mass function $M_{\rm eff}^2$: \begin{subequations} \label{eq:Meff_components} \begin{align} M^2_R &\equiv -a^2\bigl(\bar{\xi}_{\mathrm{R}}-\sfrac{1}{6}\bigr)R & {\rm (curvature)},\\[.8em] M^2_{\Delta} &\equiv 3\lambda_{\mathrm{R}}\delta\Delta_{\rm F} = 3\lambda_{\mathrm{R}}{ \int_{\bm k}} \delta \rho_{0 \bm k} & {\rm (fluctuations)}, \\ M^2_{\sigma} &\equiv M^2_{\rm eff}- M^2_R - M^2_{\Delta} & {\rm (field \; and \; background) }. \end{align} \end{subequations} The evolution and magnitudes of these components are displayed in figure~\cref{fig:meffcomponents}. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{Mcompxi50.pdf} \caption{The effective mass function $M^2_{\mathrm{eff}}$ (blue) and its component functions $M^2_{R}$ (red), $M^2_{\Delta}$ (violet) and $M^2_{\sigma}$ (yellow), defined in equations~\cref{eq:Meff_components}, for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$, $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma = 0$.} \label{fig:meffcomponents} \end{figure} For all three values of $\lambda_{\rm R}$ shown in the figures, the field-dependent mass term $M_\sigma^2$ is very small compared to the curvature and fluctuation corrections. In all cases the initial evolution is characterized by a rapid growth of the fluctuation contribution to the two-point function $\delta \Delta_{\mathrm F}$, which is driven by periodic tachyonic instabilities that occur when $M_{\rm eff}^2 < 0 $. The growing two-point function gives a positive definite contribution to the fluctuation part $M^2_\Delta$ in the effective mass function, which is known to eventually terminate the strong tachyonic growth~\cite{Dufaux:2006ee}. As seen in figure~\cref{fig:meffcomponents}, for $\lambda_{\rm R} = 10^{-7}$ the growth of $\delta\Delta_{\rm F}$ stops while the effective mass is still dominated by the curvature term, $\langle M^2_\Delta+M^2_\sigma \rangle_{\rm osc} \!\approx \! \langle M^2_\Delta \rangle_{\rm osc} \!\ll\! \langle M^2_{R} \rangle_{\rm osc}$, where the brackets $\langle \dots \rangle_{\rm osc}$ denote averaging over an oscillation cycle of the mean field $\sigma_{\rm R}$. The tachyonic growth ends because the windows with $M_{\rm eff}^2 < 0$ become too narrow to generate coherent net particle production. This effect is controlled by the evolution of $R$, whose oscillation period is a constant in physical time, proportional to the inverse inflaton mass $\smash{m_{\phi}^{-1}}$, but whose magnitude decreases rapidly, $R\propto a^{-3}$. The time available for tachyonic evolution per oscillation period then shrinks, while the time spent in ordinary oscillatory evolution between the pulses grows, mixing growing and decaying modes. Eventually the tachyonic pulses lose all coherence and no net growth is registered. As a result, our final value of $\delta \Delta_{\rm F}$ is about an order of magnitude smaller than in~\cite{Fairbairn:2018bsw}\footnote{Note that our results are expressed in terms of the comoving field $\sigma = a \chi$ while~\cite{Fairbairn:2018bsw} uses the physical field $\chi$. 
We have normalized the scale factor to $a_0 =12.6$.}, where the tachyonic growth was observed to continue up to $\langle M^2_\Delta \rangle_{\rm osc} \sim \langle M^2_{R} \rangle_{\rm osc}$.~This effect is spurious, however: it follows from the use in~\cite{Fairbairn:2018bsw} of the adiabatic expansion in the regions between the tachyonic windows where the adiabaticity condition $|\dot{\omega}/\omega^2|\ll 1$ for the mode function frequencies no longer holds. The case with larger couplings $\lambda_{\rm R} = 10^{-4}$ and $10^{-1}$ is markedly different. Here the (mostly) tachyonic growth {\em does} continue until $\langle M^2_\Delta \rangle_{\rm osc} \sim \langle M^2_{R} \rangle_{\rm osc}$, after which $\delta \Delta_{\rm F}$ starts to backreact into the dynamics of the system. The evolution of $R$ is exactly the same as in the previous case but the larger coupling $\lambda_{\rm R}$ makes $\langle M^2_\Delta\rangle_{\rm osc}$ bigger, and the backreaction limit $\langle M^2_\Delta \rangle_{\rm osc} \sim \langle M^2_{R} \rangle_{\rm osc}$ is reached before the tachyonic windows become too narrow to support coherent particle production. After the tachyonic growth stops, the strongly non-linear system still undergoes a transient period of resonant particle production driven by the two-point function $\delta \Delta_{\rm F}$ itself, during which $M_{\rm eff}^2$ remains positive. The resonant nature of the particle production can be seen in figure~\cref{fig:rho0k}, which will be discussed further below. At the onset of the resonance, $M_{\rm eff}^2$ receives roughly equal contributions from the fluctuation term $M^2_\Delta = 3\lambda_{\rm R} \delta \Delta_{\rm F}$ and from the curvature term $M^2_{R} =-a^2\bigl(\bar{\xi}_{\mathrm{R}}-\sfrac{1}{6}\bigr) R$, but as the latter redshifts as $a^{-1}$, it eventually becomes smaller than the fluctuation term. The resonance turns off after the effective mass becomes fully dominated by $M^2_\Delta$, and $\delta \Delta_{\rm F}$ on average settles to a constant value. For $\lambda_{\rm R} = 10^{-4}$ and $10^{-1}$, we find that $\delta \Delta_{\rm F}$ at the end of the tachyonic stage agrees relatively well with the adiabatic expansion results of~\cite{Fairbairn:2018bsw}. However, the subsequent strongly non-linear resonant stage is not at all captured in the treatment of~\cite{Fairbairn:2018bsw} and, as seen in figures~\cref{fig:variance1} and~\cref{fig:meffcomponents}, this stage gives the dominant contribution to $\delta \Delta_{\rm F}$ for $\lambda_{\rm R} = 10^{-4}$ and $10^{-1}$. \begin{figure}[t] \centering \includegraphics[trim={3.4cm 0 3.4cm 0},clip,width=0.95\linewidth]{drho0xi50log.pdf} \caption{The zeroth moment $\delta\rho_{0 \bm k}$ of the two-point function for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$ with $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma = 0$.} \label{fig:rho0k} \end{figure} The momentum space structure of $\delta\rho_{0\bm k}$ is shown in figure~\cref{fig:rho0k}.~For all three coupling values $\lambda_{\rm R} \in \{10^{-7},10^{-4},10^{-1}\}$, the leftmost continuous vertical structures, extending from $|{\bm k}|=0$ to a finite cutoff set by the effective mass (and of the order of the Hubble scale), are states populated by the tachyonic instability. For $\lambda_{\mathrm{R}} = 10^{-7}$ the ultraviolet region develops, around $a/a_0\simeq 3$, discrete bands which reach to higher ${|\bm k|}$-modes than the initial structures, while the evolution is still dominated by $M^2_{R}$ (see figure~\cref{fig:meffcomponents}). 
These bands appear to signal resonant particle production sourced by the $\xi R\chi^2$-term, which can coexist with the tachyonic production~\cite{Dufaux:2006ee,Bassett:1997az,Cembranos:2019qlm}.~We note that $\delta\rho_{0{\bm k}}$ continues to be strongly dominated by the lowest band but its peak shifts from $|{\bm k}|\approx 0$ towards the middle of the band. For $\lambda_{\mathrm{R}} = 10^{-4}$ and $10^{-1}$ the momentum space evolution looks quantitatively similar to the above until the moment when the effective mass gets dominated by the two-point function, $\langle M^2_\Delta \rangle_{\rm osc} > \langle M^2_{R} \rangle_{\rm osc}$, and $\delta \Delta_{\rm F}$ starts to grow rapidly (see figures~\cref{fig:meffcomponents} and~\cref{fig:variance1}). At this point, pronounced band structures emerge in figure~\cref{fig:rho0k}, which we interpret to signal the onset of resonant particle production driven by $\delta \Delta_{\rm F}$ itself. The resonance bands carry significant power and extend considerably above the ${|\bm k|}$-region populated during the $M^2_{R}$-dominated stage. Furthermore, it can be seen that the moment at which the resonant growth effectively stops in figure~\cref{fig:variance1} corresponds to a further splitting and narrowing down of the resonance bands in figure~\cref{fig:rho0k}. After this band splitting the resonant particle production loses efficiency and the average value of $\delta\Delta_{\rm F}$ becomes essentially a constant. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{nfcxi50.pdf} \caption{The integrated comoving particle number density $n$ (left panel) and the integrated absolute value of the coherence functions $f^c$ (right panel) for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$ with $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma = 0$.} \label{fig:nandfc} \end{figure} The evolution of the comoving particle number density $n$ and the coherence function $f^{c}$ are shown in figure~\cref{fig:nandfc}. For $\lambda_{\mathrm{R}} = 10^{-7}$ both $n$ and $f^{c}$ settle to constant values after the end of the tachyonic growth. Comparing with \cite{Fairbairn:2018bsw}, we find an order of magnitude smaller final number density for $\lambda_{\rm R} = 10^{-7}$, the reason being the same as for the difference in $\delta\Delta_{\rm F}$ discussed above. On the other hand, for $\lambda_{\mathrm{R}} \in \{10^{-4}, 10^{-1}\}$ the tachyonic stage is followed by a transient resonance, during which $n$ and $f^{c}$ grow further, and the resonant contribution actually dominates their final values. In these cases our results for the net particle number density exceed the corresponding results of~\cite{Fairbairn:2018bsw} by an order of magnitude. Note that the particle production is necessarily associated with a growing coherence function~\cite{Fidler:2011yq}. The fact that coherence remains constant after particle production ends shows that the final state is highly squeezed. This is a special feature of our non-interacting system. In an interacting system the coherence function would eventually tend to zero, reducing the quantum system to a non-coherent statistical state, even if the interactions conserved the particle number. Such behaviour was indeed observed and studied in detail in a toy model in~\cite{Kainulainen:2021eki}. \begin{figure}[h!] 
\centering \includegraphics[trim={2.5cm 0 4cm 0},clip,width=0.95\linewidth]{nkxi50log.pdf} \caption{A contour plot of the comoving particle number density $n_{\bm k}$ for $\lambda_{\mathrm{R}} = 10^{-4}$ (left panel), and the final comoving particle number density $n_{\bm k}(\eta_{\mathrm{end}})$ as a function of momentum for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$ (right panel). Both plots have $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma = 0$.} \label{fig:nfinal} \end{figure} In figure~\cref{fig:nfinal} we show the comoving particle number density per momentum $n_{\bm k}$. The right panel shows the spectrum $n_{\bm k}$ at the final time of our numerical simulation for all couplings considered: $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$. The left panel shows the full time evolution of $n_{\bm k}$ for the coupling $\lambda_{\mathrm{R}} = 10^{-4}$. Apart from the oscillatory features, the structure of $n_{\bm k}$ is qualitatively in agreement with the results of~\cite{Fairbairn:2018bsw}, which, we recall, are obtained using a semianalytical adiabatic expansion approximation for the tachyonic particle production~\cite{Dufaux:2006ee} and neglecting all resonant particle production (see also~\cite{Cembranos:2019qlm} for an analysis of resonant production through the $\xi R \chi^2$-term in the absence of self-couplings). The oscillatory features in $n_{\bm k}$ seen in our results arise from the transient resonance after the first tachyonic stage. As seen in the left panel of figure~\cref{fig:nfinal}, $n_{\bm k}$ displays strong peaks coinciding with the onset of the resonance, located at the resonance bands and with the peak heights varying from band to band. Interestingly, the peaks begin to flatten out while the resonance is still ongoing. This effect is caused by non-linear processes mediated by the self-coupling which, combined with the redshifting, can efficiently redistribute the momenta. \begin{figure} \centering \includegraphics[width=0.90\linewidth]{dDMxi50ga005.pdf} \caption{The two-point function $\delta\Delta_{\mathrm{F}}$ (left panel) and the effective mass function \(M^2_{\mathrm{eff}}\) (right panel). The results are shown for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$, $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma \simeq 0.1H_0$.} \label{fig:varianceII} \end{figure} \paragraph{Case II: {$\bm{\bar{\xi}_{\mathrm{R}} = 50, \Gamma \simeq 0.1 H_0}$.}} For comparison, we also present results for the case with $\bar{\xi}_{\mathrm{R}} = 50$ and a non-zero inflaton decay rate $\Gamma \simeq 0.1 H(\eta_0)\equiv 0.1 H_0$. As explained in section~\cref{sec:model}, the inflaton decays into radiation, as a result of which the universe evolves from effective matter domination to radiation domination, where $R = 0$. The evolution of $\delta\Delta_{\rm F}$ and $M_{\rm eff}^2$, and the components of $M_{\rm eff}^2$ defined in equations~\cref{eq:Meff_components}, are shown in figures~\cref{fig:varianceII} and \cref{fig:meffcomponentsII} for this case. As is seen in figure~\cref{fig:meffcomponentsII}, the initial scaling $\langle R\rangle_{\rm osc} \propto a^{-3}$ is now followed by an exponential decay of $\langle R\rangle_{\rm osc}$ once the inflaton decay becomes efficient. This decreases the efficiency of tachyonic particle production compared to case I. 
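The origin of this behaviour is easy to reproduce at the level of the background equations alone. The following schematic sketch (in units $m_\phi = M_{\rm P} = 1$ and with a toy decay rate of a magnitude comparable to the one used here; it is not the parameter point of the figures) integrates equations~\cref{eq:inflaton_equations} and prints the Ricci scalar~\cref{eq:Ricci}, whose oscillation envelope first decreases as $a^{-3}$ and is then driven to zero exponentially once the decay becomes efficient:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Gamma = 0.02                       # toy decay rate in units of m_phi

def rhs(t, y):
    phi, dphi, rho_rad, a = y
    H = np.sqrt(dphi**2 + phi**2 + 2.0*rho_rad) / np.sqrt(6.0)
    return [dphi,
            -(3.0*H + Gamma)*dphi - phi,     # inflaton equation of motion
            -4.0*H*rho_rad + Gamma*dphi**2,  # radiation sourced by the decay
            a*H]                             # da/dt = a H

y0  = [15.0, -np.sqrt(6.0)/3.0, 0.0, 1.0]    # slow-roll start, no radiation
sol = solve_ivp(rhs, (0.0, 400.0), y0, rtol=1e-8, atol=1e-10,
                dense_output=True)

for t in (50.0, 150.0, 300.0):
    phi, dphi, rho_rad, a = sol.sol(t)
    R = dphi**2 - 2.0*phi**2                 # Ricci scalar for M_P = 1
    print(f"t = {t:6.1f}  R = {R:+.3e}  rho_rad = {rho_rad:.3e}")
\end{verbatim}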
\begin{figure} \centering \includegraphics[width=0.90\linewidth]{Mcompxi50ga005.pdf} \caption{The effective mass function $M^2_{\mathrm{eff}}$ (blue) and its component functions $M^2_{R}$ (red), $M^2_{\Delta}$ (violet) and $M^2_{\sigma}$ (yellow), defined in equations~\cref{eq:Meff_components}, for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$ in the case $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma \simeq 0.1H_0$.} \label{fig:meffcomponentsII} \end{figure} The evolution of $\delta\Delta_{\mathrm F}$ seen in figure~\cref{fig:varianceII} is now almost identical for the couplings $\lambda_{\rm R} = 10^{-7}$ and $10^{-4}$. This is due to the fast decrease of $R$ resulting from the inflaton decay, which ends the tachyonic growth before the two-point function starts to backreact into the dynamics also for $\lambda_{\mathrm R} = 10^{-4}$. Figure~\cref{fig:meffcomponentsII} confirms this: in both cases $\delta\Delta_{\rm F}$ stops growing before the two-point function backreacts into the dynamics. The evolution of $\delta\Delta_{\rm F}$ for $\lambda_{\rm R} = 10^{-7}$ is qualitatively similar to case I, but the final value of $\delta\Delta_{\rm F}$ is about two orders of magnitude smaller. For $\lambda_{\rm R} = 10^{-4}$, the evolution of $\delta\Delta_{\rm F}$ substantially differs from case I as the resonant stage that dominated the final value of $\delta\Delta_{\rm F}$ in case I is absent in case II. For the largest coupling $\lambda_{\mathrm{R}}= 10^{-1}$ the difference compared to case I is smallest, as the tachyonic growth in this case still terminates via the backreaction when $\langle M^2_\Delta \rangle_{\rm osc} \sim \langle M^2_{R}\rangle_{\rm osc}$, and this happens before the exponential decrease of $R$ sets in. In this case, the tachyonic stage is followed by resonant amplification of $\delta \Delta_{\rm F}$ driven by $\delta \Delta_{\rm F}$ itself, but the resonance is somewhat less efficient than in case I, leading to a factor of two smaller final value for $\delta \Delta_{\rm F}$. Finally, the momentum structure of $\delta\rho_{0\bm k}$ is shown in figure~\cref{fig:rho0kII}.~For $\lambda_{\mathrm{R}}= 10^{-7}$ the result looks qualitatively similar to case I but the band structures generated during the $M^2_{R}$-dominated epoch are more pronounced in case II. In particular, in case II the tachyonic region splits into two discrete bands at $a/a_0 \simeq 3$. The results for $\lambda_{\mathrm{R}}= 10^{-4}$ look almost identical to those for $\lambda_{\mathrm{R}}= 10^{-7}$, and the $\delta \Delta_{\rm F}$-driven resonance that dominated the final $\delta\rho_{0\bm k}$ in case I is now completely absent. For $\lambda_{\mathrm{R}}= 10^{-1}$ the structure looks qualitatively similar to case I but it can be seen that the $\delta \Delta_{\rm F}$-driven resonance is less efficient and does not extend to as high momenta as in case I. All in all, the results of cases I and II manifest the presence of complicated non-linear dynamics after the initial tachyonic particle production, which can substantially affect the final value of $\delta\Delta_{\rm F}$. In particular, our results indicate that when the two-point function grows large enough to backreact into the dynamics, the tachyonic instability is followed by resonant particle production driven by the two-point function itself. In all cases studied here, we find that if the resonance takes place it also gives a dominant contribution to the final value of $\delta\Delta_{\rm F}$. 
However, the amount by which $\delta\Delta_{\rm F}$ grows during the resonance after the tachyonic stage appears to depend quite sensitively on the non-linear evolution of the two-point function coupled to $R$. \begin{figure}[t] \centering \includegraphics[trim={3.1cm 0 3.1cm 0},clip,width=1\linewidth]{drho0xi50ga005log.pdf} \caption{The zeroth moment $\delta\rho_{0 \bm k}$ of the two-point function for $\lambda_{\mathrm{R}} \in \{10^{-7}, 10^{-4}, 10^{-1}\}$ with $\bar\xi_{\mathrm{R}} = 50$ and $\Gamma \simeq 0.1H_0$.} \label{fig:rho0kII} \end{figure} \section{Conclusions} \label{sec:conclusions} We have studied particle production at the end of inflation with a non-minimally coupled spectator scalar field that contributes to dark matter. We first introduced consistently renormalized coupled equations for the one- and two-point functions of the spectator field in the Hartree approximation using 2PI-methods. These equations correctly account for the backreaction of the out-of-equilibrium quantum modes created by the spinodal instability triggered by the oscillating Ricci scalar as well as for the subsequent parametric resonances. This model was studied earlier in~\cite{Fairbairn:2018bsw} with an adiabatic treatment of the spinodal effects. Our results show that the interplay between the backreacting two-point function and the oscillating curvature sector leads to highly non-trivial dynamics which can have a significant effect on the net particle number density. We solved numerically the coupled equations for the one- and two-point functions of the spectator field (the latter expressed as moment equations in the Wigner representation) together with the dynamical evolution of the inflaton sector for different values of the spectator field self-coupling $\lambda_{\mathrm{R}}$ and for the non-minimal coupling $\bar{\xi}_{\mathrm{R}} = 50$. We studied first the case of a non-interacting inflaton field and found that for a small coupling $\lambda_{\mathrm{R}} = 10^{-7}$ the generated particle number density is an order of magnitude smaller than that found in~\cite{Fairbairn:2018bsw}, whereas for $\lambda_{\mathrm{R}} = 10^{-4}$ and $10^{-1}$ it becomes an order of magnitude larger. For $\lambda_{\mathrm{R}} = 10^{-7}$ this is due to the tachyonic particle production shutting off already before the competing mass contributions from the curvature and the two-point function become comparable, while for the larger couplings the difference is due to efficient resonant particle production occurring after the tachyonic stage. In particular, the resonant production, which actually dominates the contribution to the particle number density for larger couplings, is completely absent in the adiabatic approach of~\cite{Fairbairn:2018bsw}. We also included a coupling between the inflaton and a radiation component to study the evolution under the transition from effective matter domination to radiation domination with $R = 0$. We found that the exponential decay of $R$ induced by the radiation coupling renders both the spinodal and the resonant particle production processes much less efficient compared to the case with a non-interacting inflaton. For the tachyonic processes this is easy to understand, as the oscillating curvature term, which is responsible for the tachyonic bursts in the particle number density, is rapidly driven to zero. 
Our results suggest the presence of an $R$-assisted resonance enhancement, where the resonant particle production driven by the two-point function is boosted by the decaying $\xi R \chi^2$-term after the tachyonic stage has come to an end. This is a highly non-linear phenomenon which, when present, appears to dominate the net particle production. It cannot be properly captured without a full treatment of the backreaction effects. The final momentum distribution of the dark relics generated by the non-perturbative processes is highly non-thermal. This could lead to characteristic and potentially observable imprints on structure formation, as pointed out in~\cite{Fairbairn:2018bsw}.~The evolution of the relic distribution after the epoch of reheating depends on dark sector interactions, possibly including new types not considered here. Although this would be an interesting problem in itself, we do not investigate it further here. It would obviously be interesting to extend our setup to the case of a spectator field coupled to other matter fields. This could be done rather easily by combining the current results with the quantum transport formalism for interacting fermions introduced in~\cite{Jukkala:2019slc}. Also, it would be interesting to extend our classical treatment of the inflaton to the quantum level. It would then be particularly interesting to study the gravitational wave production during the reheating stage in the resulting, more general computational framework. \section*{Acknowledgements} \label{sec:ack} This work was supported by the Academy of Finland grant 318319 and by computer capacity from the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533). OK was in addition supported by a grant from the Magnus Ehrnrooth Foundation. We wish to thank Anna Tokareva for many useful discussions and comments on the manuscript.
{ "timestamp": "2022-09-23T02:13:19", "yymm": "2209", "arxiv_id": "2209.10945", "language": "en", "url": "https://arxiv.org/abs/2209.10945" }
\section{Introduction}\label{sect:intro} Let $n$ be an integer. We consider Emma Lehmer's quintic polynomial $f_n(X)\in{\mathbb Z}[X]$ defined by \begin{align}\label{eq:f_n} f_n(X) := & X^5+n^2X^4-(2n^3+6n^2+10n+10)X^3 \notag \\ & +(n^4+5n^3+11n^2+15n+5)X^2+ (n^3+4n^2+10n+10)X+1, \end{align} (see \cite{Leh}). The polynomial $f_n(X)$ is irreducible for any $n\in{\mathbb Z}$, and $K_n:={\mathbb Q}(\rho_n)$ is a cyclic quintic field, where $\rho_n$ is a root of $f_n(X)$ (see \cite{SW}). Put $\delta_n := n^3+5n^2+10n+7$ and $\Delta_n:=n^4+5n^3+15n^2+25n+25$. We have $(n^3+5n^2+10n+18)\delta_n -(n^2+5n+5)\Delta_n=1$ and hence $(\delta_n, \Delta_n)=1$. For a primitive fifth root of unity $\zeta=\zeta_5$, we have $\Delta_n =N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (n+2+2\zeta^4+\zeta^2) (>0)$, where $N_{\mathbb Q(\zeta)/\mathbb Q}$ is the norm map from $\mathbb Q (\zeta)$ to $\mathbb Q$. Let $v_p(x)$ be the $p$-adic valuation of $x \in {\mathbb Q}$ for a prime number $p$. The discriminant of $f_n(X)$ is $d(f_n)=\delta_n^2 \Delta_n^4$, and the discriminant $D_{K_n}$ of $K_n$ has been determined by Jeannin \cite{Je} as follows: \begin{equation}\label{eq:disc} D_{K_n}=\mathfrak{f}_{K_n}^4, \end{equation} where $\mathfrak{f}_{K_n}$ is the conductor of $K_n$ given by \begin{equation}\label{eq:cond} \mathfrak{f}_{K_n}=5^c \!\!\!\!\!\!\!\!\! \prod_{\substack{ p\mid \Delta_n,\, p\ne 5 \\ v_p(\Delta_n )\not\equiv 0 \!\!\!\! \pmod{5} }} \!\!\!\!\!\!\!\!\! p,\quad c= \begin{cases} 0 & (\text{if}\ 5\nmid n), \\ 2 & (\text{if}\ 5\mid n). \end{cases} \end{equation} Jeannin also proved that any prime number $p$ dividing $\Delta_n$ satisfies $p=5$ or $p\equiv 1 \pmod{5}$. In particular, $K_n/\mathbb{Q}$ is tamely ramified if and only if $5\nmid n$. Note that it is known that $D_L=\mathfrak{f}_L^{p-1}$ for a cyclic number field $L$ of prime degree $p$. Let $K/{\mathbb Q}$ be a finite Galois extension with the Galois group G. We denote the ring of integers of $K$ by $\mathcal O_K$. If the set $\{\sigma(\alpha)\mid\sigma\in G\}$ is a basis of $\mathcal O_K$ over ${\mathbb Z}$, we call the basis {\it a normal integral basis} (NIB), and $\alpha$ its generator. If $K/\mathbb{Q}$ has an NIB, then the extension is tamely ramified. For abelian number fields, we have the following (\cite[Chap.~9, Theorem 3.4]{L}). \begin{thm}[Hilbert-Speiser] Let $K$ be an abelian number field. The following three conditions are equivalent. \begin{itemize} \item[(i)] $K/{\mathbb Q}$ is tamely ramified. \item[(ii)] The conductor of $K$ is square-free. \item[(iii)] $K/{\mathbb Q}$ has a normal integral basis. \end{itemize} \end{thm} Let $K$ be an abelian number field with the conductor $\mathfrak f$. We call the conjugates of $\mathrm{Tr}_{\mathbb Q(\zeta_{\mathfrak f})/K} (\zeta_{\mathfrak f})$ {\it Gaussian periods}, where $\mathrm{Tr}_{\mathbb Q(\zeta_{\mathfrak f})/K}$ is the trace map from $\mathbb Q(\zeta_{\mathfrak f})$ to $K$. The Gaussian period is a generator of an NIB of $K$ if $K/\mathbb Q$ is tamely ramified (\cite[Proposition 4.31]{N}). We describe the NIBs of $K_n$ using only the roots of the polynomial $f_n(X)$, and do not use a root of unity $\zeta_{{\mathfrak f}_{K_n}}$, which depends on the conductor of $K_n$. From the Hilbert-Speiser theorem, we know that $K_n$ has an NIB if and only if $5 \nmid n$. If $\Delta_n=p$ is a prime number, then $\mathfrak f_{K_n}=p$ and $K_n/{\mathbb Q}$ is tamely ramified. 
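All quantities entering the conductor formula~(\ref{eq:cond}) are easy to tabulate. The following short sympy sketch (ours, for illustration only) computes $\Delta_n$, its prime factorization and $\mathfrak{f}_{K_n}$ for small $n$, checking along the way that every prime divisor of $\Delta_n$ is $5$ or congruent to $1$ modulo $5$:
\begin{verbatim}
from sympy import Integer, factorint

def Delta(n):                  # Delta_n = n^4 + 5n^3 + 15n^2 + 25n + 25
    n = Integer(n)
    return n**4 + 5*n**3 + 15*n**2 + 25*n + 25

def conductor(n):              # Jeannin's conductor formula quoted above
    c = 0 if n % 5 else 2
    f = Integer(5)**c
    for p, e in factorint(Delta(n)).items():
        if p != 5 and e % 5 != 0:
            f *= p
    return f

for n in range(1, 16):
    fac = factorint(Delta(n))
    assert all(p == 5 or p % 5 == 1 for p in fac)
    print(n, fac, conductor(n))
\end{verbatim}
For instance, $n=14$ gives $\Delta_{14}=11\cdot 71^2$ and $\mathfrak{f}_{K_{14}}=11\cdot 71=781$.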
Lehmer \cite{Leh} showed that $\left( \frac{n}{5} \right) \left(\rho +\left( n^2-\left( \frac{n}{5} \right)\right)/5 \right)$ is a Gaussian period in $K_n$. It follows that $ \rho +\left( n^2-\left( \frac{n}{5} \right)\right)/5 $ is a generator of an NIB. Spearman and Williams \cite{SpWi} proved the following. \begin{theo}[Spearman-Williams] \label{theo:SW} Assume that $5\nmid n$. Then $K_n$ has a generator of a normal integral basis of the form $v+w\rho_n \ (v,w\in {\mathbb Z})$ if and only if $\Delta_n$ is square-free. Furthermore, in this case, the integers $v$ and $w$ are given by $w=\pm1$ and $ v=w\left( n^2-\left(\frac{n}{5} \right)\right)/5$, where $\left(\frac{n}{5} \right)$ is the Legendre symbol. \end{theo} We write $\Delta_n= ab^2c^3d^4e^5$ with $a,b,c,d,e \in {\mathbb Z}_{>0}$, where $a,b,c,d$ are square-free and pairwise coprime, and Gal$(K_n/{\mathbb Q}) =\langle \sigma \rangle$, where the generator $\sigma$ satisfies $ \sigma (\rho_n) =(n+2+n\rho_n -\rho_n^2)/(1+(n+2)\rho_n)$ (\cite[(3.2)]{SW}). For $i\in {\mathbb Z}/5{\mathbb Z}$, we put $\rho_n^{(i)} :=\sigma^i (\rho_n)$. In this paper, we prove that for some integers $\beta_0, \beta_1, \beta_2, \beta_3$ which satisfy $b^2c^4d^6e^6 =N_{{\mathbb Q} (\zeta_5)/{\mathbb Q}} (\beta_0 +\beta_1 \zeta_5 +\beta_2 \zeta_5^2+\beta_3 \zeta_5^3)$, \begin{align*} & \frac{1}{bc^2d^3e^4} (\beta_0 \rho_n+\beta_1 \rho_n^{(1)} +\beta_2 \rho_n^{(2)} +\beta_3 \rho_n^{(3)} -m) \end{align*} is a generator of an NIB of $K_n$, where \[ m:=\frac{1}{5} \left( \left( \frac{n}{5} \right) bc^2d^3e^4-n^2(\beta_0+\beta_1+\beta_2+\beta_3) \right) \] and the integers $\beta_0, \beta_1, \beta_2, \beta_3$ are given explicitly (Theorem~\ref{theo:main}). For example, when $\Delta_n$ is square-free, these integers are given by $\beta_0=1,\, \beta_1=\beta_2=\beta_3=0$ (Corollary~\ref{cor:NIB-squarefree}), and using $b=c=d=e=1$, we obtain Spearman and Williams' result (Theorem~\ref{theo:SW}). The smallest positive integer $n$ such that $\Delta_n$ is not square-free is $n=14$; since $\Delta_{14}=11\cdot 71^2$, we have $a=11, b=71, c=d=e=1$ in this case. The integers are given by $\beta_0=6,\, \beta_1=7,\, \beta_2=8,\, \beta_3=10$ and $m=-1201$; it follows that $(6\rho_n +7\rho_n^{(1)}+8\rho_n^{(2)} +10\rho_n^{(3)}+1201)/71$ is a generator of an NIB of $K_n$ for $n=14$ (Example~\ref{ex:1to1000}). For the proof of the main theorem (Theorem~\ref{theo:main}), we use the method of constructing an NIB from a normal basis and an integral basis, proposed by Acciaro and Fieker in \cite{AF}. Therefore, we first obtain an integral basis for tamely ramified extensions $K_n/{\mathbb Q}$ (Theorem~\ref{theo:IB}). When $\Delta_n$ is not divisible by the square of any prime other than $5$, Ga\'al and Pohst \cite[Lemma~2]{GP} proved that $K_n$ has an integral basis of the form $\{ 1, \rho, \rho^2, \rho^3, * \}$. When $\Delta_n$ is cube-free, Eloff, Spearman and Williams \cite[Theorem]{ESW1} proved that $K_n$ has an integral basis of the form $\{ 1, \rho, \rho^2, *, * \}$. We will give an integral basis for all $n$ with $5\nmid n$. Furthermore, by letting the elements of ${\mathbb Z} [\mathrm{Gal}(K_n/{\mathbb Q})]^{\times}$ act on the NIB obtained in Theorem~\ref{theo:main}, we give all the NIBs of $K_n$ (Theorem~\ref{theo:all-NIB}). This is a generalization of the case $ n=-1$ by Davis, Eloff, Spearman and Williams \cite{DES,ESW2}. Let $K/\mathbb{Q}$ be a finite Galois extension with the Galois group $G$, and $\mathcal{O}_K$ be the ring of integers of $K$. 
The action of the group ring $K[G]$ on $K$ is given by \[ x.a:=\sum_{\sigma\in G}c_{\sigma}\sigma(a)\ \ (x=\sum_{\sigma\in G}c_\sigma\sigma\in K[G],\ a\in K), \] and $K$ (resp. $\mathcal{O}_K$) is a $K[G]$ (resp. $\mathcal{O}_K[G]$)-module with this action. If a finite Galois extension $K/{\mathbb Q}$ has an NIB, then the generators of NIBs are in one-to-one correspondence with the elements of the multiplicative group ${\mathbb Z}[G]^{\times}$. In fact, if $\alpha$ is a generator of an NIB of $K$, then the generators of NIBs are exactly the elements $u.\alpha$ with $u\in {\mathbb Z}[G]^{\times}$ (\cite[Lemma~2.3]{HA}). For the cyclic quintic field $K_n$ and $G=\mathrm{Gal}(K_n/{\mathbb Q})=\langle \sigma \rangle$, we have $1-\sigma^2-\sigma^3 \in \mathbb Z[G]^{\times} ,\ (1-\sigma^2-\sigma^3)^{-1}=1-\sigma-\sigma^4,\ \langle 1-\sigma^2-\sigma^3 \rangle \simeq {\mathbb Z},\, $ and \begin{equation}\label{eq:ZG} {\mathbb Z} [G]^{\times} =\langle \pm 1 \rangle \times G \times \langle 1-\sigma^2-\sigma^3 \rangle = \{\pm \sigma^{\ell} (1-\sigma^2-\sigma^3)^k \ |\ \ell \in {\mathbb Z}/5 {\mathbb Z}, k\in {\mathbb Z} \}, \end{equation} (\cite[p.~2934]{AB}). \begin{rem}\label{re:Shanks} In the previous paper \cite{HA}, we obtained all NIBs of the cyclic cubic extension ${\mathbb Q} (\rho_n)/{\mathbb Q}$ for a root $\rho_n$ of Shanks' cubic polynomial $X^3-nX^2-(n+3)X-1 \, (n\in {\mathbb Z})$ (namely, the simplest cubic field). Put $n^2+3n+9= de^2c^3$ with $d,e,c \in {\mathbb Z}_{>0}$, where $d,e$ are square-free and $(d,e)=1$. Let $\sigma$ be the generator of Gal$({\mathbb Q} (\rho_n)/{\mathbb Q})$ satisfying $\sigma(\rho_n)=-1/(1+\rho_n)$. Let $\rho_n':=\sigma (\rho_n)$ and $\zeta_3$ be a primitive cubic root of unity. Then, we proved that for some integers $a_0, a_1$ that satisfy $ec=N_{{\mathbb Q} (\zeta_3)/{\mathbb Q}} (a_0+a_1 \zeta_3) =a_0^2-a_0a_1+a_1^2$, \[ \alpha :=\frac{1}{ec^2} (a_0 \rho_n+a_1\rho_n'+m) \] is a generator of an NIB of ${\mathbb Q} (\rho_n)$, where \[ m:=\frac{ (\varepsilon ec^2 -n (a_0+a_1))}{3}, \quad \varepsilon (=\pm 1) := \begin{cases} \left( \frac{n(a_0+a_1)}{3} \right), & \text{if $3\nmid n$}, \\ \left( \frac{a_0}{3} \right), & \text{if $n\equiv 12 \pmod{27}$}. \end{cases} \] Furthermore, all generators of NIBs are given by $\{ \pm \sigma^{\ell}. \alpha \ |\ \ell \in {\mathbb Z}/3{\mathbb Z} \}$. We can choose integers $a_0,a_1$ such that $ec=a_0^2-a_0a_1+a_1^2$ and $a_0+a_1 \zeta_3 \, | \, n+3(1+\zeta_3)$ in ${\mathbb Z} [\zeta_3]$, and find them as follows. First, we can write $n^2+3n+9=(n+3(1+\zeta_3))(n+3(1+\zeta_3^2))$ and $ec=3^j p_1 \cdots p_k$ with $j=\begin{cases} 0 , & \text{ if } 3\nmid n,\\ 1, & \text{ if } n\equiv 12 \pmod{27} \end{cases}$, where $p_1,\ldots,p_k$ are (not necessarily distinct) prime numbers with $p_1 \equiv \cdots \equiv p_k \equiv 1\pmod{3}$. Let $\pi_i$ and $\pi_i'$ be prime elements of $\mathbb Z [\zeta_3 ]$ with $p_i=\pi_i \pi_i'$ and $\pi_i \, |\, (n+3(1+\zeta_3)),\ \pi_i' \, |\, (n+3(1+\zeta_3^2))$ in $\mathbb Z [\zeta_3]$. Then the integers $a_0$ and $a_1$ are given by $(1-\zeta_3)^j \pi_1 \cdots \pi_k=a_0+a_1 \zeta_3$ (note that if there exists a prime ideal containing both $n+3(1+\zeta_3)$ and $n+3(1+\zeta_3^2)$, then it is the prime ideal $(1-\zeta_3)$ above $3$). \end{rem} \section{Integral bases}\label{sec:IB} In this section, we consider an integral basis of $K_n$. Let $\rho =\rho_n$ be a root of the quintic polynomial $f_n(X)$ in (\ref{eq:f_n}).
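Throughout this section and the rest of the paper, the decomposition $\Delta_n=ab^2c^3d^4e^5$ recalled in the Introduction plays a central role. As a computational aside (our own sketch, not part of the original text; the helper \texttt{split\_Delta} is ours), it can be read off directly from the prime factorization of $\Delta_n$ by sorting exponents modulo $5$:
\begin{verbatim}
# Split Delta_n = a*b^2*c^3*d^4*e^5 with a, b, c, d square-free and
# pairwise coprime, by distributing each prime according to its
# exponent modulo 5 (the fifth-power part goes into e).
from sympy import factorint

def split_Delta(Delta_n):
    a = b = c = d = e = 1
    for p, m in factorint(Delta_n).items():
        e *= p ** (m // 5)
        r = m % 5
        if   r == 1: a *= p
        elif r == 2: b *= p
        elif r == 3: c *= p
        elif r == 4: d *= p
    return a, b, c, d, e

print(split_Delta(11 * 71**2))   # n = 14: (11, 71, 1, 1, 1)
print(split_Delta(61 * 41**3))   # n = 44: (61, 1, 41, 1, 1)
\end{verbatim}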
Ga\'al and Pohst \cite[Lemma~2]{GP} have shown that if $p^2$ does not divide $\Delta_n$ for any prime number $p$ different from $5$, then $K_n$ has an integral basis of the form $\{ 1, \rho, \rho^2, \rho^3, * \}$. Eloff, Spearman and Williams \cite[Theorem]{ESW1} have shown that if $\Delta_n$ is cube-free, then $K_n$ has an integral basis of the form $\{ 1, \rho, \rho^2, *, * \}$. In this section, we give an integral basis of $K_n$ in the case that $K_n/{\mathbb Q}$ is tamely ramified (that is, $5\nmid n$), which we use to give an NIB in \S~\ref{sec:NIB}. We assume that $5 \nmid n$, and hence, we have $5 \nmid \Delta_n$. We write $\Delta_n= ab^2c^3d^4e^5$ with $a,b,c,d,e \in {\mathbb Z}_{>0}$, where $a,b,c,d$ are square-free and pairwise coprime. From (\ref{eq:cond}), the conductor of $K_n$ is given by $\mathfrak f_{K_n} =abcd$. \begin{theo}\label{theo:IB} Assume that $5\nmid n$. Let $u$ and $t$ be integers satisfying $5u \equiv 1 \pmod{\Delta_n}$ and \begin{align} t\equiv & (n^2+3n+4)(11n^5+110n^4+440n^3+903n^2+940n+390)\Delta_n \notag \\ & -un^2(11n^3+55n^2+110n+199)\delta_n^2 \pmod{\Delta_n \delta_n^2}, \label{eq:t} \end{align} and define an algebraic integer $T$ by \begin{align} T :=& \dfrac{f_n (\rho)-f_n (t)}{\rho-t} \notag\\ =&(t^4+t^3 \rho +t^2 \rho^2 +t \rho^3+\rho^4) +n^2 (t^3 +t^2 \rho +t \rho^2+\rho^3) \notag \\ & -(2n^3+6n^2+10n+10)(t^2+t\rho +\rho^2) +(n^4+5n^3+11n^2+15n+5)(t+\rho) \label{eq:T} \\ & +(n^3+4n^2+10n+10). \notag \end{align} Put \[ \phi_1 := \frac{1}{e} (\rho -t), \quad \phi_2 := \frac{1}{cde^2} (\rho -t)^2, \quad \phi_3 :=\frac{1}{bcd^2e^3} (\rho -t)^3, \quad \phi_4 := \frac{1}{bc^2d^3e^4 \delta_n } T. \] Then $\{1,\phi_1,\phi_2,\phi_3,\phi_4\}$ is an integral basis of $K_n ={\mathbb Q} (\rho_n)$. \end{theo} We prove some lemmas for the proof of Theorem~\ref{theo:IB}. Fix an integer $n$ and put $\rho =\rho_n$. We can write Gal$(K_n/{\mathbb Q}) =\langle \sigma \rangle$ where the generator $\sigma$ satisfies \[ \sigma (\rho) =\frac{n+2+n\rho -\rho^2}{1+(n+2)\rho} \] (\cite[(3.2)]{SW}). For any $x\in K_n$ and $i\in {\mathbb Z}/5{\mathbb Z}$, we write $x^{(i)} =\sigma^i (x)$. In the rest of this paper, we use the explicit expression obtained by Spearman and Williams \cite[Proposition]{SpWi} for the conjugates of $\rho$.\footnote{ $\rho^{(i)}$ corresponds to $y_i$ in \cite[Proposition]{SpWi}. } By direct calculation, we obtain Lemmas~\ref{lem:powers}, \ref{lem:half-D} and \ref{lem:t-mod}. \begin{lem}\label{lem:powers} \begin{align*} \rho^2 =& -\frac{1}{n^2} \left\{ ( 4+4 n+3 n^2+2 n^3+n^4)\rho+ 2 (2+2 n+n^2) \rho^{(1)}+(2+n) (2+n+n^2) \rho^{(2)} \right. \\ & \left. +(2+n)^2 \rho^{(3)} +(2+n) (2+n+n^2) \rho^{(4)} \right\} ,\\ \rho^3 =& -\frac{1}{n^2} \left\{ -(3+3 n+n^2) (1+2 n+4 n^2+n^3+n^4) \rho -(1+n) (3+6 n+4 n^2+2 n^3) \rho^{(1)} \right. \\ & -(1+n)^2 (3+3 n+2 n^2+n^3) \rho^{(2)} -(1+n)^2 (3+3 n+n^2) \rho^{(3)} \\ & \left. - (3+9 n+13 n^2+9 n^3+4 n^4+n^5) \rho^{(4)} \right\}, \\ \rho^4 =& -\frac{1}{n^2} \left\{ (32+72 n+99 n^2+105 n^3+86 n^4 +50 n^5+21 n^6+6 n^7+n^8 )\rho \right. \\ &+ (32+72 n+83 n^2+60 n^3+30 n^4 +10 n^5+2 n^6 )\rho^{(1)} \\ &+ (32+72 n+94 n^2+80 n^3+47 n^4 +20 n^5+6 n^6+n^7 ) \rho^{(2)} \\ &+ (32+72 n+75 n^2+51 n^3+24 n^4 +7 n^5+n^6 ) \rho^{(3)} \\ &\left. + (32+72 n+93 n^2+80 n^3+49 n^4 +21 n^5+6 n^6+n^7 ) \rho^{(4)} \right\} . \end{align*} \end{lem} \begin{lem}\label{lem:half-D} \begin{enumerate} \item[(1)] $\displaystyle{ \prod_{i=0}^4 (\rho^{(i)} -\rho^{(i+1)} )=\delta_n \Delta_n}$.
\item[(2)] $\displaystyle{ \prod_{i=0}^4 (\rho^{(i)} -\rho^{(i+2)} )=-\Delta_n }$. \end{enumerate} \end{lem} \begin{lem}\label{lem:t-mod} Assume that $5\nmid n$. Let $u$ and $t$ be the integers in Theorem~\ref{theo:IB}. We have $t\equiv -un^2 \pmod{\Delta_n}$ and $t\equiv -(n^2+3n+4) \pmod{\delta_n^2}$. \end{lem} We put \begin{equation}\label{eq:abcd-prod} ab^2c^3d^4=\prod_{k=1}^{\ell }p_k^{m_k}, \quad \Delta_n =e^5 \prod_{k=1}^{\ell }p_k^{m_k} \end{equation} with $p_1,\ldots,p_{\ell}$ being distinct prime numbers and $m_k \in \{1,2,3,4 \}$. For each $ k$, let $\mathfrak p_k$ be a prime ideal of $K_n$ dividing $p_k$. Since $p_k$ is ramified in $K_n$, we have $p_k \mathcal O_{K_n} =(p_k)=\mathfrak p_k^5$. Let $v_{\mathfrak p}(x)$ be the $\mathfrak p$-adic valuation of $x \in K_n$ for a prime ideal $\mathfrak p$. \begin{lem}\label{lem:r-r} For any $i\in {\mathbb Z}/5{\mathbb Z}$, we have $(\rho^{(i)} -\rho^{(i+2)} )=e \prod_{k=1}^{\ell }\mathfrak p_k^{m_k}$, and hence $(\rho^{(i)}-\rho^{(i+2)})/(\rho^{(j)}-\rho^{(j+2)}) \in \mathcal O_{K_n}^{\times} $ for any $i,j\in {\mathbb Z}/5{\mathbb Z}$. \end{lem} \begin{proof} It follows that $v_{{\mathfrak p}_k}(\rho^{(0)}-\rho^{(2)})=v_{{\mathfrak p}_k}(\rho^{(1)}-\rho^{(3)})= v_{{\mathfrak p}_k}(\rho^{(2)}-\rho^{(4)})= v_{{\mathfrak p}_k}(\rho^{(3)}-\rho^{(0)})=v_{{\mathfrak p}_k}(\rho^{(4)}-\rho^{(1)}) $ for $k=1,\ldots,\ell$ because these primes are ramified in $K_n$. Let $p$ be a prime number dividing $e$ that is different from $p_1,\ldots, p_{\ell}$, and let $p^m \| e \ (m\in \mathbb Z_{>0})$. The minimal polynomial of $\rho-\rho^{(1)}$ is given by \[ G_n(X):=X^5-\Delta_n X^3+(n+1)\Delta_n X^2+(n^2+4n+5)\Delta_n X-\delta_n \Delta_n \] (see \cite[p.~78]{Je}). Let \begin{align*} g_n(X) & =p^{-5m} G_n (p^m X) \\ & =X^5-\Delta_n p^{-2m} X^3+(n+1)\Delta_n p^{-3m} X^2+ (n^2+4n+5)\Delta_n p^{-4m} X-\delta_n \Delta_n p^{-5m}. \end{align*} Then we have $g_n (X) \in \mathbb Z[X]$ and $g_n ((\rho-\rho^{(1)} )/p^m )=0$, and hence $(\rho-\rho^{(1)})/p^m \in \mathcal O_{K_n}$. By applying the elements of $\mathrm{Gal} (K_n/\mathbb Q)$, we obtain $(\rho^{(i)}-\rho^{(i+1)})/p^m \in \mathcal O_{K_n}$ for any $i\in \mathbb Z/5\mathbb Z$, and $(\rho^{(i)}-\rho^{(i+2)})/p^m =(\rho^{(i)}-\rho^{(i+1)} )/p^m +(\rho^{(i+1)} -\rho^{(i+2)})/p^m \in \mathcal O_{K_n}$. The claim follows from these facts, Lemma~\ref{lem:half-D} (2) and (\ref{eq:abcd-prod}). \end{proof} \begin{lem}\label{lem:r-integral} Assume that $5\nmid n$. Let $t$ be the integer and $T$ be the element of $\mathcal O_{K_n}$ in Theorem~\ref{theo:IB}. We have \[ \frac{\rho-t}{\rho -\rho^{(2)}}\ \in \mathcal O_{K_n}, \quad \frac{T}{(\rho-\rho^{(2)})^4 \delta_n} \ \in \mathcal O_{K_n}. \] \end{lem} \begin{proof} Put $y:=\rho -\rho^{(2)},\ \phi:=(\rho-t)/y$ and $\xi:=T/(y^4\delta_n) $. First, we show that $\phi \in \mathcal O_{K_n}$. Substituting $\rho=y \phi +t$ for $X$ in $f_n(X)=\sum_{k=0}^5 \frac{f_n^{(k)}(t)}{k!} (X-t)^k$, we get $\sum_{k=0}^5 \frac{f_n^{(k)} (t)}{k!} (y\phi )^k=0$ and hence $\phi$ is a root of the monic polynomial \begin{equation}\label{eq:phi-poly} \sum_{k=0}^5 \frac{f_n^{(k)}(t)}{k! y^{5-k}} X^k. \end{equation} We show that all coefficients of this polynomial are algebraic integers. By direct calculation, we have $f_n^{(k)}(-un^2)/k! \equiv 0 \pmod{\Delta_n}$ for any $k\in \{ 0,\ldots, 5\}$.
Since $(\Delta_n )=(y^5)$ from Lemmas~\ref{lem:half-D} and \ref{lem:r-r}, and $t\equiv -un^2 \pmod{\Delta_n}$ from Lemma~\ref{lem:t-mod}, we have \begin{equation}\label{eq:phi-integral} \frac{f_n^{(k)}(t)}{k!} \in (\Delta_n )=(y^5) \quad (k\in \{0,\ldots, 5\}) \end{equation} and hence all the coefficients of the polynomial (\ref{eq:phi-poly}) are algebraic integers, which implies $\phi \in \mathcal O_{K_n}$. Next, we show that $\xi \in \mathcal O_{K_n}$. Since $\rho$ is a root of $f_n(X)$, we have $-f_n(t)=f_n (\rho)-f_n(t)=(\rho-t)T$ and hence \begin{equation}\label{eq:r-t} \rho -t =-\frac{f_n(t)}{T} =-\frac{f_n(t)}{y^4\delta_n \xi}. \end{equation} Furthermore, substituting $\rho$ for $X$ in $f_n(X)=\sum_{k=0}^5 \frac{f_n^{(k)}(t)}{k!} (X-t)^k$, we obtain \begin{equation}\label{eq:sum=0} \sum_{k=0}^5 \frac{f_n^{(k)} (t) }{k!} (\rho -t)^k=0. \end{equation} From (\ref{eq:r-t}) and (\ref{eq:sum=0}), we obtain \[ \sum_{k=0}^5 \frac{ f_n^{(k)}(t) f_n (t)^{k-1}}{k!} \left(- \frac{1}{y^4 \delta_n } \right)^k \xi^{5-k} =0. \] Therefore, $\xi$ is a root of the monic polynomial \begin{equation}\label{eq:xi-poly} \sum_{k=0}^5 \frac{ f_n^{(k)}(t) f_n (t)^{k-1}}{k!} \left(- \frac{1}{y^4 \delta_n } \right)^k X^{5-k}. \end{equation} We show that all coefficients of this polynomial are algebraic integers. By direct calculation, we obtain $f_n(-(n^2+3n+4)) \equiv 0 \pmod{\delta_n^2}$ and $f_n'(-(n^2+3n+4)) \equiv 0 \pmod{\delta_n}$. Since $t\equiv -(n^2+3n+4) \pmod{\delta_n^2}$ from Lemma~\ref{lem:t-mod}, we have \begin{equation}\label{eq:delta-delta} f_n(t) \in (\delta_n^2), \quad f_n'(t) \in (\delta_n). \end{equation} From (\ref{eq:phi-integral}), (\ref{eq:delta-delta}) and $(\Delta_n ,\delta_n )=1$, we obtain $f_n (t) \in (\Delta_n \delta_n^2) =(y^5 \delta_n^2)$ and $f_n'(t) \in (\Delta_n \delta_n )=(y^5 \delta_n)$, and hence \begin{equation}\label{eq:f-f'} \frac{f_n(t)}{y^5 \delta_n^2}\ \in \mathcal O_{K_n} , \quad \frac{f_n'(t)}{y^5\delta_n} \ \in \mathcal O_{K_n}. \end{equation} We see all coefficients of the polynomial (\ref{eq:xi-poly}) are algebraic integers from (\ref{eq:phi-integral}) and (\ref{eq:f-f'}), which implies $\xi\in \mathcal O_{K_n}$. \end{proof} We give the proof of Theorem~\ref{theo:IB}. For $x_1,\ldots, x_5 \in K_n$, put \[ d(x_1,\ldots, x_5) := \begin{vmatrix} x_1 & x_2 & \cdots & x_5 \\ x_1^{(1)} & x_2^{(1)} & \cdots & x_5^{(1)} \\ \vdots & \vdots & & \vdots \\ x_1^{(4)} & x_2^{(4)} & \cdots & x_5^{(4)} \end{vmatrix}^2. \] The set $ \{ x_1,\ldots, x_5 \}$ is an integral basis of $K_n$ if and only if $x_1,\ldots, x_5 \in \mathcal O_{K_n}$ and $d(x_1,\ldots ,x_5)=D_{K_n}$. \begin{proof}[Proof of Theorem~\ref{theo:IB}] From the multilinearity of a determinant, it follows that \[ d(1,\phi_1,\phi_2,\phi_3,\phi_4) =\frac{1}{(b^2c^4d^6e^{10} \delta_n)^2 }{d(1,\rho,\rho^2,\rho^3,\rho^4)} . \] Since $d(1,\rho, \rho^2,\rho^3,\rho^4)=d(f_n) =\delta_n^2 \Delta_n^4$, we obtain \[ d(1,\phi_1,\phi_2,\phi_3,\phi_4) = \mathfrak f_{K_n}^4=D_{K_n}. \] Next, we show that $\phi_1,\phi_2,\phi_3, \phi_4 \in \mathcal O_{K_n}$. From Lemmas \ref{lem:r-r} and \ref{lem:r-integral}, we have \begin{equation}\label{eq:(r-t)} (\rho -t) \subset (\rho -\rho^{(2)} ) =e \prod_{k=1}^{\ell} \mathfrak p_k^{m_k}, \end{equation} and hence $\phi_1 =(\rho -t)/e \in \mathcal O_{K_n}$. Let $p_j$ be a prime number dividing $cd$, then we have $m_j \in \{ 3,4 \}$. From (\ref{eq:(r-t)}), we obtain $( (\rho -t)/e)^2 \subset \prod_{k=1}^{\ell} \mathfrak p_k^{2m_k}$. 
Since $c$ and $d$ are square-free and coprime, we have \[ v_{\mathfrak p_j} (cd)=5 \leq 6 \leq v_{\mathfrak p_j} ( \prod_{k=1}^{\ell} \mathfrak p_k^{2m_k} ) \leq v_{\mathfrak p_j} ( ( \frac{\rho -t}{e} )^2 ). \] Since this inequality holds for all primes $p_j$ dividing $cd$, we conclude that $\phi_2= (\rho-t)^2/cde^2 \in \mathcal O_{K_n}$. Similarly, let $p_j$ be a prime number dividing $bcd$, then we have $m_j \in \{ 2,3,4 \}$. From (\ref{eq:(r-t)}), we obtain $( (\rho -t)/e)^3 \subset \prod_{k=1}^{\ell} \mathfrak p_k^{3m_k}$. Since $b,c$ and $d$ are square-free and pairwise coprime, if $p_j$ divides $bc$, then we have \[ v_{\mathfrak p_j} (bcd^2)=5 \leq 6 \leq v_{\mathfrak p_j} ( \prod_{k=1}^{\ell} \mathfrak p_k^{3m_k} ) \leq v_{\mathfrak p_j} ( ( \frac{\rho -t}{e} )^3 ), \] and if $p_j$ divides $d$, then we have \[ v_{\mathfrak p_j} (bcd^2)=10 \leq 12 = v_{\mathfrak p_j} ( \prod_{k=1}^{\ell} \mathfrak p_k^{3m_k} ) \leq v_{\mathfrak p_j} ( ( \frac{\rho -t}{e} )^3 ). \] We conclude that $\phi_3= (\rho-t)^3/bcd^2e^3 \in \mathcal O_{K_n}$. Finally, we show that $\phi_4 \in \mathcal O_{K_n}$. From Lemmas~\ref{lem:r-r} and \ref{lem:r-integral}, we have \[ \left( \frac{T}{\delta_n} \right) \subset (\rho-\rho^{(2)} )^4=e^4 \prod_{k=1}^{\ell} \mathfrak p_k^{4m_k}, \] and hence \[ \left( \frac{T}{e^4 \delta_n} \right) \subset \prod_{k=1}^{\ell} \mathfrak p_k^{4m_k}. \] Let $p_j$ be a prime number dividing $bcd$. If $p_j$ divides $b$, then we have $m_j=2$ and \[ v_{\mathfrak p_j} (bc^2d^3)=5 \leq 8 = v_{\mathfrak p_j} ( \prod_{k=1}^{\ell} \mathfrak p_k^{4m_k} ) \leq v_{\mathfrak p_j} ( \frac{T}{e^4 \delta_n} ), \] and if $p_j$ divides $c$, then we have $m_j=3$ and \[ v_{\mathfrak p_j} (bc^2d^3)=10 \leq 12 = v_{\mathfrak p_j} ( \prod_{k=1}^{\ell} \mathfrak p_k^{4m_k} ) \leq v_{\mathfrak p_j} ( \frac{T}{e^4\delta_n}). \] If $p_j$ divides $d$, then we have $m_j=4$ and \[ v_{\mathfrak p_j} (bc^2d^3)=15 \leq 16 = v_{\mathfrak p_j} ( \prod_{k=1}^{\ell} \mathfrak p_k^{4m_k} ) \leq v_{\mathfrak p_j} ( \frac{T}{e^4\delta_n}). \] Since these inequalities hold for all primes $p_j$ dividing $bcd$, we conclude that $\phi_4= T/(bc^2d^3e^4 $ $\delta_n ) \in \mathcal O_{K_n}$. The proof is complete. \end{proof} The following example is a case where $\Delta_n$ is cube-free, for which Eloff, Spearman and Williams give an integral basis in their paper \cite[p.~770]{ESW1}. \begin{ex}\label{ex:n=14} Let $n=14$. We have $\Delta_n=11\times 71^2,\ a=11,\ b=71,\ c=d=e=1,\ \delta_n=79\times 7^2 ,\, \mathfrak f_{K_n}=11\times 71$ and $D_{K_n}=11^4 \times 71^4$. The integers $u=44361$ and $t=645583287961$ satisfy $5u \equiv 1 \pmod{\Delta_n}$ and (\ref{eq:t}), respectively. Let \[ \phi_1 := \rho-t,\ \phi_2:=(\rho-t)^2, \ \phi_3:= \frac{1}{71} (\rho-t)^3,\ \phi_4:= \frac{1}{71 \delta_n} T \] with \begin{align*} T:= &\rho^4 + 645583288157\rho^3 + 416777781821069771971063\rho^2 \\ & + 269064770737138517575467371288327050\rho \\ & + 173703719366954541829067611507591745478095648728. \end{align*} It follows that $\{ 1, \phi_1, \phi_2, \phi_3, \phi_4 \}$ is an integral basis of $K_n$ from Theorem~\ref{theo:IB}. On the other hand, Eloff, Spearman and Williams \cite[Example]{ESW1} gave another integral basis $\{ 1,\rho,\rho^2, v_4,v_5 \}$ with \begin{align*} v_4 & := \frac{1}{71} (5+29 \rho +4\rho^2+\rho^3) , \\ v_5 & := \frac{1}{274841}(50339+27624\rho+112706 \rho^2+220601 \rho^3+\rho^4).
\end{align*} The two bases satisfy the relation $(1,\rho,\rho^2,v_4,v_5)R=(1,\phi_1, \phi_2,\phi_3,\phi_4)$ by the matrix $R$ with $\det(R)=1$ given by \begin{align*} R:= &{\scriptsize \left[ \begin{array}{ccc} 1 & -645583287961 & 416777781694535447537521 \\ 0 & 1 &-1291166575922 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right.} \\ &{\scriptsize \left. \begin{array}{cc} -3789644657119015103770949691480066 & 632015308367217925378919489841733010177449 \\ 17610328803994455529754 & 978983378524814411152079381822 \\ -27278167097 & 1516432343858767213\\ 1 &166774236 \\ 0 & 1 \end{array} \right]. } \end{align*} \end{ex} The next example is a case where $\Delta_n$ is not cube-free, so that no integral basis was obtained in \cite[p.~771]{ESW1}. \begin{ex}\label{ex:n=44} Let $n=44$. We have $\Delta_n=61\times 41^3,\ a=61,\ b=1,\ c=41,\ d=e=1,\ \delta_n=95311,\ \mathfrak f_{K_n}=41\times 61$ and $D_{K_n}=41^4 \times 61^4$. The integers $u=3363345$ and $t=30447786579308863$ satisfy $5u \equiv 1 \pmod{\Delta_n}$ and (\ref{eq:t}), respectively. Let \[ \phi_1 := \rho-t,\ \phi_2:=\frac{1}{41} (\rho-t)^2, \ \phi_3:= \frac{1}{41} (\rho-t)^3,\ \phi_4:= \frac{1}{41^2 \delta_n} T \] with \begin{align*} T:= & \rho^4 + 30447786579310799\rho^3 + 927067707579199859568212292129103\rho^2\\ & +28227159704940594977085733369537146411892512335866 \rho \\ & +859454534436098172963532872007773204827694194742789092180006673736. \end{align*} It follows that $\{ 1, \phi_1, \phi_2, \phi_3, \phi_4 \}$ is an integral basis of $K_n$ from Theorem~\ref{theo:IB}. \end{ex} \section{Normal integral bases}\label{sec:NIB} Let $n$ be an integer and put $\zeta:=\zeta_5$. We construct a generator of an NIB from a normal basis and the integral basis given by Theorem\,\ref{theo:IB}. The idea is based on the method of Acciaro and Fieker \cite{AF}. \begin{lem}\label{lem:NB} The set $\{ \rho, \rho^{(1)}, \rho^{(2)}, \rho^{(3)},\rho^{(4)} \}$ is a normal basis if and only if $n\ne 0$. \end{lem} \begin{proof} Since $K_n$ and ${\mathbb Q} (\zeta)$ are linearly disjoint over ${\mathbb Q}$, it follows that $1, \zeta^k, \zeta^{2k}, \zeta^{3k}, \zeta^{4k} $ are linearly independent over $K_n$ for any $k\in \{1,2,3,4 \}$, and hence \[ \rho +\rho^{(1)} \zeta^k +\rho^{(2)} \zeta^{2k} +\rho^{(3)} \zeta^{3k} +\rho^{(4)} \zeta^{4k} \ne 0. \] We conclude that \[ d(\rho, \rho^{(1)} ,\rho^{(2)}, \rho^{(3)},\rho^{(4)} )=\prod_{k=0}^4 ( \rho +\rho^{(1)} \zeta^k +\rho^{(2)} \zeta^{2k} +\rho^{(3)} \zeta^{3k} +\rho^{(4)} \zeta^{4k})^2 \ne 0 \] if and only if \[ \rho+\rho^{(1)}+\rho^{(2)} +\rho^{(3)} +\rho^{(4)} =-n^2 \ne 0. \] \end{proof} Put $A_n:=n+2+2\zeta^4+\zeta^2,\ B_n:=n+2+2\zeta^2+\zeta,\ C_n:=n+2+2\zeta^3+\zeta^4,\ D_n:=n+2+2\zeta+\zeta^3$. We have $\Delta_n=A_nB_nC_nD_n=N_{{\mathbb Q}(\zeta)/{\mathbb Q}}(A_n)$. For $\lambda \in {\mathbb Z} [\zeta]$ with $\lambda \not\in (1-\zeta)$, we define $u_{\lambda} \in {\mathbb Z} [\zeta ]^{\times}$ by \[ u_{\lambda}=\begin{cases} 1, & \text{if $\lambda \equiv 1 \pmod{(1-\zeta)} $}, \\ -1, & \text{if $\lambda \equiv -1 \pmod{(1-\zeta)} $}, \\ \frac{1+\sqrt{5}}{2}, & \text{if $\lambda \equiv 2 \pmod{(1-\zeta)} $}, \\ -\frac{1+\sqrt{5}}{2}, & \text{if $\lambda \equiv -2 \pmod{(1-\zeta)} $}. \end{cases} \] \begin{lem}\label{lem:prime-ideal} If $\mathfrak p$ is a prime ideal of ${\mathbb Q}(\zeta)$ containing at least two of $A_n, B_n, C_n, D_n$, then we have $\mathfrak p =(1-\zeta)$ $($the prime ideal above $5)$. In particular, if $5 \nmid \Delta_n$ then $A_n, B_n, C_n,$ and $D_n$ are pairwise coprime.
\end{lem} \begin{proof} Assume that $A_n, B_n \in \mathfrak p$ (we can show the other cases similarly). Since $A_n-B_n=2\zeta^4-\zeta^2-\zeta \in \mathfrak p$, the absolute norm $N\mathfrak p$ divides $N_{{\mathbb Q} (\zeta)/{\mathbb Q} } (2\zeta^4-\zeta^2-\zeta)=5^2$, and hence $\mathfrak p$ is the prime ideal $(1-\zeta)$. \end{proof} \begin{lem}\label{lem:p=0,1} \cite[Lemma~2.1.1]{Je} Let $p$ be a prime number dividing $\Delta_n$. We have $p\equiv 0,1 \pmod{5}$. \end{lem} \begin{lem}\label{lem:alpha-A} Assume that $5\nmid n$. Let $s=p_1 \cdots p_k$ be a positive integer dividing $\Delta_n$, where $p_1,\ldots,p_k$ are (not necessarily distinct) prime numbers. Then, $p_i \, (i\in \{1,\ldots, k\})$ decomposes into a product of prime elements $\pi_i^{(t)} \, (t \in \{1,\ldots, 4 \})$ in ${\mathbb Z} [\zeta]$ as follows. \begin{equation}\label{eq:alpha-A} p_i= \prod_{t=1}^4 \pi_i^{(t)}, \quad \pi_i^{(1)}|A_n ,\ \pi_i^{(2)} |B_n,\ \pi_i^{(3)} |C_n,\ \pi_i^{(4)} |D_n \ \text{ in } {\mathbb Z} [\zeta]. \end{equation} Furthermore, let $\lambda_t := \prod_{i=1}^k \pi_i^{(t)} \, (\in {\mathbb Z} [\zeta ])$, let $u_{\lambda_t}$ be the unit defined before Lemma~\ref{lem:prime-ideal}, and put $\alpha_A:=u_{\lambda_1} \lambda_1,\ \alpha_B:=u_{\lambda_2} \lambda_2,\ \alpha_C:=u_{\lambda_3} \lambda_3,\ \alpha_D:=u_{\lambda_4} \lambda_4$. Then we have $s=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_A)=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_B)=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_C)=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_D)$, $\alpha_A \equiv \alpha_B \equiv \alpha_C\equiv \alpha_D\equiv 1 \pmod{(1-\zeta)}$ and $\alpha_A |A_n,\ \alpha_B | B_n,\ \alpha_C | C_n ,\ \alpha_D |D_n$ in ${\mathbb Z}[\zeta]$. \end{lem} \begin{proof} From Lemma\,\ref{lem:p=0,1} and $5\nmid n$, we have $p_1 \equiv \cdots \equiv p_k \equiv 1 \pmod{5}$. Since $\mathbb{Z}[\zeta]$ is a unique factorization domain and $p_1,\ldots,p_k$ split completely in $\mathbb{Q}(\zeta)$, it follows that $p_i \, (i\in \{1,\ldots, k\})$ decomposes as (\ref{eq:alpha-A}). The latter claim follows from $N_{{\mathbb Q} (\zeta)/{\mathbb Q}}(u)=1$ for $u\in {\mathbb Z} [\zeta]^{\times}$ and Lemma~\ref{lem:prime-ideal}. \end{proof} Assume that $5\nmid n$ and let $ \{1,\phi_1,\phi_2,\phi_3,\phi_4 \}$ be the integral basis given by Theorem~\ref{theo:IB}. By direct calculation, we obtain the following lemma. \begin{lem}\label{lem:phi-to-rho} Assume that $5\nmid n$ and put $\ell :=-bc^2d^3e^4 \delta_n n^2$. We have \[ 1=g_0. \frac{\rho}{\ell}, \ \phi_1 =g_1. \frac{\rho}{\ell},\ \phi_2=g_2. \frac{\rho}{\ell},\ \phi_3=g_3. \frac{\rho}{\ell}, \ \phi_4=g_4.
\frac{\rho}{\ell} \] with $g_0,g_1,g_2,g_3,g_4 \in {\mathbb Z} [G]$ given by \begin{align*} g_0 := &-\dfrac{\ell}{n^2} (1+\sigma+\sigma^2+\sigma^3+\sigma^4), \\ g_1 := & \dfrac{\ell}{en^2} \{ (n^2+t)+t\sigma +t\sigma^2+t\sigma^3+t\sigma^4 \}, \\ g_2 := & -\dfrac{\ell}{cde^2n^2} \{(4+4n+3n^2+2n^3+n^4+2n^2t+t^2) + (4+4n+2n^2+t^2)\sigma \\ & +(4+4n+3n^2+n^3+t^2)\sigma^2 +(4+4n+n^2+t^2)\sigma^3 \\ &+(4+4n+3n^2+n^3+t^2)\sigma^4 \}, \end{align*} \begin{align*} g_3 := & -\dfrac{\ell}{bcd^2e^3n^2} \{ (-3-9n-19n^2-17n^3-10n^4-4n^5-n^6-12t-12nt \\ & -9n^2t-6n^3t-3n^4t-3n^2t^2-t^3)\\ &+(-3-9n-10n^2-6n^3-2n^4-12t-12nt-6n^2t-t^3)\sigma \\ & +(-3-9n-11n^2-8n^3-4n^4-n^5-12t-12nt-9n^2t-3n^3t-t^3)\sigma^2 \\ &+(-3-9n-10n^2-5n^3-n^4-12t-12nt-3n^2t-t^3)\sigma^3 \\ &+(-3-9n-13n^2-9n^3-4n^4-n^5-12t-12nt-9n^2t-3n^3t-t^3)\sigma^4 \}, \\ g_4 := & -\dfrac{\ell}{bc^2d^3e^4\delta_n n^2} \{ (2+2n+n^2+2t+6nt+6n^2t+2n^3t-6t^2-6nt^2-3n^2t^2+t^4) \\ &+(2+2n+2t+6nt+5n^2t+3n^3t+n^4t-6t^2-6nt^2-4n^2t^2-2n^3t^2+n^2t^3+t^4)\sigma \\ & +(2+2n+n^2+2t+6nt+4n^2t+n^3t-6t^2-6nt^2-3n^2t^2-n^3t^2+n^2t^3+t^4)\sigma^2 \\ & +(2+2n+2n^2+n^3+2t+6nt+5n^2t+4n^3t+n^4t \\ & -6t^2-6nt^2-5n^2t^2-2n^3t^2+n^2t^3+t^4)\sigma^3 \\ &+ (2+2n+2t+6nt+2n^2t-6t^2-6nt^2-3n^2t^2-n^3t^2+n^2t^3+t^4)\sigma^4 \}. \end{align*} \end{lem} From Lemma~\ref{lem:phi-to-rho} it follows that \begin{align} \mathcal{O}_{K_n} & ={\mathbb Z} +\phi_1 {\mathbb Z}+\phi_2 {\mathbb Z}+\phi_3{\mathbb Z}+\phi_4{\mathbb Z} \notag \\ &= (g_0 {\mathbb Z} [G]+g_1 {\mathbb Z} [G] +g_2{\mathbb Z}[G] +g_3{\mathbb Z}[G] +g_4{\mathbb Z}[G] ). \dfrac{\rho}{\ell} \label{eq:O_Ln} \end{align} Let $\alpha$ be a generator of an NIB of $K_n$. Then there exists $g \in {\mathbb Z} [G]$ satisfying \begin{equation}\label{eq:alpha-g} \alpha=g. \frac{\rho}{\ell} \end{equation} Therefore, we have \begin{equation}\label{eq:O_Ln-g} \mathcal O_{K_n }={\mathbb Z} [G] . \alpha =g{\mathbb Z} [G].\frac{\rho}{\ell} \end{equation} From (\ref{eq:O_Ln}) and (\ref{eq:O_Ln-g}) we obtain the equality as ideals of $\mathbb{Z}[G]$: \[ (g)+\mathrm{Ann}_{\mathbb{Z}[G]}\left(\frac{\rho}{\ell}\right) =(g_0, g_1,g_2,g_3,g_4)+\mathrm{Ann}_{\mathbb{Z}[G]}\left(\frac{\rho}{\ell}\right). \] Since $\{\rho/\ell ,\rho^{(1)}/\ell ,\rho^{(2)}/\ell, \rho^{(3)} /\ell, \rho^{(4)}/\ell \}$ is a normal basis of $K_n$, we have $\mathrm{Ann}_{\mathbb{Z}[G]}\left(\rho/\ell \right)=0$, and hence we get the equality as ideals of $\mathbb{Z}[G]$: \[ (g)=(g_0,g_1,g_2,g_3,g_4). \] Consider the surjective ring homomorphism \[ \nu\ :\ \mathbb{Z}[G]\longrightarrow\mathbb{Z}[\zeta], \] defined by $\nu(\sigma)=\zeta$. We calculate the image of the ideal $I:=(g)=(g_0,g_1,g_2,g_3,g_4)$ under $\nu$. Since $\nu$ is surjective, we obtain the ideal of $\mathbb{Z}[\zeta]$: \begin{equation}\label{eq:nu-I} \nu(I)=(\nu(g))=(\nu(g_0), \nu(g_1),\nu(g_2),\nu(g_3),\nu(g_4)). \end{equation} \begin{lem}\label{lem:nu(g)} Assume that $5 \nmid n$. We have the following. \begin{itemize} \item[(1)] $\nu (g_0)=0$. \item[(2)] $\nu (g_1)=- n^2 \delta_n bc^2d^3e^3$. \item[(3)] $\nu (g_2)=n^2 \delta_n A_n bcd^2e^2 x$ with $x\in {\mathbb Z} [\zeta ]$ which is prime to $B_n$. \item[(4)] $\nu (g_3)=n^2 \delta_n A_n B_n cde y$ with $y\in {\mathbb Z} [\zeta ]$ which is prime to $C_n$. \item[(5)] $\nu (g_4)=n^2 \delta_n A_n B_nC_n z$ with $z\in {\mathbb Z} [\zeta ]$ which is prime to $D_n$. \end{itemize} \end{lem} \begin{proof} We can easily prove (1) and (2).
\begin{itemize} \item[(3)] We have $\nu (g_2)=n^2 \delta_n bcd^2e^2 H_2$ with $H_2:=n^2+(1-\zeta-\zeta^3)n +2t-2\zeta^3-\zeta$. Since $5u \equiv 1 \pmod{\Delta_n }$ and $t\equiv -un^2 \pmod{ \Delta_n }$, it follows that $ H_2 \equiv uA_n (3n+4+2\zeta^2-\zeta^4) \pmod{\Delta_n}$. Furthermore, we have $3n+4+2\zeta^2-\zeta^4 \equiv \zeta^3-3\zeta^2-2\zeta-1 \pmod{B_n}$ and $N_{{\mathbb Q} (\zeta)/{\mathbb Q}}(\zeta^3-3\zeta^2-2\zeta-1)=5^3$. Put $x:=H_2/A_n \, (\in {\mathbb Z}[\zeta])$. Then it follows that $x$ is prime to $B_n$ and $\nu (g_2)=n^2 \delta_n A_n bcd^2e^2x$. \item[(4)] We have $\nu (g_3)=n^2 \delta_n cde H_3$ with \begin{align*} H_3 := & \{-n^4+(\zeta^3+\zeta-3)n^3+(3\zeta^3+2\zeta-6-3t)n^2+(4\zeta^3+\zeta^2+3\zeta-8 \\ & +3\zeta^3t+3\zeta t-3t)n -3t^2+6\zeta^3t+3\zeta t+3\zeta^3+2\zeta^2+3\zeta-6\}. \end{align*} Since $5u \equiv 1 \pmod{\Delta_n }$ and $ t\equiv -un^2 \pmod{ \Delta_n }$, it follows that $ H_3 \equiv u^2A_n B_n H_3'\pmod{\Delta_n}$ with \[ H_3':=-13n^2+(-16\zeta^3+13\zeta^2-3\zeta-34)n-5\zeta^3+40\zeta^2+10\zeta-5. \] Furthermore, we have $H_3' \equiv -10\zeta^3-10\zeta^2-5\zeta\pmod{C_n}$ and $N_{{\mathbb Q} (\zeta)/{\mathbb Q}}(-10\zeta^3-10\zeta^2-5\zeta)=5^5$. Put $y:=H_3/(A_n B_n) \, (\in {\mathbb Z}[\zeta])$. Then it follows that $y$ is prime to $C_n$ and $\nu (g_3)=n^2 \delta_n A_n B_n cdey$. \item[(5)] We have $\nu (g_4)=n^2 H_4$ with \begin{align*} H_4 := & \{ (\zeta^3+\zeta)n^2t +(-\zeta^3-\zeta+1)nt^2+(4\zeta^3+\zeta^2+3\zeta+2)nt +\zeta^3n-t^3 \\ & + (-2\zeta^3-\zeta)t^2+(3\zeta^3+2\zeta^2+3\zeta+4)t+(2\zeta^3+\zeta^2+1)\}. \end{align*} Since $5u \equiv 1 \pmod{\Delta_n }$ and $t\equiv -un^2 \pmod{ \Delta_n }$, it follows that $ H_4 \equiv u^3A_n B_n C_nH_4'\pmod{\Delta_n}$ with \[ H_4':=n^3+(-4\zeta^3-3\zeta+2)n^2+(-5\zeta^3+5\zeta^2-5\zeta)n+15\zeta^3-10\zeta^2-5. \] Furthermore, we have $H_4' \equiv 25\zeta^3-25\zeta^2+25\zeta \pmod{D_n}$ and $N_{{\mathbb Q} (\zeta)/{\mathbb Q}}(25\zeta^3-25\zeta^2+25\zeta)=5^8$. Therefore $H_4'$ is prime to $D_n$. Furthermore, since $t\equiv -(n^2+3n+4) \pmod{\delta_n}$, we have $H_4 \equiv 0 \pmod{\delta_n}$ and hence $H_4 =\delta_n H_4''$ with $H_4'' \in {\mathbb Z} [\zeta]$. Since $\Delta_n =A_nB_nC_nD_n$ and $\delta_n$ are coprime and $H_4 \equiv u^3A_n B_n C_nH_4'\pmod{\Delta_n}$, it follows that $A_nB_nC_n | H_4''$ in ${\mathbb Z} [\zeta]$ and $H_4''/(A_nB_nC_n)$ and $D_n$ are coprime. Put $z:=H_4''/(A_n B_nC_n) \, (\in {\mathbb Z}[\zeta])$. Then it follows that $z$ is prime to $D_n$ and $\nu (g_4)=n^2 \delta_n A_n B_n C_n z$. \end{itemize} \end{proof} The next lemma follows immediately from Lemma~\ref{lem:alpha-A}. \begin{lem}\label{lem:alpha123} Assume that $5\nmid n$. \begin{enumerate} \item[(1)] Let $bc^2d^3e^3=p_1 \cdots p_k$, where $p_1,\ldots,p_k$ are (not necessarily distinct) prime numbers. Then, $p_i \, (i\in \{1,\ldots, k\})$ decomposes into $p_i= \prod_{t=1}^4 \pi_i^{(t)}$ with $\pi_i^{(1)}|A_n $ in ${\mathbb Z} [\zeta]$, where $ \pi_i^{(t)} $ are prime elements of ${\mathbb Z} [\zeta]$. Furthermore, let $\lambda_1 := \prod_{i=1}^k \pi_i^{(1)} \, (\in {\mathbb Z} [\zeta ])$, let $u_{\lambda_1}\, (\in {\mathbb Z}[\zeta]^{\times})$ be the unit defined before Lemma~\ref{lem:prime-ideal}, and put $\alpha_1:=u_{\lambda_1} \lambda_1$. Then we have $bc^2d^3e^3=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_1)$, $\alpha_1 \equiv 1 \pmod{(1-\zeta)}$ and $\alpha_1 |A_n$ in ${\mathbb Z}[\zeta]$. \item[(2)] Let $bcd^2e^2=p_1 \cdots p_k$, where $p_1,\ldots,p_k$ are (not necessarily distinct) prime numbers.
Then, $p_i \, (i\in \{1,\ldots, k\})$ decomposes into $p_i= \prod_{t=1}^4 \pi_i^{(t)}$ with $\pi_i^{(2)}|B_n $ in ${\mathbb Z} [\zeta]$, where $ \pi_i^{(t)} $ are prime elements of ${\mathbb Z} [\zeta]$. Furthermore, let $\lambda_2 := \prod_{i=1}^k \pi_i^{(2)} \, (\in {\mathbb Z} [\zeta ])$, let $u_{\lambda_2}\, (\in {\mathbb Z}[\zeta]^{\times})$ be the unit defined before Lemma~\ref{lem:prime-ideal}, and put $\alpha_2:=u_{\lambda_2} \lambda_2$. Then we have $bcd^2e^2=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_2)$, $\alpha_2 \equiv 1 \pmod{(1-\zeta)}$ and $\alpha_2 |B_n$ in ${\mathbb Z}[\zeta]$. \item[(3)] Let $cde=p_1 \cdots p_k$, where $p_1,\ldots,p_k$ are (not necessarily distinct) prime numbers. Then, $p_i \, (i\in \{1,\ldots, k\})$ decomposes into $p_i= \prod_{t=1}^4 \pi_i^{(t)}$ with $\pi_i^{(3)}|C_n $ in ${\mathbb Z} [\zeta]$, where $ \pi_i^{(t)} $ are prime elements of ${\mathbb Z} [\zeta]$. Furthermore, let $\lambda_3 := \prod_{i=1}^k \pi_i^{(3)} \, (\in {\mathbb Z} [\zeta ])$, let $u_{\lambda_3}\, (\in {\mathbb Z}[\zeta]^{\times})$ be the unit defined before Lemma~\ref{lem:prime-ideal}, and put $\alpha_3:=u_{\lambda_3} \lambda_3$. Then we have $cde=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_3)$, $\alpha_3 \equiv 1 \pmod{(1-\zeta)}$ and $\alpha_3 |C_n$ in ${\mathbb Z}[\zeta]$. \end{enumerate} \end{lem} The following proposition gives the generator of the ideal $\nu (I)$. \begin{prop}\label{prop:gen-nu} Assume that $5\nmid n$ and let $\alpha_1, \alpha_2, \alpha_3 \, (\in {\mathbb Z} [\zeta] )$ be the elements given in Lemma~\ref{lem:alpha123}. Then we have $\nu (I) =(n^2 \delta_n \alpha_1 \alpha_2 \alpha_3)$. \end{prop} \begin{proof} From (\ref{eq:nu-I}) and Lemma~\ref{lem:nu(g)}, we have \begin{align} \nu (I) & = (\nu (g_0), \nu (g_1), \nu (g_2) ,\nu(g_3), \nu (g_4)) \label{eq:fact-nu(I)} \\ & = (n^2 \delta_n )(bc^2d^3e^3, A_n bcd^2e^2x, A_nB_n cdey, A_nB_nC_nz) \notag \end{align} with $x,y$ and $z \, (\in {\mathbb Z} [\zeta])$ being prime to $B_n, C_n$ and $D_n$, respectively. Since \[ (bc^2d^3e^3, A_n bcd^2e^2x, A_nB_n cdey, A_nB_nC_nz)=(\alpha_1 \alpha_2 \alpha_3), \] it follows that $\nu (I)=(n^2 \delta_n \alpha_1 \alpha_2 \alpha_3)$. \end{proof} \begin{theo}\label{theo:main} Let $n$ be an integer with $5 \nmid n$ and $\alpha_1, \alpha_2, \alpha_3 \, (\in {\mathbb Z}[\zeta])$ the elements given in Lemma~\ref{lem:alpha123}. Put $\alpha_1 \alpha_2 \alpha_3 =\beta_0 +\beta_1 \zeta+\beta_2 \zeta^2+\beta_3 \zeta^3$ with $\beta_0,\beta_1,\beta_2,\beta_3 \in {\mathbb Z}$ and \[ m:=\dfrac{1}{5} \left(\left( \frac{n}{5} \right) bc^2d^3e^4-n^2 (\beta_0+\beta_1+\beta_2+\beta_3) \right) \quad (\in {\mathbb Z}). \] Then \[ \dfrac{1}{bc^2d^3e^4} (\beta_0 \rho +\beta_1 \rho^{(1)}+\beta_2 \rho^{(2)} +\beta_3 \rho^{(3)} -m) \] is a generator of a normal integral basis of Emma Lehmer's quintic field $K_n$. \end{theo} \begin{proof} Put $x:=\delta_n n^2 (\beta_0 +\beta_1 \sigma +\beta_2 \sigma^2+\beta_3 \sigma^3) \, (\in {\mathbb Z}[G])$ and $\beta := \alpha_1 \alpha_2 \alpha_3=\beta_0+\beta_1 \zeta+\beta_2\zeta^2+\beta_3\zeta^3 \, (\in {\mathbb Z} [\zeta])$. Then we have $\nu (x) =\delta_n n^2 \beta$. Furthermore, from Proposition~\ref{prop:gen-nu}, for the generator $g$ of the ideal $I=(g)=(g_0,g_1,g_2,g_3,g_4)$, we have $\nu (g)=v\delta_n n^2 \beta$ with $v\in {\mathbb Z} [\zeta]^{\times}$. Put \[ v_0 := \begin{cases} 1 & (v\equiv \pm 1 \pmod{ (1-\zeta)}), \\ \dfrac{1-\zeta^2}{1-\zeta}=1+\zeta & (v\equiv \pm 2 \pmod{(1-\zeta)}).
\end{cases} \] Since $v_0 \, (\in {\mathbb Z} [\zeta]^{\times})$ satisfies $v^{-1} v_0 \equiv \pm 1 \pmod{(1-\zeta)}$, there exists $\xi \in {\mathbb Z} [G]^{\times}$ satisfying $\nu (\xi)=v^{-1} v_0 $ (\cite[p.~133, Theorem~1.6]{AF}). Put \[ \tau := \begin{cases} 1 & (v\equiv \pm 1 \pmod{ (1-\zeta)}),\\ 1+\sigma & (v\equiv \pm 2 \pmod{(1-\zeta)}). \end{cases} \] Then we have \[ \nu (\xi g)=\nu (\xi)\nu(g)=v_0 \delta_n n^2 \beta=\nu (\tau) \nu (x)=\nu (\tau x), \] and hence \[ \xi g-\tau x \ \in \text{Ker}\, \nu =(1_G+\sigma+\sigma^2+\sigma^3+\sigma^4). \] It follows that there exists $m' \in {\mathbb Z}$ satisfying \begin{equation}\label{eq:rho/ell} (\xi g- \tau x). \dfrac{\rho}{\ell} =m' \text{Tr} \left(\dfrac{\rho}{\ell} \right), \end{equation} where $\ell =-bc^2d^3e^4 \delta_n n^2$ and Tr is the trace map from $K_n$ to ${\mathbb Q}$. Let $\alpha =g. (\rho/\ell)$ be the generator of an NIB in (\ref{eq:alpha-g}). From (\ref{eq:rho/ell}) we have \begin{align} \xi.\alpha & =(\xi g).\frac{\rho}{\ell} \notag \\ & =(\tau x).\frac{\rho}{\ell }+m'\mathrm{Tr}\left(\frac{\rho}{\ell}\right) \label{eq:xi-alpha}\\ & = \dfrac{n^2}{\ell} (\delta_n \tau (\beta_0 \rho +\beta_1 \rho^{(1) }+\beta_2 \rho^{(2)} +\beta_3 \rho^{(3)})-m' ) . \notag \end{align} Since $\xi. \alpha$ is a generator of an NIB, we have Tr$(\xi.\alpha)=\pm 1$. Furthermore, we have Tr$(\rho^{(i)}) =-n^2\, (i \in {\mathbb Z}/5{\mathbb Z})$. Taking traces on both sides of (\ref{eq:xi-alpha}) and multiplying them by $bc^2d^3e^4 \delta_n$, it follows that \begin{equation}\label{eq:trace-xi-alpha} \pm bc^2d^3e^4 \delta_n =\delta_n sn^2 (\beta_0+\beta_1+\beta_2+\beta_3)+5m',\ s:=\begin{cases} 1 & (v\equiv \pm 1 \pmod{(1-\zeta)}), \\ 2 & (v\equiv \pm 2 \pmod{(1-\zeta)}). \end{cases} \end{equation} We can write $m'=\delta_n m$ with $m \in \mathcal O_{K_n}$ from (\ref{eq:xi-alpha}), and $m\in {\mathbb Q} \cap \mathcal O_{K_n }={\mathbb Z}$ from (\ref{eq:trace-xi-alpha}). It follows from (\ref{eq:trace-xi-alpha}) that \begin{equation}\label{eq:trace-xi-alpha-m} \pm bc^2d^3e^4=sn^2(\beta_0 +\beta_1+\beta_2+\beta_3)+5m. \end{equation} Since $\alpha_1 \equiv \alpha_2 \equiv \alpha_3\equiv 1 \pmod{(1-\zeta)}$, we obtain \[ \beta_0+\beta_1+\beta_2+\beta_3 \equiv \alpha_1\alpha_2\alpha_3 \equiv 1 \pmod{(1-\zeta)}. \] Since $\beta_0 , \beta_1,\beta_2,\beta_3 \in \mathbb Z$, we have $\beta_0+\beta_1+\beta_2+\beta_3 \equiv 1\pmod{5}$, and $s\equiv \pm 1\pmod{5}$ from $b\equiv c\equiv d\equiv e\equiv 1 \pmod{5}$ \cite[Lemma~2.1.1]{Je} and (\ref{eq:trace-xi-alpha-m}). Therefore, we have $s=1$ (that is, $v\equiv \pm 1 \pmod{(1-\zeta)}$ and $\tau=1$). Moreover, reducing (\ref{eq:trace-xi-alpha-m}) modulo $5$ with $s=1$ shows that the sign on the left-hand side is congruent to $n^2$ modulo $5$, that is, it equals $\left( \frac{n}{5} \right)$, and it follows from (\ref{eq:trace-xi-alpha-m}) that \[ m=\frac{1}{5} \left( \left( \frac{n}{5} \right) bc^2d^3e^4 -n^2(\beta_0+\beta_1+\beta_2+\beta_3)\right) \ (\in {\mathbb Z}). \] From (\ref{eq:xi-alpha}) and $m'=\delta_n m$, it follows that $-(\beta_0 \rho +\beta_1 \rho^{(1)} +\beta_2 \rho^{(2)} +\beta_3 \rho^{(3)} -m) /bc^2d^3e^4$ and its negative are both generators of an NIB. The proof is complete. \end{proof} \begin{rem}\label{rem:alpha123} For the integers $\beta_0,\beta_1,\beta_2,\beta_3$ in Theorem~\ref{theo:main}, we have $N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\beta_0+\beta_1 \zeta +\beta_2 \zeta^2 +\beta_3 \zeta^3)=N_{{\mathbb Q} (\zeta)/{\mathbb Q}} (\alpha_1\alpha_2\alpha_3)=b^2c^4d^6e^6$ from Lemma~\ref{lem:alpha123}. \end{rem} \begin{cor}\label{cor:NIB-squarefree} Let $n$ be an integer with $5 \nmid n$.
Then $K_n$ has a generator of a normal integral basis of the form $v+w \rho \ (v,w\in {\mathbb Z})$ if and only if $\Delta_n$ is square-free. Furthermore, in this case, the integers $v$ and $w$ are given by $w=\pm1,\, v=w\left(n^2-\left(\frac{n}{5} \right)\right)/5$. \end{cor} \begin{proof} First, assume that $\Delta_n$ is square-free. Then we have $b=c=d=e=1$, and hence $\alpha_1=\alpha_2=\alpha_3=1$ from Lemma~\ref{lem:alpha123}. Therefore $\beta_0, \beta_1, \beta_2$ and $\beta_3$ in Theorem~\ref{theo:main} are given by $\beta_0=1,\, \beta_1=\beta_2=\beta_3=0$. Hence, the integer $m$ in Theorem~\ref{theo:main} is given by $m=\left( \left( \frac{n}{5} \right)-n^2 \right)/5$. It follows from Theorem~\ref{theo:main} that $\alpha:=\left( n^2 -\left(\frac{n}{5} \right) \right)/5 +\rho$ is a generator of an NIB of $K_n$. Next, we assume that $K_n$ has a generator of an NIB of the form $v+w\rho \ (v,w \in {\mathbb Z})$. Since \[ d (v+w\rho, v+w \rho^{(1)}, v+w\rho^{(2)} , v+w \rho^{(3)}, v+w \rho^{(4)}) =w^8 (5v-wn^2)^2 \Delta_n^4, \] it follows from (\ref{eq:disc}) and (\ref{eq:cond}) that $w^8=(5v-wn^2)^2=1$ and $\Delta_n$ is square-free. \end{proof} \begin{ex}\label{ex:1to1000} Table~\ref{table:1000} shows a generator of an NIB of $K_n$ for each $n$ with $1\leq n\leq 1000$ such that $5\nmid n$ and $\Delta_n$ is not square-free. For example, for $n=14$, $\Delta_n=11 \cdot 71^2$ and $\alpha_1,\alpha_2,\alpha_3 \, (\in \mathbb Z[\zeta])$ in Theorem~\ref{theo:main} are given by $\alpha_1=3\zeta^2 + \zeta + 2 , \, \alpha_2=\zeta^3 + 3\zeta + 2 ,\, \alpha_3= 1 $. Since $\alpha_1 \alpha_2 \alpha_3= 10 \zeta^3 + 8 \zeta^2 + 7\zeta + 6$, we have $\beta_0=6,\, \beta_1=7,\, \beta_2=8,\, \beta_3=10$ and $m=\left(\left( \frac{n}{5} \right) bc^2d^3e^4-n^2 (\beta_0+\beta_1+\beta_2+\beta_3) \right)/5=-1201$. Therefore, $(6\rho+7\rho^{(1)} +8\rho^{(2)} +10\rho^{(3)} +1201)/71$ is a generator of an NIB of $K_n$ for $n=14$ by Theorem~\ref{theo:main}.
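As a computational footnote (our own sketch; the variable names are ours), the $n=14$ data above can be checked directly: the norm of $\beta_0+\beta_1\zeta+\beta_2\zeta^2+\beta_3\zeta^3$ is a resultant with the fifth cyclotomic polynomial, and $m$ follows from the formula in Theorem~\ref{theo:main}.
\begin{verbatim}
# Verify N(6 + 7z + 8z^2 + 10z^3) = b^2 c^4 d^6 e^6 = 71^2 and m = -1201
# for n = 14 (cf. Remark 3.1 and Theorem 3.2).
from sympy import symbols, resultant, cyclotomic_poly, legendre_symbol

X = symbols('X')
n = 14
b, c, d, e = 71, 1, 1, 1
beta = [6, 7, 8, 10]

# N(beta) = product of beta evaluated at the primitive 5th roots of unity,
# i.e. the resultant of Phi_5 and the polynomial with coefficients beta.
norm = resultant(cyclotomic_poly(5, X),
                 sum(bi * X**i for i, bi in enumerate(beta)), X)
assert norm == b**2 * c**4 * d**6 * e**6          # 5041 = 71^2

m = (legendre_symbol(n, 5) * b * c**2 * d**3 * e**4
     - n**2 * sum(beta)) // 5
print(m)                                          # -1201
\end{verbatim}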
\begin{table}[H] \caption{$1\leq n\leq 1000,\, 5\nmid n$ and $\Delta_n$ is not square-free } \label{table:1000} \begin{center} {\scriptsize \begin{tabular}{|c||c|c|c|} \hline \rule{0pt}{4mm} $n$ & $\Delta_n$ & $\mathfrak f_{K_n}$ & A generator of NIB \\ \hline \hline \rule{0pt}{4mm} $14$ & $11\cdot71^2$ & $11\cdot 71$ & $\frac{1}{71} (6\rho+7\rho^{(1)} +8\rho^{(2)} +10\rho^{(3)} +1201)$ \\ \hline \rule{0pt}{4mm} $44$ & $61 \cdot 41^3$ & $41\cdot 61$& $\frac{1}{ 41^2} ( 28\rho +39 \rho^{(1)} +36\rho^{(2)} +48\rho^{(3)}+58131 ) $ \\ \hline \rule{0pt}{4mm} $69$ & $201511\cdot 11^2$ & $11\cdot 201511 $ & $\frac{1}{11} (\rho-3\rho^{(1)} -\rho^{(2)} -\rho^{(3)} -3811)$ \\ \hline \rule{0pt}{4mm} $71$ & $7331\cdot 61^2$ & $61 \cdot 7331$ & $\frac{1}{61} (-3\rho -2\rho^{(2)} +6\rho^{(3)} +996)$ \\ \hline \rule{0pt}{4mm} $83$ & $11\cdot 2141^2$ & $11 \cdot 2141$ & $\frac{1}{2141} (16\rho +2 \rho^{(1)}-37\rho^{(2)} +10\rho^{(3)} -11972)$ \\ \hline \rule{0pt}{4mm} $86$ & $31 \cdot 15461\cdot 11^2$ & $11 \cdot 31 \cdot 15461$& $\frac{1}{11} (4\rho +3\rho^{(1)}+2\rho^{(2)} +2\rho^{(3)} +16269)$ \\ \hline \rule{0pt}{4mm} $98$ & $191\cdot4201\cdot 11^2$ & $11\cdot191\cdot4201$ & $\frac{1}{11} (2\rho -2\rho^{(2)} +\rho^{(3)} +1923)$ \\ \hline \rule{0pt}{4mm} $207$ & $15545731\cdot 11^2$ & $11\cdot 15545731$ & $\frac{1}{11} ( 4\rho +3 \rho^{(1)} +2\rho^{(2)} +2\rho^{(3)} +94270) $ \\ \hline \rule{0pt}{4mm} $219$ & $19450411\cdot 11^2$ & $11\cdot 19450411$ & $\frac{1}{11} ( 2\rho -2 \rho^{(2)} +\rho^{(3)} +9590) $ \\ \hline \rule{0pt}{4mm} $226$ & $101\cdot19841\cdot 11^3$ & $11\cdot 101 \cdot 19841$ & $\frac{1}{11^2} ( -2\rho -9 \rho^{(1)} -6\rho^{(2)} -12\rho^{(3)} -296265) $ \\ \hline \rule{0pt}{4mm} $276$ & $61\cdot100801\cdot 31^2$ & $31 \cdot 61 \cdot 100801$ & $\frac{1}{31} ( \rho -2\rho^{(1)} -2\rho^{(2)} +4\rho^{(3)} +15229) $ \\ \hline \rule{0pt}{4mm} $311$ & $131\cdot 461\cdot1301\cdot 11^2$ & $11\cdot 131 \cdot 461 \cdot 1301$ & $\frac{1}{11} ( \rho-3 \rho^{(1)} -\rho^{(2)} -\rho^{(3)} -77379) $ \\ \hline \rule{0pt}{4mm} $328$ & $97127081\cdot 11^2$ & $11 \cdot 97127081$ & $\frac{1}{11} ( 4\rho +3\rho^{(1)} +2\rho^{(2)} +2\rho^{(3)} +236687) $ \\ \hline \rule{0pt}{4mm} $347$ & $121562411\cdot 11^2$ & $11\cdot 121562411$ & $\frac{1}{11} ( 2\rho +\rho^{(2)} -2\rho^{(3)}+24084 ) $ \\ \hline \rule{0pt}{4mm} $432$ & $291193681 \cdot 11^2$ & $11\cdot 291193681$ & $\frac{1}{11} ( \rho -3\rho^{(1)} -\rho^{(2)} -\rho^{(3)}-149297 ) $ \\ \hline \rule{0pt}{4mm} $449$ & $30877981 \cdot 11^3$ & $11\cdot 30877981$ & $\frac{1}{ 11^2} ( 10\rho +6\rho^{(1)} +12\rho^{(2)} +3\rho^{(3)}+1249902 ) $ \\ \hline \rule{0pt}{4mm} $461$ & $377340791\cdot 11^2$ & $11 \cdot 377340791$ & $\frac{1}{11} ( 2\rho -2\rho^{(2)} +\rho^{(3)} +42502) $ \\ \hline \rule{0pt}{4mm} $468$ & $400721701 \cdot 11^2$ & $11 \cdot 400721701$ & $\frac{1}{11} ( 2\rho +\rho^{(2)} -2\rho^{(3)} +43807) $ \\ \hline \rule{0pt}{4mm} $484$ & $131\cdot440431\cdot 31^2$ & $31\cdot 131 \cdot 440431$& $\frac{1}{31} ( -\rho +3\rho^{(1)} -3\rho^{(2)} -3\rho^{(3)} -187411) $ \\ \hline \rule{0pt}{4mm} $544$ & $41\cdot 2243281 \cdot 31^2$ & $31\cdot 41 \cdot 2243281$ & $\frac{1}{31} ( 6\rho +3 \rho^{(1)} +2\rho^{(3)} +651053) $ \\ \hline \rule{0pt}{4mm} $553$ & $779911631 \cdot 11^2$ & $11 \cdot 779911631$ & $\frac{1}{11} ( \rho -3\rho^{(1)} -\rho^{(2)} -\rho^{(3)} -244645) $ \\ \hline \rule{0pt}{4mm} $582$ & $31\cdot71\cdot571\cdot761 \cdot 11^2$ & $11 \cdot 31 \cdot 71 \cdot 571 \cdot 761$ & $\frac{1}{11} ( 2\rho -2\rho^{(2)} +\rho^{(3)}+67747 ) $ \\ \hline 
\rule{0pt}{4mm} $589$ & $7151\cdot140281 \cdot 11^2$ & $11\cdot 7151 \cdot 140281$ & $\frac{1}{11} ( 2\rho +\rho^{(2)} -2\rho^{(3)} +69382) $ \\ \hline \rule{0pt}{4mm} $613$ & $1091\cdot 135781\cdot 31^2$ & $31 \cdot 1091 \cdot 135781$ & $\frac{1}{31} ( -6\rho -4\rho^{(1)} -6\rho^{(2)} -3\rho^{(3)} -1427916) $ \\ \hline \rule{0pt}{4mm} $674$ & $20411 \cdot 84181 \cdot 11^2$ & $11 \cdot 20411 \cdot 84181$ & $\frac{1}{11} ( \rho -3\rho^{(1)} -\rho^{(2)} -\rho^{(3)} -363423) $ \\ \hline \rule{0pt}{4mm} $691$ & $1897892411\cdot 11^2$ & $11 \cdot 1897892411$& $\frac{1}{11} ( 4\rho +3\rho^{(1)}+2 \rho^{(2)} +2\rho^{(3)} +1050456) $ \\ \hline \rule{0pt}{4mm} $703$ & $61\cdot 101\cdot 311 \cdot 1061 \cdot 11^2$ & $11\cdot 61 \cdot 101 \cdot 311 \cdot 1061$& $\frac{1}{11} ( 2\rho -2\rho^{(2)} +\rho^{(3)} +98844) $ \\ \hline \rule{0pt}{4mm} $726$ & $166407091\cdot 41^2$ & $41 \cdot 166407091$ & $\frac{1}{41} ( -2\rho -6\rho^{(1)} -4\rho^{(2)}-7\rho^{(3)} -2002897) $ \\ \hline \rule{0pt}{4mm} $812$ & $48491 \cdot 74551 \cdot 11^2$ & $11 \cdot 48491 \cdot 74551$ & $\frac{1}{11} ( 4\rho +3\rho^{(1)} +2\rho^{(2)} +2\rho^{(3)} +1450559) $ \\ \hline \rule{0pt}{4mm} $824$ & $61\cdot 3271 \cdot 19211 \cdot 11^2$ & $11 \cdot 61 \cdot 3271 \cdot 19211$ & $\frac{1}{11} ( 2\rho -2\rho^{(2)} +\rho^{(3)} +135793) $ \\ \hline \rule{0pt}{4mm} $831$ & $41 \cdot 571 \cdot 169361 \cdot 11^2$ & $11\cdot 41 \cdot 571 \cdot 169361$ & $\frac{1}{11} ( 2\rho +\rho^{(2)} -2\rho^{(3)} +138110) $ \\ \hline \rule{0pt}{4mm} $916$ & $ 31\cdot 17155921\cdot 11^3$ & $11\cdot 31\cdot 17155921$ & $\frac{1}{ 11^2} (4 \rho -6 \rho^{(1)} -3\rho^{(2)}+6 \rho^{(3)} +167787) $ \\ \hline \rule{0pt}{4mm} $933$ & $101\cdot 62337371\cdot 11^2$ & $11 \cdot 101 \cdot 62337371$ & $\frac{1}{11} ( 4\rho +3 \rho^{(1)} +2\rho^{(2)} +2\rho^{(3)}+1915078 ) $ \\ \hline \rule{0pt}{4mm} $952$ & $101\cdot 811\cdot 83311 \cdot 11^2$ & $11 \cdot 101 \cdot 811 \cdot 83311$& $\frac{1}{11} ( 2\rho + \rho^{(2)} -2\rho^{(3)} +181263) $ \\ \hline \end{tabular} } \end{center} \end{table} \end{ex} \begin{ex}\label{ex:d-not-1} The smallest positive integer with $d \ne 1$ is $n=2888$. In this case, we have $\Delta_n=11^4 \cdot 4759595441$ and $\mathfrak f_{K_n}=11 \cdot 4759595441$, and $ (-16\rho-6\rho^{(1)} -26\rho^{(2)} -41\rho^{(3)} -148461417)/11^3$ is a generator of an NIB of $K_n$. \end{ex} \begin{ex}\label{ex:e-not-1} The smallest positive integer with $e \ne 1$ is $n=7721$. In this case, we have $\Delta_n=11^5\cdot 26501 \cdot 833201$ and $\mathfrak f_{K_n}=26501\cdot 833201$, and $ (-10\rho+6\rho^{(1)} -35\rho^{(2)} -20\rho^{(3)} -703446252)/11^4$ is a generator of an NIB of $K_n$. \end{ex} \begin{ex}\label{two-not-1} The smallest positive integer for which two of $b,c,d$ and $e$ are not $1$ is $n=40846$. In this case, we have $\Delta_n=11^4 \cdot 31^2\cdot 197859618251$ and $\mathfrak f_{K_n}=11 \cdot 31 \cdot 197859618251$, and $(-211\rho-96\rho^{(1)} -158\rho^{(2)} -14\rho^{(3)} -159832317845)/(11^3 \cdot 31)$ is a generator of an NIB of $K_n$. \end{ex} \section{All normal integral bases} Let $\alpha$ be a generator of an NIB of $K_n$. Then the set of all generators of NIBs is $\{ \pm \sigma^{\ell} (1-\sigma^2-\sigma^3 )^k.\alpha \, |\, \ell \in {\mathbb Z}/5{\mathbb Z}, k\in {\mathbb Z} \}$. Davis, Eloff, Spearman and Williams \cite{DES,ESW2} determined all NIBs of $K_n$ for $n=-1$ and parametrized them using the Fibonacci and Lucas numbers. In this section, we follow their method and find all NIBs of $K_n$ for general $n$.
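Before introducing the sequences $a_k, b_k, c_k$ below, we record a small computational sketch (ours, not part of the original argument): modeling ${\mathbb Z}[G]$ as integer vectors of length $5$ with multiplication modulo $\sigma^5=1$, one can verify the unit relation in (\ref{eq:ZG}) and preview the coefficient pattern of Lemma~\ref{lem:rho-action} below.
\begin{verbatim}
# Model Z[G] for G = <sigma>, sigma^5 = 1, as length-5 integer vectors
# (coefficients of 1, sigma, ..., sigma^4); check that
# u = 1 - sigma^2 - sigma^3 is a unit with inverse 1 - sigma - sigma^4.
def mul(x, y):
    z = [0] * 5
    for i in range(5):
        for j in range(5):
            z[(i + j) % 5] += x[i] * y[j]
    return z

u    = [1, 0, -1, -1, 0]
uinv = [1, -1, 0, 0, -1]
assert mul(u, uinv) == [1, 0, 0, 0, 0]

# Powers u^k carry the coefficient pattern (a_k, b_k, -c_k, -c_k, b_k)
# of Lemma 4.3; e.g. for k = 2:
print(mul(u, u))   # [3, 1, -2, -2, 1] -> a_2 = 3, b_2 = 1, c_2 = 2
\end{verbatim}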
Let $\lambda := (3+\sqrt{5})/2 =( (1+\sqrt{5})/2)^2$ be the square of the fundamental unit of ${\mathbb Q} (\sqrt{5})$, and $\overline{\lambda}:=(3-\sqrt{5})/2$ its conjugate. For any integer $ k$, define the sequences $a_k, b_k$, and $c_k$ by \begin{align} a_k & := \frac{1}{5} ( (-1)^k +2 (\lambda^k +\overline{\lambda}^k ) ), \notag \\ b_k & := \frac{1}{2} (a_k-a_{k-1}), \label{eq:abc} \\ c_k & :=\frac{1}{2} (a_{k+1} -a_k) \quad (=b_{k+1}). \notag \end{align} For example, $a_0=a_1=1,\, a_2=3,\, b_0=b_1=0,\, b_2=1,\, c_0=0,\, c_1=1,\, c_2=2 $. Let $L_k$ be the Lucas number defined by $L_0:=2,\, L_1=1$ and $L_k=L_{k-1}+L_{k-2}$. We can show the following lemma from the formula: $L_k= ( (1+\sqrt{5})/2)^k +((1-\sqrt{5})/2)^k$. \begin{lem}\label{lem:abc-L} For any $k\in {\mathbb Z}$, we have $a_k =((-1)^k +2L_{2k})/5,\, b_k =((-1)^k +L_{2k-1})/5$ and $ c_k=((-1)^{k+1} +L_{2k+1} )/5$. \end{lem} Using the above lemma, we can also prove the following lemma. \begin{lem}\label{lem:abc-rel} For any $k\in {\mathbb Z}$, we have $a_k,b_k,c_k \in {\mathbb Z}$ and $a_k=a_{-k},\, b_{-k}=-c_k,\, a_{k+1}-2a_k -2a_{k-1}+a_{k-2}=0,\, b_{k+1} -2b_k -2b_{k-1}+b_{k-2}=0,\, c_{k+1}-2c_k -2c_{k-1}+c_{k-2}=0$. \end{lem} Let $\rho =\rho_n$ be a root of the quintic polynomial $f_n(X)$ and $G=\mathrm{Gal}(K_n/{\mathbb Q})=\langle \sigma \rangle$. The action of $(1-\sigma^2-\sigma^3)^k $ on $\rho$ is given by the sequences $a_k, b_k$ and $c_k$ as follows. \begin{lem}\label{lem:rho-action} For any $k\in {\mathbb Z}$, we have the following. \begin{align*} (1-\sigma^2-\sigma^3)^k . \rho & =a_k \rho +b_k (\rho^{(1)}+\rho^{(4)} )-c_k (\rho^{(2)}+\rho^{(3)} ) \\ & =(a_k +b_k (\sigma+\sigma^4) -c_k (\sigma^2+\sigma^3)). \rho \end{align*} \end{lem} \begin{proof} The assertion can be shown by induction on $k$. \end{proof} By acting $\pm \sigma^{\ell} (1-\sigma^2-\sigma^3)^k \, (\ell \in {\mathbb Z}/5{\mathbb Z}, k\in {\mathbb Z})$ to the generator of an NIB obtained in Theorem~\ref{theo:main}, we can obtain all generators of NIBs. \begin{theo}\label{theo:all-NIB} Let $n$ be an integer with $5\nmid n$ and $\beta_0, \beta_1, \beta_2, \beta_3, m \, (\in {\mathbb Z})$ as in Theorem~\ref{theo:main}. Put \begin{align*} \theta_0 (k) & :=a_k \beta_0 +b_k \beta_1-c_k\beta_2-c_k\beta_3, \\ \theta_1(k) & :=b_k \beta_0 +a_k \beta_1+b_k \beta_2 -c_k \beta_3, \\ \theta_2(k) & :=-c_k \beta_0 +b_k \beta_1+a_k \beta_2+b_k \beta_3, \\ \theta_3 (k) & :=-c_k \beta_0-c_k \beta_1+b_k \beta_2+a_k \beta_3, \\ \theta_4(k) & :=b_k\beta_0-c_k\beta_1-c_k \beta_2+b_k \beta_3. \end{align*} Then we have \[ \{ x \ |\ \text{a generator of an NIB of $K_n$} \} =\{ \pm \sigma^{\ell}.\, \xi_k \ |\ \ell \in \mathbb Z/5\mathbb Z,\ k\in \mathbb Z \}, \] where \[ \xi_k:= \frac{1}{bc^2d^3e^4} \left( \sum_{t=0}^4 \theta_t(k) \rho^{(t)}-(-1)^km \right) \] \end{theo} \begin{proof} Let \begin{align*} \alpha & :=\dfrac{1}{bc^2d^3e^4} ( \beta_0 \rho +\beta_1 \rho^{(1)}+\beta_2 \rho^{(2)} +\beta_3 \rho^{(3)} -m) \\ & = \dfrac{1}{bc^2d^3e^4} ((\beta_0 +\beta_1 \sigma +\beta_2 \sigma^2 +\beta_3 \sigma^3 ). \rho -m) \end{align*} be the generator of an NIB obtained by Theorem~\ref{theo:main}. It follows from Lemma~\ref{lem:rho-action} that \begin{align*} & \pm \sigma^{\ell} (1-\sigma^2-\sigma^3)^k. \alpha \\ & =\pm \frac{1}{bc^2d^3e^4} \sigma^{\ell} ( (\beta_0+\beta_1 \sigma +\beta_2 \sigma^2 +\beta_3 \sigma^3 )(a_k+b_k (\sigma+\sigma^4)-c_k(\sigma^2+\sigma^3)) .\rho -(-1)^km). 
\end{align*} Furthermore, direct calculations yield \begin{align*} & (\beta_0+\beta_1 \sigma +\beta_2 \sigma^2 +\beta_3 \sigma^3 )(a_k+b_k (\sigma+\sigma^4)-c_k(\sigma^2+\sigma^3)) \\ & = \theta_0(k)+\theta_1(k)\sigma +\theta_2(k) \sigma^2 +\theta_3(k) \sigma^3+\theta_4(k)\sigma^4. \end{align*} The proof is complete. \end{proof} \begin{cor}\label{cor:all-NIB-squarefree} Let $n$ be an integer with $5\nmid n$ and $v:=\left( n^2-\left(\frac{n}{5} \right)\right)/5$. Assume that $\Delta_n$ is square-free. Then we have \begin{align*} & \{ x \, |\, \text{a generator of an NIB of $K_n$ } \} =\{ \pm \sigma^{\ell} .\, \xi_k \ |\ \ell \in \mathbb Z/5\mathbb Z, k\in \mathbb Z \}, \end{align*} where $ \xi_k :=(-1)^k v+a_k \rho +b_k (\rho^{(1)}+\rho^{(4)} )-c_k (\rho^{(2)}+\rho^{(3)} )$. \end{cor} \begin{proof} Since $\Delta_n$ is square-free, we have $b=c=d=e=1$, and hence $\beta_0=1,\, \beta_1=\beta_2=\beta_3=0$. Therefore, we obtain $\theta_0(k)=a_k,\, \theta_1(k)=\theta_4(k)=b_k,\, \theta_2(k)=\theta_3(k)=-c_k$. From Theorem~\ref{theo:all-NIB}, the generators of all NIBs of $K_n$ are given by $\pm \sigma^{\ell}.\, \xi_k$ for some $\ell \in {\mathbb Z}/5{\mathbb Z},\, k\in \mathbb Z$. \end{proof} The following example is the case of $n=-1$ which Davis, Eloff, Spearman and Williams considered in \cite{DES,ESW2}. In this case, we have $K_n={\mathbb Q} (\zeta_{11}+\zeta_{11}^{-1})$ and $f_n(X)=X^5+X^4-4X^3-3X^2+3X+1$ is the minimal polynomial of $\zeta_{11}+\zeta_{11}^{-1}$. \begin{ex}\label{ex:n=-1} Let $n=-1$. We have $f_n(X)=X^5+X^4-4X^3-3X^2+3X+1,\, \Delta_n=11,\, a=11,\, b=c=d=e=1,\, \delta_n=1,\,\mathfrak f_{K_n}=11$ and $D_{K_n}=11^4$. Furthermore, we have $\rho^{(1)}=\rho^4-4\rho^2+2,\ \rho^{(2)}=-\rho^4-\rho^3+3\rho^2+2\rho-1,\ \rho^{(3)} =\rho^2-2$ and $\rho^{(4)} =\rho^3-3\rho$. Since $v=\left(n^2-\left(\frac{n}{5} \right)\right)/5=0$, it follows from Corollary~\ref{cor:all-NIB-squarefree} that \[ \{ x \, | \, \text{ a generator of an NIB of $K_n$} \} = \{ \pm \sigma^{\ell}. \, \xi_k \, |\, \ell \in {\mathbb Z}/5{\mathbb Z},\, k\in {\mathbb Z} \}, \] where \begin{align*} \xi_k & = a_k \rho +b_k (\rho^{(1)}+\rho^{(4)} )-c_k (\rho^{(2)}+\rho^{(3)} ) \\ & = \frac{(-1)^k+2L_{2k}}{5} \rho +\frac{(-1)^k+L_{2k-1}}{5} (\rho^{(1)}+\rho^{(4)} )-\frac{(-1)^{k+1}+L_{2k+1}}{5} (\rho^{(2)}+\rho^{(3)} )\\ & =\frac{1}{5} ( (-1)^{k+1}+2L_{2k-1}+3L_{2k+1} ) -L_{2k-1} \, \rho-\frac{4}{5} (L_{2k-1}+L_{2k+1}) \rho^2 \\ & \quad +\frac{1}{5} (L_{2k-1}+L_{2k+1}) \, \rho^3+ \frac{1}{5} (L_{2k-1}+L_{2k+1}) \, \rho^4. \end{align*} \end{ex} \begin{rem}\label{rem:n=-1} Let $F_k$ be the Fibonacci number defined by $F_0:=0,\, F_1:=1$ and $ F_k=F_{k-1}+F_{k-2}$. In \cite[Theorem~1]{DES}, they showed that \[ \{ x \, | \, \text{ a generator of an NIB of $\mathbb Q (\zeta_{11}+\zeta_{11}^{-1})$} \} = \{ \pm \sigma^{\ell}. \gamma_k \, |\, \ell \in {\mathbb Z}/5{\mathbb Z},\, k\in {\mathbb Z} \} \] where \[ \gamma_k :=\frac{1}{10}(25F_{2k}+(-1)^k L_{2k}-2) +\frac{1}{2} (-5F_{2k} +(-1)^k L_{2k})\, \rho -4F_{2k}\, \rho^2+F_{2k}\, \rho^3 +F_{2k}\, \rho^4. \] Using the formulae $F_{-k}=(-1)^{k+1} F_k,\, L_{-k}=(-1)^k L_k$ and $L_{k-1}+L_{k+1}=5F_k$, it can be shown that the relation $\xi_k =(-1)^k \gamma_{(-1)^k k}$ between $\gamma_k$ and $\xi_k$ in Example~\ref{ex:n=-1} holds for any $k\in {\mathbb Z}$. \end{rem} \begin{ex}\label{ex:NIBn=14} Let $n=14$.
We have $f_n(X)=X^5+196X^4-6814X^3+54507X^2+3678X+1,\, \Delta_n=11\cdot 71^2,\, a=11,\, b=71,\, c=d=e=1,\, \delta_n=7^2\cdot 79,\,\mathfrak f_{K_n}=11\cdot 71$ and $D_{K_n}=11^4\cdot 71^4$. It follows from Theorem~\ref{theo:all-NIB} that \[ \{ x \, | \, \text{ a generator of an NIB of $K_n$} \} = \{ \pm \sigma^{\ell}. \xi_k \, |\, \ell \in {\mathbb Z}/5{\mathbb Z},\, k\in {\mathbb Z} \}, \] where \begin{align*} \xi_k = & (\theta_0(k)\rho+\theta_1(k)\rho^{(1)}+\theta_2(k)\rho^{(2)}+\theta_3(k)\rho^{(3)}+\theta_4(k)\rho^{(4)}+(-1)^k\, 1201)/71,\\ \theta_0(k) = &\frac{31}{5}(-1)^k -L_{2k-1}-\frac{6}{5}L_{2k+1}, \\ \theta_1(k) =& \frac{31}{5} (-1)^k+ \frac{4}{5} L_{2k+1}, \\ \theta_2(k) = & \frac{31}{5}(-1)^k+ \frac{1}{5} L_{2k-1} +2 L_{2k+1},\\ \theta_3(k) = & \frac{31}{5} (-1)^k -\frac{12}{5} L_{2k-1} +\frac{7}{5} L_{2k+1} , \\ \theta_4(k) =& \frac{31}{5} (-1)^k+ \frac{16}{5} L_{2k-1} -3 L_{2k+1} . \end{align*} The following table shows $\xi_k$ for $k$ satisfying $-5\leq k\leq 5$. \begin{table}[H] \caption{$\xi_k$ for $-5 \leq k\leq 5$}\label{table:n=14} \begin{center} {\scriptsize \begin{tabular}{|c||c|} \hline \rule{0pt}{4mm} $k$ & $\xi_k$ \\ \hline \hline \rule{0pt}{4mm} $-5$ & $(71+1451\rho-304\rho^{(1)} -959\rho^{(2)} +1856\rho^{(3)}-2044\rho^{(4)} )/355$\\ \hline \hline \rule{0pt}{4mm} $-4$ & $(-71+554\rho-116\rho^{(1)} -366\rho^{(2)} +709\rho^{(3)}-11\rho^{(4)} )/355$\\ \hline \hline \rule{0pt}{4mm} $-3$ & $(71+211\rho-44\rho^{(1)} -139\rho^{(2)} +271\rho^{(3)}-299\rho^{(4)} )/355$\\ \hline \hline \rule{0pt}{4mm} $-2$ & $(-71+79\rho-16\rho^{(1)} -51\rho^{(2)} +104\rho^{(3)}-116\rho^{(4)} )/355$ \\ \hline \hline \rule{0pt}{4mm} $-1$ & $(71+26\rho-4 \rho^{(1)} -14\rho^{(2)} +41\rho^{(3)}-49\rho^{(4)} )/355$\\ \hline \hline \rule{0pt}{4mm} $0$ & $(-71-\rho+4 \rho^{(1)} +9\rho^{(2)} +19\rho^{(3)}-31\rho^{(4)} )/355$\\ \hline \hline \rule{0pt}{4mm} $1$ & $(71-29\rho+16\rho^{(1)} +41\rho^{(2)} +16\rho^{(3)}-44\rho^{(4)} )/355$ \\ \hline \hline \rule{0pt}{4mm} $2$ & $(-71-86\rho+44\rho^{(1)} +114\rho^{(2)} +29\rho^{(3)}-101\rho^{(4)} )/355$ \\ \hline \hline \rule{0pt}{4mm} $3$ & $(71-229\rho+116\rho^{(1)} +301\rho^{(2)} +71\rho^{(3)}-259\rho^{(4)} )/355$ \\ \hline \hline \rule{0pt}{4mm} $4$ & $(-71-601\rho+304\rho^{(1)} +789\rho^{(2)} +184\rho^{(3)}-676\rho^{(4)} )/355$ \\ \hline \hline \rule{0pt}{4mm} $5$ & $(71-1574\rho+796\rho^{(1)} +2066\rho^{(2)} +481\rho^{(3)}-1769\rho^{(4)} )/355$ \\ \hline \hline \end{tabular} } \end{center} \end{table} \end{ex}
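As a final computational sketch (ours; the helpers \texttt{lucas}, \texttt{abc}, \texttt{theta} are our own names), the sequences of Lemma~\ref{lem:abc-L} and the coefficients $\theta_t(k)$ of Example~\ref{ex:NIBn=14} can be generated directly from Lucas numbers. Note that the table above writes each $\xi_k$ with $\rho^{(4)}$ eliminated via $\sum_i\rho^{(i)}=-n^2$, so the printed tuple corresponds to the form in Theorem~\ref{theo:main} rather than to the table entries verbatim.
\begin{verbatim}
# Generate a_k, b_k, c_k via Lemma 4.1 and the theta coefficients for
# n = 14 (beta = (6, 7, 8, 10)); theta(0) recovers the generator of
# Theorem 3.2.
def lucas(m):                                  # L_m for any integer m
    if m < 0:
        return (-1) ** (m % 2) * lucas(-m)     # L_{-m} = (-1)^m L_m
    x, y = 2, 1                                # L_0, L_1
    for _ in range(m):
        x, y = y, x + y
    return x

def abc(k):
    a = ((-1) ** k + 2 * lucas(2 * k)) // 5
    b = ((-1) ** k + lucas(2 * k - 1)) // 5
    c = ((-1) ** (k + 1) + lucas(2 * k + 1)) // 5
    return a, b, c

beta = [6, 7, 8, 10]                           # n = 14
def theta(k):
    a, b, c = abc(k)
    b0, b1, b2, b3 = beta
    return (a*b0 + b*b1 - c*b2 - c*b3,
            b*b0 + a*b1 + b*b2 - c*b3,
            -c*b0 + b*b1 + a*b2 + b*b3,
            -c*b0 - c*b1 + b*b2 + a*b3,
            b*b0 - c*b1 - c*b2 + b*b3)

print(theta(0))    # (6, 7, 8, 10, 0): the k = 0 generator for n = 14
\end{verbatim}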
{ "timestamp": "2022-09-26T02:06:55", "yymm": "2209", "arxiv_id": "2209.10858", "language": "en", "url": "https://arxiv.org/abs/2209.10858" }
\section{Introduction} The pursuit of high performance quantum clocks has historically spurred important developments, including laser cooling \cite{chu1985three,aspect1988laser,lett1988observation} and optical trapping \cite{ashkin1970acceleration,ashkin1978trapping}. In one highly successful clock architecture, large numbers of quantum absorbers are tightly confined in a magic wavelength optical lattice, affording reduced quantum projection noise \cite{katori2003ultrastable}. Such lattice clocks, so far employing atomic optical transitions, have realized record performance in both precision \cite{bothwell2022resolving,zheng2022differential,oelker2019demonstration,schioppo2017ultrastable,mcgrew2018atomic,bloom2014optical} and accuracy \cite{mcgrew2018atomic,bloom2014optical,bothwell2019jila,nicholson2015systematic,hobson2020strontium,nemitz2016frequency,yamanaka2015frequency}, ushering in a new era in space-time sensing \cite{zheng2022lab,takamoto2020test,delva2017test,lisdat2016clock}. In parallel, there is growing interest in advancing the laser spectroscopy and quantum control of more complex particles, such as diatomic or polyatomic molecules with rich rovibrational structure, motivated by fundamental physics applications \cite{safronova2018search,mitra2022quantum} that include the search for particles beyond the Standard Model \cite{Andreev2018,cairncross2017precision,alauze2021ultracold,grasdijk2021centrex,hutzler2020polyatomic,yu2021probing}, fifth forces \cite{germann2021three,salumbides2013bounds,Borkowski2019scirep}, the time variation of the electron-to-proton mass ratio \cite{Kobayashi2019,hanneke2020optical,barontini2022measuring,zelevinsky2008precision}, and dark matter \cite{oswald2022search,kozyryev2021enhanced}. Molecules also hold promise as new platforms for quantum computation and simulation \cite{kaufman2021quantum,zhang2022optical,burchesky2021rotational,wang2022enriching,zhang2022quantum,vilas2022magneto,albert2020robust,tesch2002quantum,altman2021quantum,moses2017new}. Recent experiments have shown quantum logic rotational spectroscopy of trapped $\mathrm{CaH}^+$ ions \cite{collopy2022rotational} and magic wavelength vibrational spectroscopy of $\mathrm{Sr}_2$ \cite{Kondov2019} at a precision of a few parts in $10^{-13}$. However, accurate vibrational spectroscopy of neutral molecules at (or below) the $10^{-13}$ level remains difficult and unexplored \cite{patra2020proton,kortunov2021proton,molony2016measurement,cheng2018dissociation,beyer2019determination,hussels2022improved}. Here, we extend the lattice clock architecture to trapped neutral molecules and characterize the systematic frequency shifts in a $\mathrm{Sr}_2$ vibrational lattice clock, achieving a total fractional systematic uncertainty of $4.5\times10^{-14}$, comparable to the earliest realizations of optical atomic lattice clocks \cite{takamoto2005optical,ludlow2006systematic,le2006accurate}. By carefully controlling the systematic shifts, we measure the absolute clock frequency to 13 digits, establishing it as one of the best known molecular frequencies in the terahertz (THz) band \cite{riehle2018cipm}. We leverage this to characterize the electronic ground potential of the strontium dimer, which originates from the van der Waals bonding of two closed-shell atoms, by determining the dissociation energy of $^{88}\mathrm{Sr}_2$ with an accuracy surpassing the previous record for a diatomic molecule \cite{molony2016measurement}.
The results described here may be applied to a wide class of molecules (e.g., hydrogen isotopologues \cite{Jozwiak2022}), enabling the refinement of molecular quantum electrodynamics calculations, tests of fundamental laws, and potentially opening new pathways for THz frequency metrology \cite{tonouchi2007cutting,Wang2018,nagano2021terahertz}. \section{Vibrational Clock \label{sec:methods}} \begin{figure*} \centering \includegraphics[width=2\columnwidth]{PaperFig1.pdf} \caption{Vibrational molecular lattice clock. (a) Raman lasers (upleg, red arrow; downleg, orange arrow) detuned from an intermediate state in $(1)0_u^+$ probe the vibrational clock transition between $(v=62, J=0)$ and $(v=0, J=0)$ in the $X^1\Sigma_g^+$ ground potential. The optical lattice (brown arrow) off-resonantly addresses an isolated rovibronic state in $(1)1_u$ to induce magic trapping conditions. (b) Experimental setup. The upleg master laser is stabilized to a reference cavity using the Pound-Drever-Hall (PDH) technique, and its phase coherence is transferred to the downleg laser via a frequency comb. The molecules are held in the 1D optical lattice. Co-propagating clock lasers are delivered to the molecules via an optical fiber with active fiber noise cancellation (FNC). The spectroscopic signal derives from absorption imaging of $X(62,0)$ photofragments at a slight grazing angle relative to the lattice. A Rb microwave standard acts as a transfer oscillator between the molecular clock and GPS time for the absolute frequency measurement. Further information is given in the main text and Appendices~\ref{sec:atomicprep} and \ref{sec:ramanlaser}. (c) Two-photon Rabi oscillations between the clock states driven at the operational probe intensities (filled circles, experimental data averaged over 8 consecutive runs, error bars represent $1\sigma$ uncertainties; solid red line, analytical fit to an exponentially decaying sinusoid). We observe lines as narrow as 11(1) Hz (inset, green squares). For clock operation, we perform Rabi spectroscopy with a 30 ms $\pi$-pulse duration (indicated by the black arrow), resolving 30(2) Hz linewidths consistent with the expected Fourier limit (inset, black open circles). Each point in the inset is a single shot of the experiment, and solid lines are Lorentzian fits.} \label{fig:expscheme} \end{figure*} The basic scheme of the molecular clock is as follows. We operate the clock on the pure vibrational transition between the most weakly bound and most tightly bound rotationless states, $(v=62, J=0)\rightarrow(v=0, J=0)$, in the $X^1\Sigma_g^+$ ground potential of $^{88}\mathrm{Sr}_2$. Here, $v$ and $J$ denote the vibrational and total angular momentum quantum numbers, respectively. In the absence of other fields, the blackbody radiation (BBR) limited lifetimes of the clock states exceed $10^5$ years \cite{Kondov2019}. The vibrational splitting of $\sim$32 THz constitutes the clock frequency, $f_\mathrm{clock}$. As a direct transition between $J=0$ states is strictly forbidden, we drive the clock transition via a Raman process using two diode lasers detuned from the intermediate excited state $(1)0_u^+(v'=11,J'=1)$. The relevant potentials are shown in Fig.~\ref{fig:expscheme}(a). The measurements take place in a one-dimensional optical lattice at 1005 nm. Trapped samples of ultracold molecules are created by photoassociating laser-cooled strontium atoms at 2 $\mu$K to the $(1)1_u(v'=-1,J'=1)$ rovibronic state.
This efficiently produces $X(62,0)$ ground state molecules thanks to the large transition strength \cite{leung2020transition}. Molecules formed in the undesired $J=2$ excited rotational state are photodissociated, and the remaining atoms are removed from the trap with resonant 461 nm laser light. Our detection scheme relies on state-selective photofragmentation of $X(62,0)$ followed by absorption imaging of the slow-moving atoms. As this destroys the molecular sample, the entire sequence is iterated to scan the clock transition. See Appendix \ref{sec:atomicprep} for details of the state preparation. Raman clock spectroscopy is performed deep in the Lamb-Dicke regime for co-propagating probes along the axial direction of the optical lattice (Lamb-Dicke parameter $\eta_\mathrm{LD} \lesssim 0.02$). The upleg (or pump) master laser at 378 THz (793 nm) is stabilized to a high finesse ultra-low expansion reference cavity with a measured drift rate of $0.03 \,\mathrm{Hz}/\mathrm{s}$ that is compensated using a linearly-ramped acousto-optic modulator. The phase coherence of the upleg is transferred to the teeth of an erbium-fiber-laser-based optical frequency comb by actuating on its repetition frequency [Fig.~\ref{fig:expscheme}(b)]. The carrier-envelope offset frequency of the comb is stabilized to a Rb standard that also serves as the laboratory timebase. The downleg (or anti-Stokes) laser at 410 THz (731 nm) is similarly phase locked to the comb to benefit from partial cancellation of laser noise, resulting in a relative frequency jitter between the upleg and downleg lasers that is approximately $12\times$ smaller than the jitter of the upleg laser. The upleg is passed through an acousto-optic modulator (AOM1 in Fig.~\ref{fig:expscheme}(b)), and the first-order diffraction is used to iteratively step the difference frequency of the clock lasers across $f_\mathrm{clock}$. The same acousto-optic modulator controls the interrogation duration by pulsing the upleg, while the downleg light is left on continuously (but blocked with a mechanical shutter during the state preparation process). Both clock lasers are delivered to the molecules via the same optical fiber, but active fiber noise cancellation \cite{ma1994delivering,rauf2018phase} is implemented separately; see Appendix \ref{sec:ramanlaser}. Figure~\ref{fig:expscheme}(c) shows two-photon Rabi oscillations driven by the clock lasers at the operational Rabi frequencies. Our apparatus is capable of producing clock lines with full width at half-maximum as narrow as 11(1) Hz, corresponding to a $Q$-factor of $2.9 \times 10^{12}$, with no fundamental obstacles for future improvement. However, pulse durations of $\sim$100 ms complicate the determination of molecular densities due to single- and two-body losses \cite{Kondov2019,leung2020transition,leung2021ultracold}. As a compromise, we evaluate clock systematics by performing Rabi spectroscopy with a 30 ms $\pi$-pulse, scanning Fourier-limited peaks of 30(2) Hz. The inset to Fig.~\ref{fig:expscheme}(c) shows a narrow spectrum as well as a typical spectrum consisting of 15 experimental iterations (taking a total duration of $\sim$20 s) that is fitted to a Lorentzian function to determine the line center. \section{Results} \subsection{Systematic evaluation} Table \ref{tab:systable} details the uncertainty budget of the molecular clock under operational conditions. Summing the uncertainties of all contributors in quadrature, we report a total systematic uncertainty of $4.5\times 10^{-14}$.
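Explicitly, the quadrature sum of the individual uncertainties listed in Table~\ref{tab:systable} reproduces this total:
\begin{equation*}
\sqrt{3.4^2+1.9^2+2.2^2+0.4^2+0.3^2+0.2^2+0.1^2}\times10^{-14}\approx4.5\times10^{-14}.
\end{equation*}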
We leverage the short-term frequency stability of our reference cavity to average down the uncertainty of a given systematic. Most frequency corrections in Table \ref{tab:systable} are determined by probing the clock transition in an interleaved fashion; i.e., we alternate an experimental parameter between two values and record the corresponding pair of line centers. This is repeated to gather statistics, and the clock shift $\Delta f_\mathrm{clock}$ is computed as a weighted average. We scale up all statistical uncertainties by the square root of the reduced chi-square statistic ($\chi^2_\mathrm{red}$) if $\chi^2_\mathrm{red} > 1$. Finally, the shift is extrapolated to determine the frequency correction for the clock at the operational parameter value. \begin{table} \caption{Systematic uncertainty budget for the strontium molecular clock under operating conditions. See Appendix \ref{sec:othersys} for the description of minor systematics not in the main text. All values are expressed in fractional units ($\times 10^{-14}$).} \label{tab:systable} \begin{ruledtabular} \centering \begin{tabular}{lll} Systematic & Correction & Uncertainty\\ \colrule Lattice Stark ($E1,M1,E2$) & 100.1 & 3.4\\ Lattice Stark (hyperpolarizability) & -50.8 & 1.9\\ Probe Stark (total) & 31.5 & 2.2\\ BBR & -2.2 & 0.4\\ Density & -0.6 & 0.3\\ Higher order Zeeman & 0 & 0.2\\ Doppler & 0 & 0.1\\ \colrule \textbf{Total} & 77.9 & 4.5 \end{tabular} \end{ruledtabular} \end{table} \subsubsection{Lattice light shift} Magic---or state-insensitive---trapping conditions can be engineered for the vibrational clock states by off-resonantly addressing $X^1\Sigma_g^+(0,0)\rightarrow (1)1_u(9,1)$ with the lattice. This protocol predominantly tunes the polarizability of $X(0,0)$, which matches that of $X(62,0)$ at a magic detuning of 4.493(3) GHz \cite{leung2021ultracold}. Importantly, the neighboring $(1)1_u(v',1)$ rovibronic resonances are spaced at intervals of $\sim$2 THz, and may cause deleterious shifts due to lattice light impurity (e.g., amplified spontaneous emission \cite{fasano2021characterization}). To mitigate this, the lattice light derives from a Ti:sapphire laser stabilized to the same optical frequency comb described in Section \ref{sec:methods}. This also permits the lattice frequency, $f_\mathrm{latt} = c/\lambda_\mathrm{latt}$, to be determined with kHz-level accuracy. The light is filtered through a linear cavity (finesse of 50 and a free spectral range of 2.9 GHz) before delivery to the experiment by a single-mode polarization maintaining fiber and retroreflected to form the optical lattice. A weak reflection from the vacuum window is used for lattice intensity stabilization during normal operation. The lattice polarization is linear and defines the quantization axis for the $X^1\Sigma_g^+$ states \cite{mcguyer2015control}. \begin{figure} \centering \includegraphics[width=\columnwidth]{PaperFig2.pdf} \caption{(Color online) Clock shifts due to the lattice light. (a) Nonlinear shifts of the molecular clock frequency versus trap depth. For a given lattice frequency (color coded), we make interleaved measurements of clock shifts (open circles) with respect to a reference trap depth ($\sim500\,E_r$), and fit the data to parabolas (solid lines) with a global quadratic parameter, $-\beta^*$. (b) Linear light shift coefficient, $\alpha^*$, versus lattice frequency (color code matches (a)), and the linear fit (black solid line).
$\alpha^*$ is predominantly due to the $E1$ differential polarizability and is nulled at $f_\mathrm{zero}$. By tuning $\alpha^*$, we can find conditions where the sensitivity of $\Delta f_\mathrm{clock}$ to fluctuations in $U_0$ is minimal at our operational trap depth of 487(4) $E_r$ (dark green points). Error bars represent 1$\sigma$ uncertainties.} \label{fig:latticelightshifts} \end{figure} We investigate the effect of the lattice light over a range of $f_\mathrm{latt}$. At each $f_\mathrm{latt}$ we make interleaved measurements of the clock shifts, alternating the trap depth $U_0$ between a reference depth and four other depths spanning from 300 $E_r$ to 1100 $E_r$, where $E_r \equiv h^2/(2M\lambda_\mathrm{latt}^2)$ is the recoil energy and $M$ is the molecular mass. The trap depths are determined from the axial trapping frequencies (Appendix \ref{sec:trapcal}). Small corrections ($<0.3\times10^{-14} \times f_\mathrm{clock}$) were made to account for density shifts. As shown in Fig.~\ref{fig:latticelightshifts}(a), our measurements reveal nonlinear light shifts, providing clear evidence of hyperpolarizability at this lattice wavelength. The origin of this effect is currently under investigation, but we hypothesize a connection with previously observed quadratic lattice scattering rates in a similar experiment \cite{Kondov2019}. In order to characterize the lattice shifts, we adopt the thermal model described in Ref.~\cite{brown2017hyperpolarizability}, \begin{equation}\label{eq:lattshifteqn} \Delta f_\mathrm{clock} = -\alpha^*U_0 - \beta^*U_0^2, \end{equation} where $\alpha^*$ and $\beta^*$ are empirically obtained from parabolic fits to the measured differential shifts. These parameters are effective values dependent on the trapping conditions: $\alpha^*$ is related to the differential electric-dipole ($E1$), magnetic-dipole ($M1$), and electric-quadrupole ($E2$) polarizabilities, while $\beta^*$ is related to the differential hyperpolarizability. Crucially, the polynomial form of Eq.~(\ref{eq:lattshifteqn}) hinges on a linear scaling of the sample temperature with $U_0$, which we verify for our molecules using Raman carrier thermometry (Appendix \ref{sec:trapcal}). We do not expect non-polynomial terms (e.g., $\propto\sqrt{U_0}$) to be significant at the level of the current evaluation. The fits give $\beta^* = -6.81(22) \times 10^{-5}\,\mathrm{Hz}/E_r^2$ as a global parameter. Additionally, the results for $\alpha^*$ versus $f_\mathrm{latt}$ are shown in Fig.~\ref{fig:latticelightshifts}(b), and a linear fit yields a sensitivity slope $\partial \alpha^*/\partial f_\mathrm{latt} = -0.0796(16) \,\mathrm{Hz}/(\mathrm{MHz}\, E_r)$ as well as an $x$-intercept $f_\mathrm{zero} = 298\,368\,568.844(21)\,\mathrm{MHz}$. Operating the molecular clock at a trap depth of $U_\mathrm{opt} = 487(4)\,E_r$ and $f_\mathrm{latt}-f_\mathrm{zero} = -0.821(21)\,\mathrm{MHz}$, we determine the correction terms to be $\alpha^*U_\mathrm{opt} = 31.8(1.1)\,\mathrm{Hz}$ and $\beta^*U_\mathrm{opt}^2 = -16.2(6)\,\mathrm{Hz}$, summing to a fractional correction of $49.3(3.8)\times 10^{-14}$. Under these conditions, $\Delta f_\mathrm{clock}$ is first-order insensitive to changes in $U_0$ (dark green points in Fig.~\ref{fig:latticelightshifts}). \subsubsection{Probe light shift} Probe light shifts pose an inherent challenge for two-photon spectroscopy. This is even more so for scalar clock states ($J=0$), which preclude the use of laser polarization-based cancellation schemes \cite{jackson2019magic}.
Here, because the probe intensities are low, the clock shifts scale linearly with intensity and are related to the differential polarizability at the respective probe wavelength ($\lambda_p$), \begin{equation}\label{eq:probeshifteqn} \Delta f_\mathrm{clock} = \frac{I_{p}}{2h\epsilon_0c}\left[\alpha_0(\lambda_{p})-\alpha_{62}(\lambda_{p})\right], \end{equation} where $\alpha_v$ is the $E1$ polarizability for the vibrational state $v$, $I_{p}$ is the probe laser intensity, and $p\in\{\uparrow,\downarrow\}$ specifies the laser: upleg ($\uparrow$) or downleg ($\downarrow$). Figure~\ref{fig:probelightshift} shows that linear extrapolation of probe shifts suffices for a molecular clock at the few $10^{-14}$ level. \begin{figure} \centering \includegraphics[width=\columnwidth]{PaperFig3.pdf} \caption{Clock shifts at the operational Raman detuning as a function of (a) the upleg laser intensity, and (b) the downleg laser intensity. The horizontal axes are normalized by the respective operational intensities, $I_{\uparrow,0}$ and $I_{\downarrow,0}$. Solid lines are linear fits to the data. Residuals are plotted in units of Hz. Error bars represent 1$\sigma$ uncertainties.} \label{fig:probelightshift} \end{figure} While tailored pulse sequences to alleviate probe light shifts have been proposed \cite{yudin2018generalized,zanon2016probe,zanon2006cancellation,hobson2016modified}, for this evaluation we opted for a more straightforward strategy. We can minimize the total probe light shift by using so-called balanced intensity ratios satisfying the condition $I_{\uparrow}\left[\alpha_0(\lambda_\uparrow)-\alpha_{62}(\lambda_\uparrow)\right] = -I_{\downarrow} \left[\alpha_0(\lambda_\downarrow)-\alpha_{62}(\lambda_\downarrow)\right]$. At the same time, a large Raman detuning---relative to the intermediate $(1)0_u^+(11,1)$ excited state---is preferred so that off-resonant scattering from the probes has a negligible effect on the accessible coherence times. Figure~\ref{fig:probelightshift} demonstrates that such conditions exist in our clock for blue detunings where the baseline polarizability differences at the probe wavelengths have opposite signs, in agreement with our polarizability model (Appendix \ref{sec:abinitiopol}). We operate at a Raman detuning of +14.973 GHz, much greater than the 5 MHz natural linewidth of the intermediate state \cite{leung2021ultracold}. We evaluate $\Delta f_\mathrm{clock}$ for each leg separately. Using a motorized neutral density filter, we switch between two intensity values for one leg while keeping that of the other leg constant at its operational value. The $\pi$-pulse durations are adjusted accordingly. Typical settings for the interleaved measurements are $(P_{\uparrow,0},9 P_{\uparrow,0})$ and $(P_{\downarrow,0},3.5 P_{\downarrow,0})$, where $P_{p,0}=I_{p,0}(\pi w_p^2/2)$ are the operational powers measured with a calibrated power meter immediately before the vacuum window. These shifts are scaled by the measurement lever arms to obtain the clock corrections at the operational settings: $-(\Delta f_\mathrm{clock}/\Delta P_p) \times P_{p,0}$. We find the corrections to be $-277.5(1.4)\times 10^{-14}$ for the upleg, and $309.0(1.7)\times 10^{-14}$ for the downleg. Drifts in $\Delta P_{p}$ are at the sub-percent level over the $\sim$2000 s duration of each probe light shift evaluation, and the weighted averages of $f_\mathrm{clock}$ typically have $\chi^2_\mathrm{red}\sim1$.
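For illustration, the interleaved analysis and lever-arm scaling described above reduce to a short calculation; the following is a minimal sketch with purely hypothetical numbers (not the measured values), assuming NumPy is available:
\begin{verbatim}
import numpy as np

# Interleaved line-center differences f(9*P0) - f(P0) in Hz (hypothetical)
shifts = np.array([-22.3, -21.8, -22.9, -21.5])
errs = np.array([0.9, 1.0, 0.8, 1.1])

# Weighted average of the differential shift
w = 1.0 / errs**2
mean = np.sum(w * shifts) / np.sum(w)
err = np.sqrt(1.0 / np.sum(w))

# Inflate the uncertainty when the scatter exceeds the error bars
chi2_red = np.sum(w * (shifts - mean)**2) / (len(shifts) - 1)
if chi2_red > 1:
    err *= np.sqrt(chi2_red)

# Lever arm: the shift was measured between P0 and 9*P0, so the
# correction at the operating power P0 is -mean/(9 - 1)
lever = 9.0 - 1.0
print(-mean / lever, err / lever)
\end{verbatim}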
Accurate knowledge of the beam waists $w_p$ is not necessary, as they are stable during an evaluation and enter as common factors that drop out of the calculations. Long-term drifts due to beam pointing instability may be monitored and countered by benchmarking the probe intensities using the molecules (e.g., through an Autler-Townes splitting, an on-resonance scattering rate, or the two-photon Rabi oscillation frequency), which we leave to future work. \subsubsection{Blackbody radiation shift} Homonuclear dimers are infrared inactive, conferring natural immunity to blackbody radiation (BBR). Using the formulas derived in Ref.~\cite{porsev2006multipolar}, the frequency correction due to BBR is calculated to be $-0.70(14)\,\mathrm{Hz}$ at an operating chamber temperature of $T_\mathrm{c,o} = 302(1)\,\mathrm{K}$; see Appendix \ref{sec:chambertherm} for a description of the chamber thermometry. The uncertainty is dominated by \textit{ab initio} calculations of the dc polarizabilities of the clock states (Appendix \ref{sec:abinitiopol}). Comparison with experimentally measured ac polarizability ratios shows agreement at the level of 10--20\%, to be expected from typical accuracies of theoretical transition strengths. Therefore, we assign a conservative fractional uncertainty of 20\% for the BBR shift. \subsubsection{Density shift} \begin{figure} \centering \includegraphics[width=\columnwidth]{PaperFig4.pdf} \caption{Density shift evaluation. (a) Clock shifts due to molecular collisions extrapolated to operating conditions (1 molecule per lattice site, averaged over filled sites), plotted versus the change in molecule number per site used for the interleaved measurement. A single constant suffices to fit the data (0.20(10) Hz, $\chi^2_\mathrm{red}=1.7$). (b) In the same dataset, the shift between successive resonances taken under identical experimental settings serves as a control experiment to check for technical offsets. As expected, this averages to zero (0.03(20) Hz, $\chi^2_\mathrm{red}=2.0$). All statistical errors are scaled up by $\sqrt{\chi^2_\mathrm{red}}$. Error bars represent 1$\sigma$ uncertainties. Both insets show the histogram of normalized residuals, and the solid red lines are Gaussian fits. } \label{fig:densityshift} \end{figure} Our $^{88}\mathrm{Sr}_2$ molecules are unprotected against $s$-wave collisions due to their bosonic character. The one-dimensional lattice forms a series of microtraps, each with a trap volume proportional to $(T/ \bar{\omega}^2)^{3/2}$. Here, $T$ is the temperature of the molecules, and $\bar{\omega}$ is the geometric mean of the angular trapping frequencies. We investigate density-dependent shifts arising from dimer-dimer collisions by modulating the average number of molecules per lattice site ($N_\mathrm{mol/site}$) at the beginning of the clock pulse. This is achieved by inserting a wait time immediately after photoassociation (PA) so that two-body collisions naturally reduce the molecule number \cite{leung2021ultracold,leung2020transition}. Fluctuations in $N_\mathrm{mol/site}$ are typically $<20\%$, and we assume equal occupancy across filled sites. Since both $T$ and $\bar{\omega}^2$ scale similarly with $U_0$, and the lattice intensity is stabilized, $N_\mathrm{mol/site}$ is a robust observable proportional to the molecular density. Assuming linear density shifts, we scale our differential measurements to find $\Delta f_\mathrm{clock}$ at the normal operating value of $N_\mathrm{mol/site}=1$.
Figure~\ref{fig:densityshift}(a) summarizes the measurements performed at various number differences ($\Delta N_\mathrm{mol/site}$), suggesting a correction of $-0.20(10)\,\mathrm{Hz}$, or $-0.63(31)\times10^{-14}$ in fractional units, due to collisional shifts. Control measurements using spectra taken under common experimental settings do not show evidence of spurious offsets in our data [Fig.~\ref{fig:densityshift}(b)]. It is instructive to compare the magnitude of our density shift with those of similarly performing atomic clocks. From a trap calibration (Appendix \ref{sec:trapcal}) we estimate a shift coefficient of $8(4)\times10^{-25}\,\mathrm{cm^{3}}$ after normalizing by the transition frequency. This is rather similar to the analogous optical atomic clock with bosonic $^{88}\mathrm{Sr}$ ($\sim 2\times10^{-25} \,\mathrm{cm^{3}}$ \cite{lisdat2009collisional}), while being orders of magnitude smaller than in Cs ($\sim1\times10^{-21}\,\mathrm{cm^{3}}$ \cite{gibble1993laser,dos2002controlling}) or Rb microwave clocks ($\sim5\times10^{-23}\,\mathrm{cm^{3}}$ \cite{sortais2000cold}). Future work may circumvent collisional shifts altogether by preparing samples with single molecule occupancy in a three-dimensional lattice \cite{akatsuka2008optical,akatsuka2010three,takano2017precise,kato2012observation} or an optical tweezer array \cite{zhang2022optical,yu2021coherent,madjarov2019atomic,young2020half}. \subsection{Absolute frequency evaluation} As illustrated in Fig.~\ref{fig:expscheme}(b), we reference all RF frequency counters and direct digital frequency synthesizers (DDS) in the experiment to a free-running Rb microwave standard (our local timebase). Calibration of this Rb clock is accomplished by comparing its 1 pulse-per-second (PPS) output with that of a dual-band global navigation satellite system (GNSS) receiver on a time interval counter (TIC). The Rb clock, therefore, serves as a transfer oscillator between the molecular clock and Global Positioning System (GPS) time. Occasionally, we manually aligned the Rb clock frequency relative to GPS during periods of experimental downtime if it exceeded a fractional offset of $1\times10^{-11}$ (but not during a measurement trial or TIC log). Each measurement trial of the absolute clock frequency is performed under operational conditions, where the molecular clock systematics are controlled at the level quoted in Table~\ref{tab:systable}. We repeatedly scan the clock transition to obtain a time series of line centers, while simultaneously counting the repetition rate of the frequency comb. The probe light shifts were evaluated at every trial to account for potential daily variations in probe laser beam pointing. We log the TIC measurements continuously for at least 24 hours to average down the satellite link noise, a duration longer than the uptime of the molecular clock. The TIC measurements as a function of elapsed time are split into 6 segments, to which independent linear fits are made. We take the average (standard deviation) of the slopes of the linear fits to be the fractional frequency offset (uncertainty) of the Rb clock relative to GPS. Since each TIC measurement is started by the rising edge of the 1 PPS from the Rb clock and stopped by the rising edge of the 1 PPS from the GNSS receiver, an average positive (negative) slope implies that a positive (negative) correction has to be made to the uncalibrated molecular clock frequency. Figure~\ref{fig:absfreq} shows the results of the measurement campaign, consisting of 10 trials performed on separate days.
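For illustration, the timebase calibration amounts to piecewise linear regression on the TIC record; the following is a minimal sketch with synthetic data (all numbers hypothetical):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic TIC record: Rb-vs-GPS 1 PPS time offsets (s) over 24 h,
# one reading per second, with a small fractional frequency offset.
t = np.arange(86400.0)
frac_offset = 3e-12  # hypothetical Rb clock fractional offset
tic = 1e-6 + frac_offset * t + 2e-9 * rng.standard_normal(t.size)

# Split into 6 segments and fit a line to each; the slope of a time
# offset versus time is dimensionless and equals the fractional
# frequency offset of the Rb clock relative to GPS.
slopes = [np.polyfit(ts, ys, 1)[0] for ts, ys in
          zip(np.array_split(t, 6), np.array_split(tic, 6))]
offset = np.mean(slopes)   # fractional frequency offset
uncert = np.std(slopes)    # its uncertainty
# A positive mean slope implies a positive correction to the
# uncalibrated molecular clock frequency.
print(offset, uncert)
\end{verbatim}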
A weighted average yields the absolute frequency of the $^{88}\mathrm{Sr}_2$ vibrational clock to be $f_\mathrm{clock} = \text{31 825 183 207 601.1(3.3) Hz}$, with a fractional uncertainty of $1.0\times 10^{-13}$, limited by the drift of the Rb standard during its calibration. The scale intervals of GPS time and International Atomic Time (TAI) differed by a few parts in $10^{15}$ during the campaign \cite{circularT}. \begin{figure} \centering \includegraphics[width=\columnwidth]{PaperFig5.pdf} \caption{(a) Absolute frequency of the clock transition measured over 10 trials (filled black circles) with all known frequency offsets corrected, including that of the local timebase (see main text for details). Blue error bars are 1$\sigma$ statistical uncertainties, dominated by the determination of the comb repetition rate rather than the stability of the scanned molecular clock lines. Red error bars are 1$\sigma$ systematic uncertainties due to the molecular clock only (see Table~\ref{tab:systable}). Black error bars are 1$\sigma$ total uncertainties, where the uncertainties of the local timebase calibrations are added in quadrature with the statistical and molecular clock systematic uncertainties. The black horizontal line shows the weighted average, and the shaded grey area shows the associated $\pm1\sigma$ standard error of the mean ($\chi^2_\mathrm{red} = 0.57$). (b) Histogram of all clock frequency measurements in the 10 trials, relative to the weighted average of $f_\mathrm{clock}$. The solid red line is a Gaussian fit to the histogram.} \label{fig:absfreq} \end{figure} \section{Conclusion} Few frequency standards currently exist in the THz band \cite{riehle2018cipm}. Our molecular clock serves as a THz reference and can generate stable radiation at 9.4 $\mu$m via photomixing \cite{preu2011tunable,hindle2011widely}. Alternatively, transitions in heteronuclear isotopologues could be driven directly with quantum cascade lasers \cite{bartalini2014frequency,consolino2019qcl}. To our knowledge, $f_\mathrm{clock}$ represents one of the most accurately measured molecular frequencies to date, on par with the unidentified rovibrational interval in $\mathrm{OsO}_4$ near the $\mathrm{R}(10)\,(00^01)\text{--}(10^00)$ emission line of the $\mathrm{^{12}C^{16}O}_2$ laser. This absorption line in $\mathrm{OsO}_4$ is a secondary representation of the SI second \cite{riehle2018cipm}, and was compared directly against a primary cesium standard by stabilizing a $\mathrm{CO}_2$ laser to the specific saturated absorption feature of $\mathrm{OsO}_4$ in a high-finesse cavity \cite{daussy2000performances,rovera2001optical}. We expect to reduce the uncertainty of our local timebase calibration to the same level as the molecular clock systematics (or better) by upgrading to a standard with intrinsically lower instability and utilizing two-way time transfer schemes. Molecular spectroscopy is increasingly appreciated as fertile ground in the search for new physics. The reported Hz-level molecular clock is a starting point for elucidating the bonding of the $\mathrm{Sr}_2$ dimer across a large range of internuclear distances and investigating hypothesized hadron-hadron interactions for differing nucleon numbers \cite{salumbides2013bounds}. The sum of $f_\mathrm{clock}$ with the binding energy of the least bound state $X(62,0)$ yields the dissociation energy ($D_0$) of our molecule with respect to the ${^1S}_0+{^1S}_0$ threshold.
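Explicitly, with $E_b$ denoting the binding energy of $X(62,0)$ quoted in the next paragraph, the determination amounts to a single sum:
\begin{align*}
D_0 &= f_\mathrm{clock} + E_b[X(62,0)]\\
&= 31\,825\,183\,207\,601.1(3.3)\,\mathrm{Hz} + 136\,644.7(5.0)\,\mathrm{kHz}\\
&= 31\,825\,319\,852.3(5.0)\,\mathrm{kHz},
\end{align*}
with the uncertainty dominated by that of the binding energy.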
While the analogous least bound vibrational states of $^{84}\mathrm{Sr}_2$ and $^{86}\mathrm{Sr}_2$ are known with sub-kHz uncertainties \cite{stellmer2012creation,aman2018photoassociative}, the current best measurement for $^{88}\mathrm{Sr}_2$ is at the kHz level \cite{McDonald2017}. Nevertheless, taking the binding energy of $X(62,0)$ to be $136.6447(50) \,\mathrm{MHz}$ from Ref.~\cite{McDonald2017}, which was determined using two-photon dissociation, we find $D_0(^{88}\mathrm{Sr}_2) = \text{31 825 319 852(5) kHz}$, or $\text{1 061.578 402 09(17)} \,\mathrm{cm}^{-1}$. This is an improvement of 5 orders of magnitude over the previously reported value for $\mathrm{Sr}_2$ in the available literature \cite{Stein2010}, and sets a new accuracy record for the determination of a molecular dissociation energy ($1.6\times 10^{-10}$ fractional uncertainty). To list a few competitive results, dissociation energies have been reported with fractional uncertainties of $4.4\times 10^{-10}$ for $^{87}\mathrm{Rb}^{133}\mathrm{Cs}$ \cite{molony2016measurement}, $6.9\times 10^{-10}$ for ortho-$\mathrm{H}_2$ \cite{cheng2018dissociation}, $8.6\times 10^{-10}$ for para-$\mathrm{H}_2$ \cite{beyer2019determination}, and $7.1\times 10^{-10}$ for ortho-$\mathrm{D}_2$ \cite{hussels2022improved}. In summary, we have demonstrated a vibrational molecular clock with a total systematic uncertainty of $4.5\times10^{-14}$, entering a new domain in high-resolution molecular spectroscopy. Our results are enabled by merging the key strengths of atomic clock techniques with molecular quantum science. Implementation of deeper atomic cooling \cite{zhang2022sub,akatsuka2021three}, operation at lower lattice and probe laser intensities, strengthening of the overall optomechanical stability, and other strategies for accessing longer coherence times should realistically improve the systematic uncertainty and facilitate its evaluation. \begin{acknowledgments} We thank J. Sherman for insightful discussions and invaluable advice on the absolute frequency measurement, and V. Lochab for early contributions to the vacuum chamber thermometry. This work was supported by NSF grant PHY-1911959, AFOSR MURI FA9550-21-1-0069, ONR grant N00014-21-1-2644, a Center for Fundamental Physics grant from the John Templeton Foundation \& Northwestern University, the Brown Science Foundation, and the Polish National Science Centre (NCN) grant 2016/20/W/ST4/00314. M. B. was partially funded by the Polish National Agency for Academic Exchange within the Bekker Programme, project PPN/BEK/2020/1/00306/U/00001, and by NCN grant 2017/25/B/ST4/01486. \end{acknowledgments}
{ "timestamp": "2022-09-29T02:19:42", "yymm": "2209", "arxiv_id": "2209.10864", "language": "en", "url": "https://arxiv.org/abs/2209.10864" }
\section{Introduction} Autonomous robots are sometimes deployed in environments that include hazards, i.e., locations that might disrupt the robot's operation, possibly causing it to crash, get stuck, or, more generally, fail its mission. Robots are usually capable of perceiving hazards that are anticipated during system development and that can be explicitly accounted for when designing the perception subsystem. Nonetheless, during deployment a robot might encounter situations that were not anticipated during system design (anomalies). In this paper, we discuss the challenges related to detecting anomalies from visual inputs and provide a dataset including many anomaly types. Because we have no model of how these hazards might appear, we consider anything that is novel or unusual as a potential hazard to be avoided. We do not deal with the problem of choosing an appropriate reaction once an anomaly is detected, which depends on the specific scenario. Ruff et~al.~\cite{ruff2021unifying} state that an anomaly is ``an observation that deviates considerably from some concept of normality''. This definition highlights the importance of the context: an anomaly is defined only when a concept of normality exists. In the literature, anomalies have taken different meanings depending on the application: unusual patterns in flight data~\cite{birnbaum2015unmanned}, texture changes in manufactured products~\cite{haselmann2018anomaly}, unexpected obstacles in cultivated fields~\cite{christiansen2016deepanomaly}. Surveys on anomaly detection~\cite{chandola2009anomaly, ruff2021unifying} propose a general categorization of anomalies based on two criteria: one classifying point, contextual, and collective anomalies; the other distinguishing between low-level sensory and high-level semantic anomalies. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{img/tabellozza.png} \caption{Summary of the anomalies represented in the dataset, with an example of each. Top row reports the AUC score of our model for each scenario. Below: the categorization of the dataset's anomalies along three of the four proposed axes.} \label{fig:tabellozza} \end{figure*} \subsection{Categorization of anomalies} We propose to categorize visual anomalies encountered by mobile robots during their operation along four independent axes. The first, following Ruff et~al.~\cite{ruff2021unifying}, differentiates low-level sensory and high-level semantic anomalies. Low-level anomalies are described by features close to the image space, such as image brightness, smoothness, noise, and texture; one example of such an anomaly occurs when the robot suddenly finds itself in the dark or when it is blinded by direct light. High-level anomalies refer to the semantic contents of the image: examples include the observation of a pressure gauge reporting a different value than usual, or of a puddle of liquid on the ground. The second axis represents whether an anomaly is a hazard to the robot. This is specific to the robot's characteristics (an oil puddle on the floor might represent a hazard for a ground robot but not for a drone) and its task (the puddle is not dangerous unless it lies on the robot's path). The third axis differentiates anomalies that are relevant to the robot's mission from anomalies that are not. For example, a patrolling robot might want to detect and report the fact that a door that is usually closed is observed to be open, whereas a delivery robot should not be affected by this observation unless it impacts its path planning.
The fourth axis discriminates visual anomalies that are geometric in nature from those that are not. Geometric anomalies have a well-defined 3D shape in the robot environment (e.g., a never-before-seen object in a normally-free corridor) or consist of changes in the position or shape of a part of the environment (a wall that collapsed). Non-geometric anomalies are anomalies that, while perceivable using an RGB camera, would be undetectable with an ideal depth sensor: for example, a puddle on the ground; plaster rubble scattered across a building floor; dust, fog, or smoke; a wet ceiling in a tunnel. \section{A Visual Anomaly Dataset for Robotics} Any machine learning model requires a large, representative dataset to be trained and evaluated on. Since dataset collection is expensive, the visual anomaly detection literature often relies on existing classification datasets such as MNIST~\cite{lecun1998gradient}, ImageNet~\cite{deng2009imagenet}, and CIFAR~\cite{krizhevsky2009learning}; a set of classes is selected as normal while the others, often with synthetic variations, represent anomalies. This approach has major limitations because it does not capture the visual characteristics of realistic anomalies with respect to normal data. This limitation is addressed by task-specific datasets, which have been proposed for industrial inspection and healthcare; in robotics, to the best of our knowledge, no dataset is publicly available for anomaly detection. We therefore introduce a new dataset\footnote{The dataset is available at \url{https://github.com/idsia-robotics/hazard-detection}.} that is specific to visual anomaly detection for mobile robots. Our dataset covers three different scenarios. Data for each scenario is divided into three sets: a training and a validation set, composed exclusively of normal samples; and a testing set, composed of normal and anomalous samples; each anomalous sample is annotated with the exact type of anomaly that it represents. The scenarios are: \emph{Tunnel}, a simulation of a drone flying inside an underground tunnel, with 3 kinds of anomalies; \emph{Factory}, recordings from a real drone inside a factory, with 2 types of anomalies; \emph{Corridor}, a real wheeled mobile robot moving inside university corridors, with 8 anomaly types. All data are recorded from the robots' front-facing cameras at 30 frames per second. For the experiments, all samples are resized to square images of $64\times64$ pixels. In Figure~\ref{fig:tabellozza} we show examples of all the represented anomalies. We also classify anomaly types along the axes introduced above. The third axis is not represented in the figure, since it is mission-dependent and our dataset was gathered with no specific task in mind. \section{Experiments and Perspectives} Anomaly detection can be defined as a binary classification problem, where each sample is associated with one of two classes: normal samples are classified as negative and anomalies as positive. The problem could be solved using a supervised machine learning approach that relies on a labeled training set containing examples of both classes. However, in our definition anomalies are \emph{rare} and \emph{unexpected} events. Because of this, collecting a large training set of anomalies would be very time-consuming, and collecting a representative training set of all possible anomalies is infeasible. Therefore, we focus our analysis on \emph{unsupervised} methods.
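As a concrete illustration of the evaluation protocol described next, the anomaly scores produced by such a detector can be turned into the AUC metric in a few lines; a minimal sketch with hypothetical score values (we assume scikit-learn for the metric):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

# Anomaly scores produced by a detector trained only on normal data
# (hypothetical values): higher score = more anomalous.
scores_normal = np.array([0.11, 0.08, 0.15, 0.09, 0.13])
scores_anomalous = np.array([0.72, 0.35, 0.88, 0.41])

y_true = np.concatenate([np.zeros(len(scores_normal)),    # normal -> negative
                         np.ones(len(scores_anomalous))]) # anomaly -> positive
y_score = np.concatenate([scores_normal, scores_anomalous])

# Area under the ROC curve: 1.0 = perfect detector, 0.5 = random.
print(roc_auc_score(y_true, y_score))
\end{verbatim}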
In this unsupervised setting, the anomaly detector is learned from a training set composed exclusively of normal samples. At inference time, the anomaly detection model returns an anomaly score for each sample: low for normal samples and high for anomalous ones. State-of-the-art visual anomaly detection models rely on Deep Learning techniques to learn a similarity metric that accounts for the expected variability in the normal training images. We developed an anomaly detector based on an autoencoder~\cite{kramer1992autoassociative} and Real NVP~\cite{dinh2016density} that takes an image as input and produces an anomaly score. The model is trained only on normal samples and is tested on both normal and anomalous images. From the anomaly scores computed on a testing set with normal and anomalous samples, we measure the quality of the detector using the Area Under the ROC Curve (AUC) as a metric. The AUC value ranges from 0 to 1, where 1 corresponds to a perfect anomaly detector and 0.5 to a random classifier. On the top row of Figure~\ref{fig:tabellozza} we show the AUC obtained by our model on the three scenarios. We finally deployed the model trained on the \emph{Factory} dataset on an autonomous drone, which stops its mission and backtracks when an anomaly is detected. Figure~\ref{fig:drone} illustrates how anomaly detection avoids a collision with an unforeseen obstacle (a thin tape that would not otherwise be seen by the drone's obstacle detection suite). \begin{figure}[!htp] \centering \includegraphics[width=0.8\columnwidth]{img/drone-vertical.png} \caption{Deployment in the Factory scenario: the drone advances normally until it detects an anomaly (a tape crossing its path); a time series of anomaly scores in previous frames is reported at the bottom (see supplementary video \url{https://youtu.be/SylhxUl20C0}).} \label{fig:drone} \end{figure} We are currently focusing on two related topics: allowing pre-trained models to rapidly adapt to new environments via active learning and domain adaptation, and exploiting recordings of known anomalies through outlier exposure~\cite{hendrycks2018deep}. \newpage \bibliographystyle{IEEEtran}
{ "timestamp": "2022-09-23T02:15:13", "yymm": "2209", "arxiv_id": "2209.10995", "language": "en", "url": "https://arxiv.org/abs/2209.10995" }
\section{Introduction} Recently, quantum annealing (QA) has attracted considerable attention owing to its potential applications for solving practical problems. QA is expected to not only solve combinatorial optimization problems but also simulate quantum many-body systems\cite{kadowaki1998quantum,farhi2000quantum,farhi2001quantum}. In QA, after we prepare a ground state of a driver Hamiltonian, we gradually change the Hamiltonian from the driver Hamiltonian to the problem Hamiltonian. The adiabatic theorem guarantees that we can obtain a ground state of the problem Hamiltonian with QA as long as the dynamics is adiabatic. There are two types of applications of QA. One application is to find a ground state of the Ising Hamiltonian\cite{schrijver2005history}. A combinatorial optimization problem can be mapped to a ground-state search of the Ising Hamiltonian. Moreover, some types of clustering and machine learning can be conducted using this type of QA. Efficient clustering using QA has been reported \cite{kumar2018quantum, kurihara2014quantum}. In addition, machine learning using QA has been proposed \cite{kumar2018quantum, kurihara2014quantum,adachi2015application, wilson2021quantum, li2020limitations, sasdelli2021quantum, neven2008training, neven2012qboost, willsch2020support, winci2020path}. Furthermore, QA has been applied to topological data analysis (TDA)\cite{berwald2018computing}. The other application is to find a ground state of a problem Hamiltonian that includes non-zero off-diagonal terms. In condensed matter physics, it is crucial to investigate correlation functions of the ground state, and QA is useful for such studies. In addition, applications of QA to quantum chemistry have been reported \cite{bravyi2002fermionic, seeley2012bravyi, tranter2015b, babbush2014adiabatic, xia2017electronic, seki2021excited}. Several devices have been developed for performing QA. D-Wave Systems Inc. developed a device for performing QA using thousands of superconducting flux qubits~\cite{johnson2011quantum, barends2016digitized, harris2018phase}. Many demonstrations using this device for QA have been reported~\cite{adachi2015application, hu2019quantum, joseph2021two, king2018observation}. A Kerr-nonlinear parametric oscillator (KPO) is another device for performing QA~\cite{goto2016bifurcation,puri2017quantum,wang2019quantum,grimm2020stabilization,yamaji2022spectroscopic}. Some methods use capacitively shunted flux qubits with a long coherence time for QA~\cite{matsuzaki2020quantum, imoto2022obtaining}. Decoherence is one of the main obstacles for QA \cite{albash2015decoherence}. Unwanted coupling with the environment causes decoherence during QA, which must be taken into account. In particular, thermal excitation can lead to a decrease in the success probability of QA. Non-adiabatic transitions also pose problems for QA. For example, when a first-order quantum phase transition occurs, the energy gap becomes exponentially smaller as the system size increases, and it becomes difficult to satisfy the adiabaticity condition in QA. A promising approach for avoiding the first-order quantum phase transition is to add a non-stoquastic Hamiltonian to the driver Hamiltonian. By adding anti-ferromagnetic interactions (which are non-stoquastic) to the Hamiltonian, we can avoid the first-order phase transition for some specific problem Hamiltonians~\cite{seki2012quantum, seki2015quantum,susa2022nonsto}.
In this paper, we challenge the common wisdom that non-stoquastic Hamiltonians improve the performance of QA whereas decoherence degrades it. More specifically, we present counter-intuitive examples showing that non-stoquastic Hamiltonians can cause catastrophic failure of QA, in the sense that we cannot obtain a ground state even for an infinitely long annealing time, and that a certain type of decoherence can be used to recover the performance of QA in such cases. In our examples, we consider either an Ising Hamiltonian or an XXZ model for the problem Hamiltonian and transverse fields with anti-ferromagnetic interactions for the driver Hamiltonian. The driver Hamiltonian and problem Hamiltonian have a common symmetry in that both of these Hamiltonians commute with a certain observable. In this case, the Hamiltonian can be block diagonalized (Fig.~\ref{fig:concept_this_paper}). If the coupling strength of the anti-ferromagnetic interactions is smaller than a certain threshold, the ground state of the driver Hamiltonian and that of the problem Hamiltonian belong to the same sector, and successful QA can be achieved if the annealing time is sufficiently long. However, if the coupling strength of the anti-ferromagnetic interactions is larger than the threshold, the ground state of the driver Hamiltonian and that of the problem Hamiltonian belong to different sectors. In this case, the ground-state search with QA completely fails in the sense that the fidelity with the ground state becomes zero, because transitions between different sectors are prohibited owing to the block-diagonal structure of the Hamiltonian. Even in such cases, we show that, if we add a certain type of decoherence to break the symmetry, we can obtain the ground state with QA. In addition, we confirm that, when the environmental temperature is sufficiently low, the success probability becomes nearly unity with decoherence. Thus, our results challenge the common wisdom about non-stoquastic Hamiltonians and decoherence, thereby providing a deeper understanding of QA. \begin{figure}[ht] \includegraphics[width=90mm]{figure_ver1/Fig1.pdf} \caption{ Structure of the annealing Hamiltonian investigated in the present paper. The annealing Hamiltonian can be block-diagonalized when there is a symmetry. For our choice of the problem Hamiltonian, when the driver Hamiltonian is the transverse magnetic field, the ground state of the driver Hamiltonian belongs to the same sector as that of the problem Hamiltonian. Meanwhile, if we add a non-stoquastic Hamiltonian to the transverse-field driver Hamiltonian, the sector of the ground state of the driver Hamiltonian becomes different from that of the problem Hamiltonian. Moreover, when a certain type of decoherence breaks the symmetry, we can induce transitions from the ground state of the driver Hamiltonian to that of the problem Hamiltonian. } \label{fig:concept_this_paper} \end{figure} The remainder of this paper is organized as follows. Section II reviews QA as well as the relation between symmetry and QA. Section III discusses the main results. Finally, Section IV summarizes our findings. \section{Overview} In this section, we review QA as well as the relation between symmetry and QA. \subsection{Review of QA} We review QA for the ground-state search~\cite{kadowaki1998quantum, farhi2000quantum, farhi2001quantum}. In QA, there are two types of Hamiltonians: a driver Hamiltonian and a problem Hamiltonian.
We use the driver Hamiltonian to induce quantum fluctuations during QA, and we aim to find a ground state of the problem Hamiltonian. We define the annealing Hamiltonian using the driver Hamiltonian and the problem Hamiltonian as follows: \begin{align} H(t)\equiv\biggl(1-\frac{t}{T}\biggr)H_{D}+\frac{t}{T}H_{P} \end{align} where $H_{D}$ denotes the driver Hamiltonian, $H_{P}$ denotes the problem Hamiltonian, $t$ denotes the time, and $T$ denotes the annealing time. In QA, we prepare a ground state of the driver Hamiltonian at $t=0$, and we gradually change the Hamiltonian from the driver Hamiltonian to the problem Hamiltonian. As long as the dynamics is adiabatic, the ground state of the problem Hamiltonian is obtained at $t=T$. The adiabaticity condition during QA is given by \begin{align} \frac{\left|\bra{j(t)}\partial_{t}H(t)\ket{0(t)}\right|}{\Delta_{j}(t)^{2}} \ll 1 \end{align} where $\Delta_{j}(t)$ denotes the energy gap between the ground state and the $j$-th excited state at the time $t$, $\ket{j(t)}$ is the $j$-th excited state, and $\ket{0(t)}$ is the ground state~\cite{kato1950adiabatic, messiah2014quantum, morita2008mathematical}. It is possible to evaluate this adiabatic condition in an experiment \cite{matsuzaki2021direct,russo2021evaluating,schiffer2022adiabatic,mori2022evaluate}. Thus, when the minimum energy gap becomes exponentially small as the number of qubits increases, which corresponds to a first-order quantum phase transition, the annealing time should be exponentially large in the number of qubits. If the dynamics is not adiabatic during QA, some population of the ground state is transferred to the excited states. This is referred to as a non-adiabatic transition. Several attempts have been made to avoid the first-order phase transition during QA. A promising approach is to include non-stoquastic terms in the driver Hamiltonian. A Hamiltonian is said to be stoquastic if all the off-diagonal matrix elements are real and non-positive in a given basis. It is known that, when one uses a transverse-field driver Hamiltonian (which is stoquastic) to obtain a ground state of the p-spin model, the first-order phase transition occurs during QA\cite{jorg2010energy}. In this case, it has been shown that, if we add anti-ferromagnetic interactions (which are non-stoquastic) to the driver Hamiltonian, we can avoid the first-order quantum phase transition \cite{seki2012quantum}. A similar conclusion has been drawn for the case of the Hopfield model as the problem Hamiltonian \cite{seki2015quantum}. Moreover, an inhomogeneous driver Hamiltonian helps avoid the first-order phase transition for QA to obtain a ground state of the p-spin model~\cite{susa2018exponential, susa2018quantum}. Even when we perform bifurcation-based quantum annealing with spin-1 particles \cite{takahashi2022bifurcation}, the non-stoquastic Hamiltonian is useful to obtain a ground state of the p-spin model by avoiding the first-order phase transition~\cite{susa2022nonsto}. For actual devices, we must consider the effect of decoherence from the environment. Studies have investigated the competition between non-adiabatic transitions and decoherence\cite{keck2017dissipation, novotny2016quantum}. In addition, some cases of improved QA accuracy have been reported under strong decoherence\cite{passarelli2018dissipative}.
There are also methods to suppress decoherence using variational techniques \cite{susa2021variational, matsuura2021variationally,imoto2021quantum}. \subsection{Review of the relation between a symmetry and QA}\label{sec:review_symm} We review how the symmetry of a system can impair the performance of QA. Suppose that there exists an observable $K$ that commutes with the annealing Hamiltonian at any time, i.e., \begin{align} [K, H(t)] = 0\ \ (0\leq\forall t\leq T). \end{align} In this case, the annealing Hamiltonian can be block diagonalized into sectors by applying a suitable unitary operator. The operator $K$ is referred to as a conserved quantity. Thus, transitions between different sectors are prohibited during QA\cite{imoto2022quantum}. When the sector of the ground state of the driver Hamiltonian is different from that of the ground state of the problem Hamiltonian, i.e., \begin{align} \bra{\mbox{gs}(t=0)}K\ket{\mbox{gs}(t=0)}\neq\bra{\mbox{gs}(t=T)}K\ket{\mbox{gs}(t=T)} \end{align} where $\ket{\mbox{gs}(t)}$ denotes the ground state of the annealing Hamiltonian at $t$, QA fails \cite{imoto2022obtaining, francis2022determining}. Concretely, the fidelity between the exact ground state of the problem Hamiltonian and the state obtained by QA becomes zero even if an infinitely long annealing time is considered. \section{Main result} In this section, we present two main results. First, we analyze QA with a non-stoquastic Hamiltonian, and we present an example showing that, owing to the inclusion of non-stoquastic terms, QA can fail to find a ground state even after an infinitely long annealing time. Second, we investigate the effect of decoherence during QA for such a case, and we show that a certain decoherence process recovers the ability to find the ground state. \subsection{Failure of QA owing to a non-stoquastic Hamiltonian} In general, a stoquastic Hamiltonian is defined as an operator having only non-positive (or only non-negative) off-diagonal elements in the computational basis~\cite{albash2018adiabatic}. By contrast, a non-stoquastic Hamiltonian has both positive and negative off-diagonal elements in the computational basis. A typical example of a non-stoquastic Hamiltonian is the XX anti-ferromagnetic interaction, which can improve the accuracy of QA for specific cases \cite{seki2012quantum, seki2015quantum}. This anti-ferromagnetic interaction Hamiltonian is defined as \begin{align} H_{XX}=N\biggl(\frac{1}{N}\sum_{i=1}^{N}\hat{\sigma}_{i}^{x}\biggr)^{2}\label{eq:xx_non-stoquastic} \end{align} where $N$ denotes the number of qubits, and $\hat{\sigma}_{i}^{a}\ (a=x,y,z)$ denote the Pauli matrices defined on the $i$-th site. In this study, we focus on the XX anti-ferromagnetic interaction as the non-stoquastic Hamiltonian. Here, we define the annealing Hamiltonian with the non-stoquastic Hamiltonian as \begin{align} H^{(NS)}(t, \alpha)\equiv\biggl(1-\frac{t}{T}\biggr)(H_{D}+\alpha H_{XX})+\frac{t}{T}H_{P}.\label{hamiltonianwithns} \end{align} In this paper, we consider the case where $H^{(NS)}(t, \alpha)$ has a conserved observable $K$ for any time $t$ and any parameter $\alpha$. In other words, \begin{align} \exists\ K\ \mathrm{s.t.}\ [K, H^{(NS)}(t, \alpha)] = 0\ \ (0\leq\forall t\leq T, \forall \alpha). \end{align} Therefore, the ground state of the Hamiltonian is also an eigenstate of $K$. We present an example showing that, owing to the inclusion of non-stoquastic terms, QA can fail to find a ground state.
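These sector arguments can be checked directly for small systems. The following is a minimal numerical sketch (not from the original analysis), anticipating the $N=2$ fully connected Ising example and the conserved quantity $K=e^{i\frac{\pi}{2}\sum_i\hat{\sigma}_i^x}$ of Section~\ref{sec:example_ising}; it is restricted to the maximum angular momentum (Dicke) sector, where $\sum_i\hat{\sigma}_i^a=2J_a$ with spin-1 operators $J_a$, and the value $\alpha=100$ is illustrative:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Spin-1 operators for the N = 2 Dicke (maximum angular momentum) sector
Jx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Jz = np.diag([1.0, 0.0, -1.0])

N = 2
Sx, Sz = 2 * Jx, 2 * Jz            # collective Pauli sums
H_D = Sx                           # transverse-field driver
H_XX = Sx @ Sx / N                 # N (Sx/N)^2, anti-ferromagnetic XX term
H_P = Sz @ Sz / N                  # N (Sz/N)^2, fully connected Ising model
K = expm(1j * (np.pi / 2) * Sx)    # conserved quantity

def sector(H):
    """Return <K> in the ground state of H."""
    _, v = np.linalg.eigh(H)
    g = v[:, 0]
    return (g.conj() @ K @ g).real

for alpha in (0.0, 100.0):         # without / with the non-stoquastic term
    H_0 = H_D + alpha * H_XX
    # K commutes with the annealing Hamiltonian at every point of the sweep
    assert np.allclose(K @ H_0, H_0 @ K) and np.allclose(K @ H_P, H_P @ K)
    print(alpha, round(sector(H_0), 3), round(sector(H_P), 3))
# alpha = 0.0:   both ground states lie in the <K> = -1 sector (QA can succeed)
# alpha = 100.0: the driver ground state moves to <K> = +1 (QA fails)
\end{verbatim}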
Specifically, there exists a case in which the sectors of the ground states satisfy \begin{align} \bra{gs(t=0, \alpha=0)}&K\ket{gs(t=0, \alpha=0)}\notag\\ &=\bra{gs(t=T)}K\ket{gs(t=T)} \end{align} and \begin{align} \exists\ \alpha_{0}\ \mathrm{s.t.}\ \bra{gs(t=0, \alpha=\alpha_{0})}&K\ket{gs(t=0, \alpha=\alpha_{0})}\notag\\ &\neq\bra{gs(t=T)}K\ket{gs(t=T)} \end{align} where $\ket{gs(t, \alpha)}$ denotes the ground state of the annealing Hamiltonian with the non-stoquastic term $H^{(NS)}(t, \alpha)$. In these examples, the problem Hamiltonian is either a fully connected Ising model or an XXZ spin chain. Further details are discussed in Sections \ref{sec:example_ising} and \ref{sec:example_xxz}. \subsection{Avoidance of failure caused by symmetry using decoherence} In this subsection, we show that, by adding a certain decoherence process, we can avoid the problem of QA failure due to symmetry. In the discussion above, we assumed that the quantum system is closed, i.e., isolated from any environment. Even if a conserved quantity exists in an isolated quantum system, a perturbation from the environment can break such a symmetry. Indeed, decoherence can not only break the symmetry but also induce unwanted transitions to excited states. To avoid thermal excitation, the environmental temperature should be sufficiently low. In previous studies, the Gorini--Kossakowski--Sudarshan--Lindblad (GKSL) master equations~\cite{gorini1976completely, lindblad1976generators} were occasionally employed to consider decoherence, where the Lindblad operators were selected as $\hat{\sigma}_{\pm}$ phenomenologically. However, in this case, the Lindblad operator contains no information about the system Hamiltonian. Therefore, the Lindblad operator induces transitions to the excited states of the system, even at a low temperature in our case (see Appendix \ref{sec:numerical_gksl}). The problem arises from the fact that the Lindblad operator $\hat{\sigma}_{\pm}$ is derived under the assumption that there is no interaction between qubits. This means that, if there are non-negligible interactions between qubits, we cannot employ $\hat{\sigma}_{\pm}$ as the Lindblad operator for the GKSL master equation to describe the energy relaxation. Therefore, to consider more realistic situations, we employ the quantum adiabatic Markovian master equations~\cite{redfield1957theory, redfield1965theory, albash2012quantum}, where the noise operator is derived from a first-principles calculation using a microscopic model. We explain the master equation in detail in the next subsection. \subsection{Quantum adiabatic Markovian master equations without the rotating wave approximation} Here, we introduce the quantum adiabatic Markovian master equations\cite{redfield1957theory, redfield1965theory, rivas2012open, breuer2002theory,albash2012quantum} that we employ in this paper. Suppose that the total Hamiltonian for the system and the environment is given by \begin{align} H=H_{sys}+H_{bath}+H_{I} \end{align} where $H_{sys}$, $H_{bath}$, and $H_{I}$ are the system, bath, and interaction Hamiltonians, respectively. Here, the interaction Hamiltonian can be expressed as $H_{I}=\sum_{k=1}^{M}A_{k}\otimes B_{k}$ where $A_{k}$ denotes a noise operator acting on the system and $B_{k}$ denotes an operator acting on the environment. The dynamics of the total Hamiltonian is described by the von Neumann equation: \begin{align} \frac{d}{dt}\rho(t)=-i[H, \rho(t)].
\end{align} By assuming the Born--Markov approximation and small non-adiabatic transitions, the quantum adiabatic Markovian master equations are given by \cite{albash2012quantum} \begin{align} \frac{d}{dt}&\rho(t)=\sum_{k,l}\sum_{\omega, \omega'}e^{i(\omega-\omega')t}\Gamma_{kl}(\omega')\notag\\ &\times\biggl\{A_{l}(\omega')\rho(t)A_{k}^{\dag}(\omega)-A_{k}^{\dag}(\omega)A_{l}(\omega')\rho(t)\biggr\}+\mathrm{h.c.}, \label{eq:redfield_eq} \end{align} where $\rho(t)$ denotes the density matrix, $A_{k}(\omega)=\sum_{\epsilon'-\epsilon=\omega}\ket{\psi_{\epsilon}}\bra{\psi_{\epsilon}}A_{k}\ket{\psi_{\epsilon'}}\bra{\psi_{\epsilon'}}$, $\epsilon$ denotes an eigenvalue of the Hamiltonian $H(t)$, $\ket{\psi_{\epsilon}}$ denotes the corresponding eigenvector of $H(t)$, $\omega$ denotes the energy difference, and $\Gamma_{kl}$ denotes the power spectrum density. Throughout this paper, we set $M=1$; hence, we denote $A_{k}$ simply as $A$. We select the noise operator as $A=\sum_{j=1}^{N}\hat{\sigma}^{y}_{j}$, and select an Ohmic spectral density given by \begin{align} \Gamma(\omega)= \begin{cases} {\eta\omega\biggl(\frac{1}{e^{\omega/T_{env}}-1}+1\biggr) \ (\omega > 0)}\\ {\eta T_{env} \ (\omega = 0)}\\ {\eta(-\omega)\biggl(\frac{1}{e^{-\omega/T_{env}}-1}+1\biggr) \ (\omega < 0)} \end{cases} \end{align} where $\eta$ denotes the strength of the decoherence and $T_{env}$ denotes the temperature of the environment. When we conduct numerical simulations, we introduce the cut-off parameters $\omega_{c}$ and $\epsilon$. The resulting power spectrum density is given by \begin{align} \Gamma^{(co)}(\omega)= \begin{cases} {\eta\omega e^{-\frac{\omega}{\omega_{c}}}\biggl(\frac{1}{e^{\omega/T_{env}}-1+\epsilon}+1\biggr) \ (\omega > 0)}\\ {\eta T_{env} \ (\omega = 0)}\\ {\eta(-\omega)e^{\frac{\omega}{\omega_{c}}}\biggl(\frac{1}{e^{-\omega/T_{env}}-1+\epsilon}+1\biggr) \ (\omega < 0)} \end{cases} \end{align} In this paper, we use this power spectrum density and we set $\omega_{c}=20$, $\epsilon=10^{-7}$, and $\eta=0.1$. \subsection{Example 1: Ising problem Hamiltonian}\label{sec:example_ising} \begin{figure}[h!] \includegraphics[width=80mm]{figure_ver1/Fig2.pdf} \caption{For the Ising model as the problem Hamiltonian, we plot the energy spectrum of the annealing Hamiltonian against time $t$. Here, the driver Hamiltonian is the transverse field and the problem Hamiltonian is the fully connected Ising model. There is no level crossing in this energy diagram. } \label{fig:ising_no_non-stoquastic_energy} \end{figure} \begin{figure}[h!] \includegraphics[width=80mm]{figure_ver1/Fig3.pdf} \caption{ With the Ising model as the problem Hamiltonian, where we employ the driver Hamiltonian with the non-stoquastic term, we solve the time-dependent Schr\"{o}dinger equation and plot the expectation value of the Hamiltonian during QA against $t/T$ by using a continuous line. The dotted line represents the energy spectrum during QA. This plot shows the existence of the level crossing. Here, we set $\alpha =100$ and $T=1000$. } \label{fig:ising_with_non-stoquastic_energy} \end{figure} \begin{figure}[h!] \includegraphics[width=80mm]{figure_ver1/Fig4.pdf} \caption{With the Ising model as the problem Hamiltonian, we plot the energy against $t/T$ via QA for the environmental temperatures $1.0, 10.0, 100.0$ by using continuous lines. The dotted line represents the energy spectrum against $t/T$. QA succeeds in preparing the ground state with high accuracy if the temperature of the environment is sufficiently low. We set $\alpha=100$ and $T=1000$.
} \label{fig:ising_with_non-stoquastic_energy_with_decoherence} \end{figure} As the first example, we consider the case in which the problem Hamiltonian is the fully connected Ising model. When the driver Hamiltonian is the transverse field without the non-stoquastic term, the Hamiltonians are given by \begin{align} H_{D}&=\sum_{i=1}^{N}\hat{\sigma}_{i}^{x},\label{nononsd}\\ H_{P}&=N\biggl(\frac{1}{N}\sum_{i=1}^{N}\hat{\sigma}_{i}^{z}\biggr)^{2}, \label{nononsp} \end{align} where $N$ is the number of qubits. In this case, the annealing Hamiltonian $H(t)\equiv (1-t/T)H_{D}+(t/T)H_{P}$ has a conserved quantity $K=e^{i\frac{\pi}{2}\sum_{i=1}^{N}\hat{\sigma}_{i}^{x}}$. In other words, the operator $K$ satisfies the following condition: \begin{align} [H(t), K]=0 \end{align} for all $t$. Here, the sector of the ground state of $H_{D}$ is the same as that of $H_{P}$. This is analytically demonstrated in Appendix \ref{sec:appendix_symm}. For $N=2$, we plot the energy diagram of the annealing Hamiltonian with Eqs.~(\ref{nononsd}) and (\ref{nononsp}), as shown in Fig. \ref{fig:ising_no_non-stoquastic_energy}. Throughout this paper, as the state is confined to the maximum angular momentum sector, we conduct numerical simulations using the Dicke basis \cite{dicke1954coherence}. As expected, no crossing occurs in the energy diagram; hence, we can find the ground state with QA as long as the dynamics is adiabatic. Meanwhile, let us consider the annealing Hamiltonian with a non-stoquastic term, $H^{(NS)}(t, \alpha)= (1-t/T)(H_{D}+\alpha H_{XX})+(t/T)H_{P}$. The quantity $K$ is conserved for this Hamiltonian as well, and we have $[H^{(NS)}(t, \alpha), K]=0$. Moreover, in this case, the sector of the ground state of the driver Hamiltonian is different from that of the problem Hamiltonian (see Appendix \ref{sec:appendix_symm}). This means that the ground-state search with QA fails even if we consider an infinitely long annealing time. For $N=2$, we plot the energy diagram of the annealing Hamiltonian with the non-stoquastic term, as shown in Fig. \ref{fig:ising_with_non-stoquastic_energy}. This result shows that there is a level crossing during QA. Furthermore, we solve the time-dependent Schr\"{o}dinger equation and plot the expectation value of the Hamiltonian during QA, as shown in Fig. \ref{fig:ising_with_non-stoquastic_energy}. Again, we confirm that the crossing occurs during QA. Next, we propose a method to avoid the failure of QA by using decoherence. We consider a noise model that breaks the symmetry of the Hamiltonian. More specifically, we select a noise operator satisfying $[A, K]\neq 0$. In this case, we expect that, owing to noise-induced symmetry breaking, there should be a transition from the ground state of the driver Hamiltonian to that of the problem Hamiltonian. We conduct the numerical calculation using the quantum adiabatic Markovian master equations (\ref{eq:redfield_eq}). We plot the energy spectrum and the energy expectation value $\langle H\rangle ={\rm{Tr}}[H(t)\rho(t)]$, as shown in Fig. \ref{fig:ising_with_non-stoquastic_energy_with_decoherence}. This figure shows that the ground state is obtained by QA with high probability when the temperature is sufficiently low. We can interpret this result as follows. Owing to the symmetry of the Hamiltonian, the system would remain in an excited state at the end of QA if the decoherence were negligible.
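The symmetry-protected failure can be reproduced in a few lines of code. The following sketch (our own illustrative code; for speed we use the smaller values $\alpha=2$ and $T=200$ rather than the values used in the figures, which does not affect the conclusion) integrates the time-dependent Schr\"{o}dinger equation for $N=2$, starting from the symmetric ground state of $H_{D}+\alpha H_{XX}$, which for $\alpha>1$ lies in the $K=+1$ sector, whereas the symmetric ground state of $H_{P}$ lies in the $K=-1$ sector:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Sx = np.kron(sx, I2) + np.kron(I2, sx)
Sz = np.kron(sz, I2) + np.kron(I2, sz)

N, T, alpha = 2, 200.0, 2.0
H_D, H_P, H_XX = Sx, (Sz @ Sz) / N, (Sx @ Sx) / N

def H_NS(t):                 # annealing Hamiltonian with the NS term
    s = t / T
    return (1 - s) * (H_D + alpha * H_XX) + s * H_P

def schrod(t, psi):          # time-dependent Schroedinger equation
    return -1j * (H_NS(t) @ psi)

psi0 = np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2)   # K = +1
target = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)  # K = -1

sol = solve_ivp(schrod, (0.0, T), psi0, rtol=1e-8, atol=1e-10)
fidelity = abs(target.conj() @ sol.y[:, -1]) ** 2
print(fidelity)   # ~0: QA fails, no matter how slow the sweep
\end{verbatim}
Because $[K, H^{(NS)}(t, \alpha)]=0$ holds exactly, the overlap with the target sector vanishes identically rather than merely being suppressed.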
However, owing to the existence of the decoherence, an energy relaxation from the excited state to the ground state occurs, and the system ends up in the ground state as long as the temperature is sufficiently low. Furthermore, we conduct numerical simulations in the case of many qubits (see Appendix \ref{manyqubits}), and we draw the same conclusion. \subsection{Example 2: XXZ problem Hamiltonian}\label{sec:example_xxz} As the second example, we consider the case in which the problem Hamiltonian is a fully connected XXZ model. The problem Hamiltonian is given by \begin{align} H_{P}^{(XXZ)}=N\biggl(\frac{1}{N}\sum_{i=1}^{N}\hat{\sigma}_{i}^{x}\biggr)^{2}&+N\biggl(\frac{1}{N}\sum_{i=1}^{N}\hat{\sigma}_{i}^{y}\biggr)^{2}\notag\\ &+\Delta N\biggl(\frac{1}{N}\sum_{i=1}^{N}\hat{\sigma}_{i}^{z}\biggr)^{2},\label{eq:xxz} \end{align} where $N$ is the number of qubits. We select the transverse field described in Eq. (\ref{nononsd}) as the driver Hamiltonian. In this case, again, the operator $K=e^{i\frac{\pi}{2}\sum_{i=1}^{N}\hat{\sigma}_{i}^{x}}$ is a conserved quantity of the annealing Hamiltonian. Throughout this paper, we set $\Delta=1.5$ and $N=2$. We plot the energy diagram of the annealing Hamiltonian with Eqs. (\ref{eq:xxz}) and (\ref{nononsd}), as shown in Fig. \ref{fig:xxz_no_non-stoquastic_energy}. There is no level crossing in the energy diagram. \begin{figure}[ht] \includegraphics[width=80mm]{figure_ver1/Fig5.pdf} \caption{With the XXZ model as the problem Hamiltonian, we plot the energy spectrum of the annealing Hamiltonian without the non-stoquastic Hamiltonian against $t/T$. We note that the anisotropy parameter is $\Delta=1.5$. We can see that the crossing does not occur. Here, we set $T =1000$. } \label{fig:xxz_no_non-stoquastic_energy} \end{figure} Let us now consider the annealing Hamiltonian with a non-stoquastic term, $H^{(NS)}(t, \alpha)= (1-t/T)(H_{D}+\alpha H_{XX})+(t/T)H_{P}^{(XXZ)}$. We plot the energy diagram when the problem Hamiltonian is the XXZ model, as shown in Fig. \ref{fig:xxz_with_non-stoquastic_energy}. There is a level crossing; hence, the ground-state search with QA does not succeed even when we consider an infinitely long annealing time. We discuss this point using an analytical method (see Appendix \ref{sec:appendix_symm}). \begin{figure}[ht] \includegraphics[width=80mm]{figure_ver1/Fig6.pdf} \caption{ With the XXZ model as the problem Hamiltonian, we plot the energy during QA against $t/T$ by using a continuous line, where we employ the driver Hamiltonian with the non-stoquastic term. The dotted line represents the energy spectrum during QA. We note that the anisotropy parameter is set as $\Delta=1.5$. In addition, we set $\alpha=100$ and $T=1000$. We can see that the crossing occurs. } \label{fig:xxz_with_non-stoquastic_energy} \end{figure} \begin{figure}[ht] \includegraphics[width=80mm]{figure_ver1/Fig7.pdf} \caption{ With the XXZ model as the problem Hamiltonian, we plot the energy against $t/T$ via QA for the environmental temperatures $1.0, 10.0, 100.0$ by using continuous lines. The dotted line represents the energy spectrum against $t/T$. QA succeeds in preparing a ground state with high accuracy if the temperature of the environment is sufficiently low. We set $\alpha=100$ and $T=1000$. } \label{fig:xxz_with_non-stoquastic_energy_decoherence} \end{figure} Similarly, we consider a noise operator that breaks the symmetry of the Hamiltonian. We conduct the numerical calculation using the quantum adiabatic Markovian master equations.
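The essential requirement on the environment can also be checked directly. In the following minimal sketch (our own illustrative code, for $N=2$), the noise operator $A=\sum_j\hat{\sigma}_j^{y}$ does not commute with $K$ and indeed connects the $K=+1$ and $K=-1$ sectors, which is what allows the dissipative dynamics to relax across the level crossing:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Sx = np.kron(sx, I2) + np.kron(I2, sx)
A = np.kron(sy, I2) + np.kron(I2, sy)   # noise operator sum_j sigma_j^y
K = expm(1j * np.pi / 2 * Sx)           # conserved quantity

print(np.allclose(A @ K - K @ A, 0))    # False: [A, K] != 0

# A has nonzero matrix elements between the two symmetry sectors
w, V = np.linalg.eigh(K.real)           # K is Hermitian for N = 2
A_rot = V.conj().T @ A @ V
plus = w > 0
print(np.linalg.norm(A_rot[np.ix_(plus, ~plus)]) > 0)   # True
\end{verbatim}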
As shown in Fig. \ref{fig:xxz_with_non-stoquastic_energy_decoherence}, we plot the energy spectrum and the energy expectation value $\langle H\rangle ={\rm{Tr}}[H(t)\rho(t)]$ during QA. In contrast to the case of unitary dynamics without noise, we succeed in obtaining the ground state via QA with noise, especially when the environmental temperature is low. The population of the excited states increases with the temperature owing to thermal excitation. \section{Conclusion} In this paper, we presented examples showing that, when a non-stoquastic Hamiltonian is used, a catastrophic failure of QA can occur in the sense that we cannot obtain the ground state even with an infinitely long annealing time. Moreover, we found that we can avoid such a failure by using a certain type of decoherence. The key aspect of this finding is the symmetry of the Hamiltonian. Since there exists an observable that commutes with both the driver Hamiltonian and the problem Hamiltonian, we can block-diagonalize the annealing Hamiltonian. When we add the anti-ferromagnetic interaction term (i.e., a typical non-stoquastic term) to the transverse-field driver Hamiltonian, the sector of the ground state of the problem Hamiltonian becomes different from that of the driver Hamiltonian. In this case, there is no transition from the ground state of the driver Hamiltonian to that of the problem Hamiltonian; hence, the ground-state search with QA does not succeed no matter how long the annealing time is. However, by adding decoherence to break the symmetry, the transition between the ground state of the driver Hamiltonian and that of the problem Hamiltonian becomes possible, and we can find the ground state using QA. In summary, we challenged the common wisdom that a non-stoquastic Hamiltonian improves the performance of QA whereas decoherence degrades it. Our results thus provide a deeper understanding of QA. \begin{acknowledgments} This work was supported by the Leading Initiative for Excellent Young Researchers of MEXT, Japan, and JST PRESTO (Grant No. JPMJPR1919), Japan. This paper is partly based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan. \end{acknowledgments}
{ "timestamp": "2022-09-23T02:14:48", "yymm": "2209", "arxiv_id": "2209.10983", "language": "en", "url": "https://arxiv.org/abs/2209.10983" }
\section{Introduction} Suppose that $G$ is a graph with edge set $E(G)$ and vertex set $V(G)$. Let $M$ be a subset of $E(G)$ such that no two edges $e_1, e_2 \in M$ share an endpoint or have endpoints joined by an edge of $E(G)$. The Maximum Induced Matching (MIM) problem is a generalization of the classical graph matching problem, and it was introduced in \cite{SV1}. If $M$ is a largest induced matching in $G$, then the cardinality of $M$, denoted by $im(G)$, is called the maximum induced matching number of $G$. Much work has been done on this subject. It has attracted interest mostly because it is theoretically interesting and has a number of direct applications. In \cite{SV1}, the authors described the MIM problem as the ``risk-free'' marriage problem, in which married couples who are perfectly matched are identified. Its usefulness in cryptography is also evident. Cameron, in her earlier work \cite{C1}, showed that even though the MIM problem is NP-complete for bipartite graphs, it is easier to resolve for chordal graphs. This was also confirmed for circular graphs in \cite{GL1}. Golumbic and Lewenstein \cite{GL2} established a relationship between the MIM number and the redundancy number of graphs and also showed that the MIM problem is polynomial-time solvable for trees, while \cite{C2} investigated the MIM problem in intersection graphs. Recent works on the MIM problem include \cite{M1}, where the MIM number was extensively probed for grids $G_{n,m}=P_n \Box P_m$, the Cartesian product of paths $P_n$ and $P_m$. For odd $nm$, a bound $im(G_{n,m}) \leq \lfloor \frac{nm+1}{4} \rfloor$ was obtained. The bound was tightened in \cite{AA2} and further in \cite{AA1}. In \cite{XT1}, an exact algorithm for the MIM problem of graphs on $n$ vertices was investigated. In this work, we probe the maximum induced matching problem for the class of stacked-book graphs $G_{m,n}$, which are obtained from the Cartesian product of star graphs $S_m$ and paths $P_n$. The MIM numbers are obtained for the initial range of these graphs, while lower bounds on the MIM number are derived for the general class. \section{Definitions} To make this work self-contained, we give the following definitions, which we shall adopt in the course of the paper. Definitions that are not considered general will be given at the point of application. The vertex set of a graph $G$ is $V(G)$; $M$ is a subset of $E(G)$, the edge set of $G$, and $M$ is an induced matching of $G$. A vertex $v \in V(G)$ is called saturated if it is an endpoint of an edge in $M$, and unsaturated otherwise. A star graph $S_m$ contains a central vertex $v_1$ (except if specifically indicated otherwise) with $m-1$ leaves, which are all incident to $v_1$ as pendants. A path $P_n$ contains $n$ vertices and $n-1$ edges, while a cycle $C_m$ contains $m$ vertices and $m$ edges. Suppose that $u$ and $v$ are members of $V(G)$; then $d(u,v)$ is a positive integer, the distance between $u$ and $v$ in $G$. A vertex $v \in V(G)$ is called unsaturable if, by virtue of its position, it cannot be saturated, either because of its distance from a saturated vertex, or because it is at the right distance but not adjacent to a vertex that can be saturated so as to form an edge in the induced matching. A saturable vertex, therefore, is the opposite of an unsaturable vertex. The diameter of a graph, denoted by $diam(G)$, is the largest distance between any two vertices $u$ and $v$ of the graph.
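The definition of an induced matching can also be stated operationally. The following short Python sketch (illustrative only; it assumes the networkx package, which is not otherwise used in this paper) checks whether a given edge set is an induced matching: no two distinct edges of $M$ may share a vertex or have endpoints joined by an edge of $G$.
\begin{verbatim}
import networkx as nx
from itertools import combinations

def is_induced_matching(G, M):
    """True iff the edge set M is an induced matching of G."""
    for (a, b), (c, d) in combinations(M, 2):
        if {a, b} & {c, d}:              # the two edges share a vertex
            return False
        if any(G.has_edge(u, v) for u in (a, b) for v in (c, d)):
            return False                 # endpoints joined by an edge
    return True

# On the path P_5 with vertices 0..4 (cf. the path theorem below):
P5 = nx.path_graph(5)
print(is_induced_matching(P5, [(0, 1), (3, 4)]))  # True
print(is_induced_matching(P5, [(0, 1), (2, 3)]))  # False: 1, 2 adjacent
\end{verbatim}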
The set $[a,b]$ denotes the set of integers from $a$ to $b$, while $[a]$ is shorthand for $[1,a]$. \subsection{Structure of a stacked-book graph} The stacked-book graph is the Cartesian product $S_m \Box P_n$ of a star graph $S_m$ and a path $P_n$. Structurally, $S_m \Box P_n$ contains $n$ copies of the star $S_m$ together with the edge set $E(G') \subseteq E(S_m \Box P_n)$, where $E(G')=\left\lbrace v_iu_i: v_i \in V(S_m(i)),\ u_i \in V(S_m(i+1)),\ i \in [n-1]\right\rbrace $. Clearly, $E(S_m \Box P_n)=E(G') \cup E(\cup^n_{i=1}S_m(i))$, where $S_m(i)$ designates the $i$th copy of $S_m$ for all $1 \leq i \leq n$. \subsection{Initial Results} The following results are obvious. \begin{theorem}\label{thm1} Let $P_n$ be a path graph on $n$ vertices. Then $im(P_n)= \lceil \frac{n-1}{3}\rceil$. \end{theorem} \begin{theorem} \label{thm2} Let $C_n$ be a cycle graph on $n$ vertices. Then $im(C_n)= \lfloor \frac{n}{3}\rfloor$. \end{theorem} \begin{theorem}\label{thm3}\cite{M1} Suppose that $G_{3,n}$ is a grid graph obtained by the Cartesian product $P_3 \Box P_n$, where $n$ is even or odd. Then for a positive integer $k$, \begin{center} $im(P_3 \Box P_n) = \left\{ \begin{array}{ll} \lceil \frac{3n}{4} \rceil & \mbox{if} \;\; n \; \mbox {is even}; \\ \frac{3(n-1)}{4} & \mbox{if} \;\; n=4k+1 \\ \frac{3(n-1)+2}{4} & \mbox{if} \; \; n=4k+3\\ \end{array} \right.$ \end{center} \end{theorem} \section{Results} Now we present the results we have obtained in this work. First we give a result on induced matchings of the star graph $S_m$. \begin{theorem}\label{thm4} Let $S_m$ be a star graph whose central vertex $v_1$ is adjacent to $m-1$ leaves. Then $im(S_m)=1$. \end{theorem} (The implication of this result is that every star contributes at most one edge to an induced matching.) \begin{proof} Let $S_m$ be a star with central vertex $v_1$, and let $M$ be a nonempty induced matching of $S_m$. Every edge of $S_m$ is incident to $v_1$, so $v_1$ is saturated: $v_1v_k \in M$ for some $v_k \in V(S_m)$, $k \leq m$. Now for all $i$, $i \neq k$, the vertex $v_i \in V(S_m)$ is unsaturated, since $diam(S_m)=2$. Thus $im(S_m)=|M|=1$. \end{proof} Now we present our first results on the induced matching of the stacked-book graph $G_{m,n}$. \begin{lemma}\label{lem1} Suppose that $G_{m,n}$ is a stacked-book graph and that $M$ is a maximum induced matching of $G_{m,2}$. Then the central vertices $v_1$ and $u_1$ of the factor stars $S_m(1)$ and $S_m(2)$ are unsaturated. \end{lemma} \begin{proof} Let $v_1$ and $u_1$ be the central vertices of the stars $S_m(1)$ and $S_m(2)$, and suppose, for a contradiction, that $v_1$ is saturated. Then either $v_1v_i \in M$ for some leaf $v_i \in V(S_m(1))$, $2 \leq i \leq m$, or $v_1u_1 \in M$. Suppose that $v_1 v_i \in M$. By Theorem \ref{thm4}, no other edge of $S_m(1)$ is in $M$; moreover, for all $v_j \in V(S_m(1))$, $v_ju_j \notin M$, and since $d(v_1,u_1)=1$, no edge of $S_m(2)$ can be in $M$ either. The same argument holds if $u_1$ is saturated within $S_m(2)$. Thus $im(G_{m,2}) = 1$. Now suppose that $v_1u_1 \in M$. Since $d(v_1,v_i)=1=d(u_1,u_i)$ for all $i \in [2,m]$, the vertices $v_i,u_i$ are unsaturated for all $i \in [2,m]$, so again $|M|=1$. But clearly there exists a path $P_5$ in $G_{m,2}$, and by Theorem \ref{thm1} such a path supplies two edges for an induced matching of $G_{m,2}$. Thus we have a contradiction. \end{proof} Now we present the first theorem. \begin{theorem} Let $G_{m,2}$ be a stacked-book graph. Then $im(G_{m,2})=m-1$. \end{theorem} \begin{proof} Let $G_{m,2}$ be a stacked-book graph.
Then there exist $S_m(1), S_m(2) \subseteq G_{m,2}$ with vertices $v_1, v_2, \cdots, v_m$ and $u_1, u_2, \cdots, u_m$, and paths $P_5(i) = v_i \rightarrow u_i \rightarrow u_1 \rightarrow u_{i+1} \rightarrow v_{i+1}$ for $i \in [2,m-1]$. Thus, if $m$ is odd, there exists the set $\bar{P}=\left\lbrace P_5(2), P_5(4), \cdots , P_5(m-1)\right\rbrace$ of $\frac{m-1}{2}$ vertex-disjoint $P_5$-paths. Now, by Theorem \ref{thm1}, $im(P_5)= 2$. Clearly, $\bar{P}$ covers all the edges in $E(G_{m,2})$ that can be in $M$. Therefore, $im(G_{m,2}) \leq 2\left(\frac{m-1}{2} \right)=m-1$. Suppose that $m$ is even. Then set $P^*=\left\lbrace P_5(2), P_5(4), \cdots , P_5(m-2), P_3(t) \right\rbrace$, where $P_3(t)= v_m \rightarrow u_m \rightarrow u_1$. So $im(P^* \backslash P_3(t))=2 \left(\frac{m-2}{2}\right)=m-2$, and by an earlier result, $im(P_3(t))=1$. Therefore, $im(P^*)=m-1$. Hence, for any integer $m$, $im(G_{m,2}) \leq m-1$. Conversely, by the definitions of induced matching and stacked-book graph, the edges $v_2u_2, v_3u_3, \cdots, v_mu_m$ satisfy the distance conditions to belong to $M$. Thus, $im(G_{m,2}) \geq m-1$, and hence the claim. \end{proof} Next we consider the induced matching in $G_{m,3}$, where $m$ is either even or odd, and show that the graph has the same induced matching number as $G_{m,2}$. \begin{theorem}\label{thm5} Let $G_{m,3}$ be a stacked-book graph. Then $im(G_{m,3})=m-1$. \end{theorem} To prove Theorem \ref{thm5}, we need two results. The first one, which concerns the relation between induced matchings and distances between vertices, is more like a folklore result, because it follows from the definition of an induced matching. \begin{lemma}\label{lem2} Let $e_1$ be a member of an induced matching $M$ of a graph $G$. Then an edge $e_2 \in E(G)$ can also belong to $M$ only if there exist $v_1 \in e_1$ and $u_1 \in e_2$ such that $d(v_1,u_1)\geq 3$, and for all $v_2 \in e_1$ and $u_2 \in e_2$, $d(v_2,u_2) \geq 2$. \end{lemma} \begin{proof} The proof follows from the definition of the induced matching $M$ of a graph $G$. \end{proof} \begin{lemma} \label{lem3} Let $G_{m,3}$ be a stacked-book graph with factor star graphs $S_m(1)$, $S_m(2)$ and $S_m(3)$ such that $v_1 \rightarrow u_1 \rightarrow w_1$ is a $P_3$ path in $G_{m,3}$, where $v_1, u_1$ and $w_1$ are the central vertices of the respective factor stars. If $u_1$ is saturated, so that $u_1v_k \in M$ for some $v_k \in V(G_{m,3})$, then $|M|=1$, and thus $M$ is not a maximum induced matching of $G_{m,3}$. \end{lemma} \begin{proof} Suppose $u_1v_k \in M$. For a further edge $v_iv_j \in E(G_{m,3})$ to be in $M$, Lemma \ref{lem2} requires a vertex $v_i$ with $d(v_k,v_i)=3$, which is possible only because $diam(G_{m,3})= 3$. But then $v_i$ is a leaf of some $S_m(t)$, $t \in \left\lbrace 1,3\right\rbrace$, and its neighbour $v_j$ satisfies $d(u_1,v_j) =1$, a contradiction to Lemma \ref{lem2}, and hence the result. \end{proof} \subsection{Proof of Theorem \ref{thm5}} Now we proceed to prove Theorem \ref{thm5}. \begin{proof} Suppose that $|M| > m-1$. Let $v_1, u_1$ and $w_1$ be the central vertices of $S_m(1), S_m(2)$ and $S_m(3)$, respectively. Clearly, $v_1u_1, u_1w_1 \notin M$ by Lemma \ref{lem3}. Now, first we show that $v_1$ is not saturable. Suppose that $v_1$ is saturable; then $v_1v_q \in M$, where $v_q$ is a leaf of $S_m(1)$. By an earlier result, the subgraph induced by $S_m(1)$ and $S_m(2)$ does not contain another member of $M$. Also, let $v_qu_q \in E(G_{m,3})$, with $u_q \in S_m(2)$, and $u_qw_q \in E(G_{m,3})$, with $w_q \in S_m(3)$. By the earlier result, $u_qw_q \notin M$.
In like manner, if $w_1$ is saturated and $w_1w_q \in M$, then no other edge in the subgraph of $G_{m,3}$ induced by $S_m(2)$ and $S_m(3)$ is a member of $M$, and $v_qu_q \notin M$. Without loss of generality, suppose that $v_1v_q \in M$; then only the edges in $\bar{M}=\left\lbrace u_iw_i : i \in [2,m],\ i \neq q\right\rbrace \subset E(G_{m,3})$ can be further members of $M$. Thus $|\bar{M}|=m-2$, and so $|M|\leq m-1$, which is a contradiction. It has now been established that no pendant edge of $S_m(1), S_m(2)$ or $S_m(3)$ can be in $M$. Thus, the possible members of $M$ are the edges in $M' = \left\lbrace v_iu_i: i \in [2,m]\right\rbrace \cup \left\lbrace u_iw_i: i \in [2,m] \right\rbrace$. Clearly, $|M'|=2(m-1)$. By Lemma \ref{lem2}, only half of the members of $M'$ can be in $M$. Thus, $im(G_{m,3}) \leq m-1$. Reasonably, $im(G_{m,2}) \leq im(G_{m,3})$; by the earlier result, therefore, $im(G_{m,3}) \geq m-1$, and thus $im(G_{m,3}) = m-1$. \end{proof} Next we investigate the induced matching number of $G_{m,4}$. We start with a lemma that will be employed in the main result. \begin{lemma} \label{lem4} Let $G_{m,4}$ be a stacked-book graph such that $S_m(1), S_m(2), S_m(3)$ and $S_m(4)$ are the factor stars of $G_{m,4}$. Suppose that $im(G_{m,4}) \geq m$. Then, if $M' = \left\lbrace u_iw_i: i\in[2,m];\ u_i \in S_m(2), w_i \in S_m(3) \right\rbrace$, the set $M'$ is not a subset of $M$. \end{lemma} \begin{proof} It is easy to see that $|M'| = m-1$. Now, suppose that $M' \subset M$; then $u_i, w_i$ are saturated for all $i \in [2,m]$. Thus, no vertex $v_i \in S_m(1)$ or $r_i \in S_m(4)$ is saturable for $i \in [2,m]$, which implies that $im(G_{m,4})=m-1$, and thus a contradiction. \end{proof} Next we consider the main theorem. \begin{theorem}\label{thm6} Let $S_m(1), S_m(2), S_m(3)$ and $S_m(4)$ be the factor star graphs of the stacked-book graph $G_{m,4}$. Then $im(G_{m,4})=m$. \end{theorem} \begin{proof} By Lemma \ref{lem4}, at least one edge in \\$M' = \left\lbrace u_iw_i: i\in[2,m];\ u_i\in S_m(2), w_i \in S_m(3) \right\rbrace$ is not in $M$. Suppose therefore that $u_kw_k \notin M$. Then for $v_k\in S_m(1)$ and $r_k \in S_m(4)$, we may take $v_1v_k, r_1r_k \in M$, where $v_1$ and $r_1$ are the central vertices of $S_m(1)$ and $S_m(4)$, respectively. Thus, $im(G_{m,4}) \geq m$. Conversely, suppose that $im(G_{m,4})=m+1$. Now, let $u_1, w_1$ be the central vertices of $S_m(2)$ and $S_m(3)$, respectively. Suppose that one of $u_i,w_i$, say $u_i$, is saturated such that $u_1u_i \in M$. Then, by the earlier result, no other edge in the subgraph of $G_{m,4}$ induced by $S_m(1)$, $S_m(2)$ and $S_m(3)$ is contained in $M$. Likewise, if $w_1w_i \in M$, then all other vertices of the subgraph of $G_{m,4}$ induced by $S_m(2)$, $S_m(3)$ and $S_m(4)$ are unsaturable. Hence, if any pendant edge of $S_m(2)$ or $S_m(3)$ is in $M$, then $|M|\leq 2$. Note as well that if $u_1w_1 \in M$, then, by the distances of $u_1$ and $w_1$ to the rest of the vertices of $S_m(1), S_m(2), S_m(3)$ and $S_m(4)$, only $u_1w_1$ will be in $M$. Thus, for an optimal $M$, some members of $M''= \left\lbrace v_iu_i: i \in [2,m]\right\rbrace$ or $M'''= \left\lbrace w_ir_i: i \in[2,m] \right\rbrace$ have to be in $M$. Now, clearly, $|M' \cup M''|=2(m-1)$, and only $m-1$ members of $M' \cup M''$ can be in $M$. Based on this observation, there will exist at least one $w_i \in S_m(3)$ that is not saturable. Thus, there exists a saturable vertex $r_i \in S_m(4)$ such that $r_1r_i \in M$. By the earlier result, no other pendant edge of $S_m(4)$ is in $M$.
Thus, $im(G_{m,4}) < m+1$, and hence a contradiction. Therefore, $im(G_{m,4}) \leq m$, and the claim follows. \end{proof} Now we consider the case of $G_{m,5}$. We shall need some new results to aid the proof. \begin{lemma}\label{lem5} Suppose that $w_1 \in S_m(3)$ is the central vertex of $S_m(3)$, where $\left\lbrace S_m(i): i \in [1,5]\right\rbrace$ is the set of factor stars of $G_{m,5}$. If $w_1$ is saturated, then for any induced matching $M$ of $G_{m,5}$, $|M| \leq 2m-3$. \end{lemma} \begin{proof} Suppose that $w_1$, the central vertex of $S_m(3)$, is saturated. Then one of $w_1w_k$, $u_1w_1$ and $w_1r_1$ belongs to $M$, where $u_1, r_1$ are the central vertices of $S_m(2)$ and $S_m(4)$, respectively. Suppose that $w_1w_k \in M$, where $k \leq m$. Now for all $i \in [2,m]$, $i \neq k$, the vertex $w_i \in S_m(3)$ is unsaturable by earlier results. Thus members of $\left\lbrace u_iw_i: i \in [2,m] \right\rbrace$ and $\left\lbrace w_ir_i: r_i \in S_m(4),\ i \in [2,m] \right\rbrace$ do not belong to $M$. It is also clear that both edges $v_ku_k, r_kt_k \notin M$, where $t_k \in S_m(5)$. Now, by the earlier technique, it can be deduced that $v_1v_i, t_1t_i \notin M$ for all $i \in [2,m]$. Thus, besides $w_1w_k$ itself, only the edges in $E'=\left\lbrace v_iu_i: i \in [2,m],\ i \neq k \right\rbrace$ and $E'' = \left\lbrace r_it_i: i \in [2,m],\ i \neq k \right\rbrace$ can be in $M$. Clearly, $|E' \cup E''|=2(m-2)$, so $|M|\leq 2m-3$. Also, if $u_1w_1 \in M$, it can be seen from the definition of an induced matching that no other edge in the subgraph of $G_{m,5}$ induced by $S_m(1), S_m(2)$ and $S_m(3)$ is a member of $M$, and from earlier results, only $m-1$ edges of the subgraph of $G_{m,5}$ induced by $S_m(3), S_m(4)$ and $S_m(5)$ can be in $M$. Thus, $M$ consists of at most $m$ edges, which is not more than $2m-3$, since $m \geq 3$. The case $w_1r_1 \in M$ is symmetric. \end{proof} \begin{lemma}\label{lem6} Suppose that $im(G_{m,5}) \geq 2(m-1)$. Then $u_1, w_1$ and $r_1$, the central vertices of $S_m(2), S_m(3)$ and $S_m(4)$, respectively, are unsaturated. \end{lemma} \begin{proof} The proof follows from the last lemma and an earlier result. \end{proof} Now we proceed to probe the induced matching number of $G_{m,5}$. \begin{theorem} Let $G_{m,5}$ be a stacked-book graph. Then $im(G_{m,5}) = 2(m-1)$. \end{theorem} \begin{proof} From the last results, we see that if any of $u_1, w_1, r_1$ is saturated, then $|M| \leq 2m-3 < 2(m-1)$. Now we show that $im(G_{m,5}) \geq 2(m-1)$. Note that there exists a path $P_5(i) = v_i \rightarrow u_i \rightarrow w_i \rightarrow r_i \rightarrow t_i$ for each $i \in [2,m]$; therefore, there are $m-1$ such paths in $G_{m,5}$. From earlier results, $im(P_5)=2$. Thus, $im(G_{m,5})\geq 2(m-1)$. Conversely, $u_1, w_1, r_1$ have been established to be unsaturated if the claim is to hold. The edges of $E(G_{m,5})$ left to be members of $M$ are the pendant edges of $S_m(1)$ and $S_m(5)$ and the edges of the paths $P_5(i)$ defined earlier. Suppose that a pendant edge from each of $S_m(1)$ and $S_m(5)$ belongs to $M$; then, by the definition of an induced matching, at most one edge on each of the paths $P_5(i)$ can be a member of $M$, so $|M|=m+1$. The only alternative is that no pendant edge of $S_m(1)$ or $S_m(5)$ is a member of $M$; then at most two edges on each path $P_5(i)$ can be in $M$. Thus, $|M| \leq 2(m-1)$, and so $im(G_{m,5})=2(m-1)$. \end{proof} Now we generalize the results. \begin{theorem} Let $G_{m,n}$ be a stacked-book graph such that $n$ is even. Then \begin{center} $im(G_{m,n}) \geq \left\{ \begin{array}{ll} m\lceil \frac{n}{4} \rceil -1 & \mbox{if} \;\; n \equiv 2 \mod 4; \\ \frac{mn}{4} & \mbox{if} \;\; n \equiv 0 \mod 4.
\end{array} \right.$ \end{center} \end{theorem} \begin{proof} The claims follow by combining the results proved earlier for even $n$. \end{proof} \begin{theorem} Let $G_{m,n}$ be a stacked-book graph with $n$ odd. Then \begin{center} $im(G_{m,n}) \geq \left\{ \begin{array}{ll} m\lfloor \frac{n}{4} \rfloor +2 & \mbox{if} \;\; n \equiv 3 \mod 4; \\ \frac{mn+3m-8}{4} & \mbox{if} \;\; n \equiv 1 \mod 4. \end{array} \right.$ \end{center} \end{theorem} We have established lower bounds for the MIM numbers of the stacked-book graphs. From our preliminary work on establishing tighter bounds, we have reason to suggest that the results in the last two theorems may coincide with the upper bounds, and thus we state the conjectures below. \begin{conj} Let $G_{m,n}$ be a stacked-book graph such that $n$ is even. Then \begin{center} $im(G_{m,n}) = \left\{ \begin{array}{ll} m\lceil \frac{n}{4} \rceil -1 & \mbox{if} \;\; n \equiv 2 \mod 4; \\ \frac{mn}{4} & \mbox{if} \;\; n \equiv 0 \mod 4. \end{array} \right.$ \end{center} \end{conj} \begin{conj} Let $G_{m,n}$ be a stacked-book graph with $n$ odd. Then \begin{center} $im(G_{m,n}) = \left\{ \begin{array}{ll} m\lfloor \frac{n}{4} \rfloor +2 & \mbox{if} \;\; n \equiv 3 \mod 4; \\ \frac{mn+3m-8}{4} & \mbox{if} \;\; n \equiv 1 \mod 4. \end{array} \right.$ \end{center} \end{conj} \section{Conclusion} We have obtained the MIM numbers of stacked-book graphs $G_{m,n}$ for all $m$ and for $n \in [1,5]$. These results are building blocks for obtaining the lower bounds for the cases where $n \geq 6$. The conjectures at the end of the work suggest that the lower bounds obtained here are in fact equal to the upper bounds, if those can be found. It must be noted that finding the upper bounds, and hence the MIM numbers, for the complete class of stacked-book graphs will take rigorous effort and may therefore be worth considering as a new task.
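The conjectured values can be probed by exhaustive search on small instances. The following brute-force sketch (our own illustrative code, assuming the networkx package; it is feasible only for very small $m$ and $n$) constructs $G_{m,n}=S_m \Box P_n$ and computes $im(G_{m,n})$ directly, and it can serve as a sanity check of the small cases treated above:
\begin{verbatim}
import networkx as nx
from itertools import combinations

def is_induced_matching(G, M):
    for (a, b), (c, d) in combinations(M, 2):
        if {a, b} & {c, d} or any(G.has_edge(u, v)
                                  for u in (a, b) for v in (c, d)):
            return False
    return True

def im_number(G):
    """Maximum induced matching number by exhaustive search."""
    edges = list(G.edges())
    k = 0
    while any(is_induced_matching(G, M)
              for M in combinations(edges, k + 1)):
        k += 1
    return k

def stacked_book(m, n):
    """G_{m,n} = S_m box P_n (star on m vertices, path on n vertices)."""
    return nx.cartesian_product(nx.star_graph(m - 1), nx.path_graph(n))

for m, n in [(3, 2), (4, 2), (3, 3)]:
    print(m, n, im_number(stacked_book(m, n)))  # expect m - 1 each
\end{verbatim}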
{ "timestamp": "2022-09-23T02:10:40", "yymm": "2209", "arxiv_id": "2209.10855", "language": "en", "url": "https://arxiv.org/abs/2209.10855" }
\section{Introduction} In 1969, R. Penrose theoretically predicted the effect of negative-energy formation in the Kerr metric during a decay or collision process. The nature of the geodesics of particles with negative energy was investigated further in~\cite{bib:grib, bib:ver4}. It was shown that in the ergosphere of a rotating black hole, closed orbits for such particles are absent; these geodesics must originate from the region inside the gravitational radius. There has also been research devoted to particles with negative energy in the Schwarzschild spacetime, conducted by A. A. Grib and Yu. V. Pavlov~\cite{bib:pavlov}. They showed that particles with negative energy can exist only in the region inside the event horizon. However, the Schwarzschild black hole is an eternal one, and we must consider gravitational collapse in order to speak about the past of the geodesics of particles with negative energy. A black hole was long considered to be the only result of critical gravitational collapse, but P. Joshi~\cite{bib:joshi} showed that the result of gravitational collapse might instead be a naked singularity (for detailed information see~\cite{bib:rev, bib:ver6}). This means that, during the gravitational collapse process, the time of the singularity formation is less than the time of the apparent horizon formation, and there exists a family of non-spacelike, future-directed geodesics which terminate at the central singularity in the past. M. D. Mkenyeleye et al.~investigated the question of the gravitational collapse of the generalized Vaidya spacetime~\cite{bib:mah} and showed that the result of this collapse might be a naked singularity. Conditions on the mass function were obtained later~\cite{bib:ver1, bib:ver2}. The Vaidya spacetime provides one of the earliest examples of cosmic censorship conjecture violation~\cite{bib:pap}. The usual Vaidya spacetime has the following form: \begin{equation} ds^2=-\left(1-\frac{2M(v)}{r} \right )dv^2+2 \varepsilon dvdr+r^2(d\theta^2+\sin^2\theta d\varphi^2) \,, \end{equation} where $v$ represents time and $\varepsilon=\pm 1$ corresponds to ingoing or outgoing radiation. The Vaidya spacetime is known as the radiating Schwarzschild spacetime. Here $M$ is a function of time; if the mass does not depend on time, we recover the Schwarzschild spacetime written in terms of the null coordinate $v$. If $M$ depends not only on $v$ but also on $r$, then we have the generalized Vaidya spacetime. In the usual Vaidya spacetime the right-hand side of the Einstein equations is non-zero. The matter is a type-I matter field and represents so-called null dust. The energy-momentum tensor has the following form: \begin{equation} T_{ik}=\mu L_i L_k \,, \end{equation} where $\mu$ is the energy density of the null dust and $L_i$ is a null vector: \begin{equation} L_{i}=\delta^0_{i} \,. \end{equation} The properties of the generalized Vaidya spacetime will be described below. If a naked singularity forms, then we face the question of negative energy in this case. Particles with negative energy are not observed in our universe, so in the case of the naked singularity formation these particles must be forbidden. We consider one explicit model, the Vaidya-Anti-de Sitter spacetime, in which there is an eternal naked singularity, and then we consider the negative energy problem in the generalized Vaidya spacetime when the type-II matter field satisfies the equation of state $P=\alpha \rho$, where $\alpha \in [0\,, 1]$. This paper is organized as follows: in Sec.
2 we briefly discuss the methods used in this paper; in Sec. 3 we describe the generalized Vaidya spacetime; in Sec. 4 we consider the negative energy problem in the Vaidya-Anti-de Sitter spacetime; in Sec. 5 we consider this problem in the generalized Vaidya spacetime; Sec. 6 is the discussion. The system of units $G=c=1$ is used throughout the paper. Latin indices take the values $0\,, 1\,, 2\,, 3$ and Greek indices the values $1\,, 2\,, 3$. \section{Geodesic Equation, \\ Energy and Angular Momentum} In this section we provide the methods necessary for this research. This paper is devoted to the geodesics of particles with negative energy, so first of all we should present the geodesic equations and the energy expression. The geodesic equation in a metric $g_{ik}$ is given by \begin{equation} \label{eq:a} \frac{d^2x^i}{d\lambda ^2}+\Gamma^i_{jk}\frac{dx^j}{d\lambda}\frac{dx^k}{d\lambda}=0 \,, \end{equation} where the $\Gamma^i_{jk}$ are the Christoffel symbols \begin{equation} \Gamma^i_{jk}=\frac{1}{2}g^{im}\left(g_{jm,k}+g_{mk,j}-g_{jk,m}\right ) \,, \end{equation} in which a comma denotes the partial derivative, \begin{equation} g_{ik,j} =\frac{\partial g_{ik}}{\partial x^j} \,, \end{equation} and $g^{ik}$ are the contravariant components of the metric tensor, \begin{equation} g^{ik}=\frac{g_{IK}}{|g|} \,, \end{equation} where $g_{IK}$ is the cofactor of the element $g_{ik}$ of the matrix $(g_{ik})$ and $|g|$ is its determinant. The equations \eqref{eq:a} are second-order ordinary differential equations, but for our purpose it is better to use first-order equations. For this aim we use the Lagrangian, which is given by \begin{equation} \label{eq:laga} \mathcal{L}=\frac{1}{2}g_{ik}\frac{dx^i}{d\lambda}\frac{dx^k}{d\lambda} \,. \end{equation} We will consider spherically symmetric spacetimes, so in our case we can put $\theta=\frac{\pi}{2}$ and use only three coordinates $\{v\,, r\,, \varphi \}$, where $v$ is the time. We can obtain the energy and angular momentum expressions using \eqref{eq:laga}, i.e., \begin{equation} \label{eq:enam} \begin{split} -E=\frac{\partial \mathcal{L}}{\partial \dot{v}}=g_{00}\dot{v}+g_{0\alpha}\dot{x}^\alpha \,, \\ L=\frac{\partial \mathcal{L}}{\partial \dot{\varphi}}=g_{22}\dot{\varphi}+g_{2 \alpha}\dot{x}^\alpha \,, \end{split} \end{equation} where a dot denotes the derivative with respect to the affine parameter $\lambda$. The expressions \eqref{eq:enam} give only two equations; to solve this system we need one more equation, for which we can use \begin{equation} \label{eq:rada} g_{ik}\frac{dx^i}{d\lambda}\frac{dx^k}{d\lambda}=\delta \,, \end{equation} where $\delta =+1\,, -1\,, 0$ denotes a spacelike, timelike or null geodesic, respectively. The proof that \eqref{eq:enam} and \eqref{eq:rada} are geodesic equations equivalent to \eqref{eq:a} can be found in~\cite{bib:fom}. We have one important condition: the particles must move forward in time, i.e., \begin{equation} \dot{v}=\frac{dv}{d\lambda}>0\,. \end{equation} If this condition is satisfied, then we can note that if $g_{0\alpha}=0$ in \eqref{eq:enam}, then negative energy is impossible wherever $-g_{00}$ is positive. If we consider $g_{00}>0$, then this situation corresponds to a black hole interior, and particles with negative energy can exist only in the region inside the event horizon. Thus, to have particles with negative energy outside the event horizon, the metric must have an off-diagonal term $g_{0 \alpha}\neq 0$.
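To make the procedure concrete, the following small Python sketch (our own illustrative code) solves the algebraic system \eqref{eq:enam}, \eqref{eq:rada} for the velocities in a metric of the form $ds^2=-f\,dv^2+2\,dv\,dr+r^2d\varphi^2$ at $\theta=\pi/2$, which anticipates the form used in the following sections; the sample values of $f$, $E$ and $L$ are assumptions chosen purely for illustration:
\begin{verbatim}
import numpy as np

def velocities(f, r, E, L, delta=-1.0, outgoing=True):
    """Solve Eqs. (enam) and (rada) for (dv, dr, dphi)/dlambda
    in ds^2 = -f dv^2 + 2 dv dr + r^2 dphi^2 at theta = pi/2.
    delta = -1 for timelike and 0 for null geodesics."""
    disc = E**2 - f * (L**2 / r**2 - delta)
    if disc < 0:
        raise ValueError("motion is forbidden at this radius")
    dr = np.sqrt(disc) if outgoing else -np.sqrt(disc)
    dv = (E + dr) / f        # from -E = -f*dv + dr
    dphi = L / r**2
    return dv, dr, dphi

# Consistency check for Schwarzschild-like values, f = 1 - 2M/r, M = 1:
r = 10.0
f = 1.0 - 2.0 / r
dv, dr, dphi = velocities(f, r, E=0.95, L=3.5)
print(-f * dv**2 + 2 * dv * dr + r**2 * dphi**2)   # ~ -1 = delta
print(dv > 0)                                      # forward in time
\end{verbatim}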
In this paper we consider the generalized Vaidya spacetime, which contains the off-diagonal term $2dvdr$, so in this case we might expect particles with negative energy to exist outside the apparent horizon. As stated above, we want to prove the absence of particles with negative energy outside the apparent horizon, in particular in the case of the naked singularity formation in the generalized Vaidya spacetime. We have a naked singularity formation as the result of continuous gravitational collapse if the time of the apparent horizon formation is greater than the time of the singularity formation; moreover, there must exist a family of non-spacelike, future-directed geodesics which terminate at the central singularity in the past. \section{The Generalized Vaidya Spacetime} The generalized Vaidya spacetime is widely used to describe models of gravitational collapse~\cite{bib:mah, bib:mah2, bib:ver1, bib:ver2, bib:ver3, bib:ver5} and the exterior metric of radiating stars~\cite{bib:l1, bib:l2, bib:l3, bib:l4}. The generalized Vaidya spacetime has the following form~\cite{bib:vunk}: \begin{equation} \begin{split} ds^2=-\left ( 1-\frac{2M(r,v)}{r} \right ) dv^2+2dvdr+r^2d\Omega^2 \,, \\ d\Omega^2=d\theta^2+\sin^2\theta d\varphi^2 \,, \end{split} \end{equation} where $M(v,r)$ is the mass function, which depends on the coordinate $r$ and on $v$, the advanced/retarded time. The generalized Vaidya spacetime differs from the usual Vaidya spacetime only by the dependence of the mass function on both $r$ and $v$. The right-hand side of the Einstein equations is a mixture of two matter fields, of type I and type II~\cite{bib:hok}. As in the Vaidya spacetime, the type-I matter field is purely null dust, while the type-II matter field corresponds to cosmic strings. We can write down the energy-momentum tensor in the following form: \begin{equation} T_{ik}=T^{(n)}_{ik}+T^{(m)}_{ik}\,, \end{equation} where the first term corresponds to the type-I matter field and the second to the type-II matter field. The components of the energy-momentum tensor are~\cite{bib:vunk} \begin{equation} \label{eq:ten} \begin{split} T^{(n)}_{ik}=\mu L_{i}L_{k}\,, \\ T^{(m)}_{ik}=(\rho+P)(L_{i}N_{k}+L_{k}N_{i})+Pg_{ik} \,, \\ \mu=\frac{2 \dot{M}}{ r^2} \,, \\ \rho=\frac{2M'}{ r^2} \,, \\ P=-\frac{M''}{r} \,, \\ L_{i}=\delta^0_{i} \,, \\ N_{i}=\frac{1}{2} \left (1-\frac{2M}{r} \right )\delta^0_{i}-\delta^1_{i} \,, \\ L_{i}L^{i}=N_{i}N^{i}=0 \,, \\ L_{i}N^{i}=-1 \,, \end{split} \end{equation} where $P$ is the pressure and $\rho$ the density of the type-II matter field, $\mu$ is the density of the null dust, $L, N$ are two null vectors, a dot denotes $\partial/\partial v$, and a prime denotes $\partial/\partial r$. For this model to be physically reasonable, the energy-momentum tensor should satisfy the weak, strong and dominant energy conditions~\cite{bib:pos}. This means that for any non-spacelike vector $v^{i}$, \begin{equation} \label{eq:emt} T_{ik} v^{i} v^{k}>0\,, \end{equation} and the vector $T_{ik}v^{i}$ must be non-spacelike. The weak energy condition also demands that the energy densities be non-negative, i.e., \begin{equation} \rho >0 \,, \mu > 0 \,. \end{equation} The dominant energy condition means that the energy density should not be less than the pressure; if it is violated, then the matter propagates along spacelike directions, which is unphysical. The strong and weak energy conditions demand \begin{equation} \begin{split} \label{eq:ws} \mu \geq 0 \,, \\ \rho \geq 0 \,, \\ P\geq 0 \,.
\end{split} \end{equation} The dominant energy condition imposes the following conditions on the energy-momentum tensor: \begin{equation} \label{eq:dom} \begin{split} \mu \geq 0 \,, \\ \rho \geq P\geq 0\,. \end{split} \end{equation} The properties of the generalized Vaidya spacetime have been studied for the equation of state $P=\alpha \rho$, with $\alpha$ in the interval $[0\,, 1]$, in the articles~\cite{bib:ver1, bib:ver2}. If this equation of state holds then, by \eqref{eq:ten}, $-\frac{M''}{r}=\alpha\frac{2M'}{r^2}$, i.e., $M''=-\frac{2\alpha}{r}M'$, and integrating twice shows that the mass function $M(r,v)$ has the form \begin{equation} \label{eq:mass} M(r,v)=C(v)+D(v)r^{1-2\alpha}\,, \end{equation} where $C(v)$ and $D(v)$ are arbitrary functions of the time $v$. It is also worth noting that the case $\alpha = \frac{1}{2}$ is special (the integration then yields a logarithm), and we do not consider it here. When we speak about the naked singularity formation, we mean that the time of the singularity formation is less than the time of the apparent horizon formation, and that there is a family of non-spacelike, future-directed geodesics which terminate at the central singularity in the past. The singularity forms at the time $v=0$ at $r=0$. We therefore need the equation of the apparent horizon. To derive it, we consider the vector \begin{equation} u^i=2\frac{\partial}{\partial v}+\left ( 1-\frac{2M(v,r)}{r} \right )\frac{\partial}{\partial r} \,, \end{equation} which is tangent to a congruence of outgoing radial null curves. However, it does not satisfy the affinely parametrized geodesic equation: \begin{equation} u_{i;k}u^k=\xi u_i \neq 0 \,, \end{equation} where a semicolon denotes the covariant derivative and \begin{equation} \label{eq:term} \xi =2\frac{M(v,r)-rM'(v,r)}{r^2} \,. \end{equation} Hence we cannot compute the expansion of the outgoing null geodesics from $u^i$ directly. We first pass to an affine parametrization by means of the rescaled tangent vector $k^i$, \begin{equation} \label{eq:vector} k^i=e^{-\gamma}u^i \,, \end{equation} where \begin{equation} \frac{\partial \gamma}{\partial \lambda} = \xi (\lambda) \end{equation} and $\lambda$ is the original parameter. The expansion $\Theta$ is \begin{equation} \label{eq:ex} \Theta = k^i_{;i} \,; \end{equation} using \eqref{eq:term}, \eqref{eq:vector} and \eqref{eq:ex}, we can write down the expansion as \begin{equation} \Theta =2\frac{e^{-\gamma}}{r} \left (1-\frac{2M(v,r)}{r} \right ) \,. \end{equation} Note that the prefactor $2\frac{e^{-\gamma}}{r}$ has no impact on the sign of $\Theta$. Finally, we obtain the apparent horizon equation \begin{equation} 1-\frac{2M(v,r)}{r}=0 \,. \end{equation} \section{The Negative Energy in the Vaidya-Anti-de Sitter Spacetime} First of all, we decided to consider the energy problem in the Vaidya-Anti-de Sitter spacetime because in this case: \begin{itemize} \item there is an eternal naked singularity, i.e., the singularity is never covered by the apparent horizon; \item under some conditions there are two apparent horizons; \item in the limit $r \to \infty$ the spacetime becomes Minkowski spacetime and, as for the Kerr black hole, the energy which we consider is the energy with respect to infinity. \end{itemize} In the case of the Vaidya-Anti-de Sitter spacetime, the type-II matter field satisfies the equation of state \begin{equation} P=\rho \,, \end{equation} i.e., $\alpha=1$, and using \eqref{eq:mass} we obtain \begin{equation} M(v,r)=C(v)+D(v)r^{-1} \,. \end{equation} Here we must consider the function $D(v)$ properly.
It cannot be positive, because the density is given by \begin{equation} \rho= -\frac{2D(v)}{r^4} \,, \end{equation} and if $D(v)>0$ then $\rho<0$ and we violate the energy conditions \eqref{eq:ws} and \eqref{eq:dom}. So, to satisfy these conditions, we must assume that $D(v)<0$, i.e., we can write \begin{equation} D(v)=-D'(v) \,, \end{equation} where now the function $D'(v)$ is positive. We want to investigate the singularity in this spacetime which is formed at $v=0$ and $r=0$. In this case we must impose some extra conditions on the functions $C(v)$ and $D'(v)$ in order not to violate the energy conditions \eqref{eq:ws} and \eqref{eq:dom}, because if $\dot{C}(v)<\frac{\dot{D'}(v)}{r}$ at some point, then $\mu$ becomes negative and the energy conditions are violated. So we must consider two cases: \begin{enumerate} \item $D'(v)\equiv \mu_0$, where $\mu_0$ is a positive real constant. In this case $\dot{D'}(v)\equiv 0$, and if we impose the condition $\dot{C}(v) \geq 0$, then we satisfy all the energy conditions. \item $D'(0)=0$. But in this case, even if $\lim\limits_{v\to 0, r\to 0} \frac{D'(v)}{r}=X_1$, where $X_1$ is a finite positive constant, and if $C(v)>X_1$, we would anyway violate the energy conditions at a later stage. \end{enumerate} So the Vaidya-Anti-de Sitter spacetime is given by \begin{equation} ds^2=-\left( 1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right )dv^2+2dv dr+r^2(d\theta^2+\sin^2\theta d\varphi^2 ) \,. \end{equation} The apparent horizon equation in the case of the Vaidya-Anti-de Sitter spacetime is given by \begin{equation} \frac{2C(v)}{r}-\frac{2\mu_0}{r^2}-1=0 \,. \end{equation} The time of the singularity formation is $v=0$. Solving the above equation, we obtain \begin{equation} \label{eq:vad} \begin{split} r^2-2C(v)r+2\mu_0 =0 \,, \\ \frac{\mathcal{D}}{4}=C^2(v)-2\mu_0 \,, \end{split} \end{equation} where $\mathcal{D}$ is the discriminant of the quadratic. From this equation we can conclude that if \begin{equation} \label{eq:conenf} C^2(v)< 2\mu_0 \,, \end{equation} then the apparent horizon is never formed. In this paper we will not prove that radial null geodesics can terminate at the central singularity in the past. To obtain the energy expression we use the Lagrangian \begin{equation} \mathcal{L}=\frac{1}{2} g_{ik}\dot{x}^i\dot{x}^k \,. \end{equation} In the Vaidya-Anti-de Sitter spacetime the Lagrangian is given by \begin{equation} \label{eq:lagva} 2\mathcal{L}=-\left(1-\frac{2M(v,r)}{r}\right )\dot{v}^2+2\dot{v}\dot{r} +r^2(\dot{\theta}^2+\sin^2\theta \dot{\varphi}^2) \,. \end{equation} Due to the spherical symmetry we can put $\theta=\frac{\pi}{2}$ and $\dot{\theta}=0$. Using \eqref{eq:lagva} we can obtain the energy and angular momentum expressions \begin{equation} \label{eq:en} \begin{split} -E(v)=\frac{\partial \mathcal{L}}{\partial \dot{v}} =-\left (1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right ) \dot{v} +\dot{r} \,, \\ L=\frac{\partial \mathcal{L}}{\partial \dot{\varphi}}=r^2\dot{\varphi} \,, \end{split} \end{equation} where $E(v)$ and $L$ are the energy and angular momentum, respectively, and a dot represents the derivative with respect to the affine parameter $\lambda$. The radial geodesic equation is given by \begin{equation} \label{eq:geodesic} \dot{r}=\pm\sqrt{E^2(v)-\left(1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right ) \left(\frac{L^2}{r^2}-\delta \right )}\,, \end{equation} where $\delta=-1$ in the case of timelike geodesics and $\delta=0$ in the case of null ones. The Vaidya-Anti-de Sitter spacetime contains an eternal naked singularity if the condition \eqref{eq:conenf} is satisfied; in this case $\Theta$ is always positive.
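The sign structure underlying the argument of the next paragraphs can be verified numerically. A small sketch (our own illustrative code; the values of $C$, $\mu_0$, $E$ and $L$ are assumptions chosen for illustration) scans the metric function $f(r)=1-2C/r+2\mu_0/r^2$ and confirms that, under the no-horizon condition \eqref{eq:conenf}, $f$ stays positive everywhere and the combination $E+\sqrt{E^2-f(L^2/r^2-\delta)}$ is negative for $E<0$ wherever it is real:
\begin{verbatim}
import numpy as np

C, mu0 = 1.0, 0.6        # sample values with C^2 = 1.0 < 2*mu0 = 1.2
r = np.linspace(1e-3, 50.0, 100000)
f = 1.0 - 2.0 * C / r + 2.0 * mu0 / r**2
print(f.min() > 0.0)     # True: no apparent horizon ever forms

E, L, delta = -1.0, 2.0, -1.0
disc = E**2 - f * (L**2 / r**2 - delta)
real = disc >= 0.0       # radii where the radial motion is allowed
print(np.all(E + np.sqrt(disc[real]) < 0.0))  # True: dv > 0 fails
\end{verbatim}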
We also have the condition $\dot{v}>0$, because we must move forward in time. The first term in the energy expression \begin{equation} E(v)=\left( 1- \frac{2C(v)-2\mu_0 r^{-1}}{r} \right ) \dot{v}-\dot{r} \end{equation} is then always positive. Thus, to obtain negative energy, $\dot{r}$ must be positive: if particles with negative energy exist, they must move from the center towards infinity. Due to these conditions and the equation \eqref{eq:geodesic}, we obtain \begin{equation} \label{eq:cond} \left (1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right ) \dot{v} =E+\sqrt{E^2(v)-\left(1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right ) \left(\frac{L^2}{r^2}-\delta \right )}\,. \end{equation} Let us look at the expression under the root. We can rewrite it in the form \begin{equation} \label{eq:root} E^2+g_{00}\left(\frac{L^2}{r^2}-\delta \right ) \,. \end{equation} The expression in round brackets is positive, since $\delta=0$ or $\delta=-1$, and $g_{00}<0$ is the condition for the positivity of the expansion $\Theta$. So $g_{00}\left( \frac{L^2}{r^2}-\delta \right ) <0$, the whole expression \eqref{eq:root} is less than $E^2$, and the square root is less than $|E|$. For $E<0$ it follows that the right-hand side of the equation \eqref{eq:cond} satisfies \begin{equation} E+\sqrt{E^2+g_{00}\left( \frac{L^2}{r^2}-\delta \right)}<E+|E|=0 \,, \end{equation} i.e., it is negative, while the left-hand side is positive. So negative energy in the case of the naked singularity is possible only if we violate the condition $\dot{v}>0$. We conclude that there are no non-spacelike geodesics for particles with negative energy in the case of the naked singularity. In the next section we will show that in the generalized Vaidya spacetime, whenever we have the naked singularity formation, particles with negative energy are likewise forbidden. Now let us consider the violation of the condition \eqref{eq:conenf}. In this case the equation \eqref{eq:vad} has two different roots: \begin{equation} r_{\pm }=C(v)\pm \sqrt{C^2(v)-2\mu_0} \,. \end{equation} In this case we have two different apparent horizons, at $r=r_-$ and $r=r_+$, and the expansion $\Theta$ is positive in two regions: \begin{equation} \begin{split} 0 \leq r < r_-:\ \Theta >0 \,, \\ r_-<r<r_+:\ \Theta <0 \,, \\ r_+<r<\infty:\ \Theta>0 \,. \end{split} \end{equation} Again, if we consider the particles with negative energy in the regions where $\Theta$ is positive, then we are in the previous case, and such particles are forbidden. If we consider the region where $\Theta$ is negative, then we should put $g_{00}>0$. So let us consider this case. The energy expression has the same form as \eqref{eq:en}: \begin{equation} E(v)=\left( 1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right ) \dot{v}-\dot{r} \,. \end{equation} In this case we have two possibilities: \begin{enumerate} \item $\dot{r}>0$, the movement from the singularity towards infinity; \item $\dot{r}<0$, the movement towards the singularity. \end{enumerate} Let us consider the first case, $\dot{r}>0$. As we showed in the previous case, with $g_{00}<0$ the expression \eqref{eq:root} is less than $E^2$; in the present case $g_{00}>0$, so the expression \eqref{eq:root} is bigger than $E^2$ and the square root exceeds $|E|$. Hence the right-hand side of \eqref{eq:cond} is greater than zero, while the left-hand side, $-g_{00}\dot{v}$ with $g_{00}>0$, is negative. This means that, again, non-spacelike geodesics for particles with negative energy are forbidden, as they would require violating the condition $\dot{v}>0$. Now let us turn to the second case, $\dot{r}<0$. Here we have \begin{equation} \left ( 1-\frac{2C(v)-2\mu_0 r^{-1}}{r}\right ) \dot{v}=E(v)-\sqrt{E^2+g_{00}\left (\frac{L^2}{r^2}-\delta \right )} \,.
\end{equation} We can see that the right-hand side of the expression above is always negative, and in this region the left-hand side is negative as well, since $g_{00}>0$ and $\dot{v}>0$. In this case, then, non-spacelike geodesics for particles with negative energy are not forbidden. Now let us consider the question of the existence of circular or closed non-circular orbits. Both cases demand that the effective potential vanish at some point $r_0$. First of all, let us define the effective potential, which is given by \begin{equation} \label{eq:sb} v_{eff}=-(\dot{r})^2=-\left [E^2+g_{00}\left (\frac{L^2}{r^2}-\delta \right ) \right] \,. \end{equation} We know that $E<0$, and both $g_{00}$ and $\frac{L^2}{r^2}-\delta$ are positive in this region. So the whole expression in square brackets in \eqref{eq:sb} is strictly greater than zero, and thus there is no $r_0$ such that $v_{eff}(r_0)=0$. Hence closed non-circular and circular orbits are absent in this case. We have shown that in the region $r_-<r<r_+$ the particles with negative energy can exist. However, in this particular case this is unphysical, because we demand $g_{00}>0$, which means that all components of the metric tensor, i.e., $g_{00}\,, g_{01}\,, g_{22}\,, g_{33}$, are positive, and the line element is spacelike in this region. To have a non-spacelike line element, we must change the sign in front of the off-diagonal term and consider not infalling matter but outgoing radiation. We consider this case in the section below. \section{The Negative Energy in the Generalized Vaidya Spacetime} In the case of the generalized Vaidya spacetime, we will consider only the case when the type-II matter field satisfies the equation of state $P=\alpha \rho$ with $\alpha \in [0\,, 1]$; in this case we can satisfy all the energy conditions \eqref{eq:ws}, \eqref{eq:dom}. The result obtained in the previous section remains valid in the generalized Vaidya spacetime, i.e., if we have the naked singularity formation, then negative energy is impossible. Let us prove it. We write the line element in the following form: \begin{equation} ds^2=-\left(1-\frac{2M(v,r)}{r}\right ) dv^2+2\varepsilon dv dr+r^2 (d\theta^2+\sin^2\theta d\varphi^2) \,, \end{equation} where $\varepsilon=\pm 1$ corresponds to ingoing or outgoing matter, respectively. The energy and the angular momentum have the following form: \begin{equation} \label{eq:engv} \begin{split} E(v)=\left ( 1 - \frac{2M(v,r)}{r} \right ) \dot{v}-\varepsilon \dot{r} \,, \\ L=r^2\dot{\varphi} \,. \end{split} \end{equation} The radial geodesic equation is given by \begin{equation} \label{eq:radgv} \dot{r}^2=E(v)^2-\left ( 1 - \frac{2M(v,r)}{r}\right ) \left( \frac{L^2}{r^2} -\delta \right ) \,. \end{equation} Note that the radial equation does not depend on $\varepsilon$. Now we consider the case of the naked singularity formation; this means that $g_{00}<0$ everywhere. Let us start with the case $\varepsilon=+1$. In this case the first expression in \eqref{eq:engv} becomes \begin{equation} E(v)=\left(1-\frac{2M(v,r)}{r}\right )\dot{v}-\dot{r} \,. \end{equation} Here negative energy can occur only in the case $\dot{r}>0$, which gives \begin{equation} \label{eq:coin} \left(1-\frac{2M(v,r)}{r}\right )\dot{v}=E(v)+\sqrt{E^{2}(v)+g_{00}\left(\frac{L^2}{r^2}-\delta\right)} \,. \end{equation} As in the previous section, the expression under the root is less than $E^{2}(v)$ (due to the fact that $g_{00}<0$). Hence the right-hand side of the equation above is less than zero. However, the left-hand side, $-g_{00}\dot{v}$, must always be positive, and it can be negative only if we violate the main principle $\dot{v}>0$.
So in the case $\varepsilon=+1$ with naked singularity formation, particles with negative energy are forbidden. Now let us consider the second case, $\varepsilon=-1$. In this case we have: \begin{equation} E=-g_{00}\dot{v}+\dot{r}\,. \end{equation} Here the only possibility for negative energy to exist is the condition $\dot{r}<0$. It gives: \begin{equation} -g_{00}\dot{v}=E+\sqrt{E^2+g_{00}\left (\frac{L^2}{r^2}-\delta \right )} \,. \end{equation} This equation coincides with \eqref{eq:coin} above. It means that, in the general case, if the naked singularity forms then particles with negative energy are forbidden. Now let us consider the case $g_{00}>0$, i.e., black hole formation. Note that in this case $\varepsilon$ must be equal to $-1$, otherwise the line element will be spacelike, which is unphysical. The first expression of \eqref{eq:engv} becomes: \begin{equation} E=-g_{00}\dot{v}+\dot{r} \,. \end{equation} Here we have two possibilities, $\dot{r}>0$ and $\dot{r}<0$. Let us consider the first case. When $\dot{r}>0$ we have negative energy if: \begin{equation} -g_{00}\dot{v}=E(v)-\sqrt{E^2(v)+g_{00}\left (\frac{L^2}{r^2}-\delta \right )} \,. \end{equation} Now let us prove that there are no circular or elliptical orbits inside the apparent horizon. By `elliptical' we mean non-circular closed orbits. A circular orbit can exist if: \begin{equation} \begin{split} v_{eff}(r_0)=0 \,, \\ \frac{d v_{eff}(r)}{dr}\Big|_{r=r_0}=0 \,, \end{split} \end{equation} where $v_{eff}$ is the effective potential. In our case the effective potential has the form \begin{equation} v_{eff}=-\dot{r}^2=-\left[E^2+g_{00}\left ( \frac{L^2}{r^2}-\delta \right ) \right ] \,. \end{equation} From \eqref{eq:radgv} we can see that $E^2>0$, $g_{00}>0$ and $\frac{L^2}{r^2}-\delta >0$, so the expression in square brackets is strictly positive, and because $E(v)\neq 0$ the condition $v_{eff}(r_0)=0$ is impossible. Elliptical orbits also require the condition $v_{eff}(r_0)=0$. So we can conclude that in this case there are no circular or elliptical orbits for particles with negative energy. Now let us consider the second case, $\dot{r}<0$. In this case we have: \begin{equation} -g_{00}\dot{v}=E+\sqrt{E^2+g_{00}\left ( \frac{L^2}{r^2}-\delta \right ) } \,. \end{equation} Again, the expression under the root is bigger than $E^2$, due to $E <0 \,, E\neq 0$ and the positivity of $g_{00}$ and $\frac{L^2}{r^2}-\delta$. It means we cannot satisfy the condition $\dot{v}>0$, so in this case particles with negative energy are forbidden. \section{Discussion} The main goal of this article is to prove the absence of particles with negative energy in the case of the naked singularity formation in generalized Vaidya spacetime. We have considered one explicit example, Vaidya-Anti-de Sitter spacetime, and proved that in the case of naked singularity formation particles with negative energy are forbidden. Then we have considered the general case and also proved the absence of particles with negative energy in the case of the naked singularity formation. According to the Penrose process, particles with negative energy can exist in the ergoregion of a rotating black hole. Geodesics for such particles have been considered in~\cite{bib:grib}. It has been shown that this type of geodesic cannot cross the static limit, so these particles are forbidden in the region outside of the static limit. These geodesics have also been studied in the case of Schwarzschild spacetime~\cite{bib:pavlov}.
In this metric, particles with negative energy can exist only in the region inside the event horizon and are forbidden outside it. But in the case of the naked singularity formation there is no apparent horizon, and such particles could exist in our universe. Vaidya spacetime is one of the first examples of naked singularity formation. We have not considered the case of the usual Vaidya spacetime in this paper because it is a particular case of the generalized Vaidya spacetime, and the results obtained in this article are also valid for the usual Vaidya metric. Due to the off-diagonal term in generalized Vaidya spacetime, there is a possibility for the existence of particles with negative energy, but they can exist only in the region inside the apparent horizon. If the horizon is absent, then we can have such particles only if they move backwards in time, which is unphysical. There are many models of gravitational collapse whose result is a naked singularity (see, for example,~\cite{bib:joshi}). Physics demands that particles with negative energy cannot exist in the case of the naked singularity formation: if they existed, then a distant observer would be able to detect them, which is forbidden. In these models, if the metric does not have an off-diagonal term, then, due to the fact that $g_{00}<0$, particles with negative energy cannot exist. If the metric has an off-diagonal term $g_{0\alpha} \neq 0$, then such metrics should be investigated to prove the absence of such particles in the case of the naked singularity formation. The generalized Vaidya spacetime is one such metric, and, as we have proven, negative energy for this metric is forbidden in the case of the naked singularity formation. \textbf{Acknowledgments} The author thanks Professor Pankaj Joshi for scientific discussions. This work was supported by RFBR grant 15-02-06818-a. The work was performed within the SAO RAS state assignment in the part ``Conducting Fundamental Science Research''.
{ "timestamp": "2022-09-23T02:14:40", "yymm": "2209", "arxiv_id": "2209.10976", "language": "en", "url": "https://arxiv.org/abs/2209.10976" }
\section{Introduction} At the turn of the decade, the logistics of operations in hospitals and healthcare centers were severely disrupted worldwide by the COVID-19 pandemic. Its impact has been profound and damaging in all aspects of life, but in no context has it been more damaging than in healthcare: the safety and well-being of physicians and medical personnel, the supply chain of drugs and equipment, and the capacity of hospitals were all challenged by the pandemic. One of the most critical points for healthcare systems involved in the treatment process is the management of COVID-19 patients needing acute and respiratory care. Therefore, healthcare organizations are increasingly pushed to improve the efficiency of care processes and the resource management for this category of patients. One way to attain such improvement is to leverage historical data from the information systems of hospitals. These data can then be cleaned and analyzed to identify non-compliant behavior and inefficiencies in the care process. The aim of our work is to analyze the care process for the COVID-19 patients treated at the Intensive Care Unit (ICU) ward of the Uniklinik Aachen hospital in Germany, in order to identify divergences or anomalies within the process. To do so, we develop an executable process model representing the clinical guidelines for the treatment of COVID-19 patients and evaluate the adherence of the observed behavior (recorded by the information system of the hospital) to such guidelines. The STAKOB guidelines\footnote{\url{https://www.rki.de/DE/Content/Kommissionen/Stakob/Stakob_node.html}} (``Ständiger Arbeitskreis der Kompetenz- und Behandlungszentren für Krankheiten durch hochpathogene Erreger'', ``Permanent working group of competence and treatment centers for diseases caused by highly pathogenic agents'') are widely accepted and recognized protocols for the treatment of COVID-19, compiled and verified by a large consensus of medical scientists, physicians, and research institutions. They provide a comprehensive overview of recommendations on the management of hospitalized COVID-19 patients. The process model was derived from these guidelines and was validated by the physicians working in the intensive and intermediate care unit of the Uniklinik. We openly share the resulting BPMN model, as well as the related documentation. The conformance with the guidelines was assessed by using process mining techniques. The results provide hospital managers with information about the main deviations and/or anomalies in the process and their possible causes. In addition, they suggest improvements to make the process more compliant, cost-effective, and performant. The remainder of the paper is structured as follows. Section~\ref{sec:related} explores related work and sets the context of our research. Section~\ref{sec:method} lays out the methodology we employed in our case study. Section~\ref{sec:results} illustrates the results of our case study. Finally, Section~\ref{sec:conclusion} concludes the paper. \section{Related Work}\label{sec:related} The global effort to fight the pandemic has stimulated the adoption of new technologies in healthcare practice~\cite{golinelli2020adoption}. An area where this effect has been radical is the digitization of healthcare processes, both medical and administrative. Data recording and availability have improved during the years of the pandemic.
Stakeholders realized that data are a valuable information source to support the management and improvement of healthcare processes~\cite{munoz2022process}. In addition, the reliance of medical personnel on digital support systems is now much more significant. Fields of science that have recently shown to be particularly promising when applied to healthcare operations are the process sciences, specifically Business Process Management (BPM) and process mining~\cite{munoz2022process}. This is mainly due to the characteristics of healthcare processes, which are complex and flexible and involve a multidisciplinary team~\cite{munoz2022process,rebuge2012business}. In particular, process mining has emerged as a suitable approach to analyze, discover, improve, and manage real-life and complex processes, by extracting knowledge from event logs~\cite{van2016process}. Process scientists have gathered event data on the process of treatment for COVID-19 and leveraged process mining techniques to obtain insights on various aspects of the healthcare process~\cite{pegoraro2022analyzing,augusto2022process,dos2021process} or on how other business processes have been impacted by the disruption caused by COVID-19~\cite{DBLP:conf/bpm/ZabkaBA21}. Among process mining techniques, conformance checking aims to measure the adherence of a (discovered or known) process model to a given set of data, or vice-versa~\cite{gatta2019clinical}. Conformance checking helps physicians to understand major deviations from clinical guidelines, as well as to identify areas for improvement in practices and protocols~\cite{munoz2022process}. Some studies have applied these techniques in different healthcare contexts, such as oncology~\cite{rojas2016process}. However, no studies have addressed the compliance analysis of the care process of COVID-19 patients in a real-life scenario. To do so, it is essential to have a normative model, reflecting clinical guidelines and protocols, that can be interpreted by machines. Currently, executable process models representing the guidelines for the treatment of COVID-19 patients are still absent and needed, given the uncertainty and variability of the disease. \section{Methodology}\label{sec:method} The methodology of this study consists of the following three main steps, also shown in Figure~\ref{fig:metodo}: \begin{itemize} \item Development of a normative model based on the STAKOB guidelines. A normative model is a process model that reflects and implements rules, guidelines, and policies of the process, mandated by process owners or other supervisory bodies. This phase involves (i) the analysis of the STAKOB documentation and interviews with ICU physicians, (ii) the development of the model from the guidelines, and (iii) the validation of the model with ICU physicians. \item Data collection and preparation, which involves the extraction and preprocessing of event data gathered from the information system of the hospital. The event log is refined by removing duplicate and irrelevant data, handling missing data, and detecting outliers to ensure data reliability. \item Conformance checking, which involves the use of conformance checking techniques to compare the normative model with the event logs for the three COVID-19 waves and determine whether the behavior observed in practice conforms to the documented process. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{img/metodo} \caption{Case study methodology.
Our work measures the deviation between the expected and real behavior of the COVID-19 treatment process, respectively represented by the STAKOB guidelines and by the COVAS dataset.} \label{fig:metodo} \end{figure} \subsection{Development of a normative model based on the STAKOB guidelines} The STAKOB guidelines provide information on the disease and its related symptoms, and describe the diagnostic and treatment activities to be performed on COVID-19 patients and the therapies to be administered. The treatment of COVID-19 patients requires a multi-disciplinary approach: in addition to intensive care physicians and nurses, specialists in infectious diseases and infection control must also be part of the team~\cite{malin2022key}. The guidelines direct the operations of the medical team involved in the inpatient care of COVID-19 patients, but are also intended to provide information for individuals and/or organizations directly involved in this topic. To make the guidelines interpretable by machines---and thus suitable for conformance checking---we developed a normative process model of the STAKOB guidelines in the BPMN language using the Signavio tool\footnote{\url{https://www.signavio.com/}}. The choice of the BPMN standard is due to its ability to be executable and, at the same time, easy to understand by physicians and practitioners. The BPMN model of the STAKOB guidelines was validated using a qualitative approach. Specifically, the model was presented to and discussed with three physicians working in the intensive and intermediate care unit of the Uniklinik over three meetings. During the meetings, several refinements were applied to the model, until it was approved by all. \subsection{Data Collection and Preparation} We collected and pre-processed data of COVID-19 patients monitored in the context of the COVID-19 Aachen Study (COVAS). The log contains event information regarding COVID-19 patients treated by the Uniklinik between January 2020 and June 2021. Events (patient admittance, symptoms, treatments, drug administration) are labeled with the date, resulting in timestamps with a coarseness at the day level. Data were gathered from the information system of the hospital. The initial database consisted of 269 cases, 33 activity labels, 210 variants, and 3542 events. Before the analysis, we refined the raw event log to guarantee its quality. Data cleaning and preparation were executed with Python and included: (i) removal of outliers and incomplete cases based on the number of hospitalization days, (ii) abstraction of less significant activities, and (iii) filtering of infrequent variants. As an example, we removed the cases with a duration of more than 70 days: this value was validated with the doctors, according to whom durations longer than 70 days may be due to registration delays. In the end, the refined event log consisted of 187 patient cases, 32 activities, 135 variants, and 2397 events. To evaluate the adherence of the COVAS dataset to the normative model during the three COVID-19 waves, we split the dataset into three sub-event logs. As illustrated in the next sections, this is done with the goal of examining how treatment operations for COVID-19 change between infection waves with respect to the adherence to the STAKOB guidelines. As shown by the dotted chart of the event log in Figure~\ref{fig:3onde}, the three waves can be clearly identified. Such a choice of wave separation was also supported by the literature~\cite{dongelmans2022characteristics}.
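For reproducibility, the preprocessing and conformance-checking pipeline described in this section can be sketched with the open-source \texttt{pm4py} Python library. This is only an illustrative sketch, not the pipeline used in the study (which relied on custom Python scripts and the ProM replay plug-in); file names, column names, and the wave boundary dates are placeholders.
\begin{verbatim}
import pandas as pd
import pm4py

# Load the raw event log (placeholder file and column names).
df = pd.read_csv("covas_events.csv", parse_dates=["timestamp"])
log = pm4py.format_dataframe(df, case_id="patient_id",
                             activity_key="activity",
                             timestamp_key="timestamp")

# (i) Outlier removal: drop cases hospitalized for more than 70 days.
dur = log.groupby("case:concept:name")["time:timestamp"].agg(
    lambda t: (t.max() - t.min()).days)
log = log[log["case:concept:name"].isin(dur[dur <= 70].index)]

# Split into the three waves by the first event of each case
# (placeholder boundary dates, to be read off the dotted chart).
first = log.groupby("case:concept:name")["time:timestamp"] \
           .transform("min")
waves = {"wave1": log[first < "2020-08-01"],
         "wave2": log[(first >= "2020-08-01") & (first < "2021-02-01")],
         "wave3": log[first >= "2021-02-01"]}

# Conformance checking: convert the STAKOB BPMN model to a Petri
# net and compute alignment-based fitness for each sub-log.
net, im, fm = pm4py.convert_to_petri_net(pm4py.read_bpmn("stakob.bpmn"))
for name, sublog in waves.items():
    print(name, pm4py.fitness_alignments(sublog, net, im, fm))
\end{verbatim}
Per-activity diagnostics comparable to the moves on log/model reported in Section~\ref{sec:results} could then be aggregated from the per-trace output of \texttt{pm4py.conformance\_diagnostics\_alignments}.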
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{img/dottedchart.pdf} \caption{Dotted chart of the COVAS event log. The cases are sorted by the first recorded event, which is highlighted in orange. Every blue dot corresponds to a recorded event. The vertical dashed lines separate the first, second, and third COVID-19 waves, based on the knowledge of physicians.} \label{fig:3onde} \end{figure} The event log of the first wave contains 106 cases and 1410 events. The average duration of the process is 25.38 days. The log of the second wave contains 59 cases and 892 events, with an average duration of 22.42 days. The log of the third wave contains 22 cases and 282 events, with an average duration of 16.38 days. \subsection{Conformance Checking} For each sub-event log, we applied conformance checking techniques to identify deviations within the process. Specifically, we utilized the plug-in ``Replay a Log on Petri Net for Conformance Analysis'' as implemented in ProM, with standard parameter settings. This choice is due to the fact that alignment-based techniques can exactly pinpoint where deviations are observed~\cite{van2016process,adriansyah2010towards}. The alignment-based technique allows us to estimate a global conformance measure, which quantifies the overall conformance of the model and event log, and local diagnostics, which identify points where the model and event log do not agree. In the first case, we calculated fitness, which measures ``the proportion of behavior in the event log possible according to the model''~\cite{van2016process}. In the second case, we estimated for each activity within the model the following~\cite{dixit2017enabling}: \begin{itemize} \item the number of ``moves on log'': occurrences of an activity in the trace that cannot be mapped to any enabled activity in the process model. \item the number of ``moves on model'': occurrences of an enabled activity in the process model that cannot be mapped to any event in the trace sequence. \item the number of ``synchronous moves'': occurrences of an activity belonging to a trace that can be mapped to occurrences of an enabled activity in the process model. \end{itemize} \section{Results}\label{sec:results} In this section, we present the results from the development of the normative model and the conformance checking analysis. \subsection{Normative Model} The developed normative model consists of 3 sub-processes, 23 activities, and approximately 36 gateways (XOR, AND, and OR). Figure~\ref{fig:stakob1} shows a section of the model. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{img/stakob1} \caption{A section of the STAKOB COVID-19 model, depicting some activities related to the ICU operations for COVID-19 patients.} \label{fig:stakob1} \end{figure} The model clearly underlines the fact that the treatment of hospitalized patients with COVID-19 is complex and is characterized by several pursuable pathways (see the presence of XOR and OR gateways). It also requires the collaboration of different departments and specialists. In more detail, the care treatment includes an antibiotic/drug therapy phase and, if necessary, an oxygenation phase. At this point, if the patient’s health condition deteriorates, the transfer to the ICU is planned (partially shown in Figure~\ref{fig:stakob1}). In the ICU, the patient may undergo mechanical ventilation, ECMO (ExtraCorporeal Membrane Oxygenation) or pronation in addition to the medical therapy.
A section of the sub-process showing the respiratory support for the patient can be seen in Figure~\ref{fig:stakob2}. Recovery and subsequent discharge are confirmed by two negative COVID-19 tests. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{img/stakob2} \caption{A section of the STAKOB COVID-19 model, depicting some activities related to the respiration support operations for COVID-19 patients.} \label{fig:stakob2} \end{figure} The full model is openly available on GitHub\footnote{\url{https://github.com/marcopegoraro/pm-healthcare/tree/main/stakob}}. It is rendered in the XML export format of the BPMN standard\footnote{\url{https://www.bpmn.org/}}. The folder also contains a PDF depicting the entire model, a license declaration, and an addendum describing the model schematic in more detail. \subsection{Conformance Checking Results} \subsubsection{COVID-19 First Wave Results} For the first wave, the fitness between the model and the data is 0.69. This suggests that some trace variants are not reproduced by the model, which may be due to the variability of the process (health conditions vary from patient to patient). In addition, the coarseness of the timestamps in the dataset has an impact: events are recorded at the date level, so the order in which they are recorded may vary in some instances. Table~\ref{tab:wave1} shows the results of the conformance checking for the first wave. Specifically, for each activity, it shows the misalignments between the normative model and the event log. \begin{table}[t] \centering \caption{Results of conformance checking alignments with the STAKOB model for the patient sub-log corresponding to the first COVID-19 wave. For each activity in the log, we show the count of moves on log, moves on model, and synchronous moves.} \label{tab:wave1} \scriptsize \begin{tabular}{|l|c|c|c||l|c|c|c|} \hline \textbf{Activity} & \textbf{\vbox{\hbox{Move on}\hbox{log}}} & \textbf{\vbox{\hbox{Syncro}\hbox{move}}} & \textbf{\vbox{\hbox{Move on}\hbox{model}}} & \textbf{Activity} & \textbf{\vbox{\hbox{Move on}\hbox{log}}} & \textbf{\vbox{\hbox{Syncro}\hbox{move}}} & \textbf{\vbox{\hbox{Move on}\hbox{model}}} \\ \hline Symptobegin & 0 & 106 & 0 & Ventilation Start & 33 & 9 & 2 \\ \hline Hospitalization & 1 & 105 & 1 & Ventilation End & 35 & 8 & 6 \\ \hline UKA Admission & 12 & 96 & 10 & NMB Start & 4 & 11 & 0 \\ \hline Abx Start & 2 & 58 & 0 & NMB End & 4 & 11 & 0 \\ \hline Abx End & 2 & 58 & 0 & CVVH Start & 16 & 11 & 0 \\ \hline Start Oxygen & 22 & 85 & 0 & CVVH End & 16 & 11 & 0 \\ \hline Remdesivir Start & 0 & 3 & 0 & Prone Start & 25 & 10 & 0 \\ \hline Remdesivir End & 0 & 3 & 0 & Prone End & 25 & 10 & 0 \\ \hline Admission ICU & 35 & 20 & 0 & ECMO Start & 10 & 0 & 0 \\ \hline HiFlo Start & 0 & 1 & 19 & ECMO End & 10 & 0 & 0 \\ \hline Hiflo End & 0 & 1 & 19 & End Of Fever & 22 & 53 & 53 \\ \hline NIV Start & 6 & 5 & 9 & Discharge ICU & 48 & 6 & 14 \\ \hline NIV End & 10 & 5 & 9 & Last Oxygen Day & 39 & 53 & 53 \\ \hline iNO Start & 13 & 10 & 1 & Discharge dead & 0 & 33 & 0 \\ \hline iNO End & 13 & 10 & 1 & Discharge alive & 0 & 73 & 0 \\ \hline \end{tabular} \normalsize \end{table} Several misalignments can be observed. In particular: \begin{itemize} \item The \emph{HiFlo Start} and \emph{HiFlo End} activities (corresponding to high-flow oxygenation) present 19 moves on model and one synchronous move. This means that, although this activity is required by the guidelines, it is performed in only one case.
This indicates that, given the patient's condition, the physicians may have seen fit to skip this treatment. \item There are several tasks that have both moves on model and moves on log. This means that these tasks often deviate from the normative model (in some cases they are present in the model but not in reality, in others vice-versa). This may be due to the variability of patients' conditions and the lack of familiarity with COVID-19 and its standardized treatment, since these data were recorded in the early days of the pandemic. For example, the guidelines suggest that the \emph{Discharge ICU} should occur after ventilation and pronation, while in reality, in some cases, it occurs before. Thus, many activities occur while the patient is hospitalized, but not yet formally admitted to the ICU. \item Some activities present only moves on log and synchronous moves, i.e., they are present in reality but at times not in the normative model. This means that they are performed at different times than the guidelines suggest. For example, \emph{Admission ICU} may be anticipated because of a particularly critical course not foreseen by the physicians, or be delayed because no space in the ICU is available at that time; or \emph{Prone End} (the interruption of the treatment of pronation) may be brought forward because of negative effects on the patient, e.g., the appearance of pressure sores. Alternatively, pronation may be delayed because the patient has not achieved optimal arterial blood oxygenation. \end{itemize} \subsubsection{COVID-19 Second Wave Results} For the log of the second wave, the fitness with the STAKOB model is 0.66. Table~\ref{tab:wave2} shows the results of conformance checking for the second wave. \begin{table}[t] \centering \caption{Results of conformance checking alignments with the STAKOB model for the patient sub-log corresponding to the second COVID-19 wave.
For each activity in the log, we show the count of moves on log, moves on model, and synchronous moves.} \label{tab:wave2} \begin{adjustbox}{width=\columnwidth, center} \begin{tabular}{|l|c|c|c||l|c|c|c|} \hline \textbf{Activity} & \textbf{\vbox{\hbox{Move on}\hbox{log}}} & \textbf{\vbox{\hbox{Syncro}\hbox{move}}} & \textbf{\vbox{\hbox{Move on}\hbox{model}}} & \textbf{Activity} & \textbf{\vbox{\hbox{Move on}\hbox{log}}} & \textbf{\vbox{\hbox{Syncro}\hbox{move}}} & \textbf{\vbox{\hbox{Move on}\hbox{model}}} \\ \hline Symptobegin & 0 & 59 & 0 & Dexamethasone End & 24 & 14 & 1 \\ \hline Hospitalization & 0 & 59 & 0 & Ventilation Start & 11 & 8 & 1 \\ \hline UKA Admission & 8 & 50 & 9 & Ventilation End & 11 & 8 & 1 \\ \hline Abx Start & 0 & 29 & 0 & NMB Start & 2 & 9 & 0 \\ \hline Abx End & 0 & 29 & 0 & NMB End & 2 & 9 & 0 \\ \hline Start Oxygen & 5 & 54 & 0 & CVVH Start & 7 & 8 & 1 \\ \hline Remdesivir Start & 8 & 12 & 0 & CVVH End & 7 & 8 & 1 \\ \hline Remdesivir End & 8 & 12 & 0 & Prone Start & 8 & 8 & 0 \\ \hline Admission ICU & 8 & 15 & 1 & Prone End & 8 & 8 & 0 \\ \hline HiFlo Start & 0 & 2 & 14 & ECMO Start & 7 & 0 & 0 \\ \hline Hiflo End & 0 & 2 & 14 & ECMO End & 7 & 0 & 0 \\ \hline NIV Start & 6 & 8 & 5 & End Of Fever & 27 & 13 & 43 \\ \hline NIV End & 8 & 5 & 8 & Discharge ICU & 20 & 2 & 14 \\ \hline iNO Start & 2 & 9 & 0 & Last Oxygen Day & 19 & 36 & 23 \\ \hline iNO End & 2 & 9 & 0 & Discharge dead & 0 & 17 & 0 \\ \hline Dexamethasone Start & 23 & 15 & 0 & Discharge alive & 0 & 42 & 0 \\ \hline \end{tabular} \end{adjustbox} \end{table} In the second wave, \emph{Hospitalization} is only performed after the onset of symptoms, as suggested by the guidelines. However, deviations are also encountered. As in the first wave, the most affected activities are \emph{End Of Fever}, \emph{Admission ICU} and \emph{Discharge ICU}, and \emph{Last Oxygen Day}, which have both moves on log and moves on model. This may be related to the mutability of the disease, which is difficult to manage with common protocols, and to the variability of the patients' conditions. Compared to the first wave, the use of drugs has changed. In particular, a new drug, Dexamethasone, is administered, and the use of Remdesivir is increased. The administration of both drugs shows move-on-log mismatches, indicating that the physicians needed to administer such treatments more frequently than recommended. The former is also used in patients who do not require intensive care, contrary to what the guidelines suggest. The latter, which is preferred for non-critical hospitalized patients, is also used in intensive care. In addition, high flow oxygenation is rarely performed here, despite being included in the guidelines. \subsubsection{COVID-19 Third Wave Results} The fitness between the log and the model is 0.69 for the third COVID-19 wave. Table~\ref{tab:wave3} shows the results of conformance checking for the third wave. \begin{table}[] \centering \caption{Results of conformance checking alignments with the STAKOB model for the patient sub-log corresponding to the third COVID-19 wave.
For each activity in the log, we show the count of moves on log, moves on model, and synchronous moves.} \label{tab:wave3} \begin{adjustbox}{width=\columnwidth, center} \begin{tabular}{|l|c|c|c||l|c|c|c|} \hline \textbf{Activity} & \textbf{\vbox{\hbox{Move on}\hbox{log}}} & \textbf{\vbox{\hbox{Syncro}\hbox{move}}} & \textbf{\vbox{\hbox{Move on}\hbox{model}}} & \textbf{Activity} & \textbf{\vbox{\hbox{Move on}\hbox{log}}} & \textbf{\vbox{\hbox{Syncro}\hbox{move}}} & \textbf{\vbox{\hbox{Move on}\hbox{model}}} \\ \hline Symptobegin & 0 & 22 & 0 & Dexamethasone End & 8 & 4 & 0 \\ \hline Hospitalization & 2 & 19 & 3 & Ventilation Start & 1 & 9 & 1 \\ \hline UKA Admission & 0 & 22 & 0 & Ventilation End & 1 & 9 & 1 \\ \hline Abx Start & 0 & 8 & 0 & NMB Start & 0 & 1 & 0 \\ \hline Abx End & 0 & 8 & 0 & NMB End & 0 & 1 & 0 \\ \hline Start Oxygen & 0 & 38 & 0 & CVVH Start & 2 & 1 & 0 \\ \hline Remdesivir Start & 0 & 1 & 0 & CVVH End & 2 & 1 & 0 \\ \hline Remdesivir End & 0 & 1 & 0 & Prone Start & 1 & 1 & 0 \\ \hline Admission ICU & 1 & 2 & 1 & Prone End & 1 & 1 & 0 \\ \hline HiFlo Start & 0 & 2 & 1 & ECMO Start & 0 & 0 & 0 \\ \hline Hiflo End & 0 & 2 & 1 & ECMO End & 0 & 0 & 0 \\ \hline NIV Start & 4 & 1 & 0 & End Of Fever & 11 & 6 & 16 \\ \hline NIV End & 5 & 0 & 1 & Discharge ICU & 3 & 2 & 1 \\ \hline iNO Start & 1 & 1 & 0 & Last Oxygen Day & 3 & 17 & 5 \\ \hline iNO End & 1 & 1 & 0 & Discharge dead & 0 & 3 & 0 \\ \hline Dexamethasone Start & 9 & 3 & 1 & Discharge alive & 0 & 19 & 0 \\ \hline \end{tabular} \end{adjustbox} \end{table} The physicians' experience and familiarity with the disease appear to have increased. However, many of the misaligned activities show behavior similar to that observed during past waves. Note that the ECMO treatment has zero values in all columns. This is because it is not performed in the third wave (unlike the first two). Since ECMO is the most invasive oxygenation treatment, this may be due to the fact that the severity of the patients' conditions has decreased. To summarize, alignment-based techniques make it possible to detect and analyze process deviations, providing useful insights for physicians. Furthermore, in the three waves, most activities remained misaligned, while some moved closer to the guidelines' suggestions. This shows that the process is highly variable and specific care pathways are required for each patient, which do not always coincide with those stated in the guidelines. \section{Conclusion}\label{sec:conclusion} Our work aimed to analyze the care process for COVID-19 patients, bringing to light deviations from the clinical guidelines. Specifically, the work proposed a normative model based on the STAKOB guidelines, which can be interpreted by software tools (e.g., process mining software). The BPMN model is openly accessible to any analyst, and can also be loaded into any commercial software supporting the BPMN standard, such as Celonis and Signavio. This addresses the need for computer-interpretable and usable guidelines in healthcare, particularly for the treatment of COVID-19 patients~\cite{oliart2022we}. In addition, the work provided physicians with guidance on the management of COVID-19 patients, highlighting deviations and critical points in the three infection waves. The contributions of our work are: \begin{itemize} \item One of the first attempts to apply a process mining-based methodology for the analysis of process deviations in a real, complex, and uncertain healthcare context, like the recent and ongoing COVID-19 pandemic.
\item The development of a normative model that can advise physicians in the treatment of COVID-19 patients by providing specific guidelines and procedures to follow. This is particularly helpful in dealing with the uncertainty and complexity of healthcare operations brought about by the pandemic. In addition, the model can be used as input for the development of a decision support system, which alerts in real time in case of violations of the guidelines. \item The extraction of valuable insights for physicians regarding the main deviations and their causes in the COVID-19 patient care process. This knowledge is crucial for improving the process and ensuring service quality and patient satisfaction, e.g., better management of drug administration (when to administer and how often), more targeted execution of certain treatments, such as pronation (who to treat and when to do it), and execution of treatments suggested by the guidelines but never performed in reality, which can enhance the care pathway and reduce hospitalization time (such as high flow oxygenation). \end{itemize} The work presents some open questions and directions for future research. The limited sample size, especially for the third wave, and the coarseness of the timestamps in the dataset may impact the results. Furthermore, the physicians' consensus on both the validity of the STAKOB model and the interpretation of the conformance checking results can certainly be broadened, by soliciting the expert opinion of a larger group of medics. As future developments, we plan to: (i) extend the research and collect new data from other German hospitals, in order to generalize the results and identify best practices in the treatment of COVID-19 patients; (ii) improve the validation of results; (iii) actively involve physicians in the analysis of deviations, using qualitative approaches such as interviews and field observations; (iv) conduct a more extensive comparative analysis based on process mining, including a structural model comparison, concept drift, and performance analysis. \bibliographystyle{splncs04}
{ "timestamp": "2022-09-23T02:12:08", "yymm": "2209", "arxiv_id": "2209.10897", "language": "en", "url": "https://arxiv.org/abs/2209.10897" }
\begin{abstract} Quasi-periodic pulsations (QPPs), which carry time features and plasma characteristics of flare emissions, are frequently observed in light curves of solar/stellar flares. In this paper, we investigate non-stationary QPPs associated with recurrent jets during an M1.2 flare on 2022 July 14. A quasi-period of $\sim$45$\pm$10~s, determined by the wavelet transform technique, is simultaneously identified at wavelengths of soft/hard X-ray and microwave emissions, which are recorded by the Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor, Fermi, and the Nobeyama Radio Polarimeters, respectively. A group of recurrent jets with an intermittent cadence of about 45$\pm$10~s is found in Atmospheric Imaging Assembly (AIA) image series at 304~{\AA}, but they occur 180~s earlier than the flare QPP. All observational facts suggest that the flare QPP could be excited by recurrent jets, and they should be associated with nonthermal electrons that are periodically accelerated by a repeated energy release process, like repetitive magnetic reconnection. Moreover, the same quasi-period is discovered at double footpoints connected by a hot flare loop in AIA~94~{\AA}, and the phase speed is measured to be $\sim$1420~km~s$^{-1}$. Based on the differential emission measure, the average temperatures, number densities, and magnetic field strengths at the loop top and footpoint are estimated to be $\sim$7.7/6.7~MK, $\sim$7.5/3.6$\times$10$^{10}$~cm$^{-3}$, and $\sim$143/99~G, respectively. Our measurements indicate that the 45-s QPP is probably modulated by the kink-mode wave of the flare loop. \tiny \fontsize{8}{11}\helveticabold {\section{Keywords:} Sun: flares, Sun: oscillations, Sun: UV emission, Sun: X-ray emission, Sun: radio emission, MHD waves} \end{abstract} \section{Introduction} Quasi-periodic pulsations (QPPs) observed in solar/stellar flares usually appear as temporal intensity oscillations of electromagnetic radiation \citep[see,][and references therein]{Kupriyanova20, Zimovets21}. They are frequently identified as a series of repetitive but irregular pulsations with anharmonic and symmetric triangular shapes, referred to as non-stationary QPPs \citep[e.g.,][]{Nakariakov19}. The observation of QPPs has been reported in flare time series over a broad range of wavelengths, ranging from radio/microwave emissions through ultraviolet (UV) and white-light wavelengths to soft and hard X-ray (SXR/HXR) channels, and even in the $\gamma$-ray emission \citep[e.g.,][]{Nakariakov10a,Tan16,Milligan17,Li17a,Kashapova21,Kolotkov21,Lu21,Doyle22,Smith22,Zhang22b}. Generally, a typical QPP should consist of at least three successive and complete pulsations. There is no reason to speak of QPP behavior if there are only one or two pulsations, which might be just a coincidence, for instance, a similar time interval between successive pulsations occurring by chance \citep{Nakariakov19}. The characteristic time of all pulsations in one QPP event is expected to be the same, and can be regarded as the period. However, the characteristic time of these pulsations could vary, indicating the irregular nature of flare QPPs. Thus, they often show a variation of quasi-periods \citep[e.g.,][]{Nakariakov18}. In observations, the quasi-periods of flare QPPs are found to vary from a fraction of a second to a few dozen minutes \citep{Tan10,Yuan13,Ning14,Meszarosova16,Kolotkov18,Hayes20,Karlicky20,Hong21,Bate22}.
It has been accepted that the quasi-periods of flare QPPs are often related to their generation mechanisms \citep{Kupriyanova20}. The short-period (i.e., $<$1~s) QPPs, which are usually observed in radio/microwave emissions, are often driven by the dynamic interaction between plasma waves and energetic particles in complex magnetic structures \citep{Nakariakov18,Yu19,Karlicky20}. The flare QPPs with long periods on the order of seconds and minutes, which can be detected at almost all wavelengths, are frequently interpreted in terms of magnetohydrodynamic (MHD) waves in slow modes \citep[e.g.,][]{Wang21}, kink modes \citep[e.g.,][]{Nakariakov21}, and sausage modes \citep[e.g.,][]{Lib20}. In such cases, the flare QPPs with periods larger than 1~minute could be associated with slow sausage waves \citep{Sadeghi19,Gao21}, global kink waves \citep{Duckenfield19,Gao22}, and slow magnetoacoustic waves \citep{Wang11,Ofman12,Yuan15,Prasad22}; while those with periods on the order of seconds are often explained as fast sausage or kink waves \citep{Inglis09,Guo21,Kashapova21}, depending on whether the plasma loop is compressible or incompressible \citep{Yuan16,Nakariakov20}. Long-period QPPs might also be associated with repetitive magnetic reconnection \citep{Thurgood19,Karampelas22}. The idea is that the energy release via intermittent magnetic reconnection is repeated, which can periodically accelerate nonthermal electrons. Thus, it is often used to explain the QPPs seen in the impulsive phase of solar flares \citep[e.g.,][]{Yuan19,Li21}. Moreover, this reconnection process could either be spontaneous, such as `magnetic dripping' \citep[e.g.,][]{Nakariakov10} and the `magnetic tuning fork' \citep[e.g.,][]{Takasao16}, or it might be triggered by an external MHD wave \citep{Foullon05,Nakariakov18}. Solar jets, which often show columnar and beam-like structures, are usually associated with solar flares, type III radio bursts, and filament eruptions \citep{Shibata07,Shen11,Paraschiv15,Raouafi16}. They can be observed everywhere on the Sun, such as in active regions, quiet-Sun regions, and coronal holes \citep{Brueckner83,Shen21}. Recurrent jets, which repeatedly eject plasma from the same base source \citep{Tian18,Lu19}, have become a topic of particular interest because they could be associated with flare QPPs \citep{Ning22,Shi22}, fast-mode EUV waves, and quasi-periodic fast-propagating (QFP) magnetosonic waves \citep{Shen18a,Shen18b}. The observed QFP waves often consist of multiple concentric and coherent wavefronts, termed `QFP wave trains', and they are produced successively within periods of dozens of seconds or a few minutes near the epicenter of the accompanying flares \citep{Shen22,Shen22b}. Sometimes, the quasi-periods of QFP wave trains are quite similar to those of the associated flare QPPs, implying that the two different phenomena might manifest two different aspects of the same physical process, i.e., the pulsed energy release via repeated magnetic reconnection \citep{Liu11,Shen12b,Shen13,Shen18,Kolotkov18,Zhou22}. On the other hand, some quasi-periods of QFP wave trains are completely unassociated with those of flare QPPs, indicating that the periodicity of QFP wave trains is diverse and may not always be associated with flare QPPs \citep{Shen18c,Shen19}. Therefore, the relationship between flare QPPs and QFP wave trains still needs in-depth investigation \citep{Shen22}.
The observed QPPs provide the time features and plasma characteristics of flare emissions, which are helpful for diagnosing plasma properties on the Sun or Sun-like stars, especially at the flare location \citep{Pugh19,Zimovets21}. When considering that flare QPPs are modulated by MHD waves, they might potentially contribute to coronal heating through the dissipation of those waves \citep{Reale19,Van20,White21,Li22a}. Moreover, they allow us to map coronal magnetic fields and estimate plasma parameters in the corona, a technique known as `coronal seismology' \citep[e.g.,][]{Yang20,Anfinogentov22}. In this paper, we report multi-wavelength observations of a flare QPP associated with recurrent jets; the flare QPP is also found at two opposite footpoints connected by a hot flare loop seen in AIA~94~{\AA} images. Our measurements suggest that the flare QPP could be interpreted in terms of the kink-mode MHD wave of the flare loop. \section{Observations} On 2022 July 14, a solar flare occurred in the active region NOAA 13058 (N15E81), which was close to the solar limb, and erupted after a group of recurrent jets. It was simultaneously observed by several space-based telescopes, such as the Geostationary Operational Environmental Satellite X-ray Sensor \citep[GOES/XRS;][]{Hanser96}, the Fermi Gamma-ray Burst Monitor \citep[GBM;][]{Meegan09}, the Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor \citep[GECAM;][]{Xiao22}, the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen12} and the Extreme Ultraviolet Variability Experiment \citep[EVE;][]{Woods12} on board the Solar Dynamics Observatory \citep[SDO;][]{Pesnell12}, and by the ground-based radio telescope, i.e., the Nobeyama Radio Polarimeters \citep[NoRP;][]{Nakajima85}, as seen in Table~\ref{tab1} and Figure~\ref{over}. It should be pointed out that all light curves except for GOES have been multiplied by a factor, so that they can be well displayed in the same window. GOES/XRS \citep{Hanser96,Lotoaniu17} monitors the full-disk solar irradiance in SXR channels with a time cadence of 1~s, particularly the flare emission, as shown by the black line in Figure~\ref{over}~(A). According to the GOES~1$-$8~{\AA} flux, the solar flare was identified as an M1.2 class: it began at $\sim$04:22~UT, reached its maximum at about 04:31~UT, and stopped at $\sim$04:40~UT. The gold line shows the derivative flux at GOES~1$-$8~{\AA}. The EUV SpectroPhotometer \citep[ESP;][]{Didkovsky12} of SDO/EVE also provides the SXR flux at 1$-$70~{\AA}, with a time cadence of 0.25~s, as indicated by the red line. The SXR light curves observed by GOES and ESP match well with each other, and they both show double peaks before the onset time of the M1.2 flare, i.e., from 04:16~UT to 04:20~UT, as indicated by the black arrow. They might be a candidate for the flare precursor. Fermi/GBM provides the solar irradiance integrated over the whole Sun in both SXR and HXR channels. The temporal cadence is commonly 0.256~s, but it becomes 0.064~s automatically during solar flares \citep{Meegan09}. Thus, we first interpolate the light curves to a uniform temporal resolution of 0.256~s before analysing them; such a temporal resolution is sufficient to study a flare QPP with a quasi-period of tens of seconds \citep[cf.][]{Li15,Ning17}. Figure~\ref{over}~(A) draws the Fermi/GBM light curve at 11.5$-$102.4~keV, as shown by the cyan curve, which is measured by the n5 detector.
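As a minimal sketch of this resampling step (variable names are ours; this is not part of the Fermi/GBM software), a mixed-cadence flux can be linearly interpolated onto a uniform 0.256-s grid as follows:
\begin{verbatim}
import numpy as np

def to_uniform_cadence(t, flux, dt=0.256):
    """Linearly interpolate a light curve with mixed cadence
    (e.g., 0.256-s and 0.064-s segments) onto a uniform grid."""
    t_uniform = np.arange(t.min(), t.max(), dt)
    return t_uniform, np.interp(t_uniform, t, flux)

# Synthetic example: 0.256-s sampling that switches to 0.064 s
# during the flare, as in the Fermi/GBM observations.
t = np.concatenate([np.arange(0.0, 300.0, 0.256),
                    np.arange(300.0, 600.0, 0.064)])
flux = 1.0 + 0.1 * np.sin(2 * np.pi * t / 45.0)  # 45-s modulation
t_u, flux_u = to_uniform_cadence(t, flux)
\end{verbatim}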
GECAM is designed to detect and localize high-energy transients, such as gamma-ray bursts and solar flares. It consists of 25 gamma-ray detectors (GRDs), which are used to detect the X-ray and $\gamma$-ray radiation \citep{Xiao22}. Figure~\ref{over}~(A) shows the solar flux at 25$-$120~keV (blue) during the M1.2 flare with a uniform temporal cadence of 0.5~s; the GRD numbers and the averaged incident angles for each GRD used in this study are listed in Table~\ref{tab2}. Both the Fermi/GBM and GECAM/GRD light curves show double peaks between about 04:16~UT and 04:20~UT, similar to what is seen in the SXR fluxes recorded by GOES/XRS and SDO/EVE/ESP. The M1.2 flare was also observed by NoRP in the radio/microwave emission with a temporal cadence of 1~s, as shown by the magenta line in Figure~\ref{over}~(A). It matches well with the GOES~1-8~{\AA} derivative flux, indicating the Neupert effect during the M1.2 flare \citep[cf.][]{Neupert68}. The microwave flux also reveals several successive sub-peaks during the flare impulsive phase, similar to what is observed in the Fermi (cyan) and GECAM (blue) light curves, which could be regarded as QPPs. On the other hand, we do not see the small peak before the M1.2 flare in the NoRP light curve, so it is impossible to identify a flare precursor there. Fortunately, SDO/AIA provides full-disk, spatially resolved maps in seven EUV and two UV wavelength bands. The spatial scale of each AIA map is 0.6$''$~per~pixel, and the temporal cadence is 12~s for the EUV maps. Before analysing, all the AIA maps have been preprocessed by `aia\_prep.pro' \citep{Lemen12}. Figure~\ref{over}~(B-C) presents AIA maps with a sub-field of about 90$''$$\times$90$''$ at 304~{\AA} and 94~{\AA}, respectively. A group of jets can be seen in the AIA~304~{\AA} map (see also the animation.mp4), as outlined by two cyan lines. In order to cover the bulk of these jets as much as possible during their lifetime, we used a constant width of about 15$''$. The base of the jets is close to one flare ribbon in the AIA~304~{\AA} maps. A post-flare loop can be seen in the AIA~94~{\AA} map, and two pairs of magenta lines with a width of about 3$''$ are used to outline the double footpoints (or loop legs). Finally, the light curve at AIA~94~{\AA} is integrated over the flare region, as indicated by the green line in Figure~\ref{over}~(A). We cannot see the small peak during $\sim$04:16$-$04:20~UT before the flare onset. Thus, we can conclude that the small double peaks (indicated by the black arrow) in the SXR/HXR channels are not identified as the flare precursor \citep[e.g.,][]{Dudk16,Benz17,Yan17,Li18b,Li20c}. \section{Results and Discussions} \subsection{Multi-wavelength observations of flare QPP} The small double peaks before the M1.2 flare seen in the SXR/HXR fluxes cannot be regarded as the flare precursor, because they are not homologous with the flare source, as shown in Figure~\ref{over}. Herein, only the successive sub-peaks seen in the HXR and microwave emissions during the flare impulsive phase (i.e., $\sim$04:27$-$04:32~UT) are investigated in this study. Figure~\ref{flux} presents HXR light curves at GECAM~25$-$120~keV (black), Fermi~11.5$-$26.6~keV (magenta) and 26.6$-$102.4~keV (cyan). They appear to be characterized by several small-amplitude sub-peaks superimposed on a large-amplitude pulse. These sub-peaks with small amplitudes are successive and could be regarded as QPPs, while the main pulse with the large amplitude can be regarded as a strong background trend.
The vertical lines indicate seven sub-peaks from roughly 04:27:50~UT to about 04:32:20~UT, and the average duration is 45~s, corresponding to a quasi-period of 45~s. We also note that some sub-peaks might not be very clear in the raw light curve, largely due to their small amplitudes. Using a smooth window of 60~s \citep{Nakariakov10a,Yuan11,Li15,Li22}, the raw light curve is decomposed into two components: a rapidly varying component (QPP) plus a slowly varying component (background). Thereby, the shorter-period oscillation (i.e., the 45-s QPP) is enhanced, while the long-period background trend is suppressed \citep[see][for the discussion of this method]{Kupriyanova10,Gruber11,Auchere16}. The overplotted blue dashed lines represent the slowly varying components, and the rapidly varying components are shown in panel~(B). Obviously, the rapidly varying components are dominated by the QPP feature, i.e., some repetitive but irregular pulsations, as marked by the vertical lines. They match well with the successive sub-peaks seen in the raw light curves, indicating that the smoothing method only enhances the short-period oscillation, but does not change it. Therefore, these repetitive but irregular pulsations could be regarded as the signature of non-stationary QPPs \citep[cf.][]{Nakariakov19}, and they cannot be an artifact of the smoothing \citep[cf.][]{Li21}. Here, the modulation depth of the flare QPPs, which is regarded as the ratio between the rapidly varying components and the maximum value of the slowly varying components, is roughly equal to 10\%$-$25\%. This result is consistent with previous findings for flare QPPs in HXR emissions \citep[e.g.,][]{Nakariakov10a,Li22}. Next, the Morlet wavelet analysis method is applied to the rapidly varying components at Fermi~11.5$-$26.6~keV and GECAM~25$-$120~keV, as shown in Figure~\ref{hxr}. Based on Parseval's theorem for wavelet analysis \citep{Torrence98}, the wavelet power has been normalized, which preserves the total signal energy under the wavelet transform and yields a distribution of the spectral power across wavelet periods. Panels~(A1) and (B1) show the wavelet power spectra, and they both exhibit an enhanced power over a wide range in almost the same time interval, from about 04:27:50~UT to 04:32:20~UT, indicating a flare QPP within large uncertainties. The bulk of the power spectrum (at the confidence level of 99\%) is dominated by a quasi-period centered at $\sim$45~s. The dominant period of $\sim$45~s is confirmed by the global wavelet power spectrum, as shown in panels~(A2) and (B2), where a significant peak at about 45~s is seen. On the other hand, the period uncertainty of $\pm$10~s can be determined from the full width at half maximum of the peak global power above the 99\% confidence level \citep[as performed by][]{Yuan11,Tian16,Li20a}. The flare QPP with a quasi-period of about 45$\pm$10~s is thus seen in the HXR radiation observed by Fermi and GECAM. However, the Fermi flux at 11.5$-$26.6~keV might consist of both SXR and HXR components. In order to determine whether the flare QPP is also present in the SXR emission, we then perform the Morlet wavelet analysis on the SXR light curves at GOES~1$-$8~{\AA} and ESP~1$-$70~{\AA}, as shown in Figure~\ref{sxr}. Panels~(A1) and (B1) present the raw SXR light curves (black) and their slowly varying components (dashed blue) after applying a smooth window of 60~s.
It should be pointed out that the slowly varying components have been multiplied by 0.95 to avoid overlap with the raw light curves \citep[cf.][]{Ning22}. Panels~(A2) and (B2) plot the corresponding rapidly varying components, which are characterized by a series of successive pulsations. The modulation depth of the SXR radiation is only about 0.4\%$-$0.6\%, which is much smaller than that of the HXR emissions. This is consistent with previous observations: for instance, the flare SXR emission often reveals small-amplitude oscillations, while the HXR QPP usually has a large amplitude \citep[e.g.,][]{Nakariakov10a,Ning17,Li20b,Ning22}. Panels~(A3) and (B3) show the Morlet wavelet power spectra of the rapidly varying components. They both reveal an enhanced power at the period center of about 45~s over a time interval from roughly 04:27~UT to 04:31~UT, suggesting a dominant period of $\sim$45~s, similar to what is observed in the HXR QPPs. Figure~\ref{radio} presents the Morlet wavelet analysis of the radio fluxes at frequencies of NoRP~2~GHz (A1-A3) and 3.75~GHz (B1-B3). Using the same smooth window of 60~s, the raw light curves (black) are decomposed into slowly (dashed blue) and rapidly varying components (A2-B2). The modulation depth of the radio QPPs is estimated to be about 1\%$-$2\%, which is larger than that of the SXR QPPs, but still smaller than that of the HXR QPPs. We also note that only 3 or 4 successive pulsations appear in the radio fluxes, fewer than in the HXR fluxes. On the other hand, the same quasi-period centered at $\sim$45~s is seen in the wavelet power spectrum, which agrees with the 45-s QPP observed by Fermi, GECAM, GOES and EVE/ESP. The same quasi-period of 45~s is simultaneously detected in the SXR, HXR, and microwave emissions during the impulsive phase of the M1.2 flare, suggesting that the 45-s QPP seen at multiple wavelengths should originate from the same process of energy release, i.e., repetitive magnetic reconnection. The flare QPP is observed at multiple wavelengths in the HXR, SXR, and microwave emissions, which means that the 45-s QPP appears simultaneously in both the nonthermal and thermal emissions. In other words, nonthermal and thermal processes could coexist during the M1.2 flare \citep[e.g.,][]{Warmuth16,Li20a,Ning22}. The 45-s QPP observed in the thermal emission at SXR wavelengths may share the same origin as the QPP feature seen in the nonthermal emission at HXR and microwave channels. The M1.2 flare showed the Neupert effect (Figure~\ref{over}), which indicates plasma heating via energy release through electron beams \citep{Neupert68,Ning08,Ning09}. The flare QPP observed at multiple wavelengths is most likely to be associated with the nonthermal process, i.e., electron beams periodically accelerated via repetitive magnetic reconnection \citep[e.g.,][]{Li21,Karampelas22}. The idea is that the energy released via periodic reconnection could periodically accelerate electron beams, producing repetitive HXR and microwave pulsations in the solar corona. Meanwhile, the repeated SXR pulsations are periodically generated by plasma heating after magnetic reconnection \citep[see][for a recent review]{Zimovets21}. \subsection{Recurrent jets associated with flare QPP} Figure~\ref{over}~(B) and the animation.mp4 show a group of plasma ejections during the M1.2 flare. They manifest as collimated and beam-like structures in AIA~304~{\AA}, which can be identified as `solar jets' \citep[e.g.,][]{Shen21}.
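As a methodological aside, the decomposition into slowly and rapidly varying components used above---and applied again to the jet light curve below---can be sketched in a few lines of Python. This is a minimal sketch assuming a simple running-mean smooth, since the study does not specify the smoothing implementation beyond the 60-s window:
\begin{verbatim}
import numpy as np

def detrend(flux, cadence, window=60.0):
    """Decompose a light curve into slowly and rapidly varying
    components with a running-mean smooth of the given window (s).
    Edge effects are ignored in this sketch."""
    n = max(int(round(window / cadence)), 1)
    slow = np.convolve(flux, np.ones(n) / n, mode="same")  # trend
    rapid = flux - slow                                    # QPP part
    return slow, rapid

# Modulation depth as defined in the text: ratio of the rapidly
# varying component to the maximum of the slowly varying component.
# slow, rapid = detrend(flux, cadence=0.5)   # e.g., GECAM, 0.5 s
# depth = np.max(np.abs(rapid)) / np.max(slow)
\end{verbatim}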
To look more closely at the jet eruptions and their periodicity, we draw the time-distance image along the slit S1, made from the AIA~304~{\AA} image series, as shown in Figure~\ref{jet}~(A). Here, the slit is selected to have a constant width of about 15$''$, so that it can cover the bulk of the jet bodies as much as possible. A series of solar jets can be seen in the time-distance image, and their apparent speeds are estimated to be about 110$-$300~km~s$^{-1}$, as indicated by the blue arrows. A total of nine jets are found during the time interval of about 450~s, and the average intermittent cadence is roughly equal to 50~s. Such an intermittent cadence is quite close to the quasi-period of the flare QPP, implying that those jets occur periodically. The intensity variation integrated between the two short cyan lines is then overplotted, as shown by the cyan line. The intensity curve seems to reveal several sub-peaks corresponding to the solar jets. However, it is hard to show a one-to-one correspondence, mainly due to the small-amplitude sub-peaks superimposed on the strong background emission. Therefore, the slowly (dashed green) and rapidly varying components are distinguished with the smooth window of 60~s, and the Morlet wavelet analysis is applied to the rapidly varying component. Panels~(B) and (C) show the Morlet wavelet power spectrum and its global wavelet power spectrum. They both reveal a period centered at about 45~s, confirming that the recurrent jets are associated with the flare QPP. Moreover, the recurrent jets appear to start at about 04:24:50~UT, which is $\sim$180~s earlier than the flare QPP. Our observations suggest that the flare QPP could be excited by these recurrent jets. Previous studies \citep[e.g.,][]{Reid12,Shen12,Lu19} found that solar jets were always accompanied by solar flares, coronal bright points, or filament eruptions. Recent observations also showed that solar jets triggered by a solar flare had repetitive and regular occurrences with a period of about 72~s, but a similar quasi-period between the flare QPPs and the recurrent jets was not found \citep{Ning22}. The same quasi-period of about 60~s was also discovered in both flare QPPs and recurrent jets, and they took place almost simultaneously \citep{Shi22}. However, it was impossible to conclude whether these recurrent jets affected the flare QPP or were just the result of the flare QPP \citep[cf.][]{Ning22}. In our study, the same quasi-period of 45~s is observed in both the flare QPP and the recurrent jets, and the onset time of these recurrent jets is $\sim$180~s earlier than the beginning of the flare QPP. Based on these observational facts, we may infer that the flare QPP seen in the SXR/HXR and microwave emissions is probably excited by the recurrent jets. The associated video (animation.mp4) shows that the eruption of the first jet closely resembles a mini-filament-driven jet, indicating that the recurrent jets could be driven by the eruption of mini-filaments associated with magnetic reconnection \citep{Sterling20,Shen21}. Thus, both the recurrent jets and the accompanying flare QPP could be associated with magnetic reconnection that is modulated by some periodic process.
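For completeness, the period determination applied throughout this section can be sketched with the PyWavelets package as a stand-in for the Morlet implementation of \cite{Torrence98} used here; the 99\% significance test and the cone of influence are omitted, and the scale range is illustrative:
\begin{verbatim}
import numpy as np
import pywt  # PyWavelets

def dominant_period(rapid, cadence, scales):
    """Complex-Morlet wavelet power of a detrended light curve.
    Returns the trial periods, the global (time-averaged) power,
    and the period of the peak global power."""
    coeffs, freqs = pywt.cwt(rapid, scales, "cmor1.5-1.0",
                             sampling_period=cadence)
    power = np.abs(coeffs) ** 2
    global_power = power.mean(axis=1)
    periods = 1.0 / freqs
    return periods, global_power, periods[np.argmax(global_power)]

# periods, gp, p0 = dominant_period(rapid, cadence=0.5,
#                                   scales=np.arange(2, 256))
# The +/-10-s uncertainty quoted in the text corresponds to the
# full width at half maximum of the ~45-s peak in gp.
\end{verbatim}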
\subsection{Geometric and differential emission measure analysis}
The flare QPP observed in the SXR, HXR and microwave emissions could be excited by a group of recurrent jets with the same recurrence cadence, and both are most likely associated with a nonthermal process, i.e., electron beams periodically accelerated by repetitive magnetic reconnection \citep[e.g.,][]{Yuan19,Li21,Karampelas22}. To further determine whether the 45-s quasi-period is modulated by an external MHD wave \citep[e.g.,][]{Foullon05,Li15,Nakariakov18} or arises from a self-oscillating process \citep[e.g.,][]{Nakariakov10,Takasao16}, we perform a geometric and differential emission measure (DEM) analysis for the M1.2 flare, as shown in Figures~\ref{loop} and \ref{dem}.

Figure~\ref{loop}~(A1-B1) presents time-distance diagrams at AIA~94~{\AA} along the two slits (S2 and S3) in Figure~\ref{over}~(C); the magenta asterisks (`$\ast$') mark their start points. The slits are selected to cross the two opposite footpoints of the flare loop, but not the loop top, because there are many more saturated pixels at the loop top than at the footpoints (see also animation.mp4). The two time-distance diagrams show no signature of displacement oscillations perpendicular to the loop legs. However, clear signatures of brightness variations appear at both footpoints, as outlined by the two short magenta lines. The normalized AIA~94~{\AA} light curves, integrated between the two short magenta lines, are therefore overplotted on the corresponding time-distance diagrams as solid magenta curves. Similar to the microwave flux, at least four sub-peaks are found superimposed on the background emission, as indicated by the gold vertical lines; these are fewer than in the HXR fluxes. They appear as non-stationary QPPs; for instance, each pulsation is mainly characterized by an anharmonic, triangular shape \citep[e.g.,][]{Nakariakov19}. Using the same 60-s smoothing window, the slowly (dashed red) and rapidly varying components are separated from the raw light curves. Panels~(A2-B2) show the Morlet wavelet power spectra of the rapidly varying components at AIA~94~{\AA}. Both reveal enhanced power at a period centre of about 45~s from around 04:27:50~UT to 04:30:05~UT, suggesting a dominant period of $\sim$45~s, similar to that observed in the SXR/HXR and microwave emissions. Panel~(C) presents the cross-correlation analysis \citep[e.g.,][]{Tian16} between the two rapidly varying components at the two footpoints: the maximum correlation coefficient of 0.74 occurs at a time lag of 0~s, as indicated by the vertical line. This result suggests that the flare QPP at the two footpoints is in phase.

Figure~\ref{dem} shows the result of the DEM analysis, calculated from the six EUV-wavelength observations of SDO/AIA. The DEM($T$) distribution for each pixel is estimated with an improved sparse-inversion code \citep{Cheung15,Su18}, and the DEM($T$) uncertainty is estimated from 100 Monte Carlo (MC) simulations, i.e., as three times the standard deviation of the 100~MC simulations (3$\delta$). Panel~(A) presents the EM map integrated over the temperature range of 0.31$-$20~MK. Similar to the AIA~94~{\AA}~map, a post-flare loop can be seen in the EM map.
Three small regions (cyan boxes) with a FOV of about 1.8$''$$\times$1.8$''$ are then selected to display the DEM profiles; they are located in the non-flare region (or coronal background, p1), the loop-top region (p2), and one footpoint (p3), respectively. Panels~(B-D) draw the DEM profiles as a function of temperature, with the error bars representing their uncertainties, i.e., 3$\delta$. The EM and the DEM-weighted mean temperature ($T_{\rm e}$) are calculated over the temperature range of 0.31$-$20~MK, as labeled in each panel. Both the EM and $T_{\rm e}$ at the loop top are higher than at the footpoint, which is why the loop-top region is more strongly saturated. The $T_{\rm e}$ is estimated to be $\sim$7.7~MK (C) at the loop top and $\sim$6.7~MK (D) at the footpoint, consistent with the fact that the post-flare loop is most visible at AIA~94~{\AA} ($T\approx$6.3~MK). In the non-flare region, $T_{\rm e}$ is $\sim$1.8~MK (B), roughly equal to the quiet coronal temperature.

\subsection{MHD explanation and coronal seismology}
Based on the AIA~94~{\AA} map and the EM map in Figures~\ref{over}~(C) and \ref{dem}~(A), the distance between the two footpoints of the flare loop is estimated to be $\sim$20.3~Mm, which gives a loop length ($L$) of $\sim$31.9~Mm under the assumption of a semi-circular flare loop \citep[cf.][]{Tian16,Gao22,Li22}. Then, assuming that the oscillation is associated with a standing wave, the phase speed ($v_{\rm ph}$) is determined by Equation~(\ref{eq1}), i.e., twice the ratio of the loop length to the quasi-period ($P$), which gives about 1420~km~s$^{-1}$.
\begin{equation}
v_{\rm ph} = \frac{2L}{P}. \label{eq1}
\end{equation}
The local sound speed in the flare loop can be estimated as $v_{\rm s}\approx152\sqrt{T_{\rm e}/{\rm MK}}$~km~s$^{-1}$ \citep[cf.][]{Nakariakov01,Kumar15,Li17b}. The mean temperatures at the loop top and footpoint of 7.7~MK and 6.7~MK lead to local sound speeds of $\sim$420~km~s$^{-1}$ and $\sim$390~km~s$^{-1}$, respectively. The estimated phase speed of the flare loop is clearly much faster than the local sound speeds at the loop top and footpoints. Therefore, the 45-s period observed in the M1.2 flare cannot be modulated by a slow-mode wave in the flare loop \citep[e.g.,][]{Wang21}, although quasi-periods of less than 1~minute have been reported in flare QPPs and explained as standing slow-mode waves \citep[e.g.,][]{Welsh06,Cho16}. The estimated phase speed is also much slower than that required for the global sausage-mode wave, i.e., speeds in the range of $\sim$2400$-$5000~km~s$^{-1}$ \citep[e.g.][]{Nakariakov03,Melnikov05,Tian16}. Moreover, the global sausage-mode wave is usually found in broader and denser plasma loops, with the necessary condition given by \cite{Nakariakov03} as in Equation~(\ref{eq2}).
\begin{equation}
\frac{n_{\rm i}}{n_{\rm o}} > \left(\frac{L}{0.65w}\right)^{2}. \label{eq2}
\end{equation}
\noindent Here, $n_{\rm i}$ and $n_{\rm o}$ are the number densities inside and outside the flare loop (i.e., in the non-flare region), and $w$ is the loop width, which can be regarded as the full width at half maximum of a Gaussian profile across the flare loop, about 2.5~Mm. Thus, the density contrast would have to be as high as 385 for the 45-s QPP to be modulated by the global sausage-mode wave of the flare loop.
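The arithmetic behind these seismological estimates is straightforward. The following minimal sketch evaluates Equation~(\ref{eq1}), the sound-speed scaling, and the trapping condition of Equation~(\ref{eq2}), using only the measured values quoted above.

\begin{verbatim}
import math

L = 31.9e3    # loop length in km (semi-circular assumption)
P = 45.0      # quasi-period in s
w = 2.5e3     # loop width (FWHM across the loop) in km

v_ph = 2.0 * L / P                            # Eq. (1)
v_s = lambda T_MK: 152.0 * math.sqrt(T_MK)    # sound speed, km/s

print(f"v_ph ~ {v_ph:.0f} km/s")                          # ~1420 km/s
print(f"v_s(loop top, 7.7 MK)  ~ {v_s(7.7):.0f} km/s")    # ~420 km/s
print(f"v_s(footpoint, 6.7 MK) ~ {v_s(6.7):.0f} km/s")    # ~390 km/s

# Eq. (2): minimum density contrast for a trapped sausage mode
contrast_min = (L / (0.65 * w)) ** 2
print(f"required n_i/n_o > {contrast_min:.0f}")           # ~385
\end{verbatim}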
The number density inside the flare loop ($n_{\rm i}$) can be estimated as $\sqrt{{\rm EM}/w}$, giving $\sim$7.5$\times$10$^{10}$~cm$^{-3}$ at the loop top and $\sim$3.6$\times$10$^{10}$~cm$^{-3}$ at the footpoint. For the non-flare region, which contains no plasma loops, an effective line-of-sight depth of $w \approx 4 \times 10^{10}$~cm is used to calculate $n_{\rm o}$ \citep[see,][]{Zucca14,Li18,Suw18}, yielding $\sim$9.7$\times$10$^{8}$~cm$^{-3}$. The density contrast is then in the range of $\sim$37$-$77 from the footpoints to the loop top. Such a density contrast is far too low compared with the necessary condition for a global sausage oscillation in flare loops \citep[e.g.,][]{Nakariakov03,Chen15}. Therefore, the quasi-period of about 45~s seen in the M1.2 flare cannot be modulated by the global sausage-mode wave of the flare loop.

In our study, the phase speed is quite close to the average speed of about 1328~km~s$^{-1}$ in a catalog of kink-mode oscillations \citep{Nechaeva19,Nakariakov21}, which are often identified as transverse oscillations of plasma loops \citep[e.g.,][]{Nakariakov99,Anfinogentov15,Suw18,Li20b,Tiwari21}. In the corona, kink oscillations are compressive, though only weakly so in the long-wavelength regime \citep{Goossens12,Nakariakov21}. Moreover, they can appear as brightness variations or intensity disturbances if the loop displacement is not exactly perpendicular to the line of sight \citep{Cooper03,Tian12,Wang12,Zimovets15,Antolin17,Li18}. In that case, the local Alfv\'{e}n speed ($v_{\rm A}$) can be determined from the phase speed ($v_{\rm ph}$) and the density contrast ($n_{\rm o}/n_{\rm i}$), and the magnetic field strength ($B$) can then be estimated from the local Alfv\'{e}n speed and the mass density at the loop top and footpoints, as given in Equations~(\ref{eq3}) and (\ref{eq4}) \citep[e.g.,][]{Yang20,Zimovets21,Tan22,Zhang22}.
\begin{equation}
v_{\rm A} = v_{\rm ph}~\left(\frac{2}{1+n_{\rm o}/n_{\rm i}}\right)^{-\frac{1}{2}}. \label{eq3}
\end{equation}
\begin{equation}
B \approx v_{\rm A}~(\mu_{\rm 0}~n_{\rm i}~m_{\rm p}~\widetilde{\mu})^{\frac{1}{2}}. \label{eq4}
\end{equation}
\noindent Here, $\mu_{\rm 0}$ and $m_{\rm p}$ denote the magnetic permeability of free space and the proton mass, $n_{\rm i}$ is the number density in the flare loop, and $\widetilde{\mu} \approx 1.27$ is the mean molecular weight in the solar corona \citep[e.g.,][]{Nakariakov01,Zhang20}; the mass density is thus $\rho_{\rm i} \approx n_{\rm i}~m_{\rm p}~\widetilde{\mu}$. The Alfv\'{e}n speed inside the oscillating loop is estimated to be about 1010~km~s$^{-1}$, leading to magnetic field strengths of about 99~G at the footpoint and 143~G at the loop top. These field strengths are consistent with previous estimates in solar flares \citep[e.g.,][]{Qiu09,Li17b,Li18,Zimovets21b}. Our measurements and estimates support the idea that the 45-s quasi-period in the M1.2 flare could be modulated by the kink-mode wave of a flare loop \citep{Nakariakov21}.
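As a check on these numbers, the minimal sketch below evaluates Equations~(\ref{eq3}) and (\ref{eq4}) in SI units for the loop top and the footpoint, using the phase speed and densities derived above; it is an illustration of the inversion, not part of the published analysis pipeline.

\begin{verbatim}
import math

MU0 = 4.0e-7 * math.pi     # vacuum permeability, T m / A
M_P = 1.6726e-27           # proton mass, kg
MU_TILDE = 1.27            # mean molecular weight in the corona

v_ph = 1420e3              # phase speed from Eq. (1), m/s
n_o = 9.7e8 * 1e6          # background number density, m^-3

def alfven_and_B(n_i_cm3):
    """Eq. (3) and Eq. (4): Alfven speed and field strength
    for a loop with internal number density n_i (cm^-3)."""
    n_i = n_i_cm3 * 1e6                                  # to m^-3
    v_a = v_ph * (2.0 / (1.0 + n_o / n_i)) ** -0.5       # Eq. (3)
    B = v_a * math.sqrt(MU0 * n_i * M_P * MU_TILDE)      # Eq. (4), Tesla
    return v_a / 1e3, B * 1e4                            # km/s, Gauss

for label, n_i in [("loop top", 7.5e10), ("footpoint", 3.6e10)]:
    v_a, B = alfven_and_B(n_i)
    print(f"{label}: v_A ~ {v_a:.0f} km/s, B ~ {B:.0f} G")
# loop top:  v_A ~ 1011 km/s, B ~ 143 G
# footpoint: v_A ~ 1018 km/s, B ~ 100 G
\end{verbatim}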
\section{Summary}
Based on observations recorded by Fermi, GECAM, GOES, SDO/EVE, and NoRP, we investigate the non-stationary QPP at HXR, SXR, microwave and EUV wavelengths during the impulsive phase of an M1.2 flare on 2022 July 14. Combined with the imaging observations from SDO/AIA, the excitation and modulation of the flare QPP are discussed. Our conclusions are summarized as follows:

\begin{enumerate}
\item A quasi-period of $\sim$45$\pm$10~s is simultaneously detected at Fermi~11.5$-$102.4~keV, GECAM~25$-$120~keV, GOES~1$-$8~{\AA}, ESP~1$-$70~{\AA}, and NoRP~2~GHz and 3.75~GHz during the flare impulsive phase, i.e., from about 04:27:50~UT to 04:32:20~UT. Our observations suggest the coexistence of nonthermal and thermal processes in the M1.2 flare, and the 45-s QPP at multiple wavelengths could share the same periodic process of energy release, such as repetitive magnetic reconnection \citep[e.g.,][]{Yuan19,Li21,Karampelas22}.
\item A group of recurrent jets with a periodicity of $\sim$45$\pm$10~s is seen in the AIA~304~{\AA} image series during $\sim$04:24:50$-$04:32:20~UT. The onset of the flare QPP is 180~s later than that of the recurrent jets, but they show the same quasi-period, indicating that the flare QPP is probably excited by the recurrent jets. This result differs from previous findings, in which, for instance, solar jets were always triggered by the flare eruption \citep{Reid12,Lu19}, or the periodicities of the solar flare and the accompanying jets were different \citep[e.g.,][]{Ning22}.
\item Thanks to the imaging observations from SDO/AIA at 94~{\AA}, the quasi-period of $\sim$45$\pm$10~s is also seen at the two opposite footpoints of the flare loop, and the phase speed is estimated to be about 1420~km~s$^{-1}$. Our measurements imply that the 45-s period is most likely modulated by the kink-mode wave \citep[cf.][]{Nakariakov10a,Nechaeva19}.
\item Based on the kink-oscillation model, the Alfv\'{e}n speed inside the flare loop is estimated to be $\sim$1010~km~s$^{-1}$. The magnetic field strengths are measured in the range of 99$-$143~G from the footpoint to the loop top, similar to previous estimates in solar flares, which are of the order of 100~G \citep[e.g.,][]{Qiu09,Li18,Zimovets21b}.
\end{enumerate}

\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

\section*{Author Contributions}
D.~Li selected the topic, performed the main data analysis, led the discussions, and prepared the manuscript. F.~Shi contributed to the SDO/AIA data analysis and helped revise the manuscript. H.~Zhao, S.~Xiong, L.~Song, W.~Peng, and X.~Li provided the GECAM data analysis. W.~Chen contributed to the Fermi data analysis. Z.~Ning joined the discussion on the interpretation of the flare QPP and recurrent jets.

\section*{Funding}
This work is funded by the NSFC under grants 11973092, U1931138, 12073081, and U1938102, as well as by the CAS Strategic Pioneer Program on Space Science, grants No. XDA15052200 and XDA15320301. D.~Li is also supported by the Surface Project of Jiangsu Province (BK20211402).

\section*{Acknowledgments}
The authors would like to thank the two anonymous referees for their inspiring and valuable comments. We thank the teams of Fermi, GECAM, GOES, SDO/AIA, SDO/EVE, and NoRP for their open data-use policy. GECAM is a mission funded by the Chinese Academy of Sciences (CAS) under the Strategic Priority Research Program on Space Science. SDO is a mission of NASA's Living With a Star (LWS) Program. AIA and EVE data are courtesy of the NASA/SDO science teams. NoRP is operated by the Solar Science Observatory, a branch of the National Astronomical Observatory of Japan, and its observing data are verified scientifically by the consortium for NoRP scientific operations.
\section*{Data Availability Statement} The datasets for this study can be found here: \href{https://fermi.gsfc.nasa.gov/ssc/data/}{https://fermi.gsfc.nasa.gov/ssc/data/}, \href{http://jsoc.stanford.edu/ajax/lookdata.html}{http://jsoc.stanford.edu/ajax/lookdata.html}, \href{https://lasp.colorado.edu/home/eve/data/}{https://lasp.colorado.edu/home/eve/data/}, \href{https://solar.nro.nao.ac.jp/norp/index.html}{https://solar.nro.nao.ac.jp/norp/index.html}. \bibliographystyle{frontiersinSCNS_ENG_HUMS}
{ "timestamp": "2022-09-23T02:13:36", "yymm": "2209", "arxiv_id": "2209.10952", "language": "en", "url": "https://arxiv.org/abs/2209.10952" }
\section{Introduction}
The Leo I group is well-known for its numerous optically dark H\textsc{i}{} features. At a distance of approximately 11 Mpc, the group consists of at least four major galaxies. Its most unusual feature is the spectacular Leo Ring, a 225 kpc diameter arc of approximately 1.7$\times$10$^{9}$M$_{\odot}${} of H\textsc{i}{} (\citealt{schnleo}, \citealt{aaleo}). Some isolated patches of optical emission have been detected at various points within the Ring (\citealt{thileo}), but on large scales the H\textsc{i}{} has no associated optical component. The origin of this structure is still unclear. It has been suggested to be primordial (\citealt{schnleo}, \citealt{leoprim}), though metallicity measurements suggest otherwise (\citealt{rose}, \citealt{leomet}). One alternative explanation is a galaxy-galaxy collision (\citealt{leocollide}), which reproduces the overall morphology well but has difficulties explaining the observed kinematics.

At approximately the same location on the sky as some parts of the Ring, but at a higher velocity, extended H\textsc{i}{} emission is also seen around the galaxy NGC 3389 (\citealt{aaleo}). This feature, although less dramatic, still extends across $\sim$\,90 kpc. It contains several optically bright galaxies and appears relatively easy to explain by tidal interactions. Elsewhere in Leo, the Leo Triplet contains a $\sim$100 kpc H\textsc{i}{} plume (\citealt{aaleo}), while even larger features are reported in \cite{aaleoleis}.

Giant, elongated H\textsc{i}{} structures are often easily explained by galaxy-galaxy interactions (e.g. \citealt{tooms}, \citealt{bekki}, \citealt{duc}, \citealt{me17}) or ram pressure stripping (\citealt{oo05}, \citealt{ton10}, \citealt{me20}). Less common are small, discrete H\textsc{i}{} clouds with no obvious extended feature to indicate their parent galaxy; we describe a few of these in section \ref{sec:others}, with a longer review given in \cite{me16}. The origin of such clouds tends to be controversial. While they may be the remnants of dispersed streams from tidal encounters, an alternative is that they are dark matter dominated but extremely dim (or even entirely dark) galaxies, either primordial (\citealt{d04}, \citealt{m07}) or resulting from interactions (\citealt{fadug}). We tested these explanations in \cite{me16} and \cite{me17}. We found that the \textit{majority} of such isolated clouds can indeed be explained as tidal debris, as can particular features embedded within larger structures, such as VIRGOHI21 (\citealt{m07}, \citealt{aavhi21}). In contrast, we also showed that while our simulations readily produced isolated clouds of low velocity width ($<$\,50 km\,s$^{-1}${}), clouds of line width $>$ 100 km\,s$^{-1}${} were virtually non-existent in those same simulations. This makes the clouds described in \cite{me12} and \cite{me13} particularly challenging to explain.

In this paper we present the discovery of five new clouds in the Leo I region using a deep Arecibo survey. While a handful of such clouds are known in other groups, this is the first time such clouds have been found in this environment. We discuss the possible origins of the clouds, considering the strengths and weaknesses of the different models proposed to explain the formation of the Leo Ring.

The rest of this paper is organised as follows. In section \ref{sec:obs} we describe the observations, data reduction and source extraction of the H\textsc{i}{} data used in this analysis.
The results are presented in section \ref{sec:results} and analysed in section \ref{sec:analysis}. Finally, we summarise our findings in section \ref{sec:findings}. Throughout, we assume a group distance of 11.1 Mpc following \cite{aaleo}; at this distance, the Arecibo 3.5$^{\prime}$ diameter beam has a physical size of 11.3 kpc. \cite{aaleo} give the group velocity dispersion as 175 km\,s$^{-1}${}. Since all of our clouds have systemic velocities within 75 km\,s$^{-1}${} of M96, we assume that they are all at the group distance of 11.1 Mpc unless stated otherwise.

\begin{deluxetable*}{c c c c c c c c c c}
\tablecaption{Observed H\textsc{i}{} parameters for our selected clouds. All coordinates are J2000. Clouds are ordered in ascending declination.} \label{tab:clouds}
\tablehead{ \colhead{Name} & \colhead{R.A.} & \colhead{Declination} & \colhead{Velocity} & \colhead{W50} & \colhead{W20} & \colhead{F$_{tot}$} & \colhead{MH\textsc{i}{}} & \colhead{S/N$_{peak}$} & \colhead{S/N$_{tot}$}\\ \colhead{} & \colhead{} & \colhead{} & \colhead{km\,s$^{-1}${}} & \colhead{km\,s$^{-1}${}} & \colhead{km\,s$^{-1}${}} & \colhead{Jy\,km\,s$^{-1}${}} & \colhead{M$_{\odot}${}} & \colhead{} & \colhead{} }
\startdata
Cloud 1 & 10:44:47.79 & 11:27:34.05 & 960 & 38 & 100 & 0.188 & 5.5E6 & 12.3 & 17.0\\
Cloud 2 & 10:45:59.27 & 11:30:34.66 & 891 & 33 & 58 & 0.201 & 5.9E6 & 13.5 & 15.7\\
Cloud 3 & 10:45:36.17 & 11:44:20.90 & 894 & 42 & 73 & 0.275 & 8.0E6 & 13.6 & 19.0\\
Cloud 4 & 10:44:56.44 & 11:54:52.40 & 879 & 31 & 46 & 0.308 & 9.0E6 & 19.6 & 24.7\\
Cloud 5 & 10:50:25.91 & 12:09:54.62 & 916 & 16 & 27 & 0.088 & 2.6E6 & 9.2 & 8.2\\
Cloud 6 & 10:45:00.04 & 13:26:04.37 & 860 & 34 & 45 & 0.172 & 5.0E6 & 9.9 & 11.0\\
\enddata
\end{deluxetable*}

\section{Observations} \label{sec:obs}
\subsection{The AGES data} \label{sec:obspartone}
The Arecibo Galaxy Environment Survey (AGES; \citealt{auld}) was a blind H\textsc{i}{} survey performed at the Arecibo radio telescope from 2006--2019. It was part of a tiered approach to H\textsc{i}{} surveys taking advantage of the then-new ALFA (Arecibo L-band Feed Array) receiver. To this end, ALFALFA (the Arecibo Legacy Fast ALFA survey; \citealt{alfalfa}) covered approximately 7,000 square degrees to an \textit{rms} sensitivity of about 2.2 mJy; AGES provided a medium-deep survey of 200 square degrees to 0.7 mJy; and AUDS (the Arecibo Ultra Deep Survey; \citealt{auds}) examined 1.4 square degrees to 0.08 mJy \textit{rms}. AGES targeted the full range of galaxy environments from voids to clusters, with one of its major goals being the detection of optically dark H\textsc{i}{} sources that could not be discovered by optical surveys. The 1$\sigma$ column density sensitivity of AGES is N$_{H\textsc{i}{}}$ = 1.5$\times$10$^{17}$cm$^{-2}$ (0.001 M$_{\odot}${}pc$^{-2}$) for a source which fills the beam at 10 km\,s$^{-1}${} velocity resolution (\citealt{olivia}); while AGES has a maximum velocity resolution of 5 km\,s$^{-1}${}, we Hanning smooth the data to 10 km\,s$^{-1}${} for better sensitivity and to reduce artifacts such as Gibbs ringing.

The AGES observing strategy and data reduction techniques have been described in detail in \cite{auld}, \cite{m10} and \cite{d11}, and are only summarised here. AGES was a fixed-azimuth drift scan survey. The telescope was pointed to the start position of a scan and the sky was allowed to drift overhead, typically for 20 minutes (as here), covering 5 degrees of Right Ascension with the seven beams of the ALFA receiver.
Each point took about 12 seconds to cross the beam. When a scan was complete, the telescope was re-oriented to the next scan, staggered by one-third of the beam size (3.5$^\prime$) to create a fully Nyquist-sampled map. The total on-source integration time per point was 300 seconds. Observations of the Leo field were carried out in 2012 (January--February; November--December), 2013 (February, April--May, November--December), 2014 (January), 2015 (January--March, May, December), 2016 (December), 2017 (January--February), 2018 (January--May, December), and 2019 (January). The observations cover a field of approximately 5 degrees of Right Ascension and 4 degrees of declination, centred on 10:45:00, 12:48:00. The full range of the field is 10:34:49\,--\,10:55:14 in Right Ascension and 10:42:58\,--\,14:52:01 in Declination. The bandwidth corresponds to heliocentric velocities from $-$2,000 to +20,000\,km\,s$^{-1}${}. Owing to the hexagonal beam pattern of ALFA, the sensitivity is reduced over a few arcminutes near the spatial edges of the field; it is also reduced over the final 1,000\,km\,s$^{-1}${} of the spectral baseline. Here we present only observations of the M96 subgroup, which lies in a frequency range unaffected by interference.

The data from the two linear polarisations of each of ALFA's seven beams are reduced using the \textsc{livedata} and \textsc{gridzilla} software packages. Following \cite{me14}, we additionally fit and subtract a second-order polynomial from the spectrum at every pixel in the resulting data cube, further reducing the impact of continuum sources. We also apply a \textsc{medmed} (\citealt{put}) correction to each spatial baseline. This divides each baseline into five equal segments, computes the median value of the flux in each one, and then subtracts the median of the medians. The advantage of this is a significant reduction in `shadows' in the data caused by over-subtraction of bright extended sources (an example is shown in \citealt{m10}).
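A minimal sketch of the \textsc{medmed} correction as described above: each baseline is split into five equal segments, the median flux of each segment is computed, and the median of those medians is subtracted. The array here is a synthetic stand-in for one spatial baseline, purely for illustration.

\begin{verbatim}
import numpy as np

def medmed(baseline, n_seg=5):
    """MEDMED correction: subtract the median of the per-segment
    medians, following the description of Putman et al. (2002)."""
    segments = np.array_split(np.asarray(baseline, float), n_seg)
    medians = [np.median(seg) for seg in segments]
    return baseline - np.median(medians)

# Synthetic baseline: noise plus an offset and one bright source
rng = np.random.default_rng(0)
baseline = 0.7 + rng.normal(0.0, 0.1, 500)   # offset + noise (mJy)
baseline[200:230] += 5.0                      # bright extended source
corrected = medmed(baseline)
# The bright source barely biases the estimate, because it can only
# affect one of the five segment medians.
print(f"residual offset: {np.median(corrected):.3f} mJy")
\end{verbatim}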
\subsection{Source extraction}
We here concentrate on the primary target region, the M96 subgroup in Leo. The primary target for AGES in this region was the Leo Ring. In the course of investigating this feature, we serendipitously discovered several discrete H\textsc{i}{} clouds which are optically undetected (we refer to them as `dark' by convention; see section \ref{sec:otherlambda} for details), at least one of which appears to have no association with the Ring. We restrict our present analysis to these clouds, and leave discussion of the Ring itself, as well as a catalogue over the full AGES spectral bandwidth, to future works. Here we limit our search to the M96 subgroup, within the velocity range 300\,--\,1,600 km\,s$^{-1}${}. All of the clouds are within 75 km\,s$^{-1}${} of the velocity of M96.

We searched the full spatial field by eye using the FRELLED software described in \cite{me15}. While this has been considerably updated since its initial release, to be described in a future paper, the basic source extraction procedure remains the same (see, e.g., \citealt{me14}). We first display the cube in a volume render, defining masks around anything that visually resembles an H\textsc{i}{} source. The advantage of the volume rendering technique is that it enables very rapid masking, since the user can see the three-dimensional (velocity space) structure of the source at a glance. The downside is that this often makes the faintest sources hard to detect. We therefore proceed to a second stage. With the masks still hiding the sources already found, we search each individual slice (i.e., channels and position-velocity slices) of the data cube and apply additional masks if any more sources are detected; in this case, no further sources were found. This approach is designed to combine the rapid cataloguing speed enabled by volumetric rendering with the sensitivity of more traditional visualisation techniques.

We quantify the H\textsc{i}{} parameters of each detection using the \textsc{miriad} task \textit{mbspect} (\citealt{miriad}). We define the spectral profile window by eye, and \textit{mbspect} fits the position by computing the centroid of a 2D Gaussian on the map integrated over the specified velocity range. It then computes the total integrated flux (S$_{H\textsc{i}{}}$) and the velocity widths at 50\% (W50) and 20\% (W20) of the peak flux. The H\textsc{i}{} mass is computed with the standard equation:
\[M_{H\textsc{i}{}} = 2.36\times10^{5}\, d^{2}\, S_{H\textsc{i}{}},\]
where $d$ is the distance in Mpc, S$_{H\textsc{i}{}}$ is the integrated flux in Jy\,km\,s$^{-1}${}, and the mass is in M$_{\odot}${}.

We searched for optical counterparts by checking the Sloan Digital Sky Survey (SDSS) and the NASA Extragalactic Database (NED) at the position-fitted coordinates of each source, searching within a 1.7$^\prime$ (half the FWHM) radius and checking whether any visible or listed sources have optical redshift measurements within 200 km\,s$^{-1}${} of the H\textsc{i}{} measurement. These search parameters are deliberately generous, to maximise the chance of finding any plausible optical counterpart. Typically, in other AGES data sets (e.g. \citealt{me12}), optical counterparts have spatial positions within about 20$''$ of the H\textsc{i}{} coordinates; the resolution of the survey is usually more than sufficient for identifying counterparts where they exist. Similarly, the difference between the optical and H\textsc{i}{} redshifts of candidate optical counterparts is usually less than 50 km\,s$^{-1}${}.

Quantifying the H\textsc{i}{} mass sensitivity is complicated, because the signal-to-noise (S/N) level of a source of a given mass varies with both its line width and its peak flux: no matter how massive, at a sufficiently high line width a source will always become too faint to detect. As a rough estimate, a 4$\sigma$ source with a top-hat profile of width 50 km\,s$^{-1}${} is a reasonable approximation for the faintest readily detectable source, which at 11.1 Mpc corresponds to an H\textsc{i}{} mass of 4.1$\times$10$^{6}$\,M$_{\odot}${}. Some of the sources presented here have masses close to or even below this value, raising the issue of whether our source extraction is complete for clouds of the given parameters. But a top-hat profile is not realistic. A more sophisticated sensitivity estimate is provided by the \cite{aasn} definition of integrated S/N, which accounts for the variation in line width and peak flux together. ALFALFA is reckoned to be complete for total S/N values, by this definition, above 6.5. By injecting several thousand artificial sources of different line widths and fluxes into galaxy-free data cubes and running our own source extraction procedures, we find that the AGES completeness matches the ALFALFA criterion well (Taylor et al. in preparation). All of the clouds presented herein have total S/N values exceeding 6.5, showing that we are indeed complete for clouds similar to those we detect.
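The sensitivity arithmetic above is easy to reproduce. A minimal sketch, using only the quoted survey values (0.7 mJy \textit{rms}, 11.1 Mpc, and a 4$\sigma$ top-hat of width 50 km\,s$^{-1}${}), together with a cross-check against table \ref{tab:clouds}:

\begin{verbatim}
def hi_mass(dist_mpc, flux_jy_kms):
    """Standard HI mass estimate in solar masses."""
    return 2.36e5 * dist_mpc**2 * flux_jy_kms

# Faintest readily detectable source: 4 sigma top-hat, 50 km/s wide
rms_jy = 0.7e-3              # AGES rms sensitivity, Jy
dist = 11.1                  # assumed group distance, Mpc
flux = 4.0 * rms_jy * 50.0   # integrated flux, Jy km/s
print(f"limiting mass ~ {hi_mass(dist, flux):.1e} Msun")   # ~4.1e6

# Cross-check against cloud 4 in table 1: F_tot = 0.308 Jy km/s
print(f"cloud 4 mass ~ {hi_mass(dist, 0.308):.1e} Msun")   # ~9.0e6
\end{verbatim}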
\section{Results} \label{sec:results}
\subsection{AGES H\textsc{i}{} measurements}
Excluding the Leo Ring, we found a total of 29 sources. Most of these are previously identified sources with clear optical counterparts, generally bright galaxies with matching optical redshift determinations. Here we present five sources which lack any apparent optical counterpart and, for comparison, one otherwise similar H\textsc{i}{} detection (cloud 4) which has a likely association with an optically identified galaxy (LeG 13). We include this source in our discussion since it is found in close proximity to three optically dark sources and its H\textsc{i}{} otherwise strongly resembles those clouds, raising the question of why this cloud alone should be optically visible (we refer to it as a `cloud' throughout this work purely for convenience, but note that the optical source does not have an optical redshift measurement; see section \ref{sec:otherlambda}).

We present the observational parameters of our six H\textsc{i}{} clouds in table \ref{tab:clouds}. All the clouds are broadly similar in their H\textsc{i}{} content, with masses ranging from 2.6\,--\,9.0$\times$10$^{6}$\,M$_{\odot}${} and velocity widths (W50) from 16\,--\,42\,km\,s$^{-1}${}. There is some ambiguity in the velocity widths, though, with some of the W20 measurements being up to 100 km\,s$^{-1}${}; we discuss this further in section \ref{sec:btfr}. Table \ref{tab:clouds} also gives the integrated S/N as defined by \cite{aasn}, where a value exceeding 6.5 generally indicates a reliable detection. All clouds exceed this value; although cloud 5 is only marginally above the threshold, it is easily visible in the data cube.

H\textsc{i}{} spectra and SDSS images, overlaid with H\textsc{i}{} contours, are shown in figures \ref{fig:cloudset1} and \ref{fig:cloudset2}. Following \cite{me20}, the contour level is chosen to be 3.5$\sigma$. We found that sources present at this level which appear to span at least one beam and three channels can be considered reliable detections, which is the case for all of our objects. Cloud 3 appears to be marginally resolved, while all the other clouds are point sources. We show a map of the full AGES Leo field in figure \ref{fig:wholeleo}. An interactive 3D version is included \href{http://www.rhysy.net/Resources/WholeLeo_Interactive_Blend4Web_v9.html}{at this url}.

\begin{figure*} \centering \includegraphics[width=180mm]{Clouds1-3.png} \caption{Clouds 1-3. The upper panel shows renzograms with the contour at 3.5$\sigma$ and the Arecibo 3.5$^\prime$ beam (equivalent diameter 11.3 kpc) as a green line, overlaid on an RGB image from the SDSS. The middle panel shows the spectra: red dashed lines show the profile window used for computing the H\textsc{i}{} parameters, blue circles show the positions of the W50 measurements and blue squares the W20 values. The lower panel shows the stacked $g,r,i$ bands from the SDSS, with the black circle showing the Arecibo beam size, centred on the coordinates of the H\textsc{i}{} detections.} \label{fig:cloudset1} \end{figure*}

\begin{figure*} \centering \includegraphics[width=180mm]{Clouds4-6.png} \caption{As for figure \ref{fig:cloudset1}, but for clouds 4-6.} \label{fig:cloudset2} \end{figure*}

\begin{figure*} \centering \includegraphics[width=180mm]{LeoGroupRenzo_print.png} \caption{Renzogram of the whole survey region over the velocity range of the M96 subgroup. Each H\textsc{i}{} contour is at 3.5$\sigma$ and coloured according to velocity, overlaid on an RGB image from the SDSS. The Arecibo beam size (3.5$^{\prime}$, 11.3 kpc) is shown as a filled green circle in the lower left.
H\textsc{i}{} clouds are numbered and selected major galaxies are labelled in red. Other objects used for a few comparisons but not examined in detail are highlighted in orange, while other known objects not used in this study (shown here for the sake of completeness) are highlighted in grey.} \label{fig:wholeleo} \end{figure*}

\subsection{Other wavelengths} \label{sec:otherlambda}
The presence of the giant H\textsc{i}{} ring has attracted many searches for optical features of low surface brightness, including dedicated deep optical imaging in \cite{schnleo}, \cite{fergie}, \cite{leocollide} and \cite{watkins}, the latter reaching a sensitivity of $\mu_{B} = 31.0$\,mag\,arcsec$^{-2}$. In addition, \cite{kk} used reprocessed Palomar Sky Survey data to search for low surface brightness galaxies, while \cite{mull} performed sensitivity enhancement on SDSS data to search for ultra-diffuse galaxies. We searched the catalogues from all of these papers. In addition, we stacked the SDSS \textit{g}, \textit{r} and \textit{i} band data and, separately, the Digitized Sky Survey blue and red images, searching the positions of the clouds within the Arecibo beam area. We found no hint of any optical counterpart for clouds 1, 2, 3 and 6. While there is a galaxy (LeoGroup 46) 20$''$ from the H\textsc{i}{} coordinates of cloud 5, its SDSS optical redshift is $>$\,28,000\,km\,s$^{-1}${} and it is therefore unassociated.

Only cloud 4 has a clear detection at optical wavelengths: the galaxy LeG 13 (AGC 202024). We regard the optical counterpart as almost certainly associated with the H\textsc{i}{} emission, even though there is no corroborating optical redshift measurement, as the optical and H\textsc{i}{} coordinates differ by just 13$\arcsec$ (with no other similar objects nearby). Furthermore, the optical component is sufficiently resolved that it appears more likely to be a group member than a more distant background object. In the rest of the AGES sample there are no instances of a single optical counterpart candidate this close to the H\textsc{i}{} coordinates for which the optical redshift, where available, did not closely match the H\textsc{i}{} measurement. The possibility of a coincidental alignment of the optical and H\textsc{i}{} components can therefore be neglected.

Although it is only marginally more massive than the other clouds, cloud 4 is also the only cloud in this sample detected by ALFALFA (\citealt{aaleo}, \citealt{aa100}). The ALFALFA data (\citealt{aaleo}) give a somewhat lower H\textsc{i}{} mass (6.4$\times$10$^{6}$\,M$_{\odot}${}) than the AGES estimate (9.0$\times$10$^{6}$\,M$_{\odot}${}), and the velocity widths estimated by ALFALFA (W50 = 24, W20 = 37 km\,s$^{-1}${}) are also slightly lower than the AGES values (W50 = 31, W20 = 46 km\,s$^{-1}${}). This difference is unsurprising given the differing sensitivity levels. We hereafter use only the more sensitive AGES measurements, which have a high S/N level and a detection that is clear and unambiguous in the spectrum. The ALFALFA SDSS catalogue (\citealt{aasdss}) provides two estimates of the stellar mass of LeG 13 (i.e. cloud 4), which differ by a factor of three; we use the mean value of 6.6$\times$10$^{6}$M$_{\odot}${}.
This gives an MH\textsc{i}{}/M$_{*}$ ratio of 1.4 (no star formation rate estimate is available in the catalogue, as the galaxy is not detected in WISE band W4 and has no UV flux measurement in the NASA-Sloan Atlas, the criteria required for the star formation estimates of \citealt{aasdss}). Given this ratio, and the fact that the H\textsc{i}{} mass is only marginally higher than those of the other clouds, it is not obvious why cloud 4 has an easily detectable optical counterpart while the others are, if perhaps not actually dark, then significantly fainter. However, we caution that LeG 13 is itself only detected at about 6\,$\sigma$ in the SDSS. This implies that the MH\textsc{i}{}/M$_{\ast}$ ratios of the other detections need not be extraordinarily high for their optical counterparts to be undetectable, perhaps only of order a few (contrast this with the more massive H\textsc{i}{} detection of \cite{jozsa}, which has a ratio in excess of 1,000), though the deeper optical imaging of the studies cited above would likely have revealed such objects.

\section{Analysis} \label{sec:analysis}
There are two main possibilities for the nature of the clouds: transient tidal debris, and long-lived galaxies in which the H\textsc{i}{} resides in a dark matter halo and the line width indicates rotation. We follow the analysis presented in \cite{me16} for the clouds discovered in the Virgo cluster.

\subsection{Size of the clouds} \label{sec:size}
Cloud 3 is marginally resolved, with the 3.5$\sigma$ contours spanning approximately two beams. This gives a diameter of $\sim$20 kpc. Although the other clouds are not resolved, the large size of this cloud and its similar mass tentatively suggest that the others are unlikely to be much smaller than the beam size. At a typical mass of 5.0$\times$10$^{6}$\,M$_{\odot}${}, a source filling the 11.3 kpc diameter beam would have a column density of 0.05 M$_{\odot}${}\,pc$^{-2}$ (6.3$\times$10$^{18}$\,cm$^{-2}$). Conversely, to have a column density of 6\,M$_{\odot}${}\,pc$^{-2}$, more typical of dwarf galaxies (\citealt{leroy}), the diameter would have to be around 1 kpc. Such a column density is technically possible, since the circularly-averaged column density may be misleading: even cloud 3 is only marginally resolved, so it is possible that the clouds are much smaller along one axis. But while a higher column density is consistent with the observations, it is unlikely to actually be the case: at densities comparable to dwarf galaxies we would expect to see star formation, and therefore optical counterparts. The true sizes of most of the clouds are therefore likely of the order of a few kpc, especially given the size of cloud 3; supporting evidence for this can also be found in section \ref{sec:env}.

We can more confidently rule out the possibility that the clouds are gravitationally self-bound by their H\textsc{i}{} mass alone. As with the Virgo clouds, this would require an unphysically high column density ($\gg$100\,M$_{\odot}${}\,pc$^{-2}$, a value unknown for H\textsc{i}{}, with a diameter of 0.1 kpc), even given the Leo clouds' lower velocity widths. We would certainly expect to see obvious optical counterparts at densities this high, though only cloud 4 has any direct association with an optically bright galaxy. Note that while cloud 4 does have the highest H\textsc{i}{} mass of the clouds, it is less than a factor of two greater than their median H\textsc{i}{} mass.
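The column-density estimates in this subsection follow from simple geometry. A minimal sketch (the conversion 1\,M$_{\odot}$\,pc$^{-2}$ $\approx$ 1.25$\times$10$^{20}$\,cm$^{-2}$ for H\textsc{i}{} is a standard approximate value):

\begin{verbatim}
import math

MSUN_PC2_TO_CM2 = 1.25e20   # HI: 1 Msun/pc^2 in atoms/cm^2 (approx.)

def column_density(mass_msun, diameter_kpc):
    """Mean HI column density of a face-on circular cloud."""
    radius_pc = 0.5 * diameter_kpc * 1e3
    sigma = mass_msun / (math.pi * radius_pc**2)   # Msun / pc^2
    return sigma, sigma * MSUN_PC2_TO_CM2          # and atoms / cm^2

# A typical cloud (5e6 Msun) filling the 11.3 kpc beam:
sigma, n_hi = column_density(5.0e6, 11.3)
print(f"{sigma:.2f} Msun/pc^2 = {n_hi:.1e} cm^-2")    # 0.05, ~6e18

def diameter_for(mass_msun, sigma_target):
    """Diameter (kpc) needed to reach a target surface density."""
    radius_pc = math.sqrt(mass_msun / (math.pi * sigma_target))
    return 2.0 * radius_pc / 1e3

# Diameter at the ~6 Msun/pc^2 typical of dwarf galaxy discs:
print(f"{diameter_for(5.0e6, 6.0):.1f} kpc")           # ~1 kpc
\end{verbatim}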
\subsection{Expansion time} \label{sec:expand}
If the clouds are unstable debris, their line widths give an indication of their detectable lifetimes. For the high velocity dispersion clouds in Virgo, we found lifetimes so short as to make a tidal debris hypothesis unlikely. In contrast, the Leo clouds have substantially narrower line widths, which do not place such a stringent constraint on their evolution. For cloud 3, which is marginally resolved, to reach its 11 kpc radius at its presumed expansion velocity (W50/2 = 21 km\,s$^{-1}${}) would take 0.5 Gyr. With similar expansion speeds, and assuming sizes half that of cloud 3, the other clouds could therefore have existed for $\sim$250 Myr. Although obviously approximate, these estimates can be considered upper limits on the cloud survival times based on their line widths. We have assumed the clouds began as point sources, which is unphysical: a more realistic initial size could only lead to shorter expansion times. Likewise, we assume that the clouds fill the beam, whereas most of them are probably somewhat smaller (but see section \ref{sec:env} for an important caveat). More significantly, in some cases the W20 value, which is much larger than W50, is a more accurate estimate of the true line width (see section \ref{sec:btfr}), and using it for the expansion velocity would roughly halve the timescale estimates. Even so, the long lifetime estimates are at least consistent with the clouds being tidal debris; we consider this further in section \ref{sec:env}.

\subsection{Dynamical masses}
The optical counterpart of cloud 4, AGC 202024 (LeG 13), allows for different interpretations of the nature of the clouds. One possibility is that the clouds are tidal debris in different stages of evolution, with only cloud 4 as yet having undergone any significant star formation, or perhaps being the only one in which star formation continues (see \citealt{fadug} for a discussion of fading tidal dwarfs). Another option is that most clouds are tidal debris but cloud 4 is an ordinary galaxy, its close position and similar H\textsc{i}{} properties to the other clouds being purely coincidental. It is also possible that the clouds are in fact \textit{all} primordial galaxies (e.g. satellites), with their H\textsc{i}{} embedded in dark matter halos; in this case cloud 4 would simply be significantly optically brighter than the others, though this does not necessarily mean the other clouds are totally dark. In this interpretation, we can use their line widths to estimate their dynamical masses, i.e.:
\[M_{dyn} = \frac{r\,v_{circ}^{2}}{G}. \]
For cloud 3 we may take the radius to be 11 kpc, since it is marginally resolved. With a rotation speed $v_{circ}$ of W50/2 = 21 km\,s$^{-1}${} (note that although the line width is small, the cloud does appear to show a velocity gradient, though we lack sufficient resolution to say whether this results from ordered motions), this gives M$_{dyn}\geq$1$\times$10$^{9}$M$_{\odot}${} (a lower limit, as we do not correct for inclination), and hence a ratio M$_{dyn}$/MH\textsc{i}{}$\geq\,$140. This would be an extremely dark matter-dominated object. For the other clouds, assuming a radius of 5.5 kpc (half the beam size) and a typical line width of 30 km\,s$^{-1}${}, the dynamical masses would still exceed 3$\times$10$^{8}$M$_{\odot}${}, with M$_{dyn}$/MH\textsc{i}{}$\,\gtrsim\,$50. At the smallest plausible size of a radius of 0.5 kpc (see section \ref{sec:size}), M$_{dyn}\,\geq$3$\times$10$^{7}$M$_{\odot}${} and M$_{dyn}$/MH\textsc{i}{}$\,\gtrsim\,$5. Despite their low line widths, the clouds are therefore consistent with requiring a significant quantity of dark matter in order to be stable and long-lived.
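A minimal sketch of the order-of-magnitude arithmetic used in the last two subsections, and of the travel-velocity check applied in section \ref{sec:env} below; the only inputs are the sizes and line widths quoted in the text.

\begin{verbatim}
KPC_KM = 3.086e16   # km per kpc
MYR_S = 3.156e13    # seconds per Myr
G = 6.674e-11       # m^3 kg^-1 s^-2
MSUN = 1.989e30     # kg

def expansion_time_myr(radius_kpc, v_kms):
    """Free-expansion age: time to grow to radius at speed v."""
    return radius_kpc * KPC_KM / v_kms / MYR_S

def m_dyn_msun(radius_kpc, v_circ_kms):
    """Dynamical mass r v^2 / G (no inclination correction)."""
    return radius_kpc * KPC_KM * 1e3 * (v_circ_kms * 1e3) ** 2 / G / MSUN

print(f"cloud 3 age   ~ {expansion_time_myr(11, 21) / 1e3:.1f} Gyr")  # ~0.5
print(f"cloud 3 M_dyn ~ {m_dyn_msun(11, 21):.1e} Msun")               # ~1e9

# Velocity needed to travel 70 kpc from M95/M96 in 250 Myr:
print(f"v ~ {70 * KPC_KM / (250 * MYR_S):.0f} km/s")                  # ~270
\end{verbatim}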
\subsection{Environment of the clouds} \label{sec:env}
\subsubsection{Clouds 1-4}
The proximity of the nearest galaxies is crucial to understanding the origin of the clouds. Clouds 1-4 lie midway between the giant spiral galaxies M95 and M96, which have a projected separation of approximately 0.7 degrees, equivalent to 136 kpc. The clouds are also at similar velocities to the two spirals (M95 is at 779 km\,s$^{-1}${}, M96 at 888 km\,s$^{-1}${}), as shown by the PV diagram in figure \ref{fig:pvclouds}. Cloud 1 is, however, somewhat deviant from the trend in P-V space seen for clouds 1-4.

\begin{figure} \centering \includegraphics[width=80mm]{M95-6PPV.png} \caption{PV diagram of the M95-M96 region with a linear colour stretch, highlighting clouds 1-4.} \label{fig:pvclouds} \end{figure}

In section \ref{sec:expand} we calculated that the clouds could have survived for as long as 250 Myr even if in free expansion. To travel the approximately 70 kpc from the nearest spiral in 250 Myr would require a projected velocity of 270\,km\,s$^{-1}${}. In comparison, the velocity dispersion of the Leo I group as a whole is 175\,km\,s$^{-1}${} (\citealt{aaleo}), with the line-of-sight velocity differences of the clouds from M95/M96 ranging from 6 to 181\,km\,s$^{-1}${}. However, the 70 kpc is measured from the centres of the galaxies. A more realistic value, accounting for the apparent size of the H\textsc{i}{} discs, would be about 40 kpc, reducing the required velocity to about 150\,km\,s$^{-1}${}. Given that the survival timescales are approximate upper limits, and that the true 3D velocity must necessarily be higher than the projected estimate, it seems reasonable to say that the clouds are consistent with being tidal debris, albeit with velocities towards the higher end expected for objects in this environment.

Perhaps more problematic for this interpretation is that neither M96 nor M95 shows much indication of a significant disturbance in its H\textsc{i}{} disc. M95 in particular shows no signs of abnormality: by the criteria established in \cite{me20}, we would identify this galaxy as undisturbed. In contrast, M96 does show evidence of a warp (much easier to see in the online 3D tool than in the renzogram or PV map), as well as an asymmetrical optical appearance, but no larger extension that could be identified as a tail indicating gas removal. One possibility is that the clouds in this region are the brightest peaks in a lower-density bridge of material connecting the two galaxies, as suggested for the M31-M33 clouds (\citealt{wolfe}). If so, the large-scale H\textsc{i}{} feature expected in tidal encounters (\citealt{me16}, \citealt{me17}) would exist but happen to be below our column density sensitivity. We investigate this in figure \ref{fig:m95renzo} (see figure \ref{fig:wholeleo} for a map of the whole survey region).
The figure shows renzogram contours at the 3.5$\sigma$ level from the standard AGES cube, together with thinner contours at 4$\sigma$ from the cube after additional Hanning smoothing (width 15 channels, equivalent to 40\,km\,s$^{-1}${} resolution) along the spectral axis; spectral smoothing has the advantage of increasing sensitivity without degrading spatial resolution. The smoothing reduces the \textit{rms} to 0.3 mJy, about half that of the standard cube. Note that, due to the lower spectral resolution, the sensitivity to the \textit{total} mass present, i.e. column density, is degraded to N$_{H\textsc{i}{}}$ = 3.0$\times$10$^{17}$cm$^{-2}$ at 1$\sigma$; smoothing instead improves the sensitivity to the \textit{average} mass present per unit velocity, which is the relevant factor when searching for diffuse emission.

Interestingly, this extra smoothing does not alter the appearance of M96, M95 or cloud 1, but the other clouds do appear more extended. Cloud 2 shows a tentative connection to M96. Somewhat surprisingly, cloud 4 appears to be marginally resolved, spanning approximately two beams; given that it has an optical counterpart, we would expect it to have a higher column density than the other features and so be more compact, yet this is not the case. This is, however, a further indication that our size estimates in section \ref{sec:size} are unlikely to be wide of the mark. Cloud 3 becomes significantly more extended, spanning four beams (45 kpc) with a clear velocity gradient across the whole feature (the velocity difference between the most extreme contours where the cloud is visible is 67\,km\,s$^{-1}${}), though it does not appear to connect directly with the disc of M96. The extra Hanning smoothing also reveals a tentative additional detection, here labelled as cloud 7. In the standard cube this is just visible, but appears as an extension of cloud 2. Owing to its small size (the contours do not even span a whole Arecibo beam) and low S/N (peak 5.4, integrated 6.25), we treat this detection with caution. If real, it would have an H\textsc{i}{} mass of 3.5$\times$10$^{6}$\,M$_{\odot}${}, a W50 of 52\,km\,s$^{-1}${} and a W20 of 80\,km\,s$^{-1}${}.

Despite the detection of extended features and a possible additional cloud, the extra processing does not reveal any large-scale bridge of material linking M95 and M96: at this sensitivity level, there is no evidence that the clouds are peaks in a larger structure. While cloud 2 does resemble some of the clouds discussed in \cite{olivia} (where features which appeared discrete at lower sensitivity were revealed to be connected to the disc of M33 at higher sensitivity), the smoothing does not suggest that the disc of M96 is likely to be much larger than it appears in the standard cube. This does not rule out a tidal origin for the clouds, but it does not fit the general expectation of tidal stripping. Of course, the elephant in the room is the Leo Ring: M96 is directly connected to the main body of the Ring\footnote{The impression one gets from inspecting the data is that the H\textsc{i}{} of M96 is superimposed on that of the Ring. This is somewhat subjective, but clearly there is nothing like the classical tail and counter-tail structure predicted in most tidal encounters (\citealt{tooms}).}, and it is possible that M95 was also involved in the creation of this feature. Potentially, M96 could be interacting with the other major galaxies in the Ring (e.g. M105, NGC 3384) as well as with M95.
It is interesting to note that clouds 2, 3 and 4 are close in velocity to the warp seen in the H\textsc{i}{} of M96 -- again, in PV space these clouds do resemble a bridge, but this is not seen in PP space. Finally, another scenario is that the clouds are not the result of a direct interaction between M95 and M96, but rather result from whatever process formed the Leo Ring. The appeal of this explanation is that the Leo Ring has a velocity width of $\sim$400\,km\,s$^{-1}${}, making it much easier for the clouds to reach a large separation from their parent galaxies (see also section \ref{sec:clouds5-6}). On the other hand, the kinematics of the clouds do not match the velocity gradient of the Ring at all, being at velocities approximately 150\,km\,s$^{-1}${} lower than the material at the nearest point on the Ring.

\begin{figure*} \centering \includegraphics[width=180mm]{M95-6MapWithLabels.png} \caption{Renzogram of the M95-M96 region. The thick contours are from the standard AGES cube at 3.5$\sigma$. The thin contours are from a cube with additional Hanning smoothing (width 15) at 4$\sigma$. The green circle in the upper right shows the AGES beam. An additional feature tentatively identified as cloud 7 is also shown.} \label{fig:m95renzo} \end{figure*}

Though these caveats are important, overall the evidence would still seem to point towards a tidal interpretation for clouds 1\,--\,4. They lie midway between two galaxies, one of which shows signs of disturbance (albeit more weakly than we would expect), and at a distance consistent with the clouds being unstable debris. The lack of a large stream of H\textsc{i}{} is, in our view, a relatively minor difficulty in this case. Despite the increase in sensitivity from smoothing, it is still possible that such a stream exists but is below our detection threshold, with the clouds being significant density enhancements within it (as in \citealt{wolfe}). In our Virgo simulations (\citealt{me16}, \citealt{me17}), we found that clouds detached from their parent galaxies were usually found near large, easily detectable H\textsc{i}{} tails which were directly connected to those galaxies. Such features are not seen here, unless we count the Ring itself, but the cluster and group dynamics are markedly different. Simulations of low velocity dispersion groups could help address whether (and for how long) we should expect large streams to accompany detached clouds in these environments, and/or whether detached clouds themselves are an expected result of tidal encounters.

\subsection{Clouds 5 and 6} \label{sec:clouds5-6}
While it does appear distinct, cloud 5 is in close proximity to the Leo Ring. This region of the Ring is highly disorganised, and it is possible that cloud 5 is an outlying cloud formed by the same processes that produced the rest of the large-scale H\textsc{i}{} emission in this region. To the north, a much larger and more massive cloud is also detached but appears to be part of the Ring. Whereas that northern feature fits neatly into the ellipsoidal structure of the rest of the H\textsc{i}{}, cloud 5 is not aligned with any other features. Moreover, the additional smoothing described for clouds 1\,--\,4 does not reveal any additional component to cloud 5, and its narrower line width suggests that it is, at least at present, truly detached from the Ring and not an outlying part of the larger object.
Still, the most obvious explanation is that cloud 5 was produced along with the rest of the Ring, a process we will explore in a future work.

Cloud 6 is much more intriguing. While it is possible that it too was formed by the same mechanism that created the Ring (e.g. the collisional origin proposed by \citealt{leocollide}), its distance and kinematics argue against this interpretation. It is 0.65 degrees away from the nearest point on the Ring, about 125 kpc in projection, and does not align with any features in the Ring. The distance to M105, at the approximate centre of the Ring, is about 1.1 degrees, or 210 kpc. At the estimated evolution time of 250 Myr (section \ref{sec:expand}), this would require a projected velocity of 820\,km\,s$^{-1}${}. Almost all of this would have to be across the plane of the sky, since the velocity of the cloud falls well within the velocity range of the Ring material; the Ring itself only spans about 400 km\,s$^{-1}${}. The kinematics of the cloud and the Ring thus do not match: it is very difficult to see what sort of process could produce a gigantic, coherent arc of material and also a single compact cloud ejected at at least twice the velocity of everything else.

While we cannot absolutely exclude a common origin for the Ring and cloud 6, there are other candidate parents for the cloud. Four other nearby H\textsc{i}{}-detected galaxies suggest themselves as potential parents based on their angular separation from cloud 6. Two of these, UGC 05832 (a disturbed spiral) and CGCG 065-090 (an irregular), form a close pair, and we cannot fully distinguish the H\textsc{i}{} detections of the individual galaxies in this case. They are at least 0.47 degrees away in projection (90 kpc) from the cloud, with a velocity separation of 360 km\,s$^{-1}${}. As discussed in the introduction, this velocity difference is much greater than the group velocity dispersion, making an association less likely. Near this pair of galaxies is a third, much more massive spiral, NGC 3338 (note that while the pair are not visible in figure \ref{fig:wholeleo}, as they are outside the velocity range shown, part of NGC 3338 is just visible). This galaxy is rather further away, at 0.77 degrees (147 kpc), and even more separated in velocity (440\,km\,s$^{-1}${}) from the cloud. While the spiral/irregular pair do show disturbances in their optical morphology, none of these three galaxies shows significantly extended H\textsc{i}{} emission: we can easily distinguish the H\textsc{i}{} of the pair from that of the giant spiral, and there is no hint of any extension towards cloud 6. In addition, NED gives a mean redshift-independent distance estimate to NGC 3338 of 22.6 Mpc, twice the distance to the M96 subgroup. Likewise, the fact that \textit{all} of the clouds are found within a velocity range of 100 km\,s$^{-1}${}, while UGC 05832 and CGCG 065-090 are offset by 360 km\,s$^{-1}${}, argues against their association with any of the clouds. Again, given the low velocity dispersion of the Leo group, it seems much more probable that all the clouds are at the group distance and are unrelated to any of the background galaxies. The fourth candidate is the spiral NGC 3377A, rather uniform in morphology, which is 0.86 degrees (164 kpc) away in projection and separated in velocity by 287 km\,s$^{-1}${} from the cloud.
As with the other candidates, this galaxy shows no hint of any H\textsc{i}{} extension, and its velocity separation is quite high compared to the dispersion of the M96 subgroup. Frustratingly, none of these candidates can be definitively assigned or eliminated as the parent of cloud 6. The H\textsc{i}{} masses of all four candidates (\citealt{aaleo}) are two to four orders of magnitude greater than those of the clouds, implying that we should easily be able to detect any disturbance in the parent galaxy sufficient to account for the clouds. The low mass of the clouds also means that estimating the H\textsc{i}{} deficiencies of the candidate parent galaxies would be uninformative, since the clouds would constitute only a small fraction of any missing gas (see \citealt{me17} for a discussion of the well-known, more massive example of VIRGOHI21 and its parent galaxy NGC 4254).

\subsection{Comparisons with other clouds} \label{sec:others}
In \cite{me16} we presented a table of isolated dark clouds with possible primordial origins. Very few known clouds are similar to the Leo clouds; we briefly review those with notably similar features below. These were selected on the basis of being clearly detached from their parent galaxies (which are usually difficult to identify) and having no large-scale extended H\textsc{i}{} emission (such as tails) visible in their vicinity.

\subsubsection{GBT1355+5439}
Discovered by \cite{mihosm101}, with follow-up observations described in \cite{oo13}, this cloud is located close to M101 in projection. Due to its low systemic velocity (210 km\,s$^{-1}${}), its distance is highly ambiguous. If the object is in the Local Group, \cite{oo13} propose that it could be a dark matter minihalo, with a size of about 1 kpc and MH\textsc{i}{} $\sim$1$\times$10$^{5}$\,M$_{\odot}${}. At the distance of M101, it would be 150 kpc from M101 with an H\textsc{i}{} mass slightly larger than those of the Leo clouds, at 1$\times$10$^{7}$M$_{\odot}${}. The W20 is given in \cite{mihosm101} as 41 km\,s$^{-1}${}; W50 is not reported. \cite{oo13} concluded that there is no clear explanation for the object, with all of the proposed interpretations (galactic mini-halo, dark galaxy, tidal debris) having advantages and disadvantages.

\subsubsection{GEMS\_N3783\_2}
Discovered by \cite{kilborn} in a survey of the NGC 3783 group, this cloud is considerably more massive than the Leo clouds, at 3.8$\times$10$^{8}$M$_{\odot}${}. The W50 is 106 km\,s$^{-1}${} and the W20 is 116 km\,s$^{-1}${}. Although the nearest spiral does show an extended, distorted H\textsc{i}{} disc that might indicate an interaction, it is 450 kpc away from the H\textsc{i}{} cloud and does not show any long H\textsc{i}{} tails. \cite{kilborn} do not rule out a dark galaxy interpretation, but consider a tidal origin more probable.

\subsubsection{NGC 1395 clouds}
\cite{wong} discovered two clouds (C1 and C2) in the vicinity of NGC 1395, of mass 2\,--\,3$\times$10$^{8}$M$_{\odot}${}, again considerably more massive than the Leo clouds. These clouds appear to be of similar velocity width to the Leo clouds, with W50 measured at 17\,--\,41 km\,s$^{-1}${} from ATCA and Parkes respectively (W20 is not reported; from the figures, it may be substantially higher but affected by noise).
Unfortunately one cloud is projected in front of the elliptical galaxy NGC 1403, hampering the identification of any optical counterparts (NGC 1403 itself almost certainly has no association with the cloud, since it is at a systemic velocity more than 2,000 km\,s$^{-1}${} higher than the H\textsc{i}{} detection). The clouds range from 240\,--\,360 kpc in projected distance from NGC 1395. In marked contrast to the Leo clouds, \cite{wong} note that the line width and size of these clouds are consistent with being gravitationally self-bound by the mass of the H\textsc{i}{} alone. Their favoured hypothesis is that the clouds could be optically dim or dark tidal dwarf galaxies, though the lack of any observed tidal tails in the region is arguably problematic for this interpretation. \subsubsection{SECCO 1} SECCO 1, also known as AGC 226067, is discussed in \cite{adamsclouds}, \cite{bellazz} and \cite{secco}. The object is an optically faint (but not dark) H\textsc{i}{} cloud in the direction of the Virgo cluster. Candidate parent galaxies are at least 250 kpc away in projection, and the stellar component appears to lack an old population. Its H\textsc{i}{} mass is a bit larger than that of the Leo clouds at 1.5$\times$10$^{7}$M$_{\odot}${} (at the Virgo cluster distance), as is its W50 line width at 54 km\,s$^{-1}${}. Simulations in \cite{bellazz} show that a low line width cloud could survive for over 1 Gyr moving through the intracluster medium, certainly long enough for the cloud to have reached its separation from its candidate parent galaxies. What remains unclear is why, if there is really no old stellar population, star formation should apparently have begun only very recently despite no obvious source of perturbation. \subsection{The baryonic Tully-Fisher relation} \label{sec:btfr} One of the most intriguing features of the AGES Virgo clouds is their offset from the baryonic Tully-Fisher relation (BTFR). Most normal galaxies appear to show a tight correlation between their rotation velocity and baryonic mass (\citealt{btfr}). While some ultra-diffuse galaxies have been found to have velocity widths well below the expectations from the BTFR given their baryonic mass (\citealt{pina}\footnote{While not directly comparable in terms of the BTFR, systems with a similar apparent deficit of dark matter are discussed in, for example, \cite{vd} and \cite{guo}.}), the AGES Virgo clouds, with their high line widths, show the opposite behaviour. In figure \ref{fig:btfr} we compare the AGES Virgo and Leo clouds on the BTFR, along with a selection of other optically dark clouds from other surveys. We also include other features found in Leo: the tentative feature dubbed cloud 7 (see section \ref{sec:env}), the faint galaxy Leo Dw A (figure \ref{fig:wholeleo}, \citealt{schnleo}), the cloud adjacent to NGC 3384 (figure \ref{fig:wholeleo}; note that the measurements are uncertain due to the close proximity of material from the Ring), and a `cloud' labelled 5R (again see figure \ref{fig:wholeleo}) which is likely part of the Ring itself. \begin{figure*} \centering \includegraphics[width=160mm]{BTFR.png} \caption{Baryonic Tully-Fisher relation for normal galaxies (black points, from the AGES Virgo background fields) and a selection of optically dark H\textsc{i}{} clouds, using the W50 (left) and W20 (right) estimates for the line width. The line widths are corrected for inclination for the galaxies but not the H\textsc{i}{} clouds.
The baryonic mass for the clouds is computed using their H\textsc{i}{} mass multiplied by a factor of 1.36 to account for the presence of helium. The dashed line is the best fit to the optically bright galaxies. Our main sample of Leo clouds is shown with red squares while additional objects in Leo are shown as open red squares (highlighted with orange circles in figure \ref{fig:wholeleo}; cloud 7 is shown in figure \ref{fig:m95renzo}). Clouds from the Virgo surveys of AGES are shown with open blue circles while other clouds in the Virgo cluster are shown as labelled open blue squares. Other clouds in green are described in section \ref{sec:others}. The solid line in the left panel shows the completeness line for an integrated S/N ratio of 6.5.} \label{fig:btfr} \end{figure*} The results vary depending on whether we use the W50 or W20 estimators for the line width. Before discussing this in relation to the clouds, it is important to note that this is also true for normal galaxies, with three optically bright galaxies having apparently anomalously low circular velocities when using W50 but normal values (i.e. lying well within the general scatter) when using W20. This is not because they are similar to the \cite{pina} objects, but only because of the profile shape: a high S/N but narrow spike can lead to an erroneously low value for W50, even when the W20 estimator is itself still measured at a high S/N level. We consider the W20 estimate to be generally more reliable for the Virgo clouds, and the deviation towards high velocities is seen even using W50 for some of them. Neither W50 nor W20, however, is an infallible measure of the true width of the line. While W20 may give overestimates due to being measured at a lower S/N level, W50 can give underestimates despite being measured at a higher S/N level (possible improvements to measuring the velocity width are described in \citealt{yu20} and \citealt{yu22}). Some of the Leo clouds appear to follow the BTFR for bright galaxies. Ostensibly, there is no particular reason to expect tidal debris to follow the BTFR: non-rotating, unbound debris (regardless of its formation mechanism, tidal or otherwise) is under no constraint to follow the same relation as rotating, stable galaxies. In the case of the Leo clouds, most seem likely to have formed as a result of interactions, based on their proximity to galaxies and/or the Leo Ring, with the notable exception of cloud 6. Yet most of the clouds in our Leo sample do seem to follow (or are found close to) the BTFR for normal galaxies -- including features which seem to be part of the Ring (the single exception is the tentative detection of cloud 7, which we discuss further below). An important caveat is that, using W50, most of the clouds are found at higher velocity widths than expected, with only cloud 5 at lower velocities (though clouds 4 and 5R are very close) -- if the clouds were really following the relation, we might have expected about half to have higher and half to have lower widths than the general relation. There is no obvious reason why the clouds should follow (or be close to) the BTFR for bright galaxies, as it cannot be a selection effect.
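To make the numbers in the following argument explicit, the H\textsc{i}{} mass detectable within a single pointing is bounded by the maximum column density and the beam area; a rough sketch, in which the $\approx$3.5 arcmin Arecibo beam width is our assumption:
$$ M_{\rm max} \sim \Sigma_{\rm max}\,\pi\left(\frac{D\,\theta}{2}\right)^{2} \approx 10\,{\rm M_{\odot}\,pc^{-2}} \times \pi \times \left(5.6\,{\rm kpc}\right)^{2} \approx 1\times10^{9}\,{\rm M_{\odot}}. $$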
Indeed, given the typical maximum column density of H\textsc{i}{} of $\sim$10\,M$_{\odot}$\,pc$^{-2}$ (\citealt{leroy}) and the beam size, we could expect to detect up to 10$^{9}$\,M$_{\odot}${} of H\textsc{i}{} within the Arecibo beam at the 11.1 Mpc distance of the group -- about two orders of magnitude greater than the BTFR implies for clouds of these line widths. Moreover, as discussed, dark clouds are known elsewhere which are considerably more massive than the Leo clouds and which do show a strong deviation from the BTFR. Conversely, we can also consider how large the line widths of the clouds could become before rendering them undetectable, were their masses kept constant. Although the clouds have narrow line widths, they are reasonably bright. By the prescription of \cite{aasn} we should be complete to sources of these typical masses (equivalently, $\sim$\,0.2 Jy\,km\,s$^{-1}${} flux) for velocity widths of up to 130 km\,s$^{-1}${}, far wider than their actual values. In short, there is nothing prohibiting the existence of much more massive clouds given their observed $\sim$30\,km\,s$^{-1}${} widths, nor any reason they could not be detected at much higher line widths for the same total flux: they could be detected over a much larger part of the BTFR parameter space than the region in which they are actually found. Examining the W20 estimator complicates the situation. Using W50, most Leo clouds are found at somewhat higher velocities than the fit for optically bright galaxies, even if only slightly. For W20, most lie closer to the fit, but there are some stronger deviations as well. With W50, of our Leo objects only cloud 7 shows an appreciable deviation from the standard BTFR, whereas with W20, a similar deviation is also seen for cloud 1 and possibly the cloud adjacent to NGC 3384. The faint nature of cloud 7 implies a need for observations of high sensitivity, and especially of higher spatial resolution than AGES, to determine if this is really a discrete object or only part of cloud 2; similarly, accurate measurements of the NGC 3384 cloud are confused by intervening material from the Ring -- it is likely that the W20 measurement is an overestimate in this case. Intriguingly, the W20 measurement of cloud 1 appears to be accurate -- indeed, if anything it seems more likely that the W50 is an underestimate. We note that, unlike the other clouds, its spectral profile is clearly asymmetric (figure \ref{fig:cloudset1}), suggesting that it might have multiple components and reinforcing the need for higher resolution observations. Based on their proximity to other galaxies and H\textsc{i}{} structures, all of the Leo objects with excessively high W20 values are likely tidal in origin. An important caveat is that the results discussed above are sensitive to the best fit line used for the BTFR, which in figure \ref{fig:btfr} is heavily dominated by optically bright galaxies that are more than two orders of magnitude brighter than the optically dark clouds. For this figure we used the same basic methodology as in \cite{me13}: the baryonic mass is the combination of the measured MH\textsc{i}{} (with an assumed correction for helium) plus the stellar mass, with the line width corrected only for inclination. This approach has the advantage of simplicity and relies entirely on our own measurements. A more sophisticated approach necessitates greater complexity in correcting the measurements, but allows us to compare our results with the gas-dominated, lower-mass sample presented in \cite{btfr12}.
This should provide a more robust comparison with the Leo clouds, in particular a quantitative assessment of how well they agree with or deviate from the BTFR for normal galaxies given the general scatter. Importantly, \cite{btfr12} used resolved rotation curves to estimate the circular velocity, which gives much more accurate results than line widths. Note that the corrections applied to our own data are extensive and fully described in the appendix. The result is shown in figure \ref{fig:btfrmcg}. \begin{figure} \centering \includegraphics[width=85mm]{BTFRMcGaugh.png} \caption{Baryonic Tully-Fisher relation using more sophisticated corrections for velocity width (derived for our sample from W20 and corrected for inclination, spectral resolution, and cosmological expansion) and stellar mass, following the prescriptions of \cite{btfr12} and \cite{spring} -- for full details see the appendix. The colour scheme is as for figure \ref{fig:btfr}, except that the open circles show the sample of \cite{btfr12} (his table 1). The black solid line shows the best fit relation according to \cite{btfr12}, with the dashed and dotted lines respectively showing the 1$\sigma$ (0.24 dex in mass) and 2$\sigma$ scatter, again according to \cite{btfr12}. Clouds from non-AGES samples are not included as we lack the data needed to make the appropriate corrections to the velocity width.} \label{fig:btfrmcg} \end{figure} Figure \ref{fig:btfrmcg} supports the basic result of the right-hand panel of figure \ref{fig:btfr}. Clouds 1, 7 and the NGC 3384 feature clearly deviate from the BTFR in the sense of having an unexpectedly high velocity width, while cloud 5 may have a width somewhat lower than expectations. Clouds 2, 4 and 6, as well as 5R and Leo Dw A, all lie well within the general scatter. Cloud 3 is a marginal case, having a velocity width slightly exceeding the 2$\sigma$ scatter. Overall, the main result appears robust: some clouds do deviate from the BTFR, but others do not. We note the significant caveat that the exact form of the BTFR depends strongly on the corrections used, especially for the stellar mass, with \cite{manybtfr, btfr12} describing how the slope (the exponent in the power-law fit) can vary from 3 to 4; readers are strongly encouraged to consult the appendix for details. While these deviations from the BTFR might seem to indicate that these clouds do indeed have a non-galaxian nature (i.e. they are transient and unstable), several difficulties arise. This does not explain why the other objects stubbornly remain within the observed scatter of the BTFR, including cloud 4 and arguably cloud 3, which are both close in projection to clouds 1 and 2 (with significant caveats: cloud 3's extended nature may mean its velocity width has been underestimated, while cloud 4 has an optical counterpart so may be an ordinary galaxy). Moreover, the similarly excessive velocity widths of the Virgo clouds have proved difficult to explain: in \cite{me16}, we found that it was very difficult to reproduce the high velocity widths of the Virgo clouds if they were produced as tidal debris\footnote{Essentially, the higher the line width, the faster the dispersion and so the faster the clouds reach an undetectably low S/N ratio -- the more excessive the velocity widths, the shorter the lifetime of any such objects.
See \cite{me17} for a full discussion.}, whereas if they were faint or dark galaxies their deviation from the BTFR could be easily explained and was robust to tidal interactions with other galaxies. However, as discussed, the dynamics of tidal encounters in a cluster will be significantly different from those in a group environment, again indicating a strong need to simulate a more Leo-like region before any firm conclusions can be drawn. For the non-AGES clouds, even more caution is needed since the values for W50 and W20 are not reported homogeneously (they are shown only in figure \ref{fig:btfr} and not in figure \ref{fig:btfrmcg}, as we lack the data for the corrections necessary to estimate their velocity widths). Both of the \cite{wong} clouds appear to have lower than expected velocity widths given their gas mass, even using the line widths without any additional corrections: \cite{wong} note that while the clouds do appear to be rotating, W50 likely overestimates the rotation velocity, with a significant component of the line width originating from velocity dispersion. The other two clouds (GBT1355+5439 and GEMS\_N3783\_2), though differing in mass by almost two orders of magnitude, both lie on the same BTFR as normal galaxies. The amount of intrinsic scatter in the BTFR is controversial, with \cite{pina} arguing that some galaxies are strongly deviant (see also \citealt{pina22}), whereas \cite{lelli} say the intrinsic scatter is below Standard Model expectations, and \cite{btfr12} finds there is no scatter unexplained by observational errors. If the latter is correct, then it seems remarkable that this is even true for some objects which are likely debris, where the usual dynamical relations should not apply. Paradoxically, it seems that some objects for which the evidence generally indicates a tidal origin (e.g. clouds 1\,--\,5 in Leo) actually lie on the standard BTFR, while some for which the evidence is -- arguably -- against a tidal origin (e.g. the AGES Virgo clouds, the \citealt{pina} UDGs) clearly deviate from it. This is exactly opposite to what we might expect. \section{Summary and discussion} \label{sec:findings} We reported the discovery of at least five optically undetected H\textsc{i}{} clouds in the M96 subgroup. One of the objects (cloud 5) is close to the giant Leo Ring, and most likely part of that structure -- we leave analysis of this to a future work. Three of the optically undetected H\textsc{i}{} clouds (as well as an additional, more tentative detection not included in our main sample) lie between M96 and M95, suggesting a tidal origin. In position-velocity space these clouds appear to be part of a bridge connecting the two spiral galaxies, but in position-position space this structure is not seen. No elongated structures directly attached to M95 are visible, though M96 is connected to the Ring. Very near to these three clouds, and at a similar velocity, the dwarf galaxy LeG 13 (cloud 4 in our catalogue) is detected, with similar H\textsc{i}{} properties to the dark clouds but with a clear optical counterpart. Elsewhere in the group, another object (cloud 6) is seen with no apparent association to the Ring or any of the galaxies present. None of its nearest H\textsc{i}{}-detected neighbours show any signs of disturbances in their gas discs.
With the nearby galaxies having H\textsc{i}{} masses two to four orders of magnitude greater than the clouds, which are themselves readily detected, even a small perturbation to their H\textsc{i}{} ought to be easily detectable with AGES -- and their distances in any case appear to place them well outside the Leo group. This makes the parent galaxy of cloud 6 extremely challenging to identify. All of the clouds (including LeG 13) possess comparable H\textsc{i}{} masses and velocity widths, but in other respects the clouds are dissimilar. While clouds 1\,--\,4 are all between M95 and M96, cloud 1 is offset in velocity from the others and has a higher velocity width. Cloud 2 may be connected to the disc of M96, while cloud 3 appears significantly extended when spectral smoothing is applied. Only cloud 4, LeG 13, shows an optical counterpart. Cloud 5 is near the Leo Ring, while cloud 6 is isolated. Given this, it seems probable that the clouds formed through a variety of mechanisms, though a full exploration of this will require detailed numerical simulations. Overall, clouds 1\,--\,4 seem most likely to be tidal, but it is not at all obvious why cloud 4 alone is optically bright while its neighbours are undetected (it is tempting to suggest that, as it has the highest H\textsc{i}{} mass, it might have the highest column density, but its mass is only marginally greater than the others and it shows some hints of being more extended as well). Cloud 6 is least likely to be tidal given its separation from its nearest galaxies, but the low line widths of all the clouds would allow them to survive travel across distances exceeding 100 kpc at velocities compatible with the group velocity dispersion (see the estimate below). The main difficulty for the tidal debris interpretation is the lack of any elongated tails found around any of the galaxies in this region, the Leo Ring itself notwithstanding. If the clouds were produced by the same process that created the Ring, then they provide additional constraints for any future modelling efforts -- in particular, it is not obvious how cloud 6 could be explained by a simple galaxy-galaxy collision model. A further oddity for the tidal debris scenario is that most clouds seem to obey the baryonic Tully-Fisher relation seen for normal, stable galaxies. This is unexpected, as the clouds are bright enough that they could be detected at much higher line widths (with the caveat that they would have shorter detectable lifetimes) and large enough that they could have much higher gas densities before star formation would be expected. In essence, neither their baryonic masses nor their velocity widths are constrained by selection effects to obey the BTFR, but this is where the clouds are found nonetheless. Ironically, it may be easier to explain the clouds with the excessive velocity widths than those which have the same widths as ordinary galaxies. Unlike the case of the Virgo clouds, which remain poorly explained as tidal debris, most of the Leo clouds are relatively close to major galaxies. This means they could indeed be unstable, transient features, as they would not have to survive for very long to reach their current separation from their parents. Cloud 6 is problematic in this regard due to its isolation, but even here its line width is sufficiently low as to allow for a slow dispersal.
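As a rough consistency check on the travel argument (taking an illustrative transverse speed of $\sim$100 km\,s$^{-1}${}, of order the velocity range spanned by the clouds), the time needed to cross the relevant separations is
$$ t \sim \frac{100\,{\rm kpc}}{100\,{\rm km\,s^{-1}}} \approx 1\,{\rm Gyr}, $$
comparable to the $>$1 Gyr survival times found for low line width clouds in the simulations of \cite{bellazz} discussed in section \ref{sec:others}.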
Applying heavy spectral smoothing to the data revealed that some of the clouds are more spatially extended than in the standard cube, but we did not find any evidence that they are connected to each other -- there does not appear to be any large-scale bridge connecting them, even for those clouds which are close in projection to each other. Although cloud 1 shows evidence of multiple kinematic components, spectral smoothing did not reveal any significant extensions to this cloud. While we cannot absolutely rule out that the clouds might be part of a larger structure, in which case their measurements with respect to the BTFR might have to be revised, we see no evidence for this in the data. To summarise, there are a variety of possible explanations for the clouds, none of which can be definitively ruled out, but none of which is entirely satisfying either: \begin{itemize} \item All the clouds are tidal debris: some appear too isolated for this, and none of the nearest galaxies show the expected long H\textsc{i}{} extensions. \item All the clouds are galaxian in nature, though some are optically dark or very dim: there is no obvious reason why one cloud should be optically much brighter than the rest. \item Cloud 4, which alone has an optical counterpart, is a galaxy, while the rest are tidal debris: it seems an unlikely coincidence that cloud 4 should have such a qualitatively different nature to three other clouds in its vicinity with otherwise very similar quantitative H\textsc{i}{} properties. \item If the clouds are tidal debris, some may have arisen in an interaction between M95 and M96 while some were produced by whatever process gave rise to the Leo Ring: the kinematics of the clouds do not match the velocity gradient of the Ring, and though M96 does show a warp in its H\textsc{i}{} disc, neither it nor M95 shows the larger-scale H\textsc{i}{} emission predicted in numerical simulations of the formation of tidal debris. \end{itemize} In short, the clouds are consistent with both tidal and galaxian interpretations, and neither scenario can be definitively ruled out. We are currently examining a variety of formation scenarios for the Leo Ring using numerical simulations, which may eventually shed light on these smaller but intriguing clouds. \section*{Acknowledgments} This work was supported by the Czech Ministry of Education, Youth and Sports from the large Infrastructures for Research, Experimental Development and Innovations project LM 2015067, the Czech Science Foundation grant CSF 19-18647S, and the institutional project RVO 67985815. RM acknowledges support of the NRAO. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work is based on observations collected at Arecibo Observatory. The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. M\'{e}ndez-Universidad Metropolitana, and the Universities Space Research Association. The SOFIA Science Center is operated by the Universities Space Research Association under NASA contract NNA17BF53C. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has made use of the SDSS. Funding for the SDSS and SDSS-II has been provided by the Alfred P.
Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This work has made use of the Digitized Sky Survey. The Digitized Sky Survey was produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions.
{ "timestamp": "2022-09-23T02:15:11", "yymm": "2209", "arxiv_id": "2209.10994", "language": "en", "url": "https://arxiv.org/abs/2209.10994" }
\section{Introduction}\label{sec1} The peptide bond, \ce{-C(O)NH-}, found in amides connects amino acids into peptides and is therefore of paramount importance to present-day life on Earth. Unsurprisingly, how, when and where peptide bond formation arose is of immediate interest in prebiotic astrochemistry, which tackles that challenging question, the origins of life \cite{kolesnikova, ligterink}. The simplest amides, formamide, \ce{HC(O)NH2}, and acetamide, \ce{CH3C(O)NH2}, are common constituents of star-forming regions in our galaxy \cite{adande,mcguire}, but apparently propionamide, \ce{CH3CH2C(O)NH2}, is not \cite{caden}. Adande et al. \cite{adande} have shown, based on their observations of formamide towards star-forming regions of dense molecular clouds, that the compound could have been brought to Earth by exogenous delivery in substantial amounts of $\sim 0.18$ mmol m$^{-2}$ in a single impact. The formation routes to formamide are still unclear. Some have suggested gas-phase pathways via formaldehyde and amidogen \cite{barone,skouteris}: $$\ce{H2CO + NH2^. -> HCONH2 + H^.}$$ a route disputed by Song and K\"{a}stner \cite{song} and convincingly refuted by Douglas et al. \cite{douglas}. Others have suggested surface reactions \cite{fedoseev}, via the hydrogenation of isocyanic acid, \ce{HNCO}, or on carbon monoxide--ammonia ices \cite{jones}: \begin{eqnarray} \ce{NH3 -> NH2^. + H^.} \\ \ce{H^. + CO -> HCO^.}\\ \ce{HCO^. + NH2^. -> HC(O)NH2} \end{eqnarray} and, more recently, metal-ion mediated substitution reactions \cite{thripati}: $$ \ce{HC(O)X + NH3 ->[M+] HC(O)NH2 + HX}$$ where \ce{M = Na+, K+, Mg+, Mg++, Al+} and X = \ce{H, OH, CH2OH}; but the evidence for all of these is underwhelming. A comprehensive summary has been given recently by Chuang et al. \cite{chuang} during the course of their laboratory work on the formation of formamide in water- and carbon monoxide-rich water ices with ammonia as the substrate. VUV irradiation of water-rich and CO-rich ammonia ices at 10~K with compositions \ce{H2O{:}CO{:}NH3}=10:5:1, \ce{CO{:}NH3}=4:1 and \ce{CO{:}NH3}=0.6:1 has shown that formamide is preferentially formed, although mechanistically the situation is complicated, with no clear indication as to the actual formation routes \cite{chuang}. Indirect evidence on the formation routes of formamide, based on the stratified distribution of the molecules \ce{HNCO} and \ce{H2CO}, putative parents of \ce{HCONH2}, in the atmosphere of the HH 212 protostellar disk, appears to rule out \ce{HNCO} as a parent \cite{codella}. Studies of comets have indicated that the early Solar Nebula had nitriles (cyanides) such as \ce{HCN} and \ce{CH3CN} in abundance \cite{cordiner,loomis}. It is currently assumed that reactions in the bulk or on the surface of water-ice grains are the most likely formation routes for complex organic molecules, which are then liberated into the gas-phase by UV irradiation, electron and cosmic ray bombardment, via thermal shocks or by grain--grain collisions \cite{colzi,kalv,mini}. In a computational study of the direct reaction \ce{HC#N + H2O -> HC(OH)=NH}, the presence of a second \ce{H2O}, whether as a catalyst, a spectator or a reactant, was not sufficient to reduce the high barriers encountered, thereby ruling out the possibility of this reaction occurring in cold molecular clouds \cite{darla}. The most pertinent previous work simulated a cluster of 33 \ce{H2O} molecules and showed that \ce{HC#N} cannot react to form \ce{HC(O)NH2} under interstellar ice conditions because of large energy barriers.
Any reactivity that was feasible proceeded through the \ce{CN^.} radical \cite{rimola}. UV photolysis of water--methyl cyanide ices at 20~K (\ce{H2O$:$CH3CN} = 20:1) does give rise to the formation of \ce{CH3C(O)NH2} and its isomer N-methyl formamide, \ce{CH3NHCHO}, but to many other products as well \cite{bulak}. A consideration of the various suggested reactions in the literature led one of us to suggest that \ce{H3O+}-induced water addition to nitriles on water-ice grains was likely to provide the most probable route \cite{simmie}. It is known that \ce{H3O+} exists in the ISM and even in other galaxies, that it reacts in water-ices, and that it has the potential to drive subsequent reaction \cite{wootten,tak,moon,lee}. Woon has very recently reviewed what is known about cation--ice reactions from quantum chemical cluster studies, highlighting the novel and more efficient pathways, vis-\`a-vis the gas-phase, that \ce{HCO+}, \ce{CH3+} and \ce{C+} (but not \ce{H3O+}) undergo, and has appealed for experimental confirmation \cite{woon}. Laboratory experiments have shown that bombardment of water ice samples on a copper substrate at 10~K yields a number of secondary ions, including $ \ce{(H2O)_n.H+}$ with $n=1\to 8$, although these ions are perhaps more accurately denoted $\ce{(H2O)_{n-1}.H3O+}$ \cite{martinez}. The production efficiency is much lower when crystalline ice is bombarded, in comparison to amorphous ice. The first step in the proposed addition of water is the protonation of the nitrile at nitrogen to form \ce{RC#NH+}, and in the case of HCN to form imino methylium, \ce{HC#NH+}; this species has been widely detected in star-forming regions \cite{ziurys, schilke, quenard} and in Titan's atmosphere \cite{titan}. Although it had been characterised as a precursor of \ce{HC#N}, in the most recent observations Fontani et al. \cite{fontani} show that it is formed from \ce{HC#N} or \ce{HC#N+}. Fundamental laboratory work to characterise the spectroscopic parameters of \ce{CH3C#NH+} has been carried out, but interstellar searches have not so far been successful \cite{mari}. A primary consideration for suggesting acidified amorphous water-ice is the high mobility of the proton through the lattice; the transfer mechanism, via Grotthuss hops, takes place on a sub-picosecond time scale and with barriers of $\approx 1$ kJ mol$^{-1}$ --- these effectively increase the ``collision rate'', or encounter frequency, between reactant and \ce{H3O+} \cite{lee,hops}. The hydronium-induced addition of water was seen as a two-step process, the first step forming hydroxy imines or imidic acids, \ce{RC(OH)=NH}, and the second converting these to amides, \ce{RC(O)NH2}; the latter reaction can occur either by intramolecular hydrogen-transfer via quantum-mechanical tunnelling or by the further protonation of the nitrogen atom followed by deprotonation from the O-atom. The actual formation of peptides as a by-product was observed \cite{krasno} after the deposition of C atoms onto a CO + \ce{NH3} ice at 10 K and subsequent warming to 300~K. The authors argue that the initial reaction product is aminoketene, \ce{H2NCH=C=O}, which polymerises on warming, yielding \ce{(-CH2-C(O)-NH-)$_n$} chains. While the experiments are compelling, the conditions are somewhat artificial since they consider pure CO + \ce{NH3} ices with substantial quantities of carbon atoms.
The initial step \ce{H3N + C($^3\mathrm{P}_0$) -> H3N\bond{~}C} is highly exothermic, $\Delta _rH=-103$ kJ mol$^{-1}$; this is followed by a 1,2-H-transfer to \ce{H2N-\mbox{\"{C}}H} and finally by reaction with CO to give \ce{H2N-CH=C=O}. Interestingly, Canepa \cite{canepa}, in considering the survival rates of glycine, \ce{NH2CH2C(O)OH}, embedded in micrometeorites undergoing atmospheric re-entry, has shown that aminoketene, the product of dehydration, would also survive. In this paper we focus on a solid-state chemical formation mechanism for \ce{R-C(OH)=NH} and subsequently \ce{R-C(O)NH2}. We perform electronic structure investigations of the energetics, and also calculations of the thermal rate constants, of this reaction mechanism. This information can then be used in astrochemical modelling and may prompt experimental laboratory confirmation. \begin{figure}[h] \centering \includegraphics[width=\textwidth,scale=0.2]{Water-HCN} \caption{HCN embedded in acidified water} \label{water-hcn} \end{figure} \section{Methods}\label{sec11} Calculations were performed with the application Gaussian \cite{gauss} and used the dispersion-corrected, long-range-corrected hybrid density functional $\omega$B97X-D together with the triple-$\zeta$ basis set with added polarization and diffuse functions, 6-311++G(d,p) \cite{wB97}; a factor of 0.96 was applied to scale the zero-point energies. A system of thirty-two water molecules and one hydronium ion, \ce{H3O+}, with a total `volume' of $\approx 2,600$ \AA$^3$, was chosen, together with one reactant, either \ce{HCN} or \ce{CH3CN}. This choice represents a compromise between realistic ISM concentrations and computational effort (Fig.~\ref{water-hcn}). All structures were fully optimized and the harmonic frequencies computed using DFT. Frequency calculations were performed in order to verify that all intermediates are true minima on the potential energy surface, and that all transition states exhibit a single imaginary frequency. We study all species in the reaction mechanism with the unrestricted $\omega$B97XD/6-311$++$G(d,p) model chemistry. Gaussian 16 automatically employs an ultrafine integration grid in the DFT calculations in order to improve the accuracy of the results; the grid greatly enhances the accuracy at reasonable additional cost. The reaction paths are computed using the intrinsic reaction coordinate (IRC) methodology \cite{Hratchian2004,Hratchian2005} to confirm the identities of the reactants and products for every transition state. IRC calculations require the initial force constants of the transition state. The first and second order energy derivatives are then obtained to calculate the projected harmonic vibrational frequencies along each reaction path. The minimum energy paths (MEPs) were computed using the Page--McIver integrator with a gradient step size of 0.1 $a_0$ \cite{page}. Small-curvature quantum mechanical and quantised-reactant-states tunnelling calculations \cite{sct,qrc} employed the PILGRIM application \cite{pil} for the computation of rate constants via transition state theory (TST) and variational TST (VTST), necessitating calculations along the minimum energy path.
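For orientation, the TST rate constants computed in this way take the standard form (notation ours; $\kappa$ is the tunnelling transmission coefficient, here from the small-curvature approximation): $$ k(T) = \kappa(T)\,\frac{k_{\rm B}T}{h}\,\frac{Q^{\ddagger}(T)}{Q_{\rm R}(T)}\,\exp\left(-\frac{\Delta E^{\ddagger}}{k_{\rm B}T}\right), $$ where $Q^{\ddagger}$ and $Q_{\rm R}$ are the transition-state and reactant partition functions and $\Delta E^{\ddagger}$ is the barrier height; in VTST the dividing surface is displaced along the MEP so as to minimise the computed rate constant.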
\section{Results}\label{sec2} Prior to the first reaction steps we begin by considering a previously published \cite{Rim} cluster of 33 water molecules, \ce{[33(H2O)]}, to which a low energy hydron \ce{H+} is added in a highly exothermic process, $\Delta _rH(0 \mathrm{K})$ of $-1,065$ kJ mol$^{-1}$; this energy in turn can be dissipated throughout the cluster and/or serves to drive subsequent reactions. The only notable differences between the two clusters \ce{[33(H2O)]} and \ce{[33(H2O).(H+)]}, which is more realistically depicted as \ce{[32(H2O).(H3O+)]}, are the three additional vibrational modes: two \ce{O-H} asymmetric stretching vibrations near 2,400 cm$^{-1}$ and a characteristic symmetric stretch at 2,800 cm$^{-1}$. It is to this cluster that the reactants \ce{HCN} and \ce{CH3CN} are then added. \subsection{First step: imidic acid formation} As originally envisaged, the hydrolysis of \ce{HC#N} proceeded in three distinct phases: \begin{eqnarray} \ce{HC#N + H3O+ -> HC=NH+ + H2O }\\ \ce{HC=NH+ + H2O -> HC(OH2)=NH+}\\ \ce{HC(OH2)=NH+ + H2O -> HC(OH)=NH + H3O+} \end{eqnarray} with the exothermic first step, $\Delta _rH\stst(0~K)=-20.5$ kJ mol$^{-1}$, simply a reflection of the larger proton affinity of \ce{HC#N}, 712.9 kJ mol$^{-1}$, vis-\`a-vis that of \ce{H2O}, 691.0 kJ mol$^{-1}$ \cite{hunter} (see the comparison below). In a water cluster with an additional 32 \ce{H2O} molecules, however, these later steps are elided since, as \ce{H2O} adds to the C-atom, it simultaneously transfers H to a neighbouring O-atom. \begin{eqnarray} \ce{HC#N + H3O+ + 32 H2O -> HC(OH)=NH + H3O+ + 31 H2O} \end{eqnarray} Overall, this hydrolysis reaction is exothermic, with $\Delta _rH(0 \mathrm{K})=-80$ kJ mol$^{-1}$, considerably different from the gas-phase value of $-22.2$ kJ mol$^{-1}$ and reflecting the tighter binding of the hydrolysed product in comparison to the reactant. The barrier to the initial protonation at the N-atom is 78.5 kJ mol$^{-1}$ in the case of HCN, and is even lower at 51.9 kJ mol$^{-1}$ for acetonitrile. Although Fig.~\ref{water-hcn} shows the actual reaction structure, an oversimplified version, Fig.~\ref{key}, shows the parts played by four key water molecules: the first is the proton donor, \ce{H3O+}; the second is the ``reactant'', which will attack the C-atom and also transfer a \ce{H+} to the ``acceptor'' water; whilst the ``companion'' \ce{H2O} is less involved but nevertheless stabilises the system through \ce{H\bond{...}OH2} hydrogen-bonding. Snapshots along the IRC path are shown in Fig.~\ref{snap}, through to the final product, methanimidic acid, in its (\textit{E,Z}) conformation with respect to the dihedrals $\angle$ OCNH and HOCN, respectively.
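As a consistency check on the initial protonation step, the gas-phase reaction enthalpy follows directly from the proton affinities quoted above: $$ \Delta _rH \approx \mathrm{PA}(\ce{H2O}) - \mathrm{PA}(\ce{HC#N}) = 691.0 - 712.9 = -21.9\ \mathrm{kJ\,mol^{-1}}, $$ in good agreement with the computed $-20.5$ kJ mol$^{-1}$, the small difference plausibly reflecting thermal and cluster effects.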
\begin{figure} \centering \includegraphics[width=\textwidth,scale=0.2]{Initial-HCN} \caption{Key elements of the reaction scheme for the first step} \label{key} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{TS-HCNH-0} \caption{Transition state} \label{fig:first} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{TS-HCNH-5} \caption{Addition of \ce{H2O}} \label{fig:second} \end{subfigure} \ \\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{TS-HCNH-16} \caption{Transfer of \ce{H+}} \label{fig:third} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{TS-HCNH-final} \caption{Final product} \label{fig:fourth} \end{subfigure} \caption{Structures along the IRC path for the first step} \label{snap} \end{figure} In Fig.~\ref{Figure:4} we plot the potential energy along the minimum energy path for the imidic acid formation reaction in the ice. \begin{figure} \centering \includegraphics[width=\textwidth,scale=0.05]{imidic} \caption{Classical potential energies $V_{MEP}$ as functions of $s$ / Bohr for imidic acid formation in ice.} \label{Figure:4} \end{figure} \subsection{Second step: amide formation} We can distinguish between two different routes from the hydroxy imines to the corresponding amides, which can proceed intra-molecularly or inter-molecularly. \subsubsection{Intramolecular route} Once the hydroxy imines, methanimidic and ethanimidic acids, are formed, an intra-molecular 1,3-H-transfer leads to formamide (Fig.~\ref{13H}) or acetamide; however, the barriers are considerable, ranging from 136 and 128 kJ mol$^{-1}$ in the gas-phase to 169 and 142 kJ mol$^{-1}$ in this water-cluster. Clearly, surmounting such barriers is infeasible at temperatures much less than 300~K, except by tunnelling. Note that the presence of additional water molecules makes very little difference to the energetics of the process in comparison to the gas-phase. Exactly the same conclusion can be drawn from the gas-phase work of Darla et al., which showed in $\omega$B97xD/aug--cc-pVTZ calculations that even in the presence of a ``catalytic'' water molecule the 1,3[H]-transfer faces a barrier of 131 kJ mol$^{-1}$, or 132 kJ mol$^{-1}$ with the additional water present as a ``spectator'' \cite{darla}. \begin{figure} \centering \includegraphics[width=\textwidth,scale=0.2]{HCOHNH-13-transfer} \caption{Transition state for 1,3[H]-transfer; \ce{HC(OH)NH $\to$ HC(O)NH2}} \label{13H} \end{figure} The additional water molecules and \ce{H3O+} in our system only marginally affect the 1,3[H]-transfer reaction; this is seen in the energetics, as discussed above, and also in the values of the imaginary frequencies, which are 2,002 and 1,988 cm$^{-1}$ in the gas-phase and 1,953 and 1,916 cm$^{-1}$ in the cluster for \ce{HC(OH)NH $\leftrightarrow$ HC(O)NH2} and \ce{CH3C(OH)NH $\leftrightarrow$ CH3C(O)NH2} respectively. In the case of the gas-phase \ce{RC(OH)NH $\leftrightarrow$ RC(O)NH2} isomerisation reaction, PILGRIM \cite{pilgrim} calculations at B3LYP/cc-pVTZ incorporating small-curvature tunnelling yield rate constants, $k$, and half-lives, $\tau$, as shown in Table~\ref{tab:rates}; the ice-cluster values are probably not dissimilar. At the lowest temperatures, \emph{here} $\leq 150$~K, quantised-reactant-states tunnelling is included.
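For orientation, the half-lives in Table~\ref{tab:rates} follow from the first-order rate constants as $\tau = \ln 2/k$; for example, at 50~K for methanimidic acid, $$ \tau = \frac{\ln 2}{7.6\times10^{-9}\ \mathrm{s^{-1}}} \approx 9.1\times10^{7}\ \mathrm{s} \approx 1.1\times10^{3}\ \mathrm{days}, $$ matching the tabulated value.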
The barrier to reaction is higher, at 131.5 kJ mol$^{-1}$, for \ce{HC(OH)NH} than the 123.9 kJ mol$^{-1}$ for \ce{CH3C(OH)NH}; consequently the rate of isomerisation to the corresponding amide is faster for ethanimidic acid. These are substantially faster rates of isomerisation than our previous Multiwell values (Fig.~\ref{comp}), which were based on Eckart tunnelling \cite{simmie}; this renders the intermolecular route, discussed below, essentially redundant. \begin{figure} \centering \includegraphics[width=\textwidth,scale=0.1]{RateConstants} \caption{Rate constants for gas-phase isomerisation: Eckart versus small-curvature tunnelling. \textcolor{red}{\ce{CH3C(OH)NH}}, \ce{HC(OH)NH}.} \label{comp} \end{figure} A not dissimilar situation is considered by Concepci\'on et al. \cite{con} in their work on the origin of the (\emph{E}/\emph{Z}) isomer ratio of imines in the ISM; they show that the less-stable (\emph{E}) conformer of cyanomethanimine, \ce{RCH=NH} where R is a \ce{C#N} group, re-arranges to the (\emph{Z}) form with dramatically increased rates over canonical transition-state theory values at temperatures of 250 K and below. The variational effect is small and the faster rates are ascribed to quantum tunnelling; exactly as found \emph{here}. \begin{table}[tbh] \centering \begin{tabular}{ccccc} & \multicolumn{2}{c}{\ce{HC(OH)NH $\to$ HC(O)NH2} } & \multicolumn{2}{c}{\ce{CH3C(OH)NH $\to$ CH3C(O)NH2} }\\ $T$ / K & $k$ / s$^{-1}$ & $\tau$ / days & $k$ / s$^{-1}$ & $\tau$ / days\\ \hline \rule{0pt}{10pt} 50 & $7.6 \times 10^{-09}$ & 1,060 & $4.4 \times 10^{-08}$ & 180 \\ 100 & $7.8 \times 10^{-09}$ & 1,028 & $8.3 \times 10^{-08}$ & 100 \\ 150 & $1.2 \times 10^{-08}$ & 660 & $2.3 \times 10^{-07}$ & 36 \\ 200 & $4.2 \times 10^{-08}$ & 190 & $9.3 \times 10^{-07}$ & 9\\ 250 & $3.4 \times 10^{-07}$ & 24 & $6.7 \times 10^{-06}$ & 2 \\ 300 & $4.4 \times 10^{-06}$ & 2 & $7.5 \times 10^{-05}$ & 0.2 \\ \hline \end{tabular} \caption{Isomerisation rate constants and half-lives} \label{tab:rates} \end{table} The detection of isotopically labelled compounds can be useful for tracing formation routes; specifically, deuteration has been used in this regard by Bianchi et al. \cite{bianchi} in their study of \ce{CH3C#N} in the SVS13-A Class I hot corino. Unfortunately, such a tool is unavailable here since the transferred hydrogen is sourced from the water-ice; however, were \ce{CH3C(OD)NH} or \ce{CH3C(OH)ND} to be detected, that might prompt other avenues of investigation. In Fig.~\ref{Figure:7} we plot the potential energy, the zero-point energy (ZPE), and the vibrationally adiabatic ground-state energy $V_a^G$ along the minimum energy path for the gas-phase 1,3-H-transfer. \begin{figure} \centering \includegraphics[width=\textwidth,scale=0.05]{V_MEP-13H-gas} \caption{Classical potential energies $V_{MEP}$, ground-state vibrational adiabatic potential energy ($V^G_a$), and ZPE as functions of $s$ / Bohr for the 1,3[H]-transfer in the gas phase} \label{Figure:7} \end{figure} \subsubsection{Intermolecular route} The same outcome, that is \ce{RC(OH)NH $\to$ RC(O)NH2}, can come about by the attack of hydronium at the N-atom, leading to \ce{R-C(OH)=NH2+ + H2O}, followed by attack of a further water molecule at the O-atom leading to deprotonation and the final products \ce{R-C(O)-NH2 + H3O+}, as written out below.
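Written out explicitly, the two steps of this route (as described above) are: $$\ce{RC(OH)=NH + H3O+ -> RC(OH)=NH2+ + H2O}$$ $$\ce{RC(OH)=NH2+ + H2O -> RC(O)NH2 + H3O+}$$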
\begin{figure} \centering \includegraphics[width=\textwidth,scale=0.05]{Step2} \caption{Intermolecular \ce{HC(OH)=NH} $\to$ \ce{HC(O)NH2}} \label{2step} \end{figure} So this reaction follows the same course mechanistically as the first: protonation at the nitrogen atom, which has a tiny barrier of 1.6 kJ mol$^{-1}$ (one that disappears when the ZPE is added), is followed by abstraction of the H from the OH group by a neighbouring water, with the proton then relayed to a second water (Fig.~\ref{2step}, which shows just the active site). The reaction can be summarised as: $$\ce{[HC(OH)NH2+ + 32H2O]} \to \ce{[HC(O)NH2 + 31H2O.H3O+]}$$ and the barrier for this step is low, at 5.8 kJ mol$^{-1}$. The results are summarised in Table~\ref{table}, where $E^{\dagger}$ is the zero-point corrected electronic energy barrier and $\Delta _rH$ is the reaction enthalpy, both expressed in units of kJ mol$^{-1}$. \begin{sidewaystable} \center \begin{tabular}{lcc}\\ \hline Reaction & $E^{\dagger}$ & $\Delta _rH$(0 K) \\ \hline \multicolumn{3}{c}{Initial protonation and hydroxy imine formation}\\ \ce{HCN + 32H2O.H3O+ $\to$ HCNH+ + 33H2O $\to$ HC(OH)NH + 31H2O.H3O+} & 78.5 & $-82.8$ \\ \ce{CH3CN + 32H2O.H3O+ $\to$ CH3CNH+ + 33H2O $\to$ CH3C(OH)NH + 31H2O.H3O+} & 51.9 & $-80.0$ \\ \multicolumn{3}{c}{Intra-molecular amide formation via 1,3[H]-transfer}\\ \ce{HC(OH)NH (g) $\to$ HC(O)NH2(g)} & 135.5 & $-57.6$\\ \ce{CH3C(OH)NH (g) $\to$ CH3C(O)NH2 (g)} & 127.6 & $-57.1$\\ \ce{HC(OH)NH + 31H2O.H3O+ $\to$ HC(O)NH2 + 31H2O.H3O+} & 168.8 & $-81.5$\\ \ce{CH3C(OH)NH + 31H2O.H3O+ $\to$ CH3C(O)NH2 + 31H2O.H3O+} & 141.5 & $-90.1$\\ \multicolumn{3}{c}{Inter-molecular amide formation via protonation/deprotonation}\\ \ce{ HC(OH)NH + 31H2O.H3O+$\to$ HC(OH)NH2+ + 32H2O} & 1.6 &$-75.5$\\ \ce{HC(OH)NH2+ + 32H2O $\to$ HC(O)NH2 + 31H2O.H3O+} & 5.8 &$-4.8$ \\ \ce{CH3C(OH)NH2+ + 32H2O $\to$ CH3C(O)NH2 + 31H2O.H3O+} & & \\ \hline \end{tabular} \caption{Barrier heights, $E^{\dagger}$, and reaction enthalpies $\Delta _rH$(0~K) / kJ mol$^{-1}$} \label{table} \end{sidewaystable} \section{Discussion} There are, of course, other nitriles present in the ISM, and one would anticipate that a similar fate would befall them. Manna and Pal \cite{manna} have detected the unfortunately named cyanamide, or amino cyanide, \ce{H2NC#N}, in the hot molecular core G10.47+0.03 and outline three possible fates: degradation to \ce{H2N^. + CN^.} by cosmic rays or by high-energy photons, or ion--neutral reactions such as \ce{H2NC#N + H3+ -> H2NC=NH+ + H2}. Auto-catalytic addition of water on ice-grains, as per \emph{this work}, would lead to the formation of carbonyl diamide, \ce{OC(NH2)2}, better known as urea, which has been detected previously \cite{belloche}. In an extensive computational study of urea formation in the ISM, Slate et al. \cite{slate} concluded that closed-shell reactions had prohibitive barriers but that a route involving charged species was feasible, although their starting point involved isocyanic acid, \ce{HN=C=O}, with protonation at the O-atom followed by addition of ammonia and subsequent deprotonation; a somewhat more complex sequence than that envisaged here, but nevertheless comparable. Previously, Brigiano et al. had considered ion--molecule, neutral--neutral and radical reactions leading to the formation of urea, but only in the gas-phase \cite{brig}. \section{Conclusion}\label{sec13} The auto-catalytic addition of water to \ce{RC#N} triple bonds is shown to be a credible process on water clusters which are impacted by hydrons, \ce{H+}.
The high mobility of the hydron through the cluster leads to the initial reactive step, facile protonation at the N-atom. This effectively transforms a bimolecular reaction into a unimolecular process --- thus removing the `collisional handicap' which hampers all gas-phase reactions in the interstellar medium. Subsequent attack by water at carbon and abstraction of \ce{H+} yields the hydroxy imine \ce{RC(OH)NH}, which can then undergo a 1,3[H]-transfer reaction, either intra-molecularly or inter-molecularly, to the amide \ce{RC(O)NH2}. Quantum-mechanical tunnelling plays a key role in these processes. The recent review by Woon, with its stress on the need to pay more attention to cation--ice reactions, is shown to be prescient and his call for experimental confirmation timely \cite{woon}. \subsection*{ORCID} Bouthe\"ina Kerkeni: \textcolor{blue}{0000-0002-5762-5058}\\ John M. Simmie: \textcolor{blue}{0000-0003-0714-7956} \subsection*{Acknowledgement} JMS and BK thank the Irish Centre for High-End Computing, ICHEC, for the provision of computational resources (projects: nuig02, ngche102c, ngche115c). The assistance of David Ferro-Costas (Univ. Santiago de Compostela), author of Pilgrim, is gratefully acknowledged.
{ "timestamp": "2022-09-23T02:13:02", "yymm": "2209", "arxiv_id": "2209.10929", "language": "en", "url": "https://arxiv.org/abs/2209.10929" }
\section{Introduction} Multivariate extreme value problems are important across a range of subject domains, such as sea levels \citep{Coles1994}, air pollution \citep{HT2004}, rainfall \citep{Davison2012} and river flow \citep{Engelke2020}, which all feature in influential discussion papers. The typical formulation is to have $n$ independent and identically distributed replicate observations $(\mathbf{x}_1,\ldots ,\mathbf{x}_n)$, from a $d$-dimensional vector random variable $\mathbf{X}$ with unknown joint distribution $F_{\mathbf{X}}$. The aim is to estimate $\Pr(\mathbf{X}\in A)$ where $A\subset \mathbb{R}^d$, such that all the elements in $A$ are in the upper tail of at least one of the marginal distributions of $\mathbf{X}$, with the formulation of $A$ depending on the characteristics of the problem of interest. The typical approach to such inference is to estimate both the marginal distributions and the dependence structure (copula), with a focus on their behaviour in the upper extremes. Univariate extreme value methods are well established \citep{Coles2001, DavisonSmith1990}, with multivariate dependence modelling being the key challenge. In the bivariate case, on which we focus for variables $(X, Y)$, there are two distinct types of extremal dependence, most easily explained via the coefficient of asymptotic dependence $\chi$, given by \begin{align} \chi= \lim_{p\rightarrow 1}\Pr\left[F_Y(Y)>p\mid F_X(X)>p\right], \label{eqn:chi} \end{align} where $F_X$ and $F_Y$ are the marginal distributions of $X$ and $Y$ respectively. Having $\chi>0$ corresponds to asymptotic dependence between $X$ and $Y$, a situation in which both variables can take their largest values simultaneously; when $\chi=0$, termed asymptotic independence, such limiting dependence is impossible. Measures of sub-asymptotic dependence exist for the asymptotically independent case: the measure $\bar{\chi}$ captures extremal positive dependence, extremal negative dependence and extremal independence through the values $0<\bar{\chi}<1$, $-1<\bar{\chi}<0$ and $\bar{\chi}=0$ respectively, whilst $\bar{\chi}=1$ corresponds to asymptotic dependence; see \cite{ColesHeffernanTawn1999}. Since many models for bivariate extremes are suitable in only one of these situations, distinguishing between them, or having a model that incorporates both cases in a flexible way, plays a crucial role in model selection. For example, multivariate max-stable distributions \citep{Gudendorf2012} and multivariate generalised Pareto distributions \citep{Kiriliouk2019} only allow $\chi>0$ or have $\chi=\bar{\chi}=0$, while the Gaussian copula with correlation parameter $-1<\rho<1$ gives $\chi=0$ and $\bar{\chi}=\rho$ \citep{LedfordTawn1996}. One class of multivariate models, based on conditional limit theory as one variable becomes extreme, developed by \citet{HT2004}, has seen wide practical usage, with applications to widespread river flooding \citep{Keef2013b}, time series dependence in heatwaves \citep{WinterTawn2017}, spatial air temperature extremes \citep{WadsworthTawn2022}, spatio-temporal sea-surface temperatures \citep{Simpson2021}, offshore metocean environmental design contours \citep{Ewans2014}, coastal flooding \citep{Gouldby2017}, food chemicals \citep{Paulo2006}, and laboratory trials \citep{Southworth2012}. This \citet{HT2004} class of models has considerable flexibility as it covers both the asymptotic dependence and asymptotic independence classes.
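To see the distinction in \eqref{eqn:chi} numerically, the following Python snippet is an illustrative sketch of ours (the correlation $\rho=0.7$, the sample size and the seed are arbitrary choices): it estimates the empirical analogue of $\chi$ at finite levels $p$ for a Gaussian copula, for which the estimates decay towards $0$ as $p\to1$, the signature of asymptotic independence with $\bar\chi=\rho$.
\begin{verbatim}
# Sketch: empirical chi(p) = Pr(F_Y(Y) > p | F_X(X) > p) for a Gaussian copula.
# rho, n and the levels p are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 10**6, 0.7
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Rank (probability integral) transform to approximately uniform margins.
u = z1.argsort().argsort() / (n + 1.0)
v = z2.argsort().argsort() / (n + 1.0)

for p in (0.90, 0.99, 0.999):
    chi_p = np.mean(v[u > p] > p)    # empirical conditional tail probability
    print(f"p = {p}: chi(p) approx {chi_p:.3f}")  # decays towards 0 as p -> 1
\end{verbatim}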
Furthermore, in the multivariate case it allows for different extremal dependence classes between separate subsets of the variables, unlike models such as those of \cite{Wadsworth2017} and \cite{HuserWadsworth2019}. Since its initial presentation, the model proposed by \citet{HT2004} has been extended by \citet{Keef2013a} to its current most widely adopted form. Specifically, for $(X,Y)$ marginally transformed to have Laplace margins, denoted $(X_L,Y_L)$, it is assumed that there exist values $(\alpha_{\mid X},\beta_{\mid X})\in[-1,1]\times(-\infty,1)$ such that for $x>0$ and $z\in \mathbb{R}$ \begin{equation} \Pr\left\{\frac{Y_L-\alpha_{\mid X}X_L}{X_L^{\beta_{\mid X}}} \leq z, X_L-t>x \mid X_L>t\right\} \to G_{\mid X}(z)\exp(-x) \qquad \mathrm{as}~t\to\infty, \label{eq:HT2004NEW} \end{equation} where $G_{\mid X}$ is the distribution function of a non-degenerate random variable, subject to the condition that $\lim_{z\rightarrow \infty}G_{\mid X}(z)=1$ to ensure that $\alpha_{\mid X}$ is uniquely identifiable. This relation gives that the normalised $Y_L$ is conditionally independent of $X_L$ in the limit. Here the conditioning event $\{X_L>t\}$ differs from the conditioning event $\{X_L=t\}$ of \citet{HT2004}, with the latter corresponding to the former when joint densities exist. In statistical applications, the limit~\eqref{eq:HT2004NEW} is taken to hold for finite $t$, and using realisations of $(X_L,Y_L)$, such that $X_L>t$, the parameters $\alpha_{\mid X}$ and $\beta_{\mid X}$ are estimated using regression methods (for $Y_L$ given $X_L$), whilst non-parametric methods are used for inference on $G_{\mid X}$, based on the standardised residuals of this regression. To characterise the full joint tail of $(X_L,Y_L)$, in addition to limit~\eqref{eq:HT2004NEW} we also need the equivalent relationship for the reverse conditional distribution of $X_L$ given that $Y_L$ is large. Despite the strong applied value of the conditional modelling framework, some concerns exist about the broader theoretical restrictions of the limiting assumptions. Attempts to formalise the method and weaken some of these assumptions include \cite{HeffernanResnick2007} and \cite{Resnick2014}. However, \cite{Drees2017} provided a number of counterexamples to their results. A side effect of this has been to undermine the potential wider practical adoption of the \citet{HT2004} conditional multivariate extremes framework. This paper explores the counterexamples of \cite{Drees2017} to see if they undermine any of the asymptotic justification for the statistical methods stemming from the \citet{HT2004} framework. There is a critical difference between the framework studied in \cite{HeffernanResnick2007}, \cite{Resnick2014} and \cite{Drees2017} and the \citet{HT2004} framework: specifically, the latter requires an initial marginal standardisation, so that after transformation $(X,Y)$ are assumed to have identical marginal distributions before the conditional extremes behaviour is studied. This transformation was taken to be Gumbel in \citet{HT2004} and Laplace (as above) in \citet{Keef2013a}. Such standardisation of variables to common margins is quite usual in the study of dependence structure, e.g., \citet{Nelsen1999} and \citet{Beirlant2004}, as this makes relationships easier to model through linearity, with exponential margins being particularly desirable for this, as shown by \citet{Papastathopoulos2017}.
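To make the inference step described above concrete, the following Python sketch is our schematic version only: the Gaussian working likelihood for the regression of $Y_L$ on $X_L$ is the device commonly used in practice, but the threshold, starting values and optimiser choice here are illustrative assumptions, not a production implementation.
\begin{verbatim}
# Schematic Heffernan-Tawn fit on Laplace margins: maximise a Gaussian working
# likelihood in (alpha, beta, mu, sigma) over exceedances X_L > t, then return
# the standardised residuals used to estimate G nonparametrically.
import numpy as np
from scipy.optimize import minimize

def ht_negloglik(theta, x, y):
    alpha, beta, mu, sigma = theta
    if not (-1.0 <= alpha <= 1.0 and beta < 1.0 and sigma > 0.0):
        return np.inf                     # enforce the parameter constraints
    mean = alpha * x + mu * x**beta       # working mean of Y_L given X_L = x
    sd = sigma * x**beta                  # working standard deviation
    return np.sum(np.log(sd) + 0.5 * ((y - mean) / sd) ** 2)

def fit_ht(x_l, y_l, t):
    keep = x_l > t
    x, y = x_l[keep], y_l[keep]
    res = minimize(ht_negloglik, x0=[0.5, 0.1, 0.0, 1.0], args=(x, y),
                   method="Nelder-Mead")
    alpha, beta = res.x[0], res.x[1]
    resid = (y - alpha * x) / x**beta     # standardised residuals, approx ~ G
    return alpha, beta, resid
\end{verbatim}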
Our intuition is that allowing the marginal variables to have completely different tail behaviours (explicitly, different shape parameters/tail indices) imposes a major restriction on a conditional approach using affine transformations, such as the norming of $Y_L$ in limit~\eqref{eq:HT2004NEW}. The results presented in this paper show that working with standardised marginal distributions overcomes all of the counterexamples. We believe that these findings further illustrate the versatility of the \citet{HT2004} conditional multivariate extremes framework. The paper is structured as follows: In Section~\ref{sec:background} we present the background theory of the different conditional representations. In Section~\ref{sec:Examples} we cover each of the counterexamples given by \cite{Drees2017}, with simulations to help interpretation, and state which features of \cite{Das2011} and \cite{Resnick2014} they show are not appropriate. In each case we illustrate how the problems are overcome through an initial standardisation of the marginal distributions. Some technical details of the calculations for the examples are given in the Appendix. \section{Background Theory} \label{sec:background} \subsection{Multivariate and Conditional Extremes} For notational simplicity, we focus on the bivariate case with $(X,Y)$, where $X$ and $Y$ are continuous random variables. Classical multivariate extreme value models assume that the marginal distributions $F_X$ and $F_Y$ of $(X,Y)$ belong to the domain of attraction of some extreme value distribution: $F_X$ is in the domain of attraction of an extreme value distribution if there exist functions $p_X:\mathbb{R}\to\mathbb{R}_+$ and $q_X:\mathbb{R}\to\mathbb{R}$ such that \begin{equation} F_X^t\left\{p_X(t)x + q_X(t)\right\} \to \exp\left\{-(1+\gamma_X x)^{-1/\gamma_X}\right\}\quad \mathrm{as}~t\to\infty \label{eq:Tail} \end{equation} for some $\gamma_X\in\mathbb{R}$ and all $x\in E^{(\gamma_X)}:=\{x \in \mathbb{R} \mid 1 + \gamma_X x >0\}$. Multivariate extreme value distributions then arise as the limiting joint distribution of the componentwise maxima of independent and identically distributed random variables $(X_i,Y_i)$, for $i=1,\ldots,t$, with joint distribution function $F_{X,Y}$ and marginal distribution functions $X_i\sim F_X$ and $Y_i\sim F_Y$. Specifically, it is assumed that there exist functions $p_X, q_X$ as in limit~\eqref{eq:Tail}, and similarly $p_Y, q_Y$, such that \begin{align*} \Pr &\left(\frac{\max_{i=1,\ldots,t}X_i-q_X(t)}{p_X(t)} \leq x, \frac{\max_{i=1,\ldots,t} Y_i-q_Y(t)}{p_Y(t)}\leq y\right)\\ = & [F_{X,Y}(p_X(t)x+q_X(t), p_Y(t)y+q_Y(t))]^t\to H(x,y) \qquad \mbox{as}~t\to\infty, \end{align*} where $H$ is a bivariate distribution function with non-degenerate marginal distributions, given by limit form~\eqref{eq:Tail}, with tail indices $\gamma_X$ and $\gamma_Y$ respectively, and with a copula possessing a specific max-stable property which, amongst other features, excludes the possibility of negative dependence; see \citet{ColesHeffernanTawn1999} and \citet{Beirlant2004}. \citet{HT2004} propose examining the dependence in the tail of ($X,Y$) by first standardising the marginals via the probability integral transformation to have Gumbel distributions, denoted $(X_G,Y_G)$, with $\Pr(X_G\le x)=\Pr(Y_G\le x)=\exp\{-\exp(-x)\}$~$(x\in\mathbb{R})$, and considering the conditional distribution of $Y_G\mid (X_G=t)$ as $t\to\infty$.
The assumption underlying their approach is that there exist normalising functions $\tilde{a}_{\mid X}:\mathbb{R}_+\to\mathbb{R}$ and $\tilde{b}_{\mid X}:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \Pr\left\{ \frac{Y_G-\tilde{a}_{\mid X}(X_G)}{\tilde{b}_{\mid X}(X_G)} \leq z \mid X_G=t\right\} \to \tilde{G}_{\mid X}(z) \qquad \mathrm{as}~t\to\infty, \label{eq:HT2004} \end{equation} where the limit distribution $\tilde{G}_{\mid X}$ is non-degenerate. To ensure that $\tilde{a}_{\mid X}$, $\tilde{b}_{\mid X}$ and $\tilde{G}_{\mid X}$ are well-defined, we require $\lim_{z\to\infty}\tilde{G}_{\mid X}(z)=1$, i.e., $\tilde{G}_{\mid X}$ has no mass at $+\infty$, and $\tilde{b}_{\mid X}(x)/x\rightarrow 0$ as $x\rightarrow \infty$ \citep{Keef2013a}. \citet{HT2004} find that (up to type) the functions $\tilde{a}_{\mid X}$ and $\tilde{b}_{\mid X}$ in~\eqref{eq:HT2004} have a common parametric form for all copulas described by \cite{Joe1997} and \cite{Nelsen1999}. \cite{HeffernanResnick2007} modify and extend the original framework of \cite{HT2004}. Their main modification is to replace the conditioning event $\{X_G=t\}$ in \eqref{eq:HT2004} by $\{X>t\}$, i.e., they analyse \begin{equation} \Pr\left\{ \frac{Y-a_{\mid X}(X)}{b_{\mid X}(X)} \leq z \mid X>t\right\} \to G_{\mid X}(z) \qquad \mathrm{as}~t\to\infty. \label{eq:HR20071} \end{equation} This is the most widely applied and studied conditional extreme value model framework and we will use it in the remainder of the paper. \cite{HeffernanResnick2007} further drop the assumption that $X$ and $Y$ have Gumbel margins and provide theoretical results subject to $F_X$ lying in the domain of attraction of some extreme value distribution. \cite{Keef2013a} focus on $X$ and $Y$ having standard Laplace margins, i.e., \[ \Pr(X<x) = \begin{cases} \frac{1}{2} \exp(x)&\mbox{if}~ x\leq 0,\\ 1 - \frac{1}{2} \exp(-x)&\mbox{if}~ x>0. \end{cases} \] Under these conditions, the functions in \eqref{eq:HR20071} are of the form $a_{\mid X}(x) = \alpha x$ and $b_{\mid X}(x) = x^\beta$, with $(\alpha,\beta)\in[-1,1]\times(-\infty,1)$, for all of the standard copulas studied by \cite{HT2004}. When $X$ and $Y$ are positively associated, standardisation of $X$ and $Y$ to Laplace margins gives the same limiting behaviour as when the variables are transformed to Gumbel margins. However, the limiting behaviours differ when $X$ and $Y$ are negatively associated, with the symmetry of the Laplace margins giving a simpler form. Estimation of the parameters $(\alpha,\beta)$ and the distribution function $G_{\mid X}$ for Laplace margins is considered in \cite{Keef2013a, Keef2013b}. \subsection{Linking Multivariate and Conditional Extremes Models} One early question asked about the conditional extremes model of \citet{HT2004} concerned its link to established multivariate extreme value models, e.g., \citet{Husler1989} or \citet{Tawn1990}. This motivated \cite{HeffernanResnick2007} to define the class of conditional extreme value models (CEVM), which does not require the margins to be standardised to common margins. The limit distribution of $Y\mid (X>t)$ as $t\to\infty$ lies in the CEVM class if \begin{enumerate} \item The distribution function $F_X$ of $X$ is in a domain of attraction of an extreme value distribution with parameter $\gamma_X\in\mathbb{R}$.
\item There exist normalising functions $c,f: \mathbb{R} \to \mathbb{R}$ and $d,g: \mathbb{R} \to \mathbb{R}_+$, such that \begin{equation} t\,\Pr\left\{\frac{X-c(t)}{d(t)}>x, \frac{Y-f(t)}{g(t)}\leq y\right\} \to \mu_{Y\mid X>}\left([x,\infty]\times[-\infty,y]\right) \quad\mathrm{as}~t\to\infty, \label{eq:HR2007} \end{equation} where $\mu_{Y\mid X>}\left([x,\infty] \times [-\infty,y]\right)$ is a non-degenerate distribution function in $x$, and $\mu_{Y\mid X>}\left([x,\infty] \times [-\infty,y]\right)<\infty$. \end{enumerate} \cite{Resnick2014} add the condition $\mu_{Y\mid X>}([x,\infty]\times\{\infty\})=0$ to ensure uniqueness of the limit measure. However, Example 2.3 in \cite{Drees2017} shows that this condition has to be strengthened to \[ \lim_{y\rightarrow \infty}\mu_{Y\mid X>}([x,\infty]\times [y,\infty) ) = \lim_{y\rightarrow \infty}\mu_{Y\mid X>}([x,\infty]\times (-\infty,-y]) =0, \] i.e., the limit measure cannot put any mass for $Y$ at $\{\infty\}$ or $\{-\infty\}$. Returning to the link between the CEVM framework and multivariate extreme value models, assume that $X$ and $Y$ belong to the domain of attraction of some extreme value distribution, with parameters $\gamma_X$ and $\gamma_Y$ respectively. Theorem 2.1 in \cite{Das2011} claims that ($X,Y$) lies in the domain of attraction of a multivariate extreme value distribution if $Y\mid (X>t)$ and $X\mid (Y>t)$ both lie in the CEVM class of \cite{HeffernanResnick2007}. Example 4.4 in \cite{Drees2017} illustrates, however, that this result is not true unless the normalisations of $Y\mid(X>t)$ and $X\mid(Y>t)$ in \eqref{eq:HR2007} are identical. Now consider the case that $(X,Y)$ lies in the domain of attraction of a multivariate extreme value distribution. Suppose that (i) $X$ and $Y$ are asymptotically dependent and (ii) $\gamma_X,\gamma_Y\leq 0$. \cite{Drees2017} show that these conditions are sufficient for the limits of $Y\mid (X>t)$ and $X\mid (Y>t)$ to lie in the class of conditional extreme value models. The restriction $\gamma_X,\gamma_Y\leq 0$ is required, as demonstrated by Example 4.2 in \cite{Drees2017}. Note that conditions (i) and (ii) are not necessary for $Y\mid (X>t)$ and $X\mid (Y>t)$ to lie in the CEVM class; see Section 5 in \cite{HeffernanResnick2007} for the case of $X$ and $Y$ being asymptotically independent. \subsection{Standardisation of Marginals in the CEVM Class} \cite{HeffernanResnick2007} examined how the standardisation of $X$ to a standard Pareto distributed random variable $X_P$, but leaving $Y$ unchanged, affected the limiting measure $\mu_{Y\mid X>}$ in~\eqref{eq:HR2007}. They show that the limiting behaviour of $(X_P,Y)$ satisfies \[ t\,\Pr\left\{\frac{X_P}{t}>x, \frac{Y-f(t)}{g(t)}\leq y\right\} \to \begin{cases} \mu_{Y\mid X>}\left(\left[\dfrac{x^{\gamma_X}-1}{\gamma_X},\infty\right]\times[-\infty,y]\right) & \mbox{if}~\gamma_X\neq 0,\\ \mu_{Y\mid X>}\left([\log x,\infty]\times[-\infty,y]\right) & \mbox{if}~\gamma_X=0, \end{cases} \] where the measure $\mu_{Y\mid X>}$ corresponds to that in \eqref{eq:HR2007}, and where $c(t)=0$ and $d(t)=t$ due to the standardisation to $X_P$. The standardisation of $Y$ is more challenging than that of $X$, because the CEVM~\eqref{eq:HR2007} does not require $F_Y$ to be in a domain of attraction of an extreme value distribution, unlike the \citet{HT2004} and \cite{Keef2013a} formulations of limit~\eqref{eq:HT2004}.
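In applications, both margins are standardised empirically before any of the above limits are invoked; a minimal sketch of the rank-based transformation to standard Laplace margins is given below (ours, for illustration; the $(n+1)$ denominator is one common convention for keeping empirical probabilities away from the boundary, not the only choice).
\begin{verbatim}
# Sketch: rank-transform a sample to standard Laplace margins.
import numpy as np

def laplace_quantile(p):
    """Inverse CDF of the standard Laplace distribution."""
    p = np.asarray(p, dtype=float)
    return np.where(p <= 0.5, np.log(2.0 * p), -np.log(2.0 * (1.0 - p)))

def to_laplace_margins(x):
    ranks = x.argsort().argsort() + 1.0   # ranks 1, ..., n
    return laplace_quantile(ranks / (len(x) + 1.0))
\end{verbatim}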
\citet{HeffernanResnick2007} and \cite{Das2011} consider the task of finding a monotone and unbounded function $h:\mathbb{R}\to\mathbb{R}_+$ such that \begin{equation} t\,\Pr\left\{\frac{X_P}{t}>x, \frac{h(Y)}{t}\leq y \right\} \to \tilde{\mu}_{Y\mid X>}\left([x,\infty]\times[-\infty,y]\right) \quad\mathrm{as}~t\to\infty, \label{eq:Standardization} \end{equation} where $\tilde{\mu}_{Y\mid X>}$ is finite and non-degenerate, $\tilde\mu_{Y\mid X>}([x,\infty]\times\{\infty\})=0$ and $\tilde\mu_{Y\mid X>}(\{\infty\}\times[-\infty,y])=0$ for all $x,y$. \cite{Das2011} argue that such a function $h$ exists if, and only if, $\mu_{Y\mid X>}$ is not a product measure. However, examples in Section 3 of \cite{Drees2017} illustrate that neither implication is true, and the limit measures $\mu_{Y\mid X>}$ and $\tilde{\mu}_{Y\mid X>}$ may convey different information when they exist. \cite{Drees2017} further provide two sufficient sets of conditions on the functions $f$ and $g$ in expression~\eqref{eq:HR2007} for such a function $h$ to exist. \section{Investigating \citet{Drees2017} Examples} \label{sec:Examples} \subsection{Strategy} \label{sec:Strategy} Most examples in \citet{Drees2017} work with the joint distribution of $(X_P,Y)$, where $\Pr(X_P>x) = x^{-1}$ ($x>1$), i.e., the conditioning variable has a standard Pareto distribution, and the distribution of $Y$ is given indirectly through the distributions of $X_P$ and $Y\mid X_P$. In the following, we consider the examples of \cite{Drees2017} highlighted in Section~\ref{sec:background} and examine the limiting behaviour obtained, in each case using their numbering of the examples. We work within the framework of \cite{Keef2013a}; therefore, we consider the limiting behaviour after the variables $X_P$ and $Y$ have been transformed to Laplace margins, denoted by $(X_L,Y_L)$. The standardisation of $X_P$ to Laplace margins yields the variable $X_L$ and the link between the values $x_L$ of $X_L$ and $x$ of $X_P$ is given by \begin{equation} \frac{1}{x} = \begin{cases} 1-\frac{1}{2}\exp(x_L) & \mathrm{if}~x_L\leq 0,\\ \frac{1}{2}\exp(-x_L) & \mathrm{if}~x_L>0. \end{cases} \label{eq:LaplaceX} \end{equation} When transforming $Y$ to $Y_L$, we first derive the distribution function $F_Y$ and then derive the expression for the transformed value $y_L$ of $Y_L$ as $y_L = F_L^{-1} \left[ F_Y(y)\right]$, where $F_L^{-1}$ is the inverse distribution function of a Laplace random variable. \subsection{Example 2.3} Let $B$ be a discrete random variable that is uniformly distributed on $\{0,1\}$ and independent of~$X_P$, and define $Y = B + (1-B)(2-1/X_P)$. The variable $Y$ can take any value in the interval $[1,2)$, with the highest values occurring when $B=0$ and $X_P$ is large, and its marginal distribution is given by $\Pr(Y=1)=1/2$ and $\Pr(Y<y)= y/2$ for $1<y\leq 2$. This example in \cite{Drees2017} showed that the condition $\mu_{Y\mid X>}\left([x,\infty]\times\{\infty\}\right)=0$ of \cite{Resnick2014} is not sufficient to ensure uniqueness of the limit measure in expression~\eqref{eq:HR2007}, and that the stronger condition $\mu_{Y\mid X>}\left([x,\infty]\times\{-\infty,\infty\}\right)=0$ is required. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Images/Example23Original.pdf} \hspace{0.5cm} \includegraphics[width=0.4\textwidth]{Images/Example23Transformed.pdf} \caption{Illustration of 2,000 samples for the framework in Example 2.3.
The left panel shows the simulated observations $(x_p,y)$ on the original scale, while the right panel corresponds to the transformed samples $(x_L, y_L)$.} \label{fig:Example23} \end{figure} As outlined in Section \ref{sec:Strategy}, we are interested in the limiting behaviour of the transformed variable $Y_L$ given that the transformed variable $X_L$ is large. Transformation of $Y$ to Laplace margins gives $Y = 2-\exp(-Y_L)$ for $B=0$, while $Y=1$ when $B=1$; this second case implies $Y_L=0$ irrespective of $X$ for $B=1$ and, thus, the lower tail of $Y_L$ is not Laplace distributed. Figure~\ref{fig:Example23} left panel shows that realised values of $Y$ are close to $y=1$ or $y=2$ for large values of $X_P$, while the transformed variable $Y_L$, shown in the right panel, has no upper bound. When $B=0$, substituting the realisations $x$ and $y$ by their transformed values $x_L$ and $y_L$, gives for $x_L>0$ \[ y = 2 - \frac{1}{x} \quad\Leftrightarrow\quad 2-\exp(-y_L) =~2 - \frac{1}{2}\exp(-x_L) \quad\Leftrightarrow\quad y_L =~\log(2) + x_L. \] This linear relationship between the values, for $x_L>0$ and $B=0$, is also visible in Figure \ref{fig:Example23} right panel, while, for $B=1$, we have $y_L=0$ for all possible values~$x_L$. From the calculations above, we conclude that the functions $a_{\mid X}(x) = x$ and $b_{\mid X}(x) = 1$ in expression~\eqref{eq:HR20071} give the limiting behaviour as \[ \Pr(Y_L - X_L \leq z \mid X_L > x_L) ~\to~ \frac{1}{2}\left(1 + \mathbf{I}\{\log 2 \leq z\}\right) = G_{\mid X}(z) \qquad \mathrm{as}~x_L \to \infty, \] where $\mathbf{I}$ denotes the indicator function, and $G_{\mid X}$ is a non-degenerate distribution function. The result $\lim_{z\to-\infty}G_{\mid X}(z)=0.5$ is due to the case $B=1$ which occurs with probability $0.5$. Other choices for $a_{\mid X}$ and $b_{\mid X}$ lead to a degenerate limiting distribution $G_{\mid X}$, contradicting the \cite{HT2004} assumption, or yield $\mu_{Y\mid X>}\left([x,\infty]\times\{\infty\}\right)=0$, violating the constraint $\lim_{z\to\infty} G_{\mid X}(z) =1$ by \cite{Keef2013a}. In terms of the measure $\mu_{Y_L\mid X_L>}$ in \eqref{eq:HR2007}, we have for $x_L>0$ \[ \mu_{Y_L\mid X_L>}\left((x_L,\infty]\times [-\infty,y_L]\right) = \frac{1}{2} \left(1 + \mathbf{I}\{\log 2\leq y_L\}\right)\times \frac{1}{2} \exp(-x_L). \] While this result is similar to the first limit found by \cite{Drees2017}, we do not require the additional constraint $\mu_{Y\mid X>}\left([x,\infty]\times\{-\infty\}\right)=0$, introduced by \cite{Drees2017}, to ensure a unique limiting behaviour, because we transformed the variables to common Laplace margins. \subsection{Example 3.1} Let $B$ be a discrete random variable that is uniformly distributed on $\{-1,1\}$ and independent of the Pareto distributed random variable~$X_P$. The variable $Y$ is defined as $Y = 2-B/X_P$. For large $X_P$, the values of $Y$ are concentrated around 2 (see Figure~\ref{fig:Example31} left panel). The marginal distribution of $Y$ is $Y\sim\mbox{Uniform}(1,3)$. \cite{Drees2017} present this and the following Example~3.2, to illustrate that the result by \cite{Das2011} linked to the standardisation \eqref{eq:Standardization} of $Y$ does not hold in general. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Images/Example31Original.pdf} \hspace{0.5cm} \includegraphics[width=0.4\textwidth]{Images/Example31Transformed.pdf} \caption{Illustration of 2,000 samples for the framework in Example 3.1. 
The left panel shows the simulated observations $(x_p,y)$ on the original scale, while the right panel corresponds to the transformed samples $(x_L, y_L)$.} \label{fig:Example31} \end{figure} We again start by transforming the random variables $X_P$ and $Y$ to Laplace margins. Substitution of the values $y$ by their transformed values $y_L$ gives \[ y = \begin{cases} 1 + \exp(y_L) &\mbox{if}~ y_L \leq 0,\\ 3 - \exp(-y_L) &\mbox{if}~ y_L > 0,\\ \end{cases} \] while the transformation of $X_P$ to Laplace margins is given in \eqref{eq:LaplaceX}. For the case $B=1$, $Y$ takes values smaller than $2$, while only values greater than $2$ are observed for $Y$ when $B=-1$. Therefore, we have to consider the transformation with $y_L \leq 0$ for $B=1$, and $y_L > 0$ for $B=-1$. For the case $B=1$, we find that \[ y=2-\frac{1}{x} \quad\Leftrightarrow\quad 1+\exp(y_L)=~2-\frac{1}{2}\exp(-x_L)\quad \Leftrightarrow\quad y_L =~\log\left\{ 1 - \frac{\exp(-x_L)}{2}\right\}. \] The final equation implies that we can approximate $y_L$ by $-\frac{1}{2}\exp(-x_L)$ as $x_L\to\infty$. Similar calculations for the case $B=-1$ give, as $x_L\to\infty$, \[ y = 2 + \frac{1}{x}\quad \Leftrightarrow\quad y_L =~-\log\left\{ 1 - \frac{\exp(-x_L)}{2}\right\} ~\sim~ \frac{1}{2}\exp(-x_L). \] Without norming, $Y_L\mid (X_L>x)\rightarrow^P 0$ as $x\rightarrow \infty$. To avoid this degeneracy, we need to take the functions in \eqref{eq:HR20071} to be $a_{\mid X}(x) = 0$ and $b_{\mid X}(x) = \exp(-x)$; the limiting distribution $G_{\mid X}$ then assigns probability $1/2$ to each of the values $z=-0.5$ and $z=0.5$. The expression for $b_{\mid X}(x)$ is not of the form for Laplace margins found by \citet{Keef2013a}, with $b_{\mid X}(x)$ tending to zero very rapidly. This form is needed given the speed of convergence of $Y_L\mid(X_L>x)$ towards zero as $x\to\infty$, as seen in Figure~\ref{fig:Example31} right panel. This is not too surprising, as it is known that the simple parametric forms of \citet{Keef2013a} for the norming functions do not always hold, with \citet{Papastathopoulos2016} already identifying that it is possible to have $a_{\mid X}(x) = x\mathcal{L}_a(x)$ and $b_{\mid X}(x) = x^\beta\mathcal{L}_b(x)$, with $\mathcal{L}_a(x)$ and $\mathcal{L}_b(x)$ being slowly varying functions and $\beta\in (-\infty,1)$. Here we have an example that is outside that class, with $\beta=0$ and $-\log\{\mathcal{L}_b(x)\}$ being regularly varying. With our norming, the limiting measure $\mu_{Y_L \mid X_L>}$, as defined in \eqref{eq:HR2007}, is \[ \mu_{Y_L\mid X_L>}\left( (x_L,\infty] \times [-\infty,y_L] \right) = \frac{1}{2}\left(\mathbf{I}\left\{-0.5\leq y_L\right\} + \mathbf{I}\left\{0.5\leq y_L\right\}\right) \times \frac{1}{2} \exp(-x_L). \] Thus, although \cite{Drees2017} obtain a non-product limiting measure in this example, the result above shows that standardisation of marginals to a common form leads to a simpler product limit measure, providing further evidence that standardisation helps in extremal dependence modelling. \subsection{Example 3.2} Let $B$ be a discrete random variable that is uniformly distributed on $\{-1,1\}$, let $U\sim\mbox{Uniform}(0,1)$, and let $X_P$, $B$ and $U$ all be independent. Define $Y = B(1 -U/X_P)$, with the random variable $Y$ taking negative and positive values for $B=-1$ and $B=1$ respectively. Figure~\ref{fig:Example32} left panel shows that the values of $Y$ are close to $y=-1$ and $y=1$ for large values of $X_P$.
For $-1<y<0$, we calculate the marginal distribution of $Y$ as $\Pr(Y<y)= (y+1)\{1-\log(y+1)\}/2$; see Section \ref{sec:Marginal32} for details. Using similar calculations, we find $\Pr(Y<y) = (1+y)/2 + (1-y)\log(1-y)/2$ for $0\leq y<1$. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Images/Example32Original.pdf} \hspace{0.5cm} \includegraphics[width=0.4\textwidth]{Images/Example32Transformed.pdf} \caption{Illustration of 2,000 samples for the framework in Example 3.2. The left panel shows the simulated observations $(x_p,y)$ on the original scale, while the right panel corresponds to the transformed samples $(x_L, y_L)$.} \label{fig:Example32} \end{figure} To transform $Y$ to Laplace margins for $y<0$, which corresponds to $y_L<0$, we use the relationship $(1/2)(y+1)\left[1-\log(y+1)\right] = (1/2)\exp(y_L)$. Since we cannot find an analytical closed form for $y$ in terms of $y_L$, we consider approximations in order to derive the link between $y_L$ and $y$ in the limit as $y\to -1$. The calculations in Appendix~\ref{sec:Limit32} give that \[ y+1 \sim -\frac{\exp (y_L)}{y_L} \] for $y\downarrow -1$. Using similar approximations, we find $1-y \sim \exp(-y_L)/y_L$ for $y\uparrow 1$. For $B=-1$, the limiting behaviour of $Y$, as $X_P$ becomes large, is thus described by \[ y = - 1 + \frac{u}{x} \quad \Leftrightarrow\quad -\dfrac{\exp(y_L)}{y_L} - 1 = -1 + \frac{u}{2}\exp(-x_L) \quad \Leftrightarrow\quad y_L - \log(-y_L) = \log\left(\frac{u}{2}\right) - x_L, \] where $u$ denotes the realisation of the random variable $U$. Considering $x_L \to \infty$, we obtain \[ y_L = -x_L + \log x_L + o_P(\log x_L), \] where the stochasticity is due to $U$. So, $y_L\overset{p}{\to} -\infty$ as $x_L\to\infty$; this can also be seen in Figure~\ref{fig:Example32} right panel. Similar calculations give $-y_L-\log(y_L)=\log\left(u/2\right)-x_L$ when $B=1$. Hence, for $B=1$, $y_L=x_L-\log x_L+o_P( \log x_L)$ with $y_L\overset{p}{\to}\infty$ as $x_L\to\infty$. At first sight these results appear to correspond to there being non-unique choices for $a_{\mid X}$ and $b_{\mid X}$ in \eqref{eq:HT2004} that yield a non-degenerate limiting distribution $G_{\mid X}$. However, there is only one such choice (up to type), with $a_{\mid X}(x) = x$ and $b_{\mid X}(x) = \log x$ ($x>1$), giving $G_{\mid X}$ placing mass of $1/2$ at each of $\{-\infty\}$ and $\{-1\}$, i.e., $G_{\mid X}(z) = 0.5$ for $-\infty <z<-1$ and $G_{\mid X}(z)=1$ for $-1\le z <\infty$. As in Example~3.1, the derived norming function $b_{\mid X}(x)$ is not of the simple power parametric form of \cite{Keef2013a}. Another possible norming has $b_{\mid X}(x)=x$ with $a_{\mid X}(x)=o(b_{\mid X}(x))$ as $x\rightarrow \infty$, giving $G_{\mid X}$ with mass of $1/2$ at each of $\{-1\}$ and $\{1\}$, but this type of norming is not permitted as $b_{\mid X}(x)$ cannot grow as fast as $x$ \citep{Keef2013a}. \subsection{Example 4.2} Let $B$ be a discrete random variable that is uniformly distributed on $\{0,1\}$. Define the function $g(x):=x(2 + \sin\log x)$ for $x\geq 1$ and consider $Y = B X_P + (1-B)\left\{-g^{-1}\left(2 X_P\right)\right\}$, where $g^{-1}$ is the inverse of $g$. Figure~\ref{fig:Example42} indicates that $Y$ tends to $-\infty$ and $+\infty$ as $X_P$ becomes large. The purpose of this example in \cite{Drees2017} is to illustrate that ($X,Y$) being multivariate extreme value distributed is not a sufficient condition for $Y\mid (X_P>t)$, as $t\to\infty$, to lie in the class of CEVMs of the form \eqref{eq:HR2007}.
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{Images/Example42Original.pdf} \hspace{0.5cm} \includegraphics[width=0.4\textwidth]{Images/Example42Transformed.pdf} \caption{Illustration of 2,000 samples for the framework in Example 4.2. The left panel shows the simulated observations $(x_p,y)$ on the original scale, while the right panel corresponds to the transformed samples $(x_L, y_L)$.} \label{fig:Example42} \end{figure} We derive the marginal distribution of $Y$ as \[ \Pr(Y<y) = \begin{cases} 1 / g(-y) &\mbox{if}~y<-1,\\ 1/2 &\mbox{if}~-1\le y\le 1,\\ 1 - 1/(2y) & \mbox{if}~y>1, \end{cases} \] with the calculations provided in Appendix A.3. Transformation of $Y$ to Laplace margins gives $y = -g^{-1}\left\{2/\exp(y_L) \right\}$ for $y\leq -1$, and $y = \exp(y_L)$ when $y\geq 1$. The limiting behaviour of the transformed variable $Y_L$ as $X_L$ becomes large, for $B=1$, is given by $\exp (y_L) \sim 2 \exp (x_L)$, which is equivalent to $y_L = \log 2 + x_L+o(1)$ as $x_L\rightarrow \infty$. For the case $B=0$, we find $y_L = -\log 2 - x_L+o(1)$ as $x_L\rightarrow \infty$. This symmetry in the limiting behaviour under Laplace margins is also visible in Figure~\ref{fig:Example42} right panel. Consequently, we have, as $x_L\rightarrow \infty$, \[ Y_L = \begin{cases} \log 2 + X_L +o(1) & \mbox{if}~B=1,\\ -\log 2 - X_L +o(1) & \mbox{if}~B=0.\\ \end{cases} \] Defining $a_{\mid X}(x) = x$ and $b_{\mid X}(x)=1$ yields a non-degenerate limiting distribution $G_{\mid X}$ with $G_{\mid X}(z) = 0.5$ for $-\infty < z<\log 2$ and $G_{\mid X}(z) = 1$ for $\log 2\le z < \infty$. While the normalising functions $a_{\mid X}(x) = -x$ and $b_{\mid X}(x)=1$ also yield a non-degenerate limiting distribution $G_{\mid X}$, this choice is not permissible because $G_{\mid X}$ would have mass at $+\infty$ \citep{Keef2013b}. Consequently, the normalising functions $a_{\mid X}$ and $b_{\mid X}$ are well-defined (up to type). Furthermore, this example shows that a transformation to Laplace margins can result in the distribution of $Y_L\mid (X_L>t)$ as $t\to\infty$ being in the class of conditional extremes models of \cite{Keef2013a}, although $Y\mid (X_P>t)$ as $t\to\infty$ does not lie in the class of CEVMs introduced by \cite{HeffernanResnick2007}. When we consider the distribution of $X_L \mid (Y_L>y_L)$, where $y_L>0$, there is a deterministic relationship between $X_L$ and $Y_L$, and thus there cannot exist a non-degenerate limiting distribution $G_{\mid Y}$; instead the behaviour is trivial, with $X_L=Y_L-\log 2$ given $Y_L>y_L$ for any $y_L>0$. \subsection{Example 4.4} Define the function $g_c(u) = u(1 + c \sin\log u)$, where $0<u\le 1$ and $|c|<1/\sqrt{2}$, and $\psi_c(z) = g_c^{-1}(1/z)$ with $z\geq 1$. Let $Z_P$ be standard Pareto distributed, $\Pr(Z_P>z)=z^{-1}~(z>1)$, and let $B$ be discrete, uniformly distributed on $\{1,2,3,4\}$ and independent of $Z_P$. The random variables $X$ and $Y$ are then defined as \begin{equation} (X,Y) := \begin{cases} (2- \psi_{1/2}(Z_P),~2-1/Z_P )&\mbox{if}~B=1,\\ (2- \psi_{-1/2}(Z_P),~2-1/\sqrt{Z_P} )&\mbox{if}~B=2,\\ (1- 1/Z_P,~2-1/Z_P )&\mbox{if}~B=3,\\ (2- 1/Z_P,~1-1/Z_P )&\mbox{if}~B=4. \end{cases} \label{eqn:Ex44setup} \end{equation} The purpose of this example of \citet{Drees2017} is to show that ($X,Y$) does not lie in the class of multivariate extreme value models despite $Y\mid (X>t)$ and $X\mid (Y>t)$ belonging to the CEVM class of \cite{HeffernanResnick2007} as $t\to2$.
This inconsistency of the CEVM class with $(X,Y)$ being in the domain of attraction of a bivariate extreme value distribution indicates that these conditional distributions fall outside the framework of the standard assumptions for bivariate extreme values. We now investigate this inconsistency for the bivariate distribution~\eqref{eqn:Ex44setup} after marginal standardisation. We start by calculating the marginal distributions of $X$ and $Y$ to see if they are individually in the domain of attraction of the univariate extreme value distribution~\eqref{eq:Tail}. Figure \ref{fig:Example44} left panel shows that the random variable $X$ (respectively $Y$) can take values between 0 and 1 when $B=3$ ($B=4$), while $B\neq3$ ($B\neq 4$) leads to values of $X$ ($Y$) lying between 1 and 2. The cumulative distribution function of $X$ is \[ \Pr(X\leq x ) = \begin{cases} x/4 & \mbox{if}~0\leq x\leq 1,\\ 3x/4 - 1/2 &\mbox{if}~1\leq x\leq 2, \end{cases} \] and for $Y$ we have \[ \Pr(Y\leq y ) = \begin{cases} y/4 & \mbox{if}~0\leq y\leq 1,\\ y/2 - (2-y)^2/4 &\mbox{if}~1\leq y\leq 2. \end{cases} \] Detailed calculations for $\Pr(X\leq x )$ and $\Pr(Y\leq y )$ are provided in Appendix A.4. For these two marginals it is straightforward to show that they are each in the domain of attraction of a univariate extreme value distribution, with parameters $\gamma_X=\gamma_Y=-1$. Transformation of $X$ and $Y$ to Laplace margins gives \[ x = \begin{cases} 2 \exp(x_L) &\mbox{if}~x_L\leq-\log 2,\\ 2/3 + (2/3) \exp(x_L) &\mbox{if}~-\log 2<x_L\leq0,\\ 2 - (2/3) \exp(-x_L) &\mbox{if}~x_L>0, \end{cases} \] and \[ y = \begin{cases} 2 \exp(y_L) &\mbox{if}~y_L\leq-\log 2,\\ 3 - \sqrt{5-2\exp(y_L)} &\mbox{if}~-\log 2<y_L\leq0,\\ 3 - \sqrt{1+2\exp(-y_L)} &\mbox{if}~y_L>0. \end{cases} \] Applying these transformations does not change the property that the marginal distributions are in the domain of attraction of a univariate extreme value distribution, only now with $\gamma_{X_L}=\gamma_{Y_L}=0$, and we still have that $(X_L,Y_L)$ is not in the domain of attraction of a bivariate extreme value distribution. The following calculations show that, with standardisation to Laplace margins, the conditional limiting distribution of $Y_L\mid(X_L>x_L)$ fails to meet the conditions of \citet{HT2004} as $x_L\to\infty$, unlike the conditional $Y\mid(X>t)$, as $t\to2$, which falls in the CEVM class of \citet{HeffernanResnick2007}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Images/Example44Original.pdf} \hspace{0.5cm} \includegraphics[width=0.4\textwidth]{Images/Example44Transformed.pdf} \caption{Illustration of 2,000 samples for the framework in Example 4.4. The left panel shows the simulated observations $(x,y)$ on the original scale, while the right panel corresponds to the transformed samples $(x_L, y_L)$, with the behaviour for each value of $B$ highlighted: the lightest shade corresponds to $B=1$, and the points with the darkest shade are the samples for $B=4$.} \label{fig:Example44} \end{figure} We explore the conditional distributions by looking at the relations between $(X_L,Y_L)$ for each value of $B$. For $B=1$, the expressions $X = 2-\psi_{1/2}(Z_P)$ and $Y=2-1/Z_P$ give $Y = 2 - g_{1/2}(2-X)$. To study the limiting behaviour, we again replace $X$ and $Y$ by their Laplace-transformed expressions.
Figure~\ref{fig:Example44} right panel shows that, for $B=1$, large values of $X_L$ lead to large values of $Y_L$ and we thus consider the equality \begin{align*} 3 - \sqrt{1 + 2\exp(-y_L)} &= 2 - g_{1/2} \{(2/3) \exp(-x_L)\}\\ &= 2 - (1/3) \exp(-x_L)[2 + \sin\{ \log (2/3) - x_L \}]. \end{align*} For notational brevity, we define $h_1(x_L) = (1/3) \left[2 + \sin\{\log (2/3) - x_L\}\right]$ and we note $(1/3) \leq h_1(x_L) \leq 1$. By simplifying the terms and taking squares on both sides, we get \[ 1+2\exp(-y_L) = 1 + 2\exp(-x_L) h_1(x_L) + \exp(-2x_L) \left[h_1(x_L)\right]^2. \] Further simplifying the terms and taking logs, we end up with \begin{equation} y_L = x_L - \log h_1(x_L) - \log\left[ 1 + (1/2) \exp(-x_L) h_1(x_L)\right], \label{eqn:diagonal.asypmt} \end{equation} which we can write as $y_L = x_L - \log h_1(x_L) - (1/2) \exp(-x_L) h_1(x_L) + O\left(\exp(-2x_L)\right)$. For $B=2$, the expressions $X=2-\psi_{-1/2}(Z_P)$ and $Y=2-1/\sqrt{Z_P}$ give that $Y = 2 - \sqrt{g_{-1/2}(2-X)}$. On the Laplace scale, we then have \[ 3 - \sqrt{1 + 2\exp(-y_L)} ~=~ 2 - \sqrt{g_{-1/2} \left\{(2/3) \exp(-x_L)\right\}}\\ ~=~ 2 - \exp(-x_L/2) h_2(x_L), \] where $h_2(x_L) = \sqrt{ (2/3) - (1/3) \sin\{ \log (2/3) - x_L\}}$. By taking squares on both sides, \[ 1+2\exp(-y_L) = 1+2\exp(-x_L/2) h_2(x_L)+ \exp(-x_L)\left[h_2(x_L)\right]^2. \] Following the same steps as for $B=1$ yields \[ y_L = x_L/2 - \log h_2(x_L) - (1/2)\exp(-x_L/2) h_2(x_L) + O\left(\exp(-x_L)\right). \] The case $B=3$ leads to $x_L < -\log 2$, i.e., $x_L$ does not become large, and thus this mixture component can be ignored when studying $Y_L \mid (X_L>x_L)$ as $x_L\to\infty$. Finally, $B=4$ gives $Y = X-1$, with $2\exp(Y_L) = 2 - (2/3)\exp(-X_L) - 1$. Consequently, \[ y_L = \log\left[1/2 - (1/3)\exp(-x_L)\right]~\to~ -\log 2\qquad\mbox{as}~x_L\to\infty. \] Now we combine the different mixture components, setting $a_{\mid X}(x) = x - \exp(-x) h_1(x)$ and $b_{\mid X}(x) = -\log h_1(x) + (1/2) h_1(x) \exp(-x)$ in \eqref{eq:HR20071}; as $-\log h_1(x)\geq 0$ and $h_1(x)> 0$ this gives $b_{\mid X}(x)>0$ as required. The limiting behaviour of $Y_L\mid(X_L>x_L)$ as $x_L\rightarrow \infty$, for $B=1$, is then \[ \frac{Y_L - X_L +\exp(-X_L) h_1(X_L)}{-\log h_1(X_L) + (1/2) h_1(X_L) \exp(-X_L)} \sim 1. \] For the remaining components, $B=2$ and $B=4$, $\lim_{x_L\to\infty}(Y_L-a_{\mid X}(X_L))/{b_{\mid X}(X_L)}=-\infty$ for $\{X_L>x_L\}$. However, there is no limiting distribution $G_{\mid X}$ as in \eqref{eq:HR20071} because $\Pr(B=1\mid X_L>x_L)$ oscillates between $1/6$ and $1/2$ as $x_L\to\infty$; that is, $\Pr\left( \{Y_L-a_{\mid X}(x_L)\} / b_{\mid X}(x_L) \le z \mid X_L > x_L \right)$ does not converge. This oscillating behaviour is found by considering $\Pr(B=1\mid X>t)$ for $1<t<2$. Using similar calculations to those in Appendix A.4, we find \[ \Pr(B=1\mid X>t) = \frac{\Pr(X>t\mid B=1) \Pr(B=1)}{\Pr(X>t)} = \frac{1}{3}+\frac{1}{6}\sin\log(2-t), \] which oscillates between 1/6 and 1/2 as $t \to 2$, and this implies that $\Pr(B=1\mid X_L>x_L)$ oscillates between 1/6 and 1/2 as $x_L\to\infty$. Consequently, $Y_L\mid(X_L>x_L)$ as $x_L\to\infty$ does not fall in the class of conditional extreme value models of \cite{Keef2013a}, despite $Y\mid(X>t)$, as $t\to 2$, being in the CEVM class of \citet{HeffernanResnick2007}; see \cite{Drees2017}.
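This oscillation is straightforward to verify by simulation. The Python sketch below is ours and purely illustrative (the sample size and thresholds are arbitrary, and $g_c^{-1}$ is obtained by numerical root finding, which is valid since $g_c$ is increasing for $|c|<1/\sqrt{2}$); it compares the empirical $\Pr(B=1\mid X>t)$ with the expression derived above as $t\to2$, up to Monte Carlo noise.
\begin{verbatim}
# Sketch: empirical check of the oscillation of Pr(B=1 | X > t) in Example 4.4.
import numpy as np
from scipy.optimize import brentq

def g(u, c):
    return u * (1.0 + c * np.sin(np.log(u)))

def g_inv(y, c):
    # numerical inverse of g_c on (0, 1]; g_c is increasing for |c| < 1/sqrt(2)
    return brentq(lambda u: g(u, c) - y, 1e-12, 1.0)

rng = np.random.default_rng(0)
n = 200_000
z = 1.0 / rng.random(n)                 # standard Pareto: P(Z > z) = 1/z
b = rng.integers(1, 5, size=n)          # B uniform on {1, 2, 3, 4}

x = np.where(b == 3, 1.0 - 1.0 / z, 2.0 - 1.0 / z)   # B = 3 and B = 4 cases
for c, label in ((0.5, 1), (-0.5, 2)):               # B = 1 and B = 2 cases
    idx = b == label
    x[idx] = 2.0 - np.array([g_inv(1.0 / zi, c) for zi in z[idx]])

for t in (1.5, 1.9, 1.99, 1.999):
    empirical = np.mean(b[x > t] == 1)
    theory = 1.0 / 3.0 + np.sin(np.log(2.0 - t)) / 6.0
    print(f"t = {t}: empirical {empirical:.3f}, theory {theory:.3f}")
\end{verbatim}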
So far we have focused on the conditional distribution of $Y_L\mid (X_L>x_L)$ as $x_L\rightarrow \infty$, but there is also interest in the asymptotic behaviour of the reverse conditional $X_L\mid (Y_L>y_L)$ as $y_L\to\infty$. Here Figure~\ref{fig:Example44} provides some insight into what happens, with only the mixture terms corresponding to $B=1$ and $B=3$ contributing to the tail of $Y_L$. Further, expression~\eqref{eqn:Ex44setup} shows that $Y_L\mid (B=1)$ and $Y_L \mid (B=3)$ are identical, which gives that the limiting distribution of $X_L \mid (Y_L>y_L)$ as $y_L\rightarrow \infty$ must be a mixture distribution with weight $1/2$ on each component. When $B=3$ we see that $X_L$ does not grow with $Y_L$, so any norming of $X_L$ that handles the growth of $X_L$ with $y_L$ in $X_L\mid (Y_L>y_L)$ will lead to mass tending to $-\infty$ when $B=3$. So it remains to consider the $B=1$ case. The deterministic relationship in expression~\eqref{eqn:diagonal.asypmt} between $X_L$ and $Y_L$ gives $Y_L\geq X_L$, up to an exponentially small correction, conditional on $X_L$ being above a sufficiently high threshold, because $-\log h_1(x) \geq 0$ while $\log[1+(1/2) \exp(-x) h_1(x)]=O(\exp(-x))$ for all $x$. Furthermore, the relation between $X_L$ and $Y_L$ is bijective, because the first derivative in~\eqref{eqn:diagonal.asypmt} is strictly positive. Consequently, we can invert the relation between $Y_L$ and $X_L$ when $B=1$, and this gives for $y_L\to\infty$ that \[ x_L= y_L-Q(y_L), \] where $Q(y_L)$ is an oscillating function that is bounded above and non-negative up to exponentially small terms. When $B=1$, we thus obtain the limiting behaviour \[ \frac{x_L - y_L}{Q(y_L)} \sim -1. \] Hence we have that for all $z\in \mathbb{R}$, as $y_L\rightarrow \infty$, \[ \Pr\left(\left.\frac{X_L - Y_L}{Q(Y_L)}<z\right| Y_L>y_L\right)\rightarrow 0.5[1+\mathbf{I}(z>-1)]. \] Thus, the reverse conditional has a more straightforward behaviour. We further note that the transformation to Laplace margins does not lead to ($X_L,Y_L$) lying in the class of multivariate extreme value models; \cite{Drees2017} showed that ($X,Y$) is not multivariate extreme value distributed either. Consequently, this is an example for which the limiting behaviour of $Y_L \mid (X_L>x_L)$ is not in the class of conditional extremes models of \cite{Keef2013a} and $(X_L,Y_L)$ does not lie in the domain of attraction of a multivariate extreme value distribution. \section{Conclusions and Discussion} Our calculations show that standardisation to common Laplace margins resolves the problems highlighted by Examples~2.3 to 4.4 in \cite{Drees2017}. In Example~2.3, this fixed choice of standardisation implied a unique limit distribution $G_{\mid X}$, while the CEVM framework of \cite{HeffernanResnick2007} allowed the limit measure $\mu_{Y\mid X>}$ to vary with the standardisation used. This example also highlighted that it is necessary to allow $G_{\mid X}$ to have mass at $\{-\infty\}$, because the limit might otherwise be degenerate. Consequently, while \cite{Drees2017} advocate the condition $\mu_{Y\mid X>}\left(E^{(\gamma_X)}\times\{-\infty,\infty\}\right)=0$ to ensure uniqueness, it is sufficient to require $\lim_{z\to\infty} G_{\mid X}(z)=1$ for the conditional extremes model of \cite{Keef2013a}. A non-degenerate limit distribution $G_{\mid X}$ was found in Examples 3.1 and 3.2; however, the functions $a_{\mid X}$ and $b_{\mid X}$ in \eqref{eq:HR20071} were not of the simple parametric form of \cite{Keef2013a}.
Example 3.1 further shows that standardisation to Laplace margins can result in a non-degenerate limit, despite no standardisation of the form considered by \cite{Das2011} existing for the CEVM framework. Example~4.2 in \cite{Drees2017} showed that $(X,Y)$ being multivariate extreme value distributed is not sufficient for the distributions of $Y\mid (X>t)$ and $X \mid (Y>t)$ to be in the CEVM class as $t$ approaches the upper end point of $X$ and $Y$ respectively, while Example~4.4 illustrated that $X\mid(Y>t)$ and $Y\mid (X>t)$ being CEVM does not imply that the distribution of $(X,Y)$ is in the domain of attraction of a multivariate extreme value distribution. In contrast, after standardisation of $(X,Y)$ to Laplace margins, giving $(X_L,Y_L)$, our calculations show that the link between the conditional extremes models of \cite{Keef2013a} and the class of multivariate extreme value distributions remains an open research question. Specifically, Examples 4.2 and 4.4 are ruled out as evidence for the limit~\eqref{eq:HR20071}, and the associated result for $X_L\mid Y_L$, not being equivalent to the domain of attraction condition of a bivariate extreme value distribution. Finally, we note that the examples of \cite{Drees2017} illustrate some statistical limitations of the \cite{HT2004} framework even with the standardised Laplace margins of \citet{Keef2013a}. Two particular areas relate to handling mixture distributions for $G_{\mid X}$ and the choice of parametric families for the normalising functions $a_{\mid X}$ and $b_{\mid X}$. We discuss these in turn below. Many of the examples of \cite{Drees2017} involved a mixture structure for $(X,Y)$, and hence also for $(X_L,Y_L)$. Although it was possible to identify normalising functions giving a non-degenerate $G_{\mid X}$ in these cases, it was no surprise that $G_{\mid X}$ was also a mixture distribution. From a statistical perspective, the only complication with $G_{\mid X}$ having a mixture structure is when $G_{\mid X}$ puts an atom of mass at $\{-\infty\}$; Example~3.1 is the only example where $\{-\infty\}$ has mass zero. The complication with limiting mass at $\{-\infty\}$ is that at non-asymptotic levels this mass will be at a finite value, with its precise value depending on the associated conditioning value, e.g., $x_L$ in this setup. Statistical methods have recently been developed by \cite{Tendijck2021} which extend the Heffernan--Tawn conditional extreme value model to handle exactly this situation. \citet{Keef2013a} propose parsimonious canonical parametric families for the normalising functions, $a_{\mid X}(x)=\alpha x$ and $b_{\mid X}(x)= x^\beta$, which appear suitable for a wide range of published data applications. The examples in \cite{Drees2017} add to the list (first noted by \citet{Papastathopoulos2016}) of theoretical joint distributions for $(X_L,Y_L)$ with normalising functions that lie outside the canonical class. Clearly, the canonical families cannot be extended to cover all of these theoretical examples in a parsimonious way. So the most natural line of future research is to identify whether it is possible to quantify the errors that can arise from inappropriate usage of the canonical families in these examples, with the error relating to the bias of estimated probabilities of extreme events for finite extrapolations.
{ "timestamp": "2022-09-23T02:13:08", "yymm": "2209", "arxiv_id": "2209.10936", "language": "en", "url": "https://arxiv.org/abs/2209.10936" }
\section{Introduction} In \cite{mossel2015shotgun}, Mossel and Ross introduced the shotgun assembly of graphs. The shotgun assembly of a graph means reconstructing the graph from a collection of vertex neighbourhoods. The motivation comes from DNA shotgun assembly (determining a DNA sequence from multiple short nucleobase chains), reconstruction of neural networks (reconstructing a big neural network from subnetworks), and the random jigsaw puzzle problem. See \cite{MR3969756} and references therein. Recent developments on random jigsaw puzzles can be found in \cite{bordenave2020shotgun} and \cite{martinsson2016shotgun}. Graph shotgun assembly has been studied extensively for various models. For example, random regular graphs and labelled graphs were considered in \cite{mossel2015shotgun} and \cite{MR3969756}, respectively. The reconstruction of the Erd\H{o}s--R\'enyi graph is well studied. The Erd\H{o}s--R\'enyi graph \cite{ergraph, erdos1960evolution}, denoted by ${\mathcal G}(n,p_n)$, is a random graph on $n$ vertices, where each edge is added independently with probability $p_n\in [0,1]$. In \cite{gaudio2020shotgun}, Gaudio and Mossel showed that ${\mathcal G}(n,p_n)$ with $p_n=n^{-\alpha}$ is reconstructable from its $1$-neighbourhoods if $0<\alpha<1/3$ and not reconstructable if $1/2<\alpha<1$. Later, Huang and Tikhomirov showed that $\mathcal G(n,p_n)$ with $p_n=n^{-\alpha}$ is reconstructable from its $1$-neighbourhoods if $0<\alpha<1/2$ and not reconstructable if $1/2<\alpha<1$ \cite{huang2021shotgun}. The reconstruction of ${\mathcal G}(n,p_n)$ from its $3$- and $2$-neighbourhoods is considered in \cite[Theorem 4.5]{MR3969756} and \cite[Theorem 4]{gaudio2020shotgun} respectively. In this article, by generalising the notion of graph shotgun assembly, we introduce the notion of shotgun assembly of simplicial complexes. See Section \ref{sec:preli}. The problem of shotgun assembly essentially asks whether the local structure of a complex contains all the information about its global structure. Here we only consider the shotgun assembly problem for the Linial-Meshulam model. However, our notion of shotgun assembly of simplicial complexes can potentially be used for other simplicial complexes, for example, the multi-parameter random simplicial complexes \cite{CF2016book, FCF2019}. The Linial-Meshulam model, denoted by $Y_d(n,p_n)$, is a random $d$-dimensional simplicial complex on $n$ vertices with a complete $(d-1)$-skeleton, in which each $d$-dimensional simplex is added independently with probability $p_n\in [0,1]$. See Section \ref{sec:preli} for details. In \cite{linialmeshulam}, Linial and Meshulam introduced this model for $d=2$. Later, it was extended to $d\ge 3$ by Meshulam and Wallach \cite{meshulamwallach}. Since then this model has been studied extensively; for example, see \cite{HJ2013, GW2016, LP2016, KR2017, HS2017, PR2017, LP2019, HK2019, LP2022}. Observe that $Y_1(n,p_n)={\mathcal G}(n,p_n)$; in other words, $d=1$ recovers the Erd\H{o}s--R\'enyi graph. We show that $Y_d(n,p_n)$ for any $d \in \mathbb{N}$ with $p_n=n^{-\alpha}$ is reconstructable from its $1$-neighbourhoods if $0<\alpha<1/3$ and not reconstructable if $1/2<\alpha<1$. See Theorems \ref{main.thm.1} and \ref{main.thm.2}. The meaning of reconstruction of a simplicial complex from its $1$-neighbourhoods is given in Section \ref{sec:preli}. We believe that the range $0<\alpha<1/3$ of reconstruction is not optimal; the optimal range should be $0<\alpha<1/2$.
See Conjecture \ref{conj}. The rest of the article is organized as follows. In Section \ref{sec:preli} we introduce the definition of the reconstruction of simplicial complexes, and the relevant notation. The two main results are stated in Section \ref{sec.mainres}. The proofs of Theorems \ref{main.thm.1} and \ref{main.thm.2} are given in Sections \ref{sec.pfmain1} and \ref{sec.pfmain2} respectively. \section{Preliminaries}\label{sec:preli} Let $X_0$ be a finite set. A {\it finite abstract simplicial complex} $X$ on $X_0$ is a collection of subsets $S\subset X_0$ satisfying the following property: \[ T\subset S \mbox{ and } S\in X \implies T\in X. \] For example, $X=\{\emptyset, \{1\},\{2\},\{3\},\{1,2\},\{2,3\},\{1,3\},\{1,2,3\}\}$ is an abstract simplicial complex on $\{1,2,3\}$. We call a set $S\in X$ with $|S|=k+1$ a $k$-dimensional simplex. In particular, a vertex is a $0$-simplex, an edge is a $1$-simplex, a triangle is a $2$-simplex, and so on. We also adopt the convention that $\dim(\emptyset)=-1$. For ease of writing, we write {\it complex} instead of abstract simplicial complex in the rest of the article. The maximum of the dimensions of all simplexes in $X$ is called the dimension of the complex $X$, denoted by $\dim(X)$. That is, \[ \dim(X):=\max\{\dim(S)\; : \; S \in X\}. \] Observe that if $\dim(X)=1$ then $X$ can be viewed as a graph. For $0\le j\le \dim(X)$, the set of all $j$-dimensional simplexes of $X$ is denoted by $$ X^j:=\{\sigma\in X\; : \; \dim(\sigma)=j\}. $$ We say $\sigma, \sigma'\in X^j$ are neighbours if $\sigma\cup \sigma'\in X^{j+1}$, and we then write $\sigma\sim \sigma'$. A similar notion was introduced in \cite{PR17}. We say the distance between $\sigma,\sigma'\in X^j$ is $k\in \mathbb{N}\cup\{0\}$ if $k$ is the least possible number such that there exist $\sigma_0,\sigma_1,\ldots, \sigma_k\in X^j$ with $\sigma=\sigma_0$ and $\sigma_k=\sigma'$ such that $\sigma_i\sim \sigma_{i+1}$ for $0\le i\le k-1$. We then write $\mbox{\rm dist}(\sigma, \sigma')=k$. Define \[ X_{\sigma,k}:=\{\sigma'\in X^j\; : \; \mbox{\rm dist}(\sigma,\sigma')\le k\}, \] the set of all $j$-simplexes which are within distance $k$ from $\sigma$. Clearly $\sigma\in X_{\sigma,k}$ for all $k\ge 0$. Note that if $k=0$ or $\dim(\sigma)=\dim(X)$ then $X_{\sigma,k}=\{\sigma\}$, as in the latter case there is no $\sigma'(\neq \sigma)\in X$ such that $\sigma'\sim \sigma$. Thus $k=0$ and $\dim(\sigma)=\dim(X)$ are two trivial cases. Let $k\ge 1$ and $j<\dim(X)$. The {\it $k$-neighbourhood} of $\sigma\in X^j$ is the $(j+1)$-dimensional sub-complex induced by $X_{\sigma,k}$, denoted by $N_{k,X}(\sigma)$. That is, \[ N_{k,X}(\sigma):=\{\tau\in X\; : \; \tau \subseteq \sigma'\cup \sigma'' \mbox{ for some } \sigma',\sigma''\in X_{\sigma,k}\}. \] In particular, if $\dim(X)=1$ and $v\in X_0$ then $N_{1,X}(v)$ refers to the sub-graph induced by $v$ and its neighbours $\{w\in X_0\; : \; v\sim w\}$. We say two complexes $X$ and $Y$ (on $X_0$ and $Y_0$ respectively) are {\it isomorphic} (denoted by $X\simeq Y$) if there exists a bijective function $f: X_0 \to Y_0$ such that \[ \{\sigma^0,\sigma^1,\ldots,\sigma^k\}\in X \Longleftrightarrow \{f(\sigma^0),\ldots, f(\sigma^k)\}\in Y, \mbox{ for $0\le k\le \dim(X)$}. \] It is clear that if $X\simeq Y$ then $|X_0|=|Y_0|$ and $\dim(X)=\dim(Y)$. If $\dim(X)=1$ then $X\simeq Y$ means the two graphs $X$ and $Y$ are isomorphic.
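The neighbour relation and the distance just defined are easy to compute directly; the following Python sketch (our illustration only, storing simplexes as frozensets) computes $\mbox{\rm dist}(\sigma,\sigma')$ between $j$-simplexes by breadth-first search.
\begin{verbatim}
# Sketch: distance between j-simplices via BFS over the neighbour relation
# sigma ~ sigma' iff sigma union sigma' is a (j+1)-simplex of X.
from collections import deque

def dist(sigma, sigma_prime, Xj, Xj1):
    """BFS distance between sigma, sigma' in X^j; Xj1 is the set X^{j+1}."""
    start, goal = frozenset(sigma), frozenset(sigma_prime)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        cur, k = queue.popleft()
        if cur == goal:
            return k
        for other in Xj:
            if other not in seen and (cur | other) in Xj1:
                seen.add(other)
                queue.append((other, k + 1))
    return None                          # sigma and sigma' are not connected

# Example: the 2-dimensional complex on {1, 2, 3} with all faces present.
X1 = {frozenset(s) for s in [(1, 2), (2, 3), (1, 3)]}
X2 = {frozenset((1, 2, 3))}
print(dist((1, 2), (2, 3), X1, X2))      # prints 1
\end{verbatim}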
We say two complexes $X$ and $\widetilde{X}$ on $X_0$ have the same $k$-neighbourhoods if \begin{align}\label{eqn:k-neighbourh} N_{k,X}(\sigma)\simeq N_{k ,\widetilde{X}}(\sigma) \mbox{ for all $\sigma\in X$,} \end{align} that is, the $k$-neighbourhoods of all simplexes in the two complexes are isomorphic. In this case we write $X\simeq_k \widetilde{X}$. Observe that if $\dim(\sigma)=\dim(X)$ then \eqref{eqn:k-neighbourh} holds trivially. The definition of $X\simeq_k \widetilde{X}$ implies that $\sigma\in X$ if and only if $\sigma\in \widetilde{X}$. In particular, if $\dim(X)=1$ then $X\simeq_k \widetilde{X}$ implies that the $k$-neighbourhoods of each $v\in X_0$ in $X$ and $\widetilde{X}$ are isomorphic as graphs. A complex $X$ on $X_0$ is said to be {\it reconstructable} (up to isomorphism) from its $k$-neighbourhoods if $X \simeq_k \widetilde{X}$ implies $X \simeq \widetilde{X}$, for all complexes $\widetilde{X}$ on $X_0$. Further, we say $X$ is {\it exactly reconstructable} if $X \simeq_k \widetilde{X}$ implies $X=\widetilde{X}$. We study whether the Linial-Meshulam model is reconstructable from its $1$-neighbourhoods. The Linial-Meshulam model is a random complex of the form \eqref{eqn:complex}. In the rest of the article, for $d\in \mathbb{N}$, the complex will be of the form \begin{equation}\label{eqn:complex} X:=\{\emptyset, X_0,X_1,\ldots,X_{d-1},X^d\}:=\left(\bigcup_{k=-1}^{d-1}X_k\right)\cup X^d, \end{equation} where $X_{-1}:=\emptyset$, $X_0 := \{1,2,\ldots,n\}$, $X_k :=\{\{i_0,\ldots,i_k\}: 1\leqslant i_0 < \cdots < i_k \leqslant n\}$, for $1\le k\le d$, and $X^d\subseteq X_d$. Note that $X_k$ denotes the set of all $k$-dimensional simplexes on $X_0$. In this model, the complex contains all the simplexes up to dimension $(d-1)$ and some of the $d$-dimensional simplexes. Note that if two complexes $X$ and $\widetilde{X}$ on $X_0$ are of the form \eqref{eqn:complex} then \[ N_{k,X}(\sigma)=N_{k,\widetilde{X}}(\sigma), \; \mbox{ whenever $\dim(\sigma)\le d-2$}. \] The neighbourhoods can differ only if $\dim(\sigma)=d-1$. Thus, in this case, the collection of $k$-neighbourhoods of $X$ will be denoted by \begin{align}\label{eqn:kneighbour} \mathcal N_{k}(X):=\{N_{k,X}(\sigma)\; : \; \sigma\in X_{d-1}\}. \end{align} We say a complex $X$ of the form \eqref{eqn:complex} is reconstructable from its $k$-neighbourhoods if, for all $\widetilde{X}$ of the form \eqref{eqn:complex}, \[ X\simeq \widetilde{X} \mbox{ whenever } N_{k,X}(\sigma)\simeq N_{k,\widetilde{X}}(\sigma) \mbox{ for all } \sigma\in X_{d-1}. \] Similarly, we say $X$ is exactly reconstructable from its $k$-neighbourhoods if \( X= \widetilde{X} \mbox{ whenever } N_{k,X}(\sigma)\simeq N_{k,\widetilde{X}}(\sigma) \mbox{ for all } \sigma\in X_{d-1} . \) The {\it degree} of a simplex $\sigma\in X_{d-1}$ is denoted by $$ \deg(\sigma)=\deg_X(\sigma):= \sum_{\tau \in X^d}{\mathbf 1}_{\{\sigma\subset \tau\}}, $$ the number of $d$-dimensional simplexes containing $\sigma$. Observe that $\tau\in X^d$ contributes a non-zero value to this sum if and only if $\tau=\sigma\cup\{v\}$ for some $v\in X_0\backslash \sigma$. The set of neighbours of $\sigma\in X_{d-1}$ is denoted by $S_\sigma$, that is, \[ S_\sigma:=\{\sigma'\in X_{d-1}\; : \; \sigma'\sim \sigma\}. \] Note that the number of elements in $S_\sigma$ is $d$ times $\deg(\sigma)$, that is, \[ |S_\sigma|=d\deg(\sigma). \] For any finite set $A$, the notation $|A|$ denotes the number of elements in $A$. For an example see Figure \ref{fig:degree}.
\begin{figure}[h] \includegraphics[scale=0.1]{degree} \caption{ Here $d=2$, $\deg_X(\{1,2\})=3$ and $|S_{\{1,2\}}|=6$.}\label{fig:degree} \end{figure} Next we recall the Linial-Meshulam model, which is a random simplicial complex. Let $X_{d,p_n} \left(\subseteq X_d\right)$ denote the collection of random $d$-simplexes, where each $\sigma\in X_d$ is chosen independently with probability $p_n$. Define a random simplicial complex $$Y_d(n,p_n) := \{\emptyset, X_0,\ldots,X_{d-1},X_{d,p_n}\},$$ which is known as the {\it Linial-Meshulam model}. Observe that $Y_d(n,p_n)$ contains all the simplexes up to dimension $(d-1)$, whereas each $d$-dimensional simplex is included in the complex independently with probability $p_n$. It is easy to see that the degree of $\sigma\in X_{d-1} $ in $Y_d(n,p_n)$ is a Binomial random variable with parameters $(n-d, p_n)$, that is, $\deg_{Y_d(n,p_n)}(\sigma)\sim Bin(n-d, p_n)$. The use of the notation \enquote{$\sim$} will always be clear from the context, as the same symbol is also used for two neighbouring simplexes. \section{Main Results}\label{sec.mainres} In this section we state our main results and give the key idea of the proofs. Let us first define what we mean by high-probability events. We say a sequence of events $A_n$ occurs \emph{with high probability} if \[ P(A_n^c)=o\left( \frac{1}{n^s}\right)\,, \] for some $s>0$. We write $a_n = o(b_n)$ for two sequences of numbers $\{a_n\}_{n=1}^{\infty}$ and $\{b_n\}_{n=1}^{\infty}$ if $\left|a_n/b_n\right|\to 0$ as $n \to \infty$. In \cite{gaudio2020shotgun}, it was shown that the Erd\H{o}s-R\'enyi graph ${\mathcal G}(n,p_n)=Y_1(n,p_n)$ with $p_n=n^{-\alpha}$ can be exactly reconstructed from its $1$-neighbourhoods with high probability when $0< \alpha <1/3$. We extend this result to all $d\in \mathbb{N}$. \begin{theorem} \label{main.thm.1} The Linial-Meshulam model $Y_d(n,p_n)$, where $p_n=n^{-\alpha}$ for $0< \alpha< 1/3$, is exactly reconstructable from its $1$-neighbourhoods with high probability. \end{theorem} The idea of the proof of Theorem \ref{main.thm.1} is similar to the proof of \cite[Theorem 2]{gaudio2020shotgun} (that is, the Erd\H{o}s-R\'enyi graph $\mathcal G(n,p_n)\equiv Y_1(n,p_n)$ is reconstructable for $0<\alpha < 1/3$), but the details require more care. We do not believe the range $0<\alpha<1/3$ is optimal for the reconstruction of $Y_d(n,p_n)$. We have the following conjecture. \begin{conjecture}\label{conj} The Linial-Meshulam model $Y_d(n,p_n)$ with $p_n=n^{-\alpha}$ is exactly reconstructable from its $1$-neighbourhoods with high probability if $0<\alpha <1/2$. \end{conjecture} One can try to prove Conjecture \ref{conj} using the method used in \cite{huang2021shotgun}. Another direction of work would be to consider the reconstruction problem from $2$-neighbourhoods using the method of \cite{gaudio2020shotgun}. These remain for future work. The next result is about non-reconstructability of $Y_d(n,p_n)$. For $d=1$, the graph ${\mathcal G}(n,p_n)\equiv Y_1(n,p_n)$ is non-reconstructable from its $1$-neighbourhoods with high probability when $1/2<\alpha<1$ \cite{gaudio2020shotgun}. We show that the same result holds for all $d\ge 1$. \begin{theorem}\label{main.thm.2} The Linial-Meshulam complex $Y_d(n,p_n)$, where $p_n=n^{-\alpha}$ for $1/2< \alpha< 1$, cannot be reconstructed from its $1$-neighbourhoods, with high probability. \end{theorem} \noindent Again the idea of the proof of this result is similar to the proof of \cite[Theorem 3]{gaudio2020shotgun}, but the calculations are more complicated.
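To make the model concrete, the following Python sketch (illustrative only; the parameter choices are ours) samples the $d$-dimensional simplexes of $Y_d(n,p_n)$ and computes the degree of a $(d-1)$-simplex, whose distribution is $Bin(n-d,p_n)$; by the identity above, $|S_\sigma|=d\deg(\sigma)$.
\begin{verbatim}
import random
from itertools import combinations

def sample_LM(n, d, p):
    """The random d-simplexes X_{d,p} of Y_d(n, p);
    the full (d-1)-skeleton is implicit."""
    return {frozenset(t)
            for t in combinations(range(1, n + 1), d + 1)
            if random.random() < p}

def degree(sigma, Xd):
    """deg(sigma): number of d-simplexes containing sigma."""
    return sum(1 for t in Xd if sigma < t)

n, d, alpha = 60, 2, 0.3
p = n ** (-alpha)
Xd = sample_LM(n, d, p)
sigma = frozenset({1, 2})
print(degree(sigma, Xd))      # ~ Bin(n - d, p), mean (n - d) * p, about 17
print(d * degree(sigma, Xd))  # |S_sigma| = d * deg(sigma)
\end{verbatim}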
\section{Proof of Theorem \ref{main.thm.1}}\label{sec.pfmain1} In this section we prove Theorem \ref{main.thm.1}. The following lemmas will be used in the proof. Throughout we use $p_n=n^{-\alpha}$, where $0<\alpha<1$. We first state a generalization of the fingerprint lemma \cite[Lemma 2]{gaudio2020shotgun}. Let $\sigma_1, \sigma_2 \in X_{d-1}$. We say there is an edge between $\sigma_1$ and $\sigma_2$ in $X$, denoted by $(\sigma_1,\sigma_2)$, if $\sigma_1\sim \sigma_2$ in $X$. For $\sigma_1\sim \sigma_2$, $H_{\sigma_1,\sigma_2}(X)$ denotes the sub-complex induced by the simplexes of $S_{\sigma_1} \cap S_{\sigma_2}$, that is, \[ H_{\sigma_1,\sigma_2}=H_{\sigma_1,\sigma_2}(X):=\{\tau\in X\; : \; \tau\subseteq \sigma\cup \sigma' \mbox{ for some $\sigma,\sigma'\in S_{\sigma_1} \cap S_{\sigma_2}$}\}. \] Two edges $(\sigma_1,\sigma_2)$ and $(\sigma_3,\sigma_4)$ are said to be equal, denoted by $(\sigma_1,\sigma_2)=(\sigma_3,\sigma_4)$, if either $\sigma_1 =\sigma_3$ and $\sigma_2 = \sigma_4$, or $\sigma_1 =\sigma_4$ and $\sigma_2 = \sigma_3$. It is clear that if $(\sigma_1,\sigma_2)=(\sigma_3,\sigma_4)$ then $H_{\sigma_1,\sigma_2}\simeq H_{\sigma_3,\sigma_4}$. If $\dim(X)=1$ and $v_1,v_2\in X_0$ such that $v_1\sim v_2$ then $H_{v_1,v_2}$ is the subgraph induced by the common neighbours of $v_1$ and $v_2$. \begin{lemma}[Fingerprint Lemma]\label{lemma.gm.1} Let $X$ be a complex of the form $\{\emptyset, X_0,\ldots,X_{d-1},X^d\}$ where $X^d\subseteq X_d$. If two edges $(\sigma_1,\sigma_2)$ and $(\sigma_3,\sigma_4)$ are equal whenever $H_{\sigma_1,\sigma_2}$ and $H_{\sigma_3,\sigma_4}$ are isomorphic then $X$ can be exactly reconstructed from the collection of its $1$-neighbourhoods. \end{lemma} In the next lemma, we give an upper bound (with high probability) on the number of simplexes that are neighbours of both $\sigma_1,\sigma_2\in X_{d-1}.$ \begin{lemma}\label{lem.recon.1} Let $\sigma_1,\sigma_2\in X_{d-1}$ such that $\sigma_1\cup \sigma_2\in X_{d,p_n}$, that is, $\sigma_1\sim \sigma_2$. The number of simplexes that are neighbours of $\sigma_1$ and $\sigma_2$ is denoted by $W_{\sigma_1,\sigma_2}$, that is, \[ W_{\sigma_1,\sigma_2}:=|\{\sigma\in X_{d-1} \; : \; \sigma\sim \sigma_1, \sigma \sim\sigma_2 \}|. \] Then there exists a positive constant $C$ such that \begin{align}\label{eq.1.0.5} P(W_{\sigma_1,\sigma_2} \geqslant d-1+n^c(n-d-1)p_n^2) \leqslant \exp (-Cn^{1+c-2\alpha}). \end{align} In particular, if $c>2\alpha-1$, we obtain $W_{\sigma_1,\sigma_2}-d+1 \leqslant n^{1+c-2\alpha}$ with high probability. \end{lemma} In the next lemma, for $\sigma_1\sim \sigma_2$ and $\sigma_3\sim \sigma_4$, we derive a lower bound (with high probability) on the number of simplexes that are neighbours of both $\sigma_1,\sigma_2\in X_{d-1}$ but not of $\sigma_3, \sigma_4$. We write $a_n=\Theta(b_n)$ if there exist $C_1,C_2>0$ such that $C_1b_n\le a_n\le C_2b_n$ for all large $n$. \begin{lemma}\label{lem.recon.2} Let $\sigma_1, \sigma_2,\sigma_3,\sigma_4\in X_{d-1}$ such that $\sigma_1\sim \sigma_2$ and $\sigma_3\sim \sigma_4$. Define \begin{align*} S&=S_{\sigma_1,\sigma_2,\sigma_3,\sigma_4}:=\{\sigma\in X_{d-1} \; : \; \sigma\sim \sigma_i \mbox{ for } i=1,2,3,4\}, \\ Z&=Z_{\sigma_1,\sigma_2,\sigma_3,\sigma_4}:={\mathbf 1}\{\sigma_1\sim \sigma_3,\sigma_1\sim\sigma_4\}+{\mathbf 1}\{\sigma_2\sim \sigma_3,\sigma_2\sim\sigma_4\}.
\end{align*} If $(1-2\alpha)>0$ then, for large $n$, \begin{equation} \label{eq.1.1} P\left(W_{\sigma_1,\sigma_2} -|S|-Z \leqslant \frac{1}{2}np_n^2\right) \le \exp(-\Theta(n^{1-2\alpha})). \end{equation} \end{lemma} \noindent The proofs of Lemmas \ref{lem.recon.1} and \ref{lem.recon.2} are given at the end of this section. We record \cite[Lemmas 3 and 4]{gaudio2020shotgun}, which will be used in the proofs. \begin{lemma}[Chernoff's bound]\label{lemma.gm.cb} Let $X_1,X_2, \ldots,X_n$ be independent indicator random variables and set $X= \sum_{i=1}^n X_i$. Then for any $\delta >0$, $$P(X\leqslant (1-\delta)\mbox{\bf E}(X)) \leqslant \exp\left(-\frac{\delta^2}{2}\mbox{\bf E}(X) \right) \text{ and}$$ $$P(X\geqslant (1+\delta)\mbox{\bf E}(X)) \leqslant \exp\left(-\frac{\delta^2}{2+\delta}\mbox{\bf E}(X) \right).$$ \end{lemma} \begin{lemma}\cite[Lemma 4]{gaudio2020shotgun}\label{lem:rec.compair} Let $X$ and $Y$ be random variables such that conditioned on $Y$, $X\sim Bin(Y,p)$. Let $Z(m)\sim Bin(m,p)$. Then \[ {\bf P}(X\le t_1\left.\vphantom{\hbox{\Large (}}\right| Y\ge t_2)\le {\bf P}(Z(t_2)\le t_1) \mbox{ and } {\bf P}(X\ge t_2\left.\vphantom{\hbox{\Large (}}\right| Y\le t_1)\le {\bf P}(Z(t_1)\ge t_2). \] \end{lemma} \noindent Now we proceed to prove Theorem \ref{main.thm.1}. \begin{proof}[Proof of Theorem \ref{main.thm.1}] Let $\sigma_1,\sigma_2\in X_{d-1}$ such that $\sigma_1\sim \sigma_2$. Suppose $H_{\sigma_1,\sigma_2}$ denotes the sub-complex induced by the simplexes of $S_{\sigma_1} \cap S_{\sigma_2} $ in $Y_d(n,p_n)$. Note that the sub-complex $H_{\sigma_1,\sigma_2}$ is random. For $\sigma_1,\sigma_2,\sigma_3,\sigma_4\in X_{d-1} $ such that $\sigma_1\sim \sigma_2$, $\sigma_3\sim \sigma_4$, we show that, for any $s>0$, \begin{align}\label{eqn:isomorphic} {\bf P}(H_{\sigma_1,\sigma_2}\simeq H_{\sigma_3,\sigma_4})=o(n^{-s}) \mbox{ whenever } (\sigma_1,\sigma_2)\neq (\sigma_3,\sigma_4). \end{align} Then the result follows from Lemma \ref{lemma.gm.1} and \eqref{eqn:isomorphic}. It remains to prove \eqref{eqn:isomorphic}. Let $S$ be as defined in Lemma \ref{lem.recon.2} and $Y_1$ be the sub-complex induced by the simplexes of $(S_{\sigma_1} \cap S_{\sigma_2}) \backslash (S\cup\{\sigma_3,\sigma_4\})$, the common neighbours of $\sigma_1$ and $\sigma_2$ (excluding $\sigma_3$ and $\sigma_4$) that are not neighbours of both $\sigma_3$ and $\sigma_4$. Let $Y_2$ be the sub-complex induced by the simplexes of $S_{\sigma_3} \cap S_{\sigma_4} $. Note that $Y_1$ and $Y_2$ are disjoint by construction. Observe that if $H_{\sigma_1,\sigma_2}\simeq H_{\sigma_3,\sigma_4}$ then $W_{\sigma_1,\sigma_2}=W_{\sigma_3,\sigma_4}$ and $Y_1$ can be embedded into $Y_2$ as a sub-complex of $Y_2$ (we write $Y_1 \subset Y_2$ with an abuse of notation). Thus \begin{align*} {\bf P}(H_{\sigma_1,\sigma_2} \simeq H_{\sigma_3,\sigma_4})\le {\bf P}(Y_1\subset Y_2). \end{align*} We show that \begin{align}\label{eqn:graph} P\left(Y_1 \subset Y_2\right)\le n^4(n^{an^{1+c-2\alpha}-bn^{2-5\alpha}}+\exp(-Cn^{1+c-2\alpha})), \end{align} where $\max\{0,2\alpha-1\} <c< 1-3\alpha$ and $a,b,C>0$ are constants. The right hand side of the above equation will go to zero if $2-5\alpha > 1+c-2\alpha$, which is equivalent to saying that $c<1-3\alpha$. This is a consistent condition if $\alpha<\frac{1}{3}$.
Applying a union bound, \begin{align*} &{\bf P}\{\exists\, \sigma_1\sim \sigma_2,\ \sigma_3\sim \sigma_4,\ (\sigma_1,\sigma_2)\neq(\sigma_3,\sigma_4) \; : \; H_{\sigma_1,\sigma_2}\simeq H_{\sigma_3,\sigma_4}\}\\&\le n^{4d}{\bf P}\{H_{\sigma_1,\sigma_2}\simeq H_{\sigma_3,\sigma_4}\} \\&\le n^{4(d+1)}(n^{an^{1+c-2\alpha}-bn^{2-5\alpha}}+\exp(-Cn^{1+c-2\alpha})) \\&=o(n^{-s}), \end{align*} for any $s>0$ as $\alpha<1/3$. Thus, for any $(\sigma_1,\sigma_2)\neq(\sigma_3,\sigma_4)$, we have $H_{\sigma_1,\sigma_2}\not\simeq H_{\sigma_3,\sigma_4}$ with high probability if $\alpha<1/3$. This proves the result. The rest of the proof is dedicated to proving \eqref{eqn:graph}. We have \begin{align*} &{\bf P}(Y_1\subset Y_2)\\\le &\sum_{\lambda,\mu,k}{\bf P}\left(Y_1\subset Y_2 ,\; W_{\sigma_1, \sigma_2} = W_{\sigma_3,\sigma_4} = \lambda+Z,\; |S|=\mu,\; Supp_d(Y_1^{d-1})=k\right),\nonumber \end{align*} where $Supp_d(A) :=| \{\sigma_1 \cup \sigma_2 \in X_{d,p_n}\; : \; \sigma_1, \sigma_2 \in A\}|$ for $A \subseteq X_{d-1}$. Note that, given $|S|=\mu$, at most $2\mu+1$ $d$-simplexes are revealed in $X_{d,p_n}$. Therefore \begin{align*} &P\left(Y_1 \subset Y_2 \;\Big|\; W_{\sigma_1, \sigma_2} = W_{\sigma_3,\sigma_4} = \lambda+Z,\; |S|=\mu,\; Supp_d(Y_1^{d-1})=k\right)\nonumber\\ \leqslant & \binom{\lambda+2}{\lambda-\mu} (\lambda -\mu)!p_n^{k-2\mu-1} \\\leqslant& (\lambda+2)^{\lambda-\mu} \left(n^{-\alpha}\right)^{k-2\lambda-1}, \end{align*} as $\mu \leqslant \lambda$ and $p_n=n^{-\alpha}$. Again, $\lambda+2\le n$ and $\lambda-\mu\le \lambda$ imply that \begin{align}\label{eqn:upperp} &P\left(Y_1 \subset Y_2 \;\Big|\; W_{\sigma_1, \sigma_2} = W_{\sigma_3,\sigma_4} = \lambda+Z,\; |S|=\mu,\; Supp_d(Y_1^{d-1})=k\right)\nonumber \\&\le \exp \left(\lambda \log (n) -\alpha(k-2\lambda-1)\log (n)\right)\nonumber \\ &\leqslant \exp\{((2\alpha+1)\lambda+\alpha-\alpha k)\log n\}\nonumber \\&=n^{(2\alpha+1)\lambda+\alpha-\alpha k}. \end{align} Next we complete the proof of \eqref{eqn:graph} using the following two claims: \begin{align}\label{eqn:upperlambda} {\bf P}(\lambda\le n^{1+c-2\alpha})&\ge 1- \exp(-Cn^{1+c-2\alpha}).\\ {\bf P}(k\ge C n^{2-5\alpha} )&\ge 1-\exp(-C_2n^{2-5\alpha}). \label{eqn:lowerk} \end{align} Using \eqref{eqn:upperlambda} and \eqref{eqn:lowerk} in \eqref{eqn:upperp}, we get \begin{align*} P\left(Y_1 \subset Y_2 \;\Big|\; W_{\sigma_1, \sigma_2} = W_{\sigma_3,\sigma_4},\; |S|,\; Supp_d(Y_1^{d-1})\right)\le n^{an^{1+c-2\alpha}-bn^{2-5\alpha}}, \end{align*} for some constants $a,b>0$, with probability at least $1-\exp(-Cn^{1+c-2\alpha})$. Therefore we get \[ P\left(Y_1 \subset Y_2\right)\le n^4\cdot n^{an^{1+c-2\alpha}-bn^{2-5\alpha}}+n^4\exp(-Cn^{1+c-2\alpha}), \] as $\lambda,\mu\le n$ and $k\le n^2$. This completes the proof of \eqref{eqn:graph}. It remains to prove \eqref{eqn:upperlambda} and \eqref{eqn:lowerk}. \vspace{.2cm} \noindent{\it Proof of \eqref{eqn:upperlambda}:} Observe that \eqref{eqn:upperlambda} follows from Lemma \ref{lem.recon.1}. \vspace{.2cm} \noindent {\it Proof of \eqref{eqn:lowerk}:} Clearly, given $W_{\sigma_1, \sigma_2} -|S|-Z$, $Supp_d(Y_1^{d-1})\sim Bin\left(\binom{W_{\sigma_1, \sigma_2} -|S|-Z}{2},p_n\right)$, as each pair of simplexes in $Y_1^{d-1}$ spans a potential $d$-simplex. From Lemma \ref{lem.recon.2}, we have \[ {\bf P}(W_{\sigma_1, \sigma_2} -|S|-Z\ge \frac{1}{2}np_n^2)\ge 1- e^{-\Theta(n^{1-2\alpha})}. \] The right hand side goes to $1$ if $1-2\alpha>0$.
Lemma \ref{lem:rec.compair} and Lemma \ref{lemma.gm.cb} imply that \begin{align*} &P\left(Supp_d(Y_1^{d-1}) \leqslant (1-\epsilon)p_n \binom{\frac{1}{2}np_n^2}{2}\;\Big|\; W_{\sigma_1, \sigma_2} -|S|-Z\ge\frac{1}{2}np_n^2 \right) \nonumber \\&\le P\left(Bin\left(\binom{\frac{1}{2}np_n^2}{2},p_n\right)\leqslant (1-\epsilon)p_n \binom{\frac{1}{2}np_n^2}{2}\right)\nonumber \\&\leqslant \exp\left(-\frac{\epsilon^2}{2}p_n\binom{\frac{1}{2}np_n^2}{2} \right)\nonumber \\ &\leqslant \exp(-C_2n^{2-5\alpha}), \end{align*} for some positive constant $C_2$. Thus we have \begin{align*} {\bf P}(Supp_d(Y_1^{d-1})\ge C n^{2-5\alpha} )\ge 1-\exp(-C_2n^{2-5\alpha}), \end{align*} for some positive constant $C$. This completes the proof. \end{proof} Next we give the proofs of Lemmas \ref{lemma.gm.1}, \ref{lem.recon.1} and \ref{lem.recon.2}. The proof of Lemma \ref{lemma.gm.1} can be derived from \cite[Lemma 2]{gaudio2020shotgun}; for the sake of completeness we give a proof. \begin{proof}[Proof of Lemma \ref{lemma.gm.1}] Since $X$ has complete $(d-1)$-dimensional skeleton, in order to reconstruct $X$ it is enough to check whether any two simplexes $\sigma_1, \sigma_2 \in X_{d-1}$ ($\sigma_1 \neq \sigma_2$) are neighbours in $X$. To determine that, we examine the neighbourhoods of $\sigma_1, \sigma_2$ by observing the sub-complexes $H_{\sigma_1,\sigma_3}$ and $H_{\sigma_2,\sigma_4}$ for neighbours $\sigma_1 \sim \sigma_3$ and $\sigma_2 \sim \sigma_4$. The reconstruction algorithm is as follows: we conclude that $\sigma_1 \sim \sigma_2$ in $X$ if there exist $\sigma_3, \sigma_4 \in X_{d-1}$ such that $\sigma_1 \sim \sigma_3$, $\sigma_2 \sim \sigma_4$ and $H_{\sigma_1,\sigma_3}$ is isomorphic to $H_{\sigma_2,\sigma_4}$. Suppose $\sigma_1 \sim \sigma_2$ in $X$. We choose $\sigma_3 = \sigma_2$ and $\sigma_4 = \sigma_1$. Then $H_{\sigma_1,\sigma_3} = H_{\sigma_2,\sigma_4}= H_{\sigma_1,\sigma_2}$. Conversely, suppose there are some $\sigma_3, \sigma_4 \in X_{d-1}$ such that $\sigma_1 \sim \sigma_3$, $\sigma_2 \sim \sigma_4$ and $H_{\sigma_1,\sigma_3}$ is isomorphic to $H_{\sigma_2,\sigma_4}$. Then the hypothesis of the lemma says that $(\sigma_1,\sigma_3) = (\sigma_2,\sigma_4)$. Therefore $\sigma_1=\sigma_4$ and $\sigma_3=\sigma_2$, because $\sigma_1\neq \sigma_2$. Hence $(\sigma_1,\sigma_2)$ is an edge in $X$; in other words, $\sigma_1\sim \sigma_2$. So, continuing the process described in the algorithm, we recover the complex. \end{proof} \begin{proof}[Proof of Lemma \ref{lem.recon.1}] For $\sigma_1\sim \sigma_2$, define $S_{\sigma_1,\sigma_2}=S_{\sigma_1,\sigma_2}'\cup S_{\sigma_1,\sigma_2}''$ where \begin{align*} S_{\sigma_1,\sigma_2}'&=\{\sigma\in X_{d-1}\; : \; \sigma\sim \sigma_1,\sigma\sim \sigma_2, \sigma\subset \sigma_1\cup \sigma_2\}\\ S_{\sigma_1,\sigma_2}''&=\{\sigma\in X_{d-1}\; : \; \sigma\sim \sigma_1,\sigma\sim \sigma_2, \sigma\nsubseteq \sigma_1\cup \sigma_2\}. \end{align*} Clearly, $W_{\sigma_1,\sigma_2}=|S_{\sigma_1,\sigma_2}'|+|S_{\sigma_1,\sigma_2}''|$. Observe that if $\sigma\in X_{d-1}$ with $\sigma\neq \sigma_1,\sigma_2$ and $\sigma\subset \sigma_1\cup\sigma_2$, then $\sigma\sim \sigma_1$ and $\sigma\sim \sigma_2$. Therefore \begin{align}\label{eqn:s'} |S_{\sigma_1,\sigma_2}'|=d-1. \end{align} Again $\sigma_1\sim \sigma_2$ implies that $\sigma_1\cap \sigma_2\in X_{d-2}$.
Therefore if $\sigma\sim \sigma_1,\sigma_2$ but $\sigma\nsubseteq \sigma_1\cup\sigma_2$ then $\sigma$ will be of the form $(\sigma_1\cap\sigma_2)\cup\{v\}$ for some $v\in X_0\backslash (\sigma_1\cup\sigma_2)$. See Figure \ref{fig:wsigma}. \begin{figure}[h] \includegraphics[scale=0.1]{wsigma} \caption{The simplexes $\{1,5\}$ and $\{1,3\}$ are neighbours of both $\{1,4\}$ and $\{1,2\}$, as the $2$-simplexes $(125), (145), (134), (123)$ are included in the complex.}\label{fig:wsigma} \end{figure} \noindent Therefore $S_{\sigma_1,\sigma_2}''$ can be written as \[ S_{\sigma_1,\sigma_2}''=\{(\sigma_1\cap\sigma_2)\cup\{v\}\; : \; v\in X_0\backslash(\sigma_1\cup\sigma_2),\ \sigma_1\cup\{v\}\in X_{d,p_n} \mbox{ and } \sigma_2\cup\{v\}\in X_{d,p_n}\}. \] This implies that $|S_{\sigma_1,\sigma_2}''|\sim Bin(n-d-1, p_n^2)$, as two $d$-simplexes need to be present in $X_{d,p_n}$ for $(\sigma_1\cap\sigma_2)\cup\{v\}$ to be an element of $S_{\sigma_1,\sigma_2}''$. See Figure \ref{fig:wsigma}. Let $c>0$, to be determined later. Using Lemma \ref{lemma.gm.cb}, we have \begin{align}\label{eqn:s''} P(| S_{\sigma_1,\sigma_2}''|\geqslant n^c(n-d-1)p_n^2) &\leqslant \exp\left(-\frac{n^{2c}}{1+n^c}(n-d-1)n^{-2\alpha}\right) \nonumber\\ & \leqslant \exp (-Cn^{1+c-2\alpha}), \end{align} for some constant $C>0$. We get the result by combining \eqref{eqn:s'} and \eqref{eqn:s''}. Clearly, the right hand side goes to zero if $c>2\alpha-1$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem.recon.2}] Let $\sigma\in S$. Then $\sigma\sim \sigma_1,\sigma_2$ and $\sigma\sim \sigma_3,\sigma_4$ which imply that \begin{align*} \sigma=(\sigma_1\cap \sigma_2)\cup\{v\} \mbox{ and } \sigma=(\sigma_3\cap \sigma_4)\cup\{v'\} \end{align*} for some $v, v'\in X_0$. Thus we get the following identity \begin{align}\label{eqn:identity} (\sigma_1\cap \sigma_2)\cup\{v\}=(\sigma_3\cap \sigma_4)\cup\{v'\}. \end{align} \noindent{\bf Case-I:} Suppose $|\cap_{i=1}^4\sigma_i|=d-1$, that is, $(\sigma_1\cap \sigma_2)=(\sigma_3\cap \sigma_4)$. Then any $v=v'\in X_0\backslash (\cup_{i=1}^4\sigma_i)$ satisfies \eqref{eqn:identity}. Thus \[ {\bf P}((\sigma_1\cap \sigma_2)\cup\{v\}\in S_{\sigma_1,\sigma_2}''\backslash S)=p_n^2(1-p_n^2), \] where $S$ is as defined in Lemma \ref{lem.recon.2}. Therefore we get $|S_{\sigma_1,\sigma_2}''|-|S|-Z\sim Bin(n-d-3, p_n^2(1-p_n^2))$. Fix $1/2<\epsilon<1$. Lemma \ref{lemma.gm.cb} implies that \[ {\bf P}(|S_{\sigma_1,\sigma_2}''|-|S|-Z\le \epsilon (n-d-3)p_n^2(1-p_n^2))\le \exp(-\frac{(1-\epsilon)^2}{2}(n-d-3)p_n^2(1-p_n^2)). \] If $1-2\alpha>0$, then the last equation implies that \begin{align} {\bf P}(W_{\sigma_1,\sigma_2}-|S|-Z\le \frac{1}{2} np_n^2)\le \exp(-\Theta(n^{1-2\alpha})), \end{align} as $|S_{\sigma_1,\sigma_2}'|=d-1$. \vspace{.2cm} \noindent{\bf Case-II:} Suppose $|\cap_{i=1}^4\sigma_i|=d-2$. Then there is only one choice of $v,v'$ which satisfies \eqref{eqn:identity}, namely, $\{v\}=(\sigma_3\cap\sigma_4)\backslash (\sigma_1\cap \sigma_2)$ and $\{v'\}=(\sigma_1\cap\sigma_2)\backslash (\sigma_3\cap \sigma_4)$. Thus $|S|\le 1$. We have \[ W_{\sigma_1,\sigma_2}-|S|-Z\le W_{\sigma_1,\sigma_2}. \] Next we bound $W_{\sigma_1,\sigma_2}$. We have \[ W_{\sigma_1,\sigma_2}=|S_{\sigma_1,\sigma_2}'|+|S_{\sigma_1,\sigma_2}''|, \] where $S_{\sigma_1,\sigma_2}'$ and $S_{\sigma_1,\sigma_2}''$ are as defined in the proof of Lemma \ref{lem.recon.1}. Since $|S_{\sigma_1,\sigma_2}''|\sim Bin(n-d-1, p_n^2)$, by Chernoff's bound (Lemma \ref{lemma.gm.cb}), for $1/2<\epsilon<1$, \[ {\bf P}(|S_{\sigma_1,\sigma_2}''|\le \epsilon (n-d-1)p_n^2)\le \exp(-\frac{(1-\epsilon)^2}{2}(n-d-1)p_n^2).
\] If $1-2\alpha>0$ then the last equation implies that \[ {\bf P}(W_{\sigma_1,\sigma_2}\le \frac{1}{2}np_n^2)\le \exp(-\Theta(n^{1-2\alpha})), \] as $|S_{\sigma_1,\sigma_2}'|=d-1$, $|S|\le 1$ and $Z\le 2$. Thus we get, for large $n$, \begin{equation} P\left(W_{\sigma_1, \sigma_2} -|S|-Z \leqslant \frac{1}{2}np_n^2\right) \le \exp(-\Theta(n^{1-2\alpha})). \end{equation} \vspace{.2cm} \noindent{\bf Case-III:} Suppose $|\cap_{i=1}^4\sigma_i|\le d-3$. Then there are no $v,v'\in X_0$ satisfying \eqref{eqn:identity}. Thus $|S|=0$. Hence the result follows as in Case-II. \vspace{.1cm} A similar analysis can be done when the two edges share a simplex, that is, when they are of the form $(\sigma_1,\sigma_2)$ and $(\sigma_2,\sigma_3)$. It can be shown that $|S_{\sigma_1,\sigma_2}''|-|S|-Z\sim Bin(n-d-3, p_n^2(1-p_n))$ if $|\cap_{i=1}^3\sigma_i|= d-1$. Otherwise, $|S|\le 1$. Thus, following the calculations in Cases I and II, if $1-2\alpha>0$ we get \begin{equation*} P\left(W_{\sigma_1, \sigma_2} -|S|-Z \leqslant \frac{1}{2}np_n^2\right) \le \exp(-\Theta(n^{1-2\alpha})). \end{equation*} We skip the details here. Hence the result. \end{proof} \section{Proof of Theorem \ref{main.thm.2}}\label{sec.pfmain2} This section is dedicated to the proof of Theorem \ref{main.thm.2}. Let $X$ be a complex, where \[ X:=\{\emptyset, X_0,\ldots,X_{d-1},X^d\}, \] and $X^d\subseteq X_d$. Recall $S_\sigma$ denotes the set of neighbours of $\sigma\in X_{d-1}$. Define \begin{align*} D_\sigma=D_{\sigma}(S_\sigma):=\{\sigma\cup \sigma' \; : \; \sigma'\in S_\sigma\} \mbox{ and } Supp_d(S_\sigma):=\{\sigma_1\cup\sigma_2\in X^d\; : \; \sigma_1, \sigma_2\in S_\sigma\}. \end{align*} Observe that each simplex in $D_\sigma(S_\sigma)$ contains $\sigma$. However, not every simplex in $Supp_d(S_\sigma)$ contains $\sigma$. The set of simplexes which do not contain $\sigma$ is denoted by \[ D_{\sigma}^*=D_{\sigma}^*(S_\sigma):=Supp_d(S_\sigma) \backslash D_{\sigma}(S_\sigma). \] Observe that $|D_\sigma|=\deg(\sigma)$. The $1$-neighbourhood of $\sigma$ in $X$ can be written as \[ N_{1,X}(\sigma)=(S_\sigma, D_\sigma, D_\sigma^*). \] Fix $0<\epsilon < 1$ and $q_n = (1+\epsilon)p_n$. For $c>0$, let $t_n=(1+n^c)p_n$. Define \begin{align}\label{eqn:S} \mathcal S:=\{\{N_{1,X}(\sigma)\; : \; \sigma\in X_{d-1}\} \; : \; |D_\sigma|< nq_n,\ |D_{\sigma}^*|< \frac{d}{2}n^2q_n^2t_n,\ \forall \sigma\in X_{d-1}\}, \end{align} the set of all possible $1$-neighbourhood collections in which the degree of each central simplex is less than $nq_n$ and each neighbourhood has fewer than $\frac{d}{2}n^{2}q_n^{2}t_n$ neighbouring $d$-simplexes that are not counted in $\deg(\sigma)$. \begin{lemma} \label{lem.thm2.1} Let $\mathcal N_1(X)$ and $\mathcal S$ be as defined in \eqref{eqn:kneighbour} and \eqref{eqn:S} respectively. Let $X$ be a realization of $Y_d(n,p_n)$ with $p_n=n^{-\alpha}$ and $0<\alpha<1$. If $c>3\alpha-2$ then \[ {\bf P}(\mathcal N_1(X)\in \mathcal S)\ge 1- e^{-an^{b}} \] for some positive constants $a$ and $b$. \end{lemma} \begin{lemma}\label{lem.thm2.2} Let $I:=\left\{m \in \mathbb{N}: \left|m- \binom{n}{d+1}p_n\right| < \epsilon \binom{n}{d+1} p_n\right\}$, where $p_n=n^{-\alpha}$. Then \[ {\bf P}(|X_{d,p_n}|\in I)\ge 1-e^{-n^d}, \] where $|X_{d,p_n}|$ denotes the number of $d$-simplexes in $Y_d(n,p_n)$. \end{lemma} \begin{lemma}\label{lem.thm2.3} Let $\mathcal S$ be as defined in \eqref{eqn:S}. If $c<2\alpha-1$ then \[ \max_{m \in I}\frac{n^d!\,|\mathcal S|}{\binom{\binom{n}{d+1}}{m}}=o(e^{-n^s}), \] for some $s>0$.
\end{lemma} \begin{proof}[Proof of Theorem \ref{main.thm.2}] We consider a particular complex reconstruction algorithm which outputs a complex when a collection of $1$-neighbourhoods is given. Let $\mathcal S$ be the set of $1$-neighbourhood collections defined in \eqref{eqn:S}. The algorithm maps each element of $\mathcal S$ to an isomorphism class, which corresponds to at most $n^d!$ labelled complexes. The algorithm fails for a realization $X$ of $Y_d(n,p_n)$ if $\mathcal N_1(X)\in \mathcal S$ but the output of the algorithm on $\mathcal N_1(X)$ is not isomorphic to $X$. We condition on the event $|X_{d,p_n}|=m$ for some $m\in \mathbb{N}$. Given this information, there are $\binom{\binom{n}{d+1}}{m}$ possible choices of the set of labelled $d$-simplexes, each chosen with equal probability. Therefore, conditioned on $|X_{d,p_n}|=m$, the algorithm fails whenever the realized complex $X$ is not among those recovered by the algorithm output. Let $p_m$ denote the probability of failure given $|X_{d,p_n}|=m$. Thus, \begin{align*} p_m&={\bf P}(\mbox{ algorithm fails}\left.\vphantom{\hbox{\Large (}}\right| |X_{d,p_n}|=m) \\&\ge P\left(\mathcal{N}_1(X) \in \mathcal S | |X_{d,p_n}|=m \right) - \frac{n^d!\,|\mathcal S|}{\binom{\binom{n}{d+1}}{m}} \\&= \frac{\binom{\binom{n}{d+1}}{m} - n^d!\,|\mathcal S|}{\binom{\binom{n}{d+1}}{m}} - P\left(\mathcal{N}_1(X) \notin \mathcal S | |X_{d,p_n}|=m \right). \end{align*} Let $I \subseteq \left\{1,2, \ldots,\binom{n}{d+1}\right\}$ be as defined in Lemma \ref{lem.thm2.2}. Let $p^*$ denote the overall failure probability of the algorithm. Then \begin{align*} p^*&\ge \sum_{m\in I}p_m\, P(|X_{d,p_n}|=m) \\&\ge \sum_{m \in I} \frac{\binom{\binom{n}{d+1}}{m}- n^d!\,|\mathcal S|}{\binom{\binom{n}{d+1}}{m}} P(|X_{d,p_n}|=m) - P\left(\{\mathcal{N}_1(X) \notin \mathcal S\} \cap \{|X_{d,p_n}| \in I\} \right)\\ & \geqslant P(|X_{d,p_n}| \in I) \min_{m \in I} \frac{\binom{\binom{n}{d+1}}{m}- n^d!\,|\mathcal S|}{\binom{\binom{n}{d+1}}{m}} - P(\mathcal{N}_1(X) \notin \mathcal S). \end{align*} Therefore Lemmas \ref{lem.thm2.1}, \ref{lem.thm2.2} and \ref{lem.thm2.3} imply that \[ p^*\ge (1-e^{-n^d})(1-e^{-n^s})-e^{-an^b}\ge 1-e^{-a'n^{b'}}, \] for some $s,a,b,a',b'>0$, provided the constant $c$ satisfies $$\max\{0,3\alpha-2\} < c < \min\{\alpha,2\alpha-1\}.$$ The above is satisfied when $1/2< \alpha <1$, as required. The proof is then completed by choosing $c=\frac{1}{2}\left(\max\{0,3\alpha-2\}+2\alpha-1\right)$, since $\min\{\alpha,2\alpha-1\} = 2\alpha-1$ for $\alpha<1$. \end{proof} The rest of the section is dedicated to proving Lemmas \ref{lem.thm2.1}, \ref{lem.thm2.2} and \ref{lem.thm2.3}. \begin{proof}[Proof of Lemma \ref{lem.thm2.1}] Let $S_\sigma$ denote the set of neighbours of $\sigma\in X_{d-1}$ in $Y_d(n,p_n)$. Accordingly, define $D_{\sigma}$ and $D_\sigma^*$ as above. Note that $S_\sigma$ is a random set, hence $D_{\sigma}$ and $D_\sigma^*$ are random. We show that if $c>3\alpha -2$ then \begin{align}\label{eqn:lem6} P\left(\bigcap_{\sigma \in X_{d-1}} \Big(\{|D_{\sigma}| < nq_n \}\cap\{|D_{\sigma}^*| < \frac{d}{2}n^2q_n^2t_n \}\Big)\right)\ge 1-e^{-an^b}, \end{align} for some positive constants $a$ and $b$. Clearly \eqref{eqn:lem6} gives the result. \vspace{.2cm} \noindent {\it Proof of \eqref{eqn:lem6}:} Observe that $|D_{\sigma}|=\deg(\sigma)\sim Bin(n-d,p_n)$. Then Lemma \ref{lemma.gm.cb} gives \begin{align}\label{pf.lem.1.eq2} P\left(|D_{\sigma}| < nq_n\right) &\geqslant P\left( |D_{\sigma}| < (n-d)q_n \right)\nonumber\\ &=1- P\left( |D_{\sigma}|\geqslant (1+\epsilon)(n-d)p_n \right)\nonumber\\ &\geqslant 1-\exp\left(-\frac{\epsilon^2p_n(n-d)}{3} \right).
\end{align} Next we derive a high-probability bound on $|D_{\sigma}^*|$. We have \begin{align}\label{pf.lem.1.eq1} &P\left(|D_{\sigma}^*| \geqslant \frac{d}{2}n^{2}q_n^{2}t_n \right)\nonumber \\& \leqslant P\left(|D_{\sigma}^*| \geqslant \frac{d}{2}n^{2}q_n^{2}t_n \,\Big|\, |D_{\sigma}| < nq_n \right) + P(|D_{\sigma}| \geqslant nq_n). \end{align} Note that, conditioned on $|D_{\sigma}|$, $|D_{\sigma}^*|\sim d\,Bin\left(\binom{|D_{\sigma}|}{2}, p_n\right)$. Lemmas \ref{lemma.gm.cb} and \ref{lem:rec.compair} imply \begin{align}\label{pf.lem.1.eq3} P\left(|D_{\sigma}^*|\geqslant \frac{d}{2}n^{2}q_n^{2}t_n \,\Big|\, |D_{\sigma}| < nq_n \right) &\leqslant P\left(|D_{\sigma}^*|\geqslant \frac{d}{2}n^{2}q_n^{2}t_n \,\Big|\, |D_{\sigma}| = nq_n \right) \nonumber\\ &\leqslant \exp\left(-\frac{n^{2c}d}{2+n^c} \binom{nq_n}{2}p_n \right). \end{align} Combining the bounds from \eqref{pf.lem.1.eq2} and \eqref{pf.lem.1.eq3} and substituting in \eqref{pf.lem.1.eq1}, we obtain $$P\left(|D_{\sigma}^*|\geqslant \frac{d}{2}n^2q_n^2t_n \right) \leqslant \exp\left(-\frac{n^{2c}d}{2+n^c} \binom{nq_n}{2}p_n \right) + \exp\left(-\frac{\epsilon^2p_n(n-d)}{3} \right) .$$ By the union bound, the required probability satisfies \begin{align*} &P\left(\bigcap_{\sigma \in X_{d-1}} \Big(\{|D_{\sigma}| < nq_n \}\cap\{|D_{\sigma}^*|< \frac{d}{2}n^2q_n^2t_n \}\Big)\right) \\ &\geqslant 1 - \binom{n}{d} \exp\left(-\frac{n^{2c}d}{2+n^c} \binom{nq_n}{2}p_n \right) - \binom{n}{d} \exp\left(-\frac{\epsilon^2p_n(n-d)}{3} \right)\\ &\geqslant 1-n^d \exp\left(-C_3n^{2+c-3\alpha} \right) -n^d\exp\left(-C_4n^{1-\alpha} \right), \end{align*} for some positive constants $C_3$, $C_4$. The above gives a high-probability bound when $2+c-3\alpha >0$. Hence the result follows if $c>3\alpha -2$, as $\alpha < 1$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem.thm2.2}] Note that $|X_{d,p_n}|\sim Bin\left(\binom{n}{d+1}, p_n\right)$. Then Lemma \ref{lemma.gm.cb} gives $$P(|X_{d,p_n}| \notin I) =P\left( \left||X_{d,p_n}|- \binom{n}{d+1}p_n\right| \geqslant \epsilon \binom{n}{d+1}p_n \right) \leqslant 2\exp \left( -\frac{\epsilon^2}{3}\binom{n}{d+1}p_n\right).$$ Using $p_n=n^{-\alpha}$ with $\alpha<1$ and $\binom{n}{d+1}\geqslant \left(\frac{n}{d+1}\right)^{d+1}$, we get the result. \end{proof} \begin{proof}[Proof of Lemma \ref{lem.thm2.3}] By the definition of $\mathcal S$, we have $|D_\sigma|\in \{1,\ldots,nq_n\}$ and $|D_{\sigma}^*|\in \{1,\ldots,\frac{d}{2}n^2q_n^2t_n \}$ for all $\sigma\in X_{d-1}$. Again, the number of choices for the neighbouring $d$-simplexes not containing $\sigma$ is upper bounded by $\binom{d\binom{nq_n}{2}}{\frac{d}{2}n^2q_n^2t_n}$ for each $\sigma$. Therefore \begin{align}\label{pf.thm.2.eq4} |\mathcal S| &\leqslant \left(nq_n \cdot \frac{d}{2}n^2q_n^2t_n \cdot \binom{d\binom{nq_n}{2}}{\frac{d}{2}n^2q_n^2t_n} \right)^{\binom{n}{d}} \leqslant \left( \frac{d}{2}n^3q_n^3t_n \left( \frac{e}{t_n}\right)^{\frac{d}{2}n^2q_n^2t_n} \right)^{\binom{n}{d}}, \end{align} where in the last inequality we use the fact that $\binom{n}{k} \leqslant \left(\frac{en}{k} \right)^k$. Now, \begin{align}\label{pf.thm.2.eq5} \min_{m \in I}\binom{\binom{n}{d+1}}{m}= \binom{\binom{n}{d+1}}{(1-\epsilon)\binom{n}{d+1}p_n} \geqslant \left( \frac{1}{(1-\epsilon)p_n} \right)^{(1-\epsilon)\binom{n}{d+1}p_n}.
\end{align} Therefore, using \eqref{pf.thm.2.eq4}, \eqref{pf.thm.2.eq5} and the fact that $n^d!\leqslant \exp\left(dn^d\log(n)\right)$, we get \begin{align}\label{eqn:boundupper} \max_{m \in I} \frac{n^d!\,|\mathcal S|}{\binom{\binom{n}{d+1}}{m}} &\leqslant \frac{\exp\left(dn^d\log(n)\right)\left( \frac{d}{2}n^3q_n^3t_n \left( \frac{e}{t_n}\right)^{\frac{d}{2}n^2q_n^2t_n} \right)^{\binom{n}{d}}}{\left( \frac{1}{(1-\epsilon)p_n} \right)^{(1-\epsilon)\binom{n}{d+1}p_n}}\nonumber\\ &=\exp\bigg\{dn^d\log(n) +\binom{n}{d}\log\left(\frac{d}{2}n^3q_n^3t_n\right) + \binom{n}{d}\frac{d}{2}n^2q_n^2t_n \log \left( \frac{e}{t_n}\right) \nonumber \\ &\hspace{2cm}- (1-\epsilon)\binom{n}{d+1} p_n \log \left( \frac{1}{(1-\epsilon)p_n} \right) \bigg\} \nonumber \\ &\le \exp\bigg\{dn^d\log(n) + C_5n^d\log (n^{3+c-4\alpha}) +C_6 n^{c-3\alpha+d+2}\log (n) \nonumber\\&\hspace{2cm}-C_7 n^{1+d-\alpha} \log(n) \bigg\}, \end{align} for some positive constants $C_5,C_6,C_7$. The right hand side of \eqref{eqn:boundupper} goes to zero exponentially when $d< d+1-\alpha$ (which is always true since $\alpha<1$) and $c-3\alpha+d+2 < 1+d-\alpha$. In other words, the required condition is $c<2\alpha-1$. \end{proof} \section*{Acknowledgement} The research of KA was partially supported by the Inspire Faculty Fellowship: DST/INSPIRE/04/2020/000579. SC's research was supported by the NBHM postdoctoral fellowship (order no. 0204/19/2021/R\&D-II/11871). \bibliographystyle{abbrv}
\section{Introduction} The explosion of home working and online interactions and the pervasive use of technologies in daily activities and many working environments impose ever more mental workload upon operators, and ever less physical load. The literature on the construct of mental workload (MWL), often interchangeably referred to as cognitive load (CL), has been vast and in constant evolution for the last half-century. Note that cognitive load and mental workload might differ in minor respects, according to authors working in different fields, such as psychology, neuroscience or education. However, to my knowledge, no clear evidence has been found to address these differences formally. Thus the two terms are used interchangeably in the remainder of this article \cite{longo2022human}. The principal reason for measuring workload is to quantify the mental cost of performing tasks in order to predict human performance \cite{wickens2017mental}. In turn, prediction of performance can be used for designing interfaces, interactive technologies \cite{longo2015designing}, and information-processing activities \cite{OrruL19} optimally aligned to the well-known limited mental capacities of humans \cite{miller1956magical}. Despite 50 years of effort, research on MWL has not yet been able to make major advances \cite{hart2006nasa,young2001mental,paas2003cognitive}, failing to provide an agreed model of the construct \cite{young2015state,charles2019measuring,van2018understanding,HancockLongo2021}. Informal intuitions and several operational definitions from various fields have proliferated \cite{longo2018reliability,HART1988139}. Still, these disagree about the sources of MWL, their attributes, the mechanisms to aggregate these together and their impact on human performance \cite{young2015state}. Identifying these sources, attributes, and mechanisms and how they impinge on human performance are all open fundamental research problems. For instance, some researchers have considered task-specific attributes \cite{wickens2020processing} while others chose a combination of task- and user-specific attributes \cite{hart2006nasa}. Researchers have primarily employed self-reporting measurements \cite{HART1988139} or a combination of psychophysiological techniques \cite{Brookhuis2001}. However, MWL is also influenced by the environment in which a human performs a task \cite{Vidulich2012}. \\ Currently, the literature on mental workload includes a plethora of hand-crafted knowledge-driven models grounded in different theories, employing different attributes and different strategies for aggregating these into indexes of workload, limiting their comparison \cite{cain2007review,longo2015defeasible,van2018understanding, longo2022human}. This makes cognitive load a \emph{knowledge-dependent construct}. This is also supported by the fact that cognitive load has been mainly investigated in the fields of ergonomics and psychology \cite{HART1988139,young2015state} with several applications in the aviation \cite{hart2006nasa}, automobile \cite{Brookhuis2001} and manufacturing industries \cite{bommer2018theoretical}. In these fields, investigations are mainly conducted in labs and highly controlled settings, making cognitive load a \emph{field-dependent construct}.
Past research has tended to focus on complex safety-critical systems \cite{charles2019measuring}, with many applications in the transportation \cite{arico2016adaptive,borghini2014measuring}, nuclear and manufacturing industries \cite{hart2006nasa,Brookhuis2001}, making mental workload an \emph{application-driven construct}. However, researchers have highlighted the need for models of cognitive load in other ecological settings with real-world activities \cite{young2015state,young2001mental,paas2003cognitive,burns2018understanding}. The vast majority of existing knowledge-dependent, field-dependent, and application-driven models aggregate attributes, believed to influence workload, in a linear fashion \cite{hart2006nasa}, or assume stationarity within a task, neglecting temporal dynamics \cite{charles2019measuring}, making cognitive load a \emph{static construct}. Additionally, these models are largely built by fitting or correlating to some ad-hoc indicator of human performance. This is either explicitly achieved by applying self-reporting techniques and correlating to subjective responses from experimental participants, or based on fitting human responses grouped by tasks of varying demands, often ad-hoc and subjectively defined. This greatly complicates research efforts aimed at modelling mental workload and at increasing the generalisability of models, because models become highly constrained by subjective design choices of modellers, which differ widely across experiments, disciplines and contexts. The aforementioned state of the art in cognitive load modelling has led to many definitions of workload \cite{cain2007review,xie2000review,johannsen1979workload,longo2022human} and the formation of ad-hoc, knowledge-dependent, field-dependent, application-driven and static models with little chance of reconciliation \cite{charles2019measuring}. Because of this, despite more than 50 years of research, the construct of workload is still ill-defined \cite{cain2007review,young2015state, paas2003cognitive,longo2015defeasible, charles2019measuring, longo2022human}. \\ The goal of this research is to tackle the above issues and design a model of cognitive load that has wider applicability, facilitating comparison across studies, that is less constrained to the context of application, that is not static, that does not require any explicit ground truth, and that minimizes the experimental design choices of researchers. To achieve this goal, this research proposes to apply modern Deep Learning methods, avoiding the incremental extension of current knowledge-driven approaches and supporting the automatic learning of salient features of cognitive load and their non-linear inner relationships from data. Additionally, this research focuses on neurophysiological data collected in ecological settings and daily real-world activities not traditionally considered in cognitive load research. In detail, electroencephalography is employed for such a purpose. Experiments will be focused on simultaneously taking advantage of the temporal, spatial and spectral properties of physiological EEG data without making any assumption on the linearity of cognitive load, supporting the automated extraction of salient features and representations and their inner relationships with no explicit declarative knowledge from designers. This will allow moving beyond the knowledge-driven research approaches that have produced hand-crafted deductive knowledge and have dominated the research landscape on mental workload for the last 50 years.
Also, because it does not resort to self-reported subjective perceptions or task-performance measures but only manipulates physiological EEG data, the proposed approach represents a more objective method for modelling cognitive load. Finally, the proposed computational method does not require explicit ground truth for mental workload. Instead, a self-supervised brain rate generated from the data is proposed, supporting the development of a model of cognitive load that potentially has a higher degree of replicability and applicability.\\ The remainder of this article is structured as follows. Section \ref{sec:design} introduces the design of a self-supervised mental workload model based on a brain rate, an index of cognitive activation, trained with deep learning techniques that are expected to identify recurrent patterns while fitting such a rate. Section \ref{sec:results} presents the results of the experiment, followed by a discussion in Section \ref{sec:discussion} and the identification of future research improvements. \section{State of the art in cognitive load modelling} The literature on cognitive load is vast, and recent work has attempted to collate the great amount of information surrounding this construct \cite{longo2022human}. This section is thus mainly devoted to reviewing related work on the application of electroencephalography to cognitive load modelling, rather than performing another broad review of the construct. \emph{Electroencephalography} (EEG) is a technique for the direct assessment of brain activity via electrodes placed on the scalp and, as a consequence, the inference of objective neuro-metrics of human mental activation and mental states \cite{richer2018real}. The advantages of EEG for cognitive load modelling are its low invasiveness compared to neuroimaging methods such as fMRI \cite{lemieux1997recording}, its wide applicability in ecological settings, thanks also to its high portability \cite{xu2018review,casson2010wearable} and financial affordability \cite{mullen2015real}, and its high temporal resolution \cite{burle2015spatial}. Unfortunately, EEG-based cognitive load modelling methods must consider several technical issues. Firstly, variation in EEG signals exists mainly because of slight differences in the cortical mappings and brain functioning of subjects, leading to differences in spatial, spectral and temporal patterns, or because of the imperfect fitting of the EEG cap on heads of different shapes and sizes. Therefore, a key challenge in successfully recognizing mental states from EEG data is to create a model that is \emph{robust to deformation and translation of the signal in space, frequency, and time} due to inter- and intra-subject differences and to the protocols or methods employed in signal acquisition. Fortunately, advances in machine learning \cite{jordan2015machine}, and particularly in deep learning methods \cite{lecun2015deep}, have proven useful for learning models from EEG data \cite{craik2019deep}. The advantage of these \emph{data-driven deep-learning methods} is that they support the automatic extraction of meaningful high-level representations from complex, non-linear data \cite{GomezLongo2022}; they can lead to the creation of learning architectures that have wider applicability, supporting the replicability of experimental research; and they are flexible enough to be adapted and extended, eventually supporting advances and research progress.
However, applications of Deep Learning methods with EEG data have rarely attempted to jointly preserve the structure of EEG signals within space, frequency and time. Most studies have focused on spatio-temporal learning \cite{tran2015learning}, time-frequency learning \cite{boashash2016automatic} or spatial-frequency learning \cite{ang2012filter}. Therefore, a challenge is to inductively learn a model capable of exploiting the spatio-temporal and frequency-based properties of EEG data.\\ The literature on cognitive load modelling with EEG and deep learning is recent, not vast and highly scattered \cite{saha2018classification, jimenez2020custom, qayyum2018classification, bashivan2015single, liu2017convolutional, jiao2018deep, xiong2020pattern, cabanero2019analysis, yin2017cross}. Most of these models are supervised, which means they require a form of ground truth, usually in the form of task-based categories or task-performance measures. Unfortunately, there is no agreement among researchers on how to form such categories systematically. This limits comparisons across studies because, on the one hand, some scholars might focus on building a model for classifying low or high levels of task load for relatively simple tasks. On the other hand, others might focus, for example, on building models for assessing low, medium or high load for complex tasks. In other words, these models are context-dependent, and they learn high-level features from EEG data focused on fitting these application-specific target classes. Therefore they cannot be meaningfully used across studies, limiting their generalisability. Some recent work has focused on applying unsupervised learning techniques, such as auto-encoders, to automatically learn relevant latent representations from EEG data in an unsupervised fashion, or has aimed at automatically reducing the presence of noise in the data itself \cite{yang2019assessing, yin2019physiological}. However, these high-level representations are often used to learn a second model that, unfortunately, still often requires supervision, as the goal is to fit, as described earlier, categories of task load, the independent feature being subjectively defined by researchers. State-of-the-art models manipulating EEG data often rely on frequency bands, such as the alpha or theta rhythms, deemed the alphabet for brain functions and mental state extraction. These have been individually used as cognitive load indicators \cite{stipacek2003sensitivity,castro2020validating}, or aggregated together \cite{chang2016yet,holm2009estimating,borghini2014measuring,RaufiLongo2022}, because they have been shown to be sensitive to task difficulty manipulation, task engagement or memory load \cite{gevins2003neurophysiological,antonenko2010using}. However, these approaches often discard some EEG bands in favour of others. \section{Design and methods} \label{sec:design} A novel method is proposed to tackle the issues in modelling cognitive load, as discussed in the previous sections, followed by an empirical study to validate such a method. Contrary to all the existing methods of cognitive load modelling, the method proposed here is \emph{self-supervised} \cite{jing2020self,banville2021uncovering}. Self-supervision is an approach that autonomously learns from the data itself, and that lies between supervised and unsupervised learning methods within the discipline of artificial intelligence.
It is not fully supervised because it does not require ground truth (an independent variable to fit), usually in the form of declarative knowledge. It is also not fully unsupervised because it is not used for discovering patterns in the EEG data that need to be subsequently labelled and categorised with human intervention. Rather, self-supervision refers to the fact that the ground truth is generated by some automatic method applied to the available data itself. Subsequently, a supervised machine learning algorithm uses this ground truth as supervisory data to train a model. In other words, self-supervised machine learning can be seen as an autonomous form of supervised learning because it does not require explicit human declarative knowledge. \\ \begin{figure}[ht] \includegraphics[scale=0.36]{images/brain_rate_diagram.jpg} \caption{Diagrammatic illustration of the computation of the mean frequency of brain oscillations weighted over the EEG bands of the potential (power) spectrum for each channel, and their final aggregation towards a brain rate.} \label{fig:brainRateComputation} \end{figure} Analogously to blood pressure and heart rate, seen as standard preliminary indicators of general bodily activation, a brain rate is proposed as an indicator of mental activation, and then used in this research as an indicator of cognitive load. In contrast to the approaches that suppress or elevate some EEG band, as described in the previous section, the proposal is to use all of them, reasonably assuming that, whenever some band is modulated, the others are influenced too \cite{ferri2008functional}. Analogously to the computations for the centre of gravity or the mean energy of a physical system \cite{landi2002properties}, a spectrum-weighted frequency rate across the five canonical EEG bands (delta, theta, alpha, beta, gamma) is proposed \cite{pop2005spectrum}, henceforth referred to as the \emph{brain rate} (BR). This is the sum of the mean frequency of brain oscillations weighted over the EEG bands of the potential (power) spectrum for each channel, as illustrated in Figure \ref{fig:brainRateComputation}. Formally: $$ BR=\sum_{ch=1}^{n} \sum_{b=1}^{5} f_b \cdot P(b,ch) $$ where $b$ is the index denoting the frequency band (for delta $b=1$, theta $b=2$, alpha $b=3$, beta $b=4$, gamma $b=5$), $ch$ is the index denoting a specific EEG channel, and $f_{b}$ is the weight associated with frequency band $b$, which is the mean frequency of that band. Setting the boundaries for each band as delta=\{0.5-4 Hz\}, theta=\{4-8 Hz\}, alpha=\{8-12 Hz\}, beta=\{12-30 Hz\} and gamma=\{30-45 Hz\}, then $f_1=2.25, f_2=6, f_3=10, f_4=21, f_5=37.5$ (Figure \ref{fig:brainRateComputation}). $P(b,ch)$ is the mean amplitude of the electrical potential for band $b$ of a channel $ch$ over the mean of all its amplitudes: $$ P(b,ch) = \frac{avg_b(FFT_{ch})}{avg(FFT_{ch})} $$ where $FFT_{ch}$ is the vector containing the amplitudes of the fast-Fourier transformed channel $ch$, and $avg_b$ is the average (centroid) of only the amplitudes within the frequency band $b$. Note that $f_b$ is in hertz while $P(b,ch)$, being a ratio of amplitudes, is dimensionless, so the brain rate $BR$ is in hertz. By keeping the length of an EEG segment relatively short, in the order of seconds, this rate can be used as a pseudo-real-time measure of cognitive load, since it is the mean activation of the brain response, as registered all over the scalp. The measure is pseudo-real-time because this rate is computed over a window of EEG data rather than at a single point in time (a computational sketch is given below).
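The sketch below (in Python, with illustrative variable names; it is a direct transcription of the formula above, not released code) computes the brain rate for one window of multi-channel EEG data.
\begin{verbatim}
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}   # band boundaries (Hz)
F_B = {"delta": 2.25, "theta": 6.0, "alpha": 10.0,
       "beta": 21.0, "gamma": 37.5}             # mean band frequencies f_b

def brain_rate(window, fs):
    """window: (n_channels, n_samples) array; fs: sampling rate in Hz."""
    n = window.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(window, axis=1))  # amplitude spectrum FFT_ch
    br = 0.0
    for ch in range(window.shape[0]):
        total = amps[ch].mean()                 # avg(FFT_ch)
        for band, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            # f_b * P(b, ch), accumulated over bands and channels
            br += F_B[band] * amps[ch][mask].mean() / total
    return br

fs = 128
window = np.random.randn(32, 2 * fs)            # a 2-second, 32-channel window
print(brain_rate(window, fs))
\end{verbatim}
Note that, as in the formula, the rate is summed rather than averaged over channels, so its magnitude scales with the number of electrodes.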
The windowed computation is also dictated by the fact that the Fourier transformation requires data collected over time to produce meaningful representations in the frequency domain.\\ \begin{figure*} \centering \includegraphics[scale=0.25]{images/pre-processing-pipeline.jpg} \caption{Processing pipeline for producing topology-preserving head maps from windows of EEG data. I) The electrodes distributed over the scalp in a 3D space produce neural signals continuously over time II) these are segmented into windows III) for each signal in a window, fast-Fourier transformation is applied to obtain information in the power spectrum IV) each power spectrum is divided into the five EEG bands (delta, theta, alpha, beta, gamma) V) the centroid of the frequency amplitudes for each band is computed VI) all the centroids are positioned in a 3D space to produce a scattered head map, one for each EEG band VII) polar projection is applied to each scattered map to produce 2D head maps VIII) each 2D map is interpolated IX) the five 2D maps, one for each EEG band, are aggregated into a tensor.} \label{fig:processingTopographicMaps} \end{figure*} One common problem within neuroscience in general, and for the specific technical challenge of creating a robust model of cognitive load in particular, is the limited availability of EEG data. This is often due to the difficulties in recruiting participants, faulty recordings, or the presence of various artefacts in the EEG signal, leading researchers to discard significant portions of collected data. Unfortunately, when employing machine learning methods in general, and deep learning methods in particular, limited training data often hampers the formation of robust models. For these reasons, this work proposes to use a sliding-window technique \cite{ryang2016high}. The available EEG data are segmented into windows of $k$ seconds, shifted by $w$ milliseconds. For each window, a pre-processing pipeline has been designed for producing 2D spatial-spectral preserving images, as summarised in Figure \ref{fig:processingTopographicMaps}. Fast Fourier transformation is run for each EEG channel in each window, obtaining a power spectrum in the frequency domain. For each spectrum, the five EEG bands (delta, theta, alpha, beta, gamma) are defined by employing the same boundaries used to compute the brain rate. For each band, the centroid (geometric centre) is computed, which equates to the arithmetic mean of all the power values within that band. For a given band, all the computed centroids, one for each channel, are positioned in a 3-dimensional space, following the coordinates of each electrode position on the scalp, forming a scattered 3D spectral topology-preserving map. Azimuthal Equidistant Projection (polar) is subsequently used to transform this map into a scattered 2D map, preserving the relative distance between adjacent electrodes. Eventually, the Clough-Tocher method \cite{mann1999cubic} is applied to fill the scattered 2D maps by estimating the values in between the electrodes over a new interpolated map, an image of $32x32$ pixels. The aggregation of the five $32x32$ maps, one for each EEG band, creates a tensor of $32x32x5$. The sequence of these tensors can be seen as an `EEG movie', a stream of data over time in the frequency domain that preserves information in space; a sketch of this map-construction pipeline is given below.
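The following Python sketch illustrates the pipeline of Figure \ref{fig:processingTopographicMaps}; the electrode coordinates and helper names are our assumptions, and SciPy's \texttt{CloughTocher2DInterpolator} is used for the interpolation step.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

def azimuthal_projection(xyz):
    """Azimuthal equidistant (polar) projection of 3D electrode
    positions onto the plane, preserving distances from the vertex."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)          # angle from the top of the head
    phi = np.arctan2(y, x)
    return np.stack([theta * np.cos(phi), theta * np.sin(phi)], axis=1)

def band_maps(window, fs, pos3d, bands, size=32):
    """One EEG window (n_channels, n_samples) -> (size, size, n_bands)."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
    amps = np.abs(np.fft.rfft(window, axis=1))
    pos2d = azimuthal_projection(pos3d)
    gx, gy = np.meshgrid(
        np.linspace(pos2d[:, 0].min(), pos2d[:, 0].max(), size),
        np.linspace(pos2d[:, 1].min(), pos2d[:, 1].max(), size))
    maps = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        centroids = amps[:, mask].mean(axis=1)   # one centroid per channel
        interp = CloughTocher2DInterpolator(pos2d, centroids, fill_value=0.0)
        maps.append(interp(gx, gy))              # filled 2D head map
    return np.stack(maps, axis=-1)               # e.g. (32, 32, 5)
\end{verbatim}
Here \texttt{pos3d} would contain the 3D coordinates of the 32 electrodes of the montage, and \texttt{bands} the five band boundaries used for the brain rate.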
This stream can then be processed with deep learning methods, inspired by state-of-the-art video classification methods for spatio-temporal feature learning \cite{yue2015beyond,wang2018appearance}.\\ The aforementioned justifications and design choices have led to the design of a novel self-supervised convolutional-recurrent deep neural network trained to fit the brain rate introduced above. The proposed architecture, as depicted in Figure \ref{fig:neuralnetwork}, is built upon a first part, the Convolutional Network (CNN), due to its ability to learn robust compressed representations of EEG data, and upon a second part, the Recurrent Network (RNN), to account for temporal variations. From a higher perspective, the overall architecture contains $z$ parallel convolutional networks with shared weights, which are useful for representational learning. Their outputs, high-level representations referred to as feature maps, are concatenated into a sequence of length $z$, respecting their time order. This sequence is subsequently injected into a recurrent network composed of Long Short-term Memory units (LSTM) aimed at temporal feature learning. The feature maps, the output of each parallel CNN, are injected into a final convolutional layer, and along with the output of the last LSTM unit, they are used to fit the brain rate extracted from the $(z+1)$th EEG window (hence self-supervision).\\ \begin{figure*} \centering \includegraphics[scale=0.41]{images/neural_network.jpg} \caption{Self-supervised Convolutional-recurrent deep neural network for spatio-temporal learning with spectral topology-preserving head maps and a brain rate.} \label{fig:neuralnetwork} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.34]{images/cnn.jpg} \caption{Single VGGNET-inspired Convolutional Neural Network (CNN) architecture for feature maps learning with spectral topology-preserving head-maps with brain rate as a target feature.} \label{fig:cnn} \end{figure*} In more detail, the CNN architecture was inspired by the VGG-NET architecture designed and used in the Imagenet classification challenge \cite{simonyan2014very,russakovsky2015imagenet}. This network, as depicted in Figure \ref{fig:cnn}, is composed of $7$ stacked convolutional layers with small receptive fields of size $3x3$ and stride of $1x1$ pixel, with Rectified Linear Unit (ReLU) selected as the activation function. To preserve the spatial resolution of each of the $32x32x5$ topology-preserving spectral maps of each convolutional block, each layer's inputs are padded with $1$ pixel. Each stacked block of convolutional layers is followed by a max-pooling layer over a $2x2$ window with a stride of $2x2$ pixels. The number of kernels in each convolutional block doubles for every consecutive block, expecting to create effective receptive fields of higher dimensions while requiring fewer parameters \cite{simonyan2014very}. In summary, this network contains $4$ consecutive 2D CNN layers with $32$ filters, each with a kernel size of $3x3$, a stride of $1x1$ and no padding (`valid' padding), followed by a max pooling layer with a stride size of $2x2$ and zero-padding (`same' padding, which pads with zeros evenly to the left/right or up/down of the input). This block is followed by another one containing two 2D-CNN layers with $64$ filters, with a kernel size of $3x3$, a stride of $1x1$ and no padding (valid padding), followed by a max pooling layer with a stride size of $2x2$ and zero-padding (same padding).
Finally, the last block contains a single 2D-CNN layer with $128$ filters, with a kernel size of $3x3$, a stride of $1x1$ and no padding (valid padding), followed by a max pooling layer with a stride size of $2x2$ and zero-padding (same padding). Since the nature of neural responses is dynamic over time, a suitable method for modelling the temporal evolution of brain activity is recurrent neural networks (RNNs). Technically, Long Short-Term Memory (LSTM) appears to be an appropriate modelling choice \cite{hochreiter1997long}. It is a specific type of RNN that uses memory cells with internal memory and gated inputs/outputs, which have led to models that are efficient in capturing long-term dependencies. The hidden layer function for LSTM is calculated by applying the following equations: $$i_t = \sigma \bigl( W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1}+ b_i \bigr)$$ $$ f_t = \sigma \bigl( W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1}+ b_f \bigr) $$ $$ c_t = f_t c_{t-1} + i_t \tanh \bigl( W_{xc} x_t + W_{hc} h_{t-1} + b_c \bigr) $$ $$ o_t = \sigma \bigl( W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o \bigr) $$ $$ h_t = o_t \tanh(c_t) $$ $\sigma$ represents the logistic sigmoid function, $i$ the input gate of the LSTM model, $f$ its forget gate, $o$ the output gate and $c$ the cell activation vector. As shown in \cite{bashivan2016mental}, where various trials were performed with EEG data, a reasonable choice seems to be a single LSTM layer with $128$ cells. This architecture was adopted to capture the temporal relationship of the feature maps obtained from each parallel CNN and shaped as a sequence of feature maps. However, only the output produced by the LSTM after seeing the complete sequence of the feature maps produced by each parallel CNN was propagated to a fully connected layer. This fully connected layer also receives the output of a CNN layer that takes the concatenation of the feature maps computed by each of the parallel CNNs. This is because of the reasonable assumption that variations between these may contain additional information about the underlying mental state experienced by a subject. This is a single 2D-CNN layer containing $64$ filters with a stride of dimension $1x1$ with valid padding and ReLU as the activation function. The output of this layer was concatenated to the output of the last LSTM, followed by a drop-out layer with a probability of $0.5$, and its output was injected into a dense layer with $512$ neurons and ReLU as an activation function. Another dropout layer with a probability of $0.5$ followed, and a final dense layer with a linear activation function was devised for fitting the brain rate computed for the next window of EEG data following the sequence in time ($z+1$). A sketch of the full architecture, as just described, is given below. Concerning the hypothesis that this study seeks to test, this is:\\ \begin{quote} H: IF a convolutional-recurrent deep neural network architecture is trained with spatio-temporal spectral topology-preserving head maps, derived from multi-channel EEG data, to fit a brain rate, an index of cognitive activation, in a self-supervised fashion\\ THEN within-subject and across-subjects models can be induced with low error rates, highlighting recurrent patterns of cognitive activation, and thus cognitive load.\\ \end{quote} In order to test such a research hypothesis, data from a well-known dataset of EEG recordings is employed, namely, the DEAP dataset \cite{koelstra2011deap}, as described in the following section.
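The sketch below summarises the architecture with the Keras API. It is an approximation under stated assumptions, not the original implementation: where the prose is ambiguous, `same' padding is used inside the convolutional towers (consistent with the $1$-pixel padding mentioned above), and a $3x3$ kernel is assumed for the extra 2D-CNN layer over the concatenated feature maps.
\begin{verbatim}
from tensorflow.keras import layers, models

def cnn_tower():
    """VGG-inspired tower for one 32x32x5 spectral head map."""
    inp = layers.Input(shape=(32, 32, 5))
    x = inp
    for _ in range(4):                   # block 1: 4 conv layers, 32 filters
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    for _ in range(2):                   # block 2: 2 conv layers, 64 filters
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)  # block 3
    x = layers.MaxPooling2D(2)(x)        # final feature maps: 4x4x128
    return models.Model(inp, x)

z = 7
tower = cnn_tower()                      # weights shared across the z frames
seq_in = layers.Input(shape=(z, 32, 32, 5))
feats = layers.TimeDistributed(tower)(seq_in)             # (z, 4, 4, 128)

# Temporal learning: one LSTM layer with 128 cells, last output only.
lstm_out = layers.LSTM(128)(layers.TimeDistributed(layers.Flatten())(feats))

# Extra 2D convolution over the channel-wise concatenation of all towers.
stacked = layers.Reshape((4, 4, z * 128))(layers.Permute((2, 3, 1, 4))(feats))
conv_out = layers.Flatten()(
    layers.Conv2D(64, 3, padding="valid", activation="relu")(stacked))

x = layers.Dropout(0.5)(layers.Concatenate()([lstm_out, conv_out]))
x = layers.Dense(512, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1, activation="linear")(x)  # brain rate of window z+1
model = models.Model(seq_in, out)
model.compile(optimizer="adam", loss="mse")    # regression on the brain rate
\end{verbatim}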
\subsection{Dataset and pre-processing} Electroencephalographic (EEG) data was recorded from $32$ participants while they watched $40$ one-minute-long excerpts of music videos \cite{koelstra2011deap}. These excerpts were carefully selected to maximise emotional content, following a procedure based on the valence and arousal dimensions of emotion. The main rationale behind the selection of this dataset was that the data was recorded over a prolonged time (one minute per video), rather than in the order of seconds, as is often the case in event-related potential studies. The reasonable assumption was that, while cognitively processing the excerpts and perceiving different emotions, participants would also have experienced different levels of cognitive load \cite{plass2019four}. Cortical activity was recorded at $512$ Hz with a Biosemi ActiveTwo system using $32$ active AgCl electrodes placed according to the international 10-20 system, with participants sitting $1$ meter away from a 17-inch screen. A 5-second fixation cross was shown before each video to act as a baseline. Participants watched two blocks of $20$ videos each, separated by a short break. Other peripheral physiological signals and self-reports were recorded in the original experiment \cite{koelstra2011deap}; however, only the EEG data from the following $32$ channels were considered: Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, O2. A pre-processing procedure using the EEGlab toolbox was applied to the data, including i) downsampling to $128$ Hz, ii) EOG artefact removal using a blind-source separation technique, iii) band-pass filtering between $4.0$ Hz and $45$ Hz, iv) common average referencing, and v) retention of the 3-second pre-trial baseline. For further information, readers are referred to \cite{koelstra2011deap}. \begin{figure}[ht] \centering \includegraphics[scale=0.50]{images/training_data_preparation.jpg} \caption{Pipeline for generating sequences for the convolutional-recurrent neural network.} \label{fig:pipelineTraining} \end{figure} \subsection{Training}\label{sec:training} After the pre-processing pipeline is applied to the selected EEG data, a new procedure (as depicted in Figure \ref{fig:pipelineTraining}) is designed and run to generate training instances for the convolutional-recurrent neural network described in the previous section. Each video that participants watched lasted $63$ seconds ($60$ for the actual video and $3$ for the baseline). A time window of $2$ seconds is set for producing spectral topology-preserving maps by applying the processing pipeline described in Figure \ref{fig:processingTopographicMaps}. This length is deemed sufficient for producing a meaningful power spectrum that contains enough points well distributed across the five EEG bands. In detail, given a final sample rate of $128$ Hz, each window contains $256$ time points ($128x2$) per channel, and each video contains $8064$ points ($63x128$). A sliding-window technique is applied across these points, with a shift of $125$ ms ($8$ shifts per second), which translates into a shift of $16$ points ($128x0.125$). This generates $489$ windows of $2$ seconds for each video ($(8064-256)/16+1=489$). The neural network designed in figure \ref{fig:neuralnetwork} is a convolutional-recurrent neural network accepting a sequence of such windows, as sketched below.
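As an illustration, the window and sequence bookkeeping just described can be reproduced in a few lines of NumPy; the \texttt{video} placeholder and all names are hypothetical:
\begin{verbatim}
# Sketch of the sliding-window segmentation (names illustrative).
import numpy as np

fs = 128                              # sampling rate after downsampling
video = np.zeros((32, 63 * fs))       # placeholder: 32 channels x 8064 points
win, shift = 2 * fs, int(0.125 * fs)  # 256-point windows, 16-point shift

starts = range(0, video.shape[1] - win + 1, shift)
windows = [video[:, s:s + win] for s in starts]
print(len(windows))                   # 489 windows per video

# Each window is then mapped to a 32x32x5 topology-preserving tensor
# and a brain rate; sequences of z = 7 consecutive tensors are paired
# with the brain rate of the 8th window as the self-supervised target,
# yielding 489 - 7 = 482 training instances per video.
\end{verbatim}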
As mentioned before, this sequence is set to $z=7$ windows, equating to $14$ seconds of neural activity. This is believed to be long enough to expose some variation in cognitive load, yet not so long as to hamper the automatic learning of temporal dependencies across points. Each of these sequences represents a training input instance; thus $482$ of these instances (sequences) were produced for each video ($489-7$). As previously mentioned, the designed architecture is a specific self-supervised many-to-one network. The target output is the brain rate computed for the subsequent window outside the sequence, next in time (the $8$th). The goal is to learn this rate from past information; in other words, to estimate a brain rate from the neural activity of the previous $14$ seconds ($7x2$).\\ \begin{table*}[ht] \centering \begin{tabular}{| l| l| c| c| c | c| c| } \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Type} & \multicolumn{4}{c|}{Instances (training sequences)} & \multirow{2}{18mm}{Repetitions} \\ \cline{3-6} & & Total & Training & Validation & Test & \\ \hline 1-person & within subject & 19280 & 13496 & 2892 & 2892 & 2 \\ 3-persons & across-subjects & 57840 & 40488 & 8676 & 8676 & 10 \\ 5-persons & across-subjects & 96400 & 67480 & 14460 & 14460 & 10 \\ 7-persons & across-subjects & 134960 & 94472 & 20244 & 20244 & 10 \\ 9-persons & across-subjects & 177570 & 125514 & 26028 & 26028 & 10 \\ \hline \end{tabular} \caption{Details of within and across-subjects models with the number of training, validation and test instances, as well as the number of Monte Carlo repetitions.} \label{tab:trainingDetails} \end{table*} Several models are trained within and across subjects to test the research hypothesis, as listed in table \ref{tab:trainingDetails}. Since each participant watched $40$ videos, the total number of sequences associated with each participant equates to $19280$ ($482x40$). The canonical approach employed in machine learning to create generalisable models would be to shuffle these sequences and split them into training, validation and test sets. However, although technically valid, performing such a shuffle for training a within-subject model would generate a training set that would likely contain some sequences from each video. In other words, each video would have a certain amount of representative data in the training, validation and test sets. To further increase generalisability, it was decided that the training set contains the entire data of a random $70\%$ of the videos, while the validation and test sets each contain the data of $15\%$ of the videos. Thus the shuffle is done at the video level: data associated with $28$ random videos are selected as the training set ($482x28=13496$ training sequences), data from $6$ different random videos form the validation set ($482x6=2892$ validation sequences), and the data from the remaining $6$ videos form the test set. In this way, generalisability is evaluated on unseen test videos, expected to lead to cognitive load fluctuations different from those used for training and validating the models. The same rationale is applied to the across-subjects models. The only difference is that the training, validation and test sets contain data from a number of randomly selected participants, as listed in table \ref{tab:trainingDetails}. For example, for a 3-persons model, $3$ such video-level splits are performed, one for each participant individually, as sketched below.
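A minimal sketch of this video-level splitting under the stated $70/15/15$ proportions (the function and all names are illustrative):
\begin{verbatim}
# Sketch: video-level train/validation/test split (names illustrative).
import numpy as np

def split_by_video(sequences, rng=np.random.default_rng(0)):
    # sequences: array of shape (40, 482, ...) holding the training
    # instances of one participant, grouped by video.
    order = rng.permutation(sequences.shape[0])
    train_v, val_v, test_v = order[:28], order[28:34], order[34:]
    take = lambda idx: sequences[idx].reshape(-1, *sequences.shape[2:])
    return take(train_v), take(val_v), take(test_v)

# Across-subjects models: split each participant individually with
# split_by_video, then concatenate the per-participant sets.
\end{verbatim}
Shuffling at the level of whole videos, rather than of individual sequences, prevents fragments of the same video from leaking across the three sets.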
The resulting individual training, validation and test sets are then concatenated to produce larger sets.\\ $32$ within-subject CNN models (figure \ref{fig:cnn}) are trained, one per participant, twice with different batch sizes ($32$ and $100$). This step aims to understand the effect of batch-size manipulation on validation and test errors. The rationale is to analyse the trade-off between generalisability and computational resource consumption: larger batches are known to lead to better convergence towards the global optima of the objective function, but at the cost of slower training, since more memory is requested and more computations are performed. Smaller batches, instead, allow the model to start learning earlier, before seeing all the data, with lower consumption of computational resources; still, convergence to the global optima is not guaranteed, with a possible negative impact on generalisability. After assessing the ideal batch size, across-subject models are trained with incremental complexity, in terms of a higher volume of data coming from an increasing number of participants, to assess whether their generalisability still holds under higher heterogeneity in the EEG signals. Additionally, to reinforce the analysis, repeated Monte Carlo sampling is performed for each across-subject model, with a random selection of participants at each repetition. Table \ref{tab:trainingDetails} summarises the number of training, validation and test sequences used and the number of repetitions for each training configuration. The training dataset is not augmented in any way, for example by employing image zooming or flipping techniques, because of the distinct interpretations of direction and location in the EEG topographic maps, which correspond to specific cortical regions. Training is conducted by optimising the Mean Squared Error (MSE) loss function: $$ \text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2 $$ with $n$ the number of sequences (of length $7$), $y_{i}$ the observed brain rate for sequence $i$ (in the $8$th position) and $\hat{y}_{i}$ the predicted brain rate for that sequence. Validation and test MSEs are monitored during and after training. Additionally, the Mean Absolute Percentage Error (MAPE) is computed: $$ \text{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i-\hat{y}_i}{y_i}\right| $$ where $y_{i}$ is the observed brain rate and $\hat{y}_i$ the predicted one; their difference is divided by the observed value, and the absolute value of this ratio is averaged over the $n$ sequences. MAPE belongs to the family of percentage errors; it has been selected because such errors are scale-independent, and thus especially suitable for across-subject models, and because it is easy to interpret and explain. As mentioned earlier, the parallel CNNs share weights, thus potentially producing different gradients in different internal layers. As a consequence, a smaller learning rate, set to $1e-3$, was employed when applying Stochastic Gradient Descent (SGD) to the CNNs.
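Both error measures are straightforward to compute; a minimal sketch, assuming NumPy arrays \texttt{y} and \texttt{y\_hat} of observed and predicted brain rates (names illustrative):
\begin{verbatim}
# Sketch of the two evaluation measures (names illustrative).
import numpy as np

def mse(y, y_hat):
    # Mean Squared Error between observed and predicted brain rates.
    return np.mean((y - y_hat) ** 2)

def mape(y, y_hat):
    # Mean Absolute Percentage Error, in percent; scale-independent.
    return 100.0 * np.mean(np.abs((y - y_hat) / y))
\end{verbatim}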
The whole convolutional-recurrent neural network was likewise trained with a small learning rate, set to $1e-4$, and optimised with the Adam algorithm \cite{kingma2015amsterdam}, shown to achieve reasonably fast convergence, with the decay rates of the first and second moments set to $0.9$ and $0.999$ respectively.\\ The overall final neural network contains a large number of parameters ($1.62$ million). Considering that several models are trained with an increasing amount of training instances per model, each instance being a tensor of $32x32x5x7$ (where $32x32$ is the size of the spatially-preserving topographic maps, $5$ is the number of EEG bands and $7$ is the number of EEG windows, that is, the length of the trainable sequence), a significant amount of computational resources, in terms of memory and processing power, is required. Additionally, such a large number of parameters can make each trained model susceptible to overfitting; therefore, several countermeasures are taken. As mentioned earlier, all the parallel CNNs share parameters across the $7$ frames, which removes a considerable number of parameters from the overall architecture. Dropout layers were added after each fully connected layer, with a probability of $0.5$, to minimise overfitting \cite{hinton2012improving,krizhevsky2012imagenet}. Similarly, an early-stopping mechanism is employed to avoid training models longer than necessary, thus saving a significant amount of time; this regularisation technique also minimises overfitting without compromising model accuracy. In detail, training stops when the updates of the model's parameters no longer yield improvement on the validation set for $E$ consecutive epochs. The value $E$ is called patience and, in this study, it was set to $6$ after some trials. This means that the training phase stops automatically when the error on the validation set does not reach a lower value for $6$ consecutive epochs, and the $E$th-last model is retained as the final model.\\ Data from up to $9$ people are considered to train a single across-subject model, since this is the maximum amount of data that the selected machine has been estimated to process with its resources. In particular, this machine is an Alienware Aurora R8 (model: 02XRCM), Intel Core i7-8700 (6 cores, 12 threads), 64-bit, 12MB L2-cache, 32GB DDR-SDRAM, 2 additional graphics cards (GeForce RTX 2070), with the Linux Mint 19.2 operating system and a total local storage of 4 terabytes, comprising a primary 1TB SSD (Solid State Drive) disk (model: SK Hynix PC601 NVMe), a 3.5-inch 2TB hard-drive (model: Seagate BarraCuda ST2000DM008-2FR102) and an additional 1TB SSD disk (model: 2-Power SSD2044A). To allow the training of the across-subject models (up to $9$ persons), a swap space of $0.5$TB was created. \section{Results} \label{sec:results} Figure \ref{fig:batch_size_comparison} depicts the density plots of the validation and test Mean Squared Errors (MSEs) for the $32$ within-subject models trained by employing only the CNN architecture (figure \ref{fig:cnn}), respectively with batch sizes of $32$ and $100$. Similarly, figure \ref{fig:cnnEpochs} depicts the density plots of the number of epochs necessary to train the within-subject CNN architectures, respectively with batch sizes of $32$ and $100$, ranging from a minimum of $7$ epochs to a maximum of $60$.
No significant difference exists in the validation and test errors, with the batch size of $32$ leading to slightly better (lower) MSEs. Similarly, although not significantly different, the average number of epochs necessary to train CNN models with batch size $32$ is lower than that associated with batch size $100$. Every epoch for a within-subject model, on the current machine, required on average $300$ seconds ($5$ minutes); thus, according to the minimum and maximum number of epochs ($7$ and $60$), the finalisation of training required between $2100$ and $18000$ seconds ($35$ to $300$ minutes). Therefore, $32$ was the batch size selected for training the subsequent within-subject and across-subject models with the full architecture (figure \ref{fig:neuralnetwork}), since it leads to a lower number of training sequences in one forward/backward pass, and thus lower memory consumption, as well as a lower number of training epochs, saving a great amount of time. \begin{figure}[ht] \centering \includegraphics[scale=0.55]{images/val_mse_within-subject_cnn.jpg} \includegraphics[scale=0.55]{images/test_mse_within-subject_cnn.jpg} \caption{Comparison of the validation and test Mean Squared Error for within-subject CNN models trained respectively with batch sizes of 32 and 100.} \label{fig:batch_size_comparison} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.54]{images/cnn_epochs.jpg} \caption{Comparison of the number of epochs needed to train the within-subject CNN models respectively with batch sizes of 32 and 100.} \label{fig:cnnEpochs} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.39]{images/mape_within-subjects.jpg} \caption{Paired histogram of the Mean Absolute Percentage Errors (MAPE) on the test data of the 32 within-subject models respectively trained only with the single Convolutional Neural Network (CNN), and with the Convolutional-Recurrent Neural Network (CNN+LSTM).} \label{fig:mape_within} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.52]{images/mape_density_within-subjects.jpg} \caption{Density plot of the Mean Absolute Percentage Errors (MAPE) on the test data of the 32 within-subject models respectively trained only with the single Convolutional Neural Network (CNN), and with the Convolutional-Recurrent Neural Network (CNN+LSTM).} \label{fig:mape_within_density} \end{figure} Figures \ref{fig:mape_within} and \ref{fig:mape_within_density} depict the Mean Absolute Percentage Errors (MAPE) on the test data of the within-subject models of the $32$ participants, trained first with the single CNN architecture of figure \ref{fig:cnn} for learning the weights (solid red), and then with the convolutional-recurrent neural network with the parallel CNNs, sharing such weights, and the LSTM component for temporal learning (figure \ref{fig:neuralnetwork}) (dashed blue). As can be noticed, the test MAPE has a mean of $0.111$ (Std: $0.073$) for the single CNN models and of $10.75$ (Std: $0.070$) for the CNN+LSTM models, the former expressed as a fraction and the latter as a percentage. These results demonstrate that the brain rate prediction on each participant's unseen test data is good, since the forecast is only off by roughly $11\%$ in both cases. However, at first glance, the addition of the recurrent component (the Long Short-Term Memory), as in the architecture depicted in figure \ref{fig:neuralnetwork}, does not seem to add much value in minimising the MAPE.
This seems to point to the capability of the single CNN architecture (figure \ref{fig:cnn}) alone to learn the relevant patterns, intricacies and relationships in the data, in the shape of topographic head maps containing information in the $5$ EEG frequency bands, for the specific window length used ($2$ seconds). However, the LSTM layer takes a sequence of $7$ outputs from the single CNNs (in addition to a vector containing their variational information) and tries to fit the brain rate associated with the next window (the $8$th after the sequence). The fact that the MAPE of the CNN+LSTM does not significantly decrease does not mean that the LSTM did not learn any temporal relationships and dependencies in the input sequences. This can be demonstrated by inspecting figure \ref{fig:exampleComparisonsPreds}, which compares the brain rate index, the predictions of the single CNN model and those of the CNN+LSTM, for within-subject models of random participants on a random video from their respective test sets. In detail, these figures show that the brain rates (green), computed for each of the $482$ instances associated with a specific video (as explained in section \ref{sec:training} and depicted in figure \ref{fig:pipelineTraining}), a video not used for training the within-subject model of that participant, are reasonably approximated by the single-CNN within-subject model (red). However, the brain rate indexes seem better approximated by the CNN+LSTM within-subject model (blue).\\ The comparisons of figure \ref{fig:exampleComparisonsPreds} highlight a number of things. First, the main bursts in the brain rates are captured by both the CNN and the CNN+LSTM models. However, those associated with the CNN (red) are shifted slightly to the right (along the time axis) when compared to those associated with the CNN+LSTM (blue), which seem better aligned with the brain rates (green) over time. This is confirmed by the Pearson correlation coefficient which, averaged over participants and test videos, is $0.5$ for the CNN models and $0.7$ for the CNN+LSTM models. This means that the LSTM layer in the CNN+LSTM architecture did learn some temporal relationships and long/short-term dependencies. Second, the CNN+LSTM predictions are smoother than those produced by the single CNN, which might be justified by the fact that they are based on information taken from the preceding $7$ consecutive EEG windows. For the same reason, the scale (y-axis) of the predictions of the CNN+LSTM (blue) is slightly lower than that of the others (red and green).\\ \begin{figure*} \centering \includegraphics[scale=0.35]{images/p23_a.jpg} \includegraphics[scale=0.35]{images/p23_b.jpg} \caption{Illustrative comparisons of the brain rate index, the single Convolutional Neural Network (CNN) predictions and the Convolutional-Recurrent Neural Network (CNN+LSTM) predictions for two random participants and a random video from the test set.} \label{fig:exampleComparisonsPreds} \end{figure*} Regarding the across-subjects models, as planned in table \ref{tab:trainingDetails}, figure \ref{fig:compMAPE_CNN_CNNLSTM_across-subject} depicts the density plots of their Mean Absolute Percentage Errors (MAPEs) on the test sets. In detail, each density curve contains the MAPEs associated with the test sets of $10$ models, each trained with data from the respective number of randomly selected people.
As can be seen, the test MAPEs are lower on average for the models trained with material taken from $9$ people (black), followed by those trained with $7$ (brown), $5$ (grey) and $3$ people (yellow). Additionally, the standard deviations (the width of each curve) are smaller for the models trained with data from more people and larger for those trained with data from fewer people. Smaller standard deviations are associated with steadier models, capable of predicting brain rates on the test data more consistently. These results might seem intuitive, because it can be argued that the more training material is available, the higher the capacity of a model to learn. However, the training material comes from different numbers of people, selected randomly at each run, and their cerebral responses while watching videos differ, exhibiting different power activations and temporal dynamics. This introduces higher variability within the data, making a model prone to confusion while learning. Despite this, the across-subject models mitigate the influence of such increasing variability and learn consistent higher-level representations that are more generalisable across people. \\ \begin{figure}[ht] \includegraphics[scale=0.365]{images/mape_across_subjects_CNN.jpg} \includegraphics[scale=0.365]{images/mape_across_subjects_CNNLSTM.jpg} \caption{Comparisons of the test Mean Absolute Percentage Error (MAPE) of the across-subject models grouped by the type of architecture: the single convolutional neural network (CNN) and the convolutional-recurrent neural network (CNN+LSTM).} \label{fig:compMAPE_CNN_CNNLSTM_across-subject} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.43]{images/across-subject_CNN_vs_CNNLSTM.jpg} \caption{Pairwise comparisons of the test Mean Absolute Percentage Error (MAPE) of the across-subject models trained respectively with the single convolutional neural network (CNN) and the convolutional-recurrent neural network (CNN+LSTM), compared to the within-subject models.} \label{fig:pairwisecompMAPE_CNN_CNNLSTM_across-subject} \end{figure} Figure \ref{fig:pairwisecompMAPE_CNN_CNNLSTM_across-subject} plots the pairwise comparison of the across-subject models trained with the single CNN and the CNN+LSTM architectures, grouped by the number of people, together with the density curve associated with the MAPEs of the within-subject models, used here as a baseline. Noticeably, the density plots associated with the models trained with the CNN+LSTM architecture (dashed lines) contain lower MAPEs on the test sets than those associated with the models trained with the CNN only (continuous lines). This means that the addition of the Long Short-Term Memory (LSTM) layer for temporal learning had a positive impact on model accuracy, although not a statistically significant one in this study. Additionally, these results suggest that the convolution of the topology-preserving topographic maps over space (with down-sampling) could learn some repetitive high-level patterns within an EEG window (set to $2$ seconds). In other words, as expected from the research hypothesis set in section \ref{sec:design}, within-subject and across-subjects models can be induced from spatio-temporal spectral topology-preserving head maps derived from multi-channel EEG data to fit a brain rate, an index of cognitive activation, with low error rates, demonstrating the existence of recurrent patterns of cognitive load over time.
A more detailed interpretation of these results, along with a discussion of the strengths and limitations of the designed method for cognitive load modelling, follows in the next section. \section{Discussion} \label{sec:discussion} The computational method described and tested in the previous sections is fully automated and allows the induction of a model of cognitive load from EEG data based on deep learning, without requiring human intervention. In summary, this novel method: \begin{itemize} \item is based on data-driven deep-learning techniques for automatic inductive learning \cite{lecun2015deep}; \item is built upon electroencephalography (EEG), a non-invasive method for gathering brain responses with high temporal resolution \cite{craik2019deep}; \item is sensitive to variations of brain responses over time, thanks to its recurrent neural network component \cite{hochreiter1997long}; \item is robust to deformation and translation of the signal in space and frequency, thanks to the ability of its convolutional neural network component to learn meaningful representations \cite{lecun1998gradient}; \item is built upon 2D spectral topology-preserving head maps that are rich in information and more explainable than vectorial data \cite{LongoGLKH20, VILONE202189, Vilone2021Output}; \item is self-supervised and does not require human intervention or explicit declarative knowledge \cite{banville2021uncovering}; \item is constructed upon a brain rate, a measure of cognitive activation treated as an index of cognitive load, that considers cortical brain oscillations weighted over the potentials of all the canonical EEG bands; \item is flexible with short EEG segments, thanks to its time-slicing procedure over cortical recordings; \item is adjustable and customisable, because it can be trained on EEG data collected from a variable number of electrodes, employed with different ranges for the five EEG bands (delta, theta, alpha, beta, gamma), and with EEG windows of varying size; \item is replicable and open to falsifiability \cite{popper2005logic}, supporting the formation of models of cognitive load with higher generalisability. \end{itemize} This method allowed the fully automated formation of within-subject and across-subject models of cognitive load from EEG signals. These models fit a brain rate, an index of cognitive activation, with good accuracy, measured by the Mean Absolute Percentage Error (MAPE) on the test sets, demonstrating a good degree of generalisability to unseen data. In detail, each within-subject model, trained with EEG material from a single person, could predict the brain rate of unseen EEG data - as encoded with spatially-preserving topographic head maps built upon $32$ channels - with a MAPE of $0.11$ (as a fraction) and $10.75\%$ (std $0.073$ and $0.070$), using, respectively, only a convolutional neural network architecture for spatial learning, and its extension with a long short-term memory layer for temporal learning. The across-subject models, induced from increasingly larger amounts of EEG data from different people, confirmed these results and maintained the same testing accuracy, as measured with MAPE, despite the increasing variability within the training data. This persistence in achieving similar testing accuracy can be seen as positive, because it highlights the existence of some patterns within EEG data that are repetitive and stable.
This observation might be linked to microstate theory, which assumes that distributions of activity across the scalp persist for milliseconds before changing into a different pattern \cite{MICHEL2018577}. EEG microstates can be seen as transient, quasi-stable patterns of an electroencephalogram \cite{wackermann1993adaptive,khanna2015microstates}. By analogy, the models trained in this work might have learned quasi-stable patterns of fluctuations in mental activation, as modelled with a brain rate. The convolution applied to the spatially-preserving topographic head maps, built over the five EEG frequency bands, has already led to the development of within and across-subject models with good accuracy. This means that quasi-stable high-level representations, which can be successfully mapped to a brain rate, might be induced by the convolutional operations. This view might also be reinforced by the minimal decrement of the test MAPEs obtained by the models trained with the additional LSTM layer for temporal learning. The fact that this decrement was minimal suggests that the temporal sequence of the convolved representations is not as important as the representations themselves, taken individually, which already seem rich in information and able to capture certain repetitive patterns of cognitive activation. \section{Conclusion} Cognitive load, often referred to as mental workload \cite{HancockLongo2021}, is one of the most invoked concepts in the discipline of human factors, with important utility within human-computer interaction, neuroscience and education \cite{longo2022human}. Unfortunately, a reliable, generally applicable computational method for cognitive load modelling does not yet exist, complicating applied research. This research, the first of its kind, was aimed at developing a method for cognitive load modelling with generalisability in mind, supporting its application across disciplines, its replicability and comparisons across studies, and thus enabling falsifiability. All these advantages are aimed at supporting research on cognitive load modelling at a larger level, avoiding the creation of yet another ad-hoc, field-dependent, knowledge-dependent and application-driven method of mental workload with little chance of being generally applicable across empirical works. The novel method employs deep learning techniques from Artificial Intelligence for the automatic formation of models of cognitive load, in a self-supervised way, drastically limiting human intervention and declarative knowledge. These models work on continuous EEG data, thus having high temporal resolution. They are built upon a newly designed notion of brain rate, a particular index of cognitive load derived from the five EEG frequency bands (delta, theta, alpha, beta, gamma). The method works on spatially-preserving topographic head maps of cognitive activation, offering spatial resolution and supporting diagnosticity. In this study, these maps are based on spectral information derived from the five EEG bands, which are known to be rich in information for deriving mental states and to facilitate the analysis and interpretation of human behaviours.\\ Findings suggest that within-subject and across-subjects models of cognitive load, developed with the newly devised computational method, are accurate, exhibiting a low prediction error on unseen data and thus showing a good degree of generalisability.
They suggest that certain high-level representations can be automatically extracted from EEG data in the frequency bands, and that these representations appear frequently over time. However, these recurring blocks of mental activation do not seem to follow a regular temporal ordering, in line with the non-stationary nature of brain activation. In other words, frequent, quasi-stable high-level representations of cognitive activation exist, but they do not repeat in fixed temporal sequences. Additionally, these representations seem to recur across subjects, with important implications for the research field of mental workload. Their existence might suggest that general, subject-independent patterns of cognitive load exist, which would grant the method great generalisability; however, further studies are needed to confirm this claim.\\ Future work will include replicating the method developed in this research with varying time-window sizes and investigating how these influence the accuracy of the resulting cognitive load models. A layer of interpretability for the automatically extracted higher-level representations will be deployed, in line with principles and practices from Explainable Artificial Intelligence \cite{VILONE202189,Vilone2021Output} and knowledge representation \cite{LONGO2021106514}. This will help in understanding the shape of these representations and the recurrently activated brain regions, giving analysts a richer level of interpretability. It will also serve as a layer of explainability, providing analysts with tools for explaining the spatial and temporal dynamics of cognitive activation. The inferences of these models of cognitive load can be compared against other indexes, such as the theta-to-alpha or alpha-to-theta band ratios \cite{RaufiLongo2022}, increasing their meaningfulness and validity. Eventually, studies can be devoted to the development of additional recurrent neural networks for understanding the temporal aspects of the high-level representations of cognitive activation, establishing whether repetitive, recurrent sequences exist over time and of what length. These future avenues will expand the science of mental workload and support the formation of models of cognitive activation with increasing accuracy and generalisability, in turn facilitating the analysis of human behaviour. \bibliographystyle{plain}
I) The electrodes distributed over the scalp in a 3D space produce neural signals continuously over time II) these are segmented into windows III) for each signal in a window, fast-Fourier transformation is applied to obtain information in the power spectrum IV) each power-spectrum is divided into the five EEG bands (delta, theta, alpha, beta, gamma) V) the centroid of the frequency amplitudes for each band is computed VI) all the centroids are positioned in a 3D space to produce a scattered head map, one for each EEG band VII) polar projection is applied to each scattered map to produce 2D head maps VIII) each 2D map is interpolated IX) the 5 2D maps, one for each EEG band are aggregated into a tensor.} \label{fig:processingTopographicMaps} \end{figure*} One common problem within neuroscience, in general, and for the specific technical challenge of creating a robust model of cognitive load, in particular, is the limited availability of EEG data. This is often due to the difficulties in recruiting participants, or faulty recordings, or the presence of various artefacts in the EEG signal, leading researchers to discard significant portions of collected data. Unfortunately, when employing machine learning methods, in general and deep learning methods in particular, limited training data might often not benefit a robust model formation. For these reasons, this work proposes to use a sliding-window technique \cite{ryang2016high}. The available EEG data are segmented into windows of $k$ seconds, shifted by $w$ milliseconds. For each window, a pre-processing pipeline has been designed for producing 2D spatial-spectral preserving images, as summarised in figure \ref{fig:processingTopographicMaps}. Fast Fourier transformation is run for each EEG channel in each window, obtaining a power spectrum in the frequency domain. For each spectrum, the five EEG bands (delta, theta, alpha, beta, gamma) are defined by employing the same boundaries used to compute the brain rate. For each band, the centroid (geometric centre) is computed, which equates to the arithmetic mean of all the power values within that band. For a given band, all the computed centroids, one for each channel, are positioned in a 3-dimensional space, following the coordinates of each electrode position on the scalp, forming a scattered 3D spectral topology-preserving map. Azimuthal Equidistant Projection (polar) is subsequently used to transform this map into a scattered 2D map, preserving the relative distance between adjacent electrodes. Eventually, the Clough-Tocher method \cite{mann1999cubic} is applied to fill the scattered 2D maps by estimating the values in-between the electrode over a new interpolated map, an image of $32x32$. The aggregation of the five $32x32$ maps, one for each EEG band, creates a tensor of $32x32x5$. The sequence of these tensors can be seen as an `EEG movie', a stream of data over time in the frequency domain that preserves information in space. This stream can then be processed with deep learning methods, inspired by state-of-the-art video classification methods for spatio-temporal feature learning \cite{yue2015beyond,wang2018appearance}.\\ The aforementioned justifications and design choices have led to the design of a novel self-supervised convolutional, recurrent deep neural network trained to fit the brain rate introduced above. 
The proposed architecture, as depicted in figure \ref{fig:neuralnetwork}, is built upon a first part, the Convolutional Network (CNN), due to its ability to learn robust compressed representations of EEG data, and upon a second part, the Recurrent Network (RNN) to account for temporal variations. From a higher perspective, the overall architecture contains $z$ parallel convolutional networks with shared weights, which are useful for representational learning. Their outputs, high-level representations referred to as feature maps, are concatenated into a sequence of length $z$, respecting their time order. This sequence is subsequently injected into a recurrent network composed of Long Short-term Memory units (LSTM) aimed at temporal feature learning. The feature maps, the output of each CNN parallel network, are injected into a final convolutional one-dimensional layer, and along with the output of the last LSTM unit, they are used to fit the brain rate extracted from the $z+1$ EEG window (hence self-supervision).\\ \begin{figure*} \centering \includegraphics[scale=0.41]{images/neural_network.jpg} \caption{Self-supervised Convolutional-recurrent deep neural network for spatio-temporal learning with spectral topology-preserving head maps and a brain rate.} \label{fig:neuralnetwork} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.34]{images/cnn.jpg} \caption{Single VGGNET-inspired Convolutional Neural Network (CNN) architecture for feature maps learning with spectral topology-preserving head-maps with brain rate as a target feature.} \label{fig:cnn} \end{figure*} In more detail, the CNN architecture was inspired by the VGG-NET architecture designed and used in the Imagenet classification challenge \cite{simonyan2014very,russakovsky2015imagenet}. In detail, this network, as depicted in figure \ref{fig:cnn}, is composed of $7$ stacked convolutional layers with small receptive fields of size $3x3$ and stride of $1x1$ pixel, with Rectified Linear Unit (ReLU) selected as the activation function. To preserve the spatial resolution of each of the $32x32x5$ topology-preserving spectral maps of each convolutional block, each layer's inputs are padded with $1$ pixel. Each stacked block of convolutional layers is followed by a max-pooling layer over a $2x2$ window with a stride of $2x2$ pixels. The number of kernels in each convolutional block doubles for every consecutive block, expecting to create effective receptive fields of higher dimensions while requiring fewer parameters \cite{simonyan2014very}. In summary, this network contains $4$ consecutive 2D CNN layers with $32$ filters, each with a kernel size of $3x3$, a stride of $1x1$ and no padding (`valid' padding), followed by a max pooling layer with a stride size of $2x2$ and zero-padding (`same' padding, results in padding with zeros evenly to the left/right or up/down of the input). This block is followed by another one containing two 2D-CNN layers with $64$ filters, with a kernel size of $3x3$, a stride of $1x1$ and no padding (valid padding), followed by a max pooling layer with a stride size of $2x2$ and zero-padding (same padding). Eventually, the last block contains a single 2D-CNN layer with $128$ filters, with a kernel size of $3x3$, a stride of $1x1$ and no padding (valid padding), followed by a max pooling layer with a stride size of $2x2$ and zero-padding (same padding). 
Since the nature of neural responses is dynamic over time, a suitable method for modelling the temporal evolution of brain activity is recurrent neural networks (RNNs). Technically, Long Short-Term Memory (LSTM) appears to be an appropriate modelling choice \cite{hochreiter1997long}. It is a specific type of RNN that uses memory cells with internal memory and gated inputs/outputs, which has led to the creation of models that are efficient in capturing long-term dependencies. The hidden-layer function of an LSTM is calculated by applying the following equations: $$i_t = \sigma \bigl( W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i \bigr)$$ $$ f_t = \sigma \bigl( W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f \bigr) $$ $$ c_t = f_t c_{t-1} + i_t \tanh \bigl( W_{xc} x_t + W_{hc} h_{t-1} + b_c \bigr) $$ $$ o_t = \sigma \bigl( W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o \bigr) $$ $$ h_t = o_t \tanh(c_t) $$ where $\sigma$ represents the logistic sigmoid function, $i$ the input gate of the LSTM model, $f$ its forget gate, $o$ the output gate and $c$ the cell activation vector. As shown in \cite{bashivan2016mental}, where various trials were performed with EEG data, a single LSTM layer with $128$ cells appears to be a reasonable choice. This architecture was adopted to capture the temporal relationship of the feature maps obtained from each parallel CNN and shaped as a sequence of feature maps. However, only the output produced by the LSTM after seeing the complete sequence of the feature maps produced by each parallel CNN was propagated to a fully connected layer. This fully connected layer also receives the output of a CNN layer fed with the concatenation of the feature maps computed by each of the parallel CNNs, under the reasonable assumption that variations between these maps may contain additional information about the underlying mental state experienced by a subject. This is a single 2D-CNN layer containing $64$ filters with a stride of $1\times1$, valid padding and ReLU as the activation function. The output of this layer was concatenated with the output of the last LSTM, followed by a drop-out layer with a probability of $0.5$, and the result was injected into a dense layer with $512$ neurons and ReLU as the activation function. Another dropout layer with a probability of $0.5$ followed, and a final dense layer with a linear activation function was devised for fitting the brain rate computed for the next window of EEG data following the sequence in time ($z+1$). The research hypothesis that this study seeks to test is the following:\\ \begin{quote} H: IF a convolutional-recurrent deep neural network architecture is trained with spatio-temporal spectral topology-preserving head maps, derived from multi-channel EEG data, to fit a brain rate, an index of cognitive activation, in a self-supervised fashion\\ THEN within-subject and across-subjects models can be induced with low error rates, highlighting recurrent patterns of cognitive activation, and thus cognitive load.\\ \end{quote} In order to test such a research hypothesis, data from a well-known dataset of EEG recordings is employed, namely, the DEAP dataset \cite{koelstra2011deap}, as described in the following section. \subsection{Dataset and pre-processing} Electroencephalographic (EEG) data was recorded from 32 participants while watching 40 one-minute long excerpts of music videos \cite{koelstra2011deap}.
These segments were carefully selected to maximise emotional content, following a procedure based on the valence and arousal dimensions of emotion. The main rationale behind the selection of this dataset was that the data was recorded for a prolonged time, namely 1 minute, and not on the order of seconds, as is often the case for event-related potential studies. The reasonable assumption was that while cognitively processing excerpts of videos and perceiving different emotions, participants would have also experienced different levels of cognitive load \cite{plass2019four}. Cortical activity was recorded at 512 Hz using a Biosemi ActiveTwo system with 32 active AgCl electrodes placed according to the international 10-20 system, with participants sitting 1 meter away from a 17-inch screen. A 5-second fixation cross was shown before each video to act as a baseline. Participants watched two blocks of 20 videos each, separated by a short break. Other peripheral physiological signals and self-reports were recorded in the original experiment \cite{koelstra2011deap}. However, only EEG data from the following 32 EEG channels were considered: Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, O2. A pre-processing procedure using the EEGlab toolbox was applied to the data, including i) downsampling to 128 Hz, ii) EOG artefact removal using a blind-source separation technique, iii) band-pass filtering between 4.0 Hz and 45 Hz, iv) common average referencing, and v) retention of the 3-second pre-trial baseline. For further information, readers are referred to \cite{koelstra2011deap}. \begin{figure}[ht] \centering \includegraphics[scale=0.50]{images/training_data_preparation.jpg} \caption{Pipeline for generating sequences for the convolutional-recurrent neural network.} \label{fig:pipelineTraining} \end{figure} \subsection{Training}\label{sec:training} After the pre-processing pipeline is applied to the selected EEG data, a new procedure (as depicted in Figure \ref{fig:pipelineTraining}) is designed and run to generate training instances for the specific convolutional/recurrent neural network described in the previous section. Here, each video that participants watched lasted for 63 seconds (60 for the actual video and 3 for the baseline). A time window of 2 seconds is set for producing spectral topology-preserving maps by applying the processing pipeline described in Figure \ref{fig:processingTopographicMaps}. This length is deemed short enough for producing a meaningful power spectrum that contains enough points well distributed across the five EEG bands. In detail, given a final sample rate of $128$ Hz, each window contains $256$ points ($128\times2$) spread across the EEG bands for each channel. This means that each video contains 8064 points ($63\times128=8064$). A sliding-window technique is applied across these points, with a shift of $125$ ms ($8$ shifts per second), which translates into a shift of $16$ points ($128\times0.125$). This generates $489$ windows of $2$ seconds ($63\times8-16+1$) for each video in the dataset. The neural network designed in figure \ref{fig:neuralnetwork} is a specific convolutional-recurrent neural network accepting a sequence of windows. As mentioned before, this sequence is set to $z=7$ windows, equating to $14$ seconds of neural activity.
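As a sanity check on this windowing arithmetic, the segmentation can be sketched in a few lines of Python (a minimal illustration, where \texttt{eeg} is assumed to be one video's channels-by-samples array):

```python
import numpy as np

FS = 128                 # sampling rate after downsampling (Hz)
WIN = 2 * FS             # 2-second window = 256 samples
SHIFT = int(0.125 * FS)  # 125 ms shift = 16 samples

def sliding_windows(eeg):
    """eeg: (n_channels, n_samples) -> list of (n_channels, WIN) windows."""
    n_samples = eeg.shape[1]
    starts = range(0, n_samples - WIN + 1, SHIFT)
    return [eeg[:, s:s + WIN] for s in starts]

eeg = np.zeros((32, 63 * FS))     # one 63-second video (8064 samples)
windows = sliding_windows(eeg)
print(len(windows))               # 489 windows per video
print(len(windows) - 7)           # 482 sequences of z = 7 windows
```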
This sequence length is believed to be short enough to allow the detection of some variations in cognitive load, yet not so long as to hamper the automatic learning of temporal dependencies across points. Each of these sequences represents a training input instance. Thus $482$ of these instances (sequences) were produced for each video ($489-7$). As previously mentioned, the designed architecture is a specific self-supervised many-to-one network. The target output is the brain rate computed for the subsequent window outside the sequence, next in time (the $8$th). The goal is to learn this rate from past information; in other words, to estimate a brain rate from the neural activity of the previous $14$ seconds ($7\times2$).\\ \begin{table*}[ht] \centering \begin{tabular}{| l| l| c| c| c | c| c| } \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Type} & \multicolumn{4}{c|}{Instances (training sequences)} & \multirow{2}{18mm}{Repetitions} \\ \cline{3-6} & & Total & Training & Validation & Test & \\ \hline 1-person & within subject & 19280 & 13496 & 2892 & 2892 & 2 \\ 3-persons & across-subjects & 57840 & 40488 & 8676 & 8676 & 10 \\ 5-persons & across-subjects & 96400 & 67480 & 14460 & 14460 & 10 \\ 7-persons & across-subjects & 134960 & 94472 & 20244 & 20244 & 10 \\ 9-persons & across-subjects & 177570 & 125514 & 26028 & 26028 & 10 \\ \hline \end{tabular} \caption{Details of within and across-subjects models with the number of training, validation and test instances, as well as the number of Monte Carlo repetitions.} \label{tab:trainingDetails} \end{table*} Several models are trained within and across subjects to test the research hypothesis, as listed in table \ref{tab:trainingDetails}. Since each participant watched $40$ videos, the total number of sequences associated with each participant equates to $19280$ ($482\times40$). The canonical approach employed in machine learning to create generalisable models would be to shuffle these sequences and split them into training, validation and test sets. However, although technically valid, performing such a shuffle for training a within-subject model would generate a training set likely containing some sequences from each video. In other words, each video would have a certain amount of representative data in the training, validation and test sets. To further increase generalisability, it is decided that the training set contains the entire data from a random $70\%$ of the videos, while the validation and test sets each contain the data associated with $15\%$ of the remaining videos. Thus the shuffle is done at the video level, and data associated with $28$ random videos are selected as the training set ($482\times28=13496$ training sequences), data from $6$ different random videos form the validation set ($482\times6=2892$ sequences), and the data from the remaining $6$ videos form the test set. In this way, generalisability is assessed on unseen test videos, which are expected to elicit different cognitive load fluctuations from those used for training and validating the models. The same rationale is applied to the across-subject models. The only difference is that the training, validation and test sets contain data from a number of randomly selected participants, as listed in table \ref{tab:trainingDetails}. For example, for a 3-persons model, the video-level split is performed for each of the $3$ participants individually.
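A minimal sketch of this video-level split follows (illustrative names and seed; the per-participant splits of an across-subject model would reuse the same routine):

```python
import numpy as np

def video_level_split(n_videos=40, train_frac=0.7, seed=0):
    """Shuffle at the video level: 28 training / 6 validation / 6 test videos."""
    rng = np.random.default_rng(seed)
    videos = rng.permutation(n_videos)
    n_train = int(train_frac * n_videos)        # 28
    n_val = (n_videos - n_train) // 2           # 6
    return (videos[:n_train],                   # training videos
            videos[n_train:n_train + n_val],    # validation videos
            videos[n_train + n_val:])           # test videos

train_v, val_v, test_v = video_level_split()
print(len(train_v) * 482, len(val_v) * 482)     # 13496, 2892 sequences
```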
The resulting individual training, validation and test sets are then concatenated to produce larger sets.\\ $32$ within-subject CNN models (figure \ref{fig:cnn}) are trained, one per participant, each twice with different batch sizes ($32$ and $100$). This step aims to understand the effect of batch-size manipulation on validation and test errors. The rationale is to analyse the trade-off between generalisability and computational resource consumption: larger batches tend to yield more stable convergence towards the optima of the objective function, but at the cost of slower training, since more memory is required and more computations are performed per update. Smaller batches, instead, allow the model to start learning earlier, before seeing all the data, with lower consumption of computational resources; however, convergence to good optima is not guaranteed, with a potential negative impact on generalisability. After assessing the ideal batch size, across-subject models are trained with incremental complexity, in terms of a higher volume of data coming from an increasing number of participants, to assess whether their generalisability still holds under higher heterogeneity in the EEG signals. Additionally, to reinforce the analysis, repeated Monte Carlo sampling is performed for each across-subject model, with a random selection of participants at each repetition. Table \ref{tab:trainingDetails} summarises the number of training, validation and test sequences used and the number of repetitions for each training configuration. The training dataset is not augmented in any way, for example by employing image zooming or flipping techniques, because direction and location in the EEG topographic maps have distinct interpretations, corresponding to specific cortical regions. Training is conducted by optimising the Mean Squared Error (MSE) loss function: $$ \frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2 $$ with $n$ the number of sequences (of length $7$), $y_{i}$ the observed brain rate for that sequence (in the 8th position) and $\hat{y}_{i}$ the predicted brain rate for that sequence. Validation and test MSEs are monitored during and after training. Also, the Mean Absolute Percentage Error (MAPE) is computed: $$ \frac{100\%}{n}\sum_{i=1}^{n}\left |\frac{y_i-\hat{y}_i} {y_i}\right| $$ where $y_{i}$ is the observed brain rate and $\hat{y}_i$ is the predicted one. Their difference is divided by the actual observed brain rate $y_{i}$; the absolute value of this ratio is summed over every predicted brain rate and divided by the number of sequences $n$. MAPE belongs to the family of percentage errors, and it has been selected because such errors are scale-independent, making it especially suitable for across-subject models, and because it is easy to interpret and explain. As mentioned earlier, the parallel CNNs share weights, thus potentially producing different gradients in different internal layers. As a consequence, a smaller learning rate, set to $1e-3$, was employed when applying Stochastic Gradient Descent (SGD) to the CNNs.
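For reference, the two error metrics defined above amount to a few lines of code (a minimal numpy sketch, not the authors' implementation):

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error between observed and predicted brain rates."""
    return np.mean((y - y_hat) ** 2)

def mape(y, y_hat):
    """Mean absolute percentage error, scale-independent (in %)."""
    return 100.0 * np.mean(np.abs((y - y_hat) / y))
```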
The whole convolutional-recurrent neural network was similarly trained with a small learning rate of $1e-4$, optimised with the Adam algorithm \cite{kingma2015amsterdam}, shown to achieve reasonably fast convergence rates, with the decay rates of the first and second moments set to $0.9$ and $0.999$ respectively.\\ The final neural network contains a large number of parameters ($1.62$ million). Considering that several models are trained with an increasing amount of training instances per model, each instance being a tensor of $32\times32\times5\times7$ (where $32\times32$ is the size of the spatially-preserving topographic maps, $5$ the number of EEG bands, and $7$ the number of EEG windows, i.e., the length of the trainable sequence), a significant amount of computational resources, in terms of memory and processing power, is required. Additionally, the many parameters can make each trained model susceptible to overfitting. Therefore, several countermeasures are adopted. As mentioned earlier, all the CNN networks share parameters across the $7$ frames, substantially reducing the number of parameters in the overall architecture. Dropout layers were added after each fully connected layer, with a probability of $0.5$, to minimise overfitting \cite{hinton2012improving,krizhevsky2012imagenet}. Similarly, an early-stopping mechanism is employed to avoid training models for longer than necessary, saving a significant amount of time; it is also an effective regularisation technique for minimising overfitting without compromising model accuracy. It stops training when the updates of the model's parameters no longer yield improvements on a validation set for $E$ consecutive epochs. The value $E$ is called the patience, and in this study it was set to $6$, after some trials. This means that the training phase stops automatically when the error associated with the validation set does not reach a lower value for $6$ consecutive epochs, and the $E$-th-last model is retained as the final model.\\ Data from up to $9$ people are considered to train a single across-subject model, since this is the maximum amount of data that the selected machine has been estimated to process with its resources. In particular, this machine is an Alienware Aurora R8 (model: 02XRCM), Intel Core i7-8700 (6-core, 12 threads), 64 bits, 12Mb L2-cache, 32GB DDR-SRAM, 2 additional graphics cards (GeForce RTX 2070), with the Linux Mint 19.2 operating system, and an internal local total storage of 4 TeraBytes, comprising a primary 1TB SSD (Solid State Drive) hard-disk (model: SK Hynix, PC601 NVMe), a 3.5-inch 2TB hard-drive (model: Seagate BarraCuda ST2000DM008-2FR102) and an additional 1TB SSD hard-disk (model: 2-Power SSD2044A). To allow the training of the across-subject models (up to $9$ persons), a swap memory of 0.5TB was created. \section{Results} \label{sec:results} Figure \ref{fig:batch_size_comparison} depicts the density plots of the validation and test mean squared errors (MSEs) for the $32$ within-subject models trained only by employing the CNN architecture (figure \ref{fig:cnn}), respectively with batch sizes of $32$ and $100$. Similarly, figure \ref{fig:cnnEpochs} depicts the density plots of the number of epochs necessary to train the within-subject CNN architectures, respectively with a batch size of $32$ and $100$, with a minimum of $7$ epochs and a maximum of $60$.
No significant difference exists between the validation and test errors, with the batch size of $32$ leading to slightly better (lower) MSEs. However, although not significantly different, on average the number of epochs necessary to train CNN models with batch size $32$ is lower than that associated with batch size $100$. Every epoch for a within-subject model, on the current machine, required on average $300$ seconds ($5$ minutes); thus, the finalisation of training, according to the minimum and maximum number of epochs ($7$ and $60$), required between $2100$ and $18000$ seconds ($35$ and $300$ minutes). Therefore, $32$ was the batch size selected for training the subsequent within-subject and across-subject models with the full architecture (figure \ref{fig:neuralnetwork}), since it leads to a lower number of training sequences in one forward/backward pass, thus lower memory consumption, as well as a lower number of training epochs, saving a great amount of time. \begin{figure}[ht] \centering \includegraphics[scale=0.55]{images/val_mse_within-subject_cnn.jpg} \includegraphics[scale=0.55]{images/test_mse_within-subject_cnn.jpg} \caption{Comparison of validation and test Mean Squared Error for within-subjects CNN models trained respectively with batch size of dimension 32 and 100.} \label{fig:batch_size_comparison} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.54]{images/cnn_epochs.jpg} \caption{Comparison of the number of epochs to train the within-subjects CNN models respectively with batch size of dimension 32 and 100.} \label{fig:cnnEpochs} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.39]{images/mape_within-subjects.jpg} \caption{Paired histogram of the Mean Absolute Percentage Errors (MAPE) of the test data of the 32 within-subject models respectively trained only with the single Convolutional Neural Network (CNN), and the Convolutional/Recurrent Neural network (CNN+LSTM).} \label{fig:mape_within} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.52]{images/mape_density_within-subjects.jpg} \caption{Density plot of the Mean Absolute Percentage Errors (MAPE) of the test data of the 32 within-subject models respectively trained only with the single Convolutional Neural Network (CNN), and the Convolutional/Recurrent Neural network (CNN+LSTM).} \label{fig:mape_within_density} \end{figure} Figures \ref{fig:mape_within} and \ref{fig:mape_within_density} depict the Mean Absolute Percentage Errors (MAPE) for the test data of the within-subject models for the $32$ participants, trained first with the single CNN architecture of figure \ref{fig:cnn} for learning the weights (in solid red), and then with the convolutional/recurrent neural network with the parallel CNNs, sharing such weights, and the LSTM component for temporal learning (figure \ref{fig:neuralnetwork}) (in dashed blue). As it is possible to notice, the test MAPE has a mean of $0.111$ (Std: $0.073$) for the single CNN models and a mean of $10.75$ (Std: $0.070$) for the CNN+LSTM models (note that the former is expressed as a fraction and the latter as a percentage; both correspond to an error of roughly $10\%$). These results demonstrate that the brain rate prediction for each participant's unseen test data is good, because the forecast is only off by roughly $10\%$. However, at first glance, the addition of the recurrent component (the Long Short-Term Memory), as in the architecture depicted in figure \ref{fig:neuralnetwork}, does not seem to add much value in minimising the MAPE.
This seems to point to the standalone capability of the single CNN architecture (figure \ref{fig:cnn}) to learn the relevant patterns, intricacies and relationships in the data, in the shape of topographic head maps containing information in the $5$ EEG frequency bands, for the specific window length used ($2$ seconds). However, the LSTM layer takes a sequence of $7$ outputs from the single CNNs (in addition to a vector containing their variational information) and tries to fit the brain rate associated with the next window (the $8$th, after the sequence). The fact that the MAPE of the CNN+LSTM does not significantly change (decrease) does not mean that the LSTM did not learn any temporal relationships and dependencies in the input sequences. This can be demonstrated by inspecting figure \ref{fig:exampleComparisonsPreds}, in which the brain rate index, the predictions of the single CNN model and those of the CNN+LSTM are compared for some within-subject models associated with random participants and a random video in their respective test sets. In detail, these figures show that the brain rates (green), computed for each of the 482 instances, as explained in section \ref{sec:training} (and depicted in figure \ref{fig:pipelineTraining}), associated with a specific video that a participant has watched, not used for training the within-subject model of that participant, are reasonably approximated by the single-CNN within-subject model (red). However, the brain rate indexes seem better approximated by the CNN+LSTM within-subject model (blue). \\ The comparisons of figure \ref{fig:exampleComparisonsPreds} highlight a number of things. Firstly, the main bursts in the brain rates are captured by both the CNN and the CNN+LSTM models. However, those associated with the CNN (red) are shifted slightly to the right (along the time axis) when compared to those associated with the CNN+LSTM (blue), which seem to be more closely aligned with the brain rates (green) over time. This is confirmed by the Pearson correlation coefficient, which, on average across participants and test videos, is $0.5$ for the CNN models and $0.7$ for the CNN+LSTM models. This means that the LSTM layer in the CNN+LSTM architecture did learn some temporal relationships and long/short-term dependencies. The CNN+LSTM predictions are smoother than those produced by the single CNN, which might be justified by the fact that they are based on information taken from the preceding $7$ consecutive EEG windows over time. For the same reason, the scale (y-axis) of the predictions of the CNN+LSTM (blue) is a bit lower than that of the others (red and green).\\ \begin{figure*} \centering \includegraphics[scale=0.35]{images/p23_a.jpg} \includegraphics[scale=0.35]{images/p23_b.jpg} \caption{Illustrative comparisons of the brain rate index, the single Convolutional Neural Network (CNN) predictions and the Convolutional/Recurrent Neural Network (CNN+LSTM) predictions for two random participants and a random video used in the test set.} \label{fig:exampleComparisonsPreds} \end{figure*} Regarding the across-subjects models, as planned in table \ref{tab:trainingDetails}, figure \ref{fig:compMAPE_CNN_CNNLSTM_across-subject} depicts the density plots of their Mean Absolute Percentage Errors (MAPEs) on the test sets. In detail, each density curve contains the MAPEs associated with the test sets of 10 models, each trained with the respective number of random people.
As it is possible to see, the test MAPEs are lower on average for those models trained with material taken from $9$ people (black), followed by those trained with 7 (brown), 5 (grey) and 3 people (yellow). Additionally, the standard deviations (width of each curve) are smaller (thinner) for those trained with data from more people and larger for those trained with data from fewer people. This means that smaller standard deviations are associated with steadier models, capable of predicting brain rates on the test data more consistently. These results might seem intuitive, because it can be argued that the more training material is available, the more a model can learn. However, the training material comes from different numbers of people, selected randomly at each run, and their cerebral responses differ while watching videos, exhibiting different power activations and temporal dynamics. This introduces higher variability within the data, thus making a model prone to confusion while learning. Despite this, across-subject models can mitigate the influence of such increasingly high variability and can learn consistent higher-level representations that are more generalisable across people. \\ \begin{figure}[ht] \includegraphics[scale=0.365]{images/mape_across_subjects_CNN.jpg} \includegraphics[scale=0.365]{images/mape_across_subjects_CNNLSTM.jpg} \caption{Comparisons of the test Mean Absolute Percentage Error (MAPE) of the across-subject models grouped by the type of architecture which is the single convolutional neural network (CNN) and the convolutional/recurrent neural network (CNN+LSTM).} \label{fig:compMAPE_CNN_CNNLSTM_across-subject} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.43]{images/across-subject_CNN_vs_CNNLSTM.jpg} \caption{Pairwise comparisons of the test Mean Absolute Percentage Error (MAPE) of the across-subject models trained respectively with the single convolutional neural network (CNN) and the convolutional/recurrent neural network (CNN+LSTM) compared to the within-subject models.} \label{fig:pairwisecompMAPE_CNN_CNNLSTM_across-subject} \end{figure} Figure \ref{fig:pairwisecompMAPE_CNN_CNNLSTM_across-subject} plots the pairwise comparison of the across-subject models trained with the single CNN and the CNN+LSTM architectures, grouped by the number of people, together with the density curve associated with the MAPEs of the within-subject models, used here as a baseline. Noticeably, the density plots associated with the models trained with the CNN+LSTM architecture (dashed lines) contain lower MAPEs on the test sets than those associated with the models trained with the CNN only (solid lines). This means that the addition of the Long Short-Term Memory (LSTM) layer for temporal learning had a positive impact on building more accurate models, although, in this study, not a statistically significant one. Additionally, these results suggest that the convolution of the topology-preserving topographic maps over space (down-sampling) could learn some repetitive high-level patterns within an EEG window (set to 2 seconds). In other words, as expected in the research hypothesis set in section \ref{sec:design}, within-subject and across-subjects models can be induced from spatio-temporal spectral topology-preserving head maps derived from multi-channel EEG data to fit a brain rate, an index of cognitive activation, with low error rates, demonstrating the existence of recurrent patterns of cognitive load over time.
A more detailed interpretation of these results, along with a discussion of the strengths and limitations of the designed method for cognitive load modelling, is provided in the following section. \section{Discussion} \label{sec:discussion} The computational method described and tested in the previous sections is fully automated and allows the induction of a model of cognitive load from EEG data based on deep learning without requiring human intervention. In summary, this novel method: \begin{itemize} \item is based on data-driven deep-learning techniques for automatic inductive learning \cite{lecun2015deep}; \item is built upon electroencephalography (EEG), a non-invasive method for gathering brain responses with high temporal resolution \cite{craik2019deep}; \item is sensitive to variations of brain responses over time thanks to its recurrent neural network component \cite{hochreiter1997long}; \item is robust to deformation and translation of the signal in space and frequency, thanks to the ability of its convolutional neural network component to learn meaningful representations \cite{lecun1998gradient}; \item is built upon 2D spectral topology-preserving head maps that are rich in information and also more explainable than vectorial data \cite{LongoGLKH20, VILONE202189, Vilone2021Output}; \item is self-supervised and does not require human intervention and explicit declarative knowledge \cite{banville2021uncovering}; \item is constructed upon a brain rate, a measure of cognitive activation, treated as an index of cognitive load that considers cortical brain oscillations weighted over the potentials of all the canonical EEG bands; \item is flexible with short EEG segments, thanks to its time-slicing procedure over cortical recordings; \item is adjustable and customisable, because it can be trained on EEG data collected from a variable number of electrodes, it can be employed with different ranges for the five EEG bands (delta, theta, alpha, beta, gamma), and with EEG windows of varying size; \item is replicable and open to falsifiability \cite{popper2005logic}, supporting the formation of models of cognitive load with higher generalisability. \end{itemize} This method allowed the fully-automated formation of within-subject and across-subject models of cognitive load from EEG signals. These models fit a brain rate, an index of cognitive activation, with good accuracy, measured by the Mean Absolute Percentage Error (MAPE) on the test sets, demonstrating a good degree of generalisability to unseen data. In detail, each within-subject model, trained with EEG material from a single person, could predict the brain rate of unseen EEG data - as encoded with spatially preserving topographic head-maps built upon 32 channels - with a MAPE of 0.11 (as a fraction) and 10.75 (as a percentage; std 0.073 and 0.070), using only a convolutional neural network architecture for spatial learning, and its extension with a long short-term memory layer for temporal learning, respectively. The across-subject models, induced from an increasingly higher amount of EEG data from different people, confirmed these results and maintained the same testing accuracy as measured with MAPE, despite the increasing variability within the training data. This persistence in achieving similar testing accuracy, despite higher variability in the training data, can be seen as positive, because it highlights the existence of some patterns within EEG data that are repetitive and stable.
This observation might be linked to microstate theory, which assumes that distributions of activity across the scalp persist for milliseconds before changing into a different pattern \cite{MICHEL2018577}. EEG microstates can be seen as transient, quasi-stable patterns of an electroencephalogram \cite{wackermann1993adaptive,khanna2015microstates}. An analogous interpretation can be applied to the findings of the current work: the trained models might have learned quasi-stable patterns of mental activation fluctuations, as modelled by the brain rate. The convolution applied to the spatially preserving topographic head-maps, built over the five EEG frequency bands, has already led to the development of within and across-subject models with good accuracy. This suggests that quasi-stable high-level representations can be induced by the convolutional operations and successfully mapped to a brain rate. This view might be reinforced by the minimal decrement of the test MAPEs obtained by those models trained with the LSTM layer in the neural network for temporal learning. The fact that it was minimal suggests that the sequence of convolved representations over time is not as important as the actual representations alone, taken individually, which seem to be already rich in information and able to capture certain repetitive patterns of cognitive activation. \section{Conclusion} Cognitive load, often referred to as mental workload \cite{HancockLongo2021}, is one of the most invoked concepts in the discipline of human factors, with important utility within human-computer interaction, neuroscience and education \cite{longo2022human}. Unfortunately, a reliable, generally applicable computational method for cognitive load modelling does not exist yet, complicating applied research. This research, the first of its kind, was aimed at developing a method for cognitive load modelling with generalisability in mind, supporting its application across disciplines, replicability, comparisons across studies, and thus enabling falsifiability. All these advantages are aimed at supporting research on cognitive load modelling at a larger scale, avoiding the creation of another ad-hoc, field-dependent, knowledge-dependent and application-driven method of mental workload that has little chance of being generally applicable across empirical works. This novel method employs Deep Learning techniques of Artificial Intelligence for the automatic formation of models of cognitive load, in a self-supervised way, drastically limiting human intervention and declarative knowledge. These models work on continuous EEG data, thus offering high temporal resolution. They are built upon a newly designed notion of brain rate, a particular index of cognitive load derived from the five EEG frequency bands (delta, theta, alpha, beta, gamma). This method works on spatially-preserving topographic head-maps of cognitive activation, offering spatial resolution and supporting diagnosticity. In this study, these maps are based on spectral information derived from the five EEG bands, which are known to be rich in information for deriving mental states and facilitating the analysis and interpretation of human behaviours.\\ Findings suggest that within-subject and across-subjects models of cognitive load, developed with the newly devised computational method, are accurate, exhibiting a low prediction error on unseen data, thus showing a good degree of generalisability.
They suggest that certain high-level representations can be extracted automatically from EEG data in the frequency bands, and that these appear frequently over time. However, these recurring blocks of mental activation do not seem to follow repetitive temporal sequences, in line with the non-stationary nature of brain activation. In other words, frequent, quasi-stable high-level representations of cognitive activation exist, but their order of appearance over time is not repetitive. Additionally, these representations seem to be repetitive across subjects, with important implications for the research field of mental workload. Their existence might suggest that general patterns of cognitive load exist and are subject-independent, therefore offering great generalisability. However, to confirm this claim, further studies are needed.\\ Future work will include replicating the method developed in this research study with varying time-window sizes and investigating how these influence the accuracy of the resulting cognitive load models. A layer of interpretability for the automatically extracted higher-level representations will be deployed, in line with principles and practices from Explainable Artificial Intelligence \cite{VILONE202189,Vilone2021Output} and knowledge representation \cite{LONGO2021106514}. This will help understand the shape of these representations, and the recurrently activated brain regions, giving analysts a richer level of interpretability. It will also serve as a layer of explainability, providing analysts with tools for explaining the spatial and temporal dynamics of cognitive activation. The inferences of these models of cognitive load can be compared against other indexes, such as the theta-to-alpha or alpha-to-theta band ratios \cite{RaufiLongo2022}, increasing their meaningfulness and validity. Finally, studies can be devoted to the development of additional recurrent neural networks for understanding the temporal aspects of the high-level representations of cognitive activation, and to establishing whether repetitive and recurrent sequences exist over time, and of what lengths. These future avenues will expand the science of mental workload and support the formation of models of cognitive activation with increasing accuracy and generalisability, in turn facilitating the analysis of human behaviour. \bibliographystyle{plain}
{ "timestamp": "2022-09-23T02:15:09", "yymm": "2209", "arxiv_id": "2209.10992", "language": "en", "url": "https://arxiv.org/abs/2209.10992" }
\chapter{Introduction} Deep Learning models have recently shown their capacity and versatility when applied to varied, unstructured, and high-dimensional sorts of data. In contrast to traditional machine learning models, deep artificial neural networks can easily deal with tremendous amounts of data, being able to model the complex reality of the world. Additionally, many concepts of deep learning are easily transferable among different modalities. All of this makes it possible to use them for many applications, including natural language translation \citep{languagetranslation}, detection of genetic disorders \citep{geneticdisorder}, and control of nuclear fusion plasma \citep{nuclearfusion}. Moreover, an increasing number of studies now focus on the objective of finding a unique neural architecture and systematic training procedure that could be applied to any source of data \citep{data2vec}. In this fruitful context, a new category of models called multimodal machine learning has emerged, aiming to jointly process multiple sources of data such as text and image. We focus in this thesis on one type of multimodal models: \textit{text-to-image generative models}. These models learn to synthesize images given an image description, and recent works have shown that scaling up their size enables them to produce complex photorealistic images \citep{dalle2, imagen}. They also have a \textit{zero-shot} learning capacity to generalize, which enables them to synthesize image types that have not been seen during training \citep{dalle}. However, training these models necessitates handling massive datasets of captioned images. For example, the state-of-the-art models for text-to-image generation DALL-E 2 \citep{dalle2} and Imagen \citep{imagen} use 650M and 860M image-text pairs respectively. This involves particularly long training runs, requiring substantial amounts of computational power and resources. In addition, the datasets and code repositories are often unavailable to the public community, which makes the process of replicating these generative models even more difficult. \begin{figure} \centering \includegraphics[width=1\linewidth]{assets/best_images_introduction.pdf} \caption{Some of the images synthesized by our text-to-image generative model, conditioned on the corresponding textual caption.} \label{fig:best_images} \end{figure} Moreover, while the pace of development is high, little work has been devoted to determining what these models are really capable of, partly due to the difficulty of reproduction. In addition, novel approaches use heuristics which often lack theoretical foundations. It is therefore crucial to allow the wider deep learning research community to experiment with these models, by finding techniques to replicate them more easily and more efficiently. This aspiration is in line with the DALL-E mini model \citep{dalle_mini}, which intends to reproduce the text-to-image generative model DALL-E \citep{dalle}. In particular, it proposes several tricks to decrease the computational load, including a smaller dataset, the use of pre-trained models, and the replacement of the auto-regressive model by a bidirectional encoder. However, the approach followed by DALL-E and its mini counterpart has been outperformed by methods leveraging diffusion models to produce higher-quality images.
Diffusion models are a new class of generative models which are quietly starting to replace previous models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). In particular, the two state-of-the-art methods in text-to-image generation, DALL-E 2 and Imagen, use them to synthesize high-resolution photorealistic images. In this thesis, we therefore propose an implementation\footnote{Our implementation is available at \url{https://github.com/epfml/text_to_image_generation}} of a text-to-image generative model, based on DALL-E 2. Like DALL-E mini, we introduce slight modifications to make DALL-E 2 trainable without an explosion of the computational load. This is, to our knowledge, the first available replication of a text-to-image generative model based on diffusion models. We then have the opportunity to experiment with this model to understand the possibilities and limitations of our approach. Its characteristics allow us to manipulate image and text representations in the form of vector embeddings capturing the semantic content of the data. This enables us to perform simple algebraic operations on these vectors to gain a more detailed understanding of what information they contain. Finally, we propose a simple new guidance method called \textit{image guidance} to help the generative process by means of an extra image, and demonstrate its usefulness with some experiments. This master's thesis is structured as follows. We start in \autoref{sec:related_work} by introducing the related work from which we take inspiration to build our method, as well as alternative approaches for text-to-image generation. Next, we provide in \autoref{sec:background} some technical background knowledge which is essential to understand our system. In particular, we take a deep dive into diffusion models and how to guide them to improve sample quality. We also take the time to explain in detail the CLIP model \citep{clip}, as our method uses it extensively. Chapter \ref{sec:method} then presents our method and its different components, whereas \autoref{sec:experiments} proposes some experiments to assess the quality of our system and to provide insights into why this method is useful. We then discuss in \autoref{sec:discussion} the results obtained, comparing them with the related works and proposing future directions for this work. We also include a section on the societal impacts caused by the type of models we use in our method. Finally, we conclude this thesis in \autoref{sec:conclusion} by reviewing its principal aspects. \chapter{Related work} \label{sec:related_work} We describe in this chapter the different related works and approaches for text-to-image generation. We start in \autoref{sec:gen_models} by reviewing what deep generative models are and briefly explain the different approaches devised to generate samples from a distribution. We then explore text-to-image generation and, in the process, briefly recall computer vision and natural language processing models. \section{Deep Generative Models} \label{sec:gen_models} Machine learning models are divided into two distinct categories: discriminative and generative models. Discriminative models aim to approximate $p(y|x)$, allowing one to predict a target $y$ given an observation $x$. Logistic regression and decision tree models fall into this category, as do some deep neural networks, for example image classifiers.
In contrast, generative modelling aims at solving a more general task consisting of learning the joint distribution $p(x)$ over all the variables $x$; $p(x)$ can be, for instance, the distribution of the pixels of an image. This is a more complex task, but it allows drawing samples from the distribution $p(x)$, e.g., to generate images. A good generative model $p_\theta(x)$ therefore tries to maximize the likelihood of the data, or at least an approximation of the likelihood when its computation is not tractable. Moreover, these models can easily be conditioned on some value $y$ to obtain a conditional probability distribution $p(x|y)$. As an example, $y$ can be a label or a caption indicating the content of the image $x$. Over the past years, different approaches and models relying on deep learning have been devised to generate samples, including GANs, VAEs, flow-based models, autoregressive models and, finally, diffusion models. Since we will often mention them and their performances, we provide here a brief description of each of them, as well as their upsides and downsides: \begin{itemize} \item \textbf{Generative Adversarial Networks (GANs)}: GANs \citep{GAN} are generally composed of two distinct models with antagonistic objectives. The first is called the generative model $G$ and tries to capture the data distribution, while the second, named the discriminative model $D$, is designed to differentiate synthetic samples generated by $G$ from real samples of the data distribution. In other words, $G$ tries to fool $D$ during this adversarial training. A large body of work on models from the GAN family has been produced in the past years. Although sample quality is generally high, GANs suffer from mode collapse, leading to a lack of sample diversity \citep{biasGAN}. Moreover, the adversarial nature of the training makes the latter unstable and therefore often laborious \citep{unstableGAN}. \item \textbf{Variational Auto-Encoders (VAEs):} VAEs \citep{VAE} are probabilistic generative models learning first to encode data into a constrained lower-dimensional latent space and second to decode from the latter to recover the data. The constraints on the latent space make it possible to obtain samples close to the original data distribution by simply sampling randomly from the latent space. VAEs often obtain high log-likelihood values, but struggle to produce sharp, high-quality samples. \item \textbf{Flow-based generative models:} This family of models consists of applying a sequence of invertible parameterized functions \citep{nice, normflow}. In contrast to GANs and VAEs, flow-based generative models directly maximize the exact log-likelihood. Nonetheless, these models are generally outperformed in terms of sample quality for now, primarily because of the difficulty of finding effective invertible architectures. \item \textbf{Autoregressive models:} Autoregressive models generate the output sequentially, conditioned on past parts of the output. For instance, for image generation, PixelCNN \citep{pixelCNN} and iGPT \citep{iGPT} generate the image pixel by pixel. However, their autoregressive nature makes the sampling complexity grow linearly with the size of the output. \item \textbf{Diffusion models:} Diffusion models have recently demonstrated their ability to produce high-quality samples. They generate samples from a data distribution by progressively removing noise from a noisy data sample. This is done by sequentially applying the same model to the data.
Consequently, the principal downside is the long sampling time, consisting of many Deep Neural Network (DNN) forward passes. Nevertheless, less expensive alternatives have been devised to reduce this number of forward passes \citep{DDIM}. We explain the mechanisms of diffusion models in more detail in the next section. \end{itemize} Note that these approaches are not mutually exclusive, and some recent models combine concepts, such as VQ-GANs \citep{vqgan}, autoregressive diffusion models \citep{ARDM}, or denoising diffusion GANs \citep{DDGAN}. \section{Text-to-Image Generative Models} Now that we have a clear view of what generative models are and know a few instances of them, we delve into the literature on a specific type of generative model: text-to-image models. As their name suggests, they aim to synthesize images from text, thus involving two different modalities. This entails processing the data differently, using a particular sort of model depending on the type of data. Image data requires models belonging to the field of computer vision (CV), whereas text data is linked to the field of natural language processing (NLP). We therefore start by briefly summarizing the recent advances in these two areas, before introducing state-of-the-art models in text-to-image generation in a second phase. Since the seminal work of Yann LeCun \citep{LeCun-gradient}, deep neural networks (DNNs) have revolutionised the area of \textbf{computer vision}, being able to obtain prodigious levels of performance on a diverse set of visual tasks, including image classification \citep{AlexNet}, image generation \citep{glow}, and object detection \citep{YOLO}. While Convolutional Neural Networks (CNNs) have long been the \textit{de facto} favourite backbone architectures in computer vision, new types of models have recently emerged. An architecture imported and adapted from NLP, called the Visual Transformer \citep{vit}, achieved similar, and sometimes even better, results than traditional CNN models on different CV applications. Despite this paradigm shift, CNNs have not yet been abandoned \citep{convnext}, and hybrid architectures are being developed to combine the built-in inductive biases of the convolution operation with the self-attention mechanisms of Transformer models \citep{conv_transformer}. Moreover, recent improvements in computer vision are principally due to the use of larger models and bigger datasets. They were made possible by the rise of self-supervised learning (again coming from NLP), which allows leveraging massive unlabeled datasets from the web \citep{vit}. On the other hand, the boom of efficient and valuable models for \textbf{natural language processing} took place a few years after the one in computer vision. The expansion of language models arose with the development of the Transformer architecture mentioned above \citep{transformer}, replacing recurrent neural networks (RNNs). Foundation models implementing the Transformer, such as BERT \citep{bert} or GPT \citep{gpt}, have demonstrated that profound natural language understanding could emerge using generative pre-training along with self-supervised learning. This consists of first pre-training the language model on a varied corpus of unlabeled text, and second fine-tuning it on a more specific downstream task, without substantial architecture alterations. It hence makes it possible to transfer the richness of vast datasets to a broad range of tasks, by learning a vector representation of a piece of text.
These vector representations, also called text embeddings, are supposed to capture the syntactic and semantic patterns in natural language. They are therefore very effective for many language tasks. In addition, and similarly to computer vision, scaling the size of the models and datasets led to major improvements. The third and biggest version of GPT \citep{GPT3} showed that a larger generative language model could handle more difficult reasoning tasks. Likewise, the largest language model ever created, PaLM (540B parameters), demonstrated its remarkable adaptability, even being able to explain jokes to humans \citep{palm}. \begin{figure} \centering \includegraphics[width=1\linewidth]{assets/timeline.pdf} \caption{Timeline of text-to-image generative models.} \label{fig:timeline} \end{figure} The astounding effectiveness of deep learning in computer vision and natural language processing naturally gave birth to multimodal models combining these two fields, i.e., \textbf{vision-language models}. They are able to learn joint representations of text and image to accomplish many vision-language tasks, including Visual Question Answering (VQA), Visual Retrieval (VR), and Visual Captioning (VC). In particular, we focus on the text-to-image generation task, as this is the one we are interested in. The idea of synthesizing an image given a caption began with the early work alignDRAW \citep{alignDRAW}. However, the image quality was very low, and the scenes and objects generated were barely recognizable. This was followed by five years of modest improvements instigated by the progress of GANs, with models such as AttnGAN \citep{AttnGAN}, DM-GAN \citep{DM-GAN}, or DF-GAN \citep{DF-GAN}. Although some of the content of the captions was beginning to be represented, the images were still not realistic, except for restricted and simple datasets, e.g., the CUB dataset \citep{cub}, which only consists of bird images. Integrating contrastive learning in the pipeline, and especially increasing the dataset size, enabled XMC-GAN \citep{XMC-GAN} to produce better images depicting clearer scenes. In the same vein, the authors of DALL-E \citep{dalle} showed that further scaling up the dataset to 250M image-text pairs could enable zero-shot learning. Thus, the DALL-E model is able to mix different objects, concepts, and places to produce non-iconic images, e.g., an avocado chair. DALL-E, however, doesn't use GANs; instead, it exploits a VQ-VAE \citep{vqvae} as well as two Transformers, riding the wave of the Transformer model. Although DALL-E outputs were seen as astonishing, a novel category of generative models demonstrated that it was only the beginning. The same year as the release of the DALL-E paper, \cite{ADM} showed that diffusion models could surpass GANs on class-conditional image generation. Then, GLIDE \citep{glide} introduced diffusion models for text-to-image synthesis and obtained higher-quality images than DALL-E, while being trained on the same dataset. The authors of GLIDE use a Transformer language model to embed the textual image descriptions, and then diffusion models conditioned on the embeddings to produce $256 \times 256$ images. During the development of this thesis, two major works were then released a month apart: DALL-E 2 \citep{dalle2} and Imagen \citep{imagen}. DALL-E 2 is very different from its former version. Similarly to GLIDE, it decodes embeddings to generate images. However, the embeddings come from CLIP \citep{clip}, a vision-language model learning image-text representations.
We dedicate a section of this thesis to CLIP (\autoref{sec:clip}). In short, the two CLIP encoders produce embeddings for images and texts respectively, where the cosine similarity between the two embeddings of a matching image-caption pair is assumed to be higher than that between embeddings of uncorrelated pairs. DALL-E 2 leverages these pre-trained encoders and learns a prior model to translate from a CLIP text embedding to a CLIP image embedding. They also use cascaded diffusion models \citep{cascade} to upsample the images from $64 \times 64$ to $1024 \times 1024$. The structure of the method we propose in this thesis is very close to that of DALL-E 2; reading \autoref{sec:method}, which explains our method, can therefore help to understand DALL-E 2 in depth. Imagen, on the other hand, is more similar to GLIDE, but instead of training the language model from scratch, the authors reuse a large frozen Transformer model trained on a massive text-only corpus, called the T5 model \citep{T5}. They show that increasing the size of the language model leads to greater improvements than increasing the size of the diffusion model. Besides, they double the size of the captioned-images dataset and propose a few tricks to generate more realistic images, including architecture modifications and the introduction of \textit{dynamic thresholding} to improve guidance, which we therefore use in our method and detail in \autoref{sec:dynamic thresholding}. The authors claim that their model outperforms DALL-E 2, since they obtain a lower FID (a metric described in \autoref{sec:experiments}) on the validation set of the MS-COCO dataset \citep{coco}. We compare these two models with our method in \autoref{sec:comparison}. Due to their recent release, no replication of DALL-E 2 or Imagen has been fully implemented to date. The most recent and efficient replication of a text-to-image generative model is DALL-E mini, inspired by DALL-E. DALL-E mini demonstrates that decent performance in text-to-image generation can still be achieved despite using smaller datasets and models. Nevertheless, DALL-E mini doesn't take into account the recent breakthroughs in image generation initiated by diffusion models. We therefore aim to complete their work by replicating a text-to-image model based on diffusion models. \chapter{Background} \label{sec:background} We introduce in this chapter all the required knowledge to understand our method. It starts with a reminder about diffusion models in \autoref{sec:diffusion_models}, as well as the different ways to guide them to generate more realistic images. In particular, we introduce a new guidance method called image guidance. We then dedicate \autoref{sec:clip} to CLIP embeddings \citep{clip}, which are extensively used by our method. We assume the reader is already familiar with basic machine learning and deep learning concepts. Otherwise, we recommend reading \cite{bishop} and \cite{deeplearning}. \section{Diffusion Models} \label{sec:diffusion_models} \subsection{Introduction} \label{sec:introduction_diffusion_models} Diffusion models were originally inspired by a modelling approach for molecular systems called Langevin dynamics. They were first introduced by \cite{sohldickstein2015}, and progressively appeared to be a serious alternative to traditional generative models such as GANs or VAEs, obtaining outstanding results in text-to-image generation \citep{glide, dalle2, imagen}.
Moreover, their generative capacity can be applied to synthesize various sources of data: images \citep{DDPM}, text \citep{diffusion_text}, speech \citep{fastdiff}, music \citep{musicdifmodel}, video \citep{videodifmodel}, or time series \citep{timegrad}. We consider image generation here since this is the case we are interested in, but the concepts are similar for other modalities. \begin{figure} \centering \includegraphics[width=1\linewidth]{assets/diffusion.pdf} \caption{The forward (diffusion) process progressively adds noise to the image, whereas the reverse (inference) process removes the noise to retrieve the initial image.} \label{fig:diffusion} \end{figure} Let us suppose that we gradually add random noise to each pixel of an image. After a sufficient number of noising steps, the image becomes pure noise and the signal is totally destroyed. Diffusion models try to learn the reverse process, i.e., to iteratively recover the initial image from the noisy one. In mathematical terms, we call the initial image $x_0$, obtained from the data distribution $q(x_0)$. $x_T$ is the final noisy image, where $T$ is the number of noising steps sequentially applied to $x_0$. All the noised images $x_1, \ldots, x_{T}$ are latent variables, with the same dimensionality as $x_0$. We define this progressive noising process as the \textit{forward process} or \textit{diffusion process}, with the following distribution: \begin{align} q(x_1, \ldots, x_T | x_0) := \prod^T_{t=1} q(x_t | x_{t-1}). \end{align} Note that we assume here that the forward process is a Markov chain, making this diffusion model a Denoising Diffusion Probabilistic Model (DDPM) \citep{DDPM}. Non-Markovian forward processes could also have been considered, leading to models such as DDIM \citep{DDIM}. If we assume that the noise added at each step is Gaussian, then we can define: \begin{align} q(x_t | x_{t-1}) := \mathcal{N}\left(x_t; \sqrt{1 - \beta_t} x_{t-1}, \beta_t \mathbf{I}\right) \end{align} where $\beta_1, \ldots, \beta_T \in (0, 1)$ form the \textit{variance schedule}. It controls the level of noise added at each step. We observe that if $\beta_1 < \beta_2 < \ldots < \beta_T$, then for $T \to \infty$ the final latent becomes an isotropic Gaussian random variable, i.e., $x_T \sim \mathcal{N}(0,\mathbf{I})$. This is a desired property since it enables one to generate samples simply by drawing from a Gaussian distribution, similarly to generative models such as GANs or VAEs. Linear \citep{DDPM} and cosine \citep{improved-DDPM} variance schedules satisfy this property. By defining $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod^t_{s=1} \alpha_s$, the following reparametrization emerges from the forward process \citep{DDPM}: \begin{align} q(x_t|x_0) &:= \mathcal{N}\left(x_t; \sqrt{\bar{\alpha}_t} x_{0}, (1 - \bar{\alpha}_t) \mathbf{I}\right),\\ x_t &= \sqrt{\bar{\alpha}_t} x_{0} + \sqrt{1 - \bar{\alpha}_t} \epsilon, \end{align} with $\epsilon \sim \mathcal{N}(0, \mathbf{I})$, since the sum of independent Gaussian random variables is also Gaussian. Thus, this marginal distribution allows us to sample any arbitrary step $x_t$ conditioned on the image $x_0$, which is handy for training the model as we will see. We are now interested in learning the reverse distribution $q(x_{t-1}|x_{t})$: after sampling $x_T \sim p(x_T) = \mathcal{N}(x_T; 0,\mathbf{I})$, we could just run the process in reverse to obtain a sample from $q(x_0)$, i.e., a synthetic image.
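Before turning to the reverse process, note that this closed-form marginal is exactly what makes training convenient: any $x_t$ can be sampled directly from $x_0$. A minimal PyTorch sketch (assuming a linear variance schedule for concreteness; the helper names are ours, not from any library):
\begin{verbatim}
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # linear variance schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{alpha}_t (index 0 <-> t = 1)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) via the reparametrization trick."""
    eps = torch.randn_like(x0)              # epsilon ~ N(0, I)
    ab = alpha_bars[t].view(-1, 1, 1, 1)    # broadcast over (B, C, H, W)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps, eps
\end{verbatim}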
However, we cannot easily estimate $q(x_{t-1}|x_{t})$ since it requires access to the full data distribution. Instead, we approximate it by learning a model $p_\theta$ called the \textit{reverse process}, such that \begin{align} p_\theta(x_0,\ldots,x_T) := p(x_T) \prod^T_{t=1} p_\theta(x_{t-1}|x_t). \end{align} Besides, we define \begin{align} p_\theta(x_{t-1}|x_t) := \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)), \end{align} where the mean $\mu_\theta(x_t, t)$ is computed by a neural network and the variance $\Sigma_\theta(x_t, t)$ can be obtained in different ways. \cite{DDPM} fixed this variance to a constant, but \cite{improved-DDPM} showed that learning the variance was beneficial to reduce the number of diffusion steps. They parameterize the variance as the following interpolation \begin{align} \Sigma_\theta(x_t, t) = \exp(v \log \beta_t + (1 - v) \log \Tilde{\beta}_t) \end{align} with $\Tilde{\beta}_t = \beta_t\frac{1 - \bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}$ and $v$ the output of a neural network. The combination of the forward process $q$ and the backward process $p$ can be interpreted as a variational auto-encoder \citep{VAE}, leading to the optimization of the usual variational lower bound (VLB) on the negative log likelihood (also called the ELBO), defined as follows: \begin{align} L_{\text{vlb}} &:= L_0 + L_1 + \ldots + L_{T-1} + L_{T} \\ L_{0} &:= -\log p_\theta (x_0|x_1) \\ L_{t-1} &:= D_{KL}(q(x_{t-1}|x_t, x_0) || p_\theta (x_{t-1}|x_t)) \\ L_{T} &:= D_{KL}(q(x_{T}|x_0) || p(x_T)) \end{align} where $D_{KL}$ is the Kullback–Leibler divergence. Let us first observe that $L_T$ is constant with respect to the parameters $\theta$ and that $L_0$ can be easily evaluated using the CDF of the Gaussian distribution. For now, the only missing part is how to compute the posterior distribution $q(x_{t-1}|x_t, x_0)$. Fortunately, as shown by \cite{DDPM}, the posterior is tractable using Bayes' theorem when it is conditioned on $x_0$: \begin{align} q(x_{t-1}|x_t, x_0) &= \mathcal{N}(x_{t-1}; \Tilde{\mu}_t(x_t, x_0), \Tilde{\beta}_t\mathbf{I}) \\ \Tilde{\mu}_t(x_t, x_0) &:= \frac{\sqrt{\bar{\alpha}_{t-1}}\beta_t}{1-\bar{\alpha}_{t}}x_0 + \frac{\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}x_t \\ \Tilde{\beta}_t &:= \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}} \beta_t. \end{align} In consequence, all terms of $L_{\text{vlb}}$ (except $L_0$) are KL divergences between two Gaussian distributions, which can be evaluated in closed form. During training, we can apply stochastic gradient descent by considering only one random term of $L_{\text{vlb}}$. However, \cite{DDPM} found that optimizing a different loss could improve sample quality, at the cost of a lower log likelihood: \begin{align} L_{\text{simple}} = E_{x_0 \sim q(x_0), \epsilon \sim \mathcal{N}(0,\mathbf{I}),t \sim \mathcal{U}(\{1,\ldots,T\})}[\Vert \epsilon_\theta(\underbrace{\sqrt{\bar{\alpha}_t} x_{0} + \sqrt{1 - \bar{\alpha}_t} \epsilon}_{x_t}, t) - \epsilon\Vert^2] \end{align} where the predicted noise $\epsilon_\theta$ is related to the mean $\mu_\theta$ of the reverse process by the following one-to-one mapping: \begin{align} \mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}} \epsilon_\theta(x_t, t)\right). \end{align} $L_{\text{simple}}$ can be interpreted as the mean-squared error between the true noise added to the initial image and the noise predicted by the model given the noisy image and timestep as input.
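In code, one stochastic training step on $L_{\text{simple}}$ reduces to a few lines (a sketch continuing the previous snippet; \texttt{model} stands for a hypothetical noise-prediction network $\epsilon_\theta$):
\begin{verbatim}
import torch.nn.functional as F

def l_simple_step(model, x0):
    """One stochastic estimate of L_simple on a batch of clean images x0."""
    t = torch.randint(0, T, (x0.shape[0],))  # t ~ U({1,...,T}), 0-indexed here
    x_t, eps = q_sample(x0, t)               # noisy image and the true noise
    return F.mse_loss(model(x_t, t), eps)    # || eps_theta(x_t, t) - eps ||^2
\end{verbatim}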
Moreover, $L_{\text{simple}}$ resembles earlier denoising score matching \citep{score-matching}, with the score function satisfying $\nabla_{x_t} \log p(x_t) \propto -\epsilon_\theta(x_t, t)$. However, $L_{\text{simple}}$ does not depend on the variance $\Sigma_\theta(x_t, t)$, preventing the learning of its parameters. Thus, \cite{improved-DDPM} proposed a new hybrid weighted-sum objective: \begin{align} L_{\text{hybrid}} = L_{\text{simple}} + \lambda L_{\text{vlb}} \end{align} with $\lambda$ set to $0.001$. Stop-gradient is applied to the $\epsilon_\theta$ term inside $L_{\text{vlb}}$, implying that $\mu_\theta$ is guided only by $L_{\text{simple}}$ while $\Sigma_\theta(x_t, t)$ is learnt using $L_{\text{vlb}}$. \cite{improved-DDPM} showed that $L_{\text{hybrid}}$ achieves better log likelihoods than $L_{\text{simple}}$. Hence, we use $L_{\text{hybrid}}$ for all our experiments. \subsection{Conditioning and guidance} When we consider image generation, we often want to specify the content of the synthetic images. Therefore, generative models have been readily adapted to take extra information into account and gain control over the generation. While early models were only capable of producing samples of a specific class label, alignDRAW \citep{alignDRAW} demonstrated that it was possible to consider captions describing the contents of the images to produce scene compositions unseen in the dataset. However, the sample quality was low and most of the images were blurry. It is only when DALL-E \citep{dalle} came out that generative models became able to produce realistic outputs given an image caption. We now show how to take conditional information into account in the context of diffusion models. Diffusion models have two distinct ways to integrate conditional information, and we first need to explain the difference between the two. \textbf{Conditional generative models} try to learn the probability distribution $p(x|y)$, e.g., generating an image $x$ belonging to the class $y$ or matching a caption $y$. Applied to diffusion models, this simply consists of learning the conditional model $\epsilon_{\theta}(x_t, t|y)$ instead of the unconditional model $\epsilon_{\theta}(x_t, t)$. Thus, the model directly takes $y$ as input to condition the generation of $x$ during both training and sampling. How $y$ is concretely incorporated into the model is explained in \autoref{sec:diffusion_architecture}. On the other hand, \textbf{guidance methods} don't change the model structure, and are only used during sampling. Guidance slightly modifies the output of the model at each diffusion step to help the generative process go in the desired direction. This small update often takes the form of a gradient ascent step, and the desired direction can again be a condition $y$ such as a class label or a caption. \cite{ADM} showed that guidance could greatly improve sample quality. Finally, guidance methods and conditional generative models are complementary and can therefore be used jointly. Some guidance approaches, such as classifier-free guidance, even require a conditional model. We now review the different guidance methods used by diffusion models. \subsubsection{Classifier guidance} Introduced by \cite{ADM}, classifier guidance leverages an extra image classifier model $p_\phi(y|x_t, t)$, which already conveys some knowledge of the dataset, to help the generating process.
The gradient $\nabla_{x_t} \log p_\phi(y|x_t, t)$ is exploited to guide the sampling process in the direction of class label $y$: \begin{align*} \Tilde{\mu}_\theta(x_t, t|y) = \mu_{\theta}(x_t, t) + w \Sigma_\theta (x_t, t) \nabla_{x_t} \log p_\phi(y|x_t, t), \end{align*} with $w \geq 0$ being the classifier guidance scale. Note that $p_\phi(y|x_t, t)$ considers noisy images $x_t$, which therefore requires training a noise-aware classifier. This notion of classifier guidance can be extended to other sorts of models, not only classifiers. For instance, the authors of GLIDE \citep{glide} replace the image classifier by a CLIP model \citep{clip} to help generate images from text. CLIP provides a measure of similarity between an image and a caption (we explain in more detail how CLIP works in the next section, \autoref{sec:clip}), and taking the gradient with respect to the image makes it possible to guide the sampling process in the direction of the caption. However, the authors observe that classifier-free guidance performs better than CLIP guidance. \subsubsection{Classifier-free guidance} \label{sec:classifier_free} Depending on a separate model is inconvenient and complicates the training pipeline. Moreover, for classifier guidance, the number of classes is limited, preventing us from conditioning on complex information to generate more elaborate image compositions. Thus, \cite{classifierFree} proposed classifier-free guidance, which only relies on a single diffusion model. Classifier-free guidance considers a conditional diffusion model $\epsilon_{\theta}(x_t, t|y)$ that can be made unconditional by occasionally replacing the condition $y$ during training with an empty condition $\emptyset$, e.g., by setting the caption to an empty string. The model $\epsilon_{\theta}(x_t, t|\emptyset)$ can then be used to generate unconditional images. Classifier-free guidance therefore consists in updating the model output using a linear combination of $\epsilon_{\theta}(x_t, t|\emptyset)$ and $\epsilon_{\theta}(x_t, t|y)$ in the following way: \begin{align*} \Tilde{\epsilon}_\theta(x_t, t|y) = \epsilon_{\theta}(x_t, t|\emptyset) + s \cdot (\epsilon_{\theta}(x_t, t|y) - \epsilon_{\theta}(x_t, t|\emptyset)), \end{align*} with $s \geq 1$ being the guidance scale. This update can be understood as an attempt to move further in the direction of the conditional model while moving away from the unconditional model. It is performed at each diffusion step when sampling, and \cite{classifierFree} have shown that it improves sample quality. \subsubsection{Image guidance} \label{sec:image_guidance} We propose a new guidance method which has the potential to perform image inpainting and image editing. Image inpainting is a task which aims at reconstructing missing parts of an image, whereas image editing allows the modification of some elements of an image. We call this novel approach image guidance, and it is inspired by the other guidance methods. Instead of using a gradient or an implicit gradient, we directly consider an extra image $z$ which guides the sampling process. At each sampling step, we move in the direction of $z$ by updating $x_t$ in the following way: \begin{align*} \Tilde{x}_t = x_t + w \cdot d_t \cdot (z - x_t), \end{align*} where $w \geq 0$ is the image guidance scale and $d_t$ is a decay depending on the timestep. We use a linear decay, i.e., $d_t = t/T$, but other forms of decay could be considered.
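As a sketch, the update amounts to one line per sampling step (names follow the earlier snippets; the default scale matches the value we use in \autoref{sec:results_image_guidance}):
\begin{verbatim}
def image_guidance_step(x_t, z, t, w=0.005):
    """Pull the current latent x_t towards the base image z."""
    d_t = t / T                       # linear decay: strongest at the first
                                      # reverse steps, where t is close to T
    return x_t + w * d_t * (z - x_t)  # x~_t = x_t + w * d_t * (z - x_t)
\end{verbatim}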
The decay helps to make the base image $z$ contribute more to the output at the first steps of the reverse process. Like other guidance methods, this approach has the benefit of being applied only during sampling and not during training. It therefore contrasts with other methods, including GLIDE \citep{glide} and Palette \citep{palette}, which need to fine-tune the model and modify its architecture to perform image inpainting and editing. Image guidance can be combined with conditioning: by providing an image or a text embedding as the condition, we can fill the missing regions of an image or edit it in a desired way. We show in \autoref{sec:results_image_guidance} what image guidance is able to do. \subsubsection{Influence of the guidance scale and dynamic thresholding} \label{sec:dynamic thresholding} For each guidance method, we scale the level of guidance by a constant factor, i.e., the guidance scale. This parameter is very important, and its impact on the sampling process is different for each type of guidance method. For image guidance, the guidance scale is highly sensitive because it moves the image in the same direction at each step. Its impact is then straightforward: a larger image guidance scale implies sampling an image closer to the image used for guidance. The right combination of the image guidance scale and its decay is hence crucial to obtain the desired samples. For classifier guidance, and especially when the diffusion model is conditioned on a class label, the guidance scale represents a trade-off between diversity and sample fidelity, as exhibited by \cite{ADM}: higher diversity and lower sample fidelity when the scale is small, and vice versa when the scale is large. When we condition on captions, large guidance scales also lead to more accurate text-image alignments. However, this further causes a train-test mismatch which engenders over-saturated and unnatural images, as exposed by the authors of Imagen \citep{imagen}. When the guidance scale is large, the values of the latent $x_t$ can exceed the bounds of the training data, i.e., the range $[-1,1]$, at any timestep $t$ during sampling. To mitigate this issue, \cite{DDPM} introduced \textit{static thresholding}, which simply consists of clipping the pixel values of every latent variable $x_t$ to $[-1,1]$. Nevertheless, \cite{imagen} have shown that the effect of static thresholding is moderate and that the generated images therefore still suffer from saturation. For that reason, they propose \textit{dynamic thresholding}, a method which actively pushes pixel values that are close to saturation, i.e., in the vicinity of $-1$ or $1$, towards lower absolute values. At each sampling step $t$, they compute the prediction of $x_0$ as \begin{align*} \hat{x}_0^t = \sqrt{\frac{1}{\bar{\alpha}_t}} x_t - \sqrt{\frac{1}{\bar{\alpha}_t} - 1} \cdot \epsilon_\theta(x_t, t). \end{align*} They then consider the $99.5$th percentile absolute pixel value in $\hat{x}_0^t$ and call it $s$. Next, if $s > 1$, they clip $\hat{x}_0^t$ to the interval $[-s,s]$ and then divide by $s$. This procedure allows them to increase the guidance scale to obtain better text-image alignments, while keeping good image quality. Besides, we empirically noticed that dynamic thresholding is effective at preventing image guidance from saturating the image.
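For reference, dynamic thresholding as described above fits in a few lines (a sketch; \texttt{torch.quantile} computes the per-image percentile, and recent PyTorch versions accept tensor bounds in \texttt{clamp}):
\begin{verbatim}
import torch

def dynamic_threshold(x0_hat, percentile=0.995):
    """Rescale the predicted x_0 so extreme pixels fall back into [-1, 1]."""
    s = torch.quantile(x0_hat.abs().flatten(1), percentile, dim=1)
    s = s.clamp(min=1.0).view(-1, 1, 1, 1)  # only act when s > 1
    return x0_hat.clamp(-s, s) / s          # clip to [-s, s], then divide by s
\end{verbatim}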
\section{CLIP embeddings} \label{sec:clip} Finally, we end this chapter by reviewing CLIP \citep{clip}, an efficient method to learn image representations using natural language supervision. Recent works have shown that scaling up the size of the dataset with data scraped from the internet could lead to significant model improvements. In particular, GPT-like \citep{GPT3} and BERT-like \citep{Roberta} models have demonstrated that a large amount of text coupled with an efficient self-supervised learning approach was required for natural language understanding. In contrast, computer vision has long been based on pure supervision using so-called ``gold labels'', such as distinct class labels. These annotations are often crowd-sourced and are therefore difficult to obtain in sufficient amounts. Based on these observations, CLIP leverages the large quantity of natural texts accompanying images on the internet to scale up the size of its dataset. The latter consists of 400 million image-caption pairs, where the caption is supposed to encapsulate the semantic content of the image. CLIP then uses two models (text and image encoders) to produce a text embedding and an image embedding respectively. The architecture of the text encoder is a Transformer model \citep{transformer}, whereas that of the image encoder is a Vision Transformer \citep{vit}. The two encoders are jointly trained in a contrastive way, by maximizing the cosine similarity between the two embeddings of each caption-image pair while minimizing it for non-associated captions and images (see \autoref{fig:clip}). For this reason, the acronym CLIP stands for \textit{Contrastive Language-Image Pre-training}. This whole procedure hence implies that the image and text embeddings belong to the same multimodal latent space and that similar captions and images should be close in the cosine-similarity sense. \begin{wrapfigure}{l}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{assets/clip.pdf} \caption{CLIP training. It aims to maximize the dot product between embeddings obtained from similar captions and images. Figure borrowed from \cite{clip}.} \label{fig:clip} \end{wrapfigure} Moreover, CLIP embeddings have some useful properties. \cite{clip} exhibited their zero-shot ability to perform well on out-of-distribution samples, being robust to natural distribution shifts. Indeed, benefiting from pre-training on a massive amount of varied samples, CLIP embeddings along with a simple linear classifier obtain better performance on several distribution-shift datasets, including ImageNetV2 \citep{imagenetV2} and ImageNet Adversarial \citep{imagenet_adversarial}, than the ResNet101 model \citep{resnet}, without seeing any sample of ImageNet \citep{imagenet}. Besides, the embeddings can easily be transferred to downstream tasks. \cite{CLIPanalysis} were able to match the state of the art on many vision and language tasks such as Visual Question Answering (VQA) using the CLIP embeddings. These properties enable us to leverage the CLIP encoders on any image or text dataset to obtain linked visual and textual representations. We show how we use these CLIP image and text embeddings to help generate images in the next chapter. \chapter{Method} \label{sec:method} \section{Overview} We describe in this section the method we use to generate images from text. It is predominantly inspired by DALL-E 2 \citep{dalle2}, but with slight modifications.
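Since every step of the pipeline manipulates CLIP embeddings, we first sketch how they are obtained with the publicly released encoders (a minimal example using the openai/CLIP package linked later in this chapter; the file name and captions are placeholders):
\begin{verbatim}
import clip
import torch
from PIL import Image

model, preprocess = clip.load("ViT-B/32")  # pre-trained CLIP encoders
image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
text = clip.tokenize(["a dog on the beach", "a city at night"])

with torch.no_grad():
    y_i = model.encode_image(image)  # image embedding of length 512
    y_t = model.encode_text(text)    # text embeddings of length 512

y_i = y_i / y_i.norm(dim=-1, keepdim=True)  # L2-normalize for cosine
y_t = y_t / y_t.norm(dim=-1, keepdim=True)
similarity = y_i @ y_t.T  # higher for the caption matching the image
\end{verbatim}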
In particular, our method uses CLIP embeddings to represent texts and images, with the aim of transferring from one modality to another. It starts by considering as input a text caption $c$ describing the content of the desired image. This caption is then encoded by the CLIP text encoder into a text embedding $y_t$ of length $512$. Next, we use a model called the \textit{CLIP translator} to translate the text embedding $y_t$ into an image embedding $y_i$, also of length $512$. We finally employ a diffusion model named the \textit{image decoder} to obtain an image $x$ of resolution $64 \times 64$ from the image embedding. Optionally, we can upsample our image to resolution $256 \times 256$ by exploiting a super-resolution model. \autoref{fig:pipeline} illustrates the full pipeline, showing how the different models fit together. Note that it only depicts how we sample images; the training of this pipeline is done differently and independently for each model, as we will see in the next sections. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/pipeline.pdf} \caption{Our approach to synthesize images from text. Starting from an image description, we sequentially apply different models to obtain an image of resolution $256 \times 256$. It is similar to DALL-E 2, but the models considered are different.} \label{fig:pipeline} \end{figure} Our method thus involves several models with different purposes: \begin{itemize} \item \textbf{Image decoder:} This is our main and most important model. The image decoder is a diffusion model which generates the image $x$, conditioned on a CLIP image embedding $y_i$, i.e., it models the probability $p_\theta(x| y_i)$. We describe the model in detail, as well as its architecture and training, in \autoref{sec:image_decoder}. \item \textbf{CLIP translator:} This model translates CLIP text embeddings $y_t$ into CLIP image embeddings $y_i$. Although text and image embeddings are already supposed to be close for similar contents, as we have seen in \autoref{sec:clip}, the CLIP translator learns to reduce the differences between the two. The model can then be written in probabilistic terms as $p_\phi(y_i|y_t)$. As for the image decoder, we dedicate a section to this model (\autoref{sec:clip_translator}). \item \textbf{CLIP encoders:} We use trained CLIP encoders\footnote{available at \url{https://github.com/openai/CLIP}} to embed the caption $c$ and the image $x$. The CLIP text encoder $p_{\text{CLIP}}(y_t|c)$ is a Transformer \citep{transformer} with 63M parameters, and the CLIP image encoder $p_{\text{CLIP}}(y_i|x)$ is a ViT-B/32 \citep{vit} with 86M parameters. The image encoder is only used during the training of the image decoder and the CLIP translator, to obtain the embeddings of the images. \item \textbf{Super-resolution model:} The image decoder only creates images of resolution $64 \times 64$, which is low. Thus, we increase the resolution of our generated images to $256 \times 256$ by upsampling with a super-resolution model \citep{sr3}. This upsampler model\footnote{available at \url{https://github.com/openai/guided-diffusion}} consists of a diffusion model trained on ImageNet \citep{imagenet}, and is therefore restricted to ImageNet-like pictures; e.g., the upsampler does not recognize or reproduce natural text or numbers on images well. Super-resolution models play a crucial role in DALL-E 2 and Imagen, allowing them to obtain detailed $1024 \times 1024$ images using cascaded diffusion models \citep{cascade}.
However, training one or two super-resolution models from scratch is complex and requires images of larger resolution, which considerably increases the size of the dataset and involves heavier computations. Being limited in the amount of available computational resources, we therefore opt for an already-trained upsampler model, even if it is limited. \end{itemize} Generating an image $x$ from a caption $c$ can then be described probabilistically by the following equation: \begin{align*} p_{\theta,\phi,\text{CLIP}}(x|c) = p_{\theta}(x|y_i) p_{\phi}(y_i|y_t) p_{\text{CLIP}}(y_t|c). \end{align*} We keep the upsampler out of this equation since it is optional. Note that since we reuse the CLIP encoders and the upsampler model, we only need to train the image decoder and the CLIP translator, i.e., to find the parameters $\theta$ and $\phi$. Moreover, they can be trained independently and with different datasets, as they consider different types of data. We hence dedicate the next two sections to them. \section{Image Decoder} \label{sec:image_decoder} \begin{algorithm} \caption{Image decoder (diffusion model) sampling} \label{alg:sampling} \begin{algorithmic}[1] \Require diffusion model $(\epsilon_\theta, \Sigma_{\theta})$, image embedding $y_i$, guidance scale $s$ \State $x_T \sim \mathcal{N}(0, \mathbf{I})$ \For{$t = T,\ldots,1$} \State $\epsilon = \epsilon_{\theta}(x_t, t|\emptyset) + s \cdot \left(\epsilon_{\theta}(x_t, t|y_i) - \epsilon_{\theta}(x_t, t|\emptyset)\right)$ \Comment{Apply classifier-free guidance} \State $\Tilde{\epsilon} = \text{dynamic\_thresholding}(\epsilon)$ \Comment{Apply dynamic thresholding} \State $\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}} \Tilde{\epsilon} \right)$ \State $z \sim \mathcal{N}(0, \mathbf{I}) \text{ if } t > 1 \text{, else } z = 0 $ \State $x_{t-1} = \mu_\theta(x_t, t) + \Sigma_{\theta}(x_t,t) \odot z$ \EndFor \\ \Return $x_0$ \end{algorithmic} \end{algorithm} We focus in this section on the image decoder, which generates images conditioned on a CLIP image embedding. The image decoder is a diffusion model, in particular a DDPM \citep{DDPM}. This allows us to use classifier-free guidance (described in \autoref{sec:classifier_free}) along with dynamic thresholding to sample from it. Algorithm \ref{alg:sampling} describes the sampling process used to generate images. How dynamic thresholding is applied is also explained in detail in \autoref{sec:dynamic thresholding}. We set the guidance scale $s$ to $6$ and use the $99.5$th percentile for dynamic thresholding. Note that since the generating process is non-deterministic, the same image embedding can engender different variations of the same image content. We first introduce in the next subsection the model architecture of the diffusion model. We then describe the dataset used to train the diffusion model as well as the training procedure. \subsection{Architecture} \label{sec:diffusion_architecture} \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/diffusion_model.pdf} \caption{The U-Net architecture of the diffusion model (image decoder). It processes an RGB $64 \times 64$ image $x_t$ along with a timestep $t$ and outputs the noise $\epsilon_\theta$ and the variance $\Sigma_\theta$.} \label{fig:diffusion_model} \end{figure} We opt for a U-Net architecture \citep{unet}, as proposed by \cite{DDPM}. A U-Net aims to output a sample of the same shape as the input and was originally used for biomedical image segmentation.
The architecture is composed of two distinct parts: an \textit{encoder} decreasing the resolution and increasing the number of channels, and a \textit{decoder} doing the opposite to retrieve the original shape. The encoder and decoder each contain a few resolution layers, corresponding to different spatial sizes, i.e., resolutions. The layers of the encoder and decoder with the same spatial size are linked using skip connections, where each block output in the encoder is concatenated to the corresponding block input in the decoder. The different types of blocks in the U-Net are the following: \begin{itemize} \item \textbf{ResBlock:} This is a standard ResNet block \citep{resnet}, containing two convolutional layers with group normalization \citep{gn} and SiLU \citep{silu} applied before each of them, as well as a Dropout module \citep{dropout} and a skip connection. The convolutional layers use kernels of size $3 \times 3$, a stride of $1$, and a padding of $1$. The ResBlock is also used to increase or decrease the number of channels. All the operations performed in a ResBlock are illustrated in \autoref{fig:resblock}. \item \textbf{Self-attention block:} Self-attention \citep{transformer} is a powerful mechanism to draw global dependencies between different parts of the input. The authors of Palette \citep{palette} exhibited the importance of self-attention blocks to improve sample quality, whereas \cite{cordonnier} demonstrated their expressiveness, similar to and often even greater than that of convolutional layers. Thus, we add self-attention blocks after ResBlocks, but only at low resolution levels, since the computational complexity grows quadratically with the number of spatial positions. The spatial shape of the input is first flattened before applying self-attention, and residual connections rescaled by $\frac{1}{\sqrt{2}}$ are used to connect the input to the output. The activations are normalized using group normalization. \item \textbf{BigGAN residual block:} Introduced by \cite{BigGAN}, BigGANs use residual blocks for upsampling and downsampling the activations. \cite{ADM} found them to be beneficial to improve performance, and we therefore use them to increase or decrease the resolution. They are very similar to ResBlocks, but with an upsample or downsample operation interleaved between the first group normalization and the first convolutional layer. \item \textbf{Timestep embedding:} The timestep $t$ is encoded as a sinusoidal timestep embedding \citep{transformer}, as sketched after this list. This embedding is then linearly projected and integrated into each block of the U-Net, i.e., the ResBlocks, the self-attention blocks, and the BigGAN residual blocks. \item \textbf{Conditional image embedding:} We condition the diffusion model on an image embedding. When we want to use the model unconditionally, we just set the embedding to the null vector. The image embedding is first linearly projected before being added to the timestep embedding. \end{itemize} \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{assets/ResBlock.pdf} \caption{Structure of the ResBlock. The BigGAN residual blocks are very similar, but with an extra upsample/downsample operation to modify the spatial dimension size.} \label{fig:resblock} \end{wrapfigure} Our U-Net takes as input a noisy image $x_t$ at $64 \times 64$ resolution, as well as the timestep $t$ indicating the noise level of the image.
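The sinusoidal embedding that injects this timestep can be sketched as follows (a standard Transformer-style formulation; the embedding dimension is a free choice, and the helper name is ours):
\begin{verbatim}
import math
import torch

def timestep_embedding(t, dim=256):
    """Map integer timesteps t of shape (B,) to embeddings (B, dim)."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = t[:, None].float() * freqs[None, :]  # (B, half)
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
\end{verbatim}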
The U-Net outputs the mean $\mu_\theta$ and the variance $\Sigma_\theta$, which are used to generate $x_{t-1} \sim \mathcal{N}(\mu_\theta, \Sigma_\theta)$. We draw the noise $x_T \sim \mathcal{N}(0,\mathbf{I})$ and apply the U-Net recursively to obtain a sample $x_0$. Sampling a single image thus requires $T$ forward passes of the U-Net. The resolution of the generated images is also $64 \times 64$. The values of the hyperparameters are based on the work of \cite{ADM}, with some adjustments recommended by \cite{dalle2}. The diffusion model uses $T=1000$ diffusion steps, with noise being applied following a cosine schedule. The encoder and decoder of the U-Net are both composed of $4$ resolution layers, with resolutions of $64, 32, 16,$ and $8$, and with $256, 512, 768,$ and $1024$ channels respectively. Each of these resolution layers is composed of $3$ ResBlocks, with two extra blocks at the lowest resolution level to connect the encoder and decoder parts. Self-attention is only applied at resolution layers $32, 16,$ and $8$, with $64$ channels per head. Finally, we reduce the interdependence among neurons by applying Dropout \citep{dropout} after ResBlocks with probability $0.1$. \subsection{Data} \label{sec:diffusion_dataset} We train the diffusion model by jointly considering three different datasets. First, we use the \textbf{ImageNet} Large Scale Visual Recognition Challenge (ILSVRC) dataset, a subset of 1,281,167 samples of ImageNet \citep{imagenet}. It contains square images of various types of objects, animals, vehicles, people, or food, which are divided into 1000 classes. The annotations were obtained via crowdsourcing on the web. Our approach, however, is self-supervised and therefore doesn't need the class labels. This dataset helps our diffusion model to generate images focused on a single item or entity. On the other hand, the image diversity of ImageNet is limited, with objects often out of their natural context. Besides, we would like to be able to generate diverse scenes, with different concept compositions. We thus use \textbf{CC3M} \citep{cc3m} and \textbf{CC12M} \citep{cc12m} to add more complex situations to our image dataset. These two datasets contain 2,064,294 and 8,856,123 caption-image pairs respectively, harvested from the internet. In particular, the datasets' authors implemented an automatic pipeline which extracts, filters, and modifies the raw descriptions obtained from the Alt-text HTML attribute of images on the web. They then obtain what they call conceptual captions, whose specificities such as proper nouns, numbers, and dates have been removed or substituted by hypernyms. CC12M is simply a higher-recall version of CC3M, obtained by using less restrictive filters. Nevertheless, the captions are not necessary to train the diffusion model, since we use the CLIP image encoder to obtain the image embeddings directly. But they are needed for training the CLIP translator, as we will see in \autoref{sec:mlp_dataset}. The diffusion model is then trained using a total of 12,201,584 images, which are all resized to a $64 \times 64$ resolution. For rectangular images from CC3M and CC12M, white borders are added. The CLIP image embeddings are nevertheless obtained at the original image resolution, to encapsulate more fine details. They are also standardized before being fed into the model.
Finally, we use data augmentation by randomly horizontally flipping half of the images, as suggested by \cite{dataaugmentation}, and we scale the pixel values to the range $[-1, 1]$. \subsection{Training} \begin{algorithm}[h] \caption{Image decoder (diffusion model) training}\label{alg:training} \begin{algorithmic}[1] \Require image dataset $q(x_0)$, embedding drop probability $p$ \State Initialize $\epsilon_\theta, \Sigma_\theta$ \While{ $\epsilon_\theta, \Sigma_\theta$ have not converged} \State $x_0 \sim q(x_0)$ \State $t \sim \mathcal{U}(\{1,\ldots,T\})$ \State $\epsilon \sim \mathcal{N}(0, \mathbf{I})$ \State $u \sim \mathcal{U}(0,1)$ \State $y_i = \text{CLIP}(x_0) \text{ if } u \geq p \text{, else } y_i = \vec{0} $ \Comment{Randomly drop the CLIP image embedding} \State $x_t = \sqrt{\bar{\alpha}_t} x_{0} + \sqrt{1 - \bar{\alpha}_t} \epsilon$ \State Take gradient descent step on $\nabla_\theta \left(\Vert \epsilon_\theta(x_t, t | y_i) - \epsilon\Vert^2 + \lambda \cdot \text{sg$_{\epsilon_\theta}$}[L_t]\right)$ \Comment{Optimize $L_{\text{hybrid}}$} \EndWhile \\ \Return $\epsilon_\theta, \Sigma_\theta$ \end{algorithmic} \end{algorithm} The diffusion model is then trained by minimizing the hybrid loss $L_{\text{hybrid}}$ (see \autoref{sec:introduction_diffusion_models}) using gradient descent and the backpropagation algorithm \citep{backpropagation}. The training procedure is described by Algorithm \ref{alg:training}. The abbreviation sg$_{\epsilon_\theta}$ stands for stop-gradient, preventing the gradients of $L_t$ from updating the parameters through $\epsilon_\theta$. Moreover, we use the Adam optimization algorithm \citep{adam} with default parameter values, i.e., $\beta_1=0.9$, $\beta_2=0.999$, and $\epsilon=1$e-$8$, but without weight decay. The learning rate is initially set to $3$e-$4$ and is then annealed at each iteration with a linear decay. We noticed some exploding gradients during training; we hence apply gradient clipping to keep the norm of the gradients reasonable. In order to make the model fit into memory, we fix the batch size to $16$. We also maintain an exponential moving average (EMA) of the weights, and sample with the EMA model. The EMA rate is $0.9999$. Besides, we exploit the CLIP image encoder to obtain the image embedding $y_i$, but we set it to the null vector with probability $p=0.2$, so that the model can also generate images unconditionally and classifier-free guidance can be applied. Finally, we perform 500,000 iterations (weight updates), corresponding to a bit less than 1 epoch. More iterations should still improve our model, but our computational resources are limited, and the 500,000 iterations already take $8$ days on an NVIDIA Tesla V100 SXM2 32GB. \section{CLIP Translator} \label{sec:clip_translator} In this section, we delve into the implementation of the CLIP translator, which, as its name suggests, translates embeddings from one modality to another, in this case from text to image. As for the image decoder, we introduce in the next subsections the model architecture, the datasets used, and the training process. \subsection{Architecture} The authors of DALL-E 2 propose two different architectures for their CLIP translator (which they call the prior model in their research paper). They consider either a Transformer \citep{transformer} or a diffusion model, and they demonstrate that the diffusion model generates higher-quality samples than the Transformer.
We observe, however, that the CLIP translator simply needs to perform a vector-to-vector transformation, where the two vectors (embeddings) are already very similar because of how CLIP embeddings are produced (see \autoref{sec:clip}). Moreover, the elements of the vectors can be permuted without losing any information, since they are outputs of Transformer models. This implies that we cannot take advantage of an inductive bias, such as the spatial one exploited by convolutional layers or the sequential one exploited by autoregressive models. We therefore simply use a multilayer perceptron (MLP) as the architecture of the CLIP translator. \cite{MLP-Mixer} have shown that slight changes to the original MLP architecture could make this model able to compete with recent Transformer and CNN models. Besides, MLP models are incredibly easy to implement, often relying only on a few lines of code in popular deep learning frameworks. To benefit from the extensive advancements of the past decade in deep learning, we hence integrate many architectural components of the state-of-the-art MLP-Mixer \citep{MLP-Mixer}, including layer normalization \citep{layernormalization}, Dropout \citep{dropout}, skip-connections \citep{resnet}, and GELU \citep{gelu}. The architecture is depicted in \autoref{fig:MLP}, and a code sketch of one of its layers is given below. It is mainly composed of a stack of $N$ identical layers, each performing the operations listed above, as well as two linear projections. Dropout is used with probability $0.1$. \begin{figure} \centering \includegraphics[width=1\linewidth]{assets/MLP.pdf} \caption{The multilayer perceptron architecture of the CLIP translator.} \label{fig:MLP} \end{figure} \subsection{Data} \label{sec:mlp_dataset} We use the images and captions of CC3M and CC12M (both described in \autoref{sec:diffusion_dataset}) along with the CLIP encoders to obtain pairs of image and text embeddings. It becomes a simple supervised learning task where we try to model the transformation from a caption embedding to its associated image embedding. Note that we don't use ImageNet to train the CLIP translator since it doesn't contain captions. We thought about adding the 82,783 captioned images of the training set of the MS-COCO dataset \citep{coco} to our dataset collection, but we finally decided to use MS-COCO only for testing the whole pipeline, with the aim of learning this dataset in a zero-shot fashion (see \autoref{sec:testing}). As for the image decoder, the image embeddings are computed at the original image resolution. Both image and text embeddings are standardized, i.e., the mean is subtracted and the result is divided by the standard deviation. \subsection{Training} To train the CLIP translator, we simply consider the Mean Squared Error (MSE) between the ground-truth image embeddings $y_i$ and the image embeddings $\hat{y}_i$ predicted by the MLP, i.e., $L_\text{MSE} = \frac{1}{n}\sum^n_{k=1}((y_i)_k - (\hat{y}_i)_k)^2$. We then seek to minimize this loss function using gradient descent with AdamW \citep{adamw}, applying weight decay with rate $0.0001$. This is employed jointly with Dropout \citep{dropout} to avoid overfitting. The probability of Dropout is $0.1$, and the parameters of AdamW are again the default ones: $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon=1$e-$8$, and a learning rate equal to $1$e-$3$. Finally, we use a batch size of $256$. The hyperparameter values above are common choices for training machine learning models.
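To make the translator architecture concrete, one of the $N$ identical layers can be sketched as follows (our reading of the components listed in the previous subsection; class and argument names are hypothetical):
\begin{verbatim}
import torch.nn as nn

class TranslatorBlock(nn.Module):
    """One of the N identical residual layers of the CLIP translator."""
    def __init__(self, dim=512, p_drop=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, dim)
        self.act = nn.GELU()
        self.drop = nn.Dropout(p_drop)

    def forward(self, x):
        # LayerNorm -> Linear -> GELU -> Dropout, with a skip connection
        return x + self.drop(self.act(self.fc(self.norm(x))))
\end{verbatim}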
We decide not to spend too much time tuning these hyperparameters, because doing so only marginally improves the performance of the CLIP translator. Instead, we focus on the number of layers $N$ of the MLP, which empirically seems to be the largest source of variation of the loss function. Specifically, we perform a grid search on its value using the validation set, and we find that $N=30$ is the most effective. Finally, we train the CLIP translator with $30$ layers for $6$ epochs and apply early stopping. The model with the lowest validation loss is obtained at epoch $5$ and is kept for our pipeline. We can compare our model to an \textit{identity model} which simply outputs its input $y_t$. The identity model gets a loss of $1.5$, while our CLIP translator achieves a loss of $0.66$, which is more than twice as small. \chapter{Experiments} \label{sec:experiments} Now that we have a fully operational method to generate images from text, we can evaluate its effectiveness and how well each part of the pipeline is performing. We start by evaluating the method's capacity to generate images of good quality: first by considering the whole system, and second by performing ablation studies where some components are removed to investigate their individual contributions. Next, as we extensively use the CLIP embeddings, we would like to understand to what degree they can capture semantic regularities in texts and in images. Finally, we analyze our novel image guidance method to determine what it is capable of doing. \section{Testing our method} \label{sec:testing} Before assessing the performance of our method and its components, we introduce the main metric that we will use to do so: the \textit{Fréchet Inception Distance} (FID), a common criterion to evaluate the performance of image generative models. Introduced by \cite{FID}, it aims to overcome the shortcomings of the \textit{Inception Score} (IS) of \cite{inception_score}. Even if both metrics utilize an extra image classifier, an Inception-v3 trained on ImageNet \citep{InceptionV3}, the IS only considers the distribution of the generated images to determine their quality. On the other hand, the FID exploits and compares statistics of both the generated and the real-world images, the latter belonging to the same distribution as the model training set. This makes the FID correlate better with human judgement of sample quality than the IS. In particular, the FID uses the activations produced by the last pooling layer of the Inception-v3 classifier, yielding a 2,048-dimensional feature vector for each image (real and synthetic). Next, it computes the first two moments of the activations of the real and the generated images separately, i.e., the two means $\mu_r,\mu_g$ and the two covariance matrices $\Sigma_r,\Sigma_g$ respectively. The FID is then given by \begin{align*} \text{FID} = \Vert\mu_r -\mu_g \Vert^2 + \text{tr}(\Sigma_r + \Sigma_g -2(\Sigma_r \Sigma_g)^{1/2}). \end{align*} Lower FIDs generally indicate higher image quality. We hence compute and analyze this metric for different experiments. For completeness, we also include the IS, as well as the improved precision and recall introduced by \cite{precision_recall}. \subsection{Full pipeline} \label{sec:full_pipeline} We test in this subsection the capacity of our system to generate quality images. We first showcase some of the best $256 \times 256$ images we obtain from engineered prompts in \autoref{fig:best_images}. Our method is able to generate various scenes with accurate text-image alignments.
The sky and the ground textures are remarkably well depicted, and the model approximately identifies when and where a natural shadow is necessary. Nevertheless, we observe that our method seems to struggle to generate high-level features such as paws, legs, and faces of animals. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/best_images.pdf} \caption{Sample of the $256 \times 256$ images generated by our method.} \label{fig:best_images} \end{figure} The captions and images of \autoref{fig:best_images} were carefully selected, so we now demonstrate the capacity to directly generate a single good image for any sort of caption. To do this, we consider $12$ captions randomly picked from the MS-COCO validation set \citep{coco}. Recall that all the samples of the MS-COCO dataset are held out during training, which forces our method to learn this dataset in a zero-shot fashion. The captions and the synthetic images obtained from them are displayed in \autoref{fig:random_pipeline_images}. \begin{figure}[h] \centering \begin{minipage}[t]{.49\textwidth} \centering \includegraphics[width=1\linewidth]{assets/captions.pdf} \end{minipage} \hspace{0.2em} \begin{minipage}[t]{.49\textwidth} \centering \includegraphics[width=1\linewidth]{assets/pipeline.jpg} \end{minipage} \caption{Captions randomly drawn from the MS-COCO validation set, accompanied by the $256 \times 256$ images generated from these captions by our method. No cherry-picking at all.} \label{fig:random_pipeline_images} \end{figure} We observe that the content of the captions is always present in the images, but with varying degrees of accuracy. Our system seems able to handle and represent small numbers, since it managed to generate the right number of horses and lambs in the top images. The colors are also respected for the different animals (lamb, cow, dog, and cat) and for the pink objects, but not for the gold bus. It appears that our model has difficulty handling complex and unusual prompts, e.g., "A white dog lays under a multitude of hats." and "The black and white cat is wearing a pink hat". Besides, as for other generative models, we notice that generating human faces seems to be difficult for our method. Nevertheless, the generated textures such as grass or fur are often of high quality. This could be explained by the fact that our super-resolution model has been trained on ImageNet, which contains many iconic images of animals in simple landscapes. However, when the upsampler has to deal with overlapping objects, the outcome can be a little blurry. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/dog.jpg} \caption{Random samples from our method for caption “A dog sitting on top of a grass covered field.”} \label{fig:dog} \end{figure} In addition, we demonstrate in \autoref{fig:dog} the diversity of the generated images, displaying $12$ random samples from the caption "A dog sitting on top of a grass covered field." Even for a simple caption like this one, the model produces a variety of images by trying to synthesize the dog in distinct postures with different backgrounds. We see the limits of this diversity though, since the dog is often not well depicted. More examples can be found in \autoref{sec:extra_samples}. Finally, we compute the FID, IS, precision, and recall on 1,000 image-caption pairs randomly drawn from the MS-COCO validation set.
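Given precomputed Inception-v3 features, the FID formula above reduces to a few lines (a sketch assuming feature matrices of shape $(n, 2048)$; \texttt{scipy.linalg.sqrtm} computes the matrix square root):
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """FID between two sets of Inception-v3 features of shape (n, 2048)."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g).real  # drop tiny imaginary parts
    return ((mu_r - mu_g) ** 2).sum() + np.trace(
        sigma_r + sigma_g - 2.0 * covmean)
\end{verbatim}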
Usually, these metrics are computed on 30,000 images, but generating images with diffusion models is long and costly, so we consider fewer images to save computational resources. We empirically observed that the FID can be drastically reduced by using more images, making this metric unsuitable for comparison with other models. However, we compute it in the same way for several ablated variations of our method, enabling us to study the contributions of the different components. \autoref{tab:metrics} compiles all the metrics obtained for each method variant. We provide some analyses of this table in \autoref{sec:test_clip_translator}. \subsection{Image decoder} \label{sec:test_image_decoder} \begin{wrapfigure}{r}{0.45\textwidth} \centering \includegraphics[width=1\linewidth]{assets/image_reconstruction.pdf} \caption{Image reconstruction with the image decoder.} \label{fig:image_reconstruction} \end{wrapfigure} We now focus on the image decoder, to see how well it can generate an image given a CLIP image embedding. To do this, we consider the same set of $12$ images used in \autoref{sec:full_pipeline} and compute their corresponding image embeddings with the CLIP image encoder. We can then reconstruct all the images using the image decoder conditioned on the embeddings. This process is illustrated by \autoref{fig:image_reconstruction}. We show the $12$ reconstructed images of this set (not cherry-picked) in \autoref{fig:reconstruction}, as well as the original images. Since we are using the same set of images, we can compare the images generated by the image decoder alone in \autoref{fig:reconstruction} to the ones obtained by the full pipeline in \autoref{fig:random_pipeline_images}. We observe that the image compositions are more similar to the original images when we directly use the CLIP image embeddings, with objects often in the same position as in the original images. This higher fidelity could demonstrate that CLIP image embeddings generally capture more details than captions, such as the location and posture of the elements of the image. This hypothesis is also put forward by \cite{clip_see}. Nonetheless, it also frequently results in less coherent images, where it seems that the image decoder is trying to incorporate too many little details in the image. Finally, and in the same way as for the full pipeline, we compute our different metrics on 1,000 generated images conditioned on the CLIP image embeddings corresponding to the same image-caption pairs used in \autoref{sec:full_pipeline}. \begin{figure}[h] \centering \begin{minipage}[t]{.475\textwidth} \centering \includegraphics[width=1\linewidth]{assets/original.jpg} \end{minipage} \hspace{0.5em} \begin{minipage}[t]{.475\textwidth} \centering \includegraphics[width=1\linewidth]{assets/decoder.jpg} \end{minipage} \caption{Reconstructed images (\textbf{right}) by the image decoder conditioned on the CLIP embeddings of the original images (\textbf{left}).} \label{fig:reconstruction} \end{figure} \subsection{CLIP translator} \label{sec:test_clip_translator} In the two previous subsections, we analyzed the differences between the outputs of the full pipeline and those of the image decoder. The CLIP translator plays an important role since it is supposed to reduce these differences. Obviously, a caption and, consequently, its related text embedding will always be less rich than an image embedding computed directly on the image.
Still, the CLIP translator has to ensure that all the details provided by the caption are well translated into visual features in the images. Therefore, a modest way to understand its impact is simply to remove it from the pipeline. This means feeding the CLIP text embeddings directly to the image decoder, instead of first translating them into image embeddings with the CLIP translator. We then follow the same methodology as before to compute the different metrics. \begin{table} \centering \begin{tabular}{lllll} \hline \textbf{Method variant} & \textbf{FID} $\downarrow$ & \textbf{IS} $\uparrow$ & \textbf{Precision} $\uparrow$ & \textbf{Recall} $\uparrow$ \\ \hline \hline Full pipeline & 65.9 & 11.6 & 0.804 & 0.307 \\ Image decoder only & 62.8 & 11.8 & 0.696 & 0.676 \\ No CLIP translator & 124 & 7.75 & 0.315 & 0.340 \\ \hline \end{tabular} \caption{Summary of the different metrics. They are computed on 1,000 synthetic images only, making them difficult to compare with other works. The different variations of our method are described in \autoref{sec:testing}.} \label{tab:metrics} \end{table} Now that we have estimated the values of the different metrics for each method variant (see \autoref{tab:metrics}), we can try to interpret them. The lowest FID is obtained when only the image decoder is used, which is consistent with the fact that a caption captures less information about the image than the image embedding computed by the CLIP image encoder. Nevertheless, this value is very close to the one obtained with the full pipeline, which indicates that the CLIP text encoder and the CLIP translator are doing a good job of transferring the textual content of the caption to the image decoder. This is confirmed by the FID obtained when the CLIP translator is removed from the pipeline, which is almost twice as large as the others. Therefore, even if the CLIP text and image embeddings are already close for similar contents, a CLIP translator is necessary to switch from one to the other. As explained above, the FID is the most common metric to measure the performance of image generative models, so we are not going to do much analysis on the other metrics. We note that the IS behaves similarly to the FID here. However, the full pipeline variant obtained the highest precision but the lowest recall. One interpretation, linked to what we noticed in \autoref{sec:test_image_decoder}, is that using a caption to describe an image simplifies its content, implying that the generated images are more likely to fall within the support of the real image distribution. It hence induces a higher precision. On the other hand, the generated images will only cover a small portion of the support of the real image distribution, this time causing a lower recall. More details about precision and recall can be found in \cite{precision_recall}. \subsection{Upsampler} Finally, we devote this subsection to the assessment of the upsampler, which increases the image resolution from $64 \times 64$ to $256 \times 256$. As mentioned earlier, we use a super-resolution diffusion model already trained on ImageNet. In contrast, the state-of-the-art DALL-E 2 and Imagen train their own upsamplers, which are additionally conditioned on the embedding. We have, however, no way to compare our $64 \times 64$ generated images to the ones produced by DALL-E 2 and Imagen, since they provide only the final $1024 \times 1024$ images.
We can still assume that it is tricky to model complex scenes and objects in the low-resolution regime, making powerful upsamplers that handle high-level features necessary. We show in \autoref{fig:upsampler} pairs of images before and after being fed to our upsampler. We can observe that the $256 \times 256$ images are not really sharp, often with some blurry parts. But as mentioned above, some textures are nevertheless well enhanced by the upsampler, such as the cloudy sky in the bottom-right image. This upsampler thus remains useful, especially as its implementation is publicly available. \begin{figure} \centering \includegraphics[width=1\linewidth]{assets/upsampler.pdf} \caption{Differences between $64 \times 64$ and $256 \times 256$ images, the latter being obtained using our upsampler model.} \label{fig:upsampler} \end{figure} \section{Exploring the properties of CLIP embeddings} \label{sec:clip_exploring} We have shown in \autoref{sec:clip} that CLIP embeddings have many useful properties. However, it has not yet been demonstrated whether these embeddings can be manipulated and combined in their continuous vector-space representation. In particular, we are referring to the seminal work of \cite{mikolov2013linguistic}, which exhibited that linguistic regularities appear in the vector space of word embeddings, enabling simple vector operations. For instance, they showed that the vector corresponding to "Man + Queen - Woman" resulted in a vector very close to "King". We would therefore like to see in this section whether CLIP embeddings also learn relationships between concepts and whether this allows us to perform simple vector-based reasoning. \subsection{Image embeddings} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{assets/img_algebra.pdf} \caption{Simple vector operations on the CLIP image embeddings. The average of two embeddings yields a combination of the content of the two images (\textbf{top}). However, subtracting one part of the image leads to embeddings which are out of the training distribution, resulting in peculiar images (\textbf{bottom}).} \label{fig:img_algebra} \end{figure} We begin by investigating the properties of the CLIP image embeddings. To keep our analyses independent from the CLIP translator, we use only the image decoder and not the full pipeline. Our first experiment consists in testing the semantic robustness of the image embeddings under simple vector operations such as additions and subtractions. To do this, we compute the image embeddings of two different images with the help of the CLIP image encoder. Next, we perform 1) a vector average and 2) a vector difference between these two embeddings. We then use the image decoder to generate the images corresponding to the calculated embeddings. The results are displayed in \autoref{fig:img_algebra}. We observe that taking the average works well and leads to a satisfactory outcome combining the mountain of the first image and the beach of the second one. The subtraction, on the other hand, results in an odd image, whereas it was supposed to simply represent a palm tree. This could be explained by the fact that the embedding obtained after the subtraction may end up in an unseen area of the embedding space. We push the analysis further by considering different points of the linear interpolation between two image embeddings in \autoref{fig:interpolation}. We observe intermediate variations of the image content.
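These operations are plain vector arithmetic on the 512-dimensional embeddings; a minimal sketch (assuming embeddings obtained as in the earlier CLIP snippet, each result then being fed to the image decoder):
\begin{verbatim}
# y_a, y_b: CLIP image embeddings of the two images, shape (1, 512)
y_avg = 0.5 * (y_a + y_b)  # average: combines the two image contents
y_diff = y_a - y_b         # difference: may leave the training manifold

def lerp(y_a, y_b, coef):
    """Linear interpolation between two embeddings, coef in [0, 1]."""
    return (1.0 - coef) * y_a + coef * y_b
\end{verbatim}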
We push the analysis further by considering different points along the linear interpolation between two image embeddings in \autoref{fig:interpolation}. We observe intermediate variations of the image content: at an interpolation coefficient of $0.25$, the beach gains some greenery, whereas at $0.75$ only the blue background remains from the first image, the rest being replaced by the palm tree. The transition from one image to the other is therefore rather smooth.

\begin{figure}[H] \centering \includegraphics[width=1\linewidth]{assets/interpolation.pdf} \caption{Linear interpolation between two image embeddings. The interpolation coefficient is indicated at the bottom. The rightmost and leftmost images are the original ones.} \label{fig:interpolation} \end{figure}

\subsection{Text embeddings}
We now focus on the text embeddings to combine the contents of different captions. We start with two captions, which are encoded and then averaged. We use the CLIP translator to translate the resulting text embedding into an image embedding and decode it with our image decoder. We obtain the image of \autoref{fig:txt_algebra}. Similarly to the image embeddings, we obtain an image including the content of both captions, as if the two sentences had simply been concatenated to form a single caption.

\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/txt_algebra.pdf} \caption{Combination of the content of two captions by an embedding average.} \label{fig:txt_algebra} \end{figure}

Our final experiment on CLIP embeddings consists of reproducing the famous example given by \cite{mikolov2013linguistic}. We expect the embedding resulting from the vector operation "An image of a man" + "An image of a queen" - "An image of a woman" to be decoded into an image of a king. The image obtained, as well as an image corresponding to the caption "An image of a king", can be seen in \autoref{fig:king}. We observe that the two images are similar, both portraying a man in a suit\footnote{In particular, we generated $16$ images per embedding, and we could not distinguish which samples came from which embedding (see \autoref{sec:more_image_king}).}. Even though our first mental image of a king is generally a man with a crown sitting on a throne, modern kings are more often men in suits, such as the King of Spain. Our dataset must therefore contain more kings of this type.

\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/king.pdf} \caption{The image decoded from the caption "An image of a king" (\textbf{left}). The image decoded from the displayed vector operation on text embeddings (\textbf{right}). The two images depict modern kings.} \label{fig:king} \end{figure}

\section{Image guidance} \label{sec:results_image_guidance}
In \autoref{sec:image_guidance}, we introduced a new guidance method named image guidance. It consists of steering the image generation process in the direction of another image of our choice. In this section, we conduct a preliminary experiment to see what this method is capable of. In particular, we would like to know whether it can help the generation process produce higher-quality images. To test this, we consider an image of a corgi lying on the beach, as well as a variation of this image where the corgi is additionally wearing a purple party hat and a red bow tie, as depicted in \autoref{fig:corgi}. The first image is used for image guidance, and the second one is encoded into a CLIP image embedding. We then use our image decoder conditioned on this image embedding with the aim of generating images similar to the second image, i.e., an image of a corgi with a hat and a bow tie. We test this with and without image guidance; a sketch of the modified sampling step is given below.
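The sketch shows one denoising step with the image-guidance term. It does not reproduce our exact update rule: in this simplified variant, the predicted clean image is pulled toward the base image with weight \texttt{scale}, and the transition to the previous timestep is written in deterministic, DDIM-like form for brevity; \texttt{eps\_model} stands for the noise-prediction network of our decoder.

\begin{verbatim}
import torch

@torch.no_grad()
def guided_step(eps_model, x_t, t, base_img, scale, alphas_cumprod):
    """One denoising step with an (assumed) image-guidance term."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    eps = eps_model(x_t, t)
    # Predicted clean image recovered from the noise prediction.
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    # Image guidance: pull the prediction toward the base image.
    # In our experiments, `scale` is decayed linearly over the timesteps.
    x0_hat = x0_hat + scale * (base_img - x0_hat)
    # Deterministic transition to step t-1.
    return a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
\end{verbatim}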
In our experiment, the image guidance scale is set to $0.005$ and is linearly decayed over the timesteps. The outputs are displayed in \autoref{fig:corgi}.

\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/corgi.pdf} \caption{Illustration of the utility of image guidance, used here with the base image. Reproducing the content of the target image from its image embedding is easier with image guidance than without. All images are $64 \times 64$.} \label{fig:corgi} \end{figure}

We observe that the image generated with image guidance is far more consistent than the one without. It seems that image guidance can help the diffusion model by providing a base image to build on, allowing it to better reproduce small details, such as the position and color of the hat and bow tie. Nevertheless, we only consider the image decoder here. In real-world applications, we have access neither to the target image nor to its embedding. Instead, we can use our full pipeline to obtain the embedding of a textual description of the target image. Then we can decode the embedding with and without the help of image guidance, as before. The elements used in this process are shown in \autoref{fig:image_guidance_tree}, as well as the generated images (more of them can be found in \autoref{sec:more_image_guidance}). We notice here again that the base image helps the diffusion model to correctly interpret the embedding. Indeed, the image produced with image guidance, and especially its palm tree, is closer to the outcome we expected for the given caption than the image obtained without image guidance. It thus appears that image guidance is a useful tool to assist the generation process, by providing extra information, in the form of an image, about the desired outcome.

\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{assets/image_guidance_tree.pdf} \caption{The base image with image guidance helps to obtain better text-image alignments from a given target caption.} \label{fig:image_guidance_tree} \end{figure}

However, this method is clearly limited. Firstly, an appropriate $64 \times 64$ image must be found to serve as the base image, along with a proper image guidance scale. Besides, image guidance induces a significant mismatch between training and sampling, as the base image is added at each timestep; this is, however, the case for other guidance methods as well. For these reasons, we struggled to produce quality images illustrating the effectiveness of image guidance, in particular for image inpainting. It might then be interesting to see how this method performs with better models such as DALL-E 2 and Imagen.

\chapter{Discussion} \label{sec:discussion}
Now that we have conducted several experiments, we discuss the results obtained, and especially the possibilities and limitations of our method. We start by comparing our approach with the related works and explaining the differences. Next, we propose some future directions that could be taken to continue and improve this work. Finally, because generative models can have a deep impact on society, we briefly mention some ethical implications of these models.

\section{Comparison with related works} \label{sec:comparison}
The quality of our synthetic images is difficult to compare directly with that of DALL-E 2 and Imagen. These two models use far more resources and therefore produce highly realistic images.
In particular, we list here the differences between our method and DALL-E 2 which could have an impact on the final outcome:
\begin{itemize}
\item \textbf{Dataset size}: DALL-E 2 uses 650M image-caption pairs, whereas we only use 12M.
\item \textbf{Diffusion model size}: The diffusion model employed by DALL-E 2 to decode CLIP image embeddings has a larger architecture than ours, e.g., $512$ base channels against only $256$ for us. This results in 3.5B and 0.5B learnable parameters, respectively.
\item \textbf{CLIP encoder size}: We use smaller pre-trained CLIP encoders than DALL-E 2. For the CLIP image encoder we use the ViT-B/32, while they use the ViT-L/14, and their CLIP text encoder is twice as wide and deep as ours. The authors of Imagen showed that the size of the language model can make a considerable difference to the performance of the whole system.
\item \textbf{Upsamplers}: We use a super-resolution model trained on ImageNet only, while the authors of DALL-E 2 train their own upsamplers on their large dataset. Nevertheless, this allows us to avoid the large computational cost of that training.
\item \textbf{CLIP translator}: We use a small MLP model (30M parameters) for the CLIP translator, whereas DALL-E 2 uses a large diffusion model (1B parameters).
\item \textbf{Training}: There are a few differences between the training of DALL-E 2 and ours. Firstly, they train their image decoder with a batch size of 2,048 for $2.5$ epochs. For reference, we only train our image decoder for $1$ epoch on a $54$ times smaller dataset. Moreover, we had to choose a small batch size so that the GPU could support the heavy computations. This led to exploding gradients, which we solved with gradient clipping, though it may still have an impact on performance. Finally, they use optimization tricks and tune their hyperparameters well, which we could not replicate without further increasing the computational demand.
\end{itemize}
All these differences therefore lead to significantly different performances, but also to different computational costs. Our method may not match the sample quality of DALL-E 2, but it still manages to understand the content of the captions and to reproduce it with varying degrees of success, without the need for notably high computational resources. In particular, it has the same properties as DALL-E 2 that allow us to manipulate CLIP embeddings, as we have seen in \autoref{sec:clip_exploring}. This enables combining images and captions, and offers a deeper comprehension of the content of both text and image embeddings. This property is not present in Imagen, which does not represent images explicitly by embeddings. Nevertheless, the model structure of Imagen is simpler and has the benefit of leveraging a huge pre-trained language model. Its text modeling capacity is therefore higher, and the output for a given caption is a sequence of vectors instead of a single vector as for DALL-E 2 and us, potentially capturing more content subtleties. Another advantage of our method and DALL-E 2 over Imagen is the possibility to use distinct datasets to train the image decoder and the CLIP translator. Indeed, the image decoder can use any image dataset during training, not only captioned images. Note that this is the reason why we have included ImageNet in our dataset pool. The CLIP translator, on the other hand, needs pairs of text and image embeddings, which are lighter to store than images and captions.
This makes it easier to consider massive datasets such as LAION \citep{LAION}, whose authors provide the CLIP embeddings for each image and caption.

\section{Future work}
The design and training of text-to-image generative models are based on many heuristics. Thus, an infinite number of experiments can be done just by trying various combinations of the different hyperparameters. However, recent observations show that simply scaling the size of the dataset and the model leads to performance improvements almost all the time. This results in the traditional trade-off between computational cost and model performance. Accordingly, a longer and more careful training of our diffusion model should definitely improve its effectiveness. We could also consider the larger CLIP encoders which are publicly available and pre-trained. In particular, Imagen demonstrated that huge language models are key to enhancing sample quality and text-image alignment. Furthermore, training our own upsamplers could also enable us to better reproduce high-level image features.

We conducted several experiments in this work. We would have liked to test more ideas, but the sampling time of our method, due to the sequential nature of diffusion models, restricted us. We therefore propose as future work to implement and test DDIM \citep{DDIM}, in order to reduce the number of steps required during sampling. It would also allow us to compute the FID more efficiently and to experiment in depth with the capabilities of image guidance. Finally, as mentioned above, the DALL-E 2 pipeline is complex and requires the juxtaposition of different models. Leveraging instead a huge pre-trained language model, as Imagen does, reduces this complexity. We hence suggest trying to incorporate this element into our codebase. It would be very easy to implement, as the only modification to our diffusion model is to give as input the embedding obtained from the language model instead of the one from the CLIP translator. If the upsamplers are also trained, this would result in a replication of Imagen, which could probably synthesize better images than our current implementation.

\section{Societal impacts}
Outputs of large-scale text-to-image models have an impact on society, with direct repercussions on individuals. In this section, we propose a brief review of the principal issues and opportunities offered by these models.

\textbf{Energy consumption.} Deep neural networks, and especially deep generative models, often require high computational resources, consisting mainly of hours or even days of training on modern tensor processing hardware. This consumes a considerable amount of energy whose production releases CO$_2$ into the atmosphere\footnote{Energy produced from renewables does not emit CO$_2$; however, no country in the world attains 100\% renewable energy for now.}, contributing to climate change. \cite{EnergyDL} estimated that the CO$_2$ emission cost of performing neural architecture search could reach 313,078 kgCO$_2$e, the equivalent of 8.6 years of average energy consumption for an American. \cite{reportEnergy} therefore encourages the machine learning community to systematically report the energy consumption of their models. Consequently, we use \textit{cumulator}, a tool developed at EPFL by \cite{cumulator}, to quantify the carbon footprint of the training of our models. We get $14.2$ kgCO$_2$ to train the image decoder and $0.2$ kgCO$_2$ for the CLIP translator.
The amount for the image decoder is considerable and reflects the issue with these huge models, which require colossal amounts of computational resources. Nevertheless, we observe that choosing a simple MLP model for the CLIP translator was judicious, since it avoids the heavy computations required by the prior diffusion model of DALL-E 2.

\textbf{Dataset bias.} It is well established that massive web-scraped datasets contain inherent biases \citep{pyrrhic, revise}, mirroring harmful racism and gender stereotypes, among other things. In addition, \cite{stereotypes_multimodal} showed that one of these image-text datasets, LAION \citep{LAION}, includes explicit images of rape and pornography. Therefore, text-to-image generation models trained on these datasets can reproduce problematic and prejudicial content \citep{glide}, which amplifies these issues. Even if our method is less affected, since the datasets we use are mostly filtered and curated, it is important to raise awareness to prevent detrimental impacts on people who are already subject to discrimination. We therefore strongly encourage future works on text-to-image generation to take these considerations into account, for example by implementing suggestions from \cite{model_cards} and \cite{gebru_datasheets}.

\textbf{Malicious uses.} There is a high potential for misuse when it comes to generative models, as they can be used to generate deepfakes or violent images for harmful downstream applications \citep{deep_fakes}. The photorealistic but synthetic images produced by DALL-E 2 or Imagen could lead to the public being misinformed, or even manipulated. This could considerably reduce the trust that individuals place in images they see on the internet. For now, these large text-to-image generation models are not available to the public, as their authors are searching for ways to alleviate these problematic misuses. But even assuming solutions are found, they may not be sufficient to prevent all types of adversarial attacks, such as the one shown by \cite{hidden_vocabulary} on DALL-E 2.

\textbf{Beneficial uses.} Of course, text-to-image generative models do not only have disadvantages for society. Their ability to create high-quality images of any kind simply from textual descriptions opens the door to numerous beneficial and artistic applications. They can be a tool to enhance human creativity, available to anyone. In particular, artists could leverage these models to explore and boost their imagination. Moreover, these models can contribute to social causes, e.g., by synthesizing pictures depicting the dramatic effects of climate change, with the aim of raising public awareness \citep{climate_change}.

\chapter{Conclusion} \label{sec:conclusion}
Throughout this thesis, we investigated diffusion models for text-to-image generation. We started by reviewing the different elements that contributed to the recent progress of text-to-image generative models, delving into the diffusion model literature and the guidance methods employed to enhance them. We then implemented our own model to generate images from textual descriptions. It is an adapted replication of the state-of-the-art model DALL-E 2, which however requires considerably fewer computational resources to train than its counterpart. We make this implementation available to the public community.
Next, we experimented with our model to understand which components make text-to-image models so effective. We discovered that certain types of images are easier to generate. We also found that the embeddings representing texts and images exhibit semantic regularities, allowing us to perform vector operations to manipulate and combine the content of different texts and images. In addition, we introduced a new guidance method named image guidance. We demonstrated that image guidance has the potential to help text-to-image models produce images of higher quality and with better text-image alignment. Moreover, we compared our method to other state-of-the-art models in text-to-image generation. We noted that improvements could be obtained by scaling the size of our different models and datasets, but at the cost of a greater computational load. We also mentioned the societal issues raised by the use of larger models and datasets. We proposed future directions for this project which can be implemented without too much effort. In particular, replicating a model such as Imagen by integrating a larger pre-trained language model could have an important impact on performance. Finally, we strongly encourage further experiments, in order to contribute to the currently thin body of knowledge the AI community has about these models. \cleardoublepage \phantomsection
{ "timestamp": "2022-09-23T02:13:30", "yymm": "2209", "arxiv_id": "2209.10948", "language": "en", "url": "https://arxiv.org/abs/2209.10948" }
\section{Introduction} Throughout this article, we fix an additive character $\psi$ of $\mathbb{R}$. Let $dx$ be the unique Haar measure on $\mathbb{R}$ which is self-dual for the Fourier transform with respect to $\psi$. Unless we explicitly mention the contrary, by a representation we always mean a unitary Casselman-Wallach representation of finite length (a Fr\'echet representation of moderate growth), cf. \cite[Chapter XII]{Wal92}. The inner product on a representation is denoted by $(-,-)$. Let $\pi$ be a representation. We denote by $\pi^{\vee}$ the space of continuous linear functionals on $\pi$, equipped with the strong topology (uniform convergence on bounded subsets). The smooth dual of $\pi$, i.e. the subspace of smooth vectors in $\pi^{\vee}$, is identified with $\overline{\pi}$. Let $(G, G')$ be a reductive dual pair in $\mathrm{Sp}_{2m}(\mathbb{R})$. Let $\widehat{G}$ and $\widehat{G}'$ be the inverse images of $G$ and $G'$ in the metaplectic double cover $\widehat{\rm Sp}_{2m}(\mathbb{R})$ under the covering map. If $\pi$ and $\pi'$ are irreducible admissible representations of $\widehat{G}$ and $\widehat{G}'$ respectively, we say that $\pi$ and $\pi'$ correspond if $\pi\otimes \pi'$ is a quotient of the Weil representation $\omega$ of $\widehat{\mathrm{Sp}}_{2m}(\mathbb{R})$ restricted to $\widehat{G}\times\widehat{G}'$. Note that the Weil representation is not a representation by our convention, as it is not of finite length. Let $W$ be a $2n$-dimensional real symplectic vector space and let $V$ be a real quadratic space of dimension $2n+1$ with discriminant \[{\rm disc}(V)=(-1)^n{\rm det}(V)\equiv 1 \in \mathbb{R}^{\times}/\mathbb{R}^{\times 2}.\] The space $(W\otimes V, \langle-,-\rangle_W\otimes\langle-,-\rangle_V)$ is a real symplectic space. We have a natural homomorphism \begin{equation}\label{thetacorr} \widehat{\rm Sp}(W)\times O(V)\rightarrow \widehat{\rm Sp}(W\otimes V). \end{equation} We denote by $H(W\otimes V)=(W\otimes V)\ltimes\mathbb{R}$ the Heisenberg group associated to the symplectic space $W\otimes V$. Let $\omega_{\psi}$ be the Weil representation of $\widehat{{\rm Sp}}(W\otimes V)\ltimes H(W\otimes V)$ associated to $W\otimes V$. We denote by $\omega_{W, V,\psi}$ the representation of $\widehat{\rm Sp}(W)\times O(V)$ obtained by pulling back the Weil representation $\omega_{\psi}$ through the homomorphism (\ref{thetacorr}). Given an irreducible representation $\sigma$ of ${\rm O}(V)$, the maximal $\sigma$-isotypic quotient of $\omega_{W,V,\psi}$ has the form $\sigma\boxtimes \Theta_{W, V, \psi}(\sigma)$ for some smooth representation $\Theta_{W,V, \psi}(\sigma)$ of $\widehat{\rm Sp}(W)$, which is either $0$ or of finite length. Let $\theta_{W, V, \psi}(\sigma)$ be the maximal semi-simple quotient of $\Theta_{W,V, \psi}(\sigma)$. It is known by Howe \cite{Howe89} that $\theta_{W, V, \psi}(\sigma)$ is either zero or irreducible. Similarly, if $\pi$ is an irreducible genuine representation of $\widehat{\rm Sp}(W)$, we have the representations $\Theta_{W, V, \psi}(\pi)$ and $\theta_{W, V, \psi}(\pi)$ of ${\rm O}(V)$. Let ${\rm Rep}^{\rm gen}_{\psi}(\widehat{\rm Sp}(W))$ be the set of irreducible genuine representations of $\widehat{{\rm Sp}}(W)$. Let $S_{2n+1}$ be the set of isomorphism classes of real orthogonal spaces $V'$ with $\dim V'=2n+1$ and ${\rm disc}(V')\equiv 1$. Let ${\rm Rep}_{\psi}^{\rm irr}({\rm SO}(V'))$ be the set of irreducible representations of ${\rm SO}(V')$ with $V'\in S_{2n+1}$.
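To fix ideas, note that a quadratic space $V'$ of signature $(p,q)$ with $p+q=2n+1$ has ${\rm det}(V')\equiv(-1)^{q}$ in $\mathbb{R}^{\times}/\mathbb{R}^{\times 2}$, so the condition ${\rm disc}(V')\equiv 1$ amounts to $q\equiv n \pmod 2$. For example, when $n=1$ the set $S_{3}$ consists of exactly two spaces, namely those of signature $(2,1)$ and $(0,3)$.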
Adams and Barbasch show that the dual pairs $({\rm Sp}(W), {\rm O}(p,q))$ with $p+q=2n+1$ give rise to a bijection between the genuine representations of the metaplectic group and the representations of the odd special orthogonal groups of the same rank. \begin{theorem}\cite{AB98} There is a bijection given by the metaplectic theta correspondence: \[{\rm Rep}_{\psi}^{\rm gen}(\widehat{\rm Sp}(W))\leftrightarrow \coprod_{V'\in S_{2n+1}}{\rm Rep}_{\psi}^{\rm irr}({\rm SO}(V')).\] More precisely, given an irreducible genuine representation $\pi$ of $\widehat{\rm Sp}(W)$, there is a unique $V'\in S_{2n+1}$ such that $\theta_{W,V',\psi}(\pi)\in {\rm Rep}_{\psi}^{\rm irr}({\rm SO}(V'))$ is nonzero. \end{theorem} Among the (genuine) irreducible representations, there is an important class of representations whose matrix coefficients are controlled by the Harish-Chandra $\Xi$ function. We call such representations tempered (genuine) representations. The classification of tempered representations is given in \cite{KZ82}; in particular, one knows that an irreducible representation is tempered if and only if it is an irreducible parabolic induction of a limit of discrete series (cf. \cite[Theorem 14.2]{KZ82}). Adams and Barbasch explicitly determined the $K$-types of all the representations on both sides of the metaplectic theta correspondence. Together with the classification of the irreducible tempered representations, one can deduce that the metaplectic theta correspondence is compatible with the tempered condition. More precisely, let ${\rm Temp}^{\rm gen}_{\psi}(\widehat{\rm Sp}(W))\subset {\rm Rep}^{\rm gen}_{\psi}(\widehat{\rm Sp}(W))$ be the subset of irreducible tempered genuine representations and let ${\rm Temp}_{\psi}^{\rm irr}({\rm SO}(V'))\subset {\rm Rep}_{\psi}^{\rm irr}({\rm SO}(V'))$ be the subset of irreducible tempered representations of ${\rm SO}(V')$ with $V'\in S_{2n+1}$; then we have \begin{theorem}\label{main} There is a bijection given by the metaplectic theta correspondence: \[{\rm Temp}_{\psi}^{\rm gen}(\widehat{\rm Sp}(W))\leftrightarrow \coprod_{V' \in S_{2n+1}}{\rm Temp}_{\psi}^{\rm irr}({\rm SO}(V')).\] More precisely, given an irreducible tempered genuine representation $\pi$ of $\widehat{\rm Sp}(W)$, there is a unique $V'\in S_{2n+1}$ such that $\theta_{W,V',\psi}(\pi)\in {\rm Temp}_{\psi}^{\rm irr}({\rm SO}(V'))$ is nonzero. \end{theorem} The main purpose of this article is to prove Theorem \ref{main} by directly estimating the matrix coefficients, without using the classification theorem. This may be known to experts, but we could not find a proof along these lines in the literature, which is why we write this note. We will only give the details of the estimation for $\theta_{W,V',\psi}(\pi)$ in this article, for $\pi\in {\rm Temp}_{\psi}^{\rm gen}(\widehat{\rm Sp}(W))$. If we start instead with an irreducible tempered representation $\sigma$ of $\mathrm{SO}(V)$, the same strategy shows that $\theta_{W,V,\psi}(\sigma)$ is tempered. {\bf Acknowledgement}: This note is based on a discussion with Hang Xue. The authors would like to express their gratitude to Hang Xue for explaining his work on unitary groups to us. The second author would like to thank Wenwei Li and Fan Gao for useful discussions on representations of metaplectic groups.
\section{Tempered genuine representations of metaplectic groups} For a real reductive group $G$, Harish-Chandra defined a special spherical function $\Xi^G$ on $G(\mathbb{R})$, which can be used to control the growth of $C^{\infty}$-functions on $G(\mathbb{R})$ with values in $\mathbb{C}$. We recall briefly its definition and some useful results. We denote by $C^{\infty}(G(\mathbb{R}))$ the space of all complex-valued $C^{\infty}$-functions on $G(\mathbb{R})$. Let $P_{\mathrm{min}}$ be a minimal parabolic subgroup of a real reductive group $G$ with modulus character $\delta_{P_{\mathrm{min}}}$, and let $K$ be a maximal compact subgroup of $G$. Consider the normalized smooth induced representation \[i_{P_{\mathrm{min}}}^{G}(1)^{\infty}:=\{f\in C^{\infty}(G(\mathbb{R})): f(pg)=\delta_{P_{\mathrm{min}}}(p)^{1/2}f(g), \forall p\in P_{\mathrm{min}}(\mathbb{R}), g\in G(\mathbb{R})\}\] equipped with the scalar product \[(f, f')=\int_Kf(k)\overline{f'(k)}dk, \quad \forall f, f'\in i_{P_{\mathrm{min}}}^{G}(1)^{\infty}.\] Let $e_K\in i_{P_{\mathrm{min}}}^{G}(1)^{\infty}$ be the unique function such that $e_K(k)=1$ for all $k\in K$. Then the Harish-Chandra spherical function $\Xi^{G}$ is defined by \[\Xi^{G}(g)=(i_{P_{\mathrm{min}}} ^{G}(1)(g)e_K, e_K), \quad \forall g\in G(\mathbb{R}).\] If $f$ and $g$ are positive functions on a set $X$, we say that $f$ is essentially bounded by $g$ if there exists $c>0$ such that $f(x)\leq c g(x)$ for all $x\in X$; we denote this by $f\ll g$. We say that $f$ and $g$ are equivalent if $f$ is essentially bounded by $g$ and $g$ is essentially bounded by $f$. The function $\Xi^{G}$ is bi-$K$-invariant and, up to equivalence, independent of the choice of the maximal compact subgroup $K$. Let $A_G$ be the maximal $\mathbb{R}$-split torus of $G$, of rank $r$, and let $M_{\mathrm{min}}$ be the centralizer of $A_G$ in $G$, which is exactly the Levi factor of $P_{\mathrm{min}}$. We denote by $\Delta=R(A_G, P_{\mathrm{min}})$ the set of roots of $A_G$ in the unipotent radical $U_{\mathrm{min}}$ of $P_{\mathrm{min}}$. Set \begin{equation} \begin{split} A_G^+&=\{a\in A_G(\mathbb{R}): \vert\alpha(a)\vert\leq 1, \forall \alpha\in \Delta\}\\ &=\{(a_1,\cdots,a_r):0<\vert a_1\vert\leq \vert a_2\vert\leq\cdots\leq\vert a_r\vert\leq 1 \}. \end{split} \end{equation} Fixing an embedding $\iota: G(\mathbb{R})\rightarrow {\rm GL}_{m}(\mathbb{R})$, we define the height function \[\sigma(g)=1+\sup\{\log\vert a_{i,j}\vert, \log \vert b_{i,j}\vert\},\] where $(a_{i,j})$ is the matrix $\iota(g)$ and $(b_{i,j})$ is the corresponding matrix of $\iota(g^{-1})$. In particular, if $a=(a_1,\cdots, a_r)\in A_G^+$, we have \begin{equation}\label{height}\sigma(a)=1-\log\vert a_1\vert\geq 1.\end{equation} We have the following well-known estimate of $\Xi^G$ due to Harish-Chandra. \begin{lemma}\label{est}\cite[Theorem 30]{Var77} There exist constants $A, B>0$ such that for any $a\in A_{G}^+$, we have \[A^{-1}\delta_{P_{\mathrm{min}}}^{\frac{1}{2}}(a)\leq \Xi^{G}(a)\leq A\delta_{P_{\mathrm{min}}}^{\frac{1}{2}}(a)\sigma (a)^B.\] \end{lemma} The metaplectic group $\widehat{\rm Sp}_{2n}$ is not an algebraic group, but it behaves in many ways like one. In particular, we have the Cartan decomposition for $\widehat{\rm Sp}_{2n}$, i.e. $\widehat{\rm Sp}_{2n}=KA_0^+K$, where $K$ is the inverse image of a special maximal compact subgroup of ${\rm Sp}_{2n}$ and $A_0^+$ is the inverse image of $A^{+}_{{\rm Sp}_{2n}}$ in $\widehat{\rm Sp}_{2n}$.
We define the corresponding Harish-Chandra spherical function by $\Xi^{\widehat{\rm Sp}_{2n}}=\Xi^{{\rm Sp}_{2n}}\circ p$, where $p$ is the covering map. Using Harish-Chandra's $\Xi$-function, we have the following definition of tempered representations of real reductive groups and metaplectic groups. \begin{definition} We say that a unitary representation $(\pi, \mathscr{H}_{\pi})$ of a real reductive group or a metaplectic group $G$ is tempered if for any $e, e'\in \pi$, we have the inequality \[\vert(\pi(g)e, e')\vert\ll\Xi^{G}(g), \quad \forall g\in G(\mathbb{R}).\] \end{definition} Thanks to the work of Cowling, Haagerup and Howe \cite{CHH}, a representation of $G$ is tempered if and only if its matrix coefficients are almost square-integrable functions (i.e., they belong to $L^{2+\epsilon}(G(\mathbb{R}))$ for all $\epsilon\in\mathbb{R}_{>0}$). \section{Metaplectic Theta Correspondence} Let $W$ be a $2n$-dimensional real symplectic vector space. Let $S_{2n+1}$ be the set of isomorphism classes of real orthogonal spaces $V'$ with $\dim V'=2n+1$ and ${\rm disc}(V')\equiv 1$. Recall that the metaplectic theta correspondence is established by Adams and Barbasch \cite{AB98}. \begin{theorem}\cite{AB98} There is a bijection given by the metaplectic theta correspondence: \[{\rm Rep}_{\psi}^{\rm gen}(\widehat{\rm Sp}(W))\leftrightarrow \coprod_{V' \in S_{2n+1}}{\rm Rep}_{\psi}^{\rm irr}({\rm SO}(V')).\] More precisely, given an irreducible genuine representation $\pi$ of $\widehat{\rm Sp}(W)$, there is a unique $V'\in S_{2n+1}$ such that $\theta_{W,V',\psi}(\pi)\in {\rm Rep}_{\psi}^{\rm irr}({\rm SO}(V'))$ is nonzero. \end{theorem} In this section, we recall the explicit metaplectic theta correspondence. \subsection{Estimation of matrix coefficients} \subsubsection{Tempered representations of real reductive groups} Let $G$ be a real reductive group and $A_G$ the maximal $\mathbb{R}$-split torus of $G$, of rank $r$ (i.e. $A_G\cong (\mathbb{R}^{\times})^r$). We write an element $a\in A_G(\mathbb{R})$ as $(a_1,\cdots,a_r)$. Define \[A_G^+=\{a\in A_G(\mathbb{R}):0<\vert a_1\vert\leq\cdots\leq\vert a_{r}\vert\leq 1 \}.\] Fix a minimal parabolic subgroup $P_G\supset A_G$ of $G$. We denote by $\delta_{P_G}$ the modulus character of $P_G$. We fix a special maximal compact subgroup $K$ of $G(\mathbb{R})$ and we have a Cartan decomposition of $G(\mathbb{R})$: \[G(\mathbb{R})=KA_G^+K.\] For any integrable function $f$ on $G(\mathbb{R})$, the following formula holds (cf. \cite[\S 4]{II10}): \begin{equation}\label{deco}\int_{G(\mathbb{R})} f(g)dg=\int_{A_G^+}\nu(a)\int_{K\times K}f(k_1ak_2)dk_1dk_2da,\end{equation} where $\nu$ is a positive function on $A_G^+$ such that $\nu(a)\leq C\cdot\delta^{-1}_{P_G}(a)$ for some constant $C$. Let $\pi$ be a tempered representation of $G$. For any $v,v'\in\pi$ and $g\in G(\mathbb{R})$, by the definition of temperedness there exists a constant $A_1>0$ such that \[\vert(\pi(g)v,v')\vert\leq A_1\cdot \Xi^G(g).\] Moreover, a more precise estimate is given by Sun \cite{Sun09}: there is a continuous seminorm $\nu_\pi$ on $\pi$ such that \begin{equation}\label{matrixcoeff}\vert (\pi(g)v, v')\vert\leq \Xi^G(g)\nu_\pi(v)\nu_{\pi}(v'), \quad \forall v, v'\in \pi.
\end{equation} We deduce from Lemma \ref{est} and the fact that the Harish-Chandra function $\Xi^G$ is bi-$K$-invariant that there exist two positive constants $A_2$ and $B$ such that for any $g=k_1ak_2\in KA_G^+K$, we have $$\Xi^G(g)=\Xi^G(k_1ak_2)=\Xi^G(a)\leq A_2\delta_{P_G}^{1/2}(a)\sigma(a)^B.$$ Thus, for fixed $v, v'\in \pi$, there exist two positive constants $A$ and $B$ such that for any $g=k_1ak_2\in KA_G^+K$, \begin{equation}\vert(\pi(g)v,v')\vert\leq A\delta_{P_G}^{1/2}(a)\sigma(a)^B.\end{equation} Similarly, if $\pi$ is a tempered genuine representation of $\widehat{\rm Sp}(W)$, then for any $v, v'\in \pi$ there exist constants $A$ and $B$ such that for every element $\hat{g}=k\hat{a}k'\in \widehat{\rm Sp}(W)$, \begin{equation}\label{estmeta}\vert(\pi(\hat{g})v,v')\vert\leq A\cdot \Xi^{\widehat{\rm Sp}(W)}(\hat{g})\leq A\delta_{P_G}^{1/2}(p(\hat{a}))\sigma(p(\hat{a}))^B.\end{equation} \subsubsection{Weil representation} Let $W$ be a real symplectic space of dimension $2n$ and $V$ a real quadratic space of dimension $2n+1$ with discriminant \[{\rm disc}(V)=(-1)^n{\rm det}(V)\equiv 1 \in \mathbb{R}^{\times}/\mathbb{R}^{\times 2}.\] The space $(W\otimes V, \langle-,-\rangle_W\otimes\langle-,-\rangle_V)$ is a real symplectic space. Then $G_1={\rm Sp}(W)$ and $G_2={\rm O}(V)$ form a reductive dual pair. We have a natural homomorphism \begin{equation}\label{emb1} G_1\times G_2\rightarrow {\rm Sp}(W\otimes V). \end{equation} Let $r_i$, $1\leq i\leq 2$, be the rank of the maximal $\mathbb{R}$-split torus $A_{G_i}$ of $G_i$. Note that since $G_1$ is split, we have $r_1=n$. We will denote by $a=(a_1,\cdots,a_{r_1})$ an element of $A_{G_1}$ and by $b=(b_1,\cdots,b_{r_2})$ an element of $A_{G_2}$. Let $\widehat{G}_1$ and $\widehat{G}_2$ be the inverse images of $G_1$ and $G_2$ in the metaplectic group $\widehat{\rm Sp}(W\otimes V)$ under the covering map. The embedding (\ref{emb1}) can be lifted to a homomorphism \[\widehat{G}_1\times\widehat{G}_2\rightarrow\widehat{\mathrm{Sp}}(W\otimes V).\] Let $(\omega_{W, V, \psi},\mathscr{S})$ be the Weil representation of the metaplectic group $\widehat{\mathrm{Sp}}(W\otimes V)$ realized on the mixed model \cite[Section 7.4]{GI16}. By \cite[Lemma D.1]{GI11}, for $(\hat{g},h)\in \widehat{G_1}\times G_2$ and $\phi,\phi'\in \omega_{W,V,\psi}$, we have: \begin{equation}\label{estWeil} \begin{split} \vert (\omega_{W,V,\psi}(\hat{g},h)\phi,\phi')\vert&\leq C\cdot\prod_{i=1}^{r_1} \vert a_i\vert^{\frac{2n+1}{2}}\prod_{j=1}^{r_2}\vert b_j\vert^{\frac{2n}{2}-n}\prod_{k=1}^{r_1}\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})\\ &=C\cdot\prod_{i=1}^{r_1} \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^{r_1}\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1}), \end{split} \end{equation} where $C$ is a constant and $\Upsilon(x)=\begin{cases} 1,& \text{ if }\vert x\vert\leq 1,\\ \vert x\vert^{-1},&\text{ if }\vert x\vert >1. \end{cases}$ Moreover, a more precise estimate is given by Xue \cite{Xue22}: there exists a continuous semi-norm $\nu_{\mathscr{S}}$ on $\omega_{W, V, \psi}$ such that \begin{equation}\label{mcXue} \vert(\omega_{W, V,\psi}(\widehat{g}, h)\phi, \phi') \vert\leq \prod_{i=1}^{r_1}\vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^{r_1}\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})\nu_{\mathscr{S}}(\phi)\nu_{\mathscr{S}}(\phi'). \end{equation} \subsection{Weil representation and theta lifts} In \cite[Theorem 6.1]{Li89}, Li shows that if the dual pair $(G_1,G_2)$ is in the stable range, then there is an explicit realization of the theta correspondence.
The unitary case is studied in \cite{LZ98}; more general classical groups are treated in \cite{GQT14}, and this is used by Xue in \cite{Xue22}. In the following, we recall the construction of the explicit metaplectic theta correspondence and study the matrix coefficients of this explicit theta lift. Let $\pi$ be a unitary irreducible genuine representation of $\widehat{G}_1$. Then the tensor product $\omega_{W,V, \psi}\otimes\pi$ is a $\widehat{G}_1\times\widehat{G}_2$-module, where $\widehat{G}_2$ acts through $\omega_{W,V, \psi}$ on the first factor and $\widehat{G}_1$ acts diagonally on $\omega_{W,V, \psi}\otimes\pi$. \begin{lemma}\label{cont}Let $\pi$ be an irreducible genuine tempered representation of $\widehat{\rm Sp}(W)$. There exist continuous semi-norms $\nu_{\pi}$ on $\pi$ and $\nu_{\mathscr{S}}$ on $\omega_{W, V, \psi}$ such that for any $v, v'\in \pi$ and $\phi,\phi'\in \omega_{W, V, \psi}$, \[\vert\int_{\mathrm{Sp}(W)}(\omega_{W,V,\psi}(g,1)\phi,\phi')\overline{(\pi(g)v,v')}dg\vert\leq \nu_{\pi}(v)\nu_{\pi}(v')\nu_{\mathscr{S}}(\phi)\nu_{\mathscr{S}}(\phi').\] \end{lemma} \begin{proof} By the estimates (\ref{matrixcoeff}) and (\ref{mcXue}) and the formula (\ref{deco}), the integral is bounded by \begin{equation} \begin{split} &\int_{ A_{G_1}^+}\prod_{i=1}^{r_1}\vert a_i\vert^{-\frac{1}{2}(2n+2-2i)}(1-\sum_{i=1}^{r_1}\log\vert a_i\vert)^B\prod_{j=1}^{r_1}\vert a_j\vert^{\frac{2n+1}{2}}da\\ &\int_{K_1\times K_1}\tilde{\nu}_{\pi}(\pi(k_1)v)\tilde{\nu}_{\pi}(\pi(k_1^{'-1})v')\tilde{\nu}_{\mathscr{S}}(\omega_{W, V, \psi}(k_1,1)\phi)\tilde{\nu}_{\mathscr{S}}(\omega_{W, V, \psi}(k_1^{'-1},1)\phi')dk_1dk_1^{'}, \end{split} \end{equation} where $B$ is a positive constant and $\tilde{\nu}_{\pi}$ (resp. $\tilde{\nu}_{\mathscr{S}}$) is a continuous semi-norm on $\pi$ (resp. $\omega_{W,V,\psi}$). The integral $\int_{A_{G_1}^+}\prod\limits_{i=1}^{r_1}\vert a_i\vert^{-\frac{1}{2}(2n+2-2i)}(1-\sum\limits_{i=1}^{r_1}\log\vert a_i\vert)^B\prod\limits_{j=1}^{r_1}\vert a_j\vert^{\frac{2n+1}{2}}da$ simplifies to the form $$\int_{A_{G_1}^+}\prod_{i=1}^{r_1}\vert a_i\vert^{i-\frac{1}{2}}(1-\sum_{i=1}^{r_1}\log\vert a_i\vert)^Bda,$$ hence it converges. Since $K_1$ is compact, the integral \[\int_{K_1\times K_1}\tilde{\nu}_{\pi}(\pi(k_1)v)\tilde{\nu}_{\pi}(\pi(k_1^{'-1})v')\tilde{\nu}_{\mathscr{S}}(\omega_{W, V, \psi}(k_1,1)\phi)\tilde{\nu}_{\mathscr{S}}(\omega_{W, V, \psi}(k_1^{'-1},1)\phi')dk_1dk_1^{'}\] is bounded by \[\mathrm{Vol}(K_1)^2\nu_{\pi}(v)\nu_{\pi}(v')\nu_{\mathscr{S}}(\phi)\nu_{\mathscr{S}}(\phi'),\] where $\nu_{\pi}(v)=\sup_{k_1\in K_1}\tilde{\nu}_{\pi}(\pi(k_1)v)$ and $\nu_{\mathscr{S}}(\phi)=\sup_{k_1\in K_1}\tilde{\nu}_{\mathscr{S}}(\omega_{W,V,\psi}(k_1,1)\phi)$. Each $\sup$ term defines a continuous semi-norm on the corresponding space by the uniform boundedness principle \cite[Theorem 33.1]{Tr67}. \end{proof} \begin{proposition}Let $\pi$ be an irreducible tempered representation of $\widehat{\rm Sp}(W)$. For $v, v'\in \pi$ and $\phi, \phi'\in \omega_{W,V,\psi}$, the multilinear form\footnote{We ignore the identification of multilinear forms with linear forms via the tensor product.} \begin{equation}\label{form} (v, v',\phi,\phi')\mapsto \int_{{\rm Sp}(W)}\overline{(\pi(g)v, v')}\cdot (\omega_{W, V, \psi}(g,1)\phi, \phi') dg \end{equation} extends continuously to a linear form on $\bar{\pi}\widehat{\otimes}\pi\widehat{\otimes} \omega_{W, V, \psi}\widehat{\otimes}\bar{\omega}_{W, V, \psi}$. It is not identically zero if and only if $\theta_{W, V,\psi}(\pi)\neq 0$.
\end{proposition} \begin{proof}The absolute convergence and continuity follow from Lemma \ref{cont}. The nonvanishing criterion is proved by Gan, Qiu and Takeda in \cite{GQT14}. \end{proof} The integral (\ref{form}) defines a hermitian form on $\bar{\pi}\otimes \omega_{W,V,\psi}$. In fact, for any $\phi,\phi'\in\omega_{W,V,\psi}$ and $v,v'\in\pi$, \begin{equation} \begin{split} \overline{\langle v\otimes\phi,v'\otimes\phi'\rangle}= &\overline{\int_{\rm Sp(W)}\overline{(\pi(g)v, v')}\cdot (\omega_{W, V, \psi}(g,1)\phi, \phi') dg}\\ =& \int_{\rm Sp(W)}(\pi(g)v, v')\cdot\overline{ (\omega_{W, V, \psi}(g,1)\phi, \phi')} dg\\ =&\int_{\rm Sp(W)}\overline{(v',\pi(g) v)}\cdot (\phi',\omega_{W, V, \psi}(g,1) \phi) dg\\ =&\int_{\rm Sp(W)}\overline{(\pi(g^{-1}) v',v)}\cdot (\omega_{W, V, \psi}(g^{-1},1) \phi',\phi) dg\\ =&\int_{\rm Sp(W)}\overline{(\pi(g) v',v)}\cdot (\omega_{W, V, \psi}(g,1) \phi',\phi) dg\\ =&\langle v'\otimes\phi',v\otimes\phi\rangle, \end{split} \end{equation} which means that (\ref{form}) defines a hermitian form on $\Theta_{W, V, \psi}(\pi)$. By \cite{He}, this form is positive semi-definite. Moreover, we have the following fact: if $q$ is a nonzero positive semi-definite hermitian form on a vector space $X$ and $L$ is the radical of $q$, then $q$ descends to an inner product on $X/L$, still denoted by $q$. To prove this, suppose there exists $x\notin L$ such that $q(x,x)=0$, and take some $y\in X$ satisfying $q(x,y)\neq 0$. For $t\in\mathbb{C}$, we then have \[q(tx+y,tx+y)=q(y,y)+2\,\mathrm{Re}\big(t\, q(x,y)\big).\] As $t$ is an arbitrary complex number and $q(x,y)\neq 0$, we conclude that for a well-chosen complex number $t$, $q(tx+y,tx+y)$ is a negative real number, which contradicts the positive semi-definiteness of $q$. Let $R$ be the radical of the positive semi-definite hermitian form defined by (\ref{form}) as above. Then (\ref{form}) defines an inner product on $\Theta_{W, V, \psi}(\pi)/R$. Therefore $\Theta_{W, V, \psi}(\pi)/R$ must be semisimple, and thus coincides with $\theta_{W, V, \psi}(\pi)$. The following proposition gives the explicit matrix coefficients of $\theta_{W,V,\psi}(\pi)$ via the explicit theta correspondence. \begin{proposition} The function $$\Phi_{\phi,\phi',v,v'}:h\in O(V)\mapsto\int_{\rm Sp(W)}\overline{(\pi(g)v, v')}\cdot (\omega_{W, V, \psi}(g,h)\phi, \phi') dg$$ defines a matrix coefficient of $\theta_{W, V, \psi}(\pi)$. \end{proposition} We also need the following proposition to simplify the computation. \begin{proposition}\label{redense}Let $\Phi$ be the subspace of the matrix coefficients of $\theta_{W,V,\psi}(\pi)$ spanned by the $\Phi_{\phi,\phi',v,v'}$, where $\phi$ and $\phi'$ range over a dense subspace of $\omega_{W,V,\psi}$, and $v$ and $v'$ range over a dense subspace of $\pi$. Then the space $\Phi$ is dense in the space of matrix coefficients of $\theta_{W,V,\psi}(\pi)$. \end{proposition} \begin{proof} Fix a surjective homomorphism $c:\omega_{W,V,\psi}\widehat{\otimes}\overline{\pi}\rightarrow\theta_{W,V,\psi}(\pi)$. The matrix coefficients of $\theta_{W,V,\psi}(\pi)$ are of the form $\langle \theta_{W,V,\psi}(\pi)(h)c(\phi,v),c(\phi',v')\rangle$ with $h\in O(V)$. The assertion then follows from the surjectivity of $c$ and the density of $\omega_{W,V,\psi}\otimes\overline{\pi}$ in $\omega_{W,V,\psi}\widehat{\otimes}\overline{\pi}$.
\end{proof} \section{Theta lifts for tempered representations} In this section, we use the estimates of the matrix coefficients of the various representations established in the previous sections to prove that if $\pi\in {\rm Temp}_{\psi}^{\rm gen}(\widehat{\rm Sp}(W))$, then $\theta_{W,V,\psi}(\pi)$ is tempered. This is equivalent to showing that the matrix coefficients of $\theta_{W,V,\psi}(\pi)$ are almost square-integrable functions (i.e., they belong to $L^{2+\epsilon}(O(V))$ for all $\epsilon\in\mathbb{R}_{>0}$). By Proposition \ref{redense}, it suffices to prove that for any $\epsilon\in\mathbb{R}_{>0}$ and any $\phi,\phi'\in\omega_{W,V,\psi}$ and $v,v'\in \pi$, the integral \begin{equation*} \int_{O(V)}\vert \Phi_{\phi,\phi',v,v'}(h) \vert^{2+\epsilon}dh=\int_{O(V)}\vert(\int_{\mathrm{Sp}(W)}(\omega_{W,V,\psi}(g,h)\phi,\phi')\overline{(\pi(g)v,v')} dg)\vert^{2+\epsilon}dh \end{equation*} converges. In the following, we will prove the stronger statement that the integral \begin{equation}\label{minusfinal}\int_{O(V)}(\int_{\mathrm{Sp}(W)}\vert(\omega_{W,V,\psi}(g,h)\phi,\phi')(\pi(g)v,v')\vert dg)^{2+\epsilon}dh\end{equation} converges. \subsection{Reduction using the estimates of matrix coefficients} For any $\phi,\phi'\in\omega_{W,V,\psi}$ and $v,v'\in \pi$, by the estimates (\ref{estmeta}) and (\ref{estWeil}), there exist positive constants $A,B$ such that $$ \vert(\omega_{W,V,\psi}(g,h)\phi,\phi')(\pi(g)v,v')\vert\leq A\delta_{P_{G_1}}^{\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^{r_1} \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^{r_1}\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1}).$$ Together with the formula (\ref{deco}), we have \begin{equation} \begin{split} &\int_{\mathrm{Sp}(W)}\vert(\omega_{W,V,\psi}(g,h)\phi,\phi')(\pi(g)v,v')\vert dg \\ \leq& A\int_{\mathrm{Sp}(W)}\delta_{P_{G_1}}^{\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^n \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})dg\\ \leq& A\int_{A_{G_1}^+}\delta_{P_{G_1}}^{-1}(a)\int_{K_1\times K_1}\delta_{P_{G_1}}^{\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^n \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})dk_1dadk'_1\\ =&A\cdot{\rm Vol}(K_1)^2\cdot\int_{A_{G_1}^+}\delta_{P_{G_1}}^{-\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^n \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})da. \end{split} \end{equation} Hence, denoting $A\cdot{\rm Vol}(K_1)^2$ by $A'$, for any $\epsilon>0$, and using the formula (\ref{deco}) again, we have \begin{equation} \begin{split} &\int_{O(V)}(\int_{\mathrm{Sp}(W)}\vert(\omega_{W,V,\psi}(g,h)\phi,\phi')(\pi(g)v,v')\vert dg)^{2+\epsilon}dh\\ \leq& A' \int_{O(V)}(\int_{ A_{G_1}^+}\delta_{P_{G_1}}^{-\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^n \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})da)^{2+\epsilon}dh\\ \leq & A'\int_{A_{G_2}^+}\delta_{P_{G_2}}^{-1}(b)\int_{K_2\times K_2}(\int_{A_{G_1}^+}\delta_{P_{G_1}}^{-\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^n \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})da)^{2+\epsilon}dk_2dbdk'_2\\ =& A'\cdot\mathrm{Vol}(K_2)^2\int_{A_{G_2}^+}\delta_{P_{G_2}}^{-1}(b)(\int_{A_{G_1}^+}\delta_{P_{G_1}}^{-\frac{1}{2}}(a)\sigma(a)^B\prod_{i=1}^n \vert a_i\vert^{\frac{2n+1}{2}}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})da)^{2+\epsilon}db. \end{split} \end{equation} By the formula (\ref{height}), we have \[\sigma(a)\leq 1-\sum_{i=1}^n\log\vert a_i\vert\leq 1-\sum_{i=1}^n\log\vert a_i\vert-\sum_{j=1}^{r_2}\log\vert b_j\vert.
\] Note that \[\delta_{P_{G_1}}(a)=\prod_{i=1}^n\vert a_i\vert^{2n+2-2i},\qquad\delta_{P_{G_2}}(b)=\prod_{j=1}^{r_2}\vert b_j\vert^{2n+1-2j}.\] Replacing $\epsilon$ by $2\epsilon$, it suffices to show that for all $\epsilon>0$, the integral \begin{equation}\label{intli} \begin{split} &\int_{ A_{G_1}^+\times A_{G_2}^+} \prod_{i=1}^n \vert a_i\vert^{(2i-1)(1+\epsilon)}\prod_{j=1}^{r_2}\vert b_j\vert ^{2j-2n-1}\prod_{k=1}^n\prod_{j=1}^{r_2}\Upsilon(a_kb_j^{-1})^{2+2\epsilon}\\ &(1-\sum_{i=1}^n\log\vert a_i\vert-\sum_{j=1}^{r_2}\log\vert b_j\vert)^{B(2+2\epsilon)}dadb \end{split} \end{equation} converges. \subsection{Proof of the convergence of the integral (\ref{intli})} Let $(p_1,\cdots,p_{r_2+1})$ be a $(r_2+1)$-tuple of non-negative integers such that \[p_1+\cdots+p_{r_2+1}=n.\] Let $S_{p_1,\cdots,p_{r_2+1}}$ be the subset of $A_{G_1}^+\times A_{G_2}^+$ defined by the condition \begin{equation}\label{order} \begin{split} &\vert a_1\vert\leq\cdots\leq \vert a_{p_1}\vert\leq\vert b_1\vert \\ \leq &\vert a_{p_1+1}\vert\leq \cdots\leq\vert a_{p_1+p_2}\vert \leq\vert b_2\vert \leq\vert a_{p_1+p_2+1}\vert\leq\cdots\leq\vert a_{p_1+\cdots+p_{r_2}}\vert \\\leq &\vert b_{r_2}\vert\leq \vert a_{p_1+\cdots+p_{r_2}+1}\vert \leq\cdots\leq\vert a_{p_1+\cdots+p_{r_2+1}}\vert \leq 1. \end{split} \end{equation} We can break the domain $A_{G_1}^+\times A_{G_2}^+$ of the integral (\ref{intli}) into the regions $S_{p_1,\cdots,p_{r_2+1}}$, and it suffices to show that the integral (\ref{intli}) converges over each region $S_{p_1,\cdots,p_{r_2+1}}$. We will use the following simple lemma to conclude convergence. \begin{lemma}\label{com} Let $N$ be a natural number, and let $s_1,\cdots,s_N$ and $B$ be real numbers. If $s_1+\cdots+s_i>0$ for all $1\leq i\leq N$, then the integral $$\int_{\vert x_1\vert\leq\cdots\leq\vert x_N\vert\leq 1}\vert x_1\vert^{s_1}\cdots\vert x_N\vert^{s_N}(1-\sum_{i=1}^N\log\vert x_i\vert)^{B}dx_1\cdots dx_N$$ converges. \end{lemma} Note that in a fixed region $S_{p_1,\cdots,p_{r_2+1}}$, we have \begin{equation*} \begin{split} \prod_{i=1}^n\prod_{j=1}^{r_2}\Upsilon(a_ib_j^{-1}) =\prod_{j=1}^{r_2}\left(\vert \prod_{i=1}^{p_{j+1}}a_{i+\sum_{k=1}^jp_k}\vert^{-j}\cdot\vert b_j\vert^{n-(\sum_{k=1}^{j}p_k)}\right). \end{split} \end{equation*} We rearrange the terms in the integral (\ref{intli}) with respect to the order given by the condition (\ref{order}). If the integral (\ref{intli}) on the region $S_{p_1,\cdots,p_{r_2+1}}$ satisfies the condition of Lemma \ref{com} with respect to this order, then we can conclude that the integral (\ref{minusfinal}) converges. For $0\leq t\leq p_{j+1}$ and $1\leq j\leq r_2$, we compute the sum of the exponents in the integral (\ref{intli}) up to $a_{p_1+\cdots+p_j+t}$: \begin{enumerate} \item The sum of the exponents of the $a_i$ ($1\leq i\leq p_1+\cdots+p_j+t$): $$(1+3+\cdots+(2(p_1+\cdots+p_j+t)-1))(1+\epsilon)-(p_2+2p_3+\cdots+(j-1)p_j+jt)(2+2\epsilon).$$ \item The sum of the exponents of the $b_i$ ($1\leq i\leq j$): \begin{equation*} \begin{split} 2(1+\cdots+j)-j(2n+1)+((n-p_1)+\cdots+(n-p_1-\cdots-p_j))(2+2\epsilon). \end{split} \end{equation*} \end{enumerate} Summing these two terms, we get $$(p_1+\cdots+p_j+t)^2+\epsilon((p_1+\cdots+p_j+t)^2+2j(n-p_1-\cdots-p_j-t))>0.$$ The same type of verification shows that the sum of the exponents up to $b_j$ is positive. Hence the integral (\ref{intli}) satisfies the condition of Lemma \ref{com}. As a consequence, the integral (\ref{minusfinal}) converges. \bibliographystyle{amsplain}
{ "timestamp": "2022-09-27T02:16:31", "yymm": "2209", "arxiv_id": "2209.10919", "language": "en", "url": "https://arxiv.org/abs/2209.10919" }
\section{Introduction} The need for interpretability becomes more apparent every day. Due to the black-box character of many machine learning (ML) algorithms, a typical user tends to be reluctant to trust the resulting models despite their high predictive performance \citep[][]{Glikson2020}. This attitude of the users consequently leads to lost potential. Interpretability is expected to mitigate this effect by providing transparency and explanations for the user, although the definition of interpretability itself is an ongoing discussion in the ML community \citep[see for example,][]{doshi-velez2017, linardatos2020, lipton2018, adadi2018}. In the pursuit of a universal definition of interpretability, the social sciences and philosophy offer an interesting perspective. In fact, the question of \textit{what constitutes a good explanation} has been of interest to philosophers for millennia. Arguably, the social sciences can contribute relevant insights from their field, and \citet{miller2018} points out that explainable ML without consideration of insights from the social sciences will result in failure, as “the very experts who understand decision-making models the best are not in the right position to judge the usefulness” \citep[p. 2]{miller2018}. One of the key aspects \citet{miller2018} puts forward is that humans intuitively search for counterfactual explanations (CE), \textit{i.e.}, explanations that provide information in terms of which variables should be changed in order to arrive at a different prediction outcome. CEs carry the underlying assumption that what would have been necessary in the past (retrodiction) directly translates into what should be done in the future (recommendation). This link between retrodiction and recommendation makes them very attractive for use in practice. Besides, CEs imitate how we provide explanations in everyday life: it has been established that we do not seek to explain the cause of an event \textit{per se}, but \textit{relative} to some other event that did not occur. Typically, we have a factual instance vector $\hat{\bm{x}}$ for which the (prediction) outcome $\hat{y}$ should be explained relative to some other, desired outcome $\tilde{y}$. The key idea for generating a CE is to find a data point $\bm{\tilde{x}}$ close to the factual instance $\hat{\bm{x}}$ such that the prediction outcome for $\bm{\tilde{x}}$ is $\tilde{y}$; the difference in the features constitutes the explanation. As CEs do not try to explain all possible causes of an event but focus on the changes to the environment necessary to reach a certain state, they tend to be simpler, and with that also easier to understand, than methods which communicate explanations based on the entire feature space \citep[][]{miller2018}. Due to these desirable characteristics of CEs, several approaches for generating them have been proposed in recent years \citep[\textit{e.g.}][]{Wachter.2018,Russell.2019, Mothilal.2020, Kanamori.2020}. A set of criteria for (good) CEs has been identified in the literature \citep[][]{Verma.2020}, but to the best of our knowledge this is the first work that addresses all of them in a combined setting. We propose \texttt{CE-OCL}, a generic and flexible approach to generating CEs based on optimization with constraint learning (OCL).
OCL is a new and fast-growing research field whose aim is to learn parts of an optimization model (\textit{e.g.}, constraints or the objective function) using ML models whenever explicit formulae are not available (see \citet{fajemisin2021optimization} for a recent survey on OCL). The core contribution of this work lies in bridging two apparently different fields that share many underlying similarities. We show that the criteria proposed in the literature can be addressed by an OCL framework. We propose a new modeling approach to ensure data manifold closeness and coherence, which stems from the OCL concept of the $\epsilon$-trust region. Finally, we propose exploiting incumbent solutions to obtain a set of diverse CEs in a single execution. With our extensive demonstration on standard datasets from the CE literature, we also set new benchmarks for future research.

\section{Related work}\label{sec:related_work}
\citet{Wachter.2018} were the first to propose an optimization-based approach for generating CEs. Given a trained classifier $h(\cdot)$, the aim is to find at least one CE, say $\bm{\tilde{x}}$, which has the closest distance to the original factual instance $\hat{\bm{x}}$ such that $h(\bm{\tilde{x}})$ equals a different target $\tilde{y}$. Such a CE can be obtained by solving the following mathematical optimization model: \begin{align} \label{eqn:wachtermodel} \min_{\bm{x}} \max_{\lambda} \lambda (h(\bm{x}) - \tilde{y})^2 + d(\hat{\bm{x}}, \bm{x}), \end{align} where $d(\cdot,\cdot)$ is a distance function and $\lambda$ acts as a nonnegative balancing weight to enforce $h(\bm{x}) = \tilde{y}$. \citet{Wachter.2018} suggest the $\ell_1$-norm weighted by the inverse median absolute deviation for this distance function, but other alternatives have also been proposed in the literature, \textit{e.g.}, a combination of the Mahalanobis distance and the local outlier factor \citep{Kanamori.2020}.

In a recent review of the literature, \citet{Verma.2020} identify a set of criteria that generated CEs should adhere to, both in theory and in practice. These criteria are \textit{validity, proximity, sparsity, actionability, data manifold closeness}, and \textit{causality}. In addition, several works have highlighted the importance of algorithms being able to generate multiple, diverse explanations to provide the user with a set of actions to choose from \citep[\textit{e.g.}][]{Verma.2020, Wachter.2018, Russell.2019, Navas-Palencia.2021, Mothilal.2020}. We summarize these criteria in eight components:

\textbf{Proximity:} The CE should be as close as possible to the factual instance $\hat{\bm{x}}$ with respect to feature values.

\textbf{Validity:} The prediction for the CE $\bm{\tilde{x}}$ should be equal to $\tilde{y}$, with $\tilde{y} \neq \hat{y}$.

\textbf{Coherence:} When one-hot encoding is used for categorical data, we should be able to map it back to the input feature space to obtain coherent explanations, \textit{i.e.}, exactly one of the dummy variables has to be equal to one, and the others to zero.

\textbf{Sparsity:} The CE should differ from the factual instance in as few features as possible.

\textbf{Actionability:} We can distinguish between immutable, mutable but not actionable, and actionable features. Immutable features are features that cannot be changed, such as a person's sex. Mutable but not actionable features are features that could theoretically be different, but are unrealistic to change, such as marital status.
The generated CE should exclude any changes to the immutable and non-actionable features.

\textbf{Data manifold closeness:} To ensure the generation of realistic and actionable explanations, the generated CEs should be close to the observed (training) data.

\textbf{Causality:} Any (known) causal relationships in the data should be respected in the proposed CEs to further ensure realistic explanations.

\textbf{Diversity:} Any algorithm for the generation of CEs should return a set of CEs which differ from each other in at least one feature.

To avoid failure modes due to using one-hot encoding for categorical variables, \citet{Russell.2019} implements a set of linear constraints combined with simple integer constraints on the indicator variables, creating coherent explanations that map back to the original input space. Their second contribution is the generation of multiple CEs. \citet{Wachter.2018} highlight the importance of generating a set of CEs and propose using local minima as a source of multiple, diverse CEs. \citet{Russell.2019} points out that for linear classifiers the objective function proposed by \citet{Wachter.2018} in \eqref{eqn:wachtermodel} is convex in $\bm{x}$ for any choice of $\lambda$, so that only one minimum exists. As an alternative, \citet{Russell.2019} suggests adding constraints greedily by restricting the state of variables altered in previously generated CEs. \citet{Mothilal.2020} also focus on diversity and propose the algorithm \texttt{DiCE}, where a set of diverse CEs is generated based on determinantal point processes \citep{Kulesza_2012}. \citet{Wachter.2018} point out that the optimal solution may not capture all relevant user preferences, and hence advocate the generation of multiple and diverse CEs to ensure at least one attainable explanation. Other works address this issue by focusing on producing only realistic and actionable explanations. To this end, \citet{Ustun.2019} introduce the notion of immutable, conditionally immutable, and mutable features. Immutable features are those that cannot be changed, such as sex or ethnicity. Conditionally immutable features are features that may only take on certain values, depending on the factual instance and its current state for that feature. For example, the feature \textit{education} with the value \textit{bachelor\_degree} can only change to higher degrees of education, such as \textit{masters\_degree} or \textit{phd}, but never back to \textit{highschool}. Mutable features are those that are not restricted in the values they can take on. Other authors have extended this line of work and proposed modified cost functions to ensure that the solutions are close to the training data, so that the generated CEs become sensible. With their algorithm \texttt{DACE}, \citet{Kanamori.2020} refine the notion of proximity using the \textit{Mahalanobis distance} and the \textit{local outlier factor} to generate CEs close to the empirical distribution of the training data. They compare their results with the work of \citet{Ustun.2019} and \citet{Wachter.2018}, and demonstrate how their approach generates distribution-aware CEs. The work of \citet{Poyiadzi2020} is based on graph theory and uses the $f$-distance to quantify the trade-off between the path length and the density along this path. They apply a shortest-path algorithm to minimize this distance, ensuring that the solution lies in a high-density region and is thereby more attainable in practice.
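Before turning to approaches that model causal structure, we note that baseline formulations such as \eqref{eqn:wachtermodel} are straightforward to implement for differentiable classifiers. The following is a minimal gradient-based sketch of our own (not the code of \citet{Wachter.2018}); it fixes $\lambda$ instead of performing the inner maximization and uses a plain $\ell_1$ distance in place of the MAD-weighted norm.

\begin{verbatim}
import torch

def wachter_ce(model, x_hat, y_target, lam=1.0, lr=0.01, steps=500):
    """Gradient-descent sketch of the Wachter et al. objective:
    minimize lam * (h(x) - y_target)^2 + ||x - x_hat||_1.
    `model` is a differentiable classifier returning a scalar score and
    `x_hat` a 1-D tensor; in practice, lam is increased until the
    counterfactual is valid.
    """
    x = x_hat.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * (model(x) - y_target) ** 2 + (x - x_hat).abs().sum()
        loss.backward()
        opt.step()
    return x.detach()
\end{verbatim}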
Several authors argue that the specification of a set of feasible actions, such as in \citet{Ustun.2019} and \citet{Kanamori.2020}, is insufficient in practice due to interactions and causal relationships among features \citep[\textit{e.g.},][]{Mahajan.2019, Mothilal.2020, Kanamori.2020jn, Karimi.2020}. Given a directed acyclic graph, \citet{Kanamori.2020jn} show how to generate \textit{ordered} CEs, where the necessary actions are provided to the user in order. \citet{Mahajan.2019} propose a proximity loss that is based on constraints derived from a structural causal model (SCM) of the input features, instead of the standard objective function based on the $\ell_1$- or $\ell_2$-norm. For example, if there is a known relation $f: (x_1, x_2) \rightarrow x_3$, the CE value for $x_3$ must depend on its \textit{parents} in the CE (\textit{i.e.}, $x_1$ and $x_2$). Alternatively, if such a function is unknown, \citet{Mahajan.2019} propose to learn feasibility constraints from users' binary feedback on generated CEs. Similarly, \citet{Karimi.2020} work with SCMs to account for causal relations in the data. In a situation where the mapping between parent and child features is not known, one may train a linear regression model. \texttt{DiCE} \citep[][]{Mothilal.2020} addresses causality with a post-hoc filtering approach based on causal constraints that is applied to the generated CEs. While the discussed related work requires access to at least the gradients of the model and, in some cases, also the training data, there exist approaches that require neither. For example, \citet{Laugel2017} propose a two-step heuristic approach, \textit{Growing Spheres}, where (i) points are generated in all directions around the factual instance, and (ii) once a point with the desired prediction outcome is found, this point is adjusted to induce sparsity while keeping the prediction outcome. Unlike most of the related work, which focuses on a subset of the established criteria, we propose a generic approach that accommodates all of them in a combined framework. To guide the reader, we have mapped the characteristics of the discussed literature and highlighted our contributions in Table~\ref{tab:overview}. We refer to \citep[][]{Verma.2020, Guidotti2022} for a complete and extensive overview of recent works on counterfactual explanations. \input{tables/table_overview} \section{Generation of counterfactual explanations}\label{sec:generate} In an OCL framework, ML models are used to specify constraints and objective functions of an optimization model when explicit expressions are unknown. First, the predictive model is trained on historical data, and then it is embedded into the optimization model using decision variables as inputs \cite{Biggs2017RF, verwer2017auction, villarrubia2018artificial}. Although the interplay between optimization and ML has a different aim in OCL versus CE generation, the two frameworks have a similar structure, which allows knowledge to be transferred between the two disciplines. In this regard, we show how the problem of generating CEs, given a fitted model $h(\cdot)$, a factual instance $\bm{\hat{x}}$, and the desired outcome $\tilde{y}$, can be seen as a special case of \textit{optimization with constraint learning}. We first introduce a generic OCL model, and then we describe its relation to the problem of generating CEs that meet the aforementioned criteria.
In an OCL setting, a dataset $\mathcal{D} = \{(\bm{\bar{x}}_i, \bar{y}_i)\}_{i=1}^N$, with observed feature vector $\bm{\bar{x}}_i$ and outcome of interest $\bar{y}_i$ for sample $i$, is used to train predictive models that are to be constrained or optimized in a larger optimization problem. An OCL model is typically presented as \begin{subequations} \begin{align} \underset{{\bm{x}\in \mathbb{R}^n,y\in \mathbb{R}}}{\mbox{minimize\hspace{4mm} }} \ & f(\bm{x}, y) & \label{eqn:conceptualmodelCL1}\\ \mbox{subject to\hspace{4mm}} \ & \bm{g}(\bm{x}, y) \leq \bm{0}, & \label{eqn:conceptualmodelCL2}\\ & y = h(\bm{x}), & \label{eqn:conceptualmodelCL3}\\ & \bm{x} \in \mathcal{X},& \label{eqn:conceptualmodelCL4} \end{align} \end{subequations} where $\bm{x} \in \mathbb{R}^n$ is the decision vector with components $x_i \in \mathbb{R}$, $f(\cdot,\cdot):\mathbb{R}^{n+1} \mapsto \mathbb{R}$ and $\bm{g}(\cdot,\cdot):\mathbb{R}^{n+1} \mapsto \mathbb{R}^m$ are known functions, possibly also depending on the predicted outcome $y$, and $h(\cdot):\mathbb{R}^{n} \mapsto \mathbb{R}$ represents the predictive model\footnote{To simplify our exposition, we include only one predictive model. However, a general OCL framework admits multiple learned constraints in the model.} trained on $\mathcal{D}$. The set $\mathcal{X}$ defines the trust region, \textit{i.e.}, the set of solutions for which we trust the embedded predictive models (see below for details). {\leftskip=0.5cm\relax \rightskip=0.5cm\relax The \textit{Palatable Diet Problem} \cite{maragno2021mixedinteger} is a conventional example of OCL, in which we seek to find a cost-minimizing diet that satisfies fixed nutrient requirements while also being sufficiently “palatable.” The objective (\ref{eqn:conceptualmodelCL1}) and the nutritional constraints (\ref{eqn:conceptualmodelCL2}) are functions of the decision variable $\bm{x}$ and are explicitly known, while the palatability constraints are not explicit but depend on personal taste. Exploiting survey data on how much people like different diets, an ML model $h(\bm{x})$ is trained and embedded into the model as a set of constraints (\ref{eqn:conceptualmodelCL3}). The palatability constraint (\ref{eqn:conceptualmodelCL2}) is represented as $y \geq \tau$; that is, a diet is feasible only if its predicted palatability $y \in \mathbb{R}$ is at least a chosen threshold $\tau \in \mathbb{R}$. \par}
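To make the embedding step concrete, the following minimal sketch (in Python with the \texttt{gurobipy} API; the coefficients, data, and variable names are entirely hypothetical and independent of the \texttt{OptiCL} implementation discussed later) illustrates a diet problem of this kind, where a linear palatability model $h(\bm{x}) = \bm{w}^\top \bm{x} + b$, assumed to have been trained offline, is embedded via constraints of the form (\ref{eqn:conceptualmodelCL3}) and (\ref{eqn:conceptualmodelCL2}): \begin{verbatim}
# Minimal OCL sketch (hypothetical data): a linear palatability model,
# trained offline, is embedded as y = h(x) and constrained by y >= tau.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

n = 5                                             # number of food items
cost = np.array([2.0, 1.5, 3.0, 0.8, 1.2])        # known cost coefficients
w, b = np.array([0.3, -0.1, 0.5, 0.2, 0.0]), 0.4  # learned h(x) = w.x + b
tau = 1.0                                         # palatability threshold

m = gp.Model("palatable_diet")
x = m.addVars(n, lb=0.0, name="x")                # amount of each food
y = m.addVar(lb=-GRB.INFINITY, name="y")          # predicted palatability

# Learned constraint y = h(x); a ReLU network or tree ensemble would
# instead require auxiliary binary variables and big-M constraints.
m.addConstr(y == gp.quicksum(w[i] * x[i] for i in range(n)) + b)
m.addConstr(y >= tau)                             # palatability requirement
# ... explicitly known nutrient constraints g(x) <= 0 would go here ...
m.setObjective(gp.quicksum(cost[i] * x[i] for i in range(n)), GRB.MINIMIZE)
m.optimize()
\end{verbatim}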
Formulation (\ref{eqn:conceptualmodelCL1}-\ref{eqn:conceptualmodelCL4}) is quite general and encompasses a large body of work that includes CE generation. Now, we characterize the parallelism between the eight components listed in Section~\ref{sec:related_work} and the structure of the resulting OCL model. \textbf{Proximity.} By definition, a CE has to be in the proximity of the factual instance according to some user-defined distance function. To obtain a CE $\bm{\tilde{x}}$ in the proximity of $\bm{\hat{x}}$, we can write the objective function (\ref{eqn:conceptualmodelCL1}) as a distance function $d(\bm{x}, \hat{\bm{x}})$. In the literature, this function is typically the $\ell_1$-norm, the $\ell_2$-norm, or the Mahalanobis distance. \textbf{Validity.} While the trained model $h(\cdot)$ is used in constraint learning to define, completely or partially, the objective function and/or the constraints, in CE generation it is used to enforce the validity constraint. In practice, constraint (\ref{eqn:conceptualmodelCL3}) is an encoding of the predictive model; that is, embedding a trained ML model requires adding multiple constraints and auxiliary variables. When $h(\cdot)$ is a classification model, CE validity is obtained by constraining the model prediction to be equal to the desired class $\tilde{y}$; that is, we set $y=\tilde{y}$. If $h(\cdot)$ is a regression model, the OCL framework still applies, and an inequality constraint can be used to enforce validity, \textit{e.g.}, $y\leq\tilde{y}-\delta$ or $y\geq\tilde{y}+\delta$ for some fixed $\delta \in \mathbb{R}_+$. \textbf{Coherence.} When one-hot encoding is used to deal with categorical features, we can use the constraints proposed by \citet{Russell.2019} to obtain coherent CEs. That is, we write for $k$ categorical features the following constraints: \begin{align} \sum_{j' \in \mathcal{C}_j} x_{j'} = 1, ~~ j = 1,\dots, k,\label{eqn:coherence} \end{align} where $\mathcal{C}_j$ is a set of indices referring to the dummy (binary) variables used to represent the categorical feature $j$. \textbf{Sparsity.} Sparsity can be handled by enforcing the following set of constraints: \begin{subequations} \begin{align} & \ |x_j - \hat{x}_j| \leq M z_j, ~~ j=1, \dots, n,\label{eqn:sparsity1}\\ & \ \sum_{j=1}^n z_j \leq K,\label{eqn:sparsity2} \end{align} \end{subequations} where $z_j \in \{0,1\}$, $j=1, \dots, n$, are auxiliary variables used to count the number of features in $\bm{x}$ that differ from $\hat{\bm{x}}$, and $K$ is an upper bound on the number of allowed changes. Alternatively, constraint (\ref{eqn:sparsity2}) can be relaxed and moved to the objective function with a scaling penalty factor $\alpha > 0$; that is, we obtain the new objective function $f(\bm{x}, y) + \alpha\sum^n_{j=1} z_j$. Though simpler, this relaxation does not guarantee an optimal solution with at most $K$ changes. \textbf{Actionability.} As a recommended CE should never change the immutable features, we can restrict the CE to be equal to the factual instance for all the immutable features. Suppose that the set of immutable features is represented by $\mathcal{I}_m$; then we simply add the following constraints: \begin{align} x_i = \hat{x}_i, ~~ i \in \mathcal{I}_m. \end{align} Other feasibility constraints might concern actionable variables that cannot take certain values, such as \textit{age}, which can only be increased, or \textit{has\_phd}, which can only change from false to true. These conditions can be added in the same way as the immutability constraints.
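As an illustration, the sketch below (again with \texttt{gurobipy}; the factual instance, classifier coefficients, and bound values are hypothetical) assembles the validity, sparsity, and actionability constraints for a linear classifier $h(\bm{x}) = \operatorname{sign}(\bm{w}^\top\bm{x} + b)$, with the squared $\ell_2$-distance to $\hat{\bm{x}}$ as the proximity objective: \begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

n, M, K = 4, 100.0, 2                          # dimension, big-M, max changes
x_hat = np.array([1.0, 0.0, 35.0, 2.0])        # factual instance (hypothetical)
w, b = np.array([0.8, -1.2, 0.05, 0.4]), -2.0  # h(x) = sign(w.x + b)

m = gp.Model("ce_linear")
x = m.addVars(n, lb=-GRB.INFINITY, name="x")
z = m.addVars(n, vtype=GRB.BINARY, name="z")   # z_j = 1 iff feature j changes

# Validity: require the desired class via a small positive margin.
m.addConstr(gp.quicksum(w[j] * x[j] for j in range(n)) + b >= 1e-4)
# Sparsity: |x_j - x_hat_j| <= M z_j, and at most K changed features.
m.addConstrs(x[j] - x_hat[j] <= M * z[j] for j in range(n))
m.addConstrs(x_hat[j] - x[j] <= M * z[j] for j in range(n))
m.addConstr(gp.quicksum(z[j] for j in range(n)) <= K)
# Actionability: feature 1 immutable; feature 2 (say, age) may only increase.
m.addConstr(x[1] == x_hat[1])
m.addConstr(x[2] >= x_hat[2])
# Proximity: squared l2-distance to the factual instance.
m.setObjective(gp.quicksum((x[j] - x_hat[j]) * (x[j] - x_hat[j])
                           for j in range(n)), GRB.MINIMIZE)
m.optimize()
\end{verbatim} Coherence constraints (\ref{eqn:coherence}) would be added analogously, one equality per categorical feature.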
\begin{figure} \centering \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Figures/CE_without_Trust_Region.pdf} \end{subfigure}% \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Figures/CE_With_Trust_Region.pdf}\label{fig:sub2} \end{subfigure} \caption{The effect of the data manifold region on the generated CE. The left figure shows the factual instance and its closest counterfactual without closeness constraints. The right figure shows the same factual instance with the CE constrained to be within the data manifold region.} \label{fig:test} \end{figure} \textbf{Data manifold closeness.} One of the requirements to obtain plausible CEs is that they are close to the data manifold. For this purpose, we can make use of the \textit{trust region} constraints. \citet{maragno2021mixedinteger} define the trust region as the convex hull (CH) of $\mathcal{D}$ in the feature space, and they use it in OCL to prevent the trained model from extrapolating, thereby mitigating the deterioration in predictive performance for points that are farther away from the data points in $\mathcal{D}$. In CE generation, the trust region, or rather \textit{data manifold region}, serves the purpose of ensuring solutions in a high-density region. To this end, we can express a CE $\bm{\tilde{x}}$ as a convex combination of samples in $\mathcal{D}$, in particular samples belonging to the desired class $\tilde{y}$. Figure~\ref{fig:test} shows how the data manifold region, defined by the CH of the blue points, can drastically affect the CE and its plausibility. If the CH is too restrictive, we can use a relaxed formulation that enlarges the data manifold by including those solutions that lie in the $\epsilon$-ball around feasible solutions in the CH: \begin{align} \text{$\epsilon$-CH} = \bigg\{ \bm{x} \bigg| \sum_{i \in \mathcal{I}} \lambda_i \bm{\bar{x}}_i = \bm{x} + \bm{s}, \ \sum_{i \in \mathcal{I}} \lambda_i = 1, \ \bm{\lambda} \geq 0, \ ||\bm{s}||_{p} \leq \epsilon \bigg\}, \label{eqn:epstrustregionconstr} \end{align} where $\lambda_i \in [0,1]$ and $\bm{s} \in \mathbb{R}^n$ are auxiliary variables, $\epsilon \geq 0$ is a hyperparameter, and $\mathcal{I}$ denotes the indices corresponding to the subset of samples in $\mathcal{D}$ belonging to the desired class $\tilde{y}$. When $\epsilon=0$, we obtain the trust region as discussed in \citet{maragno2021mixedinteger}, whereas $\epsilon > 0$ leads to a less restrictive set of conditions. This also addresses the criticism by \citet{Balestriero2021}: “[...] interpolation\footnote{Interpolation occurs for a sample $\bm{x}$ whenever this sample belongs to the CH of a set of data points.} almost surely never occurs in high-dimensional spaces $(> 100)$ regardless of the underlying intrinsic dimension of the data manifold.” Aside from the bound on the norm of $\bm{s}$, all constraints in \eqref{eqn:epstrustregionconstr} are linear. Fortunately, the most common norms used to constrain $\bm{s}$ are the $\ell_1$-, $\ell_2$-, and $\ell_\infty$-norms. These norms lead to convex conic constraints that can be handled easily by off-the-shelf optimization solvers. The use of a data manifold region (with a sufficiently small $\epsilon$) has an interesting impact on CE coherence, because constraints (\ref{eqn:coherence}) become redundant. To exemplify how data manifold constraints guarantee coherence, we consider a set of samples represented by the set of indices $\mathcal{I}$, and a categorical feature \textit{diet} that can assume only three values: \textit{vegan}, \textit{vegetarian}, or \textit{omnivore}. We use one-hot encoding to replace the feature \textit{diet} and describe a CE with the dummy (binary) variables $x_{vegan}$, $x_{vegetarian}$, $x_{omnivore}$. From (\ref{eqn:epstrustregionconstr}), we have \begin{align*} x_j = \sum_{i\in \mathcal{I}} \lambda_i\bar{x}_{i,j}, ~~ j\in \{vegan, vegetarian, omnivore\}, \end{align*} with $\sum_{i\in\mathcal{I}} \lambda_i = 1$. One of the dummy variables, say $x_{vegan}$, can assume value 1 only if it is a convex combination of data points $\bar{\bm{x}}_i$ with $\bar{x}_{i,vegan} = 1$ and $\bar{x}_{i,vegetarian} = \bar{x}_{i,omnivore} = 0$. Thus, $\lambda_i > 0$ only when $\bar{x}_{i,vegan}=1$, and consequently, we obtain $x_{vegetarian} = x_{omnivore} = 0$.
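A sketch of constraints (\ref{eqn:epstrustregionconstr}) follows, using hypothetical data and the $\ell_\infty$-norm, for which the slack bound reduces to simple variable bounds: \begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

X_bar = np.array([[1.0, 0.0, 30.0],           # samples of the desired class
                  [0.0, 1.0, 42.0],           # (hypothetical data)
                  [1.0, 0.0, 37.0]])
n_samples, n = X_bar.shape
eps = 0.5

m = gp.Model("eps_ch")
x = m.addVars(n, lb=-GRB.INFINITY, name="x")
lam = m.addVars(n_samples, lb=0.0, ub=1.0, name="lam")
s = m.addVars(n, lb=-eps, ub=eps, name="s")   # ||s||_inf <= eps via bounds

# Convex-combination constraints: sum_i lam_i x_bar_i = x + s, sum_i lam_i = 1.
m.addConstrs(gp.quicksum(lam[i] * X_bar[i, j] for i in range(n_samples))
             == x[j] + s[j] for j in range(n))
m.addConstr(gp.quicksum(lam[i] for i in range(n_samples)) == 1)
# ... the proximity objective and remaining CE constraints are added as above ...
\end{verbatim}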
The effectiveness of the data manifold region might be hampered by the fact that the CH includes low-density regions. In this case, \citet{maragno2021mixedinteger} advocate a two-step approach: first, clustering is used to identify distinct high-density regions, and then the data manifold region is represented as the union of the (enlarged) convex hulls of the individual clusters. \textbf{Causality.} CEs might be inefficient or unrealistic when causal relations are not considered in the generation process. Both situations are exemplified in \citet{Karimi.2020}, where the authors show the importance of causal relations to obtain CEs that better answer the question “what \textit{should be done} in the future considering the laws governing the world.” When a causal model is available, we can formulate the causal relations among variables as extra constraints of the optimization model. Applying the Abduction-Action-Prediction steps \citep{Pearl.2013}, \citet{Karimi.2020} define the endogenous variables (with indices in the set $\mathcal{E}$) as \begin{align} x_i = \hat{x}_i + c_i(\bm{p}_i) - c_i(\hat{\bm{p}}_i), ~~ i \in \mathcal{E}, \end{align} where $c_i(\bm{p}_i)$ is a function of the parents of $x_i$, namely the predecessors of feature $i$ in the SCM. Both $\hat{x}_i$ and $c_i(\hat{\bm{p}}_i)$ are known before the optimization and are therefore treated as parameters. When no explicit formulation of $c_i(\cdot)$ is available, we are in a constraint learning scenario where an ML model can be trained and embedded into the optimization as $c_i = h_i(\bm{p}_i)$ for all $i \in \mathcal{E}$. \textbf{Diversity.} Most methods in the literature require multiple runs and extra constraints to generate diverse CEs for the same input. Following an iterative approach, we can generate diverse CEs using constraints on the actionability of features \citep{Russell.2019}, or constraints on the distance between the subsequent CE and all previously generated ones \citep{Karimi.2019fy}. Again in an iterative way, we can also use the data manifold constraints to generate diverse CEs (i) by finding one CE for each clustered CH, or (ii) by enlarging the CH with increasing $\epsilon$ whenever the data manifold constraints are active. The use of diversity constraints offers great flexibility at the expense of computation time. As an alternative, we propose to solve one single optimization model and use the pool of \textit{incumbent solutions} as the set of CEs. In mixed-integer optimization, solvers like Gurobi or CPLEX allow retrieving the sub-optimal solutions found during the tree search procedure \citep{gurobi, cplex2009v12}. In this way, collecting a set of CEs comes at no additional cost in terms of computation time.
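As a minimal, self-contained illustration of this mechanism, the sketch below uses Gurobi's solution-pool parameters on a toy stand-in model (CPLEX offers an analogous pool interface); in a CE model, each retrieved pool entry would be one candidate counterfactual: \begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("pool_demo")                 # toy stand-in for a CE model
x = m.addVars(3, vtype=GRB.BINARY, name="x")
m.addConstr(x[0] + x[1] + x[2] >= 1)
m.setObjective(x[0] + 2 * x[1] + 3 * x[2], GRB.MINIMIZE)

m.Params.PoolSearchMode = 2               # systematically look for extra solutions
m.Params.PoolSolutions = 5                # retain up to five pool solutions
m.optimize()

pool = []
for k in range(m.SolCount):
    m.Params.SolutionNumber = k           # select the k-th pool solution
    pool.append(([x[j].Xn for j in range(3)], m.PoolObjVal))
# Each pool entry is one (possibly sub-optimal) incumbent solution.
\end{verbatim}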
\section{Computational Study}\label{sec:demonstration} In this section, we demonstrate the effectiveness of our OCL-based approach through empirical experiments on multiple datasets. The experiments are executed using \texttt{OptiCL}\footnote{\url{https://github.com/hwiberg/OptiCL}, under the MIT license.} \citep{maragno2021mixedinteger}, an open-source Python package for optimization with constraint learning. \texttt{OptiCL} was originally designed to help practitioners model an optimization problem whose constraints are partially unknown but can be learned by ML models. However, as detailed in Section~\ref{sec:generate}, the problem of generating CEs directly relates to an OCL problem. \texttt{OptiCL} currently supports several MIO-representable predictive models, including logistic regression (lr), support vector machines (svm), (optimal) decision trees (cart), random forests (rf), gradient boosting machines (gbm), and neural networks with ReLU activation functions (mlp). Moreover, \texttt{OptiCL} allows for trust region constraints as defined in (\ref{eqn:epstrustregionconstr}). Whenever a causal model is available but the relations are not explicit, \texttt{OptiCL} allows representing the relation using one of the MIO-representable ML models. The open-source implementation for reproducing all our results is available at \url{https://github.com/tabearoeber/CE-OCL}. \subsection{Case study: German Credit Data}\label{sec:case_study} We demonstrate the generation of CEs on the Statlog (German Credit Data) dataset \citep[][]{Dua:2019}, which is one of the standard datasets in the CE literature\footnote{We also provide another demonstration on the Statlog (Heart) dataset \citep[][]{Dua:2019} in Appendix~\ref{app:heart}.}. The German Credit dataset classifies people, described by a set of 20 features, as good or bad credit risks; see Table \ref{tab:description} in Appendix~\ref{app:german} for an overview of the features. For this demonstration, we gradually add constraints to the model and present the generated CEs at each step in Table~\ref{tab:display_demo}. The table is divided into six parts (A--F), each showing the set of CEs generated; a dash is used to represent no change in the corresponding feature. The following mathematical model is used to generate the CEs and contains all the constraints -- criteria -- presented in Section~\ref{sec:generate}: \allowdisplaybreaks \begin{subequations} \begin{align} \underset{{\bm{x}, \bm{s}\in \mathbb{R}^n,\, \bm{z}\in \{0,1\}^n,\, \bm{\lambda} \in \mathbb{R}_{\geq0}^{|\mathcal{I}|}}}{\mbox{minimize\hspace{4mm} }} \ & \ell_2(\bm{x}, \bm{\hat{x}}) + \alpha \sum_{i=1}^{n} z_i + \beta \ell_1(\bm{s}, \bm{0}) & \owntag[eq:final0]{proximity, sparsity, and closeness}\\ \mbox{subject to\hspace{4mm}} \ & h(\bm{x}) = 1, \owntag[eq:final1]{validity}\\ & |\bm{x} - \bm{\hat{x}}| \leq M \bm{z},\owntag[eq:final2]{sparsity}\\ & \sum_{i \in \mathcal{I}} \lambda_i \bm{\bar{x}}_i = \bm{x} + \bm{s}, & \owntag[eq:final3]{data manifold closeness}\\ & \sum_{i \in \mathcal{I}} \lambda_i = 1, &\owntag[eq:final4]{data manifold closeness}\\ & x_i \geq 0, ~~ i \in \{F1, F2, F3, F6, F7\}, \owntag[eq:final5]{actionability}\\ & x_i \geq \hat{x}_i, ~~ i \in \{F4, F5\},\owntag[eq:final6]{actionability}\\ & x_i = \hat{x}_i, ~~ i \in \{F11, F17, F19\},\owntag[eq:final7]{immutability}\\ & x_{F10} \in \mathcal{C}_{F10},\owntag[eq:final8]{conditional immutability}\\ & x_{F1} = \hat{x}_{F1} + h_{causality}(x_{F2}) - h_{causality}(\hat{x}_{F2}), \owntag[eq:final9]{causality}\\ & \bm{x} \in \mathcal{L}. \owntag[eq:final10]{Domain (real, integer, binary)} \end{align} \end{subequations} In Table~\ref{tab:evaluation_demo}, we present the evaluation of these CEs using several evaluation metrics proposed by \citet{Mothilal.2020}. Validity, sparsity, categorical proximity, categorical diversity, and sparsity-based diversity range in the interval $[0,1]$, where 0 and 1 represent the worst and the best scores ($\uparrow^1_0$), respectively. Continuous diversity is a positive number, and the higher it is, the better ($\uparrow_0^+$). Continuous proximity is a negative number, and the closer it is to 0, the better ($\uparrow^0_{-}$).
\input{tables/table_demonstration} We fit several ML models to the data, all of which performed similarly well. For demonstration purposes, we have chosen a linear support vector machine. The factual instance $\bm{\hat{x}}$ used for this case study is reported in Table \ref{tab:display_demo}. We start the demonstration considering only validity, proximity, and coherence (Part A), using the $\ell_2$-norm as the distance function. The optimal solution suggests several changes to the factual instance and is not actionable in practice due to the negative value for F2 (credit amount). To induce sparsity (Part B), we use auxiliary variables to keep track of the number of features changed and penalize them in the objective function. Multiple and diverse CEs are generated using incumbent solutions (Part C). To ensure that the set of generated CEs is valuable in practice, we add actionability constraints (Part D), such that certain variables are restricted to be positive, or equal to or larger than the value in the factual instance. Other variables, such as F11 (foreign worker), F17 (personal status), and F19 (credit purpose), are fixed to be equal to the corresponding values in $\bm{\hat{x}}$. We also consider conditionally immutable features like F10, a categorical variable representing the years of employment at the current job; for a factual instance with value \textit{1$\leq$X$<$4} in F10, a CE should not take on the values \textit{unemployed} or \textit{$<$1}, but only the same value or categories ranked higher (\textit{4$\leq$X$<$7} or \textit{$\geq7$}). Respecting these constraints, the set of generated CEs seems more realistic; however, the CEs may still not be attainable in practice. Specifically, if we consider solution (c) of Part D, the only suggested change concerns F4 (age). However, this CE is unlikely to represent a realistic data point, considering that all other feature values remain unchanged. In other words, CEs that do not resemble the training data come with the risk of being unattainable in practice. To address this, we use the idea of a \textit{data manifold region}, as detailed in Section \ref{sec:generate}. As a result, in Part E, we obtain a more realistic set of CEs, although at the expense of sparsity and (categorical) proximity (see the scores reported in Table~\ref{tab:evaluation_demo}). From a qualitative point of view, the three CEs show a more sensible combination of feature values compared to those in Part D. Finally, we can leverage the partial SCM provided by \citet{Karimi.2020} for this dataset, which shows that F1 (duration) is causally related to F2 (credit amount). In this respect, several MIO-representable predictive models are trained through \texttt{OptiCL}, and the multi-layer perceptron (MLP) achieves the best performance in terms of mean squared error under 5-fold cross-validation. While solution (c) of Part E suggests a counterintuitive increase in the credit amount alongside a halved duration, the fitted MLP learns the (more intuitive) positive correlation between these two features. In Part F, we display the set of CEs that also satisfy the learned causality constraints. \subsection{Comparison against other methods}\label{sec:comparison} In this section, we compare \texttt{CE-OCL} to four open-source tools for generating CEs: Growing Spheres \cite{Laugel2017}, \texttt{FACE} \cite{Poyiadzi2020}, Actionable Recourse \cite{Ustun.2019}, and \texttt{DiCE} \cite{Mothilal.2020}.
The experiments are performed using \texttt{CARLA} \cite{pawelczyk2021carla}, a Python library to benchmark counterfactual explanation and recourse models. The predictive model used in the experiments is a random forest, and the evaluation is performed by generating a counterfactual for 30 different factual instances on four datasets available in \texttt{CARLA}: Adult, Give Me Some Credit, COMPAS, and HELOC. We average the results for the evaluation metrics proposed by \citet{Mothilal.2020} and present them together with the standard error (s.e.) in Table~\ref{tab:carla_comparison}. While \texttt{CE-OCL} can deal with causality and closeness constraints, this does not apply to \texttt{DiCE}, which uses a post-hoc filtering approach to remove unrealistic CEs. Besides lacking causality and closeness constraints, Actionable Recourse and Growing Spheres cannot generate more than one counterfactual per instance. \texttt{FACE} does not support diversity and causality constraints, but it is able to generate CEs close to the data manifold region. Therefore, we report in Table~\ref{tab:carla_comparison} both the results obtained with \texttt{CE-OCL} including validity, proximity, coherence, sparsity, and immutability constraints, and the results obtained when also including the closeness constraints, \texttt{CE-OCL}\_tr. The results show that, across all datasets, both \texttt{CE-OCL} and \texttt{CE-OCL}\_tr exhibit better performance in terms of validity, categorical proximity, and sparsity. Actionable Recourse and \texttt{CE-OCL}/\texttt{CE-OCL}\_tr perform equally well in terms of continuous proximity. \input{tables/tab_carla} We performed a more thorough comparison between \texttt{CE-OCL} and \texttt{DiCE} on the same four datasets, but this time generating three CEs for each instance and using all the predictive models supported by both \texttt{OptiCL} and \texttt{DiCE}, that is, the random forest (rf). In Table~\ref{tab:ce-ocl_vs_dice}, we report the results obtained with \texttt{CE-OCL} including validity, proximity, coherence, sparsity, diversity, and actionability constraints, together with the results obtained when also considering data manifold closeness (\texttt{CE-OCL}\_tr). The results clearly show that \texttt{CE-OCL} outperforms \texttt{DiCE} in terms of validity, categorical proximity, continuous proximity, and sparsity. While both methods have a categorical diversity score very close to zero in every scenario, \texttt{DiCE} generally performs better in terms of continuous diversity. Similarly, \texttt{DiCE} has a better sparsity-based diversity score, with the exception of the COMPAS dataset. The addition of closeness constraints (\texttt{CE-OCL}\_tr) has a negative effect on the sparsity and proximity scores, but it positively affects the diversity scores when compared to \texttt{CE-OCL}. This was to be expected, as the data manifold region forces solutions to be located in a high-density region, which might lead to optimal solutions with more feature changes. While sparsity decreases, this loss is offset by the potential for more valuable counterfactuals. \input{tables/dice_comparison} \clearpage \section{Discussion}\label{sec:conclusion} With this work, we propose \texttt{CE-OCL}, a generic approach for interpretability that generates sensible and practical counterfactual explanations. In Section~\ref{sec:generate}, we have described how each criterion that makes a good CE can be represented mathematically by a set of constraints.
We have also introduced a new definition of the data manifold region based on the (enlarged) CH of the data points. The main advantage of \texttt{CE-OCL} is its flexibility and modularity, which allow the user to generate CEs according to the preferred desiderata, as shown in Section~\ref{sec:case_study}. Through the experiments in Section~\ref{sec:comparison}, we have confirmed the effectiveness of \texttt{CE-OCL} by comparing it with a recent and comprehensive tool, \texttt{DiCE}. Although \texttt{CE-OCL} performs relatively well in terms of diversity, we acknowledge the limitations of using incumbent solutions as multiple counterfactuals, caused by the lack of control over the solutions' diversity. Whenever we have specific diversity requirements to meet, the iterative approaches proposed by \citet{Russell.2019} and \citet{Karimi.2019fy} may be better suited. Moreover, owing to the MIO structure of \texttt{CE-OCL} and the various constraints used to satisfy the established criteria, the feasible space may shrink to the point of being empty, making the optimization problem infeasible. In case of infeasibility, we recommend following an approach similar to that presented in Section~\ref{sec:case_study}, where constraints are added one at a time. Infeasibility due to the data manifold constraints can be mitigated by enlarging the data manifold region at the (potential) expense of the sensibility of the CEs. For future research, we plan to investigate the effect of clustering and enlargement of the data manifold region on CE quality and diversity. We also intend to extend \texttt{CE-OCL} with additional criteria, such as robustness, in the sense that the generated CEs are not point solutions but are defined by ranges of feature values. \section*{Acknowledgments}{This work was supported by the Dutch Scientific Council (NWO) grant OCENW.GROOT.2019.015, Optimization for and with Machine Learning (OPTIMAL).}
CEs carry the underlying assumption of what would have been necessary in the past (retrodiction) directly translates to what should be done in the future (recommendation). This link between retrodiction and recommendation makes them very attractive for their use in practice. Besides, CEs imitate how we provide explanations in everyday life. It has been established that we do not seek to explain the cause of an event \textit{per se}, but \textit{relative} to some other event that did not occur. Typically, we have a factual instance vector $\hat{\sibm{x}}$ for which the (prediction) outcome $\hat{y}$ relative to some other, desired, outcome $\tilde{y}$ should be explained. The key idea for generating a CE is to find a data point $\bm{\tilde{x}}$ close to the factual instance $\hat{\bm{x}}$, such that the prediction outcome for $\bm{\tilde{x}}$ is $\tilde{y}$. The difference in the features constitutes the explanation. As CEs do not try to explain all possible causes of an event but focus on necessary changes to the environment to reach a certain state, they tend to be simpler, and with that, also easier to understand than those methods which communicate explanations based on the entire feature space \citep[][]{miller2018}. Due to these desirable characteristics of CEs, several approaches for the generation of these have been proposed in recent years \citep[\textit{e.g.}][]{Wachter.2018,Russell.2019, Mothilal.2020, Kanamori.2020}. A set of criteria for (good) CEs has been identified in the literature \citep[][]{Verma.2020}, but to the best of our knowledge this is the first work that addresses all of these in a combined setting. We propose \texttt{CE-OCL}, a generic and flexible approach to generating CEs based on optimization with constraint learning (OCL). OCL is a new and fast-growing research field whose aim is to learn parts of an optimization model (\textit{e.g.}, constraints or objective function) using ML models whenever explicit formulae are not available (see \citet{fajemisin2021optimization} for a recent survey on OCL). The core contribution of this work lies in bridging together two, apparently, different fields that have many underlying similarities. We show that the criteria proposed in the literature can be addressed by an OCL framework. We propose a new modeling approach to ensure data manifold closeness and coherence which stems from the OCL concept of $\epsilon$-trust region. Finally, we propose exploiting incumbent solutions to obtain a set of diverse CEs in a single execution. With our extensive demonstration on standard datasets from the CE literature, we also set new benchmarks for future research. \section{Related work}\label{sec:related_work} \citet{Wachter.2018} are the first researchers who have proposed an optimization-based approach for generating CEs. Having a trained classifier $h(\cdot)$, the aim is to find at least one CE, say $\bm{\tilde{x}}$, which has the closest distance to the original factual instance $\hat{\bm{x}}$ such that $h(\bm{\tilde{x}})$ is equal to a different target $\tilde{y}$. Such a CE can be obtained by solving the following mathematical optimization model: \begin{align} \label{eqn:wachtermodel} \min_{\bm{x}} \max_{\lambda} \lambda (h(\bm{x}) - \tilde{y})^2 + d(\hat{\bm{x}}, \bm{x}), \end{align} where $d(\cdot,\cdot)$ is a distance function and $\lambda$ acts as a nonnegative balancing weight to ensure $h(\bm{x}) = \tilde{y}$. 
\citet{Wachter.2018} suggest the $\ell_1$-norm weighted by the inverse median absolute deviation for this distance function, but other alternatives have also been proposed in the literature, \textit{e.g.}, a combination of the Mahalanobis' distance and the local outlier factor \citep{Kanamori.2020}. In a recent review of the literature, \citet{Verma.2020} identify a set of criteria that generated CEs should adhere to both in theory and practice. These criteria are \textit{validity, proximity, sparsity, actionability, data manifold closeness}, and \textit{causality}. In addition, several works have highlighted the importance of algorithms being able to generate multiple, diverse explanations to provide the user with a set of actions to choose from \citep[\textit{e.g.}][]{Verma.2020, Wachter.2018, Russell.2019, Navas-Palencia.2021, Mothilal.2020}. We summarize these criteria in eight components: \textbf{Proximity:} The CE should be as close as possible to the factual instance $\hat{\bm{x}}$ with respect to feature values. \textbf{Validity:} The prediction for the CE $\bm{\tilde{x}}$ should be equal to $\tilde{y}$, with $\tilde{y} \neq \hat{y}$. \textbf{Coherence.} When one-hot encoding is used for categorical data, we should be able to map it back to the input feature space to obtain coherent explanations, \textit{i.e.}, only one of the dummy variables has to be equal to one, and the others to zero. \textbf{Sparsity:} The CE should differ from the factual instance in as few features as possible. \textbf{Actionability:} We can distinguish between immutable, mutable but not actionable, and actionable features. Immutable features are features that cannot be changed, such as a person's sex. Mutable but not actionable features are features that could theoretically be different, but unrealistic to change, such as marital status. The generated CE should exclude any changes to these features. \textbf{Data manifold closeness:} To ensure the generation of realistic and actionable explanations, the generated CEs should be close to the observed (training) data. \textbf{Causality:} Any (known) causal relationships in the data should be respected in the proposed CEs to further ensure realistic explanations. \textbf{Diversity:} Any algorithm for the generation of CEs should return a set of CEs which differ in at least one feature. To avoid failure modes due to using one-hot encoding for categorical variables, \citet{Russell.2019} implements a set of linear constraints combined with simple integer constraints for the indicator variables creating coherent explanations that map back to the original input space. Their second contribution is the generation of multiple CEs. \citet{Wachter.2018} highlight the importance of generating a set of CEs and propose using local minima as a source for multiple, diverse CEs. \citet{Russell.2019} points out that for linear classifiers the objective function proposed by \citet{Wachter.2018} in \eqref{eqn:wachtermodel} is convex in $\bm{x}$ for any choice of $\lambda$, and therefore only one minimum exists. As an alternative, \citet{Russell.2019} suggests adding constraints greedily by restricting the state of variables altered in previously generated CEs. \citet{Mothilal.2020} also focus on diversity and propose the algorithm \texttt{DiCE}, where a set of diverse CEs is generated based on determinantal point processes \citep{Kulesza_2012}. 
\citet{Wachter.2018} point out that the optimal solution may not capture all relevant user preferences, and hence advocate the generation of multiple and diverse CEs to ensure at least one attainable explanation. Other works address this issue by focusing on producing only realistic and actionable explanations. To this end, \citet{Ustun.2019} introduce the notion of immutable, conditionally immutable, and mutable features. Immutable features are those that cannot be changed, such as sex or ethnicity. Conditionally immutable features are features that may only take on certain values, depending on the factual instance and their current state for that feature. For example, the feature \textit{education} with the value \textit{bachelor\_degree} can only change to higher degrees of education, such as \textit{masters\_degree} or \textit{phd}, but never back to \textit{highschool}. Mutable features are those that are not restricted in the values they can take on. Other authors have extended this line of work and proposed a modified cost function to ensure that the solutions are close to the training data, and hence, the generated CEs become sensible. With their algorithm \texttt{DACE}, \citet{Kanamori.2020} attempt to optimize the idea of proximity using \textit{Mahalanobis’ distance} and the \textit{local outlier factor} to generate CEs close to the empirical distribution of the training data. They compare their results with the work of \citet{Ustun.2019} and \citet{Wachter.2018}, and demonstrate how their approach generates distribution-aware CEs. \citet{Poyiadzi2020}'s work is based on graph theory and uses the $f$-distance to quantify the trade-off between the path length and the density along this path. They apply the shortest path algorithm to minimize the distance, ensuring that the solution lies in a high-density region and with that is more attainable in practice. Several authors argue that the specification of a set of feasible actions, such as in \citet{Ustun.2019} and \citet{Kanamori.2020}, is insufficient in practice due to interaction and causal relationships among features; \textit{e.g.}, \citep{Mahajan.2019, Mothilal.2020, Kanamori.2020jn, Karimi.2020}. Given a directed acyclic graph, \citet{Kanamori.2020jn} show how to generate \textit{ordered} CEs, where the necessary actions are provided to the user in order. \citet{Mahajan.2019} propose a proximity loss that is based on constraints derived from a structural causal model (SCM) of the input features, instead of the standard objective function based on $\ell_1$- or $\ell_2$-norm. For example, if there is a known relation $f: (x_1, x_2) \rightarrow x_3$, the CE value for $x_3$ must depend on its \textit{parents} in the CE example (\textit{i.e.}, $x_1$ and $x_2$). Alternatively, if such function is unknown, \citet{Mahajan.2019} propose to learn feasibility constraints from users’ binary feedback on generated CEs. Similarly, \citet{Karimi.2020} work with SCMs to account for causal relations in the data. In a situation where the mapping between parent-child features is not known, one may train a linear regression model. \texttt{DiCE} \citep[][]{Mothilal.2020} addresses causality with a post-hoc filtering approach based on causal constraints that is applied to the generated CEs. While the discussed related work requires access to at least the gradients of the model and, in some cases, also the training data, there exist approaches which do not require either. 
For example, \citet{Laugel2017} propose a two-step heuristic-based approach, \textit{Growing Spheres}, where (i) points are generated in all directions around the factual instance, and (ii) once a point with the desired prediction outcome was found, this point is adjusted to induce sparsity while keeping the prediction outcome. Unlike most of the related work that focuses on a subset of the established criteria, we propose a generic approach that accommodates all of these in a combined framework. To guide the reader, we have mapped the characteristics of the discussed literature and highlighted our contributions in Table~\ref{tab:overview}. We refer to \citep[][]{Verma.2020, Guidotti2022} for a complete and extensive overview of recent works on counterfactual explanations. \input{tables/table_overview} \section{Generation of counterfactual explanations}\label{sec:generate} In an OCL framework, ML models are used to design constraint and objective functions of an optimization model when explicit expressions are unknown. First, the predictive model is trained on historical data and then it is embedded into the optimization model using decision variables as inputs \cite{Biggs2017RF, verwer2017auction, villarrubia2018artificial}. Although the interplay between optimization and ML has a different aim in OCL versus CE generation, the two frameworks have a similar structure which allows the mutual transferring of knowledge from one discipline to the other. In this regard, we show how the problem of generating CEs, given a fitted model $h(\cdot)$, a factual instance $\bm{\hat{x}}$, and the desired outcome $\tilde{y}$, can be seen as a special case of \textit{optimization with constraint learning}. We first introduce a generic OCL model, and then we describe its relation to the problem of generating CEs that meet the aforementioned criteria. In an OCL setting, a dataset $\mathcal{D} = \{(\bm{\bar{x}_i}, \bar{y}_i)\}_{i=1}^N$ with observed feature vector $\bm{\bar{x}_i}$ and outcome of interest $\bar{y}_i$ for sample $i$, is used to train predictive models that are to be constrained or optimized in a larger optimization problem. An OCL model is typically presented as \begin{subequations} \begin{align} \underset{{\bm{x}\in \mathbb{R}^n,y\in \mathbb{R}}}{\mbox{minimize\hspace{4mm} }} \ & f(\bm{x}, y) & \label{eqn:conceptualmodelCL1}\\ \mbox{subject to\hspace{4mm}} \ & \bm{g}(\bm{x}, y) \leq \bm{0}, & \label{eqn:conceptualmodelCL2}\\ & y = h(\bm{x}), & \label{eqn:conceptualmodelCL3}\\ & \bm{x} \in \mathcal{X},& \label{eqn:conceptualmodelCL4} \end{align} \end{subequations} where $\bm{x} \in \mathbb{R}^n$ is the decision vector with components $x_i \in \mathbb{R}$, $f(\cdot,\cdot):\mathbb{R}^{n+1} \mapsto \mathbb{R}$ and $\bm{g}(\cdot,\cdot):\mathbb{R}^{n+1} \mapsto \mathbb{R}^m$ are known functions possibly also depending on the predicted outcome $y$, and $h(\cdot):\mathbb{R}^{n} \mapsto \mathbb{R}$ represents the predictive model\footnote{To simplify our exposition, we include only one predictive model. However, a general OCL framework admits multiple learned constraints in the model.} trained on $\mathcal{D}$. The set $\mathcal{X}$ defines the trust region, \textit{i.e.}, the set of solutions for which we trust the embedded predictive models (see below for details). 
{\leftskip=0.5cm\relax \rightskip=0.5cm\relax The \textit{Palatable Diet Problem} \cite{maragno2021mixedinteger} is a conventional example of OCL, in which we seek to find a cost-minimizing diet that satisfies fixed nutrient requirements while also being sufficiently “palatable.” The objective (\ref{eqn:conceptualmodelCL1}) and the nutritional constraints (\ref{eqn:conceptualmodelCL2}) are functions of the decision variable $\bm{x}$ and are explicitly known, while the palatability constraints are not explicit but depend on the personal taste. Exploiting survey data on how people like different diets, an ML model $h(\bm{x})$ is trained and embedded into the model as a set of constraints (\ref{eqn:conceptualmodelCL3}). The palatability constraint (\ref{eqn:conceptualmodelCL2}) is represented as $y \geq \tau$, namely a diet is feasible if $y \in \mathbb{R}$ is greater than a chosen threshold ($\tau \in \mathbb{R}$). \par} Formulation (\ref{eqn:conceptualmodelCL1}-\ref{eqn:conceptualmodelCL4}) is quite general and encompasses a large body of work that includes CE generation. Now, we characterize the parallelism between the eight components listed in Section~\ref{sec:related_work} and the structure of the resulting OCL model. \textbf{Proximity.} By definition, a CE has to be in the proximity of the factual instance according to some user-defined distance function. To obtain a CE $\bm{\tilde{x}}$ in the proximity of $\bm{\hat{x}}$, we can write the objective function (\ref{eqn:conceptualmodelCL1}) as a distance function $d(\bm{x}, \hat{\bm{x}})$. In the literature, this function is represented by $\ell_1$-norm, $\ell_2$-norm, or as the Mahalanobis’ distance. \textbf{Validity.} While the trained model $h(\cdot)$ is used in constraint learning to define, completely or partially, the objective function and/or the constraints, in CE generation it is used to enforce the validity constraint. Constraint (\ref{eqn:conceptualmodelCL3}) is likely to be an encoding of the predictive model. In other words, embedding a trained ML model requires adding multiple constraints and auxiliary variables. When $h(\cdot)$ is a classification model, the CE validity is obtained by constraining the model prediction to be equal to the desired class $\tilde{y}$; that is, we set $y=\tilde{y}$. If $h(\cdot)$ is a regression model, the OCL framework still applies, and an inequality constraint can be used to enforce validity; \textit{e.g.}, $y\leq\tilde{y}-\delta$ or $y\geq\tilde{y}+\delta$ for some fixed $\delta \in \mathbb{R}_+$. \textbf{Coherence.} When one-hot encoding is used to deal with categorical features, we can use the constraints proposed by \citet{Russell.2019} to obtain coherent CEs. That is, we write for $k$ categorical features the following constraints: \begin{align} \sum_{j' \in \mathcal{C}_j} x_{j'} = 1, ~~ j = 1,\dots, k,\label{eqn:coherence} \end{align} where $\mathcal{C}_j$ is a set of indices referring to the dummy (binary) variables used to represent the categorical feature $j$. \textbf{Sparsity.} The sparsity can be handled by enforcing the following set of constraints: \begin{subequations} \begin{align} & \ |x_j - \hat{x}_j| \leq M z_j, ~~ j=1, \dots, n,\label{eqn:sparsity1}\\ & \ \sum_{i=1}^n z_i \leq K,\label{eqn:sparsity2} \end{align} \end{subequations} where $z_j \in \{0,1\}$, $j=1, \dots, n$ are auxiliary variables that are simply used to count the number of features in $\bm{x}$ that differ from $\hat{\bm{x}}$, and $K$ is an upper bound on the number of allowed changes. 
Alternatively, constraints (\ref{eqn:sparsity2}) can be relaxed and moved to the objective function with a scaling penalty factor $\alpha > 0$. That is, we obtain the new objective function $f(\bm{x}, y) + \alpha\sum^n_{i=1} z_i$. Though simpler, this relaxation does not guarantee to lead to an optimal solution with less than or equal to $K$ changes. \textbf{Actionability.} As a recommended CE should never change the immutable features, we can restrict the CE to be equal to the factual instance for all the immutable features. Suppose that the set of immutable features is represented by $\mathcal{I}_m$, then we simply add the following constraints: \begin{align} x_i = \hat{x}_i, ~~ i \in \mathcal{I}_m. \end{align} Other feasibility constraints might concern actionable variables that cannot take certain values, such as \textit{age}, which can only be increased, or \textit{has\_phd}, which can only change from false to true. These conditions can be added exactly like immutable features. \begin{figure} \centering \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Figures/CE_without_Trust_Region.pdf} \end{subfigure}% \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Figures/CE_With_Trust_Region.pdf}\label{fig:sub2} \end{subfigure} \caption{The effect of the data manifold region on the generated CE. The left figure shows the factual instance and its closest counterfactual without closeness constraints. The right figure shows the same factual instance with the CE constrained to be within the data manifold region.} \label{fig:test} \end{figure} \textbf{Data manifold closeness.} One of the requirements to obtain plausible CEs is that they are close to the data manifold. For this purpose, we can make use of the \textit{trust region} constraints. \citet{maragno2021mixedinteger} define the trust region as the convex hull (CH) of $\mathcal{D}$ in the features space, and they use it in OCL to prevent the trained model from extrapolating, therefore, mitigating the deterioration in predictive performance for points that are farther away from the data points in $\mathcal{D}$. In CE generation, the trust region, or rather \textit{data manifold region}, serves the purpose of ensuring solutions in a high-density region. To this end, we can also denote a CE ($\bm{\tilde{x}}$) as the convex combination of samples in $\mathcal{D}$, in particular samples belonging to the desired class ($\tilde{y}$). Figure~\ref{fig:test} shows how the data manifold region, defined by the CH of blue points, can drastically affect the CE and its plausibility. In case the CH is too restrictive, we can use a relaxed formulation to enlarge the data manifold by including those solutions that are in the $\epsilon$-ball surrounding some feasible solutions in the CH: \begin{align} \text{$\epsilon$-CH} = \bigg\{ \bm{x} \bigg| \sum_{i \in \mathcal{I}} \lambda_i \bm{\bar{x}}_i = \bm{x} + \bm{s}, \ \sum_{i \in \mathcal{I}} \lambda_i = 1, \ \bm{\lambda} \geq 0, \ ||\bm{s}||_{p} \leq \epsilon \bigg\}, \label{eqn:epstrustregionconstr} \end{align} where $\lambda_i \in [0,1]$ and $\bm{s} \in \mathbb{R}^n$ are auxiliary variables, $\epsilon \geq 0$ is a hyperparameter, and $\mathcal{I}$ denotes the indices corresponding to the subset of samples in $\mathcal{D}$ belonging to the desired class $\tilde{y}$. When $\epsilon=0$, we obtain the trust region as discussed in \citet{maragno2021mixedinteger}. However, $\epsilon > 0$ leads to a less restrictive set of conditions. 
This is also a solution to the criticism by \citet{Balestriero2021}: “[...] interpolation\footnote{Interpolation occurs for a sample $\bm{x}$ whenever this sample belongs to the CH of a set of data points.} almost surely never occurs in high-dimensional spaces $(> 100)$ regardless of the underlying intrinsic dimension of the data manifold.” Aside from the bound on the norm of $\sibm{s}$, all constraints in \eqref{eqn:epstrustregionconstr} are linear. Fortunately, the most common norms used to constraint $\bm{s}$ are $\ell_1$-, $\ell_2$-, or $\ell_\infty$-norm. These norms lead to convex conic constraints that can be handled easily with off-the-shelf optimization solvers. The use of a data manifold region (with a sufficiently small $\epsilon$) has an interesting impact on CE coherence because constraints (\ref{eqn:coherence}) become redundant. To exemplify how data manifold constraints guarantee coherence, we consider a set of samples represented by the set of indices $\mathcal{I}$, and a categorical feature \textit{diet} that can assume only three values: \textit{vegan}, \textit{vegetarian}, or \textit{omnivore}. We use one-hot encoding to replace the feature \textit{diet} and describe a CE with the dummy (binary) variables $x_{vegan}$, $x_{vegetarian}$, $x_{omnivore}$. From (\ref{eqn:epstrustregionconstr}), we have \begin{align*} x_j = \sum_{i\in \mathcal{I}} \lambda_i\bar{x}_{i,j}, ~~ j\in \{vegan, vegetarian, omnivore\}, \end{align*} with $\sum_{i\in\mathcal{I}} \lambda_i = 1$. One of the dummy variables, say $x_{vegan}$, can assume value 1 only if it is the convex combination of data points $\bar{\sibm{x}}_i$ with $\bar{x}_{i,vegan} = 1$ and $\bar{x}_{i,vegeterian} = \bar{x}_{i,omnivore} = 0$. Thus, $\lambda_i > 0$ only when $\bar{x}_{i,vegan}=1$, and consequently, we obtain $x_{vegetarian} = x_{omnivore} = 0$. The effectiveness of the data manifold region might be hampered by the fact that the CH includes low-density regions. In this case, \citet{maragno2021mixedinteger} advocate a two-step approach: first, clustering is used to identify distinct high-density regions, and then, the data manifold region is represented as the union of the (enlarged) convex hulls of the individual clusters. \textbf{Causality.} CEs might be inefficient or unrealistic when causal relations are not considered in the generation process. Both these situations are exemplified in \citet{Karimi.2020}, where the authors show the importance of causal relations to obtain CEs that better answer the question “what \textit{should be done} in the future considering the laws governing the world.” When a causal model is available, we can formulate the causal relations among variables as extra constraints of the optimization model. Applying the Abduction-Action-Prediction steps \citep{Pearl.2013}, \citet{Karimi.2020} define the endogenous variables (with indices in the set $\mathcal{E}$) as \begin{align} x_i = \hat{x}_i + c_i(\bm{p}_i) - c_i(\hat{\bm{p}}_i), ~~ i \in \mathcal{E}, \end{align} where $c_i(\bm{p}_i)$ is a function of the parents of $x_i$, namely the predecessors of the feature $i$ in the SCM. Both $\hat{x}_i$ and $c_i(\hat{\bm{p}}_i)$ are known before the optimization and therefore treated as parameters. When there is not an explicit formulation of $c_i(\cdot)$, we are in a constraint learning scenario where an ML model can be trained and embedded into the optimization as $c_i = h_i(\bm{p}_i)$ for all $i \in \mathcal{E}$. 
\textbf{Diversity.} Most of the methods for generating multiple and diverse CEs in the literature require multiple runs and extra constraints to generate diverse CEs for the same input. Following an iterative approach, we can generate diverse CEs using constraints on the actionability of features \citep{Russell.2019}, or constraints on the distance between the subsequent CE and all the previously generated ones \citep{Karimi.2019fy}. Again in an iterative way, we can also use the data manifold constraints to generate diverse CEs (i) by finding one CE for each clustered CH, (ii) by enlarging the CH with increasing $\epsilon$ whenever the data manifold constraints are active. The use of diversity constraints offers great flexibility at the expense of computation time. As an alternative, we propose to solve one single optimization model and use the pool of \textit{incumbent solutions} as the set of CEs. In mixed-integer optimization, solvers like Gurobi or CPLEX allow retrieving the sub-optimal solutions found during the tree search procedure \citep{gurobi, cplex2009v12}. In this way, collecting a set of CEs comes at no cost in terms of computation time. \section{Computational Study}\label{sec:demonstration} In this section, we demonstrate the effectiveness of OCL through empirical experiments on multiple datasets. The experiments are executed using \texttt{OptiCL}\footnote{\hyperlink{https://github.com/hwiberg/OptiCL}{https://github.com/hwiberg/OptiCL}, under the MIT license} \citep{maragno2021mixedinteger}, an open-source Python package for optimization with constraint learning. \texttt{OptiCL} has been originally designed to help practitioners in modeling an optimization problem whose constraints are partially unknown, but where ML models can be deployed to learn them. However, as detailed in Section~\ref{sec:generate}, the problem of generating CEs directly relates to an OCL problem. \texttt{OptiCL} currently supports several MIO-representable predictive models, including logistic regression (lr), support vector machines (svm), (optimal) decision trees (cart), random forests (rf), gradient boosting machines (gbm), and neural networks with ReLU activation functions (mlp). Moreover, \texttt{OptiCL} allows for trust region constraints as defined in (\ref{eqn:epstrustregionconstr}). Whenever a causal model is available but the relations are not explicit, \texttt{OptiCL} allows representing the relation using one of the MIO-representable ML models. The open-source implementation for reproducing all our results is available at \texttt{https://github.com/tabearoeber/CE-OCL}. \subsection{Case study: German Credit Data}\label{sec:case_study} We demonstrate the generation of CEs on the Statlog (German Credit Data) dataset \citep[][]{Dua:2019}, which is one of the standard datasets in the CE literature\footnote{We also provide another demonstration on the Statlog (Heart) dataset \citep[][]{Dua:2019} in Appendix~\ref{app:heart}}. The German Credit dataset classifies people described by a set of 20 features as good or bad credit risk, see Table \ref{tab:description} in Appendix~\ref{app:german} for an overview of the features. For this demonstration, we gradually add constraints to the model and present the generated CEs at each step in Table~\ref{tab:display_demo}. The table is divided into six parts (A-F), each showing the set of CEs generated, and a dash is used to represent no change to the corresponding features. 
The following mathematical model is used to generate CEs and contains all the constraints -- criteria -- presented in Section~\ref{sec:generate}: \allowdisplaybreaks \begin{subequations} \begin{align} \underset{{\bm{x}, \bm{z}, \bm{s}\in \mathbb{R}^n, \bm{\lambda} \in \mathbb{R}_{\geq0}^{|\mathcal{I}|}}}{\mbox{minimize\hspace{4mm} }} \ & \ell_2(\bm{x}, \bm{\hat{x}}) + \alpha \sum_i z_i + \beta \ell_1(\bm{s}, \bm{\tilde{s}}) & \owntag[eq:final0]{proximity, sparsity, and closeness}\\ \mbox{subject to\hspace{4mm}} \ & h(\bm{x}) = 1 \owntag[eq:final1]{validity}\\ & |\bm{x} - \bm{\hat{x}}| \leq M \bm{z},\owntag[eq:final2]{sparsity}\\ & \sum_{i \in \mathcal{I}} \lambda_i \bm{\bar{x}}_i = \bm{x} + \bm{s}, & \owntag[eq:final3]{data manifold closeness}\\ & \sum_{i \in \mathcal{I}} \lambda_i = 1, &\owntag[eq:final4]{data manifold closeness}\\ & x_i \geq 0, ~~ i \in \{F1, F2, F3, F6, F7\} \owntag[eq:final5]{actionability}\\ & x_i \geq \hat{x}_i, ~~ i \in \{F4, F5\}\owntag[eq:final6]{actionability}\\ & x_i = \hat{x}_i, ~~ i \in \{F11, F17, F19\}\owntag[eq:final7]{immutability}\\ & x_{F10} \in \mathcal{C}_{F10},\owntag[eq:final8]{conditional immutability}\\ & x_{F1} = \hat{x}_{F1} + h_{causality}(x_{F2}) - h_{causality}(\hat{x}_{F2}), \owntag[eq:final9]{causality}\\ & \bm{x} \in \mathcal{L}, \owntag[eq:final10]{domain (real, integer, binary)} \end{align} \end{subequations} In Table~\ref{tab:evaluation_demo}, we present the evaluation of these CEs using several evaluation metrics proposed by \citet{Mothilal.2020}. Validity, sparsity, categorical proximity, categorical diversity, and sparsity-based diversity range in the interval $[0,1]$, where 0 and 1 represent the worst and the best scores ($\uparrow^1_0$), respectively. Continuous diversity is a positive number, and the higher it is, the better ($\uparrow_0^+$). Continuous proximity is a negative number, and the closer it is to 0, the better ($\uparrow^0_{-}$). \input{tables/table_demonstration} We fit several ML models to the data, all of which performed similarly well. For demonstration purposes, we chose a linear support vector machine. The factual instance $\bm{\hat{x}}$ used for this case study is reported in Table \ref{tab:display_demo}. We start the demonstration considering only validity, proximity, and coherence (Part A), using the $\ell_2$-norm as the distance function. The optimal solution suggests several changes to the factual instance and is not actionable in practice due to the negative value for F2 (credit amount). To induce sparsity (Part B), we use auxiliary variables to keep track of the number of features changed and penalize them in the objective function. Multiple and diverse CEs are generated using incumbent solutions (Part C). To ensure that the set of generated CEs is valuable in practice, we add actionability constraints (Part D), such that certain variables are restricted to be positive, or to be equal to or larger than the value in the factual instance. Other variables, such as F11 (foreign worker), F17 (personal status), and F19 (credit purpose), are fixed to the corresponding value in $\bm{\hat{x}}$. We also consider conditionally immutable features like F10, a categorical variable representing the years of employment at the current job; for a factual instance with value \textit{1$\leq$X$<$4} in F10, a CE should not take on the values \textit{unemployed} or \textit{$<$1}, but only the same value or categories ranked higher (\textit{4$\leq$X$<$7} or \textit{$\geq7$}).
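As an aside, the conditional immutability constraint $x_{F10} \in \mathcal{C}_{F10}$ can be encoded directly over the one-hot dummies of F10. The following is a minimal sketch, again assuming the \texttt{gurobipy} model \texttt{m} from the earlier sketches and using hypothetical category labels:
\begin{verbatim}
# Ordered categories for F10 (years of employment at the current job).
categories = ["unemployed", "<1", "1<=X<4", "4<=X<7", ">=7"]
factual_level = categories.index("1<=X<4")  # level in the factual instance

# One binary dummy per category; the CE must select exactly one category,
# and only the factual category or one ranked higher is allowed.
x_F10 = m.addVars(len(categories), vtype=gp.GRB.BINARY, name="x_F10")
m.addConstr(x_F10.sum() == 1)
for c in range(factual_level):
    m.addConstr(x_F10[c] == 0)  # forbid lower-ranked categories
\end{verbatim}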
With these constraints in place, the generated CEs seem more realistic; however, they may still not be attainable in practice. Specifically, if we consider solution (c) of Part D, the only suggested change concerns F4 (age). However, this CE is unlikely to represent a realistic data point, considering that the other feature values remain unchanged. In other words, CEs that do not resemble the training data come with the risk of being unattainable in practice. To this end, we use the idea of a \textit{data manifold region}, as detailed in Section \ref{sec:generate}. As a result, in Part E, we obtain a more realistic set of CEs, although at the expense of sparsity and (categorical) proximity (see the scores reported in Table~\ref{tab:evaluation_demo}). From a qualitative point of view, the three CEs show a more sensible combination of feature values compared to those in Part D. Finally, we can leverage the partial SCM provided by \citet{Karimi.2020} for this dataset, which shows that F1 (duration) is causally related to F2 (credit amount). To this end, several MIO-representable predictive models were trained through \texttt{OptiCL}, and the multi-layer perceptron (MLP) achieved the best performance in terms of mean squared error under 5-fold cross-validation. While solution (c) of Part E suggests a counterintuitive increase in the credit amount combined with halving the duration, the fitted MLP learns the (more intuitive) positive correlation between these two features. In Part F, we display the set of CEs that also satisfy the learned causality constraints. \subsection{Comparison against other methods}\label{sec:comparison} In this section, we compare \texttt{CE-OCL} to four open-source tools for generating CEs: Growing Spheres \cite{Laugel2017}, \texttt{FACE} \cite{Poyiadzi2020}, Actionable Recourse \cite{Ustun.2019}, and DiCE \cite{Mothilal.2020}. The experiments are performed using \texttt{CARLA} \cite{pawelczyk2021carla}, a Python library to benchmark counterfactual explanation and recourse models. The predictive model used in the experiments is a random forest, and the evaluation is performed by generating a counterfactual for 30 different factual instances on four datasets available in \texttt{CARLA}: Adult, Give Me Some Credit, COMPAS, and HELOC. We average the results for the evaluation metrics proposed by \citet{Mothilal.2020} and present them together with the standard error (s.e.) in Table~\ref{tab:carla_comparison}. While \texttt{CE-OCL} can deal with causality and closeness constraints, this does not apply to \texttt{DiCE}, which uses a post-hoc filtering approach to remove unrealistic CEs. Actionable Recourse and Growing Spheres also lack support for causality and closeness constraints, and additionally cannot generate more than one counterfactual per instance. \texttt{FACE} does not support diversity and causality constraints, but is able to generate CEs close to the data manifold region. Therefore, we report in Table~\ref{tab:carla_comparison} both the results obtained with \texttt{CE-OCL} including validity, proximity, coherence, sparsity, and immutability constraints, and the results obtained when also including the closeness constraints (\texttt{CE-OCL}\_tr). The results show that, across all datasets, both \texttt{CE-OCL} and \texttt{CE-OCL}\_tr exhibit better performance in terms of validity, categorical proximity, and sparsity. Actionable Recourse and \texttt{CE-OCL}/\texttt{CE-OCL}\_tr perform equally well in terms of continuous proximity.
\input{tables/tab_carla} We performed a more thorough comparison between \texttt{CE-OCL} and \texttt{DiCE} on the same four datasets, this time generating three CEs for each instance and using all the predictive models supported by both \texttt{OptiCL} and \texttt{DiCE}, that is, random forests (rf). In Table~\ref{tab:ce-ocl_vs_dice}, we report the results obtained with \texttt{CE-OCL} including validity, proximity, coherence, sparsity, diversity, and actionability, together with the results obtained when also considering data manifold closeness (\texttt{CE-OCL}\_tr). The results clearly show that \texttt{CE-OCL} outperforms \texttt{DiCE} in terms of validity, categorical proximity, continuous proximity, and sparsity. While both methods have a categorical diversity score very close to zero in every scenario, \texttt{DiCE} generally performs better in terms of continuous diversity. Similarly, \texttt{DiCE} has a better sparsity-based diversity score, with the exception of the COMPAS dataset. The addition of closeness constraints (\texttt{CE-OCL}\_tr) has a negative effect on the sparsity and proximity scores, but positively affects the diversity scores when compared to \texttt{CE-OCL}. This was to be expected, as the data manifold region forces solutions to be located in a high-density region, which might lead to optimal solutions with more feature changes. While sparsity decreases, this loss is offset by the potential for more realistic, and hence more valuable, counterfactuals. \input{tables/dice_comparison} \clearpage \section{Discussion}\label{sec:conclusion} With this work, we propose \texttt{CE-OCL}, a generic optimization-based approach for generating sensible and practical counterfactual explanations. In Section~\ref{sec:generate}, we have described how a set of constraints mathematically represents each criterion that makes a good CE. We have also introduced a new definition of a data manifold region based on the (enlarged) CH of the data points. The main advantage of \texttt{CE-OCL} is the flexibility and modularity that allow the user to generate CEs according to the preferred desiderata, as shown in Section~\ref{sec:case_study}. Through the experiments in Section~\ref{sec:comparison}, we have confirmed the effectiveness of \texttt{CE-OCL} by comparing it with a recent and comprehensive tool, \texttt{DiCE}. Although \texttt{CE-OCL} performs relatively well in terms of diversity, we acknowledge the limitations of using incumbent solutions as multiple counterfactuals, caused by the lack of control over the solutions' diversity. Whenever specific diversity requirements have to be met, the iterative approaches proposed by \citet{Russell.2019} and \citet{Karimi.2019fy} may be best suited. Moreover, owing to the MIO structure of \texttt{CE-OCL} and the various constraints used to satisfy the established criteria, the feasible space may shrink to the point of being empty, making the optimization problem infeasible. In case of infeasibility, we recommend following an approach similar to that presented in Section~\ref{sec:case_study}, where constraints are added one at a time. Infeasibility problems due to data manifold constraints can be mitigated by enlarging the data manifold region at the (potential) expense of the sensibility of the CEs. For future research, we plan to investigate the effect of clustering and enlargement of the data manifold region on CE quality and diversity.
We also intend to extend \texttt{CE-OCL} with additional criteria, such as robustness, in the sense that the generated CEs are not point solutions but are instead defined by ranges of feature values. \section*{Acknowledgments}{This work was supported by the Dutch Scientific Council (NWO) grant OCENW.GROOT.2019.015, Optimization for and with Machine Learning (OPTIMAL).} \newpage
{ "timestamp": "2022-09-23T02:15:14", "yymm": "2209", "arxiv_id": "2209.10997", "language": "en", "url": "https://arxiv.org/abs/2209.10997" }
\section{Introduction} The resolution of scanned interferometric spectroscopy is usually limited by the maximum optical path difference (MOPD) of the interferometer. As part of our development of mass-correlated rotational alignment spectroscopy (CRASY),\cite{Schroter2011} we constructed an interferometer with an effectively infinite MOPD,\cite{Schroter2018} utilizing the discrete pulse train emitted from a femtosecond laser oscillator. Here we present the highest resolution data measured to date and discuss the resolution-limiting factors in our molecular beam experiments. The concepts discussed are equally applicable to other types of interferometric or time-domain spectroscopy that rely on the scanning of a spatial path difference or a temporal delay range. CRASY is a type of rotational coherence spectroscopy (RCS)\cite{Frey2011} and measures rotational Raman spectra in the time domain by scanning the path difference between two interferometer arms. The resulting data is useful for the structural characterization of non-dipolar molecules\cite{Schroter2011,Schroter2015,Heo2022a,Heo2022b} that are inaccessible to Fourier-transform microwave spectroscopy (FTMW).\cite{Grabow2011} By extending the scanned interferometer length, our work obtained order-of-magnitude improved resolution\cite{Schroter2018,Lee2019} as compared to preceding RCS experiments or Fourier-transform infrared spectroscopy (FTIR).\cite{Albert2011} The energy resolution ($\Delta E$) of spectroscopic experiments is fundamentally limited by the observation time $\Delta t$, i.e., the time over which the investigating particles (usually photons) and investigated molecules interact. The time-frequency formulation of Heisenberg's uncertainty principle states this fundamental resolution limit as $\Delta E \cdot \Delta t \geq \hbar/2$. Independent of the experiment, the effective observation time is always limited by the coherence time (or lifetime) of the observed quantum states. This leads to lifetime broadening of observed spectral lines, either due to an intrinsically limited lifetime of the observed states or due to interactions with an environment, e.g., through molecular collisions. When lifetime broadening is small, the effective resolution will be limited by the constraints of the spectroscopic experiment. In frequency-domain spectroscopy, the coherence length of the interacting photons limits the effective observation time.
The situation is fundamentally different in interferometric Fourier-transform spectroscopy, where the observation time is limited by the MOPD that can be achieved within the interferometer. The non-apodized full-width-at-half-maximum (FWHM) resolution limit in FTIR is given as $\Delta \widetilde{\nu}^\mathrm{FWHM} = 0.61 \cdot d^{-1}_\mathrm{MOPD}$.\cite{Albert2011} The highest-resolution interferometer described in the literature features a MOPD of 11.7~m,\cite{Albert2018} which corresponds to a delay range of $t_\mathrm{max} = 39$ ns and a non-apodized resolution limit of 15.6 MHz FWHM. RCS is based on the impulsive excitation and probing of rotational coherence with ultrafast (femtosecond or picosecond) laser pulses. In our CRASY variant of RCS, the rotationally excited molecules are probed by resonant multi-photon photoionization and rotational coherence is observed as interferometric signal modulation of the resulting ion signals. CRASY therefore correlates rotational spectra with observed ion masses and thereby facilitates the assignment of signals in heterogeneous samples. As in FTIR and other scanned interferometric spectroscopies, RCS experiments scan the optical path difference in an interferometer and show a resolution limited by the MOPD. \section{Infinite Interferometer Design for CRASY} To obtain mass-CRASY data, ion signals were detected in a time-of-flight mass spectrometer as a function of the scanned delay between alignment and ionization laser pulses. The experimental details for mass-CRASY measurements were described previously \cite{Schroter2015,Schroter2018,Lee2019,Ozer2020,Lee2021} and here we focus on the interferometer design used to scan extended optical path differences. Focused alignment pulses with 800 nm wavelength, $\leq$2 ps pulse duration, and 100 $\mu$J-level pulse energy created a coherent rotational wavepacket by impulsive Raman excitation. Alignment, in this context, denotes the transient molecular alignment that is commonly observed upon excitation of a coherent wavepacket.\cite{Stapelfeldt2004} Ionization pulses with 200 nm or 266 nm wavelength, 45 fs pulse duration, and few-$\mu$J pulse energy photoionized molecules by two-photon resonant photoionization. Ion signals showed temporal signal modulations due to the interference of the coherent rotational states in the probe step, as depicted in Fig.\ \ref{CRASY_data_example} (A) and (B). Fourier transformation of these signal modulations reveals the spectrum of the coherent wavepacket, as shown in Fig.\ \ref{CRASY_data_example} (C). \begin{figure}[ht] \includegraphics[width=240pt]{Fig1_CRASY_data_example.png} \caption{Mass-CRASY data from a 50 ns delay scan of a sample containing benzene (mass 78 u, blue), perdeuterated benzene (mass 84 u, green), carbon disulfide (mass 76 u, orange), and naturally occurring heavy isotopologues (darker colors). (A) Delay dependent ion signals show significant signal modulation due to interference of coherently excited rotational states. (B) A section of the signal modulation trace for mass 84 u. (C) Rotational Raman spectrum obtained by Fourier transformation of the trace shown in (B).} \label{CRASY_data_example} \end{figure} The interferometer for high-resolution spectroscopy should have the longest possible MOPD, combined with a small step size and high positioning accuracy.
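As a quick numeric check of the relation above, the following sketch (plain Python, with values taken from the text) reproduces the 39 ns delay range and 15.6 MHz resolution limit of the 11.7 m MOPD interferometer:
\begin{verbatim}
# Non-apodized FWHM resolution limit: dnu = 0.61 / t_max, t_max = d_MOPD / c.
c = 299_792_458.0            # speed of light in m/s

d_mopd = 11.7                # m, largest published FTIR interferometer
t_max = d_mopd / c           # ~3.9e-8 s = 39 ns delay range
dnu_fwhm = 0.61 / t_max      # ~1.56e7 Hz = 15.6 MHz

print(f"t_max = {t_max * 1e9:.1f} ns, resolution = {dnu_fwhm / 1e6:.1f} MHz")
\end{verbatim}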
As described above, the achievable spectroscopic resolution is directly proportional to the MOPD and, as described by the Shannon-Nyquist theorem,\cite{Shannon1949} the spectroscopic range $\nu_\mathrm{max}$ is inversely proportional to the sampling step size ($\nu_\mathrm{max} = 1/(2 \cdot t_\mathrm{step})$). CRASY is performed on cold molecular beams with beam temperatures below 10 K, and a 0.5 ps to 5 ps step size (maximum spectroscopic range of $\nu_\mathrm{max} = 0.1$ THz to 1 THz) is sufficient to resolve the complete thermally occupied set of rotational states. The positioning accuracy should remain well below the scanned step size to avoid a degradation of the spectroscopic resolution. Interferometers used for FTIR, RCS, and other types of scanned interferometer spectroscopy are based on opto-mechanical delay stages, i.e., moving mirrors in one interferometer arm to change the optical path length. The largest interferometers can be found at national synchrotron facilities but are not practical within the restricted space and budget of University-based laboratory research. Instead, our interferometer combined electronic and opto-mechanical delays to achieve an infinite effective delay range within a compact and affordable interferometer design. A schematic representation of the infinite interferometer is shown in Fig.\ \ref{InterferometerDesign}. The mechanical delay was based on a 30-cm Physik Instrumente MD-531 motorized stage. The optical beam path across the stage was 16-times folded to obtain a MOPD of 4.8 m (16 ns). Longer delays were achieved by electronic pulse selection of oscillator pulses that were amplified in two separate regenerative Ti:Sa amplifiers, forming the two arms of the interferometer. The repetition rate of the laser oscillator (Coherent Vitara) was 80 MHz and selection of subsequent pulses added discrete 12.5 ns delays in the second interferometer arm. Note that the timing accuracy of the electronic delays is governed by the stability of the oscillator repetition rate and not by the accuracy of the electronic delay generator. \begin{figure}[ht] \includegraphics[width=240pt]{Fig2_InterferometerDesign.png} \caption{Schematic depiction of the infinite interferometer design. Pulses from a femtosecond laser oscillator are split and recombined with beam splitters (BS) and amplified in two separate amplifiers. Electronic selection of different oscillator pulses for amplifiers 1 and 2 introduces discrete pulse delays in multiples of 12.5 ns. An opto-mechanical delay stage adds additional delays of 0--16 ns with picosecond step size and femtosecond accuracy. } \label{InterferometerDesign} \end{figure} The pulse selection delay was controlled via an electronic delay generator (SRS-DG535) and allowed the delay range to be extended to quasi-arbitrary values. The amplifiers operated at a 1-kHz repetition rate and probe pulses delayed by more than 1 ms therefore arrive after a subsequent pump pulse. The molecular beam velocity in our experiments is in the range of 1000 m/s and molecules travel decimeter distances within milliseconds. Experiments with $>1$ ms delays can therefore rely on spatial discrimination of the excited molecules.
Therefore, for all practical purposes, our set-up represents an interferometer with infinite MOPD and the achievable spectroscopic resolution is no longer limited by the size of the interferometer, but rather by the ability to track the molecular beam. \section{High-Resolution CRASY Data} Fig.\ \ref{CS2_16_vs_50_ns} shows rotational Raman spectra for \ce{CS2} and illustrates the progress achieved by combining electronic and opto-mechanical delays. The maximal opto-mechanical delay range of 16 ns was sufficient to obtain an effective resolution of 60~MHz FWHM, as shown in Fig.\ \ref{CS2_16_vs_50_ns} (Top). The effective resolution fell short of the non-apodized resolution limit of 38 MHz because the mechanical delay stage was not perfectly flat, leading to a loss of signal when the stage approached either end of the delay range. A combined opto-mechanical and electronic delay range of 16.7 m (50 ns) gave a greatly enhanced effective resolution of 17.5 MHz FWHM, near the resolution limit of 16.7 MHz, as shown in Fig.\ \ref{CS2_16_vs_50_ns} (Bottom). The sample for the latter spectrum contained only trace amounts of \ce{CS2} and the spectrum therefore had a lower signal-to-noise ratio (SNR). \begin{figure}[ht] \includegraphics[width=240pt]{Fig3_CS2_16_vs_50_ns.png} \caption{Rotational Raman spectra for \ce{CS2} from CRASY data-sets. (Top) Spectrum obtained by scanning a 16 ns delay range (4.8 m MOPD) with an opto-mechanical delay stage. (Bottom) Spectrum obtained by scanning a 56 ns delay range (equivalent to 16.7 m MOPD), using combined opto-mechanical and electronic delays. Enlarged insets reveal the effective resolution for selected transition lines.} \label{CS2_16_vs_50_ns} \end{figure} The measurement time required to scan large delays scales linearly with the scan range, and the collection of mass spectra for a large number of alignment-ionization delays was time-consuming and created exceedingly large data-sets. The data shown in Fig.\ \ref{CS2_16_vs_50_ns} (Bottom) was obtained with a delay scan range of 50 ns and a 2 ps step size and therefore required the accumulation of 25\,000 mass spectra. Data was acquired with 500 Hz repetition rate and ion signals for 1000 laser shots were accumulated for each mass spectrum. The resulting measurement time was almost 14 hours. Each time-of-flight mass spectrum contained 400\,000 points and the acquired raw data quantity corresponds to 10 Gb. It is readily apparent that the brute-force extension of the scanned delay range will lead to impractical requirements in terms of measurement time and data storage. We reduced the data quantity by lossless compression and the use of sparse data formats. Because mass spectroscopic data is highly discrete, we routinely achieved $>100$-fold in-memory compression with zlib compression algorithms. Fourier transform analysis is only possible on decompressed data, but downsampling of the mass axis and the conversion into sparse data formats facilitated the signal analysis. Random sparse sampling was used to accelerate long delay scans, i.e., data was only measured for a randomly selected sub-set of delays along the extended time axis. Different sparse sampling strategies were explored in the field of multi-dimensional NMR experiments,\cite{Hoch2014,Pelczer1991} and are discussed in more detail below.
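As an illustration of the data-handling point above, the following minimal sketch (synthetic data, standard-library \texttt{zlib}; not our actual acquisition code) shows why highly discrete mass spectra compress so well:
\begin{verbatim}
import zlib
import numpy as np

# Hypothetical time-of-flight spectrum: 400,000 points, almost all zero,
# with a few hundred sparse ion-count entries.
rng = np.random.default_rng(1)
spectrum = np.zeros(400_000, dtype=np.int32)
idx = rng.choice(spectrum.size, size=200, replace=False)
spectrum[idx] = rng.integers(1, 50, size=200)

raw = spectrum.tobytes()
packed = zlib.compress(raw, level=9)
print(f"compression factor: {len(raw) / len(packed):.0f}x")
\end{verbatim}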
Fig.\ \ref{M76_full_and_sparse_sampling} compares spectra from a fully-sampled and a sparsely-sampled measurement, with the latter collecting mass spectra only for 5.5\% of the delays along an extended delay axis. The sparsely sampled data was acquired 2.5-times faster than the fully sampled data and improved the spectroscopic resolution by a factor of 20. Sparse sampling added noise to the spectra, as is readily apparent in the logarithmic representation depicted in Fig.\ \ref{M76_full_and_sparse_sampling}. \begin{figure}[ht] \includegraphics[width=240pt]{Fig4_Sparse_sampling.png} \caption{Rotational Raman spectra for \ce{CS2} obtained with continuous 1-ps sampling of a 15 ns delay (Top) and random sparse sampling of 17\,000 mass spectra along a 312 ns delay axis (5.5\% sampling rate, Bottom). Note the logarithmic scale of the ordinate.} \label{M76_full_and_sparse_sampling} \end{figure} The highest resolution spectrum measured to date with the CRASY technique was based on a 10 $\mu$s scan of a benzene sample, containing residual \ce{CS2} in small concentration. Due to the limited size of our spectrometer window, tracking of the molecular beam was only achieved for delays $<3$ $\mu$s, reducing the achieved signal contrast and resolution. Fig.\ \ref{fig:kHz_resolution_Spectrum} shows the signal for the \ce{CS2} mass channel, with an effective resolution of 330 kHz FWHM. This data represents the highest-resolution Fourier-transform interferometric spectrum in the world, a 50-fold improvement over the highest-resolution FTIR data in the literature.\cite{Albert2011,Albert2015,Albert2018} To comprehend the scale of this improvement, we invite the reader to visualize the 11.7 m MOPD interferometer used for the latter experiments (see Ref.\ \citenum{Albert2011b} for a photographic image) versus the km-scale MOPD achieved in our laboratory experiment. \begin{figure}[ht] \centering \includegraphics[width=240pt]{Fig5_kHz_resolution_Spectrum.png} \caption{\small Highest resolution rotational Raman spectrum obtained with the mass-CRASY technique. The inset, with 30\,000-fold enlarged abscissa, reveals the 330 kHz FWHM effective resolution for the J=6--8 transition in \ce{CS2}. } \label{fig:kHz_resolution_Spectrum} \end{figure} Table \ref{tab:Spectroscopic_resolution} compares the resolution limits of various spectroscopic techniques that are used to characterize rotational spectra at high resolution. CRASY currently represents the highest-resolution method for rotational Raman spectroscopy and, more generally, for the investigation of non-dipolar molecules. Modern FTMW experiments reach a significantly higher resolution,\cite{Shipman2011} but can only be performed for dipolar species and only cover a spectral range of tens of GHz, more than one order-of-magnitude below the spectral coverage obtained with CRASY. In terms of resolving power ($\frac{\mathrm{spectral\:range}}{\mathrm{resolution}}$), CRASY is at parity with state-of-the-art FTMW experiments.
\begin{table} \caption{Resolution for common types of rotationally resolved spectroscopies.} \begin{ruledtabular} \begin{tabular}{lc} {Spectroscopic method} & {Resolution limit}\\ \colrule Raman, single-mode laser\saa & 1500 MHz \cite{Weber2011}\\ Raman, FTIR\saa & 300 MHz \cite{Weber2011}\\ Raman, RCS\sbb & 150 MHz \cite{Frey2011,Weber2011}\\ Raman, low resolution CRASY\saa & 39 MHz \cite{Lee2019}\\ Coherent anti-Stokes Raman\saa & 30 MHz \cite{Weber2011}\\ FTIR\saa & 16 MHz \cite{Albert2011}\\ Raman, high resolution CRASY\saa & 330 kHz \scc\\ FTMW\saa & few kHz \cite{Grabow2011,Shipman2011}\\ \end{tabular} \end{ruledtabular}\\ \footnotesize{\saa Achieved effective resolution. \sbb Theoretical resolution limit. \scc This work.} \label{tab:Spectroscopic_resolution} \end{table}% Table \ref{tab:Spectroscopic_resolution} omits frequency comb measurements\cite{Hansch2006,Diddams2020} and related techniques. The highest-resolution frequency comb measurements cover only very small spectral ranges and a direct comparison is therefore not meaningful. Dual-comb or direct comb spectroscopy (DCS)\cite{Foltynowicz2011,Gambetta2016,Muraviev2020} allows the rapid, broad-band, and high-resolution characterization of molecular spectra. We omit DCS from our table because it requires extended interaction times with significant molecular sample densities and, to our knowledge, the resolution of all broad-band DCS spectra is subject to significant broadening; a comparison with estimated DCS resolution limits is therefore not meaningful because the achieved effective resolution remains far from the theoretical resolution limit. \section{Limits to Spectroscopic Resolution and Accuracy} The use of an infinite interferometer removes resolution limits due to the MOPD. We must therefore consider other factors that limit the resolution or accuracy of interferometric measurements. Three distinct types of uncertainties must be considered: (i) uncertainties in delay positions accrued over the length of the opto-mechanical delay line, (ii) uncertainties in the laser oscillator repetition rate, which affect the accuracy of the discrete 12.5 ns pulse-selection delays, and (iii) uncertainties due to Doppler shifts and Doppler broadening. In the following, we discuss each error source separately. \subsection{Uncertainties in Opto-Mechanical Delays} The MD-531 motorized stage in our interferometer contains an internal encoder with 100~nm resolution. The encoder is mounted on an aluminum rod with correspondingly large thermal instabilities,\footnote{The thermal expansion coefficient of aluminum at 25$^\circ$C is $1.1\cdot10^{-5}$ m/(m$\cdot$K) \cite{CRC_Aluminum_expansion_coefficient}.} and calibration against known spectra revealed relative positioning errors of up to $\Delta r / r = 10^{-4}$. We addressed the problems with the internal stage encoder by mounting an external optical encoder (Sony Laserscale BL57-RE) with a thermal expansion coefficient of $0.7 \cdot 10^{-6}$ m/(m$\cdot$K). Comparison of internal and external encoder positions revealed a highly linear error for the internal encoder. The internal encoder was then calibrated over a range of 12.5 ns against the oscillator repetition rate, by measuring laser cross-correlation signals displaced by one oscillator pulse jump.
With a typical $\pm 0.2\,^{\circ}$C temperature stability on our laser table, we found that the internal encoder was sufficient to confine positioning uncertainties to $\Delta r / r < 3 \cdot 10^{-6}$ ($<40$ fs uncertainty across the stage). Higher accuracy was achieved by monitoring the external encoder, with resulting positioning uncertainties of $\Delta r/ r < 2 \cdot 10^{-7}$ ($\approx$3 fs uncertainty across the stage). Additional uncertainties arose due to the variation of the air refractive index $n$ with air pressure, temperature, and composition. For 800 nm light, the air refractive index changes by $\Delta n / n \approx 10^{-7}$ for a 40 Pa change in air pressure, a 0.1 $^{\circ}$C change in temperature, or a 10\% change in air humidity.\cite{NIST_air_refractive_index, Ciddor1996} These uncertainties were readily suppressed by the continuous measurement of, and correction for, changes in air temperature, pressure, and humidity. The NIST shop-floor equation\cite{NIST_air_refractive_index}\footnote{Air index of refraction $n$ based on pressure $P$ (kPa), temperature $T$ ($^{\circ}$C), and relative humidity $RH$ (\%): $n = 1 + 7.86 \cdot 10^{-4} \cdot P / (273 + T) - 1.5 \cdot 10^{-11} \cdot RH \cdot (T^2 + 160)$.} was sufficient to approximate $n$ with a relative uncertainty of $\Delta n/n < 10^{-7}$. Uncertainties accrued over the range of the opto-mechanical delay line are reset with each electronic pulse-selection delay jump, when the mechanical delay line is reset to its initial position. The impact of opto-mechanical delay uncertainties thus scales inversely with the number $N$ of oscillator jumps and becomes small for large $N$; it is therefore sufficient to suppress relative stage positioning errors to a regime where the accrued phase shift for the highest measured frequencies becomes negligible. Stage errors in the $\Delta r / r = 10^{-6}$ regime correspond to a 12.5 fs phase shift across the mechanically scanned 12.5 ns delay range and are negligible for any feasible experiment with our minimal 50 fs laser pulse duration (impulsive Raman excitation possible for $<10$ THz transition frequencies). We should note that when significant calibration errors between the mechanical stage and the oscillator repetition rate occurred, this led to the formation of 80 MHz sidebands, which are readily identified in the experimental spectra. \subsection{Uncertainties in the Oscillator Repetition Rate} Extended delays are achieved by delayed electronic pulse-selection of subsequent oscillator pulses from a Coherent Vitara laser oscillator. This adds delays in multiples of 12.5 ns to the probe arm of the interferometer. Any undetected drift of the laser oscillator repetition rate from the nominal value of 80 MHz will introduce a corresponding uncertainty. We used a frequency counter (Aim-TTI TF930) to monitor the stability of the oscillator against a GPS-stabilized clock (Leo Bodnar GPSDO) with an expected frequency accuracy of $\Delta \nu / \nu \le 10^{-10}$. Figs.\ \ref{AllanDev} and \ref{AllanMDev} show the Allan deviation and the modified Allan deviation for the oscillator frequency, measured over a 1-day period.
For periods $<100$ s, \replaced{we observed a}{the observed} slope of -1 (-1.5) in the Allan (modified Allan) deviation\removed{, which} is characteristic for random white noise.\cite{Riley2008} This noise is due to the frequency counter digitization noise ($\pm 1$ count over the measurement period) and does not reflect any drift of the clock or oscillator. For periods $>100$ s, the Allan deviation remained $<10^{-10}$, giving an upper limit for the frequency stability of the oscillator. It is quite possible that the GPS clock stability \replaced{is}{was} limiting in this regime and that the oscillator frequency \replaced{is}{was} more stable than our measurement indicates. \begin{figure}[ht] \includegraphics[width=240pt]{Fig6_AllanDev.png} \caption{Allan deviation for the 80 MHz Coherent Vitara-T laser oscillator, measured against a \deleted{Leo Bodnar }GPS-disciplined clock. } \label{AllanDev} \end{figure} \begin{figure}[ht] \includegraphics[width=240pt]{Fig7_AllanMDev.png} \caption{Modified Allan deviation for the 80 MHz Coherent Vitara-T laser oscillator, measured against a \deleted{Leo Bodnar }GPS-disciplined clock. } \label{AllanMDev} \end{figure} Slow frequency drifts of the oscillator are readily corrected by a corresponding adjustment of the opto-mechanical delay position. A continuous monitoring of the oscillator rate therefor allows a \emph{feed-forward} correction of delays, which fulfills the same purpose as the feed-back oscillator stabilization in frequency comb spectroscopy,\cite{Hansch2006} albeit with much smaller \replaced{technological}{technical} efforts. With our inexpensive monitoring system and a typical frequency counter period of 1 or 10 seconds, we readily achieved a single-sigma uncertainty (Allan deviation) $\Delta \nu / \nu\ll 10^{-7}$ and a longer frequency counter integration period can reduce this uncertainty \removed{down} towards the $\Delta \nu / \nu \approx 10^{-10}$ noise floor of the frequency measurement. We expect that the uncertainty can be further reduced with the use of a high-fidelity reference clock: similar oscillators \replaced{are}{were} used in frequency comb experiments and were stabilized to \replaced{several orders}{order}-of-magnitude better performance. The feedback stabilization required for the \replaced{latter}{frequency-comb measurements} introduces an additional source of noise that is absent \replaced{with}{in} our feed-forward stabilization \added{scheme}. \subsection*{Uncertainties Arising from Doppler Effects} \label{Doppler effects} Textbooks commonly label skimmed molecular beam spectroscopy as "Doppler-free". But as illustrated in Figs.\ \ref{Doppler_broadening} and \ref{Doppler_shift}, the non-zero collimation angle and imperfect alignment of a molecular beam contributes some Doppler effects. We separately assessed Doppler broadening and Doppler shifts by geometric consideration of the molecular beam velocity components $v_{\parallel}$ in the direction of the alignment and ionization laser beams. \begin{figure}[ht] \includegraphics[width=240pt]{Fig8_Doppler_broadening.png} \caption{Illustration of Doppler broadening. The non-zero molecular beam collimation angle $\epsilon$ leads to a distribution of beam velocity components $v_\parallel$ that are parallel to the laser beam propagation axis: $v_{\parallel} = v\mathrm{_{beam} \cdot tan}(\epsilon)$.} \label{Doppler_broadening} \end{figure} \begin{comment} Vapor pressure of CS2 at 20C is ~0.4 bar. 
In our experiments, a 1 mm skimmer at 280 mm distance from the pulsed valve led to a molecular beam collimation angle of $\epsilon = 0.10^{\circ}$. We measured the molecular beam velocity for a helium-seeded beam of \ce{CS2} to be $v_\mathrm{beam} \approx 1100$ m/s and calculated a velocity spread of $v_{\parallel} = \pm1.96$ m/s and a maximal Doppler broadening of $\Delta \nu_{\textit{Db}} / \nu = v_{\parallel} / c = 6.6 \cdot 10^{-9}$. This estimated Doppler broadening is significantly larger than the value obtained from the textbook treatment (see Chapter 4 in Demtr\"oder \cite{Demtroder2011}) because we accounted for the similar velocity profiles of the heavier CS$_2$ molecules and the lighter helium atoms in the seeded molecular beam. Doppler broadening becomes relevant when it approaches or exceeds the spectroscopic resolution. In our best CRASY data, we observe sub-MHz line widths for 100 GHz line frequencies ($\Delta \nu_\mathrm{FWHM} / \nu$ in the $10^{-6}$ regime). The Doppler broadening estimated above is several orders-of-magnitude smaller than our best achieved resolution and will not affect spectroscopic results until we reach sub-kHz level resolution. The use of slower molecular beams and better molecular beam collimation can further reduce Doppler broadening, and we expect that sub-100 Hz line widths can be observed for 100 GHz lines before Doppler broadening becomes a limiting factor. A Doppler shift occurs if the angle between laser and molecular beam deviates from $\alpha = 90^{\circ}$. As illustrated in Fig.\ \ref{Doppler_shift}, tracking of the molecular beam then changes the effective path length of the alignment versus the ionization arm of the interferometer and introduces an additional delay of $\delta t = \frac{\delta x}{c} = \Delta t \cdot \frac{v_\mathrm{beam}}{c} \cdot \sin(\alpha - 90^{\circ})$. Resulting Fourier-transformed spectra show a frequency shift proportional to the relative delay error $\delta t / \Delta t$. The line position for well-resolved lines can be determined with an accuracy that is orders-of-magnitude better than the spectroscopic resolution and our experiment is therefore highly sensitive to Doppler shifts. To measure the angle $\alpha$ between laser and molecular beam, we propagated a laser pointer through the skimmer onto the pulsed valve orifice and measured the relative angle of the laser pointer beam and the alignment / ionization laser beams against a reference frame. For our experiments, we determined an angle of $\alpha = 91.6 \pm 0.4^{\circ}$ and calculated a Doppler correction factor of $\delta t / \Delta t = (1.0 \pm 0.26) \cdot 10^{-7}$. The Doppler shift was not negligible for our most-accurate measurements and reduced measured rotational frequencies by one part in $10^7$ (some 320 Hz for the 3.2 GHz rotational constant fitted for \ce{CS2}).\cite{Schroter2018} The Doppler shift uncertainty of $2.6 \cdot 10^{-8}$ can be reduced by a careful measurement of the relative angles between the molecular beam and the laser beams. The Doppler shift can also be measured, and thereby completely eliminated, by performing complementary experiments with laser beams propagating in opposite directions, as measurements from opposing directions show opposite signs for the Doppler shift.
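The quoted broadening and shift values can be reproduced with a short numeric sketch (plain Python/NumPy, using the geometry of Figs.\ \ref{Doppler_broadening} and \ref{Doppler_shift} and values given in the text):
\begin{verbatim}
import numpy as np

c = 299_792_458.0           # speed of light, m/s
v_beam = 1100.0             # measured He-seeded CS2 beam velocity, m/s
eps = np.radians(0.10)      # molecular beam collimation angle
alpha = np.radians(91.6)    # angle between laser and molecular beam

v_par = v_beam * np.tan(eps)          # ~1.9 m/s parallel velocity spread
broadening = v_par / c                # ~6e-9 relative Doppler broadening

# Relative delay error for beam tracking at an angle deviating from 90 deg.
shift = (v_beam / c) * np.sin(alpha - np.pi / 2)   # ~1.0e-7

print(f"broadening: {broadening:.1e}, shift: {shift:.1e}")
\end{verbatim}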
\begin{figure}[ht] \includegraphics[width=240pt]{Fig9_Doppler_shift.png} \caption{Illustration of the Doppler shift: To correct for molecular beam propagation within the delay $\Delta t$ between alignment and ionization pulses, the ionization laser tracks the molecular beam over a distance of $d = \Delta t \cdot v_\mathrm{beam}$. When the angle between the molecular beam and the laser beams deviates from $90^{\circ}$, this leads to an additional path $\delta x$ for the ionization beam and an additional delay $\delta t = \delta x / c$.} \label{Doppler_shift} \end{figure} \subsection{Signal Degradation by Sparse Sampling} Sparse sampling is an essential tool to extend the optical path difference, and thereby the spectroscopic resolution, without excessive requirements in terms of measurement time and data storage. A number of sparse sampling approaches were discussed in the context of multi-dimensional NMR spectroscopy.\cite{Pelczer1991,Hoch2008,Hyberts2012,Hoch2014} Sparse sampling methods other than random sampling affect the line shape and are therefore problematic, unless the natural line shape in the investigated spectra is known or negligible. Randomly sparse sampled data merely shows an elevated noise level and correspondingly reduced signal-to-noise ratio, without introducing any significant artifacts.\footnote{Random sparse sampling is equivalent to the multiplication of a continuous time-domain trace with a binary [0,1] `white noise array', which masks out the unmeasured data points. The multiplication of traces in the time domain corresponds to a convolution of their spectra in the Fourier domain. A white noise array transforms into a flat noise spectrum and therefore merely adds noise to the measured spectrum.} The combination of sparse sampling with an infinite interferometer therefore offers a unique spectroscopic tool, where resolution and SNR can be freely traded against one another. Figure~\ref{Sparse_Sampling_simulation} shows the effect of random sparse sampling in a simulated delay trace for CS$_2$. The Fourier transform of the fully sampled trace shows negligible noise due to the synthetic nature of the data. To simulate 3\% random sparse sampling, 97\% of all points in the delay trace were set to zero. The random selection was based on the Mersenne-Twister pseudo-random number generator as implemented in the numpy library of the Python programming language. The Fourier transform of the sparsely sampled data shows significant noise but only a modest degradation of the resolution. An estimate of the noise distribution, using the modified Z-score,\cite{Iglewizc1993} gave a noise level of $\sigma = 0.2$\% relative to the largest signal peak. Experimentally observed SNRs in sparsely sampled data showed similar signal degradation. Note that the sampling noise scales inversely with the square root of the number of sampled points, so that longer scans can be performed with lower sparse sampling rates. \begin{figure}[ht] \includegraphics[width=240pt]{Fig10_Sparse_Sampling_simulation.png} \caption{Simulated Fourier-transformed rotational Raman power spectrum for CS$_2$ at 8 K. (a) Full sampling of a 100 ns time delay with $\Delta t$ = 1 ps steps ($10^5$ samples). (b) Sparse sampling of 3000 randomly selected delays from (a) created $< 1$\% sampling noise.
Insets show a 50-fold or 2500-fold enlarged abscissa.} \label{Sparse_Sampling_simulation} \end{figure} \section{Expected Resolution Limits for CRASY Experiments} For all practical purposes, the coherence lifetime of cold rotational states in small molecules is only limited by collisions. Rotational decoherence in collision-free, pulsed molecular beams therefore only occurs when the molecules hit the spectrometer wall. The resolution limit of CRASY is therefore purely a function of the MOPD. With the infinite interferometer design presented above, we removed all practical limitations on the MOPD and other experimental factors become limiting: (i) the molecular beam travels with supersonic velocities and must be accurately tracked, and (ii) sparse sampling is necessary to achieve large MOPDs within reasonable measurement times, but may degrade the SNR to a point where spectra can no longer be resolved. The resolution of current CRASY measurements is limited by factor (i): due to a limited window size, we can track the molecular beam only over distances of a few mm. The beam velocity for our dilute, helium-seeded molecular beams was measured to be $v_{b} \approx 1100$ m/s and the 330 kHz resolution data shown in Fig.\ \ref{fig:kHz_resolution_Spectrum} therefore required tracking of the beam over a distance of nearly 2 mm. The molecular beam velocity can be significantly reduced by using a heavier seed gas with a lower speed-of-sound. With a suitably larger laser window, the tracking distance can be extended; e.g., an extension to 10 cm tracking would offer a 50-fold increase of the accessible MOPD. We expect that the combination of longer tracking and slower molecular beams will push the resolution limit into the single-kHz regime. Further extensions would require the construction of a spectrometer with a dedicated chamber for decimeter- or meter-scale tracking of the molecular beam, e.g., as depicted in Fig.\ \ref{fig:Extended_tracking_chamber}. Note that signal collection might be facilitated by correlation to other spectroscopic observables. For example, probing of rotational coherence via fluorescence excitation would remove the nonlinearity of our two-photon photoionization probe step and might allow the multiplexed detection of signals along the molecular beam axis. \begin{figure}[ht] \centering \includegraphics[width=240pt]{Fig11_Extended_tracking_chamber.png} \caption{\small Schematic representation of a photoion-photoelectron spectrometer with extended optical access for molecular beam tracking. } \label{fig:Extended_tracking_chamber} \end{figure} To collect spectroscopic data with single-kHz resolution and a 100 GHz spectral range would require the sampling of a time axis containing some $10^8$ points. Clearly, this is only possible with severe sparse sampling and a corresponding degradation of the SNR. Fig.\ \ref{fig:millisecond_scan_simulation} shows that such measurements are feasible: a simulated spectrum based on the sampling of 100\,000 points along a 2 ms time-delay axis combines excellent signal-to-noise ratio with sub-kHz resolution.
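The sparse sampling simulations discussed above follow a simple recipe; the following minimal NumPy sketch (with hypothetical line positions, not our actual simulation code) reproduces the masking construction and the resulting flat noise floor:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Fully sampled synthetic coherence trace: a few cosines, 1 ps steps, 100 ns.
dt = 1e-12
n = 100_000
t = np.arange(n) * dt
freqs = np.array([50e9, 75e9, 90e9])        # hypothetical line positions, Hz
trace = np.cos(2 * np.pi * freqs[:, None] * t).sum(axis=0)

# Random sparse sampling: keep 3% of delays, zero the rest. This equals
# multiplication with a binary mask and adds a flat noise floor in the FFT.
mask = np.zeros(n)
mask[rng.choice(n, size=n * 3 // 100, replace=False)] = 1.0
sparse = trace * mask

spectrum = np.abs(np.fft.rfft(sparse))**2   # power spectrum
nu = np.fft.rfftfreq(n, d=dt)               # frequency axis, up to 500 GHz

noise_floor = np.median(spectrum)           # dominated by sampling noise
print(f"relative noise: {noise_floor / spectrum.max():.1e}")
\end{verbatim}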
\begin{figure}[ht] \centering \includegraphics[width=240pt]{Fig12_Simulation_of_ms_Scan.png} \caption{\small Simulation of a \ce{CS2} spectrum with 0.61 kHz non-apodized resolution, based on a 2 ms scan range, a nominal 5 ps step size, and 0.05\% sparse sampling. Simulated signal count rates were in the range of a few hundred counts per data point, corresponding to typical experimental count rates. The inset shows a 100-kHz section of the simulation.} \label{fig:millisecond_scan_simulation} \end{figure} In conclusion, we demonstrated that pulse-selection from a stable laser oscillator allows interferometric spectroscopy to be performed with an effectively infinite interferometer. This approach removed previous limits on the available interferometric MOPD and we presented rotational spectra with sub-MHz effective resolution over a 500 GHz spectral range. The achieved resolution is several orders-of-magnitude better than that achieved by any preceding RCS or FTIR measurements and corresponds to the scanning of km-scale path differences. Further order-of-magnitude improvements are expected and are only limited by experimental challenges, such as the requirement to track skimmed molecular beams over extended distances. \begin{acknowledgments} The authors acknowledge funding support from the National Research Foundation of Korea, grant NRF-2018R1D1A1A02042720, and the Samsung Science and Technology Foundation, grant SSTF-BA2001-08. \end{acknowledgments}
{ "timestamp": "2022-09-23T02:11:41", "yymm": "2209", "arxiv_id": "2209.10885", "language": "en", "url": "https://arxiv.org/abs/2209.10885" }
\section{Introduction} Real arithmetic problems appear in many application domains, including safety-critical application domains, such as the verification of cyber-physical systems (CPS). Very often, these problems involve $\exists$ and $\forall$ quantifiers, which pose theoretical and practical computational challenges \cite{DBLP:journals/jsc/DavenportH88,DBLP:journals/jsc/Weispfenning88,DBLP:conf/cade/PlatzerQR09}. The best known way of handling arbitrary quantified statements is with \textit{quantifier elimination (QE)}, which transforms quantified statements into logically equivalent quantifier-free formulas, which are then evaluated. Alfred Tarski \cite{Tarski} proved that the theory of real-closed fields is decidable, by establishing that algorithms to perform quantifier elimination on formulas in the first-order logic of real arithmetic exist; in practice, these algorithms tend to be complicated. Given the safety-critical nature of real arithmetic questions, it is not surprising that considerable attention has been given to formally verifying algorithms for real QE \cite{AssiaQE, BKR, DBLP:conf/tphol/Harrison07, DBLP:journals/mscs/Mahboubi07, li2019deciding, harrison, NASAHutch, NASATarski, DBLP:journals/jar/Nipkow10, DBLP:conf/cade/PlatzerQR09, scharager2021verified}. However, while considerable progress has been made on verifying \textit{univariate} QE methods (methods for QE problems that only involve one variable, and so have at most one quantifier) \cite{BKR,li2019deciding,NASAHutch,NASATarski}, and while a variety of works have focused on verifying \textit{special-purpose} QE methods (that is, methods which target some fragment of multivariate QE problems) \cite{DBLP:conf/tphol/Harrison07,DBLP:journals/jar/Nipkow10,DBLP:conf/cade/PlatzerQR09,scharager2021verified}, only limited progress has been made on verifying \textit{complete} multivariate QE algorithms (i.e., algorithms that are capable of resolving \emph{any} real QE problem). Multivariate QE algorithms are significantly more challenging. Multivariate polynomials are unlike univariate polynomials, because they may have infinitely many roots, their leading coefficients may have zeros, and polynomial division is not always unique. Additionally, whereas univariate QE problems only involve a single quantifier and always reduce to True or False, multivariate QE problems can involve nested quantifiers and free variables. To our knowledge, the main published progress on verifying complete multivariate QE algorithms in theorem provers is threefold: first, Mahboubi \cite{DBLP:journals/mscs/Mahboubi07} \textit{implemented} (but did not yet verify) the fastest-known QE algorithm, \textit{cylindrical algebraic decomposition (CAD)} \cite{Collins} in Coq; second, McLaughlin and Harrison developed a \textit{proof-producing} (but not verified) procedure based on the Cohen-H\"{o}rmander algorithm in HOL Light \cite{harrison}; finally, Cohen and Mahboubi verified Tarski's original QE algorithm in Coq \cite{cohen_phd,AssiaQE}. Unfortunately, both Tarski's original QE algorithm and the Cohen-H\"{o}rmander algorithm have non-elementary complexity (i.e. the complexity is not bounded by any tower of powers of two). 
While McLaughlin and Harrison's procedure can solve simple microbenchmarks, they acknowledge considerable experimental limitations\footnote{This is not only due to the complexity of the Cohen-H\"{o}rmander algorithm, but also because proof-producing algorithms are not verified once and for all but, instead, have to produce a new proof of correctness per question, which incurs significant overhead compared to fully verified ones \cite{harrison,DBLP:conf/cade/PlatzerQR09}.} \cite{harrison}. Similarly, Cohen and Mahboubi consider their work to be primarily a theoretical contribution \cite{AssiaQE}. The dearth of efficient formally verified support for QE is in part a consequence of the intricacy of QE algorithms. Scharager \emph{et al.} \cite{scharager2021verified} observe a tradeoff between the computational efficiency of an algorithm and the tractability of verification. Most notably, the CAD algorithm is efficient but complex and tremendously difficult to verify; only the significantly simpler univariate case has been fully verified (independently, in Isabelle/HOL \cite{li2019deciding} and PVS \cite{NASAHutch}). Moreover, in order for CAD to realize its full potential for efficiency, many further insights \cite{Brown,DBLP:journals/jsc/CollinsH91,DBLP:conf/issac/DolzmannSS04,McCallumProj} beyond the original development \cite{Collins} are needed, and improving CAD (and algorithms for real QE at large) is an active area of research. The lack of efficient \emph{verified} QE methods is also a consequence of the challenge posed by verification. Working within the formal setting of a theorem prover adds not only a considerable layer of rigor but also intricacy, which is why even small progress needs significant effort. For example, Mahboubi \cite{DBLP:journals/mscs/Mahboubi07} discusses the many challenges involved in implementing CAD in Coq---a significantly more arduous and involved task than implementing CAD in an unverified computer algebra system (which also took decades \cite{DBLP:journals/cca/Brown03,strzeMathematica}). In our work, we target a potential \textit{sweet spot} within the tradeoff between complexity and verification amenability \cite{scharager2021verified} by verifying a \emph{complete} multivariate QE algorithm loosely based on the \textit{Ben-Or, Kozen, and Reif (BKR)} algorithm \cite{DBLP:journals/jcss/Ben-OrKR86} (but presently with less efficiency). The BKR algorithm shares some theoretical similarity with Tarski's original QE algorithm (in that it uses a matrix equation to store sign information for polynomials), but it includes an additional reduction step for greater efficiency. This was an influential algorithm which was later extended into a number of improved and/or generalized variants with compelling parallel complexity bounds, including ones by Renegar \cite{DBLP:journals/jsc/Renegar92b}, Canny \cite{1993Improved}, and Cucker \emph{et al.} \cite{DBLP:journals/aaecc/CuckerLMPR92}. As prior work \cite{HongTechRpt} has drawn a strong distinction between computational complexity and practical efficiency (with particular attention to Renegar \cite{DBLP:journals/jsc/Renegar92b}), these complexity bounds will not necessarily translate into immediate practical efficiency. However, a followup work \cite{DBLP:journals/cj/HeintzRS93} argued for the potential of algorithms with strong theoretical complexity bounds to realize efficiency on fragments of real arithmetic, and these algorithms remain influential.
Prior work \cite{BKR} verified the \emph{univariate} case of BKR in Isabelle/HOL; the authors argue that BKR is likely more amenable to formalization than CAD, and potentially complementary to CAD. We extend this development \cite{BKR_AFP,BKR} into a multivariate QE algorithm. Our multivariate algorithm is something of a hybrid: it is a mixture of Tarski's original QE algorithm \cite{Tarski} and BKR \cite{DBLP:journals/jcss/Ben-OrKR86}, with insights from Renegar \cite{DBLP:journals/jsc/Renegar92b}. It currently does not exploit \textit{all} of the reduction from BKR, which limits its efficiency. Thus, like Cohen and Mahboubi \cite{AssiaQE}, we view our contribution as being primarily theoretical \textit{from the perspective of efficiency}. However, we also view our algorithm as a significant stepping stone towards the BKR algorithm and, eventually, its variants. In particular, it would be of considerable interest to verify a method that more closely realizes the parallel complexity bounds of Renegar \cite{DBLP:journals/jsc/Renegar92b}. Such a method will naturally take time to develop, and will likely only be realized in stages. \textit{Contributions.} (1) Our work is the first complete multivariate QE algorithm formalized in Isabelle/HOL. (2) To our knowledge, it is the first formalized multivariate QE algorithm to include insights from BKR, and it is a first step towards a less complex verified algorithm (e.g., in the style of Renegar \cite{DBLP:journals/jsc/Renegar92b}), which could ideally complement an eventual formalized algorithm based on CAD. (3) Because much of the source material is either sparsely written (e.g., \cite{DBLP:journals/jcss/Ben-OrKR86}) or highly mathematical (e.g., \cite{algRAG,DBLP:journals/jsc/Renegar92b}), it was not a priori obvious what the formalized algorithm should look like (this formalization barrier is discussed in \rref{sec:Difficult}). The rigorous nature of verification forced us to clearly identify the essential building blocks of the algorithm: in our formalization, \emph{all} definitions are mathematically precise and verifiable, and all their correctness properties are identified and proved. The formalization is approximately 8500 lines of code (not counting code from previous developments that only needed minor modifications, nor several files from prior work by Li, Passmore, and Paulson \cite{li2019deciding} that were provided to us in correspondence). It includes various advances to Isabelle/HOL's existing libraries, particularly the library for multivariate polynomials, which could help pave the way for future multivariate QE algorithms in Isabelle/HOL. \section{Quantifier Elimination}\label{sec:QE} Our QE algorithm works by eliminating one quantifier at a time. Hence, if we have polynomials in $n + 1$ variables, we can consider them as univariate polynomials in a variable of interest with coefficient polynomials in $n$ variables. For example, if $x$ is our variable of interest, then we can treat $3xyz^2 + 6x^2wv + 5xy + 1$ as the following polynomial in $x$: $(6wv)x^2 + (3yz^2 + 5y)x + 1$. For clarity, and WLOG, we assume throughout this section that our variable of interest is $x$. The key component of both multivariate and univariate BKR is a \textit{sign-determination algorithm}, which is concerned with finding all \textit{consistent sign assignments} to a set of polynomials $\{q_1, \dots, q_k\}$. A \textit{sign assignment} is a mapping that assigns each polynomial to a \textit{sign}, i.e.
positive, zero, or negative (represented by $1, 0,$ and $-1$). A sign assignment is called \textit{consistent} if it is actually realized at some real point. At the heart of the sign-determination algorithm that we formalize is a \textit{matrix equation} that is capable of storing sign information for a set of polynomials in variables $x, y_1, \dots, y_n$, under a set of assumptions on polynomials in $y_1, \dots, y_n$. Our overall quantifier elimination algorithm takes a formula and identifies the polynomials that occur in it. It then generates a number of matrix equations, each of which captures some sign information for the polynomials, subject to some list of assumptions. Collectively, it is important that the generated matrix equations have exhaustive assumptions---in the sense that for every possible set of assumptions, there is at least one corresponding matrix equation. We call sets of assumptions \textit{branches}. Branches are refined throughout the construction with additional assumptions until the assumptions of each branch determine a unique matrix equation. Initial branches, which are not fully refined, may still have multiple associated matrix equations. WLOG, we assume that we are eliminating a $\forall$ quantifier (because $\exists$ quantifiers can be transformed into $\forall$ quantifiers with appropriate negations). We do some initial branching (this is needed to guide the computations of the matrix equations), and for each branch, we check whether \textit{all} of the associated matrix equations describe a sign condition on our polynomials that satisfies the original formula. We filter our initial branches to pick out the ones that satisfy this property. Finally, we return a disjunction of all assumptions of the initial branches in this filtered list. \begin{figure}[t] \centering \includegraphics[scale=0.5]{HighLevelQE.pdf} \caption{A visual overview of the QE algorithm.} \label{fig:QEOverview} \end{figure} \rref{fig:QEOverview} visualizes how this QE algorithm works on an example. We begin with formula $\exists y. \forall x. (xy^2>0 \lor x^2+y^2>0),$ where our focus is on eliminating the $\forall x$ quantifier. We first identify the polynomials of interest in this formula and view them as univariate polynomials in $x$ (with coefficients that are polynomials in $y$): these are $y^2x$ and $x^2+y^2$. Next, we determine all consistent sign assignments to these polynomials of interest given all \textit{possible}\footnote{Here, we differ from the BKR algorithm, which would branch on all \textit{consistent} sign assumptions on $y^2$. That is, we consider a branch where $y^2 < 0$, because this is a possible (but inconsistent) sign assumption: even though $y^2$ is never negative, our algorithm does not discern this when branching.} sign assumptions on $y^2$, where $y^2$ is significant because it is the \textit{leading coefficient} of $y^2x$ (technically our algorithm will do some additional and unnecessary branching, but for the clarity of this example we focus on the branch on $y^2$; see \rref{sec:SignDet} for a more in-depth discussion of the branching). Internally, our algorithm performs sign determination using matrix equation constructions (but this is not pictured in the figure). We then pick out the sign assignments that solve our original QE problem---that is, we are looking for one of our polynomials of interest, $y^2x$ or $x^2 + y^2$, to be positive. Signs that satisfy this condition are pictured in green.
Then, we filter our branches to find the ones where \textit{every} sign assignment satisfies the original QE problem. This happens only in the branch where $y^2$ is assumed to be positive. This means that $y^2 > 0$ is logically equivalent to $\forall x. (xy^2>0 \lor x^2+y^2>0)$, which means that $\exists y. \forall x. (xy^2>0 \lor x^2+y^2>0)$ is logically equivalent to $\exists y. (y^2 > 0)$, whose quantifier $\exists y$ can be eliminated further. If our original QE question were instead $\exists y. \forall x. (xy^2~\geq~0 \lor x^2+y^2>0),$ then both the branch with assumption $y^2 > 0$ and the branch with assumption $y^2 = 0$ would satisfy our QE problem. This means that the disjunction $y^2 > 0 \lor y^2 = 0$ is logically equivalent to $\forall x. (xy^2\geq 0 \lor x^2+y^2>0)$, and so our output in this case would be $\exists y. (y^2 > 0 \lor y^2 = 0)$. Here it is important to note that there are many logically equivalent outputs to any given QE problem. For example, if our original QE question were $\exists x. ({(xy^2= 0 \land x^2+y^2=0)} \lor {(xy^2= 0 \land x^2+y^2<0)} ),$ then two possible correct outputs that are logically equivalent are $y^2 = 0$ and $y^2<0 \lor y^2=0$. Here, $y^2 = 0$ is the simplest output. While the output of our QE algorithm is always logically correct, it is \textit{not} guaranteed to be in the simplest form. In particular, assumptions for branches that are inconsistent will often be included in the final disjunction, which has no impact on logical correctness, only formula complexity. We now turn to more detailed descriptions of the sign determination procedure, the multivariate matrix equation, and the full quantifier elimination procedure. \subsection{Sign Determination}\label{sec:SignDet} Finding sign information for polynomials $q_1, \dots, q_k$ in variables $x, y_1, \dots, y_n$ is, on the surface, a continuous problem---the most obvious way to determine the sign information would be to evaluate $(q_1, \dots, q_k)$ on all of $\mathbb{R}^{n+1}$, which is clearly not computationally viable. To account for this, BKR and Renegar reduce the sign-determination problem to a problem with the following format: find sign information for $q_1, \dots, q_k$ \emph{at the roots} of some cleverly chosen polynomial $p$. This problem is clearly computationally viable for univariate polynomials, because polynomials in one variable only have finitely many roots. It is a (non-obvious) key insight that it is also computationally viable for multivariate polynomials \cite{DBLP:journals/jcss/Ben-OrKR86, DBLP:journals/jsc/Renegar92b}. Intuitively, the output of the univariate algorithm only depends on the \emph{signs} of the real polynomial coefficients and not on the actual \emph{values} of those coefficients. Thus, the algorithm lifts to the multivariate case by making \emph{sign assumptions} on (multivariate) polynomial coefficients in variables $y_1, \dots, y_n$. In our multivariate setting, $p$ is chosen as $p = (\prod q_i) \cdot \frac{\partial}{\partial x}(\prod q_i)$. To see what makes this particular polynomial useful, consider some valuation $\nu$ on $y_1, \dots, y_n$ (i.e., some assignment of $y_1, \dots, y_n$ to real values). Let $\nu(f)$ denote the evaluation of polynomial $f$ in valuation $\nu$. Now, the roots of $\nu(p) = (\prod \nu(q_i)) \cdot \frac{d}{dx}(\prod \nu(q_i))$ contain all of the roots of the $\nu(q_i)$'s (since each $\nu(q_i)$ divides $\nu(p)$), as well as sample points from intervals between the roots (by Rolle's theorem \cite{BKR}).
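Before turning to the formal details, a small computational sketch may help build intuition for this choice of $p$. The following illustrative Python/SymPy script (ours, purely for exposition; it is not part of, and does not mirror, the Isabelle/HOL development) carries out this root sampling for the example polynomials $q_1 = y^2x + 1$ and $q_2 = yx + 1$ that we revisit shortly, under two sample valuations of $y$; the role of the limits at $\pm\infty$ is explained below.
{\small\begin{verbatim}
# Signs of q1, q2 at the real roots of
# p = (q1*q2) * d/dx (q1*q2), plus their
# signs at +/- infinity, for fixed y.
from sympy import (symbols, diff, real_roots,
                   sign, limit, oo)

x, y = symbols('x y')
q1 = y**2 * x + 1
q2 = y * x + 1

for yval in (2, -1):   # two sample valuations
    q1v = q1.subs(y, yval)
    q2v = q2.subs(y, yval)
    prod = (q1v * q2v).expand()
    # roots of p cover all roots of q1v, q2v
    # and one Rolle point per gap between roots
    p = (prod * diff(prod, x)).expand()
    signs = {(sign(q1v.subs(x, r)),
              sign(q2v.subs(x, r)))
             for r in real_roots(p)}
    for pt in (-oo, oo):  # extreme intervals
        signs.add((sign(limit(q1v, x, pt)),
                   sign(limit(q2v, x, pt))))
    print("y =", yval, ":", signs)
\end{verbatim}}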
Because these intervals are \textit{sign-invariant}---that is, no $\nu(q_i)$ changes sign in any of these intervals, since no $\nu(q_i)$ can change sign without passing through a root---sign information at a single point within any of these intervals is \textit{representative} of sign information for the entire interval. So, we see that the only intervals which the roots of $\nu(p)$ do not adequately cover are the extreme intervals---the leftmost and rightmost, which lie beyond any of the roots of $\nu(p)$---for which sign information can be computed with a limit calculation on the $\nu(q_i)$'s.\footnote{In the formalization of the univariate case \cite{BKR}, the polynomial $p$ was chosen so as to directly sample from these intervals by using the Cauchy root bound, a mathematical quantity that bounds the roots of a set of univariate polynomials. This followed BKR's original work \cite{DBLP:journals/jcss/Ben-OrKR86}. However, since the Cauchy root bound is for univariate polynomials only, we must work instead with limit computations as Renegar does \cite{DBLP:journals/jsc/Renegar92b}.} So, this polynomial $p$ allows a natural lifting from the univariate QE algorithm to the multivariate case, but the correctness justification must extensively cover the influence of all possible valuations $\nu$. This is visualized in \rref{fig:SignDet}. Here, we have polynomials $q_1 = y^2x + 1$ and $q_2 = yx + 1$, so $p = (y^2x + 1)(yx + 1)(2xy^3 + y^2 + y)$. For the purposes of illustration, we consider two sample valuations: in $\nu_1$, we set $y = 2$, and in $\nu_2$, we set $y = -1$. As depicted, in both valuations, to find sign information for $q_1$ and $q_2$, it suffices to find sign information for $q_1$ and $q_2$ at the roots of $p$ and the limit points. \begin{figure}[t] \centering \includegraphics[scale=0.42]{SignDet.pdf} \caption{An example of sign determination.} \label{fig:SignDet} \end{figure} We formalize this procedure for sign determination in the \isa{sign\_determination} function. The first input to this function is a list of polynomials \isa{qs} of type \isa{rmpoly}, where \isa{rmpoly} is our abbreviation for \isa{real\ mpoly\ poly}. Here, \isa{poly} is Isabelle/HOL\xspace's type for univariate polynomials, \isa{mpoly} is the type for multivariate polynomials, and \isa{real} is the type for real numbers, so an \isa{rmpoly} is a univariate polynomial whose coefficients are real multivariate polynomials. Say initially we have polynomials in variables $x, y_1, \dots, y_n$; then the \isa{rmpoly} type arises when we treat those polynomials as being univariate in $x$ with coefficients in $y_1, \dots, y_n$. Unlike in computer algebra, these polynomials are not restricted to have any particular representation; rather, they are elements of the free term algebra. The next input to \isa{sign\_determination} is a list of initial assumptions of type \isa{(real\ mpoly\ \isasymtimes\ rat)\ list}, which we abbreviate as \isa{assumps}. Here, \isa{rat} is Isabelle/HOL\xspace's type for rational numbers, and so each assumption in the list pairs a real multivariate polynomial with an associated rational number that indicates a sign condition on the polynomial (0, 1, or -1). This type is useful in specifying any known sign information on polynomials in $y_1, \dots, y_n$. The output of \isa{sign\_determination} is a list of pairs of assumptions and associated sign assignments to \isa{qs}.
Each sign assignment has type\footnote{Technically, we could use \isa{int\ list} for sign assignments, since each member of the sign assignment list is $1$, $0$, or $-1$, but as noted elsewhere \cite{BKR}, it is easier to work with \isa{rat\ list} in the matrix equation construction.} \isa{rat\ list}. The assumptions have type \isa{assumps} (for the same reason as before), and as each assumption may have multiple associated sign assignments, each assumption is paired with a \textit{list} of associated sign assignments, as demonstrated by the \isa{assumps\ \isasymtimes\ (rat\ list\ list)} type. The output, of type \isa{(assumps \isasymtimes\ (rat\ list\ list))\ list}, contains an exhaustive set of assumptions (in order to capture \textit{all} consistent sign assignments for the $q_i$'s). \begin{isabelle} \signdetermination \end{isabelle} Here, the \isa{lc\_assump\_generation\_list} function generates an exhaustive list of \textit{possible} branches, \isa{branches}, that contain assumptions on the signs of the leading coefficients of the input polynomials \isa{qs}. An important subtlety is that the leading coefficient of the polynomial $q_i$ may be different in different branches. For example, the leading coefficient of $(y+1)x^2 + yx + 2$ is $y+1$ in a branch where $y+1$ is assumed to be nonzero, $y$ in a branch where $y+1$ is zero and $y$ is assumed to be nonzero, and $2$ in a branch where both $y+1$ and $y$ are assumed to be zero. To best account for this subtlety, each element of \isa{branches} contains both the generated assumptions (which determine the branch) \textit{and} a list of polynomials which contains a simplified version of the \isa{qs}: to be precise, $q_i = c_1x^{d_1} + \cdots + c_mx^{d_m}$ simplifies to $c_jx^{d_j} + \cdots + c_mx^{d_m}$ iff $c_1, \dots, c_{j-1}$ are all assumed to be zero and $c_j$ is assumed to be nonzero. For example, given a list of input polynomials $[(y+1)x^2 + yx + 2, y^2 + (y+1)x^5]$, an element of \isa{branches} could be: $([(y + 1, 0), (y, 1), (y^2, 1)], [yx + 2, y^2 + (y+1)x^5])$. The list of assumptions $[(y + 1, 0), (y, 1), (y^2, 1)]$ specifies that, in this branch, $y + 1$ is assumed to be 0 and $y$ and $y^2$ are assumed to be positive. Under these assumptions, $(y+1)x^2 + yx + 2$ simplifies to $yx + 2$ and $y^2 + (y+1)x^5$ simplifies to $y^2 + (y+1)x^5$ (as the purpose of the simplification is to determine the leading coefficient, it is not mission critical to fully simplify $y^2 + (y+1)x^5$ to $y^2$, and our code is not optimized to do so). Currently, \isa{lc\_assump\_generation\_list} naively generates branches by branching on \textit{all possible} sign assignments to the leading coefficients, rather than on all \textit{consistent} ones as BKR would. Thus, branches with inconsistent assumptions can be generated: for example, the branch $([(y + 1, 0), (y, 1), (y^2, -1)],$ $[yx + 2, y^2 + (y+1)x^5])$ could be generated by the function \isa{lc\_assump\_generation} despite its inconsistent assumptions ($y^2$ is assumed to be negative). Additionally, although \isa{lc\_assump\_generation\_list} takes an input list of assumptions, \isa{assumps}, as an argument, it does not enforce consistency of the output branches with \isa{assumps}; however, before splitting on the sign of a polynomial $f$, it will check whether \isa{assumps} already contains sign information for $f$.
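To illustrate the shape of this naive branching, the following hypothetical Python sketch (the names and the representation are ours for exposition, not those of the formalization, and the lookup of pre-existing assumptions is omitted) branches on the successive leading coefficients of a single polynomial, given as a coefficient list with the highest-degree term first:
{\small\begin{verbatim}
# Naive branching on ALL possible signs of
# successive leading coefficients (cf. the
# lc_assump_generation discussion above).
def lc_branches(coeffs, assumps=()):
    if len(coeffs) == 1:  # constant term left
        return [(list(assumps), list(coeffs))]
    c = coeffs[0]
    branches = []
    for s in (1, -1):  # c nonzero: c leads
        branches.append(
            (list(assumps) + [(c, s)],
             list(coeffs)))
    # c assumed zero: drop the leading term
    return branches + lc_branches(
        coeffs[1:], list(assumps) + [(c, 0)])

# (y+1)x^2 + yx + 2, as a coefficient list:
for asms, simp in lc_branches(["y+1", "y", "2"]):
    print(asms, "->", simp)
\end{verbatim}}
On $(y+1)x^2 + yx + 2$ this produces five branches, including the inconsistent branch that assumes $y + 1 = 0$ and $y > 0$ simultaneously, mirroring the behavior described above.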
Branching on the signs of the leading coefficients of the \isa{qs} provides important information for two reasons: First, because these signs are relevant for the matrix equation computation (\rref{sec:MatEq}); and second, because knowing the sign of the first non-zero leading coefficient for every $q_i$ allows us to easily compute the signs at the limit points\footnote{The sign of $q_i$ at $\infty$ equals the sign of its leading coefficient, whereas the sign of $q_i$ at $-\infty$ is the sign of its leading coefficient multiplied by $(-1)^{\deg q_i}$, where ${\deg q_i}$ is the degree of $q_i$.}. The \isa{sign\_determination} function maps over \isa{branches}, and for each computes the polynomial $p = (\prod q_i) \cdot \frac{\partial}{\partial x}(\prod q_i)$, stored in \isa{poly\_p\_branch} (cross reference \rref{fig:SignDet}). Although it would suffice to compute $p$ beforehand, and then simplify it appropriately on each branch given the associated assumptions (for example, in a branch where $y = 0$, $q_1 = y^2x + 1$, and $q_2 = yx + 1$, the polynomial $p = (y^2x + 1)(yx + 1)(2xy^3 + y^2 + y)$ simplifies to $p = 0$), it is more direct\footnote{Our polynomials do not have any fixed representation, and equality checking is a potentially costly operation. Further, even if two polynomials are not identically equivalent, they may be so under a branch's assumptions (for example, $y^2 + y + 1$ is equivalent to $y^2$ if $y + 1$ is assumed to be 0).} to compute $p$ in each branch. That is, given $q_1 = y^2x + 1$, and $q_2 = yx + 1$, if in a given branch we know that $y = 0$, we also know that the leading coefficient of $q_1$ is 1 and the leading coefficient of $q_2$ is 1, which means that $q_1 = 1$ and $q_2 = 1$, and so $p = (1\cdot 1)\cdot (\frac{\partial}{\partial x} (1 \cdot 1)) = 0$. Next, for each branch, \isa{sign\_determination} performs a calculation (this is formalized in our function \isa{limit\_points\_on\_branch}) to find the signs of \isa{qs} at $\infty$ and $-\infty$. These are stored in \isa{pos\_limit\_branch} and \isa{neg\_limit\_branch}, respectively. Then, it makes a call to our \isa{calculate\_data\_assumps\_M} function (discussed in \rref{sec:MatEq}) to calculate a list of matrix equations for each branch, each of which stores sign information under some assumptions (assumptions in our formalization only accumulate, so the output assumptions contain the original branch's assumptions). It pulls out the assumptions and sign conditions from the matrix equations with the \isa{extract\_signs} function, which returns a list of type \isa{(assumps\ \isasymtimes\ rat\ list\ list)\ list}. This list is stored in \isa{mat\_eq\_signs\_on\_branch}. Finally, the positive and negative limit sign conditions \isa{pos\_limit\_branch} and \isa{neg\_limit\_branch} are appended to each list of sign conditions calculated with the matrix equations (with Isabelle/HOL\xspace's \isa{\isacharhash} operator), and the resulting list of assumptions and associated sign conditions is returned. It is now time to discuss the matrix equation. \subsection{The Multivariate Matrix Equation}\label{sec:MatEq} The multivariate matrix equation, like the univariate matrix equation, is concerned with finding sign information for a set of polynomials $q_1, \dots, q_n$ at the roots of an auxiliary polynomial $p$. One advantage of formalizing a multivariate QE algorithm based on BKR and Tarski is that the construction of the multivariate matrix equation is very similar to the construction of the univariate matrix equation. 
Thus, to understand the multivariate matrix equation, we first need to consider the construction of the univariate matrix equation. At its core, the univariate matrix equation relies on computing \textit{Tarski queries}, so we start there. \subsubsection{Computing Multivariate Tarski Queries}\label{sec:TQ} Tarski queries are defined as follows: \begin{definition}\cite{BKR} Given \textit{univariate} polynomials $p, q$ with $p \neq 0$, the \textit{Tarski query} $N(p, q)$ is: \begin{align*} N(p, q) =&\ \#\{ x \in \mathbb{R} ~|~ p(x) =0, q(x) > 0 \}\ - \\ &\ \#\{ x \in \mathbb{R} ~|~ p(x) =0, q(x) < 0\}. \end{align*} \end{definition} These Tarski queries can be computed from the Euclidean remainder sequence that starts with $p$ and $p'q$: \begin{proposition} (Sturm-Tarski Theorem) Let $p \neq 0$ and $q$ be real \emph{univariate} polynomials. Let $p_1 = p$, $p_2 = p'q$, $p_3, \dots, p_k$ be the Euclidean remainder sequence of $p$ and $p'q$, where $$p_i = c_ip_{i+1} - p_{i+2},$$ for $c_i\in\mathbb{R}[x]$ and where $\deg(p_{i+2})<\deg(p_{i+1})$. Let $a_i$ be the leading coefficient of $p_i$ and let $d_i := \deg(p_i)$. Let $S^+(p, q)$ denote the number of sign changes in the sequence $a_1, \dots, a_k$, and let $S^-(p, q)$ denote the number of sign changes in the sequence $(-1)^{d_1}a_1, \dots, (-1)^{d_k} a_k$. Then $N(p, q) = S^-(p, q) - S^+(p, q)$. \end{proposition} This result is from the literature \cite[Prop.~8.1]{DBLP:journals/jsc/Renegar92b} (with an unnecessary assumption removed that is not included in other references \cite{algRAG} or in Isabelle's existing formalization \cite{Sturm_Tarski-AFP} of the Sturm-Tarski theorem). Critically, in the Sturm-Tarski theorem, it is not the values of $a_1, \dots, a_k$ that matter but rather their signs; this is what enables the multivariate generalization \cite{DBLP:journals/jcss/Ben-OrKR86}. Consider polynomials $p \neq 0$ and $q$ in $x$ with coefficients that are polynomials in $y_1, \dots, y_n$ (i.e., $p, q \in \mathbb{R}[y_1, \dots, y_n][x]$). Then, we can form Euclidean remainder sequences of $p$ and $p'q$ with respect to $x$. The Euclidean remainder sequence is no longer unique---instead, there are multiple sequences, each depending on the signs of the coefficients of $p$ and $q$ (as coefficients that are polynomials can have different signs at different points). Once we fix a sequence and find the leading coefficients, we need to consider (by branching) \textit{all} sign assignments to those coefficients\footnote{Full BKR would consider all consistent sign assignments instead.}, and output a list of Tarski queries and the assumptions they are subject to. For example, take polynomials $p = y^2x + 1$ and $q = yx + 1$. If $y^2 = 0$, then $y = 0$, so $p = q = 1$; the Euclidean remainder sequence is just $1$, and $N(p, q) = 0$.\footnote{Technically, our formalization would do more branching than this for two reasons: First, it will branch on $y^2 = 0$, $y^2 > 0$, and (unnecessarily) $y^2 < 0$; and second, it will not determine that $y^2 = 0$ implies $y = 0$---and so it will not know that $q = 1$ whenever $y^2 = 0$.} However, if $y \neq 0$, then our Euclidean remainder sequence is $y^2x + 1, y^3x + y^2, -(1 - y)$, where we have calculated $y^2x + 1 = \frac{1}{y}\cdot(y^3x + y^2) + (1 - y)$, using assumption $y \neq 0$ for $\frac{1}{y}$.
Now, continuing the computation of $N(y^2x + 1, yx + 1)$, we find that the leading coefficients of our Euclidean remainder sequence (assuming $y \neq 0$) are $y^2, y^3,$ and $-(1-y)$. Next, we consider the possible sign assignments to $y^2, y^3$, and $-(1 -y)$. For example, $(+, +, -)$ is one such sign assignment. So, we have Tarski query $N(p, q) = S^-(p, q) - S^+(p, q) = 0 - 1 = -1$ under the assumptions that: $y \neq 0$, $y^2 > 0$, $y^3 > 0$, and $-(1 - y) < 0$. Our output for $N(y^2x + 1, yx + 1)$ would be a list of all the Tarski queries under all possible assumptions. This computation is visualized in \rref{fig:MultivTQ} (where, for purposes of space, only two output branches are explicitly shown). \begin{figure}[t] \centering \includegraphics[scale=0.48]{MultivTQ.pdf} \caption{Computing Tarski queries for $p = y^2x + 1$, $q = yx + 1$.} \label{fig:MultivTQ} \end{figure} Note that Euclidean remainder sequences for multivariate polynomials sometimes contain fractions. While we could have chosen to work with Euclidean remainder sequences in a \textit{fraction field}, this would require complicated type switching in the formalization. Instead, we use \textit{pseudo-remainder sequences} for multivariate polynomials. Pseudo-remainder sequences are essentially Euclidean remainder sequences for polynomials, but normalized so as not to contain fractions (and to not affect the result of the Sturm-Tarski computation \cite{li2019deciding}). We formalize pseudo-remainder sequences for multivariate polynomials of type \isa{real\ mpoly\ poly}, which we abbreviate as \isa{rmpoly} (currently, our formalization naively branches on the signs of the leading coefficients of the relevant polynomials). Here, we benefit from prior work: The Sturm-Tarski theorem was formalized in Isabelle/HOL by Wenda Li \cite{Sturm_Tarski-AFP}, and Li, Passmore, and Paulson later developed univariate Tarski queries with pseudo-remainder sequences \cite{li2019deciding}. Since QE is concerned with sign information for multiple polynomials simultaneously, it is useful to generalize the notion of Tarski queries to \textit{sets} of polynomials \cite{BKR} as follows: \begin{definition} Given a polynomial $p$ and a list of polynomials $q_1, \dots, q_n$, let $I$ and $J$ be subsets of $\{1, \dots, n\}$. Then, the Tarski query $N(I, J)$ with respect to $p$ is \begin{align*} N(&I, J) = N(p^2 + \left(\Sigma_{i \in I} q_i^2\right), \Pi_{j \in J}~q_j) =\\ &\#\{ x \in \mathbb{R} ~|~ p(x) =0, \forall i \in I. ~q_i(x) = 0, \Pi_{j \in J}~q_j(x) > 0 \}\ - \\ &\ \#\{ x \in \mathbb{R} ~|~ p(x) =0, \forall i \in I. ~q_i(x) = 0, \Pi_{j \in J}~q_j(x) < 0\}. \end{align*} \end{definition} The matrix equation determines the signs of $q_1,$ $\dots, q_n$ at the zeros of $p$ by computing $N(I, J)$ for a representative set of combinations of subsets $I, J$ of $q_1, \dots, q_n$ (see \rref{sec:usingtq}). There are two key lemmas that we prove about multivariate Tarski queries. The first is a soundness lemma showing that the resulting multivariate Tarski queries agree, on every point satisfying the associated assumptions, with what the univariate Tarski query would have been: \begin{isabelle} \multivTQone \end{isabelle} Here, the \isa{construct\_NofI\_M} function constructs a list of multivariate Tarski queries and the assumptions they are subject to. As input, it takes a polynomial \isa{p}, an initial set of assumptions \isa{acc}, and two lists of polynomials \isa{I} and \isa{J}. 
Both \isa{p} and all of the polynomials in \isa{I} and \isa{J} have type \isa{rmpoly}, i.e., they are univariate polynomials in $x$ with polynomial coefficients in some variables $y_1, \dots, y_n$. The \isa{inset} hypothesis states that we have some particular Tarski query \isa{tarski\_query} that is subject to the assumptions \isa{assumps}, which are assumptions on polynomials in $y_1, \dots, y_n$. Now, the \isa{construct\_NofI\_R} function is the function to compute univariate Tarski queries from prior work \cite{BKR}, so the conclusion of the lemma is that \isa{tarski\_query} is exactly the (unique) univariate Tarski query that would be computed from evaluating \isa{p} and all of the polynomials in \isa{I}, \isa{J} on \isa{val} (using the \isa{eval\_mpoly\_poly} and \isa{eval\_mpoly\_poly\_list} functions), where \isa{val} is any assignment of real values to $y_1, \dots, y_n$ where the assumptions \isa{assumps} are realized. The second key lemma is a completeness result: \begin{isabelle} \multivTQtwo \end{isabelle} This lemma shows that if the initial assumptions \isa{init\_assumps} are satisfied by valuation \isa{val}, then there is some resulting pair \isa{(assumps, tq)} of assumptions and an associated Tarski query where all final assumptions \isa{assumps} are satisfied by \isa{val}. Together, these two lemmas give a strong result: the soundness lemma shows us that the multivariate results coincide with univariate results in all projections meeting the final assumptions, and the completeness lemma shows us that for any projection meeting the initial assumptions, there is some corresponding Tarski query whose associated (final) assumptions are met by the projection. Or, on a more intuitive level, the completeness lemma shows us that our function to compute multivariate Tarski queries generates useful output whenever it is given useful input, and the soundness lemma shows that useful output has the desired mathematical meaning. \subsubsection{Using Multivariate Tarski Queries}\label{sec:usingtq} The matrix equation connects a vector of information about \textit{possible sign assignments} for a set of multivariate polynomials---i.e., sign assignments that are not necessarily consistent---on the LHS, to a vector of multivariate Tarski queries on the RHS. The univariate matrix equation is defined as follows, where we closely follow the definition from prior work \cite{BKR}, adapted to our purposes\footnote{The univariate BKR paper \cite{BKR} follows the matrix equation developed in Ben-Or, Kozen, and Reif's original paper \cite{DBLP:journals/jcss/Ben-OrKR86}, where $p$ is assumed to be coprime with each $q_i$. Because this assumption no longer makes sense in a multivariate setting, we use the matrix equation developed by Renegar \cite{DBLP:journals/jsc/Renegar92b}. While prior work \cite{BKR} formalized both styles of matrix equation \cite{BKR_AFP}, only the former was discussed at length in the paper.}: \begin{definition} Fix univariate polynomials of interest $p$ and $q_1, \dots, q_k$. Let $\tilde{\Sigma} = \{\tilde{\sigma}_1, \dots, \tilde{\sigma}_m\}$ be a set of possible sign assignments to $q_1, \dots, q_k$, and assume $\tilde{\Sigma}$ contains all consistent sign assignments to $q_1, \dots, q_k$ at the roots of $p$. Let $S$ be a set of pairs of subsets $(I_1, J_1),$ $\dots,$ $(I_{l}, J_{l})$ where for all $1 \leq i \leq l$, $I_i \subseteq \{1, \dots, k\}$ and $J_i \subseteq \{1, \dots, k\}$.
Then the \emph{matrix equation} for $\tilde{\Sigma}$ and $S$ is the relationship $M \cdot w = v$ between the following three entities: \begin{itemize} \item $M$, the $l$-by-$m$ matrix with entries \[M_{i,j} = \left(\Pi_{\ell \in I_i} (1 - (\tilde{\sigma}_j(q_\ell))^2)\right) \cdot \left(\Pi_{\ell \in J_i} \tilde{\sigma}_j(q_\ell)\right)\in\{-1,0,1\}\] for $(I_i,J_i) \in S$ and $\tilde{\sigma}_j \in \tilde{\Sigma}$, \item $w$, the length $m$ vector whose entries count the number of roots of $p$ where $q_1, \dots, q_k$ has sign assignment $\tilde{\sigma}_i$, i.e., $ w_i = \#\{x \in \mathbb{R} ~|~ p(x) = 0, \text{sgn}(q_{\ell}(x)) = \tilde{\sigma}_i(q_{\ell}) ~\text{for all}~ 1\leq \ell \leq k\}$, \item $v$, the length $l$ vector consisting of Tarski queries for the subsets, i.e., $v_i = N(I_i, J_i)$. \end{itemize} \end{definition} Intuitively, as noted by prior work \cite{BKR}, the meaning of a matrix equation is captured by its associated list of signs and list of (pairs of) subsets. Both the matrix $M$ and the RHS vector $v$ are fully computable from these two lists, and $w$, which stores information about which possible sign assignments are consistent (sign assignment $\tilde{\sigma}_i$ is consistent iff $w_i$ is nonzero), is calculated as $M^{-1}\cdot v$. Now, for multivariate polynomials the situation is more complicated. We can still construct a matrix equation for multivariate polynomials---the definition of the matrix $M$ is the same as it was in the univariate setting, but the right-hand-side vector uses our function to construct a list of Tarski queries for multivariate polynomials. Each RHS vector---and so each matrix equation---comes with an associated list of assumptions which were generated by the multivariate Tarski queries. So, for input multivariate polynomials $p$ and $q_1, \dots, q_k$, we construct a \textit{list} of multivariate matrix equations that store sign information for these polynomials, subject to certain assumptions on polynomials in one fewer variable. The overall construction is very similar to that in the univariate case \cite{BKR}. It proceeds by induction on the number of $q$'s, so that the base case is for a single $q$. Smaller matrix equations are successively combined and reduced to form the matrix equation for $q_1, \dots, q_k$. The reduction is what differentiates the matrix equation of BKR from that of Tarski: information for inconsistent sign assignments is removed at appropriate intervals, which decreases the size of the matrix equation. In the univariate case, the size of the matrix equation is bounded by $(\text{card}\{x.\ p(x) = 0\})^2$, where $\text{card}\{x.\ p(x) = 0\}$ is the number of roots of the polynomial $p$. The size of a multivariate matrix equation is bounded by the number of roots of $p$ in a valuation satisfying the associated assumptions. As the univariate reduction step mainly involves computations on the matrix $M$, which is unchanged in the multivariate setting, it generalizes quite naturally, and so our algorithm essentially inherits the reduction in the matrix equation construction, thus incorporating insights from BKR into our hybrid algorithm.
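As a concrete instance of the univariate definition above (a worked example, not a snippet of the formalization): for a single polynomial $q$, taking the possible sign assignments in the order $\tilde{\Sigma} = ((0), (1), (-1))$ and the subset pairs $S = ((\emptyset,\emptyset), (\emptyset,\{1\}), (\{1\},\emptyset))$, the matrix equation $M \cdot w = v$ reads \[ \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & -1 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = \begin{pmatrix} N(\emptyset,\emptyset) \\ N(\emptyset,\{1\}) \\ N(\{1\},\emptyset) \end{pmatrix}, \] where $w_1$, $w_2$, and $w_3$ count the roots of $p$ at which $q$ is zero, positive, and negative, respectively. The first row states that the three counts sum to the total number of roots of $p$ (since $N(\emptyset,\emptyset) = N(p^2, 1)$ counts the distinct real roots of $p$), the second that $w_2 - w_3 = N(p^2, q)$, and the third that $w_1 = N(p^2 + q^2, 1)$; as this $M$ is invertible, the three Tarski queries determine $w = M^{-1} \cdot v$.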
We formalize our multivariate matrix equation construction in the \isa{calculate\_data\_assumps\_M} function (cross reference \rref{sec:SignDet}), and prove the following two key lemmas: \begin{isabelle} \calcdatacorrect \end{isabelle} This first lemma connects the behavior of our multivariate matrix equation constructor function to the Renegar-style univariate matrix equation function (\isa{calculate\_data\_R}) formalized in prior work \cite{BKR}. That is, on any valuation \isa{val} that satisfies the assumptions \isa{assumps}, the associated multivariate matrix equation \isa{mat\_eq}, which finds the consistent sign assignments for \isa{qs} at the zeros of some \isa{p} in the valuation \isa{val}, is equal to the univariate matrix equation that finds the consistent sign assignments for \isa{eval\_qs} at the zeros of \isa{eval\_p}, where \isa{eval\_p} is \isa{p} evaluated on \isa{val} and \isa{eval\_qs} is \isa{qs} evaluated on \isa{val}. This is a soundness lemma, since it explains that whenever our output is useful, it has the correct mathematical meaning. \begin{isabelle} \calcdatacomplete \end{isabelle} This second lemma shows that when we give logically consistent input assumptions to the function \isa{calculate\_data\_assumps\_M}, some output with logically consistent assumptions will be generated (i.e., useful input generates useful output). These lemmas are analogous to those discussed for multivariate Tarski queries; taken together, they help us prove key correctness properties of our \isa{elim\_forall} method, which serves to eliminate a single universal quantifier. We now turn to a discussion of our top-level QE methods, including \isa{elim\_forall}. \subsection{Overall Quantifier Elimination Algorithm}\label{sec:qe} To best explain our formalized QE algorithm, we must first touch on the framework we are working with. We build on the framework of Scharager \emph{et al.} \cite{scharager2021verified}; this work verified (in Isabelle/HOL) the virtual substitution algorithm, an efficient QE method that applies to a low-degree fragment of real arithmetic. Their development sets up a framework for multivariate QE (including a type for real QE problems and a function to evaluate QE problems at real-valued points); by building on this, we are ultimately able to link together our verified (complete, inefficient) QE method with Scharager \emph{et al.}'s, essentially using their (incomplete but experimentally promising) verified QE method as a preprocessing step for our algorithm. Accordingly, we work with formulas of type \isa{atom\ fm} \cite{scharager2021verified}, which have the following grammar: \begin{align*} F, G~&\mathrel{::=}~\text{TrueF}~ ~|~ \text{FalseF} ~|~ (\text{Atom}(\text{Eq}\ p)) ~|~ (\text{Atom}(\text{Less}\ p))~|~ \\ &\ (\text{Atom}(\text{Leq}\ p)) ~|~ (\text{Atom}(\text{Neq}\ p)) ~|~ \text{And}\ F\ G ~|~ \text{Or}\ F\ G ~|~ \\ &\ \ \text{Neg}\ F ~|~ \text{ExQ}\ F ~|~ \text{AllQ}\ F ~|~ \text{ExN}\ n\ F ~|~ \text{AllN}\ n\ F, \end{align*} where $p$ is a real polynomial and $n \in \mathbb{N}$. Here, $(\text{Atom}(\text{Eq}\ p))$ captures the relationship $p = 0$, $(\text{Atom}(\text{Less}\ p))$ captures $p < 0$, $(\text{Atom}(\text{Leq}\ p))$ captures $p \leq 0$, and $(\text{Atom}(\text{Neq}\ p))$ captures $p \neq 0$. Further, $\text{And}\ F\ G$ captures the logical meaning of $F \land G$, $\text{Or}\ F\ G$ captures $F \lor G$, and $\text{Neg}\ F$ captures $\lnot F$.
Finally, $\text{ExQ}\ F$ indicates that formula $F$ is quantified by an existential quantifier, $\text{AllQ}\ F$ indicates that $F$ is quantified by a universal quantifier, $\text{ExN}\ n\ F$ indicates that $F$ is quantified by a block of $n$ existential quantifiers, and $\text{AllN}\ n\ F$ indicates that $F$ is quantified by a block of $n$ universal quantifiers. In these formulas, variables are represented with de Bruijn indices; \isa{Var\ 0} is the variable quantified by the innermost quantifier, \isa{Var\ 1} is the variable quantified by the second innermost quantifier, and so on. We operate on quantifiers inside-out, i.e., we start with the quantifier attached to \isa{Var\ 0}. Our \isa{elim\_forall} function is designed to eliminate a single $\forall$ quantifier. It parallels the method visualized in \rref{fig:QEOverview}. \begin{isabelle} \elimforall \end{isabelle} Here, \isa{extract\_polys} finds the polynomials \isa{qs} in our formula \isa{F}, and \isa{univariate\_in\ qs\ 0} transforms our polynomials \isa{qs} to have the \isa{rmpoly} type (so that they are univariate polynomials in \isa{Var\ 0}, with coefficients that are multivariate polynomials in the higher-indexed variables). The resulting list of polynomials is called \isa{univ\_qs}. Then, in \isa{reindexed\_univ\_qs}, we transform the coefficients of every polynomial in \isa{univ\_qs} (which do not contain \isa{Var\ 0}) by lowering every variable index by 1. This lowering is crucial for finding all possible signs/assumptions pairs for our multivariate polynomial coefficients (cross reference \rref{sec:SignDet}), as \isa{sign\_determination} expects polynomials in \isa{Var\ 0}. We then retain all the sign assignments that satisfy our formula of interest, and return a disjunction of the associated assumptions. If our original formula involved polynomials in variables \isa{Var\ 0}, \dots, \isa{Var\ n}, then, because of the reindexing step, these assumptions will be polynomials in variables \isa{Var\ 0}, \dots, \isa{Var\ n - 1}. Our new \isa{Var\ 0}, which was previously \isa{Var\ 1}, will correctly match the new innermost quantifier, which was previously the second innermost quantifier, and so on. Our top-level QE method, named \isa{qe}, heavily relies on \isa{elim\_forall} and \isa{elim\_exist} (where \isa{elim\_exist\ F} is defined as \isa{Neg\ \isacharparenleft elim\_forall\ \isacharparenleft Neg\ F\isacharparenright \isacharparenright}): \begin{isabelle} \qe \end{isabelle} Here, because the connective names \isa{And} and \isa{Or} are overloaded in the files we import, we must specify that they come from the \isa{PolyAtoms} file \cite{scharager2021verified}. Our top-level correctness theorem says that for any assignment \isa{\isasymnu} of the free variables in \isa{F} to real numbers, our original formula \isa{F} has the same truth-value as \isa{qe\ F}; or, in other words, \isa{F} and \isa{qe\ F} are logically equivalent: \begin{isabelle} \qecorrect \end{isabelle} Here, \isa{PolyAtoms.eval} is the function formalized by Scharager \emph{et al.} \cite{scharager2021verified} to evaluate formulas of type \isa{atom\ fm} on valuations. This function accounts for the reindexing of free variables that naturally takes place during QE.
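To make the de Bruijn convention concrete, here is a small hypothetical Python mock-up of evaluation in the style of \isa{PolyAtoms.eval} (ours, for illustration only: the real semantics quantifies over all of $\mathbb{R}$, which we crudely approximate with a finite sample), using the formula $\forall x.\ x^2y \leq 0$ discussed next:
{\small\begin{verbatim}
# Quantifying a formula prepends the bound value
# to the valuation, so index 0 always refers to
# the innermost binder.  Polynomials are modeled
# as functions from a valuation (list) to a real.
SAMPLE = (-2.0, -0.5, 0.0, 0.5, 2.0)

def eval_fm(F, val):
    tag = F[0]
    if tag == "Leq":    # Atom (Leq p): p <= 0
        return F[1](val) <= 0
    if tag == "Or":
        return (eval_fm(F[1], val)
                or eval_fm(F[2], val))
    if tag == "AllQ":   # "forall x. eval F (x#v)"
        # finite sample stands in for all reals
        return all(eval_fm(F[1], [b] + val)
                   for b in SAMPLE)
    raise ValueError(tag)

# forall x. x^2 * y <= 0,
# with x = Var 0 (innermost) and y = Var 1:
F = ("AllQ", ("Leq", lambda v: v[0]**2 * v[1]))
print(eval_fm(F, [-1.0]))  # y = -1: True
print(eval_fm(F, [1.0]))   # y =  1: False
\end{verbatim}}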
For example, $\forall x.\ x^2y \leq 0$ is logically equivalent to $y \leq 0$, but since variables are represented with de Bruijn indices, where the innermost quantifier corresponds with \isa{Var\ 0}, $\forall x.\ x^2y \leq 0$ is represented in the \isa{atom\ fm} type as \isa{AllQ\ (Leq\ ((Var\ 0)\isacharcircum 2\ \isasymcdot\ Var\ 1))} whereas $y \leq 0$ is represented as \isa{Leq\ (Var\ 0)}. In \isa{PolyAtoms.eval}, this subtlety is handled by defining, e.g., \isa{PolyAtoms.eval\ (AllQ\ F)\ v} as \isa{(\isasymforall\ x.\ (PolyAtoms.eval\ F\ (x\isacharhash v)))}, where \isa{x\isacharhash v} is the list with head \isa{x} and tail \isa{v}. So, \isa{qe\_correct} shows that \isa{F} evaluated on any mapping of free variables to real numbers is equal to \isa{qe\ F} evaluated on that same mapping, which establishes that \isa{qe} is sound. We also show that \isa{qe} fully removes quantifiers in the following lemma, where \isa{countQuantifiers} counts the number of existential or universal quantifiers in formula \isa{F}: \begin{isabelle} \qeremoves \end{isabelle} \noindent This result shows that \isa{qe} is complete. To our knowledge, \isa{qe} is the first sound and complete algorithm for real QE to be formalized in Isabelle/HOL\xspace (previous work \cite{DBLP:journals/jar/Nipkow10, scharager2021verified} was sound but not complete). We now turn to some further details regarding our formalization. \section{Formalization Details} Isabelle/HOL\xspace is well-suited for us; we not only benefit considerably from the well-developed libraries (including the aforementioned prior work \cite{BKR,li2019deciding,scharager2021verified}), but also from Isabelle/HOL\xspace's support for automated proof search in Sledgehammer \cite{DBLP:conf/lpar/PaulsonB10}. At the same time, working in the formal setting of Isabelle/HOL\xspace poses considerable challenges. In this section, we begin by discussing some of those challenges, followed by some of the high-level proof techniques that helped us succeed in our formalization. We then discuss some useful low-level details regarding our extensions to Isabelle/HOL\xspace's multivariate polynomials library. Finally, we discuss our code export and the performance of our algorithm. \subsection{Challenges}\label{sec:Difficult} Many design decisions for the functions described in \rref{sec:QE} were not initially evident. For example, the need to consistently track assumptions and pass them in as an argument to our functions throughout the calculation of the matrix equation was initially not obvious. At first, we wrote a function that was nearly identical to \isa{calculate\_data\_assumps\_M}, with the one major difference that we did not include \isa{assumps} as an argument to this function. While this function was fully capable of generating a multivariate matrix equation, we soon realized we had made a major mistake when we tried to extend it into a larger QE algorithm. After this, we were careful to always include an argument for assumptions in our functions if it could possibly be applicable, regardless of whether or not it seemed immediately relevant. The challenge of correctly formalizing the algorithm in Isabelle/HOL\xspace is heightened because the precision of formalization sometimes identifies details that were underspecified in the source material. Indeed, BKR's discussion of the multivariate QE algorithm was limited to only two pages and proceeds at a very high level \cite{DBLP:journals/jcss/Ben-OrKR86}.
Renegar \cite{DBLP:journals/jsc/Renegar92b} is considerably more detailed, but is also written in the style of mathematics, which necessitates some translation to the level of formalization. For example, the way in which the limit point calculation should be formalized, while entirely obvious in retrospect, did not become clear to us until we fixed a method of branching---and indeed, our initial method of formalizing the limit point calculation, which was agnostic to branching, did not make it into the final code for the algorithm. Of this calculation, Renegar writes the following, in which he uses the notation $g_i$ where we use $q_i$, and $f$ instead of $p$ \cite{DBLP:journals/jsc/Renegar92b}: ``. . . each consistent sign vector of $\{g_i\}_i$ occurs at some real zero of $f$ except, perhaps, for the sign vectors of points to the right or left of all real zeros of $\prod_i g_i$. However, the latter two consistent sign vectors are trivially determined from the leading coefficients of the polynomials $g_i$.'' While this completely describes the mathematical use of the limit point calculations, it took some time to translate it into Isabelle/HOL\xspace definitions and proofs. A final challenge is that even simple details can become complex in the formalized setting of a theorem prover. For example, working with multivariate polynomials in Isabelle/HOL\xspace poses a challenge, as the formal setting requires rigor even for operations that are simple on paper but may become much more involved when formalized. For instance, the transformation to treat a multivariate polynomial as univariate in some variable of interest is immediate on paper, but in Isabelle/HOL it is more subtle, precisely because the type of our object is changing: $3xyz^2 + 6x^2wv + 5xy + 1$ has type \isa{real\ mpoly}, whereas $(6wv)x^2 + (3yz^2 + 5y)x + 1$ has type \isa{rmpoly} (see also \rref{sec:SignDet}). \subsection{High Level Proof Techniques} Though treating multivariate polynomials as univariate in some variable of interest poses low-level challenges in our formal setting, it affords significant high-level simplifications. Many of our proofs rely on the technique of universal \textit{projection}---we assume fixed real values for all variables aside from a variable of interest, which lets us work with \textit{truly} univariate polynomials. Projection allows us to connect functions in our multivariate construction to corresponding functions in the univariate construction from prior work \cite{BKR}. This works because the multivariate case of the BKR algorithm builds rather directly on the univariate case, making it amenable to formalization, as noted previously \cite{BKR}. In consequence, each key function involved in the construction of the multivariate matrix equation requires two top-level associated lemmas. The first is a soundness lemma which connects the behavior of the multivariate function to a corresponding univariate function \cite{BKR} through projection. The second is a completeness lemma which establishes that data for all possible projections is captured by the function for some assumptions. Some examples of these soundness and completeness lemmas are seen in \rref{sec:MatEq} (e.g., \isa{multiv\_tarski\_query\_correct} and \isa{multiv\_tarski\_query\_complete}); there are many more in the actual proof development.
This proof structure does not seek to closely mimic the (highly mathematical) proofs in the source material \cite{BKR,DBLP:journals/jsc/Renegar92b}, but rather to translate the key intuition into a shape which is amenable to formalization. Our construction and proofs are designed to be modular, and we often rely on induction to prove key properties of helper functions. In particular, we found it very helpful to use custom induction theorems, supplementing those automatically generated by Isabelle/HOL\xspace. For example, the \isa{spmods\_multiv\_aux} function shown (abridged) below computes a list of pseudo-remainder sequences for polynomials \isa{p} and \isa{q} together with corresponding sign assumptions on the leading coefficients of the polynomials in each sequence. \begin{isabelle} \spmodsauxmanual \end{isabelle} The function branches depending on whether \isa{q} is the zero polynomial; otherwise, it recurses on the (possible) signs of its leading coefficient \isa{lead\_coeff\ q}. Here, \isa{assumps} specifies a list of assumed input sign conditions, which are checked for assumptions on \isa{lead\_coeff\ q}. Notably, \isa{spmods\_multiv\_aux} is \emph{not} structurally recursive; its termination uses the fact that, on each recursive call, the degree of the polynomial argument (\isa{one\_less\_degree\ q} or \isa{mul\_pseudo\_mod\ p\ q}) strictly decreases. For such functions, Isabelle/HOL\xspace automatically generates induction theorems, but these theorems lack the usual case-splitting support for structurally recursive functions~\cite{DBLP:conf/mkm/Wenzel06}. The following snippet shows the Isabelle/HOL\xspace subgoal (\verb|cases|) structure that results from applying induction with the generated theorem for \isa{spmods\_multiv\_aux}.
{\small\begin{verbatim}
// apply (induct ... spmods_multiv_aux.induct)
Proof outline with cases:
case (1 p q assumps)
...
qed
\end{verbatim}}
Although \isa{spmods\_multiv\_aux.induct} can, \emph{in principle}, be used to prove the aforementioned soundness and completeness properties for \isa{spmods\_multiv\_aux}, we found the proofs tedious in practice because they lack the case structuring benefits of Isabelle/HOL\xspace's proof language~\cite{DBLP:conf/mkm/Wenzel06}---users essentially have to redo the termination argument for each case when invoking the inductive hypothesis. Instead, we manually prove an alternative induction theorem that mimics the branching structure of \isa{spmods\_multiv\_aux} (one base case, three branches with recursion). As before, a snippet of the Isabelle/HOL\xspace subgoal (cases) structure is shown below (comments added to illustrate the branching structure).
{\small\begin{verbatim}
// apply (induct ... spmods_multiv_aux_induct)
Proof outline with cases:
case (Base p q assumps)
...        // base case (q = 0)
next
case (Rec p q assumps)
...        // lookup_assump_aux returns None
next
case (Lookup0 p q assumps)
...        // lookup_assump_aux returns Some 0
next
case (LookupN0 p q assumps r)
...        // otherwise
qed
\end{verbatim}}
Though some manual effort is needed to state and prove \isa{spmods\_multiv\_aux\_induct}, our subsequent, repeated use of this customized induction theorem makes it well worth the initial investment. Manual induction theorems are also used elsewhere in the development, particularly to verify invariant properties of the helper function that underlies the branching function \isa{lc\_assump\_generation\_list} (see \rref{sec:SignDet}).
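For readers less familiar with pseudo-remainder sequences, the following small Python/SymPy sketch (ours, for illustration; the formalized \isa{spmods\_multiv\_aux} instead works over \isa{rmpoly} and additionally tracks sign assumptions on leading coefficients) makes the strictly decreasing degree measure explicit on a plain univariate example:
{\small\begin{verbatim}
# A plain pseudo-remainder sequence; each step
# strictly decreases degree(seq[-1], x), the
# measure behind the termination argument for
# spmods_multiv_aux.
from sympy import symbols, prem, degree

x = symbols('x')

def pseudo_remainder_sequence(p, q):
    seq = [p, q]
    while degree(seq[-1], x) > 0:
        seq.append(prem(seq[-2], seq[-1], x))
    return seq  # ends at a constant (or zero)

p = x**3 - 3*x + 1
print(pseudo_remainder_sequence(p, p.diff(x)))
# degrees: 3, 2, 1, 0
\end{verbatim}}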
\subsection{Library Extensions}\label{sec:LibExt} We turn to some of our key results for multivariate polynomials and the library extensions they prompted. As seen in \rref{sec:Difficult}, we need a function to convert polynomials of type \isa{real\ mpoly} to polynomials of type \isa{real\ mpoly\ poly}. Eberl and Thiemann formalized one such way of doing this in their \isa{mpoly\_to\_mpoly\_poly} definition \cite{Factor_Algebraic_Polynomial-AFP}. We provide the following alternate definition, which is executable: \begin{isabelle} \mpolytopolyalt \end{isabelle} This function relies on the \isa{isolate\_variable\_sparse} function \cite{Virtual_Substitution-AFP}, where \isa{isolate\_variable\_sparse\ p\ x\ i} finds the coefficient of \isa{x\isacharcircum i} in \isa{p}. For each \isa{i} from 0 to the degree of \isa{x} in \isa{p}, we find this coefficient and construct a monomial of type \isa{poly} with degree \isa{i} and this coefficient. Our final polynomial is the sum of all of these monomials. We connect our new definition to \isa{mpoly\_to\_mpoly\_poly} in the following lemma: \begin{isabelle} \multivasuniv \end{isabelle} This enables a natural interface between Eberl and Thiemann's work \cite{Factor_Algebraic_Polynomial-AFP} and the large collection of lemmas regarding \isa{isolate\_variable\_sparse} \cite{Virtual_Substitution-AFP}. We benefit from Eberl and Thiemann's lemmas regarding \isa{mpoly\_to\_mpoly\_poly} in one of our main results about polynomials, which is useful in our correctness proof for \isa{elim\_forall} (cross reference \rref{sec:qe}): \begin{isabelle} \reindexeduniveval \end{isabelle} This lemma relates the evaluation of multivariate polynomials, of type \isa{real\ mpoly}, and multivariate polynomials \textit{treated as univariate polynomials} in the variable of interest \isa{Var\ 0}, of type \isa{rmpoly}. Here, \isa{eval\_mpoly} is our name for the natural definition of multivariate polynomial evaluation which substitutes real values for variables. Because variables are represented with de Bruijn indices, we can store the values to substitute in a list \isa{L}, where the element of \isa{L} at position 0 is then substituted for \isa{Var\ 0}, the element of \isa{L} at position 1 is substituted for \isa{Var\ 1}, and so on. If the length of \isa{L} is shorter than the number of variables, a default value of 0 is substituted for any variables that are not covered by \isa{L}. This definition previously existed, unnamed, in prior work \cite{scharager2021verified}: \begin{isabelle} \evalmpoly \end{isabelle} The \isa{eval\_mpoly\_poly} function maps \isa{eval\_mpoly} over the coefficients of a \isa{real\ mpoly\ poly}. Next, the \isa{lowerPoly} function is from Scharager \emph{et al.} \cite{scharager2021verified}; here, it serves to reindex variables in multivariate polynomials, so that \isa{lowerPoly\ 0\ 1\ q} lowers every variable index in \isa{q} by 1. The \isa{univariate\_in} operator is our function to perform this multivariate to univariate transformation. Let $q_i$ be the polynomial at the $i$th index of \isa{qs}, and $uq_i$ be the polynomial at the $i$th index of \isa{univ\_qs}---then the first assumption of \isa{reindexed\_univ\_qs\_eval} says that $uq_i$ is the polynomial that we obtain by treating $q_i$ as univariate in \isa{Var\ 0}. The second assumption says that \isa{reindexed\_univ\_qs} is the list of polynomials obtained by lowering all variable indices in the \textit{coefficients} of the \isa{univ\_qs} by 1.
Let us call $ruq_i$ the polynomial at the $i$th index of \isa{reindexed\_univ\_qs}. Then, lemma \isa{reindexed\_univ\_qs\_eval} captures the mathematical equivalence of $q_i$ and $ruq_i$ by showing that evaluating $q_i$ on the valuation $v$ = \isa{x\isacharhash xs} gives the same result as evaluating the \textit{coefficients} of $ruq_i$ on \isa{xs} and then evaluating the resulting univariate polynomial (which now has constant coefficients) on \isa{x}. The proof of this key lemma required that we first prove the following fundamental extensionality result, which says that if two polynomials \isa{p} and \isa{q} (in $n$ variables) have identical evaluations on $\mathbb{R}^n$, then they are themselves identical: \begin{isabelle} \mpolyeval \end{isabelle} Since real multivariate polynomials are fundamental to many areas of mathematics, it is our hope that our library developments will be useful to others, not only in the formalization of other QE algorithms but also more widely. \subsection{Code Export}\label{sec:Export} We export our multivariate QE algorithm to SML code, which removes overhead and allows us to better test our algorithm on examples\footnote{This step requires trusting Isabelle/HOL's code generator in addition to the theorem prover's trusted core. Partial progress has been made on verifying Isabelle's code generator \cite{DBLP:conf/esop/HupelN18}.}. Building on the framework of Scharager \emph{et al.} (by using the same type for QE formulas and the same evaluation function for formulas) makes the connection with the verified virtual substitution algorithm \cite{scharager2021verified} very easy\footnote{The top-level correctness theorems for verified virtual substitution \cite{scharager2021verified} have a very similar shape to \isa{qe\_correct}, as they state that for each top-level formalized virtual substitution method \isa{V} and valuation \isa{\isasymnu}, \isa{PolyAtoms.eval F \isasymnu} equals \isa{PolyAtoms.eval\ (V\ F)\ \isasymnu}. This makes it easy to verify that, for any valuation \isa{\isasymnu}, \isa{PolyAtoms.eval\ F\ \isasymnu} equals \isa{PolyAtoms.eval\ ((qe \isasymcirc\ V)\ F)\ \isasymnu}. }. This means that we are able to retain efficiency \cite{scharager2021verified} on examples that are tractable for virtual substitution. However, because virtual substitution is \textit{not} a complete QE method (i.e., it is not able to solve all QE problems), the efficiency, or lack thereof, of our (complete) algorithm is still significant. Unfortunately (but not unexpectedly), without the link to virtual substitution, our hybrid multivariate algorithm is not at all efficient; it appears to hang on all but the simplest univariate examples. However, we do not consider our algorithm's present inefficiency to be a fatal flaw, since we envision it as being a (major) stepping stone on the way towards an optimized algorithm. As noted previously \cite{scharager2021verified}, unverified computer algebra systems have realized efficient QE in part because many have been extensively optimized over several decades; thus, it is natural that optimized verified algorithms will similarly take time to develop.
While inefficiency is not unexpected, given that even Renegar's algorithm may not realize practical efficiency in its current state \cite{HongTechRpt,DBLP:journals/cj/HeintzRS93}, at present we strongly suspect that part of the efficiency bottleneck for our algorithm is the untenable branching in the computation of the multivariate Tarski queries; this can be significantly reduced in the future by implementing an algorithm that more closely follows BKR. We also believe that our algorithm's lack of inherent optimizations is another contributing factor; as one example, we currently branch unnecessarily on the signs of constant coefficients. However, it does not make sense to focus on optimizing our algorithm at this stage (optimizations may be brittle). Once the branching reflects the full reduction of BKR, inefficiencies (such as the unnecessary branching on constant coefficients) should be identified and handled appropriately. \section{Related Work} From a theoretical standpoint, the most closely related work is that of Cyril Cohen, who formalized a sign-determination algorithm with reduction in Coq that, to our understanding, uses the same matrix equation as our algorithm, although the details of his formalization look quite different from ours.\footnote{This is in part because the setup is considerably different: while we extended a univariate QE procedure with reduction to the multivariate setting, Cohen added reduction to an already multivariate sign-determination procedure.} To our knowledge, he has not yet used this improved sign-determination algorithm for a QE algorithm, and this work is unpublished, but a writeup is available on his webpage \cite{CyrilReduction}. Additionally, because the algorithm we verify is a hybrid between Tarski's QE algorithm and BKR, our work shares some theoretical overlap with Cohen and Mahboubi's formalization of Tarski's algorithm in Coq \cite{cohen_phd,AssiaQE}. From a practical standpoint, we benefit from the well-developed Isabelle/HOL libraries. This includes, of course, the verification of univariate BKR \cite{BKR} and virtual substitution \cite{scharager2021verified}, which have already been discussed at length. Additionally, we build on the formalization of pseudo-remainder sequences described by Li, Passmore, and Paulson \cite{li2019deciding}. This is not yet publicly available on the Isabelle/HOL Archive of Formal Proofs (AFP), but Wenda Li generously provided us with his code upon request. Although we formalize our own functions to generate pseudo-remainder sequences, which interface well with our assumptions-based framework (and which are specialized to the \isa{rmpoly} type), we derive insights from Li's code and mimic some of his structure in our functions, adapted appropriately to our purposes. We also benefit from proving a connection between our functions and his. \section{Conclusion and Future Work} We develop and formalize Isabelle/HOL's first complete multivariate quantifier elimination (QE) algorithm for the first-order logic of real-closed fields. Our algorithm mixes ideas from Tarski's original QE algorithm \cite{Tarski} and more efficient algorithms by BKR \cite{DBLP:journals/jcss/Ben-OrKR86} and Renegar \cite{DBLP:journals/jsc/Renegar92b}; the formalization requires making rigorous a number of high-level mathematical insights \cite{DBLP:journals/jcss/Ben-OrKR86,DBLP:journals/jsc/Renegar92b}.
We realize a number of ideas suggested in prior work by extending a univariate formalization of BKR \cite{BKR} to the multivariate world and by building on the framework of Scharager \emph{et al.} \cite{scharager2021verified} in order to link our work with an efficient verified virtual substitution QE algorithm. While our algorithm (on its own) is currently prohibitively inefficient, its nontrivial library extensions and theoretical interest (including its potential to be extended into variant algorithms with promising parallel complexity \cite{1993Improved,DBLP:journals/aaecc/CuckerLMPR92,DBLP:journals/jsc/Renegar92b}) make it a meaningful contribution. Future work includes first extending our algorithm to one that realizes the full reduction of BKR \cite{DBLP:journals/jcss/Ben-OrKR86}. After this, it would be interesting to identify other areas of inefficiency and aggressively optimize. In addition to fine-tuning the branching to avoid splitting on trivial cases (most notably, on constants), one very significant (and challenging) task will be to optimize the computation of the Tarski queries; this was previously noted in the univariate case also \cite{BKR}. Overall, our contribution lays considerable groundwork for more optimized verified QE algorithms with inherent parallelism. \section{Acknowledgements} This material is based upon work supported by the National Science Foundation under Grant No. CNS-1739629, a National Science Foundation Graduate Research Fellowship under Grant Nos. DGE1252522 and DGE1745016, by the AFOSR under grant number FA9550-16-1-0288, and by A*STAR, Singapore. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, AFOSR, or A*STAR. \let\oldbibliography\thebibliography \renewcommand{\thebibliography}[1]{% \oldbibliography{#1}% \addtolength{\itemsep}{-6pt}% }
{ "timestamp": "2022-09-23T02:14:41", "yymm": "2209", "arxiv_id": "2209.10978", "language": "en", "url": "https://arxiv.org/abs/2209.10978" }
\section{Mirror Descent versus Proximal Point}\label{sec:mirror-descent} To highlight an important message of our work, in this section, we briefly discuss a mirror descent scheme with alternating updates, and we compare it to our proximal point algorithm in~\Cref{fig:ours_comparison}. Note that in contrast to the classical RL setting, where proximal point and mirror descent coincide because of the linear objective, in imitation learning this is not the case. The updates for the mirror descent scheme involve alternation between updating the occupancy measure $\mathbf{d}_k$ and the feature expectation vector $\mbs{\lambda}_k$ in one stage and the cost weights in a second stage. That is, \begin{align} (\mbs{\lambda}_{k},\mathbf{d}_{k})&=\argmin_{(\mbs{\lambda},\mathbf{d})\in\mathfrak{M}_{\mbs{\Phi}}}\langle \mathbf{d},\mbf{c}_{\mbf{w}_{k}}\rangle +\tfrac{1}{\eta}D(\mbs{\lambda}||\mbs{\Phi}^\intercal\mathbf{d}_{k-1})+\tfrac{1}{\alpha}H(\mathbf{d}||\mathbf{d}_{k-1}), \label{eq:RL_update}\\ \mbf{w}_{k+1}&=\argmin_{\mbf{w}\in\Delta_{[m]}}\innerprod{{\mbs{\mu}_{{\pi_{\textup{E}}}}-\mathbf{d}_{k}}}{\mbf{c}_\mbf{w}}+\tfrac{1}{\beta}D(\mbf{w}||\mbf{w}_{k}). \end{align} Notice that the update in \Cref{eq:RL_update} corresponds to one update of Logistic $Q$-Learning \cite{Bas-Serrano:2020}. Therefore, it can be implemented by maximizing the negative logistic Bellman error, which is now a function only of the variable $\mbs{\theta}$ and not of both $(\mbs{\theta},\mbf{w})$ as in PPM. The next proposition is the counterpart of Proposition~\ref{prop:q-update} for the mirror descent scheme. \begin{proposition}\label{prop:KKT-conditions} For a parameter $\mbs{\theta}\in\ar^m$, we define the logistic state-action value function $\mathbf{Q}_{\mbs{\theta}}\in\ar^{|\sspace||\aspace|}$ by $\mathbf{Q}_{\mbs{\theta}}\triangleq\mbs{\Phi}\mbs{\theta}$, and the $k$-step logistic state value function $\mbf{V}_{\mbs{\theta}}^k\in\ar^{|\sspace|}$ by \[ V_{\mbs{\theta}}^k(s)\triangleq-\frac{1}{\alpha}\log\left(\sum_a \pi_{\mathbf{d}_{k-1}}(a|s)e^{-\alpha Q_{\mbs{\theta}}(s,a)}\right). \] Moreover, for a fixed cost $\mbf{c}=\mbf{c}_\mbf{w}$, we define the $k$-step Bellman error function $\boldsymbol{\delta}_{\mbs{\theta},\mbf{w}}^k$ by $ \boldsymbol{\delta}_{\mbs{\theta},\mbf{w}}^k\triangleq\mbf{w}+\gamma\mbf{M}\mbf{V}_{\mbs{\theta}}^k-\mbs{\theta}. $ Then, the unique solution of the above updates is given by \begin{align} \lambda_k(i) &\propto (\mbs{\Phi}^\intercal\mathbf{d}_{k-1})(i)\,e^{-\eta\delta_{\mbs{\theta}_k,\mbf{w}_{k}}^k(i)},\\ \pi_{\mathbf{d}_k}(a|s)&\propto\pi_{\mathbf{d}_{k-1}}(a|s)\,e^{-\alpha Q_{\mbs{\theta}_k}(s,a)},\\ w_{k+1,i}&\propto w_{k,i}\,e^{-\beta\langle \mbs{\phi}_i\,,\,\mbs{\mu}_{\pi_{\textup{E}}}-\mathbf{d}_{k}\rangle}, \end{align} where $\mbs{\theta}_k$ is the maximizer of the negative $k$-step logistic Bellman error function \[ \mathcal{G}_k(\mbs{\theta})\triangleq-\frac{1}{\eta}\log\sum^m_{i=1}(\mbs{\Phi}^\intercal\mathbf{d}_{k-1})(i)e^{-\eta\delta^k_{\mbs{\theta},\mbf{w}_{k}}(i)}+(1-\gamma)\innerprod{\mbs{\nu}_0}{\mbf{V}_{\mbs{\theta}}^k}. \] \end{proposition} Proposition~\ref{prop:KKT-conditions} leads to an actor-critic scheme that has three separate and alternating updates: (i) a policy update stage, (ii) a policy evaluation update, and (iii) a cost weights update. Similar actor-critic schemes, for different MDP models and different policy evaluation objectives (e.g., minimizing the squared Bellman error), have also been proposed in~\cite{Zhang:2020,Liu:2022,Shani:2021}.
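To make the alternation concrete, the following is a minimal \texttt{numpy} sketch of the three multiplicative updates of Proposition~\ref{prop:KKT-conditions} on a small, randomly generated tabular MDP. All sizes, step sizes, and helper names are illustrative assumptions; the identity feature matrix makes the linear MDP assumption hold trivially, and the inner maximization producing $\mbs{\theta}_k$ is treated as a given input rather than implemented.
\begin{verbatim}
import numpy as np

S, A, gamma = 4, 2, 0.9                      # toy sizes (assumed)
m = S * A
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over s'
nu0 = np.ones(S) / S                         # initial state distribution
Phi = np.eye(m)                              # identity features: linear MDP trivially
M = P.reshape(m, S)                          # so that P = Phi @ M
eta = alpha = beta = 0.5                     # step sizes (assumed)

def occupancy(pi):
    # exact normalized occupancy measure of pi via the flow equations
    P_pi = np.einsum('sap,sa->sp', P, pi)
    nu = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, nu0)
    return (nu[:, None] * pi).reshape(m)

def md_step(pi_prev, w, theta, mu_E):
    # one mirror descent step; theta plays the role of the inner maximizer
    Q = (Phi @ theta).reshape(S, A)
    V = -np.log(np.sum(pi_prev * np.exp(-alpha * Q), axis=1)) / alpha
    delta = w + gamma * (M @ V) - theta      # k-step Bellman error
    lam = (Phi.T @ occupancy(pi_prev)) * np.exp(-eta * delta)
    lam /= lam.sum()                         # lambda update
    pi = pi_prev * np.exp(-alpha * Q)
    pi /= pi.sum(axis=1, keepdims=True)      # policy update
    d = occupancy(pi)
    w_new = w * np.exp(-beta * (Phi.T @ (mu_E - d)))
    w_new /= w_new.sum()                     # cost-weight update
    return lam, pi, w_new
\end{verbatim}
For instance, one may set \texttt{mu\_E = occupancy(pi\_expert)} for a fixed expert policy and iterate \texttt{md\_step}; this is only a sketch of the update structure, not of the full scheme with its inner logistic Bellman error maximization.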
Contrary to these schemes, in our proximal imitation learning algorithm, the policy evaluation step involves optimization of a single objective over both cost and $Q$-functions. In this way, we avoid instability or poor convergence in optimization due to nested policy evaluation and cost update steps. In Section~\ref{sec:comparison}, we verify numerically that PPM outperforms mirror descent in simple tabular environments (see \Cref{fig:ours_comparison}). \section{Introduction}\label{introduction} This work is concerned with the prototypical setting of imitation learning (IL) where \begin{enumerate} \item An expert provides demonstrations of state-action pairs in an environment. The expert could be optimal or suboptimal with respect to an unknown cost/reward function. \item The learner chooses a distance measure between its policy to be learned and the empirical expert distribution estimated from the demonstrations. \item The learner employs an algorithm, which additionally may or may not use interactions with the environment, to minimize the chosen distance. \end{enumerate} In IL, the central goal of the learner is to recover a policy competitive with the expert with respect to the underlying unknown cost function. IL is important for several real-world applications, such as driving \cite{Knox:2021}, robotics \cite{Osa:2018}, and economics/finance \cite{Charpentier:2020}, at the expense of the following resources: ({\sc R1}) expert demonstrations, ({\sc R2}) (optional) interactions with the environment where the expert collected the demonstrations, and ({\sc R3}) computational resources for solving the problem template. Interestingly, while there is a vast amount of literature using optimization ideas on the IL problem template, e.g., Lagrangian duality \cite{Ho:2016,Fu:2018, Ke:2020, Kostrikov:2019, Kostrikov:2020}, resource guarantees are still widely missing, since the optimization literature focuses on the resource ({\sc R3}) whereas the IL literature mainly focuses on the first two resources ({\sc R1}) and ({\sc R2}). Our work leverages deeper connections between optimization tools and IL by showing how classical optimization tools can be applied to a linear programming formulation of the IL problem, guaranteeing efficiency in all of ({\sc R1}), ({\sc R2}), and ({\sc R3}).
\textbf{Our contributions:} This work aims at designing an algorithm enjoying both theoretical guarantees and convincing empirical performance. Our methodology is rooted in classical optimization tools and the LP approach to MDPs. More precisely, the method uses the recently repopularized overparameterization technique to obtain the $Q$-function as a Lagrange multiplier~\cite{Mehta:2020,Bas-Serrano:2021} and solves the associated program using a \textsc{PPM} update with appropriately chosen Bregman divergences. This results in an actor-critic algorithm, with the key feature that the policy evaluation step involves optimization of a single concave and smooth objective over both cost and $Q$-functions. In this way, we avoid instability or poor convergence due to adversarial training~\cite{Ho:2016,Zhang:2020,Liu:2022,Shani:2021}, and can also recover an explicit cost along with the $Q$-function. We further account for potential optimization errors, presenting an error propagation analysis that leads to rigorous guarantees for both the online and offline settings. For the context of linear MDPs~\cite{Bas-Serrano:2021, Yang:2019, Jin:2020, Cai:2020, Wang:2020b, Agarwal:2020b, Neu:2020}, we provide explicit convergence rates and error bounds for the suboptimality of the learned policy, under mild assumptions that are significantly weaker than those found in the literature until now. To our knowledge, such guarantees in this setting are provided for the first time. Finally, we demonstrate that our approach achieves convincing empirical performance for both linear and neural network function approximation. \textbf{Related Literature.} The first algorithm addressing the imitation learning problem is behavioral cloning \cite{Pomerleau:1991}. Due to the covariate shift problem \cite{Ross:2010,Ross:2011}, it has low efficiency in terms of expert trajectories ({\sc R1}). To address this issue, \cite{Russell:1998, Ng:2000, Abbeel:2004, Ratliff:2006, Syed:2007, Neu:2007, Ziebart:2008, Abbeel:2008, Levine:2010, Levine:2011} proposed to cast the problem as inverse reinforcement learning (IRL). IRL improves the efficiency in terms of expert trajectories, at the cost of introducing the need to run reinforcement learning (RL) repeatedly, which can be prohibitive in terms of environment samples ({\sc R2}) and computation ({\sc R3}).
A subsequent line of work, started by \cite{Syed:2008}, highlights that repeated calls to an RL routine can be avoided. This work inspired generative adversarial imitation learning (GAIL) \cite{Ho:2016} and other follow-up works \cite{Fu:2018, Ke:2020, Kostrikov:2019, Kostrikov:2020} that leveraged optimization tools like primal-dual algorithms but did not try to deepen the optimization connections to derive efficiency guarantees in terms of all of ({\sc R1}), ({\sc R2}), and ({\sc R3}). Finally, a recent line of work~\cite{Garg:2021, Kalweit:2020} in IL bypasses the need to optimize over cost functions and thus avoids instability due to adversarial training. Although these algorithms achieve impressive empirical performance in challenging high-dimensional benchmark tasks, they are hampered by limited theoretical understanding. This is the fundamental difference from our work, which enjoys both favorable practical performance and strong theoretical guarantees. Existing model-free IL theoretical papers with global convergence guarantees assume either a finite horizon episodic MDP setting~\cite{Liu:2022}, or tabular MDPs~\cite{Shani:2021}, or the infinite horizon case but with restrictive assumptions, such as the linear quadratic regulator setting~\cite{Cai:2019}, the continuous kernelized nonlinear regulator \cite{Chang:2021, Kakade:2020}, access to a generative model, a coherence assumption \cite{Kamoutsi:2021,Bas-Serrano:2021}, or a linear transition law that can be completely specified by a finite-dimensional matrix~\cite{Zhang:2020,Liu:2022}. On the other hand, we provide convergence guarantees and error bounds for the context of linear MDPs~\cite{Bas-Serrano:2021, Yang:2019, Jin:2020, Cai:2020, Wang:2020b, Agarwal:2020b, Neu:2020}. Despite being linear, the transition law can still have infinite degrees of freedom. To our knowledge, such guarantees in this setting are provided for the first time. Our work applies the technique known as regularization in the online learning literature \cite{Abernethy:2008, Shalev-Shwartz:2012} and as Bregman proximal point or smoothing in the optimization literature \cite{Rockafellar:1976, Nesterov:2005} to the LP formulation for MDPs \cite{Manne:1960, DeGhellinck:1967, Denardo:1970, Borkar:1988, Hernandez-Lerma:1996, Hernandez-Lerma:1999, DeFarias:2003, DeFarias:2004, Schweitzer:1985, Petrik:2009, Petrik:2010, Abbasi-Yadkori:2014, Laksh:2018, Chen:2018, MohajerinEsfahani:2018, Wang:2019, Lee:2019a, Bas-Serrano:2020, Cheng:2020, Jin:2020, Shariff:2020}. From this perspective, Deep Inverse Q-Learning~\cite{Kalweit:2020} and IQ-Learn~\cite{Garg:2021}, which consider entropy regularization in the objective, can be seen as smoothing with the uniform distribution as the center point. In our case, we instead use as the center point the previous iterate of the algorithm (for the online case) or the expert (for the offline case). From the technical point of view, the most important related works are the analyses of REPS/Q-REPS~\cite{Peters:2010, Bas-Serrano:2021, Pacchiano:2021} and O-REPS~\cite{Zimin:2013}, which first pointed out the connection between REPS and PPM. We build on their techniques with some important differences. In particular, while in the LP formulation of RL, PPM and mirror descent \cite{Beck:2003, Hazan:2016} are equivalent, recognizing that they are \textit{not equivalent} in IL is critical for stronger empirical performance.
Of independent interest, our techniques can be used to improve upon the best rate for REPS in the tabular setting \cite{Pacchiano:2021} and to extend the guarantees to linear MDPs. In order to discuss our research questions in more detail and situate them among prior related theoretical and practical works, we provide an extended literature review in Appendix~\ref{app:related-literature}. \section{Background} \label{sec:background} \subsection{Markov Decision Processes}\label{sec:IL:MDPs} The RL environment and its underlying dynamics are typically abstracted as an MDP given by a tuple $(\sspace,\aspace,P,\mbs{\nu}_0,\mbf{c},\gamma)$, where $\sspace$ is the state space, $\aspace$ is the action space, $P:\sspace\times\aspace\rightarrow\Delta_{\sspace}$ is the transition law, $\mbs{\nu}_0\in\Delta_{\sspace}$ is the initial state distribution, $\mbf{c}\in[0,1]^{|\sspace||\aspace|}$ is the cost, and $\gamma\in(0,1)$ is the discount factor. For simplicity, we focus on problems where $\sspace$ and $\aspace$ are finite but too large to be enumerated. A \emph{stationary Markov policy} $\pi\colon\sspace\to\Delta_{\aspace}$ interacts with the environment iteratively, starting with an initial state $s_0\sim\mbs{\nu}_0$. At round $t$, if the system is at state $s_t$, an action $a_t\sim\pi(\cdot|s_t)$ is sampled and applied to the environment. Then a cost $c(s_t,a_t)$ is incurred, and the system transitions to the next state $s_{t+1}\sim P(\cdot|s_t,a_t)$. The goal of RL is to solve the optimal control problem $ \rho_\mbf{c}^\star\triangleq\min_{\pi}\rho_\mbf{c}(\pi), $ where $\rho_\mbf{c}(\pi)\triangleq(1-\gamma)\innerprod{\mbs{\nu}_0}{\mbf{V}^\pi_\mbf{c}}$ is the \emph{normalized total discounted expected cost} of $\pi$. The \emph{state value function} $\mbf{V}_\mbf{c}^\pi\in\ar^{|\sspace|}$ of $\pi$, given cost $\mbf{c}$, is defined by $ V_\mbf{c}^{\pi}(s) \triangleq\Exp_s^{\pi}\Big[\sum_{t=0}^\infty \gamma^t c(s_t, a_t)\Big]$, where $\Exp^{\pi}_{s}$ denotes the expectation with respect to the trajectories generated by $\pi$ starting from $s_0=s$. The \emph{optimal value function} $\mbf{V}_\mbf{c}^\star\in\ar^{|\sspace|}$ is defined by $ V_\mbf{c}^\star(s) \triangleq \min_{\pi}V_\mbf{c}^\pi(s). $ The \emph{optimal state-action value function} $\mathbf{Q}^\star_\mbf{c}\in\ar^{|\sspace||\aspace|}$, given by $Q_\mbf{c}^\star(s,a)\triangleq c(s,a)+\gamma\sum_{s'}V_\mbf{c}^\star(s')P(s'|s,a)$, is known to characterize optimal behaviors. Indeed, $\mbf{V}^\star_\mbf{c}$ is the unique solution to the \emph{Bellman optimality equation} $V^\star_\mbf{c}(s)=\min_{a}Q^\star_\mbf{c}(s,a)$. In addition, any deterministic policy $\pi^\star_\mbf{c}(s)=\arg\min_a Q^\star_\mbf{c}(s,a)$ is known to be optimal. For every policy $\pi$, we define the \emph{normalized state-action occupancy measure} $\mbs{\mu}_\pi\in\Delta_{\sspace\times\aspace}$ by $ \mu_\pi(s,a) \triangleq (1-\gamma) \sum_{t=0}^\infty \gamma^t \Prob_{\mbs{\nu}_0}^{\pi}\left[s_t=s,a_t=a\right], $ where $\Prob_{\mbs{\nu}_0}^{\pi}[\cdot]$ denotes the probability of an event when following $\pi$ starting from $s_0\sim\mbs{\nu}_0$. The occupancy measure can be interpreted as the discounted visitation frequency of state-action pairs. This allows us to write $\rho_{\mbf{c}}(\pi)=\innerprod{\mbs{\mu}_\pi}{\mbf{c}}$. \subsection{Imitation Learning} \looseness=-1 Similarly to RL, the IL problem is posed in the MDP formalism, with the critical difference that the true cost $\mbf{c_{\textup{true}}}$ is unknown.
Instead, we have access to a finite set of truncated trajectories sampled \textrm{i.i.d.} by executing an expert policy ${\pi_{\textup{E}}}$ in the environment. The goal is to learn a policy that performs better than ${\pi_{\textup{E}}}$ with respect to the unknown $\mbf{c_{\textup{true}}}$. To this end, we adopt the \emph{apprenticeship learning} formalism~\cite{Abbeel:2004,Syed:2008,Ho:2016b,Ho:2016,Shani:2021}, which carries the assumption that $\mbf{c_{\textup{true}}}$ belongs to a class of cost functions $\mathcal{C}$. We then seek an \emph{apprentice policy} $\pi_{\textup{A}}$ that outperforms the expert across $\mathcal{C}$ by solving the following optimization problem \begin{equation}\label{eq:IL} \zeta^\star\triangleq\min_{\pi} d_{\mathcal{C}}(\pi,{\pi_{\textup{E}}}), \end{equation} where $d_\mathcal{C}(\pi,{\pi_{\textup{E}}})\triangleq\max_{\mbf{c}\in\mathcal{C}} \big(\rho_\mbf{c}(\pi)-\rho_\mbf{c}({\pi_{\textup{E}}})\big)$ defines the $\mathcal{C}$-distance between $\pi$ and ${\pi_{\textup{E}}}$~\cite{Ho:2016,Chen:2020a,Zhang:2020,Liu:2022}. Then, $\pi_{\textup{A}}$ satisfies the goal of IL, since it holds that $\rho_{\mbf{c_{\textup{true}}}}(\pi_{\textup{A}})-\rho_{\mbf{c_{\textup{true}}}}({\pi_{\textup{E}}})\le\zeta^\star\le 0$. Intuitively, the cost class $\mathcal{C}$ distinguishes the expert from other policies. The maximization in~(\ref{eq:IL}) assigns high total cost to non-expert policies and low total cost to ${\pi_{\textup{E}}}$~\cite{Ho:2016}, while the minimization aims to find the policy that matches the expert as closely as possible with respect to $d_{\mathcal{C}}$. By writing $d_\mathcal{C}$ in its \emph{dual} form $\bar{d}_{\mathcal{C}}(\mbs{\mu}_\pi,\mbs{\mu}_{{\pi_{\textup{E}}}})\triangleq\max_{\mbf{c}\in\mathcal{C}} \big(\innerprod{\mbs{\mu}_\pi}{\mbf{c}}-\innerprod{\mbs{\mu}_{{\pi_{\textup{E}}}}}{\mbf{c}}\big)$, it can be interpreted as an \emph{integral probability metric}~\cite{Muller:1997,Kent:2021} between the occupancy measures $\mbs{\mu}_\pi$ and $\mbs{\mu}_{{\pi_{\textup{E}}}}$. Depending on how $\mathcal{C}$ is chosen, $d_{\mathcal{C}}$ turns into a different metric on probability measures, such as the $1$-Wasserstein distance~\cite{Xiao:2019,Dadashi:2021} for $\mathcal{C}=\textup{Lip}_1(\sspace\times\aspace)$, the total variation for $\mathcal{C}=\{\mbf{c}\mid\norm{\mbf{c}}_\infty\le 1\}$, or the maximum mean discrepancy for $\mathcal{C}=\{\mbf{c}\mid\norm{\mbf{c}}_{\mathcal{H}}\le 1\}$, where $\textup{Lip}_1(\sspace\times\aspace)$ denotes the space of $1$-Lipschitz functions on $\sspace\times\aspace$, and $\norm{\cdot}_{\mathcal{H}}$ denotes the norm of a reproducing kernel Hilbert space $\mathcal{H}$~\cite{Shalev-Shwartz:2014}. \looseness=-1 In our theoretical analysis, we focus on linearly parameterized cost classes~\cite{Syed:2007,Syed:2008,Ho:2016,Liu:2022,Shani:2021} of the form $\mathcal{C}\triangleq\{\mbf{c}_{\mbf{w}}\triangleq\sum_{i=1}^m w_i \mbs{\phi}_i \mid \mbf{w}\in\mathcal{W}\}$, where $\{\mbs{\phi}_i\}_{i=1}^m\subset\Re_+^{\abs{\sspace}\abs{\aspace}}$ are fixed feature vectors, such that $\norm{\mbs{\phi}_i}_1 \le 1$ for all $i\in[m]$, and $\mathcal{W}$ is a convex constraint set for the cost weights $\mbf{w}$. This assumption is not necessarily restrictive, since in practice the true cost usually depends on just a few key properties, while the desirable weighting that specifies how these different desiderata should be traded off is unknown~\cite{Abbeel:2004}.
Moreover, the cost features can be complex nonlinear functions that can be obtained via unsupervised learning from raw state observations~\cite{Brown:2020b,Chen:2020b}. The matrix $\mbs{\Phi}\triangleq\begin{bmatrix}\mbs{\phi}_1&\ldots&\mbs{\phi}_{m}\end{bmatrix}$ gives rise to a \emph{feature expectation vector} (FEV) $\FEV{\pi} \triangleq (\rho_{\mbs{\phi}_1}(\pi),\ldots,\rho_{\mbs{\phi}_{m}}(\pi))^\intercal \in\Re^m$ for each policy $\pi$. Then, by choosing $\mathcal{W}$ to be the $\ell_2$ unit ball $B_1^m\triangleq\{\mbf{w}\in\ar^m\mid\norm{\mbf{w}}_2\le1\}$~\cite{Abbeel:2004}, we get a \emph{feature expectation matching} objective $d_{\mathcal{C}}(\pi,{\pi_{\textup{E}}})=\norm{\FEV{\pi}-\FEV{{\pi_{\textup{E}}}}}_2$, while for $\mathcal{W}$ being the probability simplex $\Delta_{[m]}$~\cite{Syed:2007,Syed:2008} we have a worst-case excess cost objective $d_{\mathcal{C}}(\pi,{\pi_{\textup{E}}})=\max_{i\in[m]}\big(\rho_{\mbs{\phi}_i}(\pi)-\rho_{\mbs{\phi}_i}({\pi_{\textup{E}}})\big)$. For clarity, we will replace $\mbf{c}$ by $\mbf{w}$ in the notation of the quantities defined in Section~\ref{sec:IL:MDPs}. \section{A \pdfmath{Q}-Convex-Analytic Viewpoint} \label{sec:LP_form} Our methodology builds upon the convex-analytic approach to apprenticeship learning, first introduced by~\cite{Syed:2008}, with the key difference that we consider a different convex formulation that introduces $Q$-functions as slack variables. This allows us to design a practical, scalable, model-free algorithm with theoretical guarantees. Let $\mathfrak{F}\triangleq\{\mbs{\mu}\in\ar^{|\sspace||\aspace|}\mid (\mathbf{B}-\gamma\mbf{P})^\intercal\mbs{\mu}=(1-\gamma)\mbs{\nu}_0,\; \mbs{\mu}\geq\mathbf{0}\}$ be the \emph{state-action polytope}, where $\mbf{P}$ is the matrix form of $P$, i.e., $P_{(s,a),s'}\triangleq P(s'|s,a)$, and $\mbf{B}$ is a binary matrix defined by $B_{(s,a),s'}\triangleq 1$ if $s=s'$, and $B_{(s,a),s'}\triangleq 0$ otherwise. The linear constraints that define the set $\mathfrak{F}$, also known as \emph{Bellman flow constraints}, precisely characterize the set of state-action occupancy measures. \begin{proposition}[\citealp{Puterman:1994}]\label{pror:state-action-polytope} We have that $\mbs{\mu}\in\mathfrak{F}$ if and only if there exists a unique stationary Markov policy $\pi$ such that $\mbs{\mu}=\mbs{\mu}_\pi$. If $\mbs{\mu}\in\mathfrak{F}$, then the policy $ \pi_{\mbs{\mu}}(a|s) \triangleq \frac{\mu(s,a)}{\sum_{a'\in\aspace}\mu(s,a')} $ has occupancy measure $\mbs{\mu}$. \end{proposition} Using Proposition~\ref{pror:state-action-polytope} and the dual form of the $\mathcal{C}$-distance $\bar{d}_{\mathcal{C}}(\mbs{\mu},\mbs{\mu}_{{\pi_{\textup{E}}}})=\max_{\mbf{w}\in\mathcal{W}}\innerprod{\mbs{\mu}-\mbs{\mu}_{{\pi_{\textup{E}}}}}{\mbf{c}_{\mbf{w}}}$, it follows that~(\ref{eq:IL}) is equivalent to the primal convex program $\zeta^\star=\min_{\mbs{\mu}}\{\bar{d}_{\mathcal{C}}(\mbs{\mu},\mbs{\mu}_{{\pi_{\textup{E}}}})\mid\mbs{\mu}\in\mathfrak{F}\}$. In particular, for $\mathcal{W}=\Delta_{[m]}$ and by using an epigraphic transformation, we end up with an LP~\cite{Syed:2008}, while for $\mathcal{W}=B_1^m$ we get a quadratic objective with linear constraints~\cite{Abbeel:2004}. A slight variation of the above reasoning is to introduce a mirror variable $\mathbf{d}$ and split the Bellman flow constraints in the definition of $\mathfrak{F}$.
We then get the primal convex program \begin{equation}\label{eq:primal} \zeta^\star=\min_{(\mbs{\mu},\mathbf{d})}\{\bar{d}_{\mathcal{C}}(\mbs{\mu},\mbs{\mu}_{{\pi_{\textup{E}}}})\mid(\mbs{\mu},\mathbf{d})\in\mathfrak{M}\}, \tag{\color{blue}Primal} \end{equation} where the new polytope is given by $\mathfrak{M}\triangleq\{(\mbs{\mu},\mathbf{d})\mid\mathbf{B}^\intercal\mathbf{d}=\gamma\mbf{P}^\intercal\mbs{\mu}+(1-\gamma)\mbs{\nu}_0,\; \mbs{\mu}=\mathbf{d},\; \mathbf{d}\geq\boldsymbol{0}\}$. This overparameterization trick was first introduced by Mehta and Meyn~\cite{Mehta:2009} and has recently been revisited by~\cite{Bas-Serrano:2021,Neu:2020,Lee:2019a,Neu:2021,Mehta:2020,Lu:2021}. A salient feature of this equivalent formulation is that it introduces a $Q$-function as a Lagrange multiplier for the equality constraint $\mathbf{d}=\mbs{\mu}$, and so lends itself to data-driven algorithms. To further motivate this new formulation, in Appendix~\ref{app:strong-duality} we shed light on its dual and provide an interpretation of the dual optimizers. In particular, when $\mathcal{W}=B_1^m$, we show that $(\mathbf{V}^\star_{\mathbf{w_{\textup{true}}}},\mathbf{Q}^\star_{\mathbf{w_{\textup{true}}}},\mathbf{w_{\textup{true}}})$ is a dual optimizer. For our theoretical analysis we focus on the linear MDP setting~\cite{Jin:2020}, i.e., we assume that the transition law is linear in the feature mapping. We denote by $\mbs{\phi}(s,a)$ the $(s,a)$-th row of $\mbs{\Phi}$. \begin{assumption}[Linear MDP]\label{ass:linear-MDP} There exists a collection of $m$ signed measures $\boldsymbol{\omega}=(\omega_1,\ldots,\omega_m)$ on $\sspace$, such that $P(\cdot|s,a)=\innerprod{\boldsymbol{\omega}(\cdot)}{\mbs{\phi}(s,a)}$, for all $(s,a)$. Moreover, $\mbs{\phi}(s,a)\in\Delta_{[m]}$ for all $(s,a)$. \end{assumption} \looseness=-1 Assumption~\ref{ass:linear-MDP} essentially says that the transition matrix $\mbf{P}$ has rank at most $m$, and $\mbf{P}=\mbs{\Phi}\mbf{M}$ for some matrix $\mbf{M}\in\ar^{m\times|\sspace|}$. It is worth noting that in the case of continuous MDPs, despite being linear, the transition law $P(\cdot|s,a)$ can still have infinite degrees of freedom. This is a substantial difference from the recent theoretical works on IL~\cite{Zhang:2020,Liu:2022,Shani:2021}, which consider either a linear quadratic regulator, or a transition law that can be completely specified by a finite-dimensional matrix such that the degrees of freedom are bounded. \looseness=-1 Assumption~\ref{ass:linear-MDP} enables us to consider a relaxation of~(\ref{eq:primal}). In particular, we aggregate the constraints $\mbs{\mu}=\mathbf{d}$ by imposing $\mbs{\Phi}^\intercal\mbs{\mu}=\mbs{\Phi}^\intercal\mathbf{d}$ instead, and introduce a variable $\mbs{\lambda} = \mbs{\Phi}^\intercal \mbs{\mu}$. It follows that $\mbs{\lambda}$ lies in the $m$-dimensional simplex $\Delta_{[m]}$. Then, we get the following convex program \begin{equation}\label{eq:primal'} \zeta^\star=\min_{(\mbs{\lambda},\mathbf{d})}\{\max_{\mbf{w}\in\mathcal{W}}\innerprod{\mbs{\lambda}}{\mbf{w}}-\innerprod{\mbs{\mu}_{{\pi_{\textup{E}}}}}{\mbf{c}_{\mbf{w}}}\mid(\mbs{\lambda},\mathbf{d})\in\mathfrak{M}_{\mbs{\Phi}}\}, \tag{\color{blue}Primal$^\prime$} \end{equation} where $\mathfrak{M}_{\mbs{\Phi}}\triangleq\{(\mbs{\lambda},\mathbf{d})\mid\mathbf{B}^\intercal\mathbf{d}=\gamma\mbf{M}^\intercal\mbs{\lambda}+(1-\gamma)\mbs{\nu}_0,\; \mbs{\lambda}=\mbs{\Phi}^\intercal\mathbf{d},\;\mbs{\lambda}\in\Delta_{[m]},\; \mathbf{d}\in\Delta_{\sspace\times\aspace}\}$.
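For concreteness, dualizing the two equality constraints in $\mathfrak{M}_{\mbs{\Phi}}$ with multipliers $\mbf{V}\in\ar^{|\sspace|}$ and $\mbs{\theta}\in\ar^m$, while keeping the maximization over $\mbf{w}\in\mathcal{W}$, yields (up to sign conventions, which we fix here only for illustration) the Lagrangian form
\begin{equation*}
\min_{\mbs{\lambda}\in\Delta_{[m]},\,\mathbf{d}\in\Delta_{\sspace\times\aspace}}\;\max_{\mbf{w}\in\mathcal{W},\,\mbf{V},\,\mbs{\theta}}\;
\innerprod{\mbf{w}}{\mbs{\lambda}-\mbs{\Phi}^\intercal\mbs{\mu}_{{\pi_{\textup{E}}}}}
+\innerprod{\mbf{V}}{\gamma\mbf{M}^\intercal\mbs{\lambda}+(1-\gamma)\mbs{\nu}_0-\mathbf{B}^\intercal\mathbf{d}}
+\innerprod{\mbs{\theta}}{\mbs{\Phi}^\intercal\mathbf{d}-\mbs{\lambda}},
\end{equation*}
which is exactly the bilinear saddle-point structure exploited in the next section; the precise stacking of these terms into the matrix $\mathbf{A}$ and vector $\mathbf{b}$ used there is given in Appendix~\ref{app:SPP}.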
As shown in~\cite{Neu:2020,Bas-Serrano:2021,Neu:2021}, for linear MDPs, the set of occupancy measures $\mathfrak{F}$ can be completely characterized by the set $\mathfrak{M}_{\mbs{\Phi}}$ (cf.\ Proposition~\ref{prop:q-update}). While the number of constraints and variables in~(\ref{eq:primal'}) is intractable for large-scale MDPs, in the next section we show how this problem can be solved using a proximal point scheme. \section{Proximal Point Imitation Learning} \label{sec:PPM} By using a Lagrangian decomposition, we have that~(\ref{eq:primal'}) is equivalent to the following bilinear saddle-point problem \begin{equation} \min_{\mbf{x}\in \mathcal{X}} \max_{\mbf{y} \in \mathcal{Y}} \innerprod{\mbf{y}}{\mathbf{A}\mbf{x}+\mathbf{b}} \label{eq:SPP}, \tag{\color{blue}SPP} \end{equation} where $\mathbf{A}\in\ar^{(2m+|\sspace|)\times(m+|\sspace||\aspace|)}$ and $\mathbf{b}\in\ar^{2m+|\sspace|}$ are appropriately defined (see Appendix~\ref{app:SPP}), $\mbf{x}\triangleq[\mbs{\lambda}^\intercal$, $\mathbf{d}^\intercal ]^\intercal $, $\mbf{y}\triangleq[\mbf{w}^\intercal$, $\mbf{V}^\intercal , \mbs{\theta}^\intercal ]^\intercal$, $\mathcal{X} \triangleq \Delta_{[m]} \times \Delta_{\sspace\times\aspace}$, and $\mathcal{Y}\triangleq\mathcal{W}\times\ar^{|\sspace|}\times\ar^{m}$. Since in practice we do not have access to the whole policy ${\pi_{\textup{E}}}$, but instead observe a finite set of \textrm{i.i.d.} sample trajectories $\mathcal{D}_{\textup{E}}\triangleq\{(s_0^{(l)},a_0^{(l)},s_1^{(l)},a_1^{(l)},\ldots,s_H^{(l)},a_H^{(l)})\}_{l=1}^{n_{\textup{E}}}\sim{\pi_{\textup{E}}}$, we define the vector $\widehat{\mathbf{b}}$ by replacing $\FEV{{\pi_{\textup{E}}}}$ with its empirical counterpart $\EFEV{{\pi_{\textup{E}}}}$ (obtained by taking sample averages) in the definition of ${\mathbf{b}}$. We then consider the empirical objective $f(\mbf{x})\triangleq\max_{\mbf{y}\in\mathcal{Y}} \innerprod{\mbf{y}}{\mathbf{A}\mbf{x} + \widehat{\mathbf{b}}}$ and apply PPM on the decision variable $\mbf{x}$. For the $\mbs{\lambda}$-variable we use the relative entropy $D(\mbs{\lambda} || \mbs{\lambda}^\prime)\triangleq\sum^m_{i=1} \lambda(i)\log\frac{\lambda(i)}{\lambda^\prime(i)}$, while for the occupancy measure $\mathbf{d}$ we use the conditional relative entropy $H(\mathbf{d}||\mathbf{d}^\prime)\triangleq\sum_{s,a} d(s,a)\log\frac{\pi_\mathbf{d}(a|s)}{\pi_{\mathbf{d}^\prime}(a|s)}$.
With this choice we can rewrite the PPM update as \begin{equation}\label{eq:q-update} (\mbs{\lambda}_{k+1},\mathbf{d}_{k+1})=\argmin_{\mbs{\lambda}\in\Delta_{[m]},\mathbf{d} \in\Delta_{\sspace\times\aspace}}\max_{\mbf{y}\in\mathcal{Y}}\innerprod{\mbf{y}}{ \mathbf{A}\left[ {\begin{array}{ccc} \mbs{\lambda} \\ \mathbf{d} \end{array} } \right]+ \widehat{\mathbf{b}}} + \frac{1}{\eta}D(\mbs{\lambda}||\mbs{\Phi}^\intercal\mathbf{d}_k) + \frac{1}{\alpha}H(\mathbf{d}||\mathbf{d}_k), \end{equation} where we used primal feasibility to replace $\mbs{\lambda}_k$ with $\mbs{\Phi}^\intercal\mathbf{d}_k$ as the center point of the relative entropy. \looseness=-1 PPM is implicit, meaning that it requires the evaluation of the gradient at the next iterate $\mbf{x}_{k+1}$. Such a requirement makes it non-implementable in general. However, in the following we describe a procedure for applying proximal point to our specific $f(\mbf{x})$; the next proposition summarizes the result. \begin{proposition}\label{prop:q-update} For a parameter $\mbs{\theta}\in\ar^m$, we define the logistic state-action value function $\mathbf{Q}_{\mbs{\theta}}\in\ar^{|\sspace||\aspace|}$ by $\mathbf{Q}_{\mbs{\theta}}\triangleq\mbs{\Phi}\mbs{\theta}$, and the $k$-step logistic state value function $\mbf{V}_{\mbs{\theta}}^k\in\ar^{|\sspace|}$ by \[ V_{\mbs{\theta}}^k(s)\triangleq-\frac{1}{\alpha}\log\left(\sum_a \pi_{\mathbf{d}_{k-1}}(a|s)e^{-\alpha Q_{\mbs{\theta}}(s,a)}\right). \] Moreover, we define the $k$-step reduced Bellman error function $\boldsymbol{\delta}_{\mbf{w},\mbs{\theta}}^k\in\ar^m$ by $ \boldsymbol{\delta}_{\mbf{w},\mbs{\theta}}^k\triangleq\mbf{w}+\gamma\mbf{M}\mbf{V}_{\mbs{\theta}}^k-\mbs{\theta}. $ Then, the PPM update $(\mbs{\lambda}_k^\star,\mathbf{d}_k^\star)$ in~(\ref{eq:q-update}) is given by \begin{align} \lambda_k^\star(i) &\propto (\mbs{\Phi}^\intercal \mathbf{d}_{k-1})(i)\,e^{-\eta\delta_{\mbf{w}_k^\star,\mbs{\theta}_{k}^\star}^k(i)},\label{eq:update1}\\ \pi_{\mathbf{d}_k^\star}(a|s)&\propto\pi_{\mathbf{d}_{k-1}}(a|s)\,e^{-\alpha Q_{\mbs{\theta}_k^\star}(s,a)},\label{eq:update2} \end{align} where $(\mbf{w}_k^\star,\mbs{\theta}_k^\star)$ is the maximizer {over $\mathcal{W}\times\ar^m$} of the $k$-step logistic policy evaluation objective \begin{equation} \mathcal{G}_k(\mbf{w},\mbs{\theta})\triangleq-\frac{1}{\eta}\log\sum_{i=1}^m (\mbs{\Phi}^\intercal \mathbf{d}_{k-1})(i) e^{-\eta\delta^k_{\mbf{w},\mbs{\theta}}(i)}+(1-\gamma)\innerprod{\mbs{\nu}_0}{\mbf{V}_{\mbs{\theta}}^k}- \innerprod{\EFEV{{\pi_{\textup{E}}}}}{\mbf{w}}.\label{eq:PEobjective} \end{equation} Moreover, it holds that $\mathcal{G}_k(\mbf{w}_k^\star,\mbs{\theta}_k^\star)=\innerprod{\mbs{\lambda}_{k}^\star}{\mbf{w}_k^\star} - \innerprod{\EFEV{{\pi_{\textup{E}}}}}{\mbf{w}_k^\star} + \frac{1}{\eta}D(\mbs{\lambda}_{k}^\star ||\mbs{\Phi}^\intercal \mathbf{d}_{k-1}) + \frac{1}{\alpha}H(\mathbf{d}_{k}^\star||\mathbf{d}_{k-1}).$ If in addition Assumption~\ref{ass:linear-MDP} holds, then $\mathbf{d}_k^\star$ is a valid occupancy measure, i.e., $\mathbf{d}_k^\star\in\mathfrak{F}$ and so $\mathbf{d}_k^\star=\mbs{\mu}_{\pi_{\mathbf{d}_k^\star}}$. \end{proposition} The proof of Proposition~\ref{prop:q-update} is broken down into a sequence of lemmas and is presented in Appendix~\ref{app:proof-of-upadates-proposition}.
It employs an \texttt{analytical-oracle} $\mbf{g}:\mathcal{Y}\rightarrow\mathcal{X}$ given by \begin{align*} \mbf{g}(\mbf{y}; \mbf{x}_k) &\triangleq\argmin_{\mbs{\lambda}\in\Delta_{[m]},\mathbf{d} \in\Delta_{\sspace\times\aspace}}\innerprod{\mbf{y}}{ \mathbf{A}\left[ {\begin{array}{ccc} \mbs{\lambda} \\ \mathbf{d} \end{array} } \right]+ \widehat{\mathbf{b}}} + \frac{1}{\eta}D(\mbs{\lambda}||\mbs{\Phi}^\intercal\mathbf{d}_k) + \frac{1}{\alpha}H(\mathbf{d}||\mathbf{d}_k), \end{align*} and a \texttt{max-oracle} $\mathbf{h}:\mathcal{X}\rightarrow\mathcal{Y}$ given by $ \mathbf{h}(\mbf{x}) \triangleq \argmax_{\mbf{y}\in\mathcal{Y}} \innerprod{\mbf{y}}{\mathbf{A}\mbf{g}(\mbf{y};\mbf{x})+\widehat{\mathbf{b}}} + \frac{1}{\tau}D_{\Omega}(\mbf{g}(\mbf{y};\mbf{x})||\mbf{x}), $ where $D_{\Omega}$ (with step size $\tau$) compactly denotes the sum of the two divergences with their respective step sizes $\eta$ and $\alpha$. By noting that the PPM update in \Cref{eq:q-update} can be rewritten as \begin{equation} \mbf{x}_{k+1}=\mbf{g}(\mathbf{h}(\mbf{x}_k);\mbf{x}_k), \label{eq:proximal_point_compact_update} \end{equation} its analytical computation is reduced to the characterization of the two aforementioned oracles. In particular, the updates~(\ref{eq:update1})--(\ref{eq:update2}) come from the \texttt{analytical-oracle}, while~(\ref{eq:PEobjective}) is the objective of the \texttt{max-oracle}.
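For intuition, the reason this composition is computable is that the inner minimizer is available in closed form for \emph{every} dual candidate $\mbf{y}$: first-order optimality for the \texttt{analytical-oracle} gives, with $P_{\mathcal{X}}$ denoting the corresponding projection onto $\mathcal{X}$,
\begin{equation*}
\mbf{g}(\mbf{y};\mbf{x}_k)=P_{\mathcal{X}}\Big[\nabla\Omega^{-1}\big(\nabla\Omega(\mbf{x}_k)-\tau\mathbf{A}^\intercal\mbf{y}\big)\Big],
\end{equation*}
so that substituting $\mbf{g}(\mbf{y};\mbf{x}_k)$ into the \texttt{max-oracle} objective removes any dependence on the unknown next iterate. This is the key fact that allows proximal point to be applied in our setting.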
The choice of relative entropy as the Bregman divergence for the $\mbs{\lambda}$ variable, which lives in the probability simplex, is standard in the optimization literature and is known to mitigate the effect of dimension. In particular, as noted in~\cite{Neu:2007}, the classic REPS algorithm~\cite{Peters:2010} can be seen as mirror descent with relative entropy regularization. On the other hand, the choice of conditional relative entropy as the Bregman divergence for the $\mathbf{d}$ variable is less standard and was popularized by Q-REPS \cite{Bas-Serrano:2021}. This particular divergence leads to an actor-critic algorithm that comes with several merits. By Proposition~\ref{prop:q-update}, it is apparent that we get analytical softmin updates for the policy $\pi_{\mathbf{d}}$ rather than the occupancy measure $\mathbf{d}$. Moreover, these softmin updates are expressed in terms of the logistic $Q$-function and do not involve the unknown transition matrix $\mbf{P}$. Consequently, we avoid the problematic occupancy measure approximation and the restrictive coherence assumption on the choice of features needed in~\cite{Bas-Serrano:2020,Kamoutsi:2021}, as well as the biased policy updates appearing in REPS \cite{Peters:2010, Pacchiano:2021}. In addition, the newly introduced logistic policy evaluation objective $\mathcal{G}_k(\mbf{w},\mbs{\theta})$ has several desirable properties. It is concave and smooth in $(\mbf{w},\mbs{\theta})$ and has bounded gradients. Therefore, it does not suffer from the pathologies of the squared Bellman error~\cite{Mnih:2015} and does not require heuristic gradient clipping techniques. Moreover, unlike~\cite{Kamoutsi:2021}, it allows a model-free implementation without the need for a generative model (see Section~\ref{sec:PPM_model_free}). \looseness=-1 We stress that the \texttt{max-oracle} of our proximal point scheme performs the cost update and policy evaluation phases jointly. This is a rather novel feature of our algorithm that differs from the separate cost update and policy evaluation steps used in recent theoretical imitation learning works~\cite{Zhang:2020,Shani:2021,Liu:2022}. Our joint optimization over cost and $Q$-functions avoids instability due to adversarial training and can also recover an explicit cost along with the $Q$-function, without requiring knowledge of, or additional interaction with, the environment (see Section~\ref{sec:experiments}). It is worth noting that application of primal-dual mirror descent to~(\ref{eq:SPP}) does not have this favorable property.
While in the standard MDP setting proximal point and mirror descent coincide because of the linear objective, in imitation learning proximal point optimization makes a difference. In Appendix~\ref{sec:mirror-descent}, we include a more detailed discussion and a numerical comparison between PPM and mirror descent updates. \subsection{Practical Implementation} \label{sec:PPM_model_free} Exact optimization of the logistic policy evaluation objective is infeasible in practical scenarios, due to unknown dynamics and limited computational power. In this section, we design a practical algorithm that uses only sample transitions by obtaining stochastic (albeit biased) gradient estimators. Proposition~\ref{prop:q-update} gives rise to Proximal Point Imitation Learning (\texttt{P$^2$IL}), a model-free actor-critic IRL algorithm described in Algorithm~\ref{alg:PPIQL}. The key feature of \texttt{P$^2$IL} is that the policy evaluation step involves optimization of a single smooth and concave objective over both cost and state-action value function parameters. In this way, we avoid instability or poor convergence in optimization due to nested policy evaluation and cost updates, as well as the undesirable properties of the widely used squared Bellman error. In particular, the $k$th iteration of \texttt{P$^2$IL} consists of the following two steps: (i) (\textbf{Critic Step}) computation of an approximate maximizer $(\mbf{w}_k,\mbs{\theta}_k)\approx\argmax_{\mbf{w},\mbs{\theta}}{\mathcal{G}}_k(\mbf{w},\mbs{\theta})$ of the concave logistic policy evaluation objective, using a biased stochastic gradient ascent subroutine; (ii) (\textbf{Actor Step}) a soft-min policy update $ \pi_{k}(a|s)\propto\pi_{k-1}(a|s)\,e^{-\alpha Q_{\mbs{\theta}_k}(s,a)} $ expressed in terms of the logistic $Q$-function.
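To make the critic's objective concrete, the following is a minimal \texttt{numpy} evaluator of $\mathcal{G}_k$ for a small tabular instance (exact dynamics assumed known, so this is a sanity-check sketch rather than the model-free estimator used by the algorithm). All array names and shapes are illustrative assumptions:
\begin{verbatim}
import numpy as np

def logistic_pe_objective(w, theta, Phi, M, d_prev, pi_prev,
                          nu0, mu_E_feat, gamma, eta, alpha):
    """Exact k-step logistic policy evaluation objective G_k.

    Assumed shapes: Phi (S*A, m), M (m, S), d_prev (S*A,),
    pi_prev (S, A), nu0 (S,), mu_E_feat = Phi^T mu_E (m,).
    """
    S, A = pi_prev.shape
    Q = (Phi @ theta).reshape(S, A)              # logistic Q-function
    V = -np.log(np.sum(pi_prev * np.exp(-alpha * Q), axis=1)) / alpha
    delta = w + gamma * (M @ V) - theta          # reduced Bellman error
    z = Phi.T @ d_prev                           # reference distribution
    return (-np.log(np.dot(z, np.exp(-eta * delta))) / eta
            + (1.0 - gamma) * np.dot(nu0, V)
            - np.dot(mu_E_feat, w))
\end{verbatim}
In Algorithm~\ref{alg:PPIQL}, gradients of this objective are instead estimated from sample transitions by the BSGE subroutine.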
\begin{algorithm}[!t] \caption{Proximal Point Imitation Learning: \texttt{P$^2$IL}$(\mbs{\Phi},\mathcal{D}_{\textup{E}},K,\eta, \alpha)$} \label{alg:PPIQL} \begin{algorithmic} \STATE {\bfseries Input:} Feature matrix $\boldsymbol{\Phi}$, expert demonstrations $\mathcal{D}_{\textup{E}}$, number of iterations $K$, step sizes $\eta$ and $\alpha$ \STATE {\bfseries Input:} Number of SGD iterations $T$, SGD learning rates $\boldsymbol{\beta}=\{\beta_t\}_{t=0}^{T-1}$, number-of-samples function $n:\mathds{N}\rightarrow\mathds{N}$ \STATE Initialize $\pi_0$ as the uniform distribution over $\aspace$ \STATE Compute the empirical FEV $\EFEV{{\pi_{\textup{E}}}}$ using expert demonstrations $\mathcal{D}_{\textup{E}}$ \FOR{$k=1,\ldots,K$} \STATE \texttt{// Critic-step (policy evaluation)} \STATE Initialize $\mbs{\theta}_{k,0}=\mathbf{0}$ and $\mbf{w}_{k,0}=\mathbf{0}$ \FOR{$t=0,\ldots,T-1$} \STATE Run $\pi_{k-1}$ and collect \textrm{i.i.d.} samples $\{(s_{{k-1}}^{(n)},a_{{k-1}}^{(n)},s_{{k-1}}^{\prime (n)})\}_{n=1}^{n(t)+1}$ \STATE Compute biased stochastic gradient estimators $$(\widehat{\nabla}_{\mbf{w}}\mathcal{G}_k(\mbf{w}_{k,t},\mbs{\theta}_{k,t}),\widehat{\nabla}_{\mbs{\theta}}\mathcal{G}_k(\mbf{w}_{k,t},\mbs{\theta}_{k,t}))=\textrm{BSGE}(k,\mbf{w}_{k,t},\mbs{\theta}_{k,t},n(t))$$ \STATE $\mbf{w}_{k,t+1}=\Pi_{\mathcal{W}}(\mbf{w}_{k,t}+\beta_t \widehat{\nabla}_{\mbf{w}}\mathcal{G}_k(\mbf{w}_{k,t},\mbs{\theta}_{k,t}))$ \STATE $\mbs{\theta}_{k,t+1}=\Pi_{\Theta}(\mbs{\theta}_{k,t}+\beta_t \widehat{\nabla}_{\mbs{\theta}}\mathcal{G}_k(\mbf{w}_{k,t},\mbs{\theta}_{k,t}))$ \ENDFOR \STATE $(\mbf{w}_k,\mbs{\theta}_k)=(\frac{1}{T}\sum_{t=1}^T\mbf{w}_{k,t},\frac{1}{T}\sum_{t=1}^T\mbs{\theta}_{k,t})$ \STATE \texttt{// Actor-step (policy update)} \STATE Policy update: $$ \pi_{k}(a|s)\propto\pi_{k-1}(a|s)\,e^{-\alpha Q_{\mbs{\theta}_k}(s,a)} $$ \ENDFOR \STATE {\bfseries Output:} Mixed policy $\widehat{\pi}_K$ of $\{\pi_k\}_{k\in[K]}$ \end{algorithmic} \end{algorithm} The domain $\Theta$ in Algorithm~\ref{alg:PPIQL} is the $\ell_{\infty}$-ball with appropriately chosen radius $D$. Moreover, $\Pi_{\Theta}(\mathbf{x})\triangleq\arg\min_{\mathbf{y}\in\Theta}\norm{\mathbf{x}-\mathbf{y}}_2$ (resp.\ $\Pi_{\mathcal{W}}(\mbf{w})$) denotes the Euclidean projection of $\mathbf{x}$ (resp.\ $\mbf{w}$) onto $\Theta$ (resp.\ $\mathcal{W}$). The following proposition ensures that $\max_{\mbf{w},\mbs{\theta}\in\wspace\times\mathbb{R}^m}{\mathcal{G}}_k(\mbf{w},\mbs{\theta}) = \max_{\mbf{w},\mbs{\theta}\in\wspace\times\Theta}{\mathcal{G}}_k(\mbf{w},\mbs{\theta})$. Therefore, this constraint does not change the optimal value, but it considerably accelerates the convergence of the algorithm by restricting the search to a smaller domain. \begin{proposition}\label{prop:optimal_theta_bound} \label{cor:bound_theta} It holds that $\norm{\mbs{\theta}^\star_k}_{\infty} \leq M\frac{1 + \abs{\log\beta}}{1 - \gamma}\triangleq D$, for some constant $M\geq1$. \end{proposition} In order to estimate the gradients $\nabla_{\mbs{\theta}}\, \mathcal{G}_k(\mbf{w},\mbs{\theta})$ and $\nabla_{\mbf{w}}\, \mathcal{G}_k(\mbf{w},\mbs{\theta})$, we invoke the Biased Stochastic Gradient Estimator subroutine (\textrm{BSGE}) (Algorithm~\ref{alg:BSGE}) given in Appendix~\ref{app:stochastic gradients}. By using the linear MDP Assumption~\ref{ass:linear-MDP} and leveraging ridge regression and plug-in estimators, the proposed stochastic gradients can be computed via simple linear algebra with computational complexity $\textup{poly}(m,n(t))$, independent of the size of the state space.
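For illustration, the critic step's projected-ascent skeleton can be sketched in a few lines of \texttt{numpy}. The gradient callback stands in for the BSGE subroutine, the simplex projection assumes $\mathcal{W}=\Delta_{[m]}$, and all function names are hypothetical:
\begin{verbatim}
import numpy as np

def proj_linf(x, D):
    # Euclidean projection onto the l-infinity ball of radius D
    return np.clip(x, -D, D)

def proj_simplex(v):
    # Euclidean projection onto the probability simplex
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def critic_step(grad_fn, m, D, T, beta0=1.0):
    # projected (biased) stochastic gradient ascent; returns averaged iterates
    w, theta = np.full(m, 1.0 / m), np.zeros(m)
    w_sum, th_sum = np.zeros(m), np.zeros(m)
    for t in range(T):
        gw, gth = grad_fn(w, theta, t)   # stand-in for BSGE(k, w, theta, n(t))
        beta_t = beta0 / np.sqrt(t + 1)
        w = proj_simplex(w + beta_t * gw)
        theta = proj_linf(theta + beta_t * gth, D)
        w_sum += w
        th_sum += theta
    return w_sum / T, th_sum / T
\end{verbatim}
The averaging of the iterates mirrors the last line of the critic step in Algorithm~\ref{alg:PPIQL}.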
\subsection{Theoretical Analysis} \label{sec:theorems} The first step in our theoretical analysis is to study the propagation of the optimization errors made by the algorithm on the true policy evaluation objective. In particular, at each iteration step $k$, the ideal policy evaluation update $(\mbf{w}_k^\star,\mbs{\theta}_k^\star)$ and the ideal policy update $\pi_k^\star$ are given by $ (\mbf{w}_k^\star,\mbs{\theta}^\star_k)=\arg\max_{\mbf{w},\mbs{\theta}}\mathcal{G}_k(\mbf{w},\mbs{\theta})$, and $\pi_k^\star(a|s)=\pi_{k-1}(a|s)e^{-\alpha(Q_{\mbs{\theta}^\star_k}(s,a)-V^k_{\mbs{\theta}^\star_k}(s))}. $ On the other hand, consider the realized policy evaluation update $(\mbf{w}_k,\mbs{\theta}_k)$ such that ${\mathcal{G}}_k(\mbf{w}_k^\star,\mbs{\theta}_k^\star)-\mathcal{G}_k(\mbf{w}_k,\mbs{\theta}_k)=\epsilon_k$, the corresponding policy $\pi_k$ given by $\pi_k(a|s)=\pi_{k-1}(a|s)e^{-\alpha(Q_{\mbs{\theta}_k}(s,a)-V^k_{\mbs{\theta}_k}(s))}$, and let $\mathbf{d}_k\triangleq\mbs{\mu}_{\pi_k}$. We denote by $\widehat{\pi}_K$ the extracted mixed policy of $\{\pi_k\}_{k=1}^K$. We are interested in upper-bounding the suboptimality gap $d_{\mathcal{C}}(\widehat{\pi}_K,{\pi_{\textup{E}}})$ of Algorithm~\ref{alg:PPIQL} as a function of $\epsilon_k$. To this end, we need the following assumption. \begin{figure*}[t] \centering \begin{tabular}{ccccc} \subfloat[RiverSwim]{% \includegraphics[width=0.16\linewidth]{plot/only_proximal_point/RiverSwim-v0_normalized.pdf} } & \subfloat[CartPole]{% \includegraphics[width=0.16\linewidth]{plot/only_proximal_point/CartPole-v1_normalized.pdf} } & \subfloat[DoubleChain]{% \includegraphics[width=0.16\linewidth]{plot/only_proximal_point/DoubleChainProblem-v0_normalized.pdf} } & \subfloat[Gridworld]{% \includegraphics[width=0.16\linewidth]{plot/only_proximal_point/WindyGrid-v0_normalized.pdf} } & \subfloat[Acrobot]{% \includegraphics[width=0.16\linewidth]{plot/only_proximal_point/Acrobot-v1_normalized.pdf} } \\ \multicolumn{5}{c}{ \includegraphics[scale=0.5]{plot/final_paper_legend_horizontal.pdf} } \end{tabular} \caption{\textbf{Online IL Experiments}. We show the total returns vs the number of environment steps. } \label{fig:simple_env_results} \end{figure*} \begin{assumption} \label{ass:eigenvalue} It holds that $ \lambda_{\mathrm{min}}(\Exp_{(s,a)\sim \mathbf{d}_k}{\mbs{\phi}(s,a)\mbs{\phi}(s,a)^\intercal}) \geq \beta $, for all $k\in[K]$. \end{assumption} Assumption~\ref{ass:eigenvalue} states that every occupancy measure $\mathbf{d}_k$ induces a positive definite feature covariance matrix, and so every policy $\pi_k$ explores uniformly well in the feature space. This assumption is common in the RL theory literature~\cite{Abbasi-Yadkori:2019b, Hao:2021, Duan:2020, Lazic:2020, Abbasi-Yadkori:2019c, Agarwal:2020b}. It is also related to the condition of persistent excitation from the control literature~\cite{Narenda:1987}. We can now state our error propagation theorem. \begin{theorem} \label{thm:error_propagation} Let $\widehat{\pi}_K$ be the output of running Algorithm~\ref{alg:PPIQL} for $K$ iterations, with $n_{\textup{E}}\geq\frac{2\log(\frac{2m}{\delta})}{\varepsilon^2}$ expert trajectories of length $H\geq\frac{1}{1-\gamma}\log(\frac{1}{\varepsilon})$. Let $C\triangleq \frac{1}{\beta\eta}\big(\sqrt{\frac{2 \alpha}{1 - \gamma}} + \sqrt{8 \eta}\big) + \sqrt{\frac{18 \alpha}{1 - \gamma}}$.
Then, with probability at least $1-\delta$, it holds that $ d_{\mathcal{C}}(\widehat{\pi}_K, {\pi_{\textup{E}}}) \leq \frac{1}{K}\Big( \frac{D(\mbs{\lambda}^\star||\mbs{\Phi}^\intercal\mathbf{d}_0)}{\eta} + \frac{H(\mathbf{d}^\star||\mathbf{d}_0)}{\alpha} + C\sum_k \sqrt{\epsilon_k} + \sum_k \epsilon_k\Big)+\varepsilon. \label{eq:lemma_bound_on_saddle_point} $ \end{theorem} By Theorem~\ref{thm:error_propagation}, whenever the policy evaluation errors $\epsilon_k$, as well as the estimation error $\varepsilon$, can be kept small, Algorithm~\ref{alg:PPIQL} outputs a policy $\widehat{\pi}_K$ with a small suboptimality gap $\rho_{\mbf{c_{\textup{true}}}}(\widehat{\pi}_K)-\rho_{\mbf{c_{\textup{true}}}}({\pi_{\textup{E}}})$. Notably, there is no direct dependence on the size of the state space or the dimension of the feature space. In the ideal case, where $\epsilon_k=0$ for all $k$, the convergence rate is $\mathcal{O}(1/K)$. The provided error propagation analysis still holds with general function approximation, i.e., in the context of deep RL. Indeed, by choosing $\mbs{\Phi}=\mathbf{I}$, Assumption~\ref{ass:linear-MDP} is trivially satisfied and the $\mbs{\theta}$ variable in the objective $\mathcal{G}_k$ is replaced by a $Q$-function. In practice, the estimation error $\varepsilon$ can be made arbitrarily small by increasing the number of expert demonstrations $n_{\textup{E}}$. Moreover, the next theorem ensures that under Assumptions~\ref{ass:linear-MDP} and~\ref{ass:eigenvalue} the biased stochastic gradient ascent (BSGA) subroutine has a sublinear convergence rate. \begin{theorem} \label{thm:biased_sgd} Let $(\mbf{w}_k,\mbs{\theta}_k)$ be the output of the BSGA subroutine in \Cref{alg:PPIQL} after $T$ iterations, with $n(t) \geq \max\br{\mathcal{O}\br{\frac{\gamma^2 m D t }{(\eta+\alpha)^2\beta}\log\frac{Tm}{\delta}}, \mathcal{O}\br{\frac{m t}{(\eta+\alpha)^2\beta}\log\frac{Tm}{\delta}}}$ sample transitions and learning rates $\beta_t=\mathcal{O}(\frac{1}{\sqrt{t}})$. Then, $\epsilon_k = {\mathcal{G}}_k(\mbf{w}_k^\star,\mbs{\theta}_k^\star)-\mathcal{G}_k(\mbf{w}_k,\mbs{\theta}_k) \leq \mathcal{O}(\frac{\max\{\eta,1\} m D}{\beta \sqrt{T}})$, with probability $1-\delta$. \end{theorem} \begin{corollary}[Resource guarantees] \label{cor:sample_complexity} Choose $\eta=\alpha=1$ and let $K=\Omega\br{\epsilon^{-1}}$, $T=\Omega\br{\epsilon^{-4}}$. Then for $\Omega\br{KT} = \Omega\br{\epsilon^{-5}}$ sample transitions, $\Omega\br{\varepsilon^{-2}}$ expert trajectories, and approximately solving $\Omega\br{\epsilon^{-1}}$ concave maximization problems, we can ensure $d_{\mathcal{C}}(\widehat{\pi}, {\pi_{\textup{E}}})\leq \mathcal{O}(\epsilon + \varepsilon)$ with high probability. \end{corollary} \textbf{Offline Setting.} Finally, we notice that by using $\mbs{\Phi}^\intercal\mbs{\mu}_{{\pi_{\textup{E}}}}$ as the reference distribution for the relative entropy, we can obtain an offline algorithm that does not require environment interactions. By reinterpreting smoothing \cite{Nesterov:2005} as one step of proximal point, and using similar arguments as in the proof of \Cref{thm:error_propagation}, we can provide similar theoretical guarantees for the offline setting. The details, as well as the optimization of the empirical policy evaluation objective, are presented in Appendix~\ref{app:offline} (see Theorems~\ref{thm:offline_error_propagation} and~\ref{thm:Donsker-Varadhan}).
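Before moving to the experiments, we sketch where the resource choices in Corollary~\ref{cor:sample_complexity} come from; the following back-of-the-envelope computation suppresses all problem-dependent constants. If every policy evaluation error satisfies $\epsilon_k\le\epsilon_0$, Theorem~\ref{thm:error_propagation} gives
\begin{equation*}
d_{\mathcal{C}}(\widehat{\pi}_K,{\pi_{\textup{E}}})\le\frac{1}{K}\Big(\frac{D(\mbs{\lambda}^\star||\mbs{\Phi}^\intercal\mathbf{d}_0)}{\eta}+\frac{H(\mathbf{d}^\star||\mathbf{d}_0)}{\alpha}\Big)+C\sqrt{\epsilon_0}+\epsilon_0+\varepsilon,
\end{equation*}
and Theorem~\ref{thm:biased_sgd} yields $\epsilon_0=\mathcal{O}(1/\sqrt{T})$, hence $\sqrt{\epsilon_0}=\mathcal{O}(T^{-1/4})$; driving both the $1/K$ term and the $T^{-1/4}$ term below $\epsilon$ then requires $K=\Omega(\epsilon^{-1})$ and $T=\Omega(\epsilon^{-4})$, as in Corollary~\ref{cor:sample_complexity}.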
\begin{comment} The estimation error $\varepsilon$ can be made arbitrary small, by increasing the number of expert demonstrations $n_{\textup{E}}$. The policy evaluation error $\varepsilon_k$ can be decomposed as \begin{align*} \varepsilon_k&\le \underbrace{2\sup_{(\mbf{w},\mbs{\theta})}\abs{\mathcal{G}^m_k(\mbf{w}, \mbs{\theta}) - \widetilde{\mathcal{G}}^{}_k(\mbf{w}, \mbs{\theta})}}_{\text{estimation error}}\\ &\phantom{{}=} +\underbrace{2|\widetilde{\mathcal{G}^{}}_k(\mbf{w}_k,\mbs{\theta}_k) - \sup_{\mbf{w},\mbs{\theta}}\widetilde{\mathcal{G}^{}}_k(\mbf{w},\mbs{\theta})|}_{\text{optmizization error}}. \end{align*} We next discuss the bounds of these two different error sources. \begin{figure*}[t] \centering \begin{tabular}{cccc} \subfloat[WideTree]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/WideTree-v0_normalized.pdf} } & \subfloat[RiverSwim]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/RiverSwim-v0_normalized.pdf} } & \subfloat[SingleChain]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/SingleChainProblem-v0_normalized.pdf} } & \subfloat[CartPole]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/CartPole-v1_normalized.pdf} } \\ \subfloat[DoubleChain]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/DoubleChainProblem-v0_normalized.pdf} } & \subfloat[TwoStateStochastic]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/TwoStateStochastic-v0_normalized.pdf} } & \subfloat[Gridworld]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/WindyGrid-v0_normalized.pdf} } & \subfloat[Acrobot]{% \includegraphics[width=0.2\linewidth]{plot/only_proximal_point/Acrobot-v1_normalized.pdf} } \\ \multicolumn{4}{c}{ \includegraphics[scale=0.5]{plot/final_paper_legend_horizontal.pdf} } \end{tabular} \caption{Online Imitation Experiments. Using Proximal Point initialized at the uniform policy. } \label{fig:simple_env_results} \end{figure*} Optimizing $\mathcal{G}^{}_k$ instead of $\mathcal{G}^m_k$ introduces a bias quantified by the following lemma. \begin{lemma} \textbf{Estimation error} \label{lemma:G_tilde_bias} With $\Lambda_{\bar{n}}^k = \Lambda^{k-1}+ \sum^{\bar{n}}_{n=1}\mbs{\phi}(s^k_n,a^k_n)\mbs{\phi}(s^k_n,a^k_n)^\intercal$, assume that $ N \geq \max_k \frac{\max_{s,a}\norm{\mbs{\phi}(s,a)}_{(\Lambda^k)^{-1}}}{\min_n \min_{s,a}\norm{\mbs{\phi}(s,a)}_{(\boldsymbol{\Lambda}_{k,N}^k)^{-1}}}$ and that $\chi \geq 1$ and let $C= 3B \norm{(\mbs{\Phi}^\intercal\mbs{\Phi})^{-1}\mbs{\Phi}^\intercal}_{\infty}$. Then, with probability $1 - \delta^\prime$, it holds that: \begin{multline} \sum^K_{k=1}\abs{\widetilde{\mathcal{G}}_k(\mbf{w},\mbs{\theta}) -\mathcal{G}^m_k(\mbf{w},\mbs{\theta})} \\\leq m C \sqrt{\log \left(\frac{4K^2NB}{m \chi\delta^\prime}\right)}\sqrt{2K Nm\log \left(\frac{2KN}{m\chi}\right)} \end{multline} and \begin{multline} \sum^K_{k=1}\sqrt{\abs{\widetilde{\mathcal{G}}_k(\mbf{w},\mbs{\theta}) -\mathcal{G}^m_k(\mbf{w},\mbs{\theta})}} \\\leq \sqrt{m C \sqrt{\log \left(\frac{4K^2NB}{m \chi\delta^\prime}\right)}}\sqrt[4]{2K^3Nm\log \left(\frac{2KN}{m\chi}\right)} \end{multline} \end{lemma} On the other hand, since $\widetilde{\mbf{M}}_k$ is known for every $k$ it is possible to compute unbiased estimates of $\widetilde{\mathcal{G}}_k$. However, we do not have a closed form expression for the occupancy measure $\mathbf{d}_k$. 
\end{corollary} \end{comment} \section{Experiments} \begin{comment} \begin{figure*}[t] \centering \begin{tabular}{cccc} \subfloat[Two State Deterministic]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/TwoStateProblem-v0_normalized.pdf} } & \subfloat[WideTree]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/WideTree-v0_normalized.pdf} } & \subfloat[RiverSwim]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/RiverSwim-v0_normalized.pdf} } & \subfloat[SingleChain]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/SingleChainProblem-v0_normalized.pdf} } \\ \subfloat[DoubleChain]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/DoubleChainProblem-v0_normalized.pdf} } & \subfloat[Two State Stochastic]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/TwoStateStochastic-v0_normalized.pdf} } & \subfloat[Gridworld]{% \includegraphics[width=0.24\linewidth]{plot/baseline_comparison/WindyGrid-v0_normalized.pdf} } & \subfloat[Legend]{% \includegraphics[scale=0.5]{plot/baseline_comparison/final_paper_legend.pdf} } \\ \end{tabular} \caption{Online Imitation Experiments in tabular domains} \label{fig:simple_env_results} \end{figure*} \end{comment} \begin{figure*}[t] \centering \begin{tabular}{ccccc} \subfloat[Acrobot\label{fig:offline_acrobot}]{% \includegraphics[width=0.16\linewidth]{plot/offline/Acrobot-v1_offline_normalized.pdf} } & \subfloat[CartPole\label{fig:offline_cartpole}]{% \includegraphics[width=0.16\linewidth]{plot/offline/CartPole-v1_offline_normalized.pdf} } & \subfloat[LunarLander\label{fig:offline_lunarlander}]{% \includegraphics[width=0.16\linewidth]{plot/offline/LunarLander-v2_offline_normalized.pdf} } & \subfloat[Pong\label{fig:pong}]{% \includegraphics[width=0.16\linewidth]{plot/continuous_control/PongNoFrameskip-v4_normalized.pdf} } & \includegraphics[width=0.16\linewidth]{plot/offline_legend.pdf \\ \subfloat[HalfCheetah\label{fig:halfcheetah}]{% \includegraphics[width=0.16\linewidth]{plot/continuous_control/HalfCheetah-v2_normalized.pdf} } & \subfloat[Ant\label{fig:ant}]{% \includegraphics[width=0.16\linewidth]{plot/continuous_control/Ant-v2_normalized.pdf} } & \subfloat[Hopper\label{fig:hopper}]{% \includegraphics[width=0.16\linewidth]{plot/continuous_control/Hopper-v2_normalized_long.pdf} } & \subfloat[Walker2d\label{fig:walker}]{% \includegraphics[width=0.16\linewidth]{plot/continuous_control/Walker2d-v2_normalized_dense.pdf} } & \includegraphics[width=0.16\linewidth]{plot/atari_legend.pdf \\ \end{tabular} \caption{\textbf{Neural function approximation experiments.} \Cref{fig:offline_cartpole,fig:offline_acrobot,fig:offline_lunarlander} show the total returns vs the number of expert trajectories. \Cref{fig:ant,fig:halfcheetah,fig:hopper,fig:walker} show the total returns vs the number of env steps. \Cref{fig:pong} shows the total return vs the number of expert state-action pairs. \label{fig:offline_experiments}} \end{figure*} \label{sec:experiments} In this section, we demonstrate that our approach achieves convincing empirical performance in both online and offline IL settings on several environments. The precise setting is detailed in Appendix~\ref{app:experiments}. \looseness=-1 \textbf{Online Setting.} We first present results in various tabular environments where we can implement our algorithm without any practical relaxation outperforming GAIL \cite{Ho:2016}, AIRL \cite{Fu:2018} and IQ-Learn \cite{Garg:2021}. Results are given in \Cref{fig:simple_env_results}. 
Good performance, albeit inferior to IQ-Learn, is also observed in the continuous-state environments (CartPole and Acrobot), where we used neural network function approximation. \textbf{Offline Setting.} \Cref{fig:offline_cartpole,fig:offline_acrobot,fig:offline_lunarlander} show that our method is competitive with the state-of-the-art offline IL methods IQ-Learn \cite{Garg:2021} and AVRIL \cite{Chan:2021}, which recently showed performance superior to other methods such as \cite{Jarrett:2021, Kostrikov:2020}. We also evaluated our algorithm on the complex image-based \texttt{Pong} task from the Atari suite. \Cref{fig:pong} shows that the algorithm reaches the expert level after observing $2 \times 10^5$ expert samples. We did not find AVRIL competitive in this setting, and omit it for brevity. In these settings, we verified that the algorithmic performance is convincing even for costs parameterized by neural networks. \textbf{Continuous control experiments.} We also attain expert performance in four MuJoCo environments: \texttt{Ant}, \texttt{HalfCheetah}, \texttt{Hopper}, and \texttt{Walker} (see \Cref{fig:ant,fig:halfcheetah,fig:hopper,fig:walker}). The additional difficulty in implementing the algorithm in continuous control experiments is that the analytical form of the policy improvement step is no longer computationally tractable, because it would require computing an integral over the continuous action space. Therefore, we approximated this update using the Soft Actor Critic (SAC) \cite{Haarnoja:2018} algorithm. SAC requires environment samples, making the algorithm online. The good empirical results open the question of analyzing policy improvement errors, as in \cite{Geist:2019}. \textbf{Recovered Costs.} A unique algorithmic feature of the proposed methodology is that we can explicitly recover a cost along with the $Q$-function without requiring adversarial training. In Figures~\ref{fig:cost} and~\ref{fig:gridworld_cost}, we visualize our recovered costs in several simple tabular environments. Most importantly, we verify that the recovered costs induce nearly optimal policies w.r.t. the unknown true cost function. Compared to IQ-Learn, we do not require knowledge of, or further interaction with, the environment. Regarding the transfer capability to new dynamics, we experimented on \texttt{Gridworld} (Figure~\ref{fig:transfer_cost}) and noticed that the recovered cost induces an optimal policy for the new dynamics while the imitating policy fails. We elaborate on the details in Appendix~\ref{app:recovered-rewards}. \section{Discussion and Outlook} In this work, we studied a Proximal Point Imitation Learning (\texttt{P$^2$IL}) algorithm with both theoretical guarantees and convincing empirical performance. Our methodology is rooted in classical optimization tools and the LP approach to MDPs.
The most significant merits of \texttt{P$^2$IL} are the following: (i) It optimizes a convex and smooth logistic Bellman evaluation objective over both cost and $Q$-functions. In particular, it avoids instability due to adversarial training and can also recover an explicit cost along with the $Q$-function; (ii) In the context of linear MDPs, it comes with efficient resource guarantees and error bounds for the suboptimality of the learned policy (Theorem~\ref{thm:biased_sgd} and Corollary~\ref{cor:sample_complexity}). In particular, given $\mathrm{poly}(1/\varepsilon,\log(1/\delta),m)$ many samples, it recovers an $\varepsilon$-optimal policy, with probability $1-\delta$. Notably, the bound is independent of the size of the state-action space; (iii) Beyond the linear MDP setting, it can be implemented in a model-free manner, for both online and offline setups, with general function approximation without losing its theoretical guarantees. This is justified by providing an error propagation analysis (Theorems~\ref{thm:error_propagation} and~\ref{thm:offline_error_propagation}), guaranteeing that small optimization errors lead to a high-quality output policy; (iv) It enjoys not only strong theoretical guarantees but also favorable empirical performance. At the same time, our newly introduced methods bring challenges and open questions. One interesting question is whether one can accelerate the PPM updates and improve the convergence rate. Another direction for future work is to provide rigorous arguments for the near-optimality of the recovered cost function. On the practical side, we plan to conduct experiments in more challenging environments than MuJoCo and Atari. We hope our new techniques will be useful to future algorithm designers and lay the foundations for overcoming current limitations and challenges. In Appendix B, we point out in detail a few interesting future directions. \section*{Code repository} The code is available at the following link \url{https://github.com/lviano/P2IL}. \section*{Acknowledgements} The authors would like to thank one anonymous reviewer for their suggestions to improve the presentation and for motivating us to inspect the recovered cost function. Luca Viano has received financial support from the Enterprise for Society Center (E4S). Angeliki Kamoutsi has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant agreement OCAL, No. 787845. Gergely Neu was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No.~950180). Igor Krawczuk and Volkan Cevher have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n° 725594 - time-data). Luca Viano also acknowledges travel support from ELISE (GA no 951847). \bibliographystyle{unsrt}
{ "timestamp": "2022-09-23T02:14:20", "yymm": "2209", "arxiv_id": "2209.10968", "language": "en", "url": "https://arxiv.org/abs/2209.10968" }
\section{Introduction} Survey data remains an important part of research in many different areas, including political science~\cite{sturgis2021demise}. Many survey questions are about political or otherwise personal beliefs or intentions, and individuals will rightfully be concerned that their responses may be made public. This concern even has the potential to reduce participation, which may bias the survey results. To address this problem, survey researchers typically keep their datasets secret in order to protect the privacy of respondents, and take additional steps to protect privacy when revealing aggregate results. These practices make it difficult to share survey data with other researchers, and in spite of the steps taken to protect privacy, respondents often remain concerned about the privacy of their responses. Differential privacy~\cite{dwork2006calibrating, dwork2014algorithmic} is a strong formal definition of individual privacy, and it has been previously applied to survey data to protect the privacy of respondents~\cite{evans2022differentially}. Differential privacy works by adding noise to results destined for public release. Releasing more results requires adding more noise, because correlations between results can reveal more information about a respondent than any single result does on its own. In differential privacy, this principle is called \emph{sequential composition}. For survey researchers, sequential composition means that the error in the differentially private statistics they release increases with the number of statistics. For summary statistics about per-question responses, the error can grow large for long surveys with many questions. We propose a novel mechanism for releasing differentially private statistics, the Tabular-DDP Mechanism\xspace, that can significantly reduce error for releases of multiple statistics---including summary statistics about survey results. The key insight of the Tabular-DDP Mechanism\xspace is that a single respondent's answers to different survey questions are \emph{not} necessarily 100\% correlated, so the amount of noise required under sequential composition is larger than necessary. The Tabular-DDP Mechanism\xspace works by building an approximate causal model of the distribution underlying the collected survey data, then using the model to estimate correlations between statistics in the desired data release. The mechanism leverages incomplete correlations (and independence) to reduce the amount of noise required, based on a relaxed privacy definition called dependent differential privacy~\cite{liu2016dependence}. In this paper, we formalize the Tabular-DDP Mechanism\xspace and prove that it satisfies dependent differential privacy. Then, we apply the Tabular-DDP Mechanism\xspace to real-world survey data from the American National Election Studies (ANES). We conduct an empirical evaluation of the accuracy of the Tabular-DDP Mechanism\xspace; the results suggest that the Tabular-DDP Mechanism\xspace can improve accuracy for summary statistics for this kind of survey data severalfold in comparison to the standard Laplace mechanism (with sequential composition).
\paragraph{Contributions.} We make the following contributions: \begin{itemize}[leftmargin=14pt] \item We initiate the study of optimal mechanisms for differentially private summary statistics for survey results, based on the insight that responses are not completely correlated. \item We define the Tabular-DDP Mechanism\xspace, a novel dependent differential privacy mechanism designed for incompletely-correlated high-dimensional statistics. \item We evaluate the Tabular-DDP Mechanism\xspace experimentally using real survey data to demonstrate its accuracy benefits. \end{itemize} \section{Background} \subsection{Survey Data} \begin{figure} \begin{mdframed} \begin{enumerate}[leftmargin=7pt] \item First, how much do you think people can change the kind of person they are? \\ \textit{$\;\;\;\bigcirc$ Completely $\;\;\;\bigcirc$ A lot $\;\;\;\bigcirc$ A moderate amount \\ $\;\;\;\bigcirc$ A little $\;\;\;\bigcirc$ Not at all}\\[1pt] \item If you wanted to defend an opinion of yours, how successfully do you think you could do that?\\ \textit{$\;\;\;\bigcirc$ Extremely successfully $\;\;\;\bigcirc$ Very successfully \\$\;\;\;\bigcirc$ Moderately successfully $\;\;\;\bigcirc$ Slightly successfully \\$\;\;\;\bigcirc$ Not successfully at all?} \end{enumerate} \end{mdframed} \caption{Example questions and responses from the ANES 2006 Survey. The survey has a total of 72 questions.} \label{fig:example_survey} \end{figure} The motivating use case for our work is privacy in survey data. Such data is collected by posing survey questions like the examples in Figure~\ref{fig:example_survey} to individuals, and aggregating and analyzing the responses. To protect privacy, the responses themselves are typically kept secret; even summary statistics about the responses are often not released publicly, because they could potentially reveal information about individual respondents. The standard approach for protecting privacy in survey data is \emph{de-identification}: the removal of \emph{personally identifiable information} (PII) like names and phone numbers~\cite{connors2019transparency, plutzer2019privacy} before sharing the data. However, de-identification approaches do not always fully protect privacy: they are frequently subject to re-identification attacks~\cite{henriksen2016re}, which recover the removed PII. In addition, aggressive de-identification can remove useful information from the data, reducing utility. We focus on producing privacy-preserving histograms of response counts for each question (i.e., how many respondents chose each possible response to each question), with a formal privacy guarantee. Based on this goal, the number of statistics we want to release grows linearly with the number of questions in the survey. \subsection{Differential Privacy} Differential privacy~\cite{dwork2006calibrating, dwork2014algorithmic} is a formal privacy definition based on the notion of indistinguishability. Informally, for every hypothetical individual who \emph{could} contribute data for an analysis, differential privacy ensures that the analysis results will not reveal whether or not the individual \emph{did} contribute data.
\begin{definition} [Differential Privacy] A randomized \emph{mechanism} $\mathcal{M}$ satisfies $(\epsilon,\delta)$-differential privacy if, for all datasets $D$ and $D'$ that differ in the data of one individual, and all possible sets of outcomes $S$: \[ \Pr[\mathcal{M}(D)\in \mathcal{S}] \leq e^{\epsilon} \Pr[\mathcal{M}(D')\in \mathcal{S}] + \delta \] \end{definition} Differential privacy is \emph{compositional}: if $\mathcal{M}_1$ satisfies $(\epsilon_1, \delta_1)$-differential privacy, and $\mathcal{M}_2$ satisfies $(\epsilon_2, \delta_2)$-differential privacy, then releasing the results of both mechanisms satisfies $(\epsilon_1 + \epsilon_2, \delta_1 + \delta_2)$-differential privacy. Differential privacy is closed under \emph{post-processing}: if $\mathcal{M}$ satisfies $(\epsilon, \delta)$-differential privacy, then $f \circ \mathcal{M}$ satisfies $(\epsilon, \delta)$-differential privacy for any function $f$. Differential privacy is defined in terms of \emph{neighboring databases} that differ in the data of one individual. The formal specification of this idea makes a big difference to the privacy guarantee obtained in practice. The standard approach~\cite{dwork2014algorithmic} is to assume that each individual contributes exactly one row to the database, so the \emph{distance} between two databases is equal to the number of rows on which they differ. When one individual may contribute multiple rows, a different distance metric must be used to ensure privacy. To achieve differential privacy, we can add noise as prescribed by one of several basic mechanisms. The two most commonly-used mechanisms are the \emph{Laplace mechanism}, which ensures pure $\epsilon$-differential privacy, and the \emph{Gaussian mechanism}, which ensures $(\epsilon, \delta)$-differential privacy. In both cases, the scale of the noise is determined by the query's \emph{sensitivity}, which measures the influence of a single individual's data on the query's output. The \emph{$L1$ sensitivity} of a function $f:\mathcal{D} \rightarrow \mathbb{R}^k$ is defined as follows, where $d$ is a distance metric on databases: \[ \Delta_1 f = \max_{D, D' : d(D, D') \leq 1} \lVert f(D) - f(D') \rVert_1 \] The \emph{$L2$ sensitivity} $\Delta_2 f$ is defined the same way, but with the $L2$ norm instead of the $L1$ norm. \begin{theorem}[The Laplace Mechanism] Given a numeric query $f:\mathcal{D} \rightarrow \mathbb{R}^k$, the Laplace mechanism perturbs the query answer $f(D)$ with a vector $(\eta_1,\cdots,\eta_k)$, where the $\eta_i$ are i.i.d. random variables drawn from the Laplace distribution centered at 0 with scale $b=\Delta_1 f/\epsilon$, denoted by $Lap(b)$. The Laplace mechanism preserves $(\epsilon,0)$-differential privacy. \end{theorem} \subsection{Dependent Differential Privacy} Sometimes, \emph{correlations may exist between individuals} that allow an adversary to make inferences about one individual based on the data of another. Consider, for example, a dataset of GPS locations that includes members of a chess club. If the chess club meets at 3pm on Thursdays, then the locations of the club's members at that time will be highly correlated with one another! The adversary may be able to learn the \emph{most popular} location of chess club members during the meeting time, and then \emph{infer}, based on their belief about correlations in the data, that an \emph{individual} chess club member is highly likely to have been at the popular location.
In this case, the correlation in the data enabled the inference: absent the knowledge that chess club members are likely to be in the same location during the meeting time, the adversary would not be able to make the inference. Importantly, \emph{differential privacy does not promise to prevent this inference}. Arguably, it is not a privacy violation at all. However, in some cases such inferences are highly likely to reveal information that may prove harmful, so a significant body of work has investigated ways of refining the definition of differential privacy to account for this risk~\cite{song2017pufferfish, liu2016dependence, niu2019making, kessler2015deploying, liang2020pufferfish, zhang2022attribute}. The most important for our setting is \emph{dependent differential privacy}, due to Liu et al.~\cite{liu2016dependence}. Dependent differential privacy can be seen as a strengthening of differential privacy, and it reduces to differential privacy when no correlations are present in the data. Dependent differential privacy is defined as follows: \begin{definition}[Dependent Neighboring Databases] Two databases $D(L, \mathcal{R})$ and $D'(L, \mathcal{R})$ are dependent neighboring databases if the modification of a tuple value in database $D(L, \mathcal{R})$ causes a change in at most $L-1$ other tuple values in $D'(L, \mathcal{R})$ due to the probabilistic dependence relationship $\mathcal{R}$ between the data tuples. \end{definition} \begin{definition}[Dependent Differential Privacy] A randomized mechanism $\mathcal{M}$ satisfies $(\epsilon, \delta)$-dependent differential privacy if for all pairs of dependent neighboring databases $D(L, \mathcal{R})$ and $D'(L, \mathcal{R})$ and all possible sets of outcomes $S$: \[ \Pr[\mathcal{M}(D(L, \mathcal{R}))\in \mathcal{S}] \leq e^{\epsilon} \Pr[\mathcal{M}(D'(L, \mathcal{R}))\in \mathcal{S}] + \delta \] \end{definition} This definition is designed to capture inferences made on the dependence relationship $\mathcal{R}$ while preserving important properties of differential privacy. Like differential privacy, dependent differential privacy is compositional and closed under post-processing. Liu et al.~\cite{liu2016dependence} propose a definition of \emph{dependent sensitivity} that allows the use of the Laplace mechanism to satisfy dependent differential privacy. Dependent sensitivity is large when significant correlations in the data could enable inferences like our earlier example, and is equal to $L1$ sensitivity when no correlations exist. \begin{definition}[Dependent sensitivity~\cite{liu2016dependence}] The dependent sensitivity of a query $Q$ with $L1$ sensitivity $\Delta Q$ is: % \[ DS^Q = \max_i \sum_{j = C_{i1}}^{C_{iL}} \rho_{i j} \Delta Q \] % where $\rho_{i j}$ represents the \emph{dependence coefficient} between records $i$ and $j$, and $C_{i1}, \dots, C_{iL}$ index the records correlated with record $i$. \end{definition}
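To make these definitions concrete, the following minimal sketch (our illustration, not code from the cited works) releases a histogram of counts with the Laplace mechanism; passing a dependent sensitivity $DS^Q$ in place of the $L1$ sensitivity yields the dependent-differential-privacy variant. The counts and parameter values are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    # Add i.i.d. Laplace(sensitivity/epsilon) noise to each coordinate of the
    # query answer. Under standard DP, `sensitivity` is the L1 sensitivity of
    # the query; under dependent DP it is the dependent sensitivity DS^Q.
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale,
                                     size=np.shape(true_answer))

# Histogram over one survey question: adding or removing one respondent
# changes one count by at most 1, so the L1 sensitivity is 1.
counts = np.array([210.0, 455.0, 10.0])
print(laplace_mechanism(counts, sensitivity=1.0, epsilon=1.0))
\end{verbatim}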
\section{Privacy for Survey Data} Differential privacy assumes complete correlation between the attributes of a single individual, and so releasing statistics about multiple columns of a tabular dataset requires the use of sequential composition. The key insight of our approach is the observation that complete correlation often \emph{does not} exist between attributes, so the use of sequential composition provides very loose upper bounds on the actual privacy loss for these statistics. We propose the use of a dependent differential privacy mechanism for releasing statistics about multiple attributes in tabular data, including survey data. Under valid assumptions about the distribution of the underlying data, dependent differential privacy provides strong privacy protection for participants in the dataset---but with less noise required. The primary challenges lie in modeling correlations between columns and in efficiently calculating the dependent sensitivity of queries over the data based on these models. \begin{figure} \centering \begin{tabular}{c c c} \begin{minipage}{90pt} \centering {\footnotesize \begin{tabular}{|c |c| c|} \hline Prize & First & Monty \\ Door & Selection & Opens \\ \hline 1 & 1 & 2 \\ 3 & 2 & 1 \\ 3 & 3 & 2 \\ \hline \end{tabular} } \textbf{Original Table} \end{minipage} & $\Rightarrow$ & \begin{minipage}{60pt} \centering {\footnotesize \begin{tabular}{|c|} \hline Prize Door\\ \hline 1\\ 3\\ 3\\ \hline First Selection \\ \hline 1\\ 2\\ 3\\ \hline Monty Opens \\ \hline 2\\ 1\\ 2\\ \hline \end{tabular} } \textbf{Transformed Table} \end{minipage}\\ \end{tabular} \caption{Example Tabular Data: Records of Monty Hall Games.} \label{fig:monty_data} \end{figure} \subsection{Example: Monty Hall} \begin{figure} \centering \begin{tikzpicture}[ node distance=1cm and 0cm, mynode/.style={draw,ellipse,text width=2cm,align=center} ] \node[mynode] (sp) {Prize Location}; \node[mynode,below right=of sp] (gw) {Monty Opens}; \node[mynode,above right=of gw] (ra) {First Selection}; \path (sp) edge[-latex] (gw) (gw) edge[latex-] (ra); \end{tikzpicture} \caption{Bayesian Network for the Monty Hall Problem} \label{fig:bayesian_network} \end{figure} As a simple example of our setting, consider the Monty Hall problem. The problem describes a game involving a contestant, a host (Monty Hall), and three doors. One door contains a goat, one contains a prize, and one is empty; the contestant's goal is to choose the door with the prize. The game proceeds in four steps: \begin{enumerate} \item The contestant chooses a door (the ``First Selection''). \item Monty opens a door that is \emph{neither} the ``First Selection'' \emph{nor} the door with the prize (revealing either the goat or nothing at all). \item The contestant is given the opportunity to change their selection to the other non-open door, or keep their first selection. \item The contestant's final selection is opened. If the door contains the prize, the contestant wins. \end{enumerate} The Bayesian network corresponding to the Monty Hall problem appears in Figure~\ref{fig:bayesian_network}. This problem is famous for being counterintuitive---we assume that the event of Monty opening one of the doors does not affect the probability that the contestant has made the right choice, but in fact it does! This effect is encoded in the Bayesian network: which door Monty opens depends on both the location of the prize \emph{and} the contestant's first selection. Imagine we have collected observations of Monty Hall games, as in Figure~\ref{fig:monty_data}, and we would like to release statistics about these games under differential privacy. We can release histograms for all three attributes summarizing the game outcomes, and add Laplace noise with scale $\frac{1}{\epsilon}$ to each one. By the sequential composition property of differential privacy, the total privacy cost is $3\epsilon$. Note that it is not possible to use parallel composition in this case, because adding or removing a whole row of data changes the results of all three histograms.
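As a concrete sketch of this baseline release (our illustration; the three games are the rows of Figure~\ref{fig:monty_data}), each per-attribute histogram receives independent Laplace noise of scale $1/\epsilon$, for a total cost of $3\epsilon$ under sequential composition:
\begin{verbatim}
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
epsilon = 1.0  # budget per histogram; total cost is 3*epsilon

# Records of Monty Hall games: (prize door, first selection, Monty opens).
games = [(1, 1, 2), (3, 2, 1), (3, 3, 2)]

def noisy_histogram(column, domain, epsilon):
    # One column's counting query has L1 sensitivity 1 (one row changes one
    # count by at most 1), so Laplace scale 1/epsilon gives epsilon-DP for
    # this histogram on its own.
    counts = Counter(column)
    return {v: counts[v] + rng.laplace(scale=1.0 / epsilon) for v in domain}

for name, column in zip(["Prize Door", "First Selection", "Monty Opens"],
                        zip(*games)):
    print(name, noisy_histogram(column, domain=(1, 2, 3), epsilon=epsilon))
\end{verbatim}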
\subsection{Modeling Correlations} \label{sec:model-corr} Calculating dependent sensitivity requires the ability to evaluate the probability that an attribute takes a particular value given the values of the other attributes in the same row. We model these correlations using a Bayesian network, in a similar way to previous work~\cite{liu2016dependence, song2017pufferfish}. \begin{figure} \centering \begin{tikzpicture}[ node distance=.6cm and .5cm, mynode/.style={draw,ellipse,text width=1cm,align=center} ] \node[mynode] (3) {V06P431}; \node[mynode, below=of 3] (4) {V06P432}; \node[mynode, below left=of 4] (5) {V06P433}; \node[mynode, below right=of 4] (6) {V06P434}; \node[mynode, below=of 5] (8) {V06P510}; \node[mynode, below=of 6] (7) {V06P505}; \path (3) edge[-latex] (4) (4) edge[-latex] (5) (4) edge[-latex] (6) (5) edge[-latex] (8) (6) edge[-latex] (7); \end{tikzpicture} \caption{Example Bayesian network for a subset of ANES 2006 Survey Data. Each node represents one column in the original dataset; each edge is associated with a conditional probability table encoding the conditional dependencies between column values.} \label{fig:ex_network} \end{figure} A Bayesian network is a graphical model (directed acyclic graph) that represents conditional dependencies between variables. In our setting, each column of the dataset is represented by a variable in the Bayesian network (i.e. a node in the graph), and the conditional probability table associated with each edge in the graph encodes the conditional dependencies between column values. We represent a Bayesian network learned from the dataset using a triple $(V, E, P)$, where $V$ and $E$ are the vertices and edges of the graph, respectively, and $P$ is the conditional probability table. For every pair of attributes $X_1, X_2 \in V$, if $X_1$ is conditionally dependent on $X_2$, then an edge $(X_1, X_2) \in E$ will connect them, and the conditional probability table will record the corresponding conditional probability distribution: for every pair of values $v_1$ and $v_2$ that attributes $X_1$ and $X_2$ could take, the table stores $P(X_1{=}v_1, X_2{=}v_2) = \Pr[X_2 = v_2 \mid X_1 = v_1]$. In a survey, we expect that the attributes of a single individual's results will be correlated with each other. We model the extent of this correlation using a Bayesian network, so that we can apply mechanisms for dependent differential privacy (described in Section~\ref{sec:depend-sens-tabul}). \subsection{Learning the Model} The major challenge of this approach is defining the Bayesian network itself. Previous work has assumed that the network is already known, and is public information~\cite{liu2016dependence, song2017pufferfish} (and often, that it has a specific form---e.g. a Markov chain). Our approach is to learn the Bayesian network from the data itself. Learning the structure of Bayesian networks from data is a challenging but well-studied problem~\cite{scanagatta2019survey, tsamardinos2006max}; our implementation uses the Pomegranate library for Python. An example Bayesian network learned from a subset of the columns of the ANES 2006 Survey dataset appears in Figure~\ref{fig:ex_network}. Approaches for learning structures like these do not scale well to large networks (e.g. hundreds of attributes---as is common in surveys). In order to make the model-learning component of our approach tractable, we split the attributes into smaller chunks (in our evaluation, we include 10 attributes per chunk), and learn a model for just the attributes in each chunk. Then, to provide privacy for the whole response, we add noise to each chunk separately and use the sequential composition property to determine the total privacy loss (a sketch of the chunking and table-estimation steps appears below).
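The following minimal sketch (ours; hypothetical column names and toy data) illustrates the chunking and conditional-probability-table estimation. Our actual implementation learns the network structure itself with the Pomegranate library; here, for brevity, each chunk's edges are taken as given and only the tables are estimated empirically.
\begin{verbatim}
import pandas as pd
from collections import defaultdict

def split_columns(df, k):
    # Split the dataset column-wise into chunks of at most k columns each.
    cols = list(df.columns)
    return [df[cols[i:i + k]] for i in range(0, len(cols), k)]

def conditional_table(df, parent, child):
    # Empirical CPT: P[child = v2 | parent = v1] for every observed pair.
    table = defaultdict(dict)
    for v1, group in df.groupby(parent):
        for v2, p in group[child].value_counts(normalize=True).items():
            table[v1][v2] = p
    return dict(table)

# Toy survey responses (hypothetical): two chunks of size k=2.
responses = pd.DataFrame({"q1": [1, 1, 2, 2], "q2": [1, 2, 2, 2],
                          "q3": [3, 3, 1, 1], "q4": [1, 1, 1, 2]})
for chunk in split_columns(responses, k=2):
    parent, child = chunk.columns[0], chunk.columns[1]
    print(parent, "->", child, conditional_table(chunk, parent, child))
\end{verbatim}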
\subsection{Privacy Considerations} \label{sec:priv-cons} The approach we have outlined raises several important concerns about the real-world privacy we can expect from the guarantee. First, Pufferfish privacy and its variants (including dependent differential privacy) represent weaker guarantees than $\epsilon$-differential privacy; in the context of survey data, the weakening of the guarantee is similar to the difference between node- and edge-level privacy in graphs~\cite{kasiviswanathan2013analyzing}. In our setting: \begin{itemize} \item \textbf{$\epsilon$-differential privacy} protects the presence or absence of \emph{one individual} in the survey results \item \textbf{$\epsilon$-dependent differential privacy} protects the presence or absence of \emph{one answer to a survey question} in the survey results \end{itemize} The difference between these guarantees is significant, and our weaker guarantee may not be applicable in some cases. In cases where survey answers may be sensitive, but participation in the survey is not, the dependent differential privacy guarantee may be appropriate, and enable better utility in the results. Second, learning a Bayesian network from the sensitive data presents two additional concerns: (1) the model's structure may reveal properties of the underlying distribution (e.g. enabling attribute inference), and (2) the model's structure may reveal properties of individual records in the data (enabling inferences about individuals). In our setting, (1) is not a major concern, since the underlying distribution of responses is what we would like to learn. However, concern (2) is an issue in our setting. It is possible that learning the Bayesian network from the data could reveal information specific to individuals---though in large datasets, this information is likely to be minimal. To alleviate this issue, a differentially private learning algorithm could be used~\cite{zhang2017privbayes}. An additional concern is that the learning process could produce a model that does not actually match the underlying distribution---either because the learning process fails to learn the correct model, or because the data does not represent the underlying distribution very well. In this case---as in other applications of Pufferfish privacy---unexpected privacy failures could occur due to the mismatch between \emph{expected} and \emph{actual} correlations in the data. All of these concerns represent limitations of our approach, and are important areas for future improvement.
\section{Dependent Sensitivity for Tabular Data} \label{sec:depend-sens-tabul} \begin{algorithm}[t] \SetKwData{count}{count} \SetKwData{Lap}{Lap} \SetKwData{noisyCount}{noisyCount} \SetKwData{total}{total} \SetKwData{pr}{Prob} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \let\oldnl\nl \newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}} \Input{\hspace{1pt}Database $D$ with $n$ columns, query $Q$ to be run on each column, chunk size $k$, privacy parameter $\epsilon$} \Output{\hspace{1pt}Privacy-preserving statistics for each column} $\{D_1, \dots, D_{\lfloor {n}/{k} \rfloor} \} \leftarrow \textsc{SplitColumns}(D, k)$\\ \For{$D_i \in \{D_1, \dots, D_{\lfloor {n}/{k} \rfloor} \}$}{ $(V, E, P) \leftarrow \textsc{LearnNetwork}(D_i)$\\ \For{$(X_i, X_j) \in E$}{ $\rho_{i,j} \leftarrow \max_{d_j, d_{i_1}, d_{i_2}} \log \Big( \frac{\Pr[X_i = d_{i_1}, X_j = d_j]} {\Pr[X_i = d_{i_2}, X_j = d_j]} \Big)$ } $DS \leftarrow {\sum_{i,j} \rho_{i,j}}$ \hfill \textit{calculate dependent sens.}\\ \For{$X_i \in \textit{columns}(D_i)$}{ $R_i \leftarrow Q(X_i) + \textsf{Lap}\Big( \frac{n DS}{k \epsilon} \Big)$ \hfill \textit{calculate noisy result}\\ } } \Return{$\textbf{R}$}\\[10pt] \nonl $\textsc{SplitColumns}(D, k)$ splits dataset $D$ column-wise into chunks, so that each chunk has at most $k$ columns.\\ \nonl $\textsc{LearnNetwork}(D_i)$ learns a causal model for dataset $D_i$, expressed as a Bayesian network.\\ \caption{The Tabular-DDP Mechanism.} \label{alg:mechanism} \end{algorithm} This section describes the \emph{Tabular-DDP Mechanism\xspace}, formalized in Algorithm~\ref{alg:mechanism}, which adapts the dependent sensitivity approach of Liu et al.~\cite{liu2016dependence} to the setting of multi-attribute tabular data. \paragraph{Transforming the data.} We adopt the definition of dependent sensitivity from Liu et al., as defined earlier. To scale noise to dependent sensitivity, we need the data to be represented in the form $X = \{X_1, \dots, X_n\}$, where we assume that the attributes of each $X_i$ may be \emph{completely} dependent on one another, and there may \emph{additionally} be correlations between two tuples $X_i$ and $X_j$. To fit these assumptions, we transform the tabular representation of our data table into a single-column table, as shown in Figure~\ref{fig:monty_data}, by concatenating the columns. After this transformation, each tuple has only a single attribute, and the domain of that attribute is the product of the table's original attributes. The transformed data fits the assumptions of dependent sensitivity. In the new representation, which has only a single column, the Bayesian network in Figure~\ref{fig:bayesian_network} encodes correlations between \emph{rows} rather than columns, as expected for dependent sensitivity. \paragraph{Calibrating noise to dependent sensitivity.} With the transformed data, it is possible to apply the mechanisms of Liu et al. directly: \begin{enumerate} \item Transform the tabular data to a single-column representation \item Add Laplace noise to the results of querying the transformed data, scaled to the dependent sensitivity of the query \end{enumerate} Next, we introduce a slight modification to the mechanism that avoids the need for explicit transformation of the data.
\paragraph{The Tabular-DDP Mechanism\xspace.} The Tabular-DDP Mechanism\xspace, defined in Algorithm~\ref{alg:mechanism}, simulates the process described above, and scales the additive Laplace noise to the \emph{effective} dependent sensitivity of applying a query to multiple attributes of a tabular dataset in parallel. First, the mechanism splits the dataset into chunks column-wise (line 1), to make the modeling task computationally tractable. Next, for each chunk, the mechanism learns a Bayesian network encoding the causal relationships in the data (line 3). The \textsc{LearnNetwork} function refers to an off-the-shelf tool for learning the network and returning a representation containing the conditional probability table, as described earlier (Section~\ref{sec:model-corr}). The larger the number of columns $k$ in each chunk, the more computationally challenging this task is. Then, the mechanism computes the effective dependent sensitivity by summing the dependence coefficients for all attributes in the table (lines 4--6). Here, the mechanism uses the conditional probability table in the learned Bayesian network to calculate the probability ratio: \[\frac{\Pr[X_j = d_j | X_i = d_{i_1}]} {\Pr[X_j = d_j | X_i = d_{i_2}]} \] Finally, the mechanism releases the result of running the query $Q$ and adding Laplace noise scaled to the dependent sensitivity (line 8). Like the process defined above, the Tabular-DDP Mechanism\xspace satisfies $\epsilon$-dependent differential privacy, as long as the learned Bayesian network accurately represents the underlying data distribution. Each column-wise chunk of the dataset is released under $\frac{k\epsilon}{n}$-dependent differential privacy; since there are at most $\frac{n}{k}$ chunks, the total privacy cost is bounded by $\epsilon$-dependent differential privacy by sequential composition.
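The following minimal sketch (ours) mirrors Algorithm~\ref{alg:mechanism} for a single chunk, assuming the conditional probability tables have already been estimated as sketched earlier, and using the conditional-ratio form described above. Two details are our added assumptions, not part of the algorithm as stated: unseen value pairs are smoothed to avoid division by zero, and $DS$ is floored at 1 (a counting query has sensitivity 1 even when a chunk has no dependent edges). All data and names are hypothetical.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def dependence_coefficient(cpt):
    # rho_{i,j}: the largest log-ratio, over child values d_j and parent
    # values d_{i1}, d_{i2}, of P[X_j=d_j | X_i=d_{i1}] over
    # P[X_j=d_j | X_i=d_{i2}]. `cpt` maps parent value -> {child value: prob}.
    child_values = {v for row in cpt.values() for v in row}
    rho = 0.0
    for d_j in child_values:
        ps = [row.get(d_j, 1e-6) for row in cpt.values()]  # smoothed (ours)
        rho = max(rho, float(np.log(max(ps) / min(ps))))
    return rho

def tabular_ddp_chunk(chunk, edges, cpts, n, k, epsilon):
    # Release one noisy histogram per column of one chunk, with Laplace noise
    # of scale n*DS/(k*epsilon) per Algorithm 1; DS floored at 1 (ours).
    ds = max(1.0, sum(dependence_coefficient(cpts[e]) for e in edges))
    scale = n * ds / (k * epsilon)
    return {col: chunk[col].value_counts()
                 + rng.laplace(scale=scale, size=chunk[col].nunique())
            for col in chunk.columns}

# Toy usage: one chunk of k=2 columns out of n=2 total, and a hypothetical
# CPT for the edge q1 -> q2 (e.g. estimated as sketched earlier).
chunk = pd.DataFrame({"q1": [1, 1, 2, 2], "q2": [1, 2, 2, 2]})
cpts = {("q1", "q2"): {1: {1: 0.75, 2: 0.25}, 2: {1: 0.25, 2: 0.75}}}
print(tabular_ddp_chunk(chunk, edges=[("q1", "q2")], cpts=cpts,
                        n=2, k=2, epsilon=1.0))
\end{verbatim}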
\paragraph{Privacy.} To prove privacy for the Tabular-DDP Mechanism\xspace, we will view the dataset implicitly in the single-column representation described above (with correlations between tuples, rather than columns) and leverage the privacy result of Liu et al.~\cite{liu2016dependence}: \begin{lemma}[Liu et al.~{\cite[Theorem 8]{liu2016dependence}}] The dependent sensitivity for publishing any query $Q$ over a dependent (correlated) dataset is \[DS^Q = \max_i DS_i^Q\] \label{lem:dep_sens} \end{lemma} Here, $i$ refers to a tuple index, and $DS_i^Q = \sum_j \rho_{i,j} \Delta Q_j$ is the dependent sensitivity for the $i$th tuple. If we can show that the Tabular-DDP Mechanism\xspace correctly calculates $DS^Q$ and adds Laplace noise scaled to that sensitivity, then it follows that the Tabular-DDP Mechanism\xspace satisfies dependent differential privacy. \begin{theorem} If the learned Bayesian network $(V, E, P)$ accurately represents the underlying distribution of the dataset $D$, then the Tabular-DDP mechanism (Algorithm~\ref{alg:mechanism}) satisfies $\epsilon$-dependent differential privacy. \end{theorem} \begin{proof} We show that Algorithm~\ref{alg:mechanism} satisfies $\frac{k\epsilon}{n}$-dependent differential privacy for each chunk of columns. By sequential composition, if there are at most $\frac{n}{k}$ chunks, then the mechanism has a total privacy cost of $\epsilon$-dependent differential privacy. For each chunk of columns, we have the following for the sensitivity calculated by Algorithm~\ref{alg:mechanism}, leveraging the fact that our counting queries have sensitivity $\Delta Q_j = 1$: \begin{align*} DS = & \sum_{i, j} \rho_{i,j}\\ \geq & \max_i \sum_j \rho_{i,j} \Delta Q_j\\ = & DS^Q \end{align*} % By Lemma~\ref{lem:dep_sens}, noise scaled to $\frac{DS}{\epsilon}$ will satisfy $\epsilon$-dependent differential privacy. Algorithm~\ref{alg:mechanism} adds Laplace noise scaled to: \[\frac{n DS}{k \epsilon}\] % which satisfies $\frac{k\epsilon}{n}$-dependent differential privacy, as required. \end{proof} \paragraph{Utility.} The same accuracy bounds proven by Liu et al.~\cite{liu2016dependence} also apply to the Tabular-DDP Mechanism\xspace. These results bound the error for any individual column of the statistics returned by the mechanism. \begin{definition}[$(\alpha, \beta)$-accuracy~\cite{dwork2014algorithmic, liu2016dependence}] A randomized algorithm $\mathcal{A}$ satisfies $(\alpha, \beta)$-accuracy for a query function $Q$ if: \[\Pr[\max_D \lvert \mathcal{A}(D) - Q(D) \rvert > \alpha] \leq \beta\] \end{definition} \begin{lemma} The Tabular-DDP Mechanism\xspace provides $(\alpha, \beta)$-accuracy for each column of the dataset $D$, for $\beta = \exp \Big(\frac{-\epsilon \alpha}{DS^Q} \Big)$. \label{lem:accuracy_column} \end{lemma} \begin{proof} Follows directly from Liu et al.~\cite{liu2016dependence}, Theorem 10. \end{proof} In addition, we can extend the utility bounds from Liu et al.~\cite{liu2016dependence} to bound $L1$ error for the Tabular-DDP Mechanism\xspace. We leverage a result on the sum of Laplace samples from Chan et al.~\cite{chan2011private} (which adapts the Chernoff bound). We define $L_1$ accuracy in the same way as $(\alpha, \beta)$ accuracy, but using the $L_1$ error. \begin{lemma}[Sum of independent Laplace samples ({\cite[Lemma 2.8]{chan2011private}})] Suppose the $\gamma_i$'s are independent random variables, where each $\gamma_i$ has Laplace distribution $\mathsf{Lap}(b_i)$. Suppose $Y := \sum_i \gamma_i$, and $b_M := \max_i b_i$. Let $\nu \geq \sqrt{\sum_i b_i^2}$ and $0 < \lambda < \frac{2\sqrt{2}\nu^2}{b_M}$. Then: \[\Pr[Y > \lambda] \leq \exp\Big(\frac{-\lambda^2}{8\nu^2}\Big)\] \label{lem:chernoff} \end{lemma} \begin{definition}[$L_1$-accuracy] A randomized algorithm $\mathcal{A}$ satisfies $L_1 (\alpha, \beta)$-accuracy for a query function $Q$ if: \[\Pr[\max_D \lVert \mathcal{A}(D) - Q(D) \rVert_1 > \alpha] \leq \beta\] \end{definition} \begin{theorem} The Tabular-DDP Mechanism\xspace provides $L_1 (\alpha, \beta)$-accuracy for $\beta = \exp\Big(\frac{- \sqrt{2}\epsilon\alpha}{4 DS}\Big)$. \end{theorem} To prove the accuracy bound, we consider that the $L_1$ error introduced by the mechanism is a result \emph{only} of the noise samples added to each result in line 8 of Algorithm~\ref{alg:mechanism} (i.e. $\lVert \mathcal{A}(D) - Q(D) \rVert_1$ is exactly equal to the sum of the noise samples added by the mechanism). Each of these noise samples is conditionally independent of the others, so Lemma~\ref{lem:chernoff} applies, and gives an upper bound on the $L_1$ error resulting from the noise. \begin{proof} Set $\lambda = \alpha = \frac{2\sqrt{2}\epsilon\nu^2}{DS}$ and $\nu = \sqrt{n}\frac{DS}{\epsilon}$.
By Lemma~\ref{lem:chernoff}, we have: \begin{align*} \Pr[\max_D \lVert \mathcal{A}(D) - Q(D) \rVert_1 > \alpha] \leq & \exp\Big(\frac{-\alpha^2}{8\nu^2}\Big) \\ = & \exp\Big(\frac{-\alpha\alpha}{8\nu^2}\Big) \\ = & \exp\Big(\frac{-\alpha\frac{2\sqrt{2}\epsilon\nu^2}{DS}}{8\nu^2}\Big) \\ = & \exp\Big(\frac{- \sqrt{2}\epsilon\alpha}{4 DS}\Big) \\ \end{align*} \end{proof} Thus the accuracy of the Tabular-DDP Mechanism\xspace is independent of the dimensionality of the statistic being released, except as encoded in the dependent sensitivity. \paragraph{Limitations.} Our approach has several important limitations. First, as discussed in Section~\ref{sec:priv-cons}, the privacy guarantee is strictly weaker than standard $\epsilon$-differential privacy, and additional unexpected privacy failures could occur if the learned Bayesian networks do not actually correspond to the underlying population distribution. Second, the Tabular-DDP Mechanism\xspace is based on Laplace noise, and uses $L_1$ sensitivity; for high-dimensional data, if $(\epsilon, \delta)$-differential privacy is sufficient, the Gaussian mechanism with $L_2$ sensitivity may produce better accuracy. We hope to extend the Tabular-DDP Mechanism\xspace to Gaussian noise with $L_2$ sensitivity in future work. \section{Evaluation} \begin{figure*} \centering \hspace*{-18mm}\includegraphics[width=1.2\textwidth]{figures/accuracy_results_0.1.pdf} \caption{Accuracy results: comparison between the Tabular-DDP mechanism and the Laplace mechanism for $\epsilon=0.1$.} \label{fig:all_results} \end{figure*} \begin{figure} \centering \begin{tabular}{l cc} \hline \textbf{ANES Survey} & \textbf{Questions} & \textbf{Responses}\\ \hline Pilot Study 2006 & 72 & 675 \\ Eval. of Govt. \& Society 2010 & 117 & 1275 \\ Eval. of Govt. \& Society 2011(a) & 139 & 1315 \\ Eval. of Govt. \& Society 2011(b) & 139 & 1240 \\ Eval. of Govt. \& Society 2012 & 190 & 1314 \\ Pilot Study 2013 & 141 & 1635 \\ Pilot Study 2019 & 402 & 3165 \\ Pilot Study 2020 & 153 & 3080\\ \hline \end{tabular} \caption{Evaluation Datasets.} \end{figure} \begin{figure} \centering \includegraphics[width=.45\textwidth]{figures/accuracy_results_multk_line.pdf} % \caption{Accuracy results: effect of the chunk size $k$ for $\epsilon=1.0$. The green dashed line is a rough representation of the trend in $k$'s effect on accuracy.} \label{fig:k_results_results} \end{figure} \begin{figure*} \centering \begin{tabular}{c c} \includegraphics[width=.45\textwidth]{figures/scalability_results_10.pdf} & \includegraphics[width=.45\textwidth]{figures/scalability_results_15.pdf} \\ $k=10$ & $k=15$\\ \end{tabular} \caption{Scalability results: per-component running time for the Tabular-DDP algorithm on each dataset. Note the difference in $y$ axis scales. Model learning time increases exponentially with the value of $k$, and dominates for larger values of $k$.} \label{fig:scalability} \end{figure*} Our empirical evaluation seeks to answer two questions: \begin{enumerate} \item \textbf{Q1: Accuracy.} How does the accuracy of the Tabular-DDP Mechanism\xspace compare to the Laplace mechanism? \item \textbf{Q2: Scalability.} How do the size and dimensionality of the dataset impact the running time of the Tabular-DDP Mechanism\xspace? \end{enumerate} To answer the first question, we evaluated the accuracy of the Tabular-DDP Mechanism\xspace for computing summary statistics for eight survey datasets released by the American National Election Studies.
The results suggest that the Tabular-DDP Mechanism\xspace can significantly increase accuracy over the Laplace mechanism for these real-world datasets. To answer the second question, we measured running time for each component of the Tabular-DDP Mechanism\xspace; the results suggest that the Tabular-DDP Mechanism\xspace scales to realistic datasets, and that the primary scalability challenge comes from learning the Bayesian network from the data. \paragraph{Datasets.} Our datasets were drawn primarily from the American National Election Studies (ANES) database. Each dataset included columns corresponding to the answers to the questions listed in the accompanying metadata datasheet. Multiple-choice questions were frequently designated by indexed column names: for example, in the ANES 2011 dataset, the responses to a multiple-choice question are represented as sequential columns (e.g., ``c3c1'', ``c3c2''), where each possible answer is indicated by a numeric code or by the sentinel value ``-1. Inapplicable, legitimate skip''; questions with branching logic likewise used sentinel numeric values. \paragraph{Methodology.} We compared the Tabular-DDP Mechanism\xspace to the Laplace mechanism, which provides $\epsilon$-differential privacy and assumes that attributes in each individual record may be completely correlated with one another. To simulate the computation of summary statistics for each survey, we ran a histogram query on each column of the survey results (i.e. we queried the count of each response category for each question of the survey). \subsection{Experiment 1: Accuracy} \paragraph{Experiment Setup.} Our first experiment examines the accuracy of the Tabular-DDP Mechanism\xspace by comparing it to the standard Laplace mechanism. We ran 100 trials for each experiment, and report $L2$ error. We used $\epsilon \in \{0.1, 1, 10\}$ for both mechanisms. \paragraph{Results.} The results for $\epsilon = 0.1$ appear in Figure~\ref{fig:all_results}. Additional results for other values of $\epsilon$ appear in Figure~\ref{fig:all_results_appendix} in the Appendix, and are consistent with these. We set $k=10$ (i.e. 10 columns per ``chunk'' of the dataset, so that each Bayesian network covers 10 columns). The results show that the Tabular-DDP Mechanism\xspace consistently outperforms the Laplace mechanism in terms of accuracy at a given level of privacy. Figure~\ref{fig:k_results_results} shows accuracy results for various values of the chunk size $k$. The results suggest that the accuracy advantage of the Tabular-DDP Mechanism\xspace over the Laplace mechanism increases as $k$ increases; when $k=5$, for example, the accuracy advantage of the Tabular-DDP Mechanism\xspace is fairly small, and it is much larger when $k=15$. These results match our expectations about the Tabular-DDP Mechanism\xspace: as $k$ increases, the Tabular-DDP Mechanism\xspace takes better advantage of the partiality of correlations between attributes. \subsection{Experiment 2: Scalability} \paragraph{Experiment Setup.} Our second experiment measures the running time of the Tabular-DDP Mechanism\xspace to determine whether or not it can scale to realistic datasets. We instrumented our implementation to separately measure the running time of (1) learning the Bayesian network from the data, (2) calculating the dependent sensitivity, and (3) generating the noise samples themselves.
We ran the Tabular-DDP Mechanism\xspace on the same datasets and recorded the running time of each component; we performed 5 trials and report the average running time of each component. As before, we set $k=10$ (i.e. 10 columns per ``chunk'' of the dataset, so that each Bayesian network covers 10 columns). \paragraph{Results.} The results appear in Figure~\ref{fig:scalability}, and suggest that the Tabular-DDP Mechanism\xspace is capable of scaling to realistic datasets like the ANES surveys we considered. The running time for the Tabular-DDP Mechanism\xspace in this experiment is dominated by the time to calculate dependent sensitivity based on the Bayesian network associated with the target columns. Running time was higher for surveys with more questions (e.g. the ANES 2012 and 2019 surveys, which had more columns than other datasets). For all of the datasets we considered, when $k=10$, the Tabular-DDP Mechanism\xspace was able to compute summary statistics for all columns in about 10 seconds or less. For small values of $k$, the running time is dominated by the time taken to calculate dependent sensitivity. However, as $k$ increases, the model learning time quickly dominates the total time, due to the fundamental scalability challenges of learning models over many attributes. The running time for the ANES 2020 survey increases 10x---from about 10 seconds to over 100 seconds---when $k$ increases from 10 to 15. \subsection{Discussion} Based on the results of our experiments, we answer the original research questions as follows. \textbf{(1)}: for the survey data we studied, the accuracy of the Tabular-DDP Mechanism\xspace improves on the Laplace mechanism---when $k \geq 10$, the improvement is often 2x or more. \textbf{(2)}: the Tabular-DDP Mechanism\xspace is slower than the Laplace mechanism, but for $k \leq 10$, it scales easily to realistic survey datasets with hundreds of columns and thousands of responses. Our experimental results clearly demonstrate the tradeoff between running time and accuracy in the Tabular-DDP Mechanism\xspace: accuracy increases with larger values of $k$, but running time also increases (exponentially!). Fortunately, the results suggest that significant accuracy gains can be achieved with small enough values of $k$ that running time is reasonable. More scalable approaches for learning Bayesian networks may allow increasing $k$ further, and thus improving accuracy even more. \section{Related Work} A significant amount of previous work has considered the privacy implications of correlations within sensitive data. The most general framework for formalizing privacy while taking correlations into account is Pufferfish privacy~\cite{song2017pufferfish}, introduced earlier. The Pufferfish framework allows specifying any model of correlations in the underlying population as a probability distribution over possible datasets. Dependent differential privacy~\cite{liu2016dependence} can be defined as a particular variant of Pufferfish privacy. Our work builds on these definitions, providing a new mechanism that satisfies dependent differential privacy (and thus, Pufferfish privacy). Many different mechanisms have been proposed for Pufferfish privacy; most are designed for a specific purpose where the correlations in the underlying data are known ahead of time to the analyst and have a specific structure. Many of these consider \emph{temporal} correlations---multiple data records contributed by the same individual over time---and model these correlations using Markov chains.
Solutions have been proposed for social media settings~\cite{song2017pufferfish}, smart meter data~\cite{niu2019making, kessler2015deploying}, and web browsing data~\cite{liang2020pufferfish}. In contrast to these approaches, the Tabular-DDP Mechanism\xspace is designed to learn a general model of the underlying correlations from the data itself. Recent work by Zhang et al.~\cite{zhang2022attribute} proposes Pufferfish mechanisms for \emph{attribute privacy}. This work uses similar techniques to ours, but has a different privacy goal: attribute privacy aims to prevent \emph{population-level} inferences about attributes of the dataset (for example, the distribution of race and gender in the original dataset). Our work, in contrast, aims to prevent inferences about \emph{individuals}. Previous work has explored the application of differential privacy to protect privacy in survey data~\cite{d2015differential, evans2021statistically, evans2022differentially}. This work has focused on ensuring statistical validity and avoiding bias in the inferences made using differentially private statistics. Previous work in this area has applied well-known differential privacy mechanisms like the Laplace mechanism. \section{Conclusion} We have presented the Tabular-DDP Mechanism\xspace, a novel dependent differential privacy mechanism that can improve accuracy over the standard Laplace mechanism for high-dimensional statistics that are not completely correlated. We have shown how to apply the Tabular-DDP Mechanism\xspace to protect privacy in summary statistics for survey data; our experimental results show a significant improvement in accuracy compared to the standard Laplace mechanism in that setting. \bibliographystyle{plain} \subsection{Privacy Definition} We define attribute differential privacy (ADP) as a relaxation of $\epsilon$-differential privacy. ADP relaxes the \emph{distance metric} on pairs of databases. The ``standard'' distance metric on a pair of databases measures the \emph{number of rows on which they differ}; the distance metric for ADP instead measures the \emph{number of attributes on which they differ}. \begin{definition}[Distance metric for ADP] The distance between two databases $D_1$ and $D_2$ under attribute differential privacy (ADP) is: % \[d_{ADP}(D_1, D_2) = \sum_{c \in cols} \lVert D_1[c] - D_2[c] \rVert_1\] \label{def:distance_metric} \end{definition} \noindent The formal definition of attribute differential privacy is the same as the definition of differential privacy, but uses the distance metric $d_{ADP}$. \begin{definition}[Attribute Differential Privacy (ADP)] A randomized mechanism $\mathcal{M}: \mathcal{D} \rightarrow \mathbb{R}^k$ satisfies $\epsilon$-attribute differential privacy ($\epsilon$-ADP) if for all neighboring databases $D_1$ and $D_2$ such that $d_{ADP}(D_1, D_2) \leq 1$, and all possible sets of outcomes $O$: % \[ \Pr[\mathcal{M}(D_1) \in O] \leq \exp(\epsilon) \cdot \Pr[\mathcal{M}(D_2) \in O]\] \end{definition} The implication of this definition is that our privacy guarantee no longer protects the \emph{presence or absence of an individual} in the data, but rather the \emph{presence or absence of a single attribute} in the data. These two guarantees are clearly not interchangeable! However, in many contexts, privacy harm to individuals comes not from the disclosure of their participation, but from the disclosure of their sensitive attributes in the underlying data. ADP is designed to provide higher accuracy statistics in this common context. 
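To illustrate the metric in Definition~\ref{def:distance_metric}, the sketch below (ours) computes $d_{ADP}$ for categorical tables; for categorical survey data we read the per-column $L_1$ norm as the number of cells in that column that differ, which is our interpretation rather than something the definition spells out. The tables are the Monty Hall example.
\begin{verbatim}
import pandas as pd

def d_adp(d1, d2):
    # Sketch of the ADP distance metric: sum over columns of the per-column
    # difference, counting the cells that differ (our reading for
    # categorical data).
    assert list(d1.columns) == list(d2.columns) and len(d1) == len(d2)
    return int(sum((d1[c] != d2[c]).sum() for c in d1.columns))

# Two Monty Hall tables that differ in a single attribute value are
# neighbors under ADP (distance 1), even though no whole row was added
# or removed.
a = pd.DataFrame({"prize": [1, 3, 3], "first": [1, 2, 3], "monty": [2, 1, 2]})
b = a.copy()
b.loc[0, "monty"] = 3
print(d_adp(a, b))  # 1
\end{verbatim}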
\subsection{ADP for Monty Hall} As an example of the ADP guarantee, consider the data collected from Monty Hall games appearing in Figure~\ref{fig:monty_data}. ADP protects the values of \emph{attributes} in this data, rather than the presence or absence of whole \emph{rows}. An ADP mechanism for answering queries over this data may reveal that a game took place, but should not reveal any of the individual attributes associated with that game. This goal is reflected in the distance metric for databases (Definition~\ref{def:distance_metric}): two databases of Monty Hall games are neighbors if they differ in one attribute value. Databases that differ in more than one attribute value, including databases that differ in whole rows, are not neighbors. Unlike in differential privacy, adding or removing a row does not produce a neighboring database.

Achieving ADP is challenging, due to possible correlations between attributes. The Bayesian network in Figure~\ref{fig:bayesian_network} tells us that we can release statistics about the first two columns using parallel composition, because they are conditionally independent. For example, we might release noisy histograms over the ``Prize Door'' and ``First Selection'' columns: we can add Laplace noise scaled to $\frac{1}{\epsilon}$ to each histogram and achieve $\epsilon$-ADP \emph{in total}, by parallel composition. If we would also like to release a histogram over the ``Monty Opens'' column, our accounting task becomes more complicated: this column is \emph{not} conditionally independent of the others. In standard $\epsilon$-differential privacy, we would use sequential composition. Under attribute differential privacy, when the dependence is not fully deterministic (i.e., does not hold with probability 1), as in the Monty Hall problem, we can bound the privacy loss more tightly using the mechanism proposed in Section~\ref{sec:adp-mech}.

\section{An ADP Mechanism for Counting Queries} \label{sec:adp-mech} \paragraph{Problem setup.} We have two columns, $A$ and $B$, with domains $dom(A) = \mathcal{A}$ and $dom(B) = \mathcal{B}$. Let $Q_A$ and $Q_B$ be linear counting queries over the two columns (e.g., components of a histogram query, which count rows after filtering). Let $L(\cdot)$ be the Laplace mechanism.

\subsection{Dependent Sensitivity} \begin{definition}[Dependent sensitivity] Let $f_A : \mathcal{A} \rightarrow \mathbb{R}^k$ be a query over attribute $A$ with $L_1$ sensitivity of 1. Let $C_1, \dots, C_k$ be the attributes that $A$ conditionally depends on, with domains $\mathcal{C}_1, \dots, \mathcal{C}_k$. The \emph{dependent $L_1$ sensitivity} of $f_A$ is: \[\max_{a, a' \in \mathcal{A}} \; \max_{(c_1, \dots, c_k) \in \mathcal{C}_1 \times \dots \times \mathcal{C}_k} \frac{\Pr[a \mid c_1, \dots, c_k ]} {\Pr[a' \mid c_1, \dots, c_k ]} \] \end{definition} When the set of dependencies $c_1, \dots, c_k$ is empty (i.e., the attribute $A$ is independent of the others), we define the dependent sensitivity to be 1, recovering the standard $L_1$ sensitivity. \begin{definition}[Laplace Mechanism for ADP] Let $f_A : \mathcal{A} \rightarrow \mathbb{R}$ be a query over attribute $A$ with dependent sensitivity $s$. The following mechanism satisfies $\epsilon$-ADP: % \[F_A(D) = f_A(D[A]) + \textsf{Lap}\Big(\frac{s}{\epsilon}\Big)\] % where $\textsf{Lap}(\sigma)$ is a draw from a Laplace distribution with mean 0 and scale $\sigma$. \label{def:laplace_mech} \end{definition}
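The following Python sketch illustrates both pieces just defined: computing the dependent sensitivity from a conditional probability table, and releasing a query answer with Laplace noise scaled to it. It is a minimal sketch under the assumption that all CPT entries are strictly positive; entries equal to zero make the ratio unbounded and would require separate handling.

\begin{verbatim}
import numpy as np

def dependent_sensitivity(cpt: dict) -> float:
    # cpt maps each combination of parent-attribute values to a
    # dict {a: Pr[a | parents]} over the values of attribute A.
    # An empty cpt means A has no dependencies; by convention the
    # dependent sensitivity is then 1.
    if not cpt:
        return 1.0
    # For each conditioning combination c, the worst-case ratio
    # Pr[a | c] / Pr[a' | c] is (max prob) / (min prob).
    return max(max(p.values()) / min(p.values()) for p in cpt.values())

def laplace_adp(answer: float, s: float, epsilon: float) -> float:
    # Laplace mechanism for ADP: noise scaled to the dependent
    # sensitivity s rather than to a sequentially composed budget.
    return answer + np.random.laplace(loc=0.0, scale=s / epsilon)
\end{verbatim}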
\noindent It is no surprise that we can define the Laplace mechanism for ADP; this definition also satisfies $\epsilon$-differential privacy, and by itself provides no utility benefit. In the next section, we define a parallel version of the mechanism that yields better utility.

\subsection{Parallel Laplace Mechanism} Consider two sensitivity-1 queries $f_A$ and $f_B$ on attributes $A$ and $B$ respectively. Releasing both query results under $\epsilon$-differential privacy requires sequential composition. Under ADP, we can instead apply the \emph{Parallel Laplace Mechanism}.

\begin{definition}[Parallel Laplace Mechanism] Let $f_A : \mathcal{A} \rightarrow \mathbb{R}$ and $f_B : \mathcal{B} \rightarrow \mathbb{R}$ be queries over attributes $A$ and $B$ respectively. Let $s_A$ be the dependent sensitivity of $f_A$, and $s_B$ be the dependent sensitivity of $f_B$. The following mechanism satisfies $\epsilon$-ADP: % \[F_{A,B}(D) = \Big(f_A(D[A]) + \textsf{Lap}\Big(\frac{s_A}{\epsilon}\Big), f_B(D[B]) + \textsf{Lap}\Big(\frac{s_B}{\epsilon}\Big)\Big)\] \label{def:laplace_mech_parallel} \end{definition}

\noindent Under sequential composition, the privacy cost would be $2\epsilon$; as we will see later, the savings in privacy cost grows as the number of attributes increases. This kind of parallel composition is impossible under $\epsilon$-differential privacy, and it is the key factor behind the increased utility possible under ADP.

\begin{theorem}[Privacy of the Parallel Laplace mechanism] \label{thm:parallel_laplace} The parallel Laplace mechanism for ADP (Definition~\ref{def:laplace_mech_parallel}) satisfies $\epsilon$-ADP. \end{theorem}

\begin{proof} We sketch the argument for the case in which attribute $A$ changes (to $A'$); the case in which $B$ changes is symmetric. Writing $\hat{c_a}$ and $\hat{c_b}$ for the released values:
\begin{align*}
&\max_{A, A', B} \frac{\Pr[L(Q_A(A)) = \hat{c_a}, L(Q_B(B)) = \hat{c_b} ]} {\Pr[L(Q_A(A')) = \hat{c_a}, L(Q_B(B)) = \hat{c_b} ]} & \text{def. of privacy}\\
= &\max_{A, A'} \frac{\Pr[L(Q_A(A)) = \hat{c_a} ] \sum_{B} \Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A]} {\Pr[L(Q_A(A')) = \hat{c_a} ] \sum_{B} \Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A']} & \text{law of total prob.}\\
\leq &\max_{A, A'} \frac{\Pr[L(Q_A(A)) = \hat{c_a} ]} {\Pr[L(Q_A(A')) = \hat{c_a} ]} \cdot \max_{A, A'} \frac{\sum_{B} \Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A]} {\sum_{B} \Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A']} & \text{loose upper bound}\\
\leq & \exp(\epsilon) \cdot \max_{A, A'} \frac{\sum_{B} \Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A]} {\sum_{B} \Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A']} & \text{def. Laplace mech.}\\
\leq & \exp(\epsilon) \cdot \max_{A, A'} \max_B \frac{\Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A]} {\Pr[L(Q_B(B)) = \hat{c_b}] \Pr[B \mid A']}& \text{loose upper bound}\\
= & \exp(\epsilon) \cdot \max_{A, A'} \max_B \frac{\Pr[B \mid A]} {\Pr[B \mid A']}& \text{cancel}\\
= & \exp\Big(\epsilon + \log \max_{A, A'} \max_B \frac{\Pr[B \mid A]} {\Pr[B \mid A']}\Big)& \text{log rule}\\
\end{align*}
The remaining factor measures how strongly $B$ depends on $A$; the dependent sensitivity $s_B$ is defined to control exactly this kind of conditional-probability ratio, so scaling the noise on $f_B$ by $s_B/\epsilon$ absorbs it, yielding a total privacy cost of $\epsilon$. \end{proof}
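A minimal sketch of the Parallel Laplace Mechanism follows; the query answers and dependent sensitivities are assumed to be precomputed (e.g., via the sketch in the previous subsection). Note that both answers are released at the full budget $\epsilon$, whereas sequential composition would give each query only $\epsilon/2$.

\begin{verbatim}
import numpy as np

def parallel_laplace(f_a: float, s_a: float,
                     f_b: float, s_b: float,
                     epsilon: float):
    # Each answer is noised at the full budget epsilon, scaled by
    # its own dependent sensitivity (Definition above).
    noisy_a = f_a + np.random.laplace(scale=s_a / epsilon)
    noisy_b = f_b + np.random.laplace(scale=s_b / epsilon)
    return noisy_a, noisy_b
\end{verbatim}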
\begin{figure}
\centering
\begin{tabular}{l l || c c c}
Prize Door & First Selection & MO $= 1$ & MO $= 2$ & MO $= 3$ \\
\hline \hline
Door 1 & Door 1 & 0 & $\frac{1}{2}$ & $\frac{1}{2}$ \\
Door 1 & Door 2 & 0 & 0 & 1 \\
Door 1 & Door 3 & 0 & 1 & 0 \\
Door 2 & Door 1 & 0 & 0 & 1 \\
Door 2 & Door 2 & $\frac{1}{2}$ & 0 & $\frac{1}{2}$ \\
Door 2 & Door 3 & 1 & 0 & 0 \\
Door 3 & Door 1 & 0 & 1 & 0 \\
Door 3 & Door 2 & 1 & 0 & 0 \\
Door 3 & Door 3 & $\frac{1}{2}$ & $\frac{1}{2}$ & 0 \\
\end{tabular}
\caption{Conditional probability table for the ``Monty Opens'' (MO) attribute, Monty Hall problem.}
\end{figure}

\paragraph{Example: parallel ADP for Monty Hall.} We now apply the parallel Laplace mechanism for ADP to the Monty Hall problem defined earlier. For simplicity of presentation, we release counts only for the ``First Selection'' and ``Monty Opens'' attributes.

\begin{proposition} Releasing the following two values satisfies a \emph{total} of $\epsilon$-ADP: \begin{enumerate} \item $Q_{\text{First Selection} = 1}(D) + \textsf{Lap}\Big(\frac{s_1}{\epsilon}\Big)$ \item $Q_{\text{Monty Opens} = 1}(D) + \textsf{Lap}\Big(\frac{s_2}{\epsilon}\Big)$ \end{enumerate} where $s_1$ and $s_2$ are the dependent sensitivities of the two queries. \end{proposition}

\begin{proof} Let $A$ denote ``First Selection'' and $B$ denote ``Monty Opens''. We need to show that: % \[\max_{D, D'} \frac{\Pr[L(Q_A(D)) = \hat{c_a}, L(Q_B(D)) = \hat{c_b} ]} {\Pr[L(Q_A(D')) = \hat{c_a}, L(Q_B(D')) = \hat{c_b} ]} \leq \exp(\epsilon)\] Under the distance metric for ADP, there are two cases: either attribute $A$ changes or attribute $B$ changes. \paragraph{Case 1: attribute $A$ changes.} We need to show that: % \[\max_{A, A', B} \frac{\Pr[L(Q_A(A)) = \hat{c_a}, L(Q_B(B)) = \hat{c_b} ]} {\Pr[L(Q_A(A')) = \hat{c_a}, L(Q_B(B)) = \hat{c_b} ]} \leq \exp(\epsilon)\] \paragraph{Case 2: attribute $B$ changes.} We need to show that: % \[\max_{A, B, B'} \frac{\Pr[L(Q_A(A)) = \hat{c_a}, L(Q_B(B)) = \hat{c_b} ]} {\Pr[L(Q_A(A)) = \hat{c_a}, L(Q_B(B')) = \hat{c_b} ]} \leq \exp(\epsilon)\] \noindent In each case, the bound follows as in the proof of Theorem~\ref{thm:parallel_laplace}: the factor corresponding to the changed attribute is bounded by its Laplace noise scaled to the dependent sensitivity ($s_1$ or $s_2$), and the factor corresponding to the unchanged attribute is bounded via the conditional-probability ratio, which the dependent sensitivity controls. \end{proof}

\section{An ADP Mechanism for Histograms}
{ "timestamp": "2022-09-23T02:12:27", "yymm": "2209", "arxiv_id": "2209.10908", "language": "en", "url": "https://arxiv.org/abs/2209.10908" }