
Columns: text (string), meta (dict)
\section{Introduction} Reconstruction of signals from cross correlations has interesting applications in many fields of science and engineering such as optics, quantum mechanics, electron microscopy, antenna testing, seismic interferometry, or imaging in general \cite{Garnier16, Griffiths07, wap10, schuster}. Using cross correlations of measurements collected at different locations presents several advantages since the inversion does not require knowledge of the emitter positions or of the shapes of the probing pulses, as only time differences matter. Cross correlations have been used, for example, when imaging is carried out with opportunistic sources whose properties are mainly unknown \cite{Garnier09,Garnier14,Daskalakis16,Helin18}. In many applications, we seek information about an object or a signal $\mbox{\boldmath$\rho$}\in\mathbb{C}^{K}$ given data $\mbox{\boldmath$b$}\in\mathbb{C}^{N}$, most often related through a linear transformation \begin{equation} \label{eq:invprob} {\cal A} \,\mbox{\boldmath$\rho$} = \mbox{\boldmath$b$} \, , \end{equation} where ${\cal A}\in\mathbb{C}^{N\times K}$ is the measurement or model matrix. When the signal $\mbox{\boldmath$\rho$}$ is compressed or the data are scarce, $N < K$, so (\ref{eq:invprob}) is underdetermined and infinitely many signals or objects match the data. However, if the signal $\mbox{\boldmath$\rho$}$ is sparse, so only $M\ll K$ components are different from zero, $\ell_1$-minimization algorithms that solve \begin{equation}\label{l1normsol} \mbox{\boldmath$\rho$}_{\ell_1}=\mathop{\mbox{argmin}} \|\vect \rho\|_{\ell_1}, \hbox{ subject to } {\cal A} \vect \rho= \vect b\, \end{equation} can recover the true signal efficiently even when $N\ll K$. On the other hand, there are situations in which it is difficult or impossible to record high quality data, $\mbox{\boldmath$b$}$, and it is more convenient to use the cross correlated data contained in the matrix \begin{equation} \label{eq:cross0} B = \mbox{\boldmath$b$}\,\mbox{\boldmath$b$}^*\in\mathbb{C}^{N\times N}\, \end{equation} to find the desired information about the object or signal $\mbox{\boldmath$\rho$}$ (see \cite{Garnier09} and references therein). One way to address this problem is to lift it to the matrix level and reformulate it as a low-rank matrix linear system, which can be solved by using nuclear norm minimization, as was suggested in \cite{CMP11,Candes13} for imaging with intensities only. This makes the problem convex over the appropriate matrix vector space and, thus, the unique true solution can be found using well established algorithms, although these are not as efficient as $\ell_1$-minimization algorithms \textcolor{black}{that only involve lightweight operations such as matrix-vector multiplications \cite{Beck09}}. Furthermore, the big caveat is that the computational cost rapidly becomes prohibitively large because the dimension of the problem increases quadratically with $K$, making its solution infeasible. In this paper we suggest a different approach.
We propose to consider the linear matrix equation \begin{equation} \label{eq:modelC-intro} {\cal A} X {\cal A}^* = B\, \end{equation} for the correlated signal $X=\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*\in\mathbb{C}^{K\times K}$, vectorize both sides so \begin{equation} \label{eq:vec00} \mbox{vec}({\cal A} X {\cal A}^*) = \mbox{vec}(B)\, , \end{equation} and use the Kronecker product $\otimes$, and its property $\mbox{vec}(PQR)=(R^T\otimes P)\mbox{vec}(Q)$, to express the matrix multiplications as the linear transformation \begin{equation} \label{eq:vec1-intro} ({\bar {\cal A}} \otimes {\cal A})\,\mbox{vec}( X ) = \mbox{vec}(B)\, . \end{equation} Thus, we can promote the sparsity of the sought image using $\ell_1$-minimization algorithms that are much faster than nuclear norm minimization ones. However, the dimension of the unknown $\mbox{vec}( X )$ in (\ref{eq:vec1-intro}) also increases quadratically with $K$, so this approach by itself would still be impractical unless $K$ is very small. Hence, we propose to use a {\em Noise Collector} to reduce the dimensionality of problem (\ref{eq:vec1-intro}). The {\em Noise Collector} was introduced in \cite{Moscoso20b} to eliminate the clutter in the recovered signals when the data are contaminated by additive noise. In this paper, we use the Noise Collector to absorb part of the signal instead. Specifically, we treat as noise the signal that corresponds to the $K^2 -K$ off-diagonal entries in the matrix $X$. Using the {\em Noise Collector} allows us to ignore these entries and construct a linear system with the same number of unknowns as the original problem (\ref{eq:invprob}) that uses linear data. As a consequence, a dimension reduction from $K^2$ to $K$ unknowns is achieved. The main result of this paper is Theorem \ref{d-reduce}, which says that under certain incoherence conditions on the matrix ${\cal A}$ we can find the support of an $M$-sparse signal exactly if the data is noise-free or the noise is low enough. Furthermore, Theorem \ref{d-reduce} shows that the level of sparsity $M$ that can be recovered increases from $O(\sqrt{N}/\sqrt{\ln N})$ to $O(N/\sqrt{\ln N})$ when quadratic cross correlation data are used instead of the linear ones. The numerical experiments included in this paper support the results of Theorem \ref{d-reduce}. They show that the support of a signal can be found exactly if the noise in the data is not too large, with almost no extra computational cost with respect to the original problem (\ref{eq:invprob}) that considers linear data with no correlations. Once the support has been found, a trivial second step allows us to find the signal, including its phases. The reconstruction is exact when there is no noise in the data, and the results are very satisfactory even for noisy data with low signal-to-noise ratios. That is, our numerical experiments suggest that the approach presented here is robust with respect to additive noise. Additional properties of this approach are that for any level of noise the solution has no false positives, and that the algorithm is parameter-free, so it does not require an estimation of the energy of the {\em off-diagonal signal} that we need to absorb, or of the level of noise in the data. The paper is organized as follows. In Section \ref{sec:passive}, we summarize the model used to generate the signals to be recovered, which in our case are images.
In Section \ref{sec:NC}, we present the theory that supports the proposed strategy for dimension reduction when correlated data are used to recover the signals. Section \ref{sec:algo} explains the algorithm for carrying out the inversion efficiently. Section \ref{sec:numerics} shows the numerical experiments. Section \ref{sec:conclusions} summarizes our conclusions. The proofs of the theorems are given in \ref{sec:proofs}. \section{Passive array imaging} \label{sec:passive} We consider processing of passive array signals where the object to be imaged is a set of point sources at positions ${ \vec{\vect z}}_{j}$ with (complex) amplitudes $\alpha_j$, $j=1,\dots,M$. The data used to image the object are collected at several sensors on an array; see Figure \ref{fig:setup}. The imaging system is characterized by the array aperture $a$, the distance $L$ to the sources, the bandwidth $B$ and the central wavelength $\lambda_0$ of the signals. \begin{figure}[htbp] \includegraphics[scale=1]{x/fig1.pdf} \caption{General setup for passive array imaging. The source at ${ \vec{\vect z}}_j$ emits a signal that is recorded at all array elements $\vec{\vect x}_r$, $r=1,\ldots,\textcolor{black}{N_r}$.} \label{fig:setup} \end{figure} The sources are located inside an image window IW discretized with a uniform grid of points $\vec{\vect y}_k$, $k=1,\ldots,K$. Thus, the signal to be recovered is the source vector \begin{equation} \label{eq:rhotilde} \vect {\tilde \rho}=[{\tilde \rho}_{1},\ldots,{\tilde \rho}_{K}]^{\intercal}\in\mathbb{C}^K\,, \end{equation} whose components ${\tilde \rho}_k$ correspond to the amplitudes of the $M$ sources at the grid points $\vec{\vect y}_k$, $k=1,\ldots,K$, with $K\gg M$. This vector has components ${\tilde\rho}_{k} = \alpha_j$ if $\vec{\vect y}_{k} = { \vec{\vect z}}_j$ for some $j=1,\ldots,M$, while the others are zero. Denoting by $G(\vec{\vect x},\vec{\vect y};\omega)$ the Green's function for the propagation of a wave of angular frequency $\omega$ from point $\vec{\vect y}$ to point $\vec{\vect x}$, we define the single-frequency Green's function vector that connects a point $\vec{\vect y}$ in the IW with all the sensors on the array located at points $\vec{\vect x}_r$, $r = 1,\ldots,\textcolor{black}{N_r}$, so $$ \vect g(\vec{\vect y};\omega)=[G(\vec{\vect x}_{1},\vec{\vect y};\omega), G(\vec{\vect x}_{2},\vec{\vect y};\omega),\ldots, G(\vec{\vect x}_{N_r},\vec{\vect y};\omega)]^{\intercal} \in \mathbb{C}^{\textcolor{black}{N_r}} \,. $$ In three dimensions, $ {\displaystyle G(\vec{\vect x},\vec{\vect y};\omega)= \frac{\exp\{i \omega|\vec{\vect x}-\vec{\vect y}|/c_0\}}{4\pi|\vec{\vect x}-\vec{\vect y}|} } $ if the medium is homogeneous. Hence, the signals of frequencies $\omega_l$ recorded at the sensor locations $\vec{\vect x}_r$ are $$b(\vec{\vect x}_r, \omega_l)=\sum_{j=1}^M\alpha_j G (\vec{\vect x}_r,{ \vec{\vect z}}_j; \omega_l)\, ,\quad r = 1,\ldots,\textcolor{black}{N_r}\,. $$ They form the single-frequency data vector $\vect b(\omega_l) = [ b(\vec{\vect x}_1, \omega_l), b(\vec{\vect x}_2, \omega_l),\dots,b(\vec{\vect x}_{N_r}, \omega_l)]^{\intercal}\in \mathbb{C}^{\textcolor{black}{N_r}}$.
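To make the data model concrete, the following minimal NumPy sketch synthesizes the single-frequency data vector $\vect b(\omega_l)$ for a homogeneous medium. It is only an illustration: the geometry, wave speed, frequency, and source amplitudes are hypothetical placeholders, not the parameters used in the numerical experiments below.
\begin{verbatim}
import numpy as np

c0 = 3.0e8                                 # wave speed (placeholder value)
omega = 2 * np.pi * 60.0e9                 # one angular frequency (placeholder)
# Receivers on a linear array and two point sources (positions in meters).
xr = np.stack([np.linspace(-0.25, 0.25, 21),
               np.zeros(21), np.zeros(21)], axis=1)
zs = np.array([[0.05, 0.0, 0.5], [-0.10, 0.0, 0.5]])
alpha = np.array([1.0, 0.5 + 0.5j])        # complex source amplitudes

def green(x, y):
    # Homogeneous 3D Green's function G(x, y; omega).
    r = np.linalg.norm(x - y)
    return np.exp(1j * omega * r / c0) / (4 * np.pi * r)

# Single-frequency data vector b(omega_l): superposition over the M sources.
b = np.array([sum(a * green(x, z) for a, z in zip(alpha, zs)) for x in xr])
\end{verbatim}
Stacking such vectors over the $N_f$ frequencies, with the corresponding normalized Green's function vectors as columns, yields the data vector and measurement matrix defined next.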
As several frequencies $\omega_l$, $l=1,\dots,N_f$, are used to recover (\ref{eq:rhotilde}), all the recorded data are stacked in the multi-frequency column data vector \begin{equation} \vect b = [ \vect b(\omega_1)^{\intercal},\vect b(\omega_2)^{\intercal},\dots, \vect b(\omega_{N_f})^{\intercal}]^{\intercal} \in \mathbb{C}^{\textcolor{black}{N}} \, , \mbox{with} \, \textcolor{black}{N=N_r N_f}\,. \label{eq:data} \end{equation} \subsection{The inverse problem with linear data} When the data (\ref{eq:data}) are available and reliable, one can form the linear system \begin{equation} {\cal A}\,\mbox{\boldmath$\rho$} = \mbox{\boldmath$b$} \label{eq:system} \end{equation} to recover (\ref{eq:rhotilde}). Here, $ {\cal A}$ is the $\textcolor{black}{N}\times K$ measurement matrix whose columns $\vect a_k$ are the multi-frequency Green's function vectors \begin{equation}\label{eq:ak} \vect a_k = \frac{1}{c_k}\, [ \vect g(\vec{\vect y}_k ; \omega_1)^{\intercal}, \vect g(\vec{\vect y}_k; \omega_2)^{\intercal}, \dots, \vect g(\vec{\vect y}_k; \omega_{N_f})^{\intercal} ]^{\intercal} \in \mathbb{C}^{\textcolor{black}{N}}\, , \end{equation} where $c_k$ are scalars that normalize these vectors to have $\ell_2$-norm one, and \begin{equation}\label{eq:rho} \mbox{\boldmath$\rho$}=\mbox{diag}(c_1, c_2,\dots,c_K)\,\vect {\tilde \rho}, \end{equation} where $\vect {\tilde \rho}$ is given by (\ref{eq:rhotilde}). Then, one can solve (\ref{eq:system}) for the unknown vector $\mbox{\boldmath$\rho$}$ using a number of $\ell_2$ and $\ell_1$ inversion methods to find the sought image. In general, $\ell_2$ methods are robust but the resulting resolution is low. On the other hand, $\ell_1$ methods provide higher resolution but they are much more sensitive to noise in the data. Hence, they cannot be used with poor quality data unless one carefully takes care of the noise. \subsection{The inverse problem with quadratic cross correlation data} In many instances, imaging with cross correlations helps to form better and more robust images. This is the case, for example, when one uses high frequency signals and has a low-budget measurement system with inexpensive sensors that are not able to resolve the signals well. Another situation is when the raw data (\ref{eq:data}) can be measured but it is more convenient to image with cross correlations because they help to mitigate the effects of the inhomogeneities of the medium between the sources and the sensors \cite{bakulin,Garnier15}. Assume that all the cross correlated data contained in the matrix \begin{equation} B = \mbox{\boldmath$b$}\,\mbox{\boldmath$b$}^*\in\mathbb{C}^{N\times N}\, \end{equation} are available for imaging. Then, one can consider the linear system \begin{equation} \label{eq:modelC} {\cal A} X {\cal A}^* = B\, , \end{equation} and seek the correlated image $X=\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*\in\mathbb{C}^{K\times K}$ that solves it. The unknown matrix $X$ is rank 1 and, hence, one possibility is to look for a low-rank matrix by using nuclear norm minimization, as was suggested for imaging with intensities only in \cite{CMP11,Candes13}. This is possible in theory, but it is infeasible when the problem is large because the number of unknowns grows quadratically and, therefore, the computational cost rapidly becomes prohibitive. For example, to form an image with $1000\times 1000$ pixels one would have to solve a system with $10^{12}$ unknowns. Instead, we suggest the following strategy. We propose to vectorize both sides of (\ref{eq:modelC}) so \begin{equation} \label{eq:vec0} \mbox{vec}({\cal A} X {\cal A}^*) = \mbox{vec}(B)\, , \end{equation} where $\mbox{vec}(\cdot)$ denotes the vectorization of a matrix formed by stacking its columns into a single column vector. Then, we use the Kronecker product $\otimes$, and its property $\mbox{vec}(PQR)=(R^T\otimes P)\mbox{vec}(Q)$, to express the matrix multiplications as the linear transformation \begin{equation} \label{eq:vec1} ({\bar {\cal A}} \otimes {\cal A})\,\mbox{vec}( X ) = \mbox{vec}(B)\, . \end{equation} With this formulation of the problem we can use an $\ell_1$ minimization algorithm to form the images, which is much faster than a nuclear norm minimization algorithm that needs to compute the SVD of the iterate matrices. However, with just this approach the main obstacle is not overcome, as the dimensionality still grows quadratically with the number of unknowns $K$. Hence, we propose a dimension reduction strategy that uses a {\em Noise Collector} \cite{Moscoso20b} to absorb a component of the data vector that does not provide extra information about the signal support. We point out that this component is not a Gaussian random vector as in \cite{Moscoso20b}, but a deterministic vector resulting from the off-diagonal terms of $X$ that are neglected.
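The Kronecker identity behind (\ref{eq:vec1}) is easy to check numerically. The following minimal NumPy sketch (with hypothetical small dimensions, and column-major vectorization matching the definition of $\mbox{vec}(\cdot)$ above) verifies that $\mbox{vec}({\cal A} X {\cal A}^*) = ({\bar {\cal A}} \otimes {\cal A})\,\mbox{vec}(X)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 6                                  # hypothetical small dimensions
A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
rho = rng.standard_normal(K) + 1j * rng.standard_normal(K)
X = np.outer(rho, rho.conj())                # X = rho rho^*

# vec(A X A^*) = (conj(A) kron A) vec(X), column-major vectorization.
lhs = (A @ X @ A.conj().T).flatten(order="F")
rhs = np.kron(A.conj(), A) @ X.flatten(order="F")
print(np.allclose(lhs, rhs))                 # True
\end{verbatim}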
\section{The Noise Collector and dimension reduction} \label{sec:NC} \subsection{The Noise Collector} \label{subsec:PlainNC} The {\em Noise Collector}~\cite{Moscoso20b} is a method to find the vector $ {\mbox{\boldmath$\chi$}} \in \mathbb{C}^{\cal K}$ in \begin{equation} \label{eq:nc_general} T \, \vect \chi = \vect d_0 + \vect e \,, \end{equation} from highly incomplete measurement data $\vect d = \vect d_0 + \vect e \in \mathbb{C}^{\cal N}$ possibly corrupted by noise $\vect e \in \mathbb{C}^{\cal N}$, where $ 1 \ll {\cal N} < {\cal K}$. Here, $T$ is a general measurement matrix of size ${\cal N}\times {\cal K}$, whose columns have unit length. The main results in~\cite{Moscoso20b} ensure that we can still recover the support of $\vect \chi$ when the data is noisy by looking at the support of $\vect \chi_{\tau}$ found as \begin{equation}\label{rho_tt} \left( \vect \chi_{\tau}, \vect \eta_{\tau} \right) = \arg\min_{ \small \vect \chi, \small \vect \eta} \left( \tau \| \vect \chi \|_{\ell_1} + \| \vect \eta \|_{\ell_1} \right), \hbox{ subject to } T \vect \chi + {\cal C} \vect \eta =\vect d, \end{equation} with an {\color{black} $O(1)$} no-phantom weight $\tau$, and a {\it Noise Collector} matrix $\mathcal{C} \in \mathbb{C}^{{\cal N}\times \Sigma}$ with $\Sigma = {\cal N}^\beta$, for $\beta>1$. If the noise $\vect e$ is Gaussian, then the columns of ${\cal C}$ can be chosen independently and at random on the unit sphere $\mathbb{S}^{{\cal N}-1}=\left\{ x \in \mathbb{R}^{{\cal N}} , \| x \|_{\ell_2} =1 \right\}$. The weight $\tau>1$ is chosen so it is expensive to approximate $\vect e$ with the columns of $T$, but it cannot be taken too large because then we lose the signal $\vect \chi$, which gets absorbed by the {\it Noise Collector} as well. Intuitively, $\tau$ is a measure of the rate at which the signal is lost as the noise increases. For practical purposes, $\tau$ is chosen as the minimal value for which $\vect \chi = 0$ when the data is pure noise, i.e., when $\vect d_0=0$. The key property is that the optimal value of $\tau$ does not depend on the level of noise and, therefore, it is chosen in advance, before the {\it Noise Collector} is used for a specific task.
We have the following result. \begin{theorem}\label{pnas}~\cite{Moscoso20b} Fix $\beta>1$, and draw $\Sigma={\cal N}^{\beta}$ columns to form the Noise Collector ${\cal C}$, independently, from the uniform distribution on $\mathbb{S}^{{\cal N}-1}$. Let $\vect \chi$ be an $M$-sparse solution of the noiseless system $T \vect \chi=\vect d_0$, and $\vect \chi_\tau$ the solution of~(\ref{rho_tt}) with $ \vect d= \vect d_0 + \vect e$. Denote the ratio of minimum to maximum significant values of $\vect \chi$ as \begin{equation} \label{def:gamma} \gamma= \min_{i \in \mbox{supp}(\vect \chi)} \frac{|\chi_i|}{ \| \vect \chi \|_{\ell_\infty}}. \end{equation} Assume that the columns of $T$ are incoherent, so that \begin{equation}\label{Mcond} |\langle \vect t_i, \vect t_j \rangle| \leq \frac{1}{3M} \mbox{ for all } i \mbox{ and } j. \end{equation} Then, for any $\kappa > 0$, there are constants $\tau=\tau(\kappa, \beta)$, $c_1=c_1(\kappa, \beta, \gamma)$, and ${\cal N}_0= {\cal N}_0(\kappa, \beta)$ such that, if the noise level satisfies \begin{equation}\label{esti} \max\left(1, \| \vect e \|_{\ell_2}\right) \leq c_1 \frac{\| \vect d_0\|_{\ell_2}^2 }{ \| \vect \chi \|_{\ell_1}} \sqrt{\frac{{\cal N}}{ \ln {\cal N}}}, \end{equation} then $\mbox{supp}(\vect \chi_{\tau}) = \mbox{supp}(\vect \chi)$ for all ${\cal N}>{\cal N}_0$ with probability $1-1/{\cal N}^{\kappa}$. \end{theorem} To gain a better understanding of this theorem, let us consider the case where $T$ is the identity matrix (the classical denoising problem) and all coefficients of $\vect d_0= \vect \chi$ are either 1 or 0. Then $\| \vect d_0 \|^2_{\ell_2} = \| \vect \chi \|_{\ell_1}=M$. In this case, an acceptable level of noise is \begin{equation}\label{thm3} \| \vect e \|_{\ell_2} \lesssim \| \vect d_0 \|_{\ell_2} \sqrt{ \frac{\cal N} {M \ln {\cal N}}} \sim \sqrt{ \frac{\cal N} {\ln {\cal N}}}. \end{equation} The estimate~(\ref{thm3}) implies that we can handle more noise as we increase the number of measurements. This holds for two reasons. Firstly, a typical noise vector $\vect e$ is almost orthogonal to the columns of $T$, so \begin{equation}\label{no-random} |\langle \vect t_i,\vect e \rangle | \leq c_0\sqrt{ \frac {\ln {\cal N}}{\cal N}} \| \vect e \|_{\ell_2} \end{equation} for some $c_0= c_0 (\kappa)$ with probability $1-1/{\cal N}^{\kappa}$. In particular, a typical noise vector $\vect e$ is almost orthogonal to the signal subspace $V$. More formally, suppose $V$ is the $M$-dimensional subspace spanned by the column vectors $\vect t_j$ with $j$ in the support of $\vect \chi$, and let $W=V^{\perp}$ be the orthogonal complement to $V$. Consider the orthogonal decomposition $\vect e =\vect e^{v} + \vect e^{w}$, such that $\vect e^{v}$ is in $V$ and $\vect e^{w}$ is in $W$. Then, \[ \| \vect e^{v} \|_{\ell_2} \lesssim \sqrt{\frac M{\cal N}} \| \vect e \|_{\ell_2} \] with high probability that tends to $1$ as ${\cal N} \to \infty$. In Theorem~\ref{pnas}, a quantitative estimate of this convergence is $1-1/{\cal N}^{\kappa}$. This means that if a signal is sparse, so $M \ll {\cal N}$, then we can recover it for very low signal-to-noise ratios. Secondly, and more importantly, if the columns of the noise collector $\mathcal{C}$ are also almost orthogonal to the signal subspace, then it is too expensive to approximate the signal $\vect d_0$ with the columns of $\mathcal{C}$ and, hence, we have to use the columns of the measurement matrix $T$.
If we draw the columns of $\mathcal{C}$, independently, from the uniform distribution on $\mathbb{S}^{{\cal N}-1}$, then they will be almost orthogonal to the signal subspace with high probability. It is again estimated as $1-1/{\cal N}^{\kappa}$ in Theorem~\ref{pnas}. Finally, the incoherence condition~(\ref{Mcond}) implies that it is too expensive to approximate the signal $\vect d_0$ with columns of $T$ that are not in the support of $\vect \chi$ and, hence, there are no false positives. In Theorem~\ref{pnas} we used randomness twice: the noise vector $\vect e$ was random and the columns of the noise collector were drawn at random. Note that in both cases randomness could be replaced by deterministic conditions requiring that $\vect e$ and the columns of $\mathcal{C}$ are almost orthogonal to the signal subspace. It is natural to assume that the noise vector $\vect e$ is a random variable and, as we explain in~\cite{Moscoso20b}, the columns of $\mathcal{C}$ are random because it is hard to construct a deterministic $\mathcal{C}$ that satisfies the almost orthogonality conditions. In the present work we still construct the matrix $\mathcal{C}$ randomly, but we sometimes treat the vector $\vect e$ as deterministic, as, for example, in our Theorem~\ref{thm-ortho}. Inspection of the proofs in~\cite{Moscoso20b} shows that the only condition on $\vect e$ we need to verify from Theorem~\ref{pnas} is~(\ref{no-random}). Thus, the next theorem is a deterministic reformulation of Theorem~\ref{pnas}. The proof is given in \ref{sec:proof2}. \begin{theorem}\label{thm-ortho} Assume conditions on $\vect \chi$, $T$, and ${\cal C}$ are as in Theorem~\ref{pnas} and define $\gamma$ as in (\ref{def:gamma}). Then, for any $\kappa > 0$, there are constants $\tau_0=\tau_0(\kappa, \beta)$, $c_0=c_0(\kappa, \beta)$, ${\cal N}_0= {\cal N}_0(\kappa, \beta, \gamma)$, and $\alpha=\alpha(c_0, \kappa, \beta)$ such that the following two claims hold. (i) If $\vect e$ satisfies~(\ref{no-random}) for all $\vect t_i$, $i \not\in \mbox{ supp }(\vect \chi) $; all columns of $T$ satisfy \begin{equation}\label{no-random-2} |\langle \vect t_i,\vect t_j \rangle | \leq c_0 \frac{\sqrt{\ln {\cal N}}}{\sqrt{\cal N}} \end{equation} for all $i$ and $j$; the sparsity $M$ is such that \begin{equation}\label{eq:M} M \leq \alpha \frac{ \sqrt{\cal N}}{\sqrt{\ln {\cal N}}}; \end{equation} and $\tau \geq \tau_0$, then $\mbox{supp}(\vect \chi_{\tau}) \subset \mbox{supp}(\vect \chi)$ with probability $1-1/{\cal N}^{\kappa}$. (ii) If, in addition, the noise is not large, so \begin{equation}\label{new-estimate1} \left| \langle \vect t_m, \vect e \rangle \right| \leq \min_{i \in \mbox{supp}(\vect \chi)} |\chi_i|/2 \end{equation} for all $\vect t_m$, $m \in \mbox{supp}(\vect \chi)$, and \begin{equation}\label{new-estimate2} \| \vect e\|_{\ell_2} \leq c_1 \| \vect \chi \|_{\ell_1} \end{equation} for some $c_1$, then $\mbox{supp}(\vect \chi) = \mbox{supp}(\vect \chi_{\tau})$ for all ${\cal N}>{\cal N}_0$ with probability $1-1/{\cal N}^{\kappa}$. \end{theorem} In contrast to Theorem~\ref{pnas}, we require in Theorem~\ref{thm-ortho} condition~(\ref{no-random}) to hold only for $\vect t_i$, $i \not\in \mbox{ supp }(\vect \chi)$, that is, for the columns of $T$ outside the support of $\vect \chi$. For the columns inside the support, $i \in \mbox{ supp }(\vect \chi)$, we relax condition~(\ref{no-random}) to condition~(\ref{new-estimate1}). Thus Theorem~\ref{thm-ortho} has slightly weaker assumptions than Theorem~\ref{pnas}.
For a random $\vect e$ this weakening is not essential, because one needs to know the support of $\vect \chi$ in advance. It turns out that for our $\vect e$ this weakening will become important (see Remark~\ref{new_e} at the end of~\ref{sec:proof3}). \subsection{Dimension reduction for quadratic cross correlation data} \label{subsec:dimReduce} The $N^2\times K^2$ linear problem (\ref{eq:vec1}) that uses quadratic cross correlation data is notoriously hard to solve due to its high dimensionality. Therefore, we propose the following strategy for robust dimensionality reduction. The idea is to treat the contribution of the off-diagonal elements of $X=\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*\in\mathbb{C}^{K\times K}$ as {\it noise} and, thus, use the {\it Noise Collector} to absorb it. Namely, we define \begin{equation} \label{eq:def0} \vect \chi =\mbox{diag}(X) = [|\rho_1|^2, |\rho_2|^2, \dots, |\rho_K|^2]^T\, , \end{equation} and re-write~(\ref{eq:vec1}) as \begin{equation} \label{eq:dimred} T \, {\mbox{\boldmath$\chi$}} + \mathcal{C} \, \mbox{\boldmath$\eta$}= \mbox{\boldmath$d$}\, , \end{equation} where {\color{black} we replace the off-diagonal elements by the Noise Collector term $\mathcal{C} \, \mbox{\boldmath$\eta$}$ and} \begin{equation} \label{eq:def1} T=({\bar {\cal A}} \otimes {\cal A})_{\vect \chi} \end{equation} contains only the $K$ columns of ${\bar {\cal A}} \otimes {\cal A}$ corresponding to ${\mbox{\boldmath$\chi$}}$; see the sketch below. Thus, the size of $\vect \chi$ is ${\cal K}$ and the size of $T$ is ${\cal N} \times {\cal K}$, with ${\cal K}=K$ and ${\cal N} =N^2$. In practice, the measurements may be subsampled as well, so the size of the system can be further reduced to ${\cal N} \times {\cal K}$, with ${\cal N} = O(N)$ and ${\cal K}=K$. Problem (\ref{eq:dimred}) can be understood as an exact linearization of the classical phase retrieval problem, where all the interference terms $\rho_i \rho^*_j$ for $i\ne j$ are absorbed in $\mathcal{C} \, \mbox{\boldmath$\eta$}$, with $\mbox{\boldmath$\eta$}$ being an unwanted vector considered to be noise in this formulation. In other words, the phase retrieval problem with $K$ unknowns has been transformed to the linear problem (\ref{eq:dimred}) that also has $K$ unknowns. Note, though, that in phase retrieval only autocorrelation measurements are considered, while in (\ref{eq:dimred}) we also use cross-correlation measurements. In the next theorem we use all the measurements $\vect d \in \mathbb{C}^{\cal N}$, so ${\cal N} =N^2$ in~(\ref{eq:dimred}). This is done for simplicity of presentation, but in practice ${\cal N} =O(N)$ measurements are enough. We will choose a solution of~(\ref{eq:dimred}) using~(\ref{rho_tt}). As in Theorems~\ref{pnas} and \ref{thm-ortho}, the vector $\mbox{\boldmath$\eta$}$ in (\ref{eq:dimred}) has ${\cal N}^{\beta}$ entries that do not have physical meaning. Its only purpose is to absorb the off-diagonal contributions in $\vect e = \mbox{\boldmath$d$} - T \vect \chi $. We point out that the magnitude of $\vect e$ is not small if $M\ge 2$. Indeed, the contribution of ${\mbox{\boldmath$\chi$}}=\mbox{diag}(X)$ to the data $\mbox{\boldmath$d$}$ is of order $M$, while the contribution of the off-diagonal terms of $X$ is of order $M^2$. Furthermore, the vector $\vect e$ is not independent of $\vect \chi$ anymore.
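To illustrate (\ref{eq:def1}) concretely: with column-major vectorization, the diagonal entry $X_{ii}$ multiplies column $\bar{\vect a}_i \otimes \vect a_i$ of ${\bar {\cal A}} \otimes {\cal A}$, so $T$ can be assembled column by column without ever forming the $N^2\times K^2$ Kronecker product. The following minimal NumPy sketch (hypothetical dimensions and a random signal with unit-modulus phases) builds $T$, the data $\vect d = \mbox{vec}(B)$, and the off-diagonal remainder $\vect e$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 8, 20, 3                       # hypothetical sizes; M-sparse signal
A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns a_k

rho = np.zeros(K, dtype=complex)
rho[rng.choice(K, M, replace=False)] = np.exp(2j * np.pi * rng.random(M))

# T has columns t_i = conj(a_i) kron a_i: the K "diagonal" columns of
# (Abar kron A), picked out without forming the full Kronecker product.
T = np.stack([np.kron(A[:, i].conj(), A[:, i]) for i in range(K)], axis=1)

chi = np.abs(rho) ** 2                   # chi = diag(X), the new unknown
b = A @ rho
d = np.kron(b.conj(), b)                 # vec(B) = vec(b b^*), column-major
e = d - T @ chi                          # off-diagonal part, absorbed by C eta
\end{verbatim}
The vector $\vect e$ built here is exactly the deterministic ``noise'' discussed above: it vanishes for $M=1$ and is of order $M^2$ otherwise.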
\begin{theorem}\label{d-reduce} Fix $|\rho_i|$. Suppose the phases $\rho_i/|\rho_i|$ are independent and uniformly distributed on the (complex) unit circle. Suppose $X$ is a solution of~(\ref{eq:vec1}), $ \vect \chi =\mbox{diag}(X)$ is $M$-sparse, and $T=({\bar {\cal A}} \otimes {\cal A})_{\vect \chi}: \mathbb{C}^{\cal K} \to \mathbb{C}^{\cal N}$, ${\cal K}=K$ and ${\cal N} = N^2$. Fix $\beta>1$, and draw $\Sigma={\cal N}^{\beta}$ columns for ${\cal C}$, independently, from the uniform distribution on $\mathbb{S}^{{\cal N}-1}$. Denote \begin{equation} \label{def:Delta} \Delta = \sqrt{N} \max_{i \neq j} |\langle \vect a_i, \vect a_j \rangle|, \end{equation} and define $\gamma$ as in (\ref{def:gamma}). Then, for any $\kappa > 0$, there are constants $\alpha=\alpha(\kappa, \gamma, \Delta)$, $\tau=\tau(\kappa, \beta)$, and ${\cal N}_0= {\cal N}_0(\kappa, \beta, \gamma, \Delta)$ such that the following holds. If $M \leq \alpha N/\sqrt{\ln N}$ and $ \vect \chi_{\tau}$ is the solution of~(\ref{rho_tt}), then $ \mbox{supp}(\vect \chi) = \mbox{supp}(\vect \chi_{\tau})$ for all ${\cal N}>{\cal N}_0$ with probability $1-1/{\cal N}^{\kappa}$. \end{theorem} {\color{black} The proof of Theorem~\ref{d-reduce} is given in \ref{sec:proof3}. In Theorem~\ref{d-reduce} the scaling for sparse recovery is $M \leq \alpha N/\sqrt{\ln N}$. This result is in good agreement with our numerical experiments; see Figure~\ref{fig_phase}. In order to obtain this scaling we introduced our probabilistic framework, assuming in Theorem~\ref{d-reduce} that the phases of the signals are random. The idea is that a vector with random phases better describes a typical signal in many applications. The dimension reduction, however, could be done without introducing the probabilistic framework. We state and prove a deterministic version of Theorem~\ref{d-reduce} in~\ref{sec:proof4} for completeness. In this case the scaling for sparse recovery is more conservative, $M \leq \alpha \sqrt{N}/\sqrt{\ln N}$, and it does not agree with our numerical experiments. } \section{Algorithmic implementation} \label{sec:algo} A key point of the proposed strategy is that the $M$-sparsest solution of (\ref{eq:dimred}) can be effectively found by solving the minimization problem \begin{eqnarray}\label{rho_t} \left( \mbox{\boldmath$\chi$}_{\tau}, \vect \eta_{\tau} \right) = \arg\min_{ \small \mbox{\boldmath$\chi$} , \small \vect \eta} \left( \tau \| \mbox{\boldmath$\chi$} \|_{\ell_1} + \| \vect \eta \|_{\ell_1} \right),\\ \hbox{ subject to }T \, {\mbox{\boldmath$\chi$}} + {\cal C} \vect \eta =\mbox{\boldmath$d$} , \nonumber \end{eqnarray} with an $O(1)$ no-phantom weight $\tau$. The main property of this approach is that if the matrix $T$ is incoherent enough, so its columns satisfy assumption (\ref{Mcond}) of Theorem \ref{pnas}, the $\ell_1$-norm minimal solution of (\ref{rho_t}) has a zero false discovery rate for any level of noise, with probability that tends to one as the dimension of the data ${\cal N}$ increases to infinity. More specifically, the relative level of noise that the {\em Noise Collector} can handle is of order $O(\sqrt{{\cal N}}/\sqrt{M \ln {\cal N}})$. Below this level of noise there are no false discoveries.
To find the minimizer in (\ref{rho_t}), we define the function \begin{eqnarray} F(\mbox{\boldmath$\chi$}, \vect \eta, \vect z) &=& \lambda\,(\tau \| \mbox{\boldmath$\chi$} \|_{\ell_1} + \| \vect \eta \|_{\ell_1}) \\ &+& \frac{1}{2} \| T \vect \chi + {\cal C} \vect \eta - \mbox{\boldmath$d$} \|^2_{\ell_2} + \langle \vect z, \mbox{\boldmath$d$} - T \mbox{\boldmath$\chi$} - {\cal C} \vect \eta \rangle \nonumber \end{eqnarray} for a no-phantom weight $\tau$, and determine the solution as \vspace{-0.2cm} \begin{equation}\label{min-max} \max_{\vect z} \min_{\mbox{\boldmath$\chi$},\vect \eta} F(\mbox{\boldmath$\chi$},\vect \eta,\vect z) . \end{equation} This strategy finds the minimum in (\ref{rho_t}) exactly for all values of the regularization parameter $\lambda$. Thus, the method is fully automated, meaning that it has no tuning parameters. To determine the exact extremum in (\ref{min-max}), we use the iterative soft thresholding algorithm GeLMA~\cite{Moscoso12}, which works as follows. Pick a value for the no-phantom weight $\tau$; for optimal results, calibrate $\tau$ to be the smallest value for which $\vect \chi=0$ when the algorithm is fed with pure noise. In our numerical experiments we use $\tau= 2$. Next, pick a value for the regularization parameter, for example $\lambda=1$, and choose step sizes $\Delta t_1< 2/\|[T \, | \, {\cal C}]\|^2$ and $\Delta t_2< \lambda/\|T\|$\footnote{Choosing two step sizes instead of the smaller one $\Delta t_1$ improves the convergence speed.}. Set $\mbox{\boldmath$\chi$}_0= \vect 0$, $\vect \eta_0=\vect 0$, $\vect z_0=\vect 0$, and iterate for $k\geq 0$: \begin{eqnarray} && \vect r = \mbox{\boldmath$d$} - T \,\mbox{\boldmath$\chi$}_k - {\cal C} \,\vect\eta_k\nonumber \, ,\\ &&\mbox{\boldmath$\chi$}_{k+1}=\mathcal{S}_{ \, \tau \, \lambda \Delta t_1} ( \mbox{\boldmath$\chi$}_k +\Delta t_1 \, T^*(\vect z_k+ \vect r)) \nonumber \, ,\\ &&\vect \eta_{k+1}=\mathcal{S}_{\lambda \Delta t_1} ( \vect\eta_k +\Delta t_1 \, {\cal C}^*(\vect z_k+ \vect r)) \nonumber \, ,\\ &&\vect z_{k+1} = \vect z_k + \Delta t_2 \, \vect r \label{eq:algo}\, , \end{eqnarray} where $\mathcal{S}_{r}(y_i)=\mbox{sign}(y_i)\max\{0,|y_i|-r\}$.
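A compact implementation of the iteration (\ref{eq:algo}) is sketched below in NumPy. This is an illustration rather than the authors' code: $T$ and ${\cal C}$ are taken as explicit dense arrays (in practice ${\cal C}$ is applied via FFTs, as described next), the step sizes are set just below their stated bounds, and the soft-thresholding operator $\mathcal{S}_r$ is applied component-wise with $\mbox{sign}(y)=y/|y|$ for complex $y$.
\begin{verbatim}
import numpy as np

def soft(y, r):
    # Complex soft thresholding S_r(y) = sign(y) * max(0, |y| - r).
    return y / np.maximum(np.abs(y), 1e-30) * np.maximum(0.0, np.abs(y) - r)

def gelma(T, C, d, tau=2.0, lam=1.0, n_iter=2000):
    # Sketch of the GeLMA iteration for the min-max problem above.
    TC = np.hstack([T, C])
    dt1 = 1.9 / np.linalg.norm(TC, 2) ** 2    # dt1 < 2 / ||[T | C]||^2
    dt2 = 0.9 * lam / np.linalg.norm(T, 2)    # dt2 < lam / ||T||
    chi = np.zeros(T.shape[1], dtype=complex)
    eta = np.zeros(C.shape[1], dtype=complex)
    z = np.zeros_like(d)
    for _ in range(n_iter):
        r = d - T @ chi - C @ eta             # residual
        chi = soft(chi + dt1 * (T.conj().T @ (z + r)), tau * lam * dt1)
        eta = soft(eta + dt1 * (C.conj().T @ (z + r)), lam * dt1)
        z = z + dt2 * r
    return chi, eta
\end{verbatim}
Feeding the sketch data that is pure noise and lowering $\tau$ until $\mbox{\boldmath$\chi$}\ne 0$ reproduces the calibration of the no-phantom weight described above.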
\subsection{The Noise Collector: construction and properties} To construct the {\em Noise Collector} matrix $\mathcal{C} \in \mathbb{C}^{{\cal N} \times {\cal N}^\beta}$ that satisfies the assumptions of Theorem \ref{pnas}, one could draw ${\cal N}^\beta$ normally distributed ${\cal N}$-dimensional vectors, normalized to unit length. Thus, the additional computational cost incurred for implementing the {\em Noise Collector} in (\ref{eq:algo}), due to the terms ${\cal C} \vect \eta_k$ and ${\cal C}^* (\vect z_k + \vect r)$, would be $O({\cal N}^{\beta+1})$, which is not very large as we use $\beta \approx 1.5$ in practice. The computational cost of (\ref{eq:algo}) without the {\em Noise Collector} mainly comes from the matrix vector multiplications $T \,\mbox{\boldmath$\chi$}_k$, which can be done in $O({\cal N}{\cal K})$ operations and, typically, ${\cal K} \gg {\cal N}$. To further reduce the additional computational time and memory requirements we use a different construction procedure that exploits the properties of circulant matrices. The idea is to draw instead a few normally distributed ${\cal N}$-dimensional vectors of length one, and construct from each one of them a circulant matrix of dimension ${\cal N} \times {\cal N}$. The columns of these matrices are still independent and uniformly distributed on $\mathbb{S}^{{\cal N}-1}$, so they satisfy the assumptions of Theorem \ref{pnas}. The full {\em Noise Collector} matrix is then formed by concatenating these circulant matrices together. More precisely, the {\em Noise Collector} construction is done in the following way. We draw ${\cal N}^{\beta-1}$ normally distributed ${\cal N}$-dimensional vectors, normalized to unit length. These are the generating vectors of the {\em Noise Collector}. To these vectors are associated ${\cal N}^{\beta-1}$ circulant matrices $\mathcal{C}_i \in \mathbb{C}^{{\cal N} \times {\cal N}}$, $i=1, \ldots, {\cal N}^{\beta-1}$, and the {\em Noise Collector} matrix is constructed by concatenation of these ${\cal N}^{\beta-1}$ matrices, so $$ {\cal C} = \left[ {\cal C}_1 \,|\, {\cal C}_2 \,|\, \mathcal{C}_3 \,|\, \ldots \,|\, \mathcal{C}_{{\cal N}^{\beta-1}} \right] \in \mathbb{C}^{{\cal N} \times {\cal N}^\beta}. $$ We point out that the {\em Noise Collector} matrix ${\cal C}$ is not stored; only the ${\cal N}^{\beta-1}$ generating vectors are saved in memory. On the other hand, the matrix vector multiplications ${\cal C} \vect \eta_k$ and ${\cal C}^* (\vect z_k + \vect r)$ in (\ref{eq:algo}) can be computed using these generating vectors and FFTs \cite{Gray06}. This makes the complexity associated with the {\em Noise Collector} $O({\cal N}^{\beta} \log({\cal N}))$. To explain this further, we recall briefly below how a matrix vector multiplication can be performed using the FFT for a circulant matrix. For a generating vector $\vect c= [c_0, c_1,\ldots,c_{{\cal N}-1}]$, the circulant matrix $\mathcal{C}_i$ takes the form $$\mathcal{C}_i = \left[ \begin{array}{llll} c_0 & c_{{\cal N}-1} & \ldots & c_1 \\ c_1 & c_{0} & \ldots & c_2 \\ \vdots & & \ddots & \vdots \\ c_{{\cal N} -1} & c_{{\cal N} -2} & \ldots & c_0 \\ \end{array} \right] \,. $$ This matrix can be diagonalized by the Discrete Fourier Transform (DFT) matrix, i.e., $$ \mathcal{C}_i = {\cal F} \Lambda {\cal F}^{-1}, $$ where ${\cal F}$ is the DFT matrix, ${\cal F}^{-1}$ is its inverse, and $\Lambda$ is a diagonal matrix such that $\Lambda = \mbox{diag}({\cal F} \vect c)$, where $\vect c$ is the generating vector. Thus, a matrix vector multiplication $\mathcal{C}_i \vect \eta$ is performed as follows: (i) compute $\vect {\hat \eta} = {\cal F}^{-1} \vect \eta$, the inverse DFT of $\vect \eta$, in ${\cal N} \log({\cal N})$ operations, (ii) compute the eigenvalues of $\mathcal{C}_i$ as the DFT of $\vect c$, and component-wise multiply the result with $\vect {\hat \eta}$ (this step can also be done in ${\cal N} \log({\cal N})$ operations), and (iii) compute the FFT of the vector resulting from step (ii) in, again, ${\cal N} \log({\cal N})$ operations. Consequently, the cost of performing the multiplication ${\cal C} \vect \eta_k$ is ${\cal N}^{\beta-1} {\cal N} \log({\cal N})= {\cal N}^{\beta} \log({\cal N})$. As the cost of finding the solution without the {\em Noise Collector} is $O({\cal N} {\cal K})$ due to the terms $T \,\mbox{\boldmath$\chi$}_k$, the additional cost due to the {\em Noise Collector} is negligible since ${\cal K} \gg {\cal N}^{\beta-1}\log({\cal N})$ because, typically, ${\cal K} \gg {\cal N}$ and $\beta \approx 1.5$.
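The FFT-based product in steps (i)--(iii) takes only a few lines in practice. The sketch below (hypothetical block size) applies one circulant block $\mathcal{C}_i$ and its adjoint using only the generating vector $\vect c$, and checks the result against the dense matrix. It uses the standard NumPy convention in which $\mathcal{C}_i \vect x$ is the inverse FFT of the component-wise product of FFTs, which is equivalent to the factorization above up to the placement of the DFT normalization.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 256                                   # hypothetical block size
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c /= np.linalg.norm(c)                    # generating vector, unit length

def circ_matvec(c, x):
    # C_i @ x for the circulant block with first column c, in O(n log n).
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

def circ_rmatvec(c, y):
    # C_i^* @ y (adjoint), needed for the C^*(z_k + r) step of the iteration.
    return np.fft.ifft(np.conj(np.fft.fft(c)) * np.fft.fft(y))

# Sanity check against the dense circulant matrix (column k = roll(c, k)).
Ci = np.stack([np.roll(c, k) for k in range(n)], axis=1)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(Ci @ x, circ_matvec(c, x)))            # True
print(np.allclose(Ci.conj().T @ x, circ_rmatvec(c, x)))  # True
\end{verbatim}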
\section{Numerical results} \label{sec:numerics} We consider processing of passive array signals. We seek to determine the positions ${ \vec{\vect z}}_{j}$ and the complex amplitudes $\alpha_j$ of $M$ point sources, $j=1,\dots,M$, from measurements of polychromatic signals on an array of receivers; see Figure \ref{fig:setup}. The source imaging problem is considered here for simplicity. The active array imaging problem can be cast under the same linear algebra framework even when multiple scattering is important \cite{CMP14}. The array consists of $N_r=21$ receivers located at $x_r=-\frac{a}{2}+\frac{r-1}{N_r-1} a$, $r=1,\ldots,N_r$, where $a=100\lambda$ is the array aperture. The imaging window (IW) is at range $L=100 \lambda$ from the array, and the bandwidth $B=f_0/3$ of the emitted pulse is $1/3$ of the central frequency $f_0$, so the resolution in range is $c/B=3 \lambda$ while in cross-range it is $\lambda L/a= \lambda$. We consider a high frequency microwave imaging regime with central frequency $f_0=60$GHz corresponding to $\lambda_0=5$mm. We make measurements for $N_f=21$ equally spaced frequencies spanning a bandwidth $B=20$GHz. The array aperture is $a=50$cm, and the distance from the array to the center of the IW is $L=50$cm. Then, the resolution is $\lambda_0 L/a=5$mm in the cross-range (direction parallel to the array) and $c_0/B=15$mm in range (direction of propagation). These parameters are typical in microwave scanning technology \cite{Laviada15}. We consider an IW with $K=1681$ pixels, which makes the dimension of $X=\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*$ equal to $K^2= 2825761$. The pixel dimensions, i.e., the resolution of the imaging system, are $5 {\rm mm} \times 15 {\rm mm}$. The total number of measurements is $N=N_r N_f=441$. Thus, we can form $N^2=194481$ cross-correlations over frequencies and locations. Let us first note that with these values for $N$ and $K$, which are in fact not large, we cannot form the full matrix $({\bar {\cal A}} \otimes {\cal A})$ so as to solve (\ref{eq:vec1}) for $\mbox{vec}( X )$ because of its huge dimensions. For this reason, we propose to reduce the dimensionality of the problem to $K$ unknowns. Thus, we recover $\mbox{diag}(X)$ only, and neglect all the off-diagonal terms of $X$ corresponding to the interference terms $\rho_k \rho^*_{k'}$ for $k\ne k'$. We treat their contributions to the cross-correlated data as {\em noise}, which is absorbed in a fictitious vector $\mbox{\boldmath$\eta$}$ using a {\em Noise Collector}. We stress that this {\em noise} is never small if $M\ge 2$, as its contribution to the cross-correlated data is of order $O(M^2)$, while the contribution of $\mbox{diag}(X)$ is only of order $O(M)$. \begin{figure} \begin{center} \includegraphics[scale=1]{x/fig2.pdf} \end{center} \caption{The true ${\mbox{\boldmath$\chi$}}=\mbox{diag}(X)=\mbox{diag}(\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*)$, i.e., the absolute values squared of the point source amplitudes. The dimension of the image is $K=1681$.} \label{fig0} \end{figure} In the following examples, we consider imaging of $M=8$ point sources; see Fig. \ref{fig0}. Instead of the $N^2$ cross-correlated data which are, in principle, available, we only use $\mathcal{N}=21 N$ cross-correlated data picked at random. This further reduces the dimensionality of the problem we solve. In Fig. \ref{fig1}, we present the results when the data used are noise-free.
The left column shows the results when we use the $\ell_1$ algorithm (\ref{eq:algo}); the top plot is the recovered image and the bottom plot is the recovered ${\mbox{\boldmath$\chi$}}=\mbox{diag}(X)=\mbox{diag}(\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*)$ vector. The support of the sources is exact but the amplitudes are not. If it is important for an application to recover the amplitudes with precision, one can consider in a second step the full problem (\ref{eq:vec1}) for $\mbox{vec}(X)$ with all the interference terms $\rho_k \rho^*_{k'}$ for $k\ne k'$, but restricted to the exact support found in the first step. If there is no noise in the data, this second step finds the exact values of the amplitudes efficiently using an $\ell_2$ minimization method; see the right column of Fig. \ref{fig1}. \begin{figure} \begin{center} \includegraphics[scale=1]{x/fig3.pdf} \end{center} \caption{Imaging $M=8$ sources using correlations and the NC. The dimension of the image is $K=1681$. The dimension of the linear data is $N=441$. The $\ell_1$ images are obtained using $21N$ of the $N^2$ correlation data. Noise free data.} \label{fig1} \end{figure} In Figs. \ref{fig2} and \ref{fig3} we consider the same configuration of sources but we add white Gaussian noise to the data. The resulting SNR values are $10$dB and $0$dB, respectively. In both cases, the solutions obtained in the first step look very similar to the one obtained in Fig. \ref{fig1} for noise free data. This is so because the noise in the data is dominated by the neglected interference terms. The actual effect of the additive noise is only seen in the second step, when we solve for $\mbox{vec}(X)$, restricted to the support, using an $\ell_2$ minimization method. Indeed, when the data are noisy we cannot recover the exact values of the amplitudes. Still, since an $\ell_2$ method is used on the correct support, the reconstructions are extremely robust and give very good results, even when the SNR is $0$dB. \begin{figure} \begin{center} \includegraphics[scale=1]{x/fig4.pdf} \end{center} \caption{Imaging $M=8$ sources using correlations and the NC. The dimension of the image is $K=1681$. The dimension of the linear data is $N=441$. The $\ell_1$ images are obtained using $21N$ of the $N^2$ correlation data. Data with 10dB SNR.} \label{fig2} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1]{x/fig5.pdf} \end{center} \caption{Imaging $M=8$ sources using correlations and the NC. The dimension of the image is $K=1681$. The dimension of the linear data is $N=441$. The $\ell_1$ images are obtained using $21N$ of the $N^2$ correlation data. Data with 0dB SNR.} \label{fig3} \end{figure} To illustrate the robustness of the reconstructions of the entire matrix $X=\mbox{\boldmath$\rho$}\,\mbox{\boldmath$\rho$}^*$, we also plot in Fig. \ref{fig4} the angle of $X_\tau$ compared to the angle of $X$ restricted to the support recovered during the first step. We get an exact reconstruction for noise-free data. The error in the reconstruction increases as the SNR decreases, but the results are very satisfactory even for the $0$dB SNR case. \begin{figure} \begin{center} \includegraphics[scale=1]{x/fig6.pdf} \end{center} \caption{Imaging $M=8$ sources using correlations and the NC. The dimension of the image is $K=1681$. The dimension of the linear data is $N=441$.
The angle of the components of $X_\tau$ compared to the angle of the components of the true $X$ restricted to the support.} \label{fig4} \end{figure} Again, the big advantage of the proposed $\ell_1$ minimization approach, which seeks only the components of $\mbox{diag}(X)$ and uses a {\em Noise Collector} to absorb the interference terms that are treated as noise, is that its cost is linear in the number of pixels $K$ instead of quadratic. This allows us to consider large scale problems. Moreover, as we observed in the results of Figs. \ref{fig1} to \ref{fig4}, the number of data $\mathcal{N}$ used to recover the images does not need to be $N^2$, but only a multiple of $N$. In Fig. \ref{fig_phase} we illustrate the performance of the proposed $\ell_1$ approach for different sparsity levels $M$ and data sizes $\mathcal{N}$. There is no additive noise added to the data in this figure. Success in recovering the true support of the unknown $\vect \chi$ corresponds to the value $1$ (yellow) and failure to $0$ (blue). The small phase transition zone (green) contains intermediate values. The red line is the estimate $\sqrt{\mathcal{N}}/(2\sqrt{\ln \mathcal{N}})$. These results are obtained by averaging over 10 realizations. \begin{figure}[htbp] \begin{center} \includegraphics[scale=1]{x/fig7.pdf} \end{center} \vspace*{-0.2cm} \caption{Algorithm performance for exact support recovery during the first step using $\ell_1$ and the Noise Collector. Success corresponds to the value $1$ (yellow) and failure to $0$ (blue). The small phase transition zone (green) contains intermediate values. The red line is the estimate $\sqrt{\mathcal{N}}/(2\sqrt{\ln \mathcal{N}})$. Ordinate and abscissa are the data used $\mathcal{N}$ and the sparsity $M$. } \label{fig_phase} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper, we consider the problem of sparse signal recovery from cross correlation measurements. The unknown in this case is the correlated matrix signal $X=\vect \rho \vect \rho^*$, whose dimension grows quadratically with the size $K$ of $\vect \rho$ and, hence, inversion becomes computationally infeasible as $K$ increases. To overcome this issue, we propose a novel dimension reduction approach. Specifically, we vectorize the problem and consider as unknowns only the diagonal terms $|\rho_i|^2$ of $X$, whose dimension is $K$ and which are related to the data through a linear transformation. The off-diagonal interference terms $\rho_i \rho^*_j$ for $i\ne j$ are treated as noise and are absorbed using the {\em Noise Collector} approach introduced in \cite{Moscoso20b}. This allows us to recover the signal exactly using efficient $\ell_1$-minimization algorithms. The cost of solving this dimension reduced problem is similar to the one using linear data. Furthermore, our numerical experiments show that the suggested approach is robust with respect to additive noise in the data. Finally, we point out that when using cross correlated data the maximum level of sparsity that can be recovered increases to $O(N/\sqrt{\ln N})$, instead of $O(\sqrt{N}/\sqrt{\ln N})$ for linear data. \section*{Acknowledgments} The work of M. Moscoso was partially supported by Spanish MICINN grant FIS2016-77892-R. The work of A. Novikov was partially supported by NSF DMS-1813943 and AFOSR FA9550-20-1-0026. The work of G. Papanicolaou was partially supported by AFOSR FA9550-18-1-0519. The work of C. Tsogka was partially supported by AFOSR FA9550-17-1-0238 and FA9550-18-1-0519.
{ "timestamp": "2020-10-15T02:20:14", "yymm": "2010", "arxiv_id": "2010.07012", "language": "en", "url": "https://arxiv.org/abs/2010.07012" }
\section{Introduction}\label{s:intro} Waves are a ubiquitous phenomenon in the atmosphere of a star, including that of the Sun. Of the different types of waves present in the solar atmosphere, internal gravity waves (IGWs) are perhaps the least studied of them all. Propagating at frequencies below those of the acoustic waves, IGWs are buoyancy-driven waves naturally occurring in a continuously stratified fluid. The solar atmosphere happens to be an ideal environment for their generation, sustenance and eventual dissipation. The main drivers of IGWs in the solar atmosphere are the convective updrafts penetrating into the stably stratified atmospheric layer above. IGWs have been detected in the solar atmosphere and are thought to contribute significantly to the total wave energy flux in the lower atmosphere \cite{2008ApJ...681L.125S}. Low-frequency ultraviolet (UV) brightness fluctuations observed in the internetwork region of the low solar chromosphere are believed to be caused by IGWs dissipating in the higher layers \cite{2003A&A...407..735R}. However, the strong effects of magnetic field orientation (attack angle) on the propagation of IGWs have been demonstrated by Newington \& Cally \cite{2010MNRAS.402..386N, 2011MNRAS.417.1162N}, and realistic numerical simulations have shown that the magnetic field present in the solar atmosphere influences the propagation of these waves and likely hinders them from propagating into the upper atmosphere \cite{2017ApJ...835..148V,2019ApJ...872..166V,2020A&A...633A.140V}. At this point it is still unclear what role the magnetic field plays in the propagation of IGWs into higher layers and how this fits with the brightness fluctuations seen in the chromosphere. While it is well known that the quiet solar atmosphere is permeated by magnetic field, the exact topology of the magnetic field depends on whether one examines a network or an internetwork region on the solar surface. The network field is mainly characterised by a strong vertical component that forms magnetic flux concentrations, which merge with other network field bundles outlining a supergranulation cell. The area surrounded by the supergranular network (i.e., the cell interior) is usually referred to as the internetwork, where the magnetic field topology is of more mixed orientation, often with a predominance of horizontal fields. The internetwork usually has fewer strong, vertical flux tubes and is mainly pervaded by weaker horizontal fields. For a recent review on the observational aspects of quiet solar magnetism, the reader is referred to Bellot Rubio et al.~\cite{2019LRSP...16....1B}. What does this mean for IGWs propagating in a magnetic environment as diverse as the solar atmosphere? The combined effect of buoyancy, gas pressure, and magnetic field results in a more complicated picture of magneto-acoustic-gravity waves than the one simply described by the acoustic-gravity wave spectrum, wherein the waves are clearly decoupled throughout the atmosphere. The complexity arises due to the additional anisotropy introduced by the magnetic field direction, leading to a multitude of wave modes that depend locally on the angle between the direction of gravitational acceleration and the magnetic field. Furthermore, the highly dynamic and inhomogeneous atmosphere of the Sun provides an environment in which changes happen rapidly from region to region, rendering it practically impossible to assign an average property to the atmosphere as seen by a passing wave.
This is all the more important for IGWs, as local changes may happen faster than the characteristic time scales of the waves. The effect of the magnetic field on the propagation of IGWs in a model solar atmosphere permeated by a uniform static magnetic field of different orientations was studied by Newington \& Cally \cite{2010MNRAS.402..386N, 2011MNRAS.417.1162N}. They showed that in regions of highly inclined magnetic fields, IGWs undergo mode conversion to upward propagating (field-guided) acoustic or Alfv\'{e}nic waves. They suggest that the upward propagating acoustic waves are then likely to dissipate by shock formation before reaching the upper chromosphere, while converted Alfv\'{e}nic waves can propagate to higher layers. In contrast to the case of highly inclined fields, the presence of a vertical field results in the reflection of these waves back into the lower atmosphere. Radiative damping effects do not play a significant role in the mode conversion higher up, but may be important in the surface layers where the IGWs are likely generated. In this paper, we address this problem with more realistic models of the solar magnetic environment that mimic the atmospheric conditions as closely as possible. Our aim is to better understand the link between the brightness fluctuations in the internetwork region and internal gravity waves, and the role of the magnetic field topology on their propagation characteristics. This paper extends the previous studies \cite{2017ApJ...835..148V,2019ApJ...872..166V,2020A&A...633A.140V} by comparing the propagation of IGWs in models with different magnetic field orientations. These environments are far from idealistic and have magnetic field properties that are more representative of the network- and internetwork-like regions of the solar surface. The outline of this paper is as follows: in \S\ref{s:method} we present the models that we use in this study and describe the data analysis method for investigating waves, in \S\ref{s:results} we present the results of our wave analysis, and in \S\ref{s:conclusion} we provide our conclusions. \section{Method}\label{s:method} We carry out full three-dimensional simulations of the near-surface layer of the Sun using the {CO$^{\rm 5}$BOLD} code \cite{2012JCoPh.231..919F}. The numerical code solves the time-dependent nonlinear MHD equations in a 3D Cartesian domain with an external gravity field, including non-grey radiative transfer. Approximately the lower half of the computational domain is in the convective layer, and the upper half extends into the stable atmospheric layer where the waves we are interested in propagate. The waves are studied by looking at characteristic properties revealed by their dispersion relation; a sketch of this diagnostic is given below. In the following, we describe the construction of the models and the data analysis in more detail.
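In practice, such dispersion diagrams are $k_h$-$\omega$ power spectra obtained by Fourier transforming a simulated velocity field over time and the horizontal coordinates. The following minimal sketch (hypothetical array names; it assumes a uniform horizontal grid and cadence, and is not the analysis code used here) illustrates the idea:
\begin{verbatim}
import numpy as np

def komega_power(vz, dt, dx, nbins=64):
    # k_h-omega power spectrum of vz(t, x, y) at one height, azimuthally
    # averaged over the horizontal wave vector direction.
    nt, nx, ny = vz.shape
    spec = np.fft.fftn(vz * np.hanning(nt)[:, None, None])  # taper in time
    power = np.abs(spec) ** 2
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)            # rad / s
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)               # rad / Mm
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kh = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))      # |k_h|
    kbins = np.linspace(0.0, kh.max(), nbins)
    idx = np.digitize(kh.ravel(), kbins)
    pk = np.full((nt, nbins - 1), np.nan)
    for i in range(1, nbins):
        sel = idx == i
        if sel.any():                                       # skip empty bins
            pk[:, i - 1] = power.reshape(nt, -1)[:, sel].mean(axis=1)
    return omega, kbins[1:], pk
\end{verbatim}
In such a diagram, power at frequencies below the Lamb line $\omega = c_s k_h$ and below the acoustic band is commonly attributed to the gravity-wave regime.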
The former two models were part of an earlier study \cite{2019ApJ...872..166V,2020A&A...633A.140V}. In this work, we present a new model that is constructed by introducing a horizontal magnetic field aligned with the $x$ coordinate direction. All models, including the field-free model, were calculated with the same MHD solver (HLL-MHD; see Freytag et al. \cite{2012JCoPh.231..919F}). All three simulations are carried out on a computational domain of 38.4$\times$38.4$\times$2.8~Mm$^{\rm 3}$, discretized onto a 480$\times$480$\times$120 mesh. The domain extends $\sim$1.3~Mm above and $\sim$1.5~Mm below the mean Rosseland optical depth $\tau_{R}$=1. For more details on the numerical setup, the reader is referred to Vigeesh et al. \cite{2019ApJ...872..166V}.

The main difference between the three models is in the initial condition. They also differ in the prescription of the top and bottom boundary conditions for the magnetic field. We use periodic boundary conditions on the side boundaries for all three models, which requires the fluid flow, radiation, and magnetic field components to be periodic in the lateral direction. The top boundary is open for flow and radiation, and the in-flowing material at the bottom boundary carries with it a constant specific entropy to maintain the radiative flux corresponding to an effective temperature ($T_{\rm{eff}}$) of $\sim$5770~K. For the vertical field model (Sun-v100), the top and bottom boundary conditions are such that the vertical component of the magnetic field is constant across the boundary and the transverse component drops to zero at the boundary. For the horizontal field case (Sun-h100), the vertical component of the magnetic field is fixed at its initial value of zero and a constant extrapolation applies to the transverse component across the boundary. The bottom boundary condition for the Sun-v100 model is the same as its top boundary condition. For the Sun-h100 case, however, the up-flowing material carries with it a horizontal magnetic flux density of 100~G when ascending into the computational domain.

The three simulations were run for 4~hr with a snapshot captured every 30~s, giving a total of 480 time points. Figure~\ref{fig:frtop} shows the outward radiative flux, normalized by $\sigma T_{\rm{eff}}^4$, as a function of time for the three simulations. It should be noted that this quantity is measured at a higher cadence. The vertical field model (Sun-v100) shows a slightly elevated effective temperature, presumably due to the radiative channeling effect of vertical flux concentrations \cite{2018A&A...614A..78S}. Independent of the different initial and boundary conditions, all models show a stable radiative output throughout the entire span of the simulation run. Having such similar, stable, and continuous time series allows us to perform Fourier analysis for studying waves and to compare the models on an equal footing.

\begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{fig1.pdf} \caption{Outward radiative flux, normalized by $\sigma T_{\rm{eff}}^4$, as a function of time for the non-magnetic model (Sun-v0) in gray, the vertical field model (Sun-v100) in blue, and the horizontal field model (Sun-h100) in red.} \label{fig:frtop} \end{figure}

Figure \ref{fig:temperature} shows the temperature at a reference level of $\tau_R=1$ and the absolute magnetic field strength at the same layer, for snapshots taken 3~hr after the start of the simulation for each of the three models.
The granular structure, represented here in grayscale, is similar in all the models. The total magnetic field strength is overplotted on the two magnetic models: Sun-v100 (center) and Sun-h100 (right). We note from these maps that most of the magnetic flux in Sun-v100 is preferentially located in the intergranular lanes, while the Sun-h100 model does not harbor as many vertical flux concentrations as the former case. The magnetic field in the Sun-h100 case is more scattered and shows mixed orientations at the $\tau_{R}=1$ reference layer.

\begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{fig2.pdf} \caption{Temperature at a reference level of $\tau_R=1$ (in gray) and absolute magnetic field strength at $\tau_{R}=1$ (in color with $\alpha$-blending to highlight the stronger fields) from the three models of solar magnetoconvection: Sun-v0 (left), Sun-v100 (middle), and Sun-h100 (right). The snapshots shown here are taken 3~hr after the start of the simulation.} \label{fig:temperature} \end{figure}

\begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{fig3a.pdf}\\ \includegraphics[width=1\linewidth]{fig3b.pdf} \caption{Visualization of magnetic fields (colored) in the {${x}$-${z}$} plane showing the predominant orientation of the field in the two magnetic simulations: Sun-v100 (top) and Sun-h100 (bottom). The white contours denote the $\tau_{R}$=1 surface and the red contour is the plasma-$\beta$=1 surface.} \label{fig:magnetic_fields} \end{figure}

The difference in topology between the two magnetic models can be seen by examining a vertical section through the computational domain. In Figure~\ref{fig:magnetic_fields} we show a representation of the magnetic field lines on an $x$-$z$ plane in a small region of the Sun-v100 (top panel) and Sun-h100 (bottom panel) models. The region is arbitrarily chosen to show a fully developed magnetic flux concentration in both of the models. Also shown overlaid on the plots are the contours of $\tau_{R}$=1 in white and plasma-$\beta$=1 in red, where plasma-$\beta$ is the ratio of the gas pressure to the magnetic pressure. The difference between the two models can be clearly discerned from the magnetic field orientation. The main difference occurs above the equipartition ($\beta$=1) surface, where the Sun-v100 model has predominantly vertical field components, while in the Sun-h100 model the field lines are predominantly horizontal, with vertical footpoints connecting them to the intergranular lanes. The plasma-$\beta$ contour in these two examples dips below the $\tau_{R}$=1 contour at the locations where the magnetic field strength surpasses 1~kG. Between the heights of average $\tau_{R}$=1 and average plasma-$\beta$=1, the two models are similar and show mixed orientations of the magnetic field, but with a preference for their initial direction, except of course at the locations of flux concentrations, where the field is vertically aligned.

In Figure~\ref{fig:flux_full_factor}, the left panel shows the ratio of the mean absolute vertical component of the magnetic field ($|B_{z}|$) to the mean absolute field strength $|\mathbf{B}|$ of both models, and the right panel shows the area fraction covered by $|\mathbf{B}|>1$\,kG field of both models as a function of height.
In the case of Sun-v100, for layers above $z=0$\,Mm, we see that a fraction of more than 0.75 of the magnetic flux density is vertically directed; this fraction increases with height above $z=0.5$\,Mm, reaching 100\% at the top boundary due to the boundary condition. The area fraction that is covered with $|\mathbf{B}|>1$\,kG in the Sun-v100 model is 0.1--0.2 in the range $z=0$ to 0.5\,Mm, decreasing with height further above. The initial increase is the result of the fanning out of magnetic flux concentrations in the lower part of the atmosphere. In the case of Sun-h100, less than 40\% of the magnetic flux density is vertically directed in the lower solar atmosphere, decreasing with height to 0\% at the top boundary as a result of the boundary condition. The area fraction that is covered with $|\mathbf{B}|>1$\,kG in the Sun-h100 model is less than 0.06.

\begin{figure}[!h] \centering\includegraphics[width=.47\linewidth]{fig4a.pdf}\hskip3ex\includegraphics[width=.47\linewidth]{fig4b.pdf} \caption{Ratio of the mean absolute vertical component of the magnetic field ($|B_{z}|$) to the mean absolute field strength $|\mathbf{B}|$ (left panel) and the area fraction covered by $|\mathbf{B}|>1$\,kG field (right panel) in the Sun-v100 (blue) and Sun-h100 (red) models as a function of height.} \label{fig:flux_full_factor} \end{figure}

Measurement of the quiet-Sun magnetic fields is particularly difficult in the upper solar atmosphere. Indirect methods using spectropolarimetric data reveal a prevalence of horizontal fields in the internetwork region \cite{2008ApJ...672.1237L}, with loop-like structures protruding from the surface \cite{2010ApJ...714L..94M}. These loops show a ``flattened'' geometry as they rise through the temperature minimum region. We believe that the internetwork region is pervaded with such loops and that, in the vicinity of the temperature minimum region, they tend to have a predominantly horizontal field topology due to their ``flattened'' nature. The network region, on the other hand, can be thought of as being dominated by the footpoints of the low-lying loops as well as by long-reaching vertical flux concentrations, giving it a predominantly vertical field topology.

\subsection{Wave diagnostics}
Once a sufficient duration of the simulation is complete, 4~hr in our case, we extract different physical quantities from the simulation to perform a spectral analysis. The spectral analysis is carried out by decomposing the physical quantities into their Fourier components in the horizontal directions and in time for each grid level in the $z$-direction. The Fourier decomposition is performed using the fast Fourier transform (FFT) algorithm, and the components are then represented on a {$k_{h}$-$\omega$} dispersion relation diagram for a given height by azimuthally averaging over the $k_{x}$-$k_{y}$ plane. The phase difference and coherence spectra of the velocities at two separate layers are then computed from the complex cross-spectrum, $\displaystyle S_{v_1,v_2}(\boldsymbol{k},\omega)= v_{1}(\boldsymbol{k},\omega)\cdot \overline{v_{2}(\boldsymbol{k},\omega)}$, where $v_{1}$ represents the velocity at layer 1 and likewise for $v_{2}$, and the overbar denotes the complex conjugate (see Eqs.~4-6 of Vigeesh et al. \cite{2017ApJ...835..148V}). The confidence interval for the phase and coherence measurements and the zero-coherence threshold are also computed as described in Vigeesh et al. \cite{2020A&A...633A.140V}.
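To make the diagnostic concrete, the following minimal sketch illustrates how such phase-difference and coherence spectra can be computed with Python/NumPy. It is our own illustration and not the analysis code used in this work; the array names, the synthetic input shapes, and the ring-binning resolution \texttt{n\_kh} are assumptions. The energy-flux co-spectrum introduced next is obtained analogously by replacing one velocity with the perturbed pressure.
\begin{verbatim}
import numpy as np

def kh_omega_spectra(v1, v2, dt, dx, n_kh=64):
    """Phase-difference and coherence spectra on the kh-omega diagram.

    v1, v2 : arrays of shape (nt, ny, nx) holding the vertical velocity
             at a lower and an upper layer, sampled with cadence dt and
             horizontal grid spacing dx.
    """
    nt, ny, nx = v1.shape
    V1 = np.fft.fftn(v1)                 # decompose in t, y and x
    V2 = np.fft.fftn(v2)
    S = V1 * np.conj(V2)                 # complex cross-spectrum

    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kh = np.hypot(*np.meshgrid(kx, ky))  # horizontal wavenumber, (ny, nx)
    edges = np.linspace(0.0, kh.max(), n_kh + 1)
    ring = np.clip(np.digitize(kh.ravel(), edges) - 1, 0, n_kh - 1)

    def azim_avg(A):                     # average over rings in kx-ky plane
        A = A.reshape(nt, -1)
        out = np.zeros((nt, n_kh), dtype=A.dtype)
        for b in range(n_kh):
            sel = ring == b
            if sel.any():
                out[:, b] = A[:, sel].mean(axis=1)
        return out

    S_avg = azim_avg(S)
    P1, P2 = azim_avg(np.abs(V1)**2), azim_avg(np.abs(V2)**2)
    phase = np.angle(S_avg)              # phase of lower minus upper layer
    coherence = np.abs(S_avg)**2 / (P1 * P2 + 1e-30)
    return phase, coherence              # each of shape (n_omega, n_kh)
\end{verbatim}
Note that the azimuthal averaging over the $k_x$-$k_y$ rings is also what makes the coherence estimate meaningful: without any averaging, the magnitude-squared coherence of a single Fourier component is identically one.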
To study the energy transport by these waves, we estimate the mechanical energy flux spectra by computing the perturbed pressure-velocity ($\Delta p - \boldsymbol{v}$) co-spectrum, represented on the $k_{h}$-$\omega$ dispersion relation diagram.

\section{Results}\label{s:results}
In this section, we present the results from our wave analysis that show the differences in wave propagation between the three models introduced in \S\ref{s:method:models}. We then discuss the similarities and differences in energy transport by IGWs in the different models. Lastly, we briefly mention the effect of the background flow on the dissipation of these waves.

\subsection{Wave generation and propagation}
We study the propagation characteristics of the waves by examining the velocity-velocity ($v$-$v$) phase difference spectra represented in the $k_{h}$-$\omega$ diagnostic diagram for a given pair of heights. Observed $k_{h}$-$\omega$ phase diagrams usually result from time series of simultaneously acquired Dopplergrams in a number of spectral lines formed at different heights. For the simulation, we can extract the velocity field for any pair of atmospheric layers and compute the phase-difference spectra between them. In Figure \ref{fig:phase_diff_100_120}, we show the phase difference between the vertical velocity measured at $z=100$\,km and at $z=120$\,km for the three models, represented on the $k_{h}$-$\omega$ diagnostic diagram. This near-surface region, where the waves are presumably excited, shows similar IGW emission characteristics in all models -- downward phase propagation (negative phase difference) in the region below the IGW propagation boundary (marked by the lower black solid and dashed curves). The corresponding energy transport is upward directed, indicating that these waves are indeed IGWs. In the rest of the paper, we use ``upward'' or ``downward'' propagation to refer to the direction of energy transport, which for gravity waves is opposite to the direction of phase propagation. An upward propagating gravity wave has downward propagating phase and therefore shows up with a negative phase difference according to our sign convention for phase difference spectra (phase of the lower layer minus phase of the upper layer).

\begin{figure}[!h] \centering\includegraphics[width=.98\linewidth]{fig5.pdf} \caption{$v_{z} - v_{z}$ phase and coherence spectra estimated between $z=100$\,km and $z=120$\,km for the (a) non-magnetic model, Sun-v0, (b) model with predominantly vertical fields, Sun-v100, and (c) model with predominantly horizontal fields, Sun-h100. The dashed black curves represent the propagation boundaries for the lower height, and the solid curves represent those for the upper height. The gray curve represents the dispersion relation of the surface gravity waves. The colors represent the phase difference and the shading shows the coherency. IGWs propagate in the region below the lower propagation boundaries.} \label{fig:phase_diff_100_120} \end{figure}

Although the near-surface regions show similar wave spectra, higher up in the atmosphere the three models show markedly different behaviour in the $k_{h}$-$\omega$ phase difference spectra for a given pair of heights. Rather than limiting ourselves to a single pair of heights, we look at how the phase difference and coherence vary as a function of the vertical distance between the measurement heights in the three models.
Figure~\ref{fig:phase_coherence} shows the phase difference and coherence as a function of travel distance relative to a reference height of $z$=0~Mm (thick curves) for the non-magnetic model (grey: Sun-v0), the vertical field model (blue: Sun-v100), and the horizontal field model (red: Sun-h100). This is computed by estimating the $v_{z}$-$v_{z}$ phase difference and coherence between $z$=0~Mm and every grid point along the $z$-direction. The plots correspond to a given Fourier component, here $k_{h}\approx 4$~rad~Mm$^{-1}$ and $\nu\approx 2$~mHz, which falls in the region of the $k_{h}$-$\omega$ diagram where the bulk of the IGWs occur in our simulation (location marked by the circle in Fig.~\ref{fig:phase_diff_100_120}). The 90\% confidence bounds for the phase difference and coherence estimates are shown by the shaded areas, and the zero-coherence threshold is shown by the dashed grey line. We also show the phase difference and coherence relative to a selection of other reference layers, viz. $z$ = 0.4, 0.6, 0.8 Mm. These plots clearly reveal the differences in the propagation properties of the waves between the models without, with vertical, and with horizontal magnetic fields.

\begin{figure}[!h] \centering\includegraphics[width=.47\linewidth]{fig6a.pdf}\hskip3ex\includegraphics[width=.47\linewidth]{fig6b.pdf} \caption{\textit{v}-\textit{v} phase difference (left) and coherence (right) between the reference layer, $z=0$\,Mm, and layers of constant geometrical scale for a given $k_h$ and $\omega$ (thick curves) for the three models. The leftmost subplot is a rescaled part of the small region in the panel marked by a black rectangle. The thin solid lines show three other reference layers. The 90\% confidence bounds for the estimates are represented by the shaded area. The zero-coherence threshold at a significance level of 0.05 is marked by the dashed line in the right plot.} \label{fig:phase_coherence} \end{figure}

Firstly, all three models show negative phase differences up to around 0.4~Mm, suggesting that the waves in these models propagate in the upward direction (downward propagating phase). We note here that beyond a travel distance of 0.4~Mm for the $z=0$~Mm reference layer, the phase difference measurements in both magnetic models become unreliable, as indicated by the break in the phase-difference spectra and by the uncertainty in the measurement shown by the shaded area. The coherence in all three models drops below the zero-coherence threshold above a height of 0.4~Mm. Particularly interesting to note is the difference in the phase and coherence spectra for the reference layer of $z=0.4$~Mm and the layers above. Here, we see that the non-magnetic model always shows negative phase differences, suggesting that the waves are upward propagating throughout the atmosphere. However, when comparing the two magnetic models, we clearly see the influence of the magnetic field orientation on the propagation of the IGWs. The model with a predominantly horizontal field reveals a similar behaviour to the non-magnetic case, showing negative phase differences throughout the atmosphere, meaning that the IGWs propagate upwards as if the magnetic fields were absent. For the model with a predominantly vertical field, on the other hand, this is not the case, as the phase difference here is positive for heights above $z=0.4$~Mm, revealing quite a contrasting behaviour compared to the non-magnetic model and the model with horizontal field. The IGWs in the vertical field case are seen to propagate downwards in the higher layers.
The change in behaviour around $z=0.4$~Mm can be understood by looking again at Figure~\ref{fig:magnetic_fields}. We see there that the magnetic fields in the two models are quite similar in the region between the average $\tau_{R}=1$ surface and the average $\beta=1$ surface, which lies around $z=0.5$~Mm. We believe that the similarity between the two simulations in terms of wave propagation below the plasma-$\beta$=1 surface is due to the fact that the dynamics of the waves there are dominated not by the magnetic fields but by the thermodynamic properties of the atmosphere. The thermal properties of the atmosphere in the near-surface layers are similar in the two cases of Sun-v100 and Sun-h100. In the low plasma-$\beta$ layers, however, the waves encounter a different magnetic field orientation in the two magnetic models as they propagate. This is an indication of the mode coupling that occurs in the presence of magnetic fields of different inclinations, as described by Newington \& Cally \cite{2010MNRAS.402..386N, 2011MNRAS.417.1162N}.

\subsection{Energy transport}
The difference in the wave propagation behaviour revealed by the phase and coherence analysis presented in the previous section highlights an important aspect of the energy transport of these waves in the solar atmosphere. In order to better understand the energy transport of the waves in the different models, we look at the mechanical energy flux computed in the $k_{h}$-$\omega$ diagnostic diagram for a given layer. In Figure~\ref{fig:energy_flux}, we show the mechanical flux as a function of height for the three models. Here, we show it for the same Fourier component ($k_{h}\approx 4$~rad~Mm$^{-1}$, $\nu\approx 2$~mHz) as for the phase/coherence analysis above. In addition, for comparison we also show the fluxes for a Fourier component that corresponds to high-frequency acoustic waves ($k_{h}\approx 4$~rad~Mm$^{-1}$, $\nu\approx 8$~mHz).

\begin{figure}[!h] \centering \includegraphics[width=0.5\linewidth]{fig7.pdf} \caption{Mechanical flux as a function of height for a given $k_h$ and $\omega$ for the different models.} \label{fig:energy_flux} \end{figure}

We see that for all three models, the mechanical energy flux is upward directed up to a height of $z\approx0.5$~Mm, confirming the propagation properties revealed by the phase-difference analysis. Above this height, we see that the non-magnetic model and the model with horizontal field show a very similar behaviour, exhibiting upward energy transport up to a height of $\sim$0.9~Mm, beyond which they tend to deviate. The reason for this behaviour is unclear. Nevertheless, the plot of the energy flux as a function of height suggests that the IGWs in a predominantly horizontal field environment may transport energy to higher layers, up to a height of $\sim$0.9~Mm. On the other hand, the model with vertical magnetic field shows a completely different behaviour above $z=0.5$~Mm. Here, we see that the energy transport is downward directed, which is what we expected from the phase-difference analysis. This suggests a markedly different energy transport behaviour of the IGWs in the presence of vertical magnetic fields compared to a non-magnetic atmosphere or an atmosphere with a predominantly horizontal field. But here again, the behaviour around $z=0.8$~Mm and above remains unclear.
We refrain from interpreting the results in the vicinity of the upper boundary ($\sim$2--3 pressure scale heights), i.e., above $z=1$\,Mm, where the coherence value drops below the zero-coherence threshold at the 95\% level anyway. The peculiar behaviour above the height of $z=0.8$\,Mm in all three models suggests that it might have a different origin than the magnetic field itself. The measured phase difference and the energy transport for the chosen Fourier component do not fit the behaviour of IGWs above $z=0.8$\,Mm, the basic signature of which is the opposite sign of the vertical components of the phase propagation and energy transport. We do not have an explanation for this behaviour, and these waves might not be IGWs anymore. This is particularly surprising for the Sun-v0 case, because the selected Fourier component falls in the IGW range and should satisfy the acoustic-gravity dispersion relation. The boundary condition seems to have an influence only in the grid cells very close to the boundary. Perhaps a linear analysis breaks down due to the large amplitude of the thermodynamic perturbations in these layers. The temperature perturbations in the upper atmosphere are particularly strong for the Sun-v0 case. Exploring this aspect is extremely important, but is beyond the scope of this work. A cause for the puzzling behaviour above the $z=0.8$~Mm level might be non-linear effects, as addressed in the following section.

\subsection{Wave-breaking}
When waves propagate in the presence of a background flow, their propagation properties are modified. A strong background flow may result in nonlinear breaking of IGWs, leading to enhanced energy dissipation. In the context of IGWs in the solar atmosphere, a stability condition was considered in \cite{1981ApJ...249..349M}, given by the ratio of the wave vorticity ($\zeta$) to the Brunt-V\"{a}is\"{a}l\"{a} frequency, $N$, defined as
\begin{align}\label{1.1} N^2 = g \left( \frac{1}{H_{\varrho}} - \frac{1}{\gamma H_{p}}\right), \end{align}
where $\gamma$ is the ratio of the specific heats ($c_{P}/c_{V}$), $H_{\varrho}$ is the density scale height, and $H_{p}$ is the pressure scale height of the atmosphere.

\begin{figure}[!h] \centering\includegraphics[width=.47\linewidth]{fig8a.pdf}\hskip3ex\includegraphics[width=.47\linewidth]{fig8b.pdf} \caption{Nonlinearity parameter ($\zeta/N$, left panel) and the Brunt-V\"{a}is\"{a}l\"{a} frequency ($N$, right panel) as a function of height in the three models. The peculiarities apparent in the Sun-h100 model above $z\approx1.2$\,Mm are due to the influence of the upper boundary condition.} \label{fig:nonlinear} \end{figure}

A non-zero vorticity may occur due to vortex flows or as a result of velocity shear. Here we consider the average fluid vorticity ($\mathbf{\nabla} \times \mathbf{v}$) as a proxy for the wave vorticity ($\zeta$) to estimate the strength of the background flow \cite{2017MmSAI..88...54V}. In Figure~\ref{fig:nonlinear}, the left panel shows the ratio $\zeta/N$ and the right panel shows the Brunt-V\"{a}is\"{a}l\"{a} frequency as a function of height for the three models. We see that for most of the atmosphere, $\zeta/N$ is below unity, suggesting that the models do not have background flows strong enough to influence the propagation of IGWs. However, the horizontal field model has a larger $\zeta/N$ at higher layers compared to both the vertical field and the non-magnetic case, suggesting that IGWs are more likely to break in the horizontal field model.
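Both ingredients of this diagnostic are straightforward to evaluate from horizontally averaged model columns. The following minimal sketch (our own Python illustration, not the analysis code of this work; the adiabatic $\gamma=5/3$, the two-dimensional $(x,z)$ section, and the array layout are assumptions) computes $N$ from Equation~\eqref{1.1} and the ratio $\zeta/N$.
\begin{verbatim}
import numpy as np

G_SUN = 274.0  # m s^-2, solar surface gravitational acceleration

def brunt_vaisala(rho_mean, p_mean, z, gamma=5.0/3.0, g=G_SUN):
    """N(z) from the density and pressure scale heights of the mean
    stratification (the N^2 formula above); negative N^2 is clipped
    to zero, so N is real only in stably stratified layers."""
    H_rho = -1.0 / np.gradient(np.log(rho_mean), z)  # density scale height
    H_p = -1.0 / np.gradient(np.log(p_mean), z)      # pressure scale height
    N2 = g * (1.0 / H_rho - 1.0 / (gamma * H_p))
    return np.sqrt(np.clip(N2, 0.0, None))

def nonlinearity_parameter(vx, vz, x, z, N):
    """zeta/N, using the magnitude of the mean fluid vorticity (here the
    y-component of curl v on an (x, z) section, arrays of shape (nz, nx))
    as a proxy for the wave vorticity zeta."""
    zeta = np.abs(np.gradient(vx, z, axis=0) - np.gradient(vz, x, axis=1))
    return zeta.mean(axis=1) / N    # horizontally averaged, per height
\end{verbatim}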
The Sun-h100 model shows a lower Brunt-V\"{a}is\"{a}l\"{a} frequency compared to the other models, as a result of its larger density and pressure scale heights. A more detailed analysis would require estimating the wave vorticity in the {$k_{h}$-$\omega$} diagnostic diagram and exploring the ratio for a given Fourier component, which is planned for the future. However, from the energy transport perspective, we have seen that horizontal magnetic fields allow IGWs to propagate upwards into the atmosphere. Combined with the fact that the horizontal field model also shows a larger non-linearity parameter, this suggests that wave breaking may be a possible scenario by which IGWs dissipate their energy in an atmosphere with predominantly horizontal fields. Vertical fields, on the other hand, appear to prohibit IGWs from propagating into higher layers and therefore prevent them from undergoing wave-breaking at chromospheric heights. The numerical solver used in this work is capable of capturing strong discontinuities related to wave breaking, but the coarse resolution of the simulation restricts us from exploring this aspect adequately. The models presented in this work are of low resolution ($\delta x, \delta y = 80$\,km) and do not capture strong vorticity, and therefore it is unclear whether wave breaking is indeed the reason for the decrease in energy flux that we observe. We see that the average temperature remains similar in the three models. Therefore, we cannot claim to have clear evidence of wave breaking or of the consequent temperature enhancement in our simulation. High-resolution simulations show significantly stronger flows and vorticity \cite{2017MmSAI..88...54V, 2020arXiv200705847F,2020A&A...639A.118C} and may therefore be better suited for the study of wave breaking.

\section{Conclusion}\label{s:conclusion}
The complex and highly dynamic atmosphere of the Sun harbours internal gravity waves. They have been detected in observations and have been clearly reproduced in realistic three-dimensional numerical simulations of the solar atmosphere. In this work, we show that the energy flux spectra in the lower photosphere are dominated by upward propagating IGWs, independent of the magnetic properties of the model. In the higher layers of the atmosphere, the average magnetic field orientation dictates the propagation properties of the IGWs. In vertical field models the IGWs are reflected downwards, whereas in models with no magnetic field or with a horizontal magnetic field they propagate freely into the upper layers. Our study demonstrates that IGWs behave similarly in the near-surface layers of internetwork-like regions, where the fields are predominantly horizontal, and of network-like regions, where the fields are predominantly vertical. In these low layers, the magnetic field does not play a significant role. However, the significant differences in behaviour between the two models in the upper layers, above $\approx$0.4~Mm, demonstrate the importance of the orientation of the magnetic field for the propagation of IGWs. The upward propagation of IGWs in internetwork-like regions may lead to wave-breaking and may therefore be related to the brightness fluctuations seen in UV passbands. This is in agreement with the interpretation of the observed UV brightness fluctuations as a signature of IGWs by Rutten et al. \cite{2003A&A...407..735R}.
On the other hand, where there are predominantly vertical field components, as in network regions, IGWs tend to be reflected back before reaching wave-breaking heights and may therefore play only a minor role in the heating of the upper layers. The peculiar behaviour of the waves above $z=0.8$\,Mm in our simulation is not understood and requires further study. A detailed comparison with observations should shed further light on the differences in wave propagation behaviour between internetwork and network regions.

\vskip6pt \enlargethispage{20pt}

\dataccess{The data this study is based on are too large to host on public repositories. However, parts of the data can be requested from the corresponding author, who will be happy to discuss ways to access the data.}
\aucontribute{GV designed and carried out the simulations, performed the data analysis and drafted the manuscript. All authors read and contributed to the discussion and helped polish the manuscript.}
\competing{The author(s) declare that they have no competing interests.}
\funding{This work was supported by the \emph{Deut\-sche For\-schungs\-ge\-mein\-schaft, DFG\/} grant RO 3010/3-1.}
\ack{We thank the anonymous referee for exceptionally detailed and constructive comments, which helped to significantly improve the paper.}
{ "timestamp": "2020-10-15T02:16:27", "yymm": "2010", "arxiv_id": "2010.06926", "language": "en", "url": "https://arxiv.org/abs/2010.06926" }
\section{\textbf{Introduction}} Let $H(\mathbb{D})$ be the space of all analytic functions on the open unit disk $\mathbb{D}$. Let $\varphi$ be an analytic self-map of $\mathbb{D}$. The map $\varphi$ induces a composition operator $C_\varphi$ on $H(\mathbb{D})$ defined by $C_\varphi f=f\circ \varphi$. We refer to \cite{cowen1, shapiro} for various aspects of the theory of composition operators on holomorphic function spaces. Also, $H^\infty(\mathbb{D})$ is the Banach space of bounded functions in $H(\mathbb{D})$ with the supremum norm.

Consider a continuous function $\nu:\mathbb{D}\rightarrow (0,\infty)$. For $0<p<\infty$, the weighted Bergman space $A^p_\nu(\mathbb{D})$ is the space of analytic functions $f$ on $\mathbb{D}$ for which \begin{equation*} \|f\|_{\nu,p}^p= \int_{\mathbb{D}} |f(z)|^p \nu(z)\,dA(z)<\infty. \end{equation*} For $p=\infty$, the weighted Bergman space of infinite order is defined as \begin{equation*} A^\infty_\nu(\mathbb{D})=H^\infty_\nu(\mathbb{D})=\{f\in H(\mathbb{D}): \ \sup_{z\in \mathbb{D}} |f(z)|\nu(z)<\infty\}. \end{equation*} Throughout this paper, $\nu$ has the following properties: \begin{itemize} \item [(i)] it is radial, that is, $\nu(z)=\nu(|z|)$, \item [(ii)] it is decreasing, \item [(iii)] $\lim_{|z|\rightarrow 1} \nu(z)=0$, and \item [(iv)] it satisfies the Lusky condition: $\inf_n \frac{\nu(1-2^{-n-1})}{\nu(1-2^{-n})}>0.$ \end{itemize} A well-known fact is that $\nu \simeq \tilde{\nu}$, where \begin{equation*} \tilde{\nu}(z) =\dfrac{1}{\sup\{|f(z)|: \ f\in H^\infty_\nu(\mathbb{D}), \|f\|\leq 1\}}. \end{equation*} By \cite[1.2 Properties, Part (iv)]{bier}, for every $z\in \mathbb{D}$ there is a function $f_z$ in the unit ball of $H^\infty_\nu(\mathbb{D})$ such that $f_z(z)=1/\tilde{\nu}(z)$. Bonet et al. \cite{bonet} showed that for such weights $\nu$, the composition operators generated by disk automorphisms are bounded on $H^\infty_\nu(\mathbb{D})$. Thus, by the Schwarz Lemma, every $C_\varphi$ is bounded on $H^\infty_\nu(\mathbb{D})$. For $\alpha>0$, we use the notation $H^\infty_\alpha(\mathbb{D})$ for the space $H^\infty_\nu(\mathbb{D})$ with the weight $\nu(z)=(1-|z|)^\alpha$.

Let $\{a_k\}$ be a discrete sequence of points in $\mathbb{D}$. If for any bounded sequence of complex numbers $\{b_k\}$ one can find a bounded analytic function $f$ on $\mathbb{D}$ such that $$f(a_k)=b_k, \qquad k=1,2,\ldots,$$ we say that $\{a_k\}$ is an interpolating sequence for $H^\infty(\mathbb{D})$. Nevanlinna \cite{nevanlinna} gave a necessary and sufficient condition for a sequence to be an interpolating sequence for $H^\infty(\mathbb{D})$. Carleson \cite{carleson1} presented another characterization of these sequences in $H^\infty(\mathbb{D})$ with a simpler condition. Using interpolating sequences, Carleson solved the corona conjecture for $H^\infty (\mathbb{D})$ in his celebrated paper \cite{carleson2}. Berndtsson \cite{bern} gave a sufficient condition for $H^\infty(\mathbb{B}_n)$-interpolating sequences.

Consider the sequence \begin{equation*} M_n(T)=\dfrac{1}{n}\sum_{j=1}^n T^j, \end{equation*} where $T^j$ is the $j$-th iterate of a continuous operator $T$ on a Banach space $X$. We say that $T$ is mean ergodic if $M_n(T)$ converges to a bounded operator acting on $X$ in the strong operator topology. Also, $T$ is called uniformly mean ergodic if $M_n(T)$ converges in the operator norm. M. J. Beltr\'{a}n-Meneu et al. \cite{beltran1} and E. Jord\'{a} and A.
Rodr\'{i}guez-Arenas \cite{jorda} characterized the (uniformly) mean ergodic composition operators on $H^\infty(\mathbb{D})$ and $H^\infty_\nu (\mathbb{D})$, respectively. In this paper, by using interpolating sequences, we give other necessary and sufficient conditions for the (uniform) mean ergodicity of composition operators on these spaces.

It is well known that $H^\infty(\mathbb{D})$ is a Grothendieck Dunford-Pettis (GDP) space. Lusky \cite{lusky} showed that $H^\infty_\nu(\mathbb{D})$ is isomorphic either to $H^\infty(\mathbb{D})$ or to $\ell^\infty$. Hence, $H^\infty_\nu(\mathbb{D})$ is also a GDP space. Lotz \cite{lotz} proved that if $X$ is a GDP space and in addition $\|T^n/n\|\rightarrow 0$, then $T$ is mean ergodic if and only if it is uniformly mean ergodic. M. J. Beltr\'{a}n-Meneu et al. \cite{beltran1} used the above results to characterize the (uniformly) mean ergodic composition operators on $H^\infty(\mathbb{D})$. Indeed, according to the result of Lotz, mean ergodicity and uniform mean ergodicity of composition operators on $H^\infty(\mathbb{D})$ are equivalent. Also, they gave another geometric condition equivalent to the mean ergodicity of composition operators. In this paper, we give a further equivalent condition: we show that $C_\varphi$ on $H^\infty(\mathbb{D})$ is mean ergodic if and only if $\lim_{n\rightarrow \infty} \| \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}\|_e=0$, where $\|\cdot\|_e$ denotes the essential norm of an operator. March T. Boedihardjo and William B. Johnson \cite{boe} investigated the mean ergodicity of operators in the Calkin algebra.

E. Jord\'{a} and A. Rodr\'{i}guez-Arenas \cite{jorda} characterized the (uniformly) mean ergodic composition operators on $H_\nu^\infty(\mathbb{D})$. They also showed that if $C_\varphi$ is mean ergodic then $\varphi$ has an interior Denjoy-Wolff point. In this paper, we give other geometric necessary and sufficient conditions for the (uniform) mean ergodicity of composition operators on $H^\infty_\nu(\mathbb{D})$ when $\varphi$ has an interior Denjoy-Wolff point. Also, we show that if $\varphi$ is not an elliptic disk automorphism, then $C_\varphi$ is uniformly mean ergodic on $H^\infty_\alpha(\mathbb{D})$ if and only if $M_n(C_\varphi)$ converges to $0$ in the Calkin algebra of $H^\infty_\alpha(\mathbb{D})$.

Throughout the paper, we write $A\lesssim B$ when there is a positive constant $C$ such that $A\leq CB$, and $A\simeq B$ when $A\lesssim B$ and $B\lesssim A$.

\section{\textbf{Main Results}} Let $\varphi$ be an analytic self-map of the unit disk, and let $\varphi_j$ denote its $j$-th iterate. The point $w$ in the following theorem is called the Denjoy-Wolff point of $\varphi$. \begin{theorem}[Denjoy-Wolff Theorem] If $\varphi$, which is neither the identity nor an elliptic automorphism of $\mathbb{D}$, is an analytic map of the unit disk into itself, then there is a point $w$ in $\overline{\mathbb{D}}$ such that $ \varphi_j\rightarrow w$ uniformly on the compact subsets of $\mathbb{D}$. \end{theorem} If $\varphi$ has an interior Denjoy-Wolff point, then we can conjugate it with a disk automorphism to obtain an analytic self-map of $\mathbb{D}$ that fixes the origin. Hence, if $\varphi$ has an interior Denjoy-Wolff point, we investigate only the case $\varphi(0)=0$. \begin{theorem} \label{t1} Consider $\varphi:\mathbb{D}\rightarrow \mathbb{D}$ analytic with $\varphi(0)=0$, which is neither the identity nor an elliptic automorphism of $\mathbb{D}$. Then the following statements are equivalent: \begin{itemize} \item[(i)] $C_\varphi$ is mean ergodic on $H^\infty_\nu(\mathbb{D})$.
\item[(ii)] $C_\varphi$ is uniformly mean ergodic on $H^\infty_\nu(\mathbb{D})$. \item[(iii)] For some (equivalently, every) sequence $(r_n)\subset (0,1)$ with $r_n\uparrow 1$, \begin{equation} \label{e1} \lim_{n\rightarrow \infty} \sup_{|z|>r_n} \dfrac{\nu(z)}{\nu(\varphi_n(z))}=0. \end{equation} \end{itemize} Moreover, if $\sup_{z\in \mathbb{D}} \nu(z)\leq 1$, then the above statements are also equivalent to: \begin{itemize} \item[(iv)] \begin{equation} \label{e2} \lim_{n\rightarrow \infty} \sup_{|z|<1}\dfrac{\nu(z)}{\nu(\varphi_n(z))} |\varphi_n(z)|=0. \end{equation} \end{itemize} \end{theorem}

\begin{proof} (i)$\Leftrightarrow$(ii): If $f$ is in the unit ball of $H^\infty_\nu(\mathbb{D})$, then \begin{equation*} |f(z)|\leq \dfrac{1}{\nu(z)}, \qquad \forall z\in \mathbb{D}. \end{equation*} Thus, by the monotonicity of $\nu$ and the Schwarz lemma, \begin{equation*} |f(\varphi_n (z))|\nu(z)\leq \dfrac{\nu(z)}{\nu(\varphi_n(z))}\leq 1, \qquad \forall z\in \mathbb{D}. \end{equation*} Hence, $\|C_{\varphi_n}/n\| \rightarrow 0$. Therefore, from \cite{lotz} it follows that (i) and (ii) are equivalent.

(ii)$\Rightarrow$(iii): Suppose that there is a sequence $(r_n)\subset (0,1)$ with $r_n\uparrow 1$ for which \eqref{e1} does not hold. Since for any $z\in \mathbb{D}$ the sequence $\{|\varphi_i(z)|\}$ is decreasing, there are an $r>0$ and a sequence $\{a_n\}\subset \mathbb{D}$ with $|a_n|\rightarrow 1$ and \begin{equation*} \dfrac{\nu(a_n)}{\nu(\varphi_n(a_n))}\geq r. \end{equation*} Since $|a_n|\rightarrow 1$ and $\lim_{|z|\rightarrow 1} \nu(z)=0$, there are some $s>0$ and $k\in \mathbb{N}$ such that $|\varphi_n(a_n)|\geq s$ for all $n\geq k$. Without loss of generality, we can let $k=1$. Thus, by the Schwarz lemma, for $1\leq i \leq n$, \begin{equation*} \dfrac{\nu(a_n)}{\nu(\varphi_i(a_n))}\geq\dfrac{\nu(a_n)}{\nu(\varphi_n(a_n))}\geq r, \qquad |\varphi_i(a_n)|\geq |\varphi_n(a_n)|\geq s. \end{equation*} Hence, by the proof of \cite[Lemma 13]{cowen2}, for every $n$, $\{\varphi_i(a_n)\}_{i=1}^n$ is an interpolating sequence. Thus, by \cite[Page 3]{bern} there are some $M>0$ and $\{f_{i,n}\}_{i=1}^n$ in $H^\infty(\mathbb{D})$ such that \begin{itemize} \item[(a)] $f_{i,n}(\varphi_i(a_n))=1$ and $f_{i,n}(\varphi_j(a_n))=0$, for $i\neq j$; \item[(b)] $\sup_{z\in \mathbb{D}} \sum_{i=1}^n |f_{i,n}(z)|\leq M$, for all $n\in \mathbb{N}$. \end{itemize} Now we define the functions \begin{equation*} f_n(z)=\sum_{i=1}^n z\overline{\varphi_i(a_n)}f_{i,n}(z)f_{\varphi_i(a_n)}(z). \end{equation*} We can easily see that $\sup_{z\in \mathbb{D}} |f_n(z)|\tilde{\nu}(z) \leq M$, so the $f_n$ are uniformly bounded in $H^\infty_\nu(\mathbb{D})$ (since $\nu\simeq\tilde{\nu}$), with $f_n(0)=0$ and \begin{equation*} f_n(\varphi_i(a_n))=\dfrac{|\varphi_i(a_n)|^2}{\tilde{\nu}(\varphi_i(a_n))}. \end{equation*} Therefore, \begin{equation*} \dfrac{1}{n}\sum_{i=1}^n f_n(\varphi_i(a_n))\nu(a_n) =\dfrac{1}{n}\sum_{i=1}^n \Big( \dfrac{\nu(a_n)}{\tilde{\nu}(\varphi_i(a_n))}\Big) |\varphi_i(a_n)|^2 \gtrsim s^2r. \end{equation*} But this contradicts the uniform mean ergodicity of $C_\varphi$.

(iii)$\Rightarrow$(ii): Let $\{r_n\}$ be a sequence in $(0,1)$ such that $r_n\uparrow 1$ and \eqref{e1} holds. Consider an arbitrary $\varepsilon>0$. By \eqref{e1}, there exists some $N$ such that \begin{equation*} \sup_{|z|>r_N} \dfrac{\nu(|z|)}{\nu(|\varphi_N(z)|)}<\varepsilon. \end{equation*} Now by the Schwarz lemma, \begin{equation*} \sup_{|z|>r_N} \dfrac{\nu(|z|)}{\nu(|\varphi_n(z)|)}<\varepsilon, \qquad \forall n\geq N. \end{equation*} Let $f$ be in the unit ball of $H^\infty_\nu(\mathbb{D})$.
Then \begin{eqnarray*} \sup_{|z|<1} \dfrac{1}{n} \sum_{i=1}^n |f\circ \varphi_i(z)-f(0)|\nu(|z|) &\leq& \sup_{|z|\leq r_N} \dfrac{1}{n} \sum_{i=1}^n |f\circ \varphi_i(z)-f(0)|\nu(|z|) \\ &+& \sup_{|z|>r_N} \dfrac{1}{n} \sum_{i=1}^n |f\circ \varphi_i(z)-f(0)|\nu(|z|) . \end{eqnarray*} Since $\varphi_i\rightarrow 0$ uniformly on $|z|\leq r_N$, we have $$\lim_{n\rightarrow\infty} \sup_{|z|\leq r_N} \dfrac{1}{n} \sum_{i=1}^n |f\circ \varphi_i(z)-f(0)|\nu(|z|) =0.$$ Now consider $|z|> r_N$. We know that \begin{equation*} |f\circ \varphi_i(z)-f(0)|\lesssim \dfrac{1}{\nu(|\varphi_i(z)|)}, \end{equation*} thus, for $|z|>r_N$ and $n\geq N$, \begin{eqnarray*} \dfrac{1}{n} \sum_{i=1}^n |f\circ \varphi_i(z)-f(0)|\nu(|z|) &\lesssim& \dfrac{1}{n} \sum_{i=1}^n \dfrac{\nu(|z|)}{\nu(|\varphi_i(z)|)}\\ &=& \dfrac{1}{n} \Big[ \sum_{i=1}^N \dfrac{\nu(|z|)}{\nu(|\varphi_i(z)|)} + \sum_{i=N+1}^n \dfrac{\nu(|z|)}{\nu(|\varphi_i(z)|)} \Big]\\ &\leq& \dfrac{N}{n}+\dfrac{n-N}{n}\varepsilon. \end{eqnarray*} Taking $n$ sufficiently large, we have $\frac{N}{n}<\varepsilon$. Since these bounds are uniform over the unit ball and $\varepsilon>0$ was arbitrary, (ii) holds.

(iii)$\Rightarrow$(iv): Consider an arbitrary $\varepsilon>0$. By \eqref{e1}, there exists some $N$ such that \begin{equation*} \sup_{|z|>r_N} \dfrac{\nu(z)}{\nu(\varphi_N(z))}<\varepsilon. \end{equation*} Now by the Schwarz lemma, \begin{equation*} \sup_{|z|>r_N} \dfrac{\nu(z)}{\nu(\varphi_n(z))} |\varphi_n(z)| <\varepsilon, \qquad \forall n\geq N. \end{equation*} Also, since $\varphi_i\rightarrow 0$ uniformly on compact subsets of $\mathbb{D}$, there exists some $M$ such that \begin{equation*} \sup_{|z|\leq r_N} \dfrac{\nu(z)}{\nu(\varphi_n(z))} |\varphi_n(z)| \leq \sup_{|z|\leq r_N} |\varphi_n(z)| <\varepsilon, \qquad \forall n\geq M. \end{equation*} Therefore, for $n\geq \max\{M,N\}$, \begin{equation*} \sup_{|z|<1}\dfrac{\nu(z)}{\nu(\varphi_n(z))} |\varphi_n(z)| <\varepsilon. \end{equation*} This gives the desired result.

(iv)$\Rightarrow$(ii): From the proof of \cite[Proposition 2]{bonet3}, we can conclude that \begin{equation*} \|C_{\varphi_i}-K_0\|\simeq \sup_{|z|<1}\max\{ \dfrac{\nu(z)}{\tilde{\nu}(\varphi_i(z))},\nu(z)\} |\varphi_i(z)|. \end{equation*} Since $\sup_{z\in \mathbb{D}} \nu(z)\leq 1$, the constant function $1$ lies in the unit ball of $H^\infty_\nu(\mathbb{D})$; hence for every $z\in \mathbb{D}$ there is some $g_z$ in the unit ball with $|g_z(z)|\geq 1$. Thus, $$\dfrac{1}{\tilde{\nu}(z)} =\sup\{|f(z)|: \ f\in H^\infty_\nu(\mathbb{D}), \|f\|\leq 1\}\geq 1,$$ for all $z\in \mathbb{D}$. Therefore, \begin{eqnarray*} \|\dfrac{1}{n} \sum_{i=1}^n C_{\varphi_i}-K_0\| &\leq& \dfrac{1}{n} \sum_{i=1}^n \|C_{\varphi_i}-K_0\|\\ &\simeq& \dfrac{1}{n} \sum_{i=1}^n \sup_{|z|<1}\max\{ \dfrac{\nu(z)}{\tilde{\nu}(\varphi_i(z))},\nu(z)\} |\varphi_i(z)|\\ &\leq& \dfrac{1}{n} \sum_{i=1}^n \sup_{|z|<1}\dfrac{\nu(z)}{\tilde{\nu}(\varphi_i(z))} |\varphi_i(z)|\\ &\lesssim& \dfrac{1}{n} \sum_{i=1}^n \sup_{|z|<1}\dfrac{\nu(z)}{\nu(\varphi_i(z))} |\varphi_i(z)|\rightarrow 0 \end{eqnarray*} as $n\rightarrow \infty$. \end{proof}

Condition \eqref{e1} yields the following examples.

\begin{example} Let $\alpha>0$ and let $k$ be a positive integer. If $\varphi(z)=z^k$, then $C_\varphi$ is uniformly mean ergodic on every $H^\infty_\alpha(\mathbb{D})$.
In fact, if $r_n=1-\frac{1}{k^n}$, then \eqref{e1} holds: since $\varphi_n(z)=z^{k^n}$ and the ratio below is decreasing in $r$, \begin{eqnarray*} \lim_{n\rightarrow \infty} \sup_{|z|>r_n} \dfrac{1-|z|}{1-|\varphi_n(z)|}&=& \lim_{n\rightarrow \infty} \sup_{r>r_n} \dfrac{1-r}{1-r^{k^n}} = \lim_{n\rightarrow \infty} \dfrac{1-(1-\frac{1}{k^n})}{1-(1-\frac{1}{k^n})^{k^n}} \\ &=& \lim_{n\rightarrow \infty} \dfrac{\frac{1}{k^n}}{1-(1-\frac{1}{k^n})^{k^n}} = \dfrac{\lim_{n\rightarrow \infty} \frac{1}{k^n}}{1-e^{-1}}=0. \end{eqnarray*} \end{example} \begin{example} If $\varphi:\mathbb{D}\rightarrow \mathbb{D}$ does not have a finite angular derivative at any point of the boundary, then $C_\varphi$ is uniformly mean ergodic on $H^\infty_\alpha(\mathbb{D})$. Indeed, \begin{equation*} \lim_{n\rightarrow \infty} \sup_{|z|>r_n} \dfrac{1-|z|}{1-|\varphi_n(z)|}\leq \limsup_{|z|\rightarrow 1} \dfrac{1-|z|}{1-|\varphi(z)|}=0. \end{equation*} \end{example}

In Theorems \ref{t3} and \ref{t2} we will use the following fact: let $X=H^\infty(\mathbb{D})$ or $H^\infty_\nu(\mathbb{D})$. If $\{f_n\}\subset X$ is a bounded sequence which converges to $0$ uniformly on compact subsets of $\mathbb{D}$, and $Q:X\rightarrow X$ is a compact operator, then $\{Q(f_n)\}$ converges to $0$ in the norm of $X$.

\begin{theorem} \label{t3} Consider $\alpha>0$ and $\varphi:\mathbb{D}\rightarrow \mathbb{D}$ analytic with $\varphi(0)=0$, which is neither the identity nor an elliptic automorphism of $\mathbb{D}$. Then (in addition to the conditions in Theorem \ref{t1}) another necessary and sufficient condition for the (uniform) mean ergodicity of $C_\varphi$ on $H^\infty_\alpha(\mathbb{D})$ is that $\lim_{n\rightarrow \infty} \| \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}\|_e=0$. \end{theorem}

\begin{proof} Let $C_\varphi$ be uniformly mean ergodic, so $ \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}\rightarrow K_0$, where $K_0$ is the point evaluation at $0$. We know that $K_0$ is a compact operator; therefore, \begin{equation*} \lim_{n\rightarrow \infty} \| \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}\|_e=\lim_{n\rightarrow \infty} \| \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}-K_0\|_e\leq \lim_{n\rightarrow \infty} \| \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}-K_0\|=0. \end{equation*} Conversely, we show that Part (iii) of Theorem \ref{t1} holds. Let $T_n=\frac{1}{n} \sum_{j=1}^n C_{\varphi_j}$ and let $\{f_m\}$ be a bounded sequence in $H^\infty_\alpha(\mathbb{D})$ that converges to $0$ uniformly on the compact subsets of $\mathbb{D}$. If $Q$ is a compact operator, then \begin{equation*} \|T_n+Q\|\gtrsim \lim_{m\rightarrow\infty} \|T_n f_m+Qf_m\|=\lim_{m\rightarrow\infty} \|T_n f_m\|. \end{equation*} Thus, \begin{equation} \label{e} \lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty} \|T_n f_m\|=0. \end{equation} Suppose Part (iii) of Theorem \ref{t1} does not hold. Consider a sequence $(r_n)\subset (0,1)$ with $r_n\uparrow 1$ and $r_n^n\uparrow 1$ (for example, $r_n=\frac{1+1/n}{e^{1/n}}$) for which \eqref{e1} fails. It suffices to construct a bounded sequence $\{f_m\}$ in $H^\infty_\alpha(\mathbb{D})$ that converges to $0$ uniformly on compact subsets of $\mathbb{D}$ but for which \eqref{e} does not hold. Again, there are an $r>0$ and a sequence $\{a_n\}\subset \mathbb{D}$ with $|a_n|\geq r_n$ and \begin{equation*} \dfrac{1-|a_n|}{1-|\varphi_n(a_n)|}\geq r.
\end{equation*} Thus, \begin{eqnarray*} r&\leq& \dfrac{1-|a_n|}{1-|\varphi_n(a_n)|}= \dfrac{(1-|a_n|)\sum_{j=0}^{n-1} |\varphi_n(a_n)|^j}{1-|\varphi_n(a_n)|^n}\\ &\leq& \dfrac{(1-|a_n|)\sum_{j=0}^{n-1} |a_n|^j}{1-|\varphi_n(a_n)|^n}=\dfrac{1-|a_n|^n}{1-|\varphi_n(a_n)|^n}. \end{eqnarray*} Since $|a_n|^n\rightarrow 1$, there are some $s>0$ and $k\in \mathbb{N}$ such that $|\varphi_n(a_n)|\geq|\varphi_n(a_n)|^n \geq s$ for all $n\geq k$. Without loss of generality, we can let $k=1$. Thus, by the Schwarz lemma, for $1\leq i \leq n$, \begin{equation*} \dfrac{1-|a_n|}{1-|\varphi_i(a_n)|}\geq \dfrac{1-|a_n|}{1-|\varphi_n(a_n)|}\geq r, \qquad |\varphi_i(a_n)|^n\geq |\varphi_n(a_n)|^n\geq s. \end{equation*} As in the proof of Theorem \ref{t1}, there are some $M>0$ and $\{f_{i,n}\}_{i=1}^n$ in $H^\infty(\mathbb{D})$ such that \begin{itemize} \item[(a)] $f_{i,n}(\varphi_i(a_n))=1$ and $f_{i,n}(\varphi_j(a_n))=0$, for $i\neq j$; \item[(b)] $\sup_{z\in \mathbb{D}} \sum_{i=1}^n |f_{i,n}(z)|\leq M$. \end{itemize} Now we construct the functions $f_m$ as follows: \begin{equation*} f_m(z)=\sum_{i=1}^m \dfrac{z^m\overline{\varphi_i(a_m)}^m f_{i,m}(z)}{(1-\overline{\varphi_i(a_m)}z)^\alpha}. \end{equation*} It is easy to check that $\sup_{z\in \mathbb{D}} |f_m(z)|(1-|z|)^\alpha \leq M$, that $f_m$ converges to $0$ uniformly on the compact subsets of $\mathbb{D}$, and that \begin{equation*} f_m(\varphi_i(a_m))=\dfrac{|\varphi_i(a_m)|^{2m}}{(1-|\varphi_i(a_m)|^2)^\alpha}. \end{equation*} Therefore, for $m\geq n$, \begin{equation*} \dfrac{1}{n}\sum_{i=1}^n f_m(\varphi_i(a_m))(1-|a_m|)^\alpha =\dfrac{1}{n}\sum_{i=1}^n \Big( \dfrac{1-|a_m|}{1-|\varphi_i(a_m)|^2}\Big)^\alpha |\varphi_i(a_m)|^{2m} \geq \dfrac{s^2r^\alpha}{2^\alpha}. \end{equation*} Thus, Equation \eqref{e} does not hold, which is a contradiction. \end{proof}

The equivalence of Parts (i), (ii), and (iii) of the following theorem has been shown in \cite[Theorem 3.3]{beltran1}. Here we add Part (iv).

\begin{theorem} \label{t2} Consider $\varphi:\mathbb{D}\rightarrow \mathbb{D}$ analytic with $\varphi(0)=0$, which is neither the identity nor an elliptic automorphism of $\mathbb{D}$. Then the following statements are equivalent: \begin{itemize} \item[(i)] $C_\varphi$ is mean ergodic on $H^\infty (\mathbb{D})$. \item[(ii)] $C_\varphi$ is uniformly mean ergodic on $H^\infty (\mathbb{D})$. \item[(iii)] $\lim_{n\rightarrow \infty} \|\varphi_n\|_{\infty}=0.$ \item[(iv)] $\lim_{n\rightarrow \infty} \| \frac{1}{n} \sum_{j=1}^n C_{\varphi_j}\|_e=0.$ \end{itemize} \end{theorem}

\begin{proof} Let $T_n=\frac{1}{n} \sum_{j=1}^n C_{\varphi_j}$ and let $\{f_m\}$ be a bounded sequence in $H^\infty(\mathbb{D})$ that converges to $0$ uniformly on the compact subsets of $\mathbb{D}$. Again, we must have \begin{equation}\label{e5} \lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty} \|T_n f_m\|_\infty=0. \end{equation} Suppose $\|\varphi_n\|_\infty\nrightarrow 0$; then $\|\varphi_n\|_\infty=1$ for all $n$. We construct a bounded sequence $\{f_m\}$ in $H^\infty(\mathbb{D})$ that converges to $0$ uniformly on the compact subsets of $\mathbb{D}$ but for which \eqref{e5} does not hold. There are some $r>0$ and $\{a_n\}\subset \mathbb{D}$ such that $|\varphi_n(a_n)|^{2n}>r$. Hence, by the Schwarz lemma, \begin{equation*} |\varphi_j(a_n)|^{2n}>r \quad\text{and}\quad |\varphi_j(a_n)|>r, \qquad 1\leq j\leq n. \end{equation*} Thus, there is a bounded sequence $\{g_m\}$ in $H^\infty(\mathbb{D})$ with \begin{equation*} g_m(\varphi_i(a_m))=\overline{\varphi_i(a_m)}^m, \qquad 1\leq i\leq m.
\end{equation*} Consider the functions $f_m(z)=z^mg_m(z)$. Then $\{f_m\}$ is a bounded sequence in $H^\infty(\mathbb{D})$ which converges to $0$ uniformly on the compact subsets of $\mathbb{D}$. Also, for $m\geq n$, \begin{equation*} \dfrac{1}{n}\sum_{i=1}^n f_m(\varphi_i(a_m))=\dfrac{1}{n}\sum_{i=1}^n |\varphi_i(a_m)|^{2m} \geq r. \end{equation*} This is a contradiction. \end{proof}

\subsection*{Boundary Denjoy-Wolff point} In \cite[Theorem 3.6]{beltran1} and \cite[Theorem 3.8]{jorda}, it has been shown that if $\varphi$ has a boundary Denjoy-Wolff point, then $C_\varphi$ is not mean ergodic on $H^\infty (\mathbb{D})$ and $H^\infty_\nu (\mathbb{D})$, respectively.

\subsection*{Disk automorphisms} Let $\varphi$ be an automorphism of the disk. If $\varphi$ has a boundary Denjoy-Wolff point, then by the preceding part, $C_\varphi$ fails to be mean ergodic on $H^\infty(\mathbb{D})$ and $H^\infty_\nu(\mathbb{D})$. If $\varphi$ is an elliptic disk automorphism, then there are some $\lambda\in \partial\mathbb{D}$ and some disk automorphism $\psi$ such that $\psi \circ \varphi\circ \psi^{-1}(z)=\lambda z$. In this case, Equation \eqref{e1} does not hold. However, \cite[Proposition 18]{wolf1} and \cite[Theorem 3.8]{jorda} imply that $C_\varphi$ is (uniformly) mean ergodic on $H^\infty_\nu(\mathbb{D})$ if and only if $\lambda$ is a root of unity. Also, \cite[Theorem 2.2]{beltran1} gives a similar result on $H^\infty(\mathbb{D})$.
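As a quick numerical sanity check of the computation in the first example above (our own illustration, not part of the original results), the supremum in \eqref{e1} for $\varphi(z)=z^k$ with $r_n=1-k^{-n}$ can be evaluated directly; the short Python sketch below confirms that it is attained as $r\rightarrow r_n$ and decays like $k^{-n}/(1-e^{-1})$.
\begin{verbatim}
import numpy as np

# Numerical check of condition (e1) for phi(z) = z^k, where the n-th
# iterate is phi_n(z) = z^(k^n) and r_n = 1 - k^(-n).
k = 2
for n in range(1, 11):
    m = k**n                                   # phi_n(z) = z^m
    r = np.linspace(1.0 - 1.0/m, 1.0, 10_000, endpoint=False)
    sup = np.max((1.0 - r) / (1.0 - r**m))     # sup over |z| > r_n
    bound = (1.0/m) / (1.0 - np.exp(-1.0))     # analytic asymptotic value
    print(n, sup, bound)                       # sup approaches bound -> 0
\end{verbatim}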
{ "timestamp": "2020-12-15T02:34:51", "yymm": "2010", "arxiv_id": "2010.06951", "language": "en", "url": "https://arxiv.org/abs/2010.06951" }
\section{Introduction} \label{sec:org6bf1d50} Variable selection is an important aspect of statistical and predictive modelling workflows, for example, when understanding a model's predictions is important, or when there is a cost associated with collecting new data. From the perspective of predictive performance, one goal of variable selection is to find the smallest subset of variables in a dataset that yields predictive performance comparable to the full model containing all the available variables. In this context, we assume that there might be variables with true non-zero coefficients that we cannot properly detect due to scarce data or the presence of a highly complex correlation structure.

In this paper, we substantially generalize projection predictive inference to perform variable selection and model structure selection in generalized linear multilevel models (GLMMs) \citep{mcculloch_GLMMs,Gelman_2013} and generalized additive multilevel models (GAMMs) \citep{Hastie_1986,Verbyla_1999}. Both types of models are widely used across the quantitative sciences, for instance, in the social and political sciences (e.g., poll or election data whose measurements are organized in regions or districts with multiple levels), or in the physical sciences (e.g., meteorological or medical data).

Projection predictive inference \citep{piironen_projective_2018} is a general Bayesian decision theoretic framework that separates model estimation from decision making. Given a reference model on the basis of all variables, it aims at replacing the reference posterior \(p(\lambda_{*} \mid {\cal D})\) with a constrained projection \(q_{\bot}(\lambda)\). This projection is solved so that its predictions are as close as possible to the reference model's predictions. The uncertainties in the reference model related to the excluded model parts are also projected and thus partially retained in the projection. In the context of variable selection, one typically constrains the projection to a smaller subset of variables, where the excluded variables have their coefficients fixed at zero. The projection procedure then sequentially projects the posterior onto an incrementally growing subspace, until all the variables have entered the projection. At each step, the method selects the variable that most decreases the Kullback-Leibler (KL) divergence from the reference model's predictive distribution to that of the projection model, a procedure known as forward search. This forms a \emph{solution path} for the variables entering the projection. This approach has been shown to provide better performance than state-of-the-art competitors \citep{piironen_comparison_2017,piironen_projective_2018,pavone_2019,piironen_projection_2016}. \citet{piironen_comparison_2017} demonstrate that, when using the projection approach, overfitting in the model space search is very small compared to other stepwise procedures and that, even in huge model spaces, the selected submodel has predictive performance similar to the reference model.

Previously, projection predictive inference has been used to perform variable selection only in generalized linear models (GLMs) \citep{piironen_projective_2018} and Gaussian processes (GPs) \citep{piironen_projection_2016}. However, the existing projection solutions do not directly translate to GLMMs or GAMMs because, without further restrictions, the projection is not identifiable for these models \citep{Bickel_1977}, that is, there is not a unique solution to the projection.
In this paper we extend projection predictive inference to GLMMs and GAMMs. In Figure~\ref{fig:glm_gam_glmm_gamm} we showcase a broader picture of the different types of models that are now supported in our framework, ranging from basic GLMs to very complex GAMMs. Alongside, we show example equations for these kinds of models and their correspondence to \texttt{R} \texttt{formula} syntax, which is an easy way of expressing complex models.

\begin{figure} \centering \includegraphics[width=1.\textwidth]{glm_glmm_gam_gamm.pdf} \caption{Different types of models supported by \texttt{projpred} now (previously only generalized linear models) and their relationships. We showcase math and \texttt{R} \texttt{formula} correspondence in color-coded terms.} \label{fig:glm_gam_glmm_gamm} \end{figure}

Specifically, our contributions include: \begin{itemize} \item Discussing the identifiability issue for projecting onto GLMMs and GAMMs. \item Extending projection predictive inference to support GLMMs and GAMMs by performing a Laplace approximation to the marginal likelihood of the projection. \item Performing extensive simulations and real data experiments that validate the working of our method. \item Implementing our proposal in the open source \texttt{projpred} \texttt{R} package for projection predictive inference \citep{projpred_package}. \end{itemize}

\begin{figure} \centering \includegraphics[width=\textwidth]{projpred_workflow_diagram.pdf} \caption{Projection predictive variable and structure selection workflow for an illustrative example on a \texttt{BikeSharing} data model.} \label{fig:projpred_workflow} \end{figure}

To give an illustration of our developed method's application, consider the \texttt{BikeSharing} data. These data contain the hourly and daily counts of rental bikes between 2011 and 2012 in the Capital Bikeshare system in Washington, D.C., with the corresponding weather and season information. The main variables included are \texttt{month}, \texttt{season}, \texttt{weather}, \texttt{temperature}, \texttt{humidity} and \texttt{windspeed}. In Figure~\ref{fig:projpred_workflow} we show the full projection predictive variable and structure selection for an illustrative reference model example for these data. A priori we assume all continuous variables are relevant for predictions and that they may interact with all categorical variables. For illustrative purposes we have built a simple model that contains different types of terms for only a subset of all the included variables, namely \texttt{hum}, \texttt{temp}, and \texttt{month}. We leave the remaining variables for a more in-depth analysis in the experiments in Section~\ref{sec:real_data_experiments}. Our approach finds a projection model with simplified structure whose predictive performance is comparable to that of the reference model.

\section{Related methods} \label{sec:org70ab7b4} For GLMs, variable selection has been approached from different perspectives. Some methods \citep{breiman_garrote_1995,tibshirani_lasso_1996,fan_nonconcave_2001,Zou_2005,Candes_2007} propose to deal with it by solving a penalized maximum likelihood formulation that enforces sparse solutions while at the same time trying to select a subset of relevant variables (e.g., the LASSO). These approaches suffer from confounding the estimation and selection of variables, often ending up selecting fewer variables than are truly relevant in the data, as in the case of correlated covariates. For further information, see the comprehensive review by \citet{Hastie_2015}.
In a similar penalization spirit, \citet{Marra_2011} propose to add an extra penalty term to perform variable selection in GAMs, with shrinkage behaviour similar to that of ridge regression. Another set of methods \citep{George_1993,Raftery_1997,Ishwaran_2005,Johnson_2012,Carvalho_2010} suggests imposing a sparsifying prior on the coefficients. Nonetheless, these priors do not actually produce sparse posteriors, because every variable has a non-zero probability of being relevant. One can obtain a truly sparse solution by selecting only those variables whose probability of being relevant is above a certain threshold \citep{Barbieri_2004,Ishwaran_2005,Narisetty_2014}, but this approach ignores the uncertainty in the variables below the threshold. Reference models have been used before for tasks other than variable selection, as in \citet{afrabandpey19:_makin_bayes_predic_model_inter}, where the authors constrain the projection of a complex neural network to be interpretable (e.g., projecting onto decision trees). Closer to variable selection and related to our approach, \citet{piironen_projection_2016} use projection predictive inference and impose further constraints on the projection of a GP reference model to perform variable selection, since the direct projection suffers from an identifiability issue. While some alternative methods for variable selection in GLMMs and GAMMs exist, they either allow variable selection only for population parameters but not for group parameters \citep{Groll_2012,Tutz_2012} or, when based on Bayes factors \citep{Kass_1995}, are computationally infeasible due to the combinatorial explosion as soon as there are more than just a few variables. To the best of our knowledge, there are no practically applicable competing methods available in the literature, and so we focus on the absolute performance of our method. \section{Projection Predictive Inference} \label{sec:orgce68568} \subsection{Formulation of the KL projection} \label{sec:org151ebe8} Because the parameter spaces of the reference and projection models may differ, formulating the problem in terms of minimizing a discrepancy measure between \(p \left( \lambda_{*} \mid {\cal D} \right)\) and \(q_{\bot}(\lambda)\) does not make sense. Instead, we minimize the KL divergence from the reference model's predictive distribution to that of the constrained projection, which is intractable in its general form: \begin{align} \label{eq:kl_minimization} \text{KL} &\left( p \left( \tilde{y} \mid {\cal D} \right) \parallel q \left( \tilde{y} \right) \right) \nonumber\\ & = \mathbb{E}_{\tilde{y}} \left( \log p \left( \tilde{y} \mid {\cal D} \right) - \log q \left( \tilde{y} \right) \right) \nonumber\\ & = - \mathbb{E}_{\tilde{y}} \left( \log q \left( \tilde{y} \right) \right) + C \nonumber\\ & = - \mathbb{E}_{\tilde{y}} \left( \log \mathbb{E}_{\lambda}\left( p \left( \tilde{y} \mid \lambda \right) \right) \right) + C \nonumber\\ & = - \mathbb{E}_{\lambda_{*}} \left( \mathbb{E}_{\tilde{y} \mid \lambda_{*}} \left( \log \mathbb{E}_{\lambda} \left( p \left( \tilde{y} \mid \lambda \right) \right) \right) \right) + C, \end{align} where we have collapsed all terms that do not depend on \(\lambda\) into \(C\). Here, the expectations over \(\lambda_{*}\), \(\tilde{y} \mid \lambda_{*}\), and \(\lambda\) are taken over the posterior \(p \left( \lambda_{*} \mid {\cal D} \right)\), the posterior predictive distribution \(p \left( \tilde{y} \mid \lambda_{*} \right)\), and the constrained projection \(q_{\bot} \left( \lambda \right)\), respectively.
In practice, we approximate the KL minimization by changing the order of the integration and optimization in \(\mathbb{E}_{\tilde{y} \mid \lambda_{*}} \left( \log \mathbb{E}_{\lambda} \left( p \left( \tilde{y} \mid \lambda \right) \right) \right)\). To make this feasible, \citet{Goutis_1998} propose to perform the projection draw by draw, finding a direct mapping from a posterior draw of the reference model to the projection's constrained space. \citet{piironen_projective_2018} propose a further speedup by demonstrating that it is possible to solve the projection employing only a small subset of posterior draws, or even representative points that can be found, for instance, by clustering. \subsection{Variable and structure selection} \label{sec:org202b444} A high-level overview of the variable selection procedure of projection predictive inference includes the following steps \citep{piironen_projective_2018}: \begin{enumerate} \item Cluster the draws of the reference model's posterior. \item Perform forward search to determine the ordering of the terms for the projection. At each step, include the term that most decreases the KL divergence between the reference model's predictions and the projection's. \item Sequentially compute the projections along this path, adding one term at a time. \end{enumerate} For a more robust variable selection, we perform a leave-one-out (LOO) cross-validation procedure through the model space. In this approach, we repeat the full forward search \(N\) times by performing the selection with \(N-1\) data points and leaving out one point as a test point each time, resulting in \(N\) different solution paths. Instead of running the procedure for every observation, \citet{vehtari_pareto_2015} show that we can achieve a similarly robust selection by running the procedure only on a carefully selected subset of points based on their estimated Pareto-\(\hat{k}\) diagnostic. In GLMMs and GAMMs, we no longer perform variable selection but model structure selection, that is, we select additive model components, which we refer to as {\it terms}. In this context, a {\it term} may refer to a single variable with a single coefficient, a group-level term, corresponding to all the coefficients of the group's levels, or a smooth term, corresponding to all the coefficients associated with the smooth basis functions. The structure selection involves the same steps as the variable selection, with variables replaced by terms. \subsection{Solving the projection for exponential family models} \label{sec:org77bc3b6} For GLMs with observation models in the exponential family \citep{McCullagh_1989}, projecting a draw \(\lambda_{*}\) from the reference model's posterior to the projection space \(\lambda_{\bot}\) in Equation \eqref{eq:kl_minimization} coincides exactly with maximizing its likelihood under the projection model. Given a new observation \(\tilde{y}_i\), with expectation over the reference model \(\mu_i^{*} = \mathbb{E}_{\tilde{y} \mid \lambda_{*}}(\tilde{y}_i)\), this reduces to \citep{piironen_projective_2018}: \begin{equation} \label{eq:projection_exponential_family} \lambda_{\bot} = \arg \max_{\lambda} \sum_{i=1}^N \mu_{i}^{*}\xi_i(\lambda) - B(\xi_i(\lambda)), \end{equation} for some function \(B\) of the natural parameters \(\xi_i(\lambda)\); notably, the solution does not depend on the dispersion parameter \(\phi\). The above projection holds for observation models other than the Gaussian and link functions other than the identity, as long as the observation model belongs to the exponential family; the sketch below illustrates the Gaussian special case.
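In the Gaussian identity-link case the KL minimization reduces to least squares of the reference model's mean predictions on the submodel \citep{piironen_projective_2018}, which makes the draw-by-draw projection easy to illustrate. The following minimal \texttt{R} sketch uses illustrative data and variable names of our own choosing:
\begin{verbatim}
# Draw-by-draw projection in the Gaussian identity-link case: a single
# posterior draw of the reference model is projected onto a submodel by
# ordinary least squares on the reference model's mean predictions.
set.seed(1)
n <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)  # x3 will be excluded
X_ref <- cbind(1, x1, x2, x3)

beta_star <- c(1, 2, -1, 0.1)   # one posterior draw (reference model)
sigma2_star <- 1                # that draw's residual variance
mu_star <- drop(X_ref %*% beta_star)

# Projection onto the submodel {x1, x2}: fit the submodel to mu_star.
proj <- lm(mu_star ~ x1 + x2)
coef(proj)                      # projected coefficients for this draw

# The projected dispersion absorbs the discarded signal.
sigma2_proj <- sigma2_star + mean(residuals(proj)^2)
\end{verbatim}
Repeating this over a small set of (clustered) posterior draws yields the projected posterior \(q_{\bot}\).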
When the observation model is non-Gaussian or the link function is not the identity, there is no closed-form solution for the projection parameters, and we run iteratively reweighted least squares (IRLS), where at every iteration one computes a pseudo-Gaussian transformation of each log-likelihood term \({\cal L}_i\) as a second-order Taylor series expansion centered at the projection's current prediction \citep{McCullagh_1989,Gelman_2013}. \section{Projection Predictive Inference for GLMMs and GAMMs} \label{sec:org08a4ded} \subsection{Generalized Linear Multilevel Models} \label{sec:org2378051} GLMMs \citep{mcculloch_GLMMs,Gelman_2013} jointly estimate both \emph{global population} and \emph{group-specific} parameters. This approach allows the model to \emph{partially pool} information across groups, which is particularly useful for the estimates of groups with few data points. Here, multilevel structure refers to terms arising from the levels of a categorical variable and from their interactions with other variables. Given a response variable \(y\) with population design matrix \(X\) and group design matrix \(Z\), we can write a GLMM as \(y \sim \pi \left( g(\eta), \phi \right)\), where \(g(\cdot)\) is the inverse link function of the generalized family \(\pi\), and \(\phi\) is its dispersion parameter. The only difference to a GLM comes in the linear predictor \(\eta = X\beta + Zu\), where \(\beta\) are the population parameters and we add the group parameters \(u \sim p(u \mid \theta)\), which may depend on some hyperparameters \(\theta\). The goal is to accurately estimate the model parameters \((\beta, u, \phi, \theta)\). \subsection{Generalized Additive Multilevel Models} \label{sec:org6163466} GAMMs \citep{Hastie_1986,Verbyla_1999} add further complexity to GLMMs by introducing smooth terms, which are represented as linear combinations of non-linear basis functions. As for GLMMs, we can formulate the model as \(y \sim \pi \left( g(\eta), \phi \right)\). In the case of generalized additive models (GAMs) without multilevel structure, the predictor \(\eta\) can be written as \(\eta = \sum_{j=1}^J f_j(X)\), where each \(f_j\) is a function of the predictor matrix \(X\) (in practice, each \(f_j\) uses only a subset of the columns of \(X\)). These functions are usually represented via an additive spline basis expansion \(f(x) = \sum_{k=1}^K \gamma_k b_k(x)\) with B-splines \(b_k(x)\) \citep{Eilers_1996}. To avoid overfitting, we can either penalize some summary of the spline basis coefficients or, equivalently, rewrite the model as a GLMM by splitting the evaluated spline basis into an unpenalized null space (appended to \(X\)) and a penalized space (appended as group columns to \(Z\)), where the prior on \(u\) plays the role of the penalty \citep{Wood_2017}. Standard multilevel terms of GLMMs can be combined with non-linear smooth terms of GAMs to form the even more powerful GAMM model class \citep{Wood_2017}. However, as soon as smooth terms are added and translated to the GLMM framework, the resulting \(Z\) matrix becomes much denser than in a standard GLMM, thus further complicating inference. \subsection{Solving the projection for GLMMs} \label{sec:org1ab1c1f} Without further constraints, the projection \eqref{eq:kl_minimization} is not identifiable for GLMMs even if the observation model belongs to the exponential family \citep{Bickel_1977,lee_nelder_96,Gelman_2013}. This means that, given the mean prediction of the reference model \(\mu_i^{*}\), there is no unique solution for the parameters of the projection model fitted to \(\mu_i^{*}\).
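The non-identifiability is easy to see in a toy example: if the group parameters are treated as free coefficients, a constant can be shifted between the population intercept and the group effects without changing a single prediction. A small illustrative \texttt{R} sketch (all names ours):
\begin{verbatim}
# Non-identifiability of the unconstrained projection: moving mass
# between the population intercept and the group intercepts leaves the
# linear predictor -- and hence the fit to mu_star -- unchanged.
set.seed(1)
g <- factor(rep(1:5, each = 20))      # grouping factor with 5 levels
Z <- model.matrix(~ g - 1)            # one indicator column per level
X <- matrix(1, nrow = 100, ncol = 1)  # population intercept only

beta1 <- 0; u1 <- c(0.5, -0.2, 0.1, 0.3, -0.7)
beta2 <- 1; u2 <- u1 - 1              # shift 1 from the groups to the intercept

eta1 <- drop(X %*% beta1 + Z %*% u1)
eta2 <- drop(X %*% beta2 + Z %*% u2)
max(abs(eta1 - eta2))                 # ~0: same predictions, different parameters
\end{verbatim}
Only the prior \(p(u \mid \theta)\), or equivalently the marginalization over \(u\) described next, breaks this tie.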
To make the model identifiable and solve the projection, we propose to further restrict it by integrating out the group parameters \(u\). The resulting marginal likelihood can be written as: \begin{align} \label{eq:glmm_likelihood} {\cal L} & \left( \beta, \phi, \theta \right) = p \left( y \mid \beta, \phi, \theta \right) \nonumber\\ & = \int p \left( y \mid \beta, \phi, u \right) p \left( u \mid \theta \right) du \nonumber\\ & = \prod_{i=1}^N \int p \left( y_i \mid \beta, \phi_i, u_i \right) p \left( u_i \mid \theta \right) du_i, \end{align} where \(u_i\) are the group parameters belonging to observation \(i\) and we have assumed conditional independence between data points given the group parameters. This integral cannot be evaluated in closed form. For simple models with a single grouping factor, one can integrate the above expression numerically (e.g., employing Gauss--Hermite quadrature), but this quickly becomes infeasible for higher-dimensional problems. Maximizing this likelihood with respect to the model parameters, following Equation \eqref{eq:projection_exponential_family}, gives the constrained projection. There are many references in the literature that focus on practical approaches to obtain suitable approximate maximum likelihood estimates for GLMMs \citep{McCulloch_1997,lme4,Lee_2001,Lee_2006,ogden13,Booth_1999}. For our purposes, it is essential to use an approximate solution that still provides a good proxy for the KL-divergence-minimising solution of Equation \eqref{eq:kl_minimization}. Some of the methods cited above provide accurate and reliable solutions, but often at the expense of a higher computational cost \citep[e.g.,][]{ogden13}. In the statistics literature, one finds simpler approximations that would scale better for our case, such as the well-known restricted maximum likelihood (REML) approach \citep{Lee_2001,Lee_2006}, which treats the group parameters as \emph{fixed} data and appends them to the population parameters, so that one solves an augmented GLM. \subsection{Laplace Approximation} \label{sec:org5673980} The REML approach does not provide a tractable approximation to the log marginal likelihood obtained from Equation \eqref{eq:glmm_likelihood}, which takes the form \begin{equation*} \label{eq:3} \log {\cal L} \left( \beta, \phi, \theta \right) = \sum_{i=1}^N \log \left\{ \int \exp \left( h_i \right) du_i \right\}, \end{equation*} where \(h_i = \log p(y_i \mid g(\eta_i), \phi_i) + \log p(u_i \mid \theta)\) is the (unnormalized) log joint density. The integrals in the above equation do not exist in closed form except for models where \(p(u \mid \theta)\) is conjugate to the likelihood \(\pi\), and even in those cases the computation remains impractical unless the dimensionality of \(u\) is very small. As a general-purpose solution, we consider a first-order Laplace approximation to the integral \citep{ha2009maximum,barndoff1989,lme4}. We split the integration problem into sub-problems that are easier to solve. Given a value of \(\theta\), we can find the conditional mode \(\tilde{u}(\theta)\) and conditional estimate \(\tilde{\beta}(\theta)\) by solving the following optimization problem \begin{equation*} \begin{bmatrix} \tilde{u}(\theta) \\ \tilde{\beta}(\theta) \end{bmatrix} = \arg\max_{u,\beta}\, h(u, \beta \mid y, \phi, \theta), \end{equation*} where \(h = \sum_{i=1}^N h_i\) is the joint log density, so that \(\tilde{u}(\theta)\) and \(\tilde{\beta}(\theta)\) are the values that maximize it for the given \(\theta\).
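Before turning to the deviance-scale formulation used in practice, it is instructive to see the Laplace approximation at work in one dimension. The following self-contained \texttt{R} sketch compares a Laplace approximation of a single marginal likelihood term \(\int \exp(h_i)\, du_i\) against numerical quadrature, for a toy Poisson observation with a Gaussian group effect; the population part is held fixed for clarity and all settings are of our own choosing:
\begin{verbatim}
# One-dimensional Laplace approximation of integral(exp(h(u)) du) with
# h(u) = log p(y | u) + log p(u | theta), for a toy Poisson observation
# y with log-linear group effect u ~ Normal(0, sigma_u^2).
y <- 3; sigma_u <- 1
h   <- function(u) dpois(y, exp(u), log = TRUE) + dnorm(u, 0, sigma_u, log = TRUE)
hpp <- function(u) -exp(u) - 1 / sigma_u^2   # analytic second derivative of h

u_tilde <- optimize(h, c(-10, 10), maximum = TRUE)$maximum  # conditional mode
laplace <- exp(h(u_tilde)) * sqrt(2 * pi / -hpp(u_tilde))
quad    <- integrate(function(u) exp(h(u)), -Inf, Inf)$value
c(laplace = laplace, quadrature = quad)      # the two values agree closely
\end{verbatim}
In higher dimensions, the curvature term \(-h''(\tilde{u})\) is replaced by the determinant of the negative Hessian at the conditional mode.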
In practice, we express the conditional density on the \emph{deviance scale}: \begin{equation*} \label{eq:pirls} \begin{bmatrix} \tilde{u}(\theta) \\ \tilde{\beta}(\theta) \end{bmatrix} = \arg\min_{u,\beta}\, -2\, h(u, \beta \mid y, \phi, \theta). \end{equation*} This optimization problem can be solved efficiently using penalized iteratively reweighted least squares (PIRLS), as implemented in the popular \texttt{lme4} package \citep{lme4}. At each iteration, PIRLS performs a Gauss--Newton step in the joint space of \(u\) and \(\beta\). See \citet{lme4} for more details. The second-order Taylor series expansion of \(-2h\) at \(\tilde{u}(\theta)\) and \(\tilde{\beta}(\theta)\) provides the Laplace approximation to the \emph{profiled deviance}. On the deviance scale, the Laplace approximation is a function of the so-called \emph{discrepancy} measure, which takes a sum-of-squares form: \begin{equation*} \label{eq:discrepancy} d(u \mid y, \theta, \beta) = \left\| W^{1/2}(\mu) \left[ y - \mu(u, \theta, \beta) \right] \right\|^2 + \left\| u \right\|^2, \end{equation*} where \(\mu = g(\eta(u, \theta, \beta))\) is the inverse link transformation of the latent predictor \(\eta\), and \(W\) is a diagonal matrix of weights. Optimizing the resulting approximation with respect to \(\theta\) yields \(\theta_{\text{ML}}\); the maximum likelihood estimates of \(\beta\) and \(\phi\) then follow by substituting \(\theta_{\text{ML}}\) into \(\tilde{\beta}(\theta)\) and solving for \(\phi\) in \(h\). Importantly, optimizing the Laplace approximation is a problem in the constrained space of \(\theta\), which is usually low-dimensional and can therefore be solved efficiently. \subsection{Solving the projection for GAMMs} \label{sec:orgb3c6429} The identifiability issue that exists in GLMMs is further aggravated by the dense \(Z\) matrix of GAMMs, which makes the likelihood in Equation \eqref{eq:glmm_likelihood} intractable to compute even in conjugate Gaussian models with a single smooth term. This also happens in GAMs without any multilevel structure. In order to make these models identifiable, one has to 1) impose a quadratic penalty on the coefficients of the basis functions \citep{Wood_2017}, which also helps to avoid overfitting, and 2) integrate out the group parameters and group smooth terms. Solving the resulting maximum likelihood equations raises the same issues as in the plain GLMM case. Given that GAMMs can be represented as GLMMs, the same Laplace approximation is commonly used to obtain maximum likelihood estimates in these models \citep{Wood_2010,Wood_2017}. \subsection{Computational cost} \label{sec:org4b6cdc8} The computational budget of our approach is composed of the following components: \begin{itemize} \item Running PIRLS, i.e., solving the projection, for a given subset of terms. \item Solving the same projection for a number of posterior draws. \item Performing forward search to explore the model space. \item Running LOO cross-validation for many data points. \end{itemize} \citet{piironen_projective_2018} demonstrate that a small number of posterior draws is sufficient to find a good solution path, which saves a lot of computation during the forward search. Nonetheless, running PIRLS for complex multilevel models is still expensive, especially when done repeatedly for different posterior draws. Further, in our approach, we reduce the number of models to explore in forward search by considering only those that are sensible according to common modelling practices for GLMMs \citep{Gelman_2006}. This means that we only consider a model with a certain group parameter if its population parameter has already entered the projection. Likewise, we only consider an interaction between two variables when both of them have already entered the projection separately. This saves the method from exploring many models that are not considered sensible in the first place.
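As an illustration, a helper that filters candidate terms according to these rules could look as follows; this is a simplified sketch with hypothetical names and term encodings, not \texttt{projpred}'s actual implementation:
\begin{verbatim}
# Sketch of the search-space restriction: a candidate term is admissible
# only if the terms it builds on are already in the projection. Terms
# are encoded as strings, e.g. "x1", "x1:x2", "(x1 | g)".
is_admissible <- function(term, selected) {
  if (grepl("\\|", term)) {
    # group term "(x | g)": require the population term x to be selected
    pop <- trimws(sub("\\((.*)\\|.*", "\\1", term))
    pop == "1" || pop %in% selected
  } else if (grepl(":", term)) {
    # interaction "x1:x2": require both main effects to be selected
    all(strsplit(term, ":")[[1]] %in% selected)
  } else {
    TRUE  # plain population terms are always admissible
  }
}

candidates <- c("x2", "x1:x2", "(x1 | g)", "(1 | g)")
Filter(function(t) is_admissible(t, selected = "x1"), candidates)
# -> "x2", "(x1 | g)", "(1 | g)": the interaction must wait for x2
\end{verbatim}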
\section{Experiments} \label{sec:org6e08ade} We now turn to validating our method in both simulated experiments and real-world datasets. \subsection{Simulations} \label{sec:org7d43805} \begin{table*}[pt] \caption{\label{tab:generation_settings} Data generation process settings for the simulations.} \centering \begin{tabular}{lll} \hline Parameter & Description & Values\\ \hline \(D\) & Number of variables & 5, 7, 10\\ \(V\) & Proportion of \(D\) with group parameters & 0.33, 0.67, 1.0\\ \(K\) & Number of grouping factors & 1, 2, 3\\ \(\rho\) & Correlation factor & 0.0, 0.33, 0.67, 0.9\\ \(L\) & Levels per grouping factor & 5\\ \(N\) & Number of observations & 300\\ \(\pi\) & Observation model & Gaussian, Bernoulli\\ \(s\) & Sparsity & 0.4\\ \hline \end{tabular} \end{table*} We first validate our method by running projection predictive variable and structure selection as implemented in \texttt{projpred} \citep{projpred_package} on extensive simulations. We systematically test simple and more complex models with increasing numbers of grouping factors and variables. We also consider correlations between the coefficients of the levels within a grouping factor. The complete settings of the simulations are shown in Table \ref{tab:generation_settings}. To reduce external noise, we fix the number of observations to $300$ and the number of levels in each grouping factor to $5$, and we set the sparsity to $0.4$ to make sure that some terms are irrelevant. For each simulation condition, we run $25$ data realisations. The complete data generation process, common for all observation models \(\pi\), is given as \begin{align*} x_{id} & \sim \text{Normal}(0, 1)\qquad \beta_d \sim \text{Normal}(\mu_{b,f}, \sigma^2_{b,f}) \\ z_d & \sim \text{Bernoulli}(p=0.6)\qquad g_{ik} \sim \text{DiscreteUniform}(1, L) \\ \mu_{g_k} & \sim \text{MultivariateNormal}( \mu_g, \sigma^2_gI) \\ v_d & \sim \text{Bernoulli}(p=V) \\ u_{lk} & \sim \text{MultivariateNormal}(\mu_{g_k}, \Sigma_{\sigma^2_g, \rho}) \\ \eta_i & = \sum_{d=0}^{D} z_d\beta_dx_{id} + \sum_{k=1}^K\sum_{l=1}^L\sum_{d=0}^{D} v_dz_du_{lkd}\, \mathbb{1}\left( g_{ik} = l \right) x_{id} \\ y_i & \sim \pi(g(\eta_i), \phi_i), \end{align*} for data points \(i = 1,\ldots,N\), variables \(d = 0,\ldots,D\), grouping factors \(k = 1,\ldots,K\), inverse link function \(g\), indicator function \(\mathbb{1}(\cdot)\), and covariance matrix \(\Sigma_{\sigma^2_g,\rho}\) with diagonal entries \(\sigma_g^2\) and off-diagonal elements \(\rho\sigma_g^2\). We collapse the intercept (\(d = 0\)) into \(\beta\) and \(u\), so that we have \(D + 1\) dimensions by appending a column of ones to \(X\). We choose the identity link function. We fix the mean and variance hyperparameters for the intercept, $\mu_f, \sigma^2_f$, to $0$ and $20$, respectively, for the main terms, $\mu_{b,f}, \sigma^2_{b,f}$, to $5$ and $10$, and for the group terms, $\mu_g, \sigma^2_g$, to $0$ and $5$. We choose large values to avoid simulating practically undetectable terms.
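A minimal \texttt{R} implementation of this generative process, specialized to the Gaussian case with a single grouping factor (\(K = 1\)) and including the two-step sampling of the group terms described below, might look as follows; this is a sketch in which all names, the forced inclusion of the intercept, and the unit residual noise are simplifications of our own:
\begin{verbatim}
# Simplified simulation of the data generating process above: Gaussian
# response, identity link, one grouping factor (K = 1).
library(mvtnorm)
set.seed(1)
N <- 300; D <- 5; L <- 5
V <- 0.67; rho <- 0.33; s <- 0.4
sigma_g <- sqrt(5)

X <- cbind(1, matrix(rnorm(N * D), N, D))        # intercept as column d = 0
beta <- c(rnorm(1, 0, sqrt(20)), rnorm(D, 5, sqrt(10)))
z <- c(1, rbinom(D, 1, 1 - s))                   # relevance indicators
v <- c(1, rbinom(D, 1, V))                       # which terms vary by group
g <- sample(1:L, N, replace = TRUE)              # group membership

mu_g  <- rnorm(D + 1, 0, sigma_g)                      # grouping-factor mean
Sigma <- sigma_g^2 * (rho + (1 - rho) * diag(D + 1))   # equicorrelation
u <- rmvnorm(L, mean = mu_g, sigma = Sigma)            # L x (D + 1) effects

U <- sweep(u[g, , drop = FALSE], 2, v * z, `*`)  # per-observation group part
eta <- drop(X %*% (z * beta)) + rowSums(X * U)
y <- rnorm(N, mean = eta, sd = 1)                # unit noise (our choice)
\end{verbatim}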
We sample group terms following a two-step procedure: \begin{enumerate} \item We first sample \(K\) means for all grouping factors from a \(D + 1\) dimensional multivariate normal distribution as \begin{equation*} \mu_{g_k} \sim \text{MultivariateNormal}( \mu_g, \sigma^2_gI). \end{equation*} \item Then, for each level \(l\) and grouping factor \(k\), we sample from a \(D + 1\) dimensional multivariate normal (with mean $\mu_{g_k}$) as \begin{equation*} u_{lk} \sim \text{MultivariateNormal}(\mu_{g_k}, \Sigma_{\sigma^2_g, \rho}). \end{equation*} \end{enumerate} \begin{figure*}[pt] \centering \includegraphics[width=.99\linewidth]{pct_optimal_terms_uncertainty_paper_new_gaussian.pdf} \caption{\label{fig:gaussian_size_results}Optimal size for the projections to achieve the reference LOO performance (ELPD). Each column shows a different correlation factor. Each row shows a different number of grouping factors. We include 95\% uncertainty intervals for the size of the projection.} \end{figure*} We first focus on studying the predictive performance of the projections. We show the optimal size of the projection needed to reach the reference LOO expected log predictive density (ELPD) for Gaussian simulated data in Figure \ref{fig:gaussian_size_results}. We normalize this quantity by the total number of possible terms for each model. \paragraph{The projections achieve optimal predictive performance.} From the predictive perspective, \texttt{projpred} aims at finding a projection with good performance independently of the underlying true model. The results show that the projection is able to discard, on average, 50\% of the terms without losing performance. In the supplementary material we show these results for different sparsity factors. \paragraph{The projections are smaller with higher correlation.} Including correlated terms in the projection would not improve its predictive performance, which results in redundant components that can be discarded. \paragraph{The projections are robust.} Even though the models become more and more complex as we add more variables and a higher proportion of group terms, the size of the optimal projection decreases slightly relative to the total number of terms. The same holds as the number of grouping factors grows. \begin{figure*}[pt] \centering \includegraphics[width=.99\linewidth]{pct_roc_uncertainty_paper_new_gaussian.pdf} \caption{\label{fig:gaussian_results}False against true positive rate in a Gaussian model for varying selection thresholds. Each column shows a different correlation factor. Each row shows a different number of grouping factors. We include 95\% uncertainty intervals for both true and false positive rates. Chance-level selection is shown as dashed black lines.} \end{figure*} We now turn our focus to studying the selected group terms in the projections with regard to their true relevance. Even though it is not \texttt{projpred}'s goal to select all truly non-zero terms (but only all empirically relevant ones), it is still useful to see how well projection predictive selection coincides with the known truth. We show these results for Gaussian data in Figure \ref{fig:gaussian_results}. Importantly, even though in some cases our method may miss relevant variables, it is able to find a projection with optimal performance from just a subset of all variables and group coefficients, as shown in Figure \ref{fig:gaussian_size_results}.
To compute true and false positive rates, we decide which terms are relevant by looking at the ELPD improvement of each projection with respect to the previous one. For a varying threshold \(t \in \left[ 0, 1 \right]\), we select as relevant all terms whose projection's ELPD improvement is above the \(t\)th quantile of all ELPD improvements. Then, we compare the selected terms against the ground truth. For models with only a few terms, the discretization implied by the selection of terms results in the step-like jumps visible in Figure \ref{fig:gaussian_results}. \paragraph{The number of variables $D$ has a moderate effect.} Increasing the number of variables introduces more parameters to estimate, which in turn makes identifying the relevant terms harder. As the dimension increases, the true positive rate decreases while the false positive rate grows. \paragraph{The number of grouping factors $K$ is the most significant factor.} Increasing the number of grouping factors in the data multiplies the total number of parameters to estimate. This, in turn, dilutes the individual contribution of each parameter and therefore makes its identification harder. This is reflected in the bottom row of the figure, where the true positive rate drops to just above 75\% for 10 variables. \paragraph{The percentage of group terms $V$ is important.} In the simulations, we go to the extreme case of having all possible population terms vary across all grouping factors, which, although unrealistic, sets an interesting bar for the performance of the method. In this extreme case, the false positive rate reaches its maximum for all settings. Typically, though, only some terms vary across grouping factors, and usually different terms vary between different grouping factors. \paragraph{High correlation $\rho$ induces more false positives.} Lastly, we analyze the impact of the correlation. As the terms get more correlated, the chance of selecting an irrelevant term as relevant increases, and the trade-off between true and false positive rates therefore worsens overall. We show further simulations for Bernoulli and Poisson models in the supplementary material, where we also vary the number of levels per grouping factor and the sparsity. Those simulations show patterns and performance similar to the results above. \subsection{Real data experiments} \label{sec:real_data_experiments} We now turn to validating the performance of our method on real-world datasets, including a Bernoulli classification model and a Poisson count data model. \subsubsection{Bernoulli classification model} \label{sec:org1c840c3} For the Bernoulli classification model we use the \texttt{VerbAgg} \citep{lme4} dataset. This dataset includes item responses to a questionnaire on verbal aggression, used throughout \citet{de2004explanatory} to illustrate various forms of item response models. It consists of 7584 responses from 316 participants on 24 items. These items vary systematically in multiple aspects, as captured by three covariates whose parameters may vary over participants. For the purpose of our study, we randomly draw \(50\) individuals and their responses to increase the difficulty of the selection. Following \texttt{R}'s \texttt{formula} syntax, we fit the reference model \texttt{r2} \(\sim\) \texttt{btype + mode + situ + (btype + mode + situ | id)}, which includes \texttt{btype}, \texttt{situ} and \texttt{mode} as group parameters varying over participants.
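In code, fitting this reference model and running the selection follows the usual \texttt{rstanarm} plus \texttt{projpred} workflow. The following sketch shows the main calls as we understand them for recent package versions; sampler settings are omitted, and the subsampling of participants is a simplification of our own:
\begin{verbatim}
# Reference model fit and projection predictive selection for VerbAgg.
library(lme4)      # provides the VerbAgg data
library(rstanarm)
library(projpred)

data(VerbAgg, package = "lme4")
set.seed(1)
ids <- sample(levels(VerbAgg$id), 50)            # draw 50 participants
d <- droplevels(subset(VerbAgg, id %in% ids))
d$r2 <- as.integer(d$r2 == "Y")                  # binary response

fit <- stan_glmer(
  r2 ~ btype + mode + situ + (btype + mode + situ | id),
  family = binomial(), data = d,
  prior = hs(global_scale = 0.01)                # regularised horseshoe
)

vsel <- cv_varsel(fit, cv_method = "LOO")        # LOO cross-validated search
plot(vsel, stats = "elpd")                       # incremental ELPD summaries
suggest_size(vsel)                               # heuristic projection size
\end{verbatim}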
The full reference model contains 7 terms, counted as the simplest individual components of the model excluding the global intercept, namely \texttt{btype}, \texttt{mode}, \texttt{situ}, \texttt{(btype | id)}, \texttt{(mode | id)}, \texttt{(situ | id)} and \texttt{(1 | id)}. We fit the reference model with a regularised horseshoe prior (with global scale \(0.01\)) \citep{piironen_sparsity_2017,Carvalho_2010} using \texttt{rstanarm} \citep{rstanarm} and run a leave-one-out (LOO) cross-validated variable selection procedure \citep{projpred_package,Vehtari_2016}. We show ELPD summaries and their standard errors for the incremental projections in Figure \ref{fig:elpd_report_verbagg}. The red dashed line shows the LOO performance of the full reference model. The optimal projection threshold used by \texttt{projpred} \citep{piironen_projective_2018} suggests including the first \(6\) terms, resulting in the model \texttt{r2} \(\sim\) \texttt{btype + mode + situ + (btype + situ | id)}, effectively implying that almost all terms are relevant. However, the standard error increases for the last two terms, which can be explained by correlations in the posterior that may render those parameters weakly identifiable. For more accurate projections, one could project more posterior draws or improve the robustness of the reference model (e.g., with some PCA-like approach). The complete sequence of models considered in the solution path is: \begin{enumerate} \itemsep0em \item \texttt{r2} $\sim$ \texttt{(1 | id)}, \item \texttt{+ btype}, \item \texttt{+ (btype | id)}, \item \texttt{+ situ}, \item \texttt{+ (situ | id)}, \item \texttt{+ mode}, \item \texttt{+ (mode | id)}. \end{enumerate} \begin{figure}[pt] \centering \includegraphics[width=0.5\textwidth]{vsel_verbagg_plot_new.pdf} \caption{\label{fig:elpd_report_verbagg}Summaries of incremental projections for the \texttt{VerbAgg} dataset.} \end{figure} \subsubsection{Poisson GLMM model} \label{sec:orge06671f} For a Poisson count data model we use the \texttt{BikeSharing} dataset \citep{bikesharing}. These data contain the hourly and daily counts of rental bikes between 2011 and 2012 in the Capital Bikeshare system in Washington, D.C., with the corresponding weather and season information. We only use the daily averaged dataset, with \(731\) observations covering \(2\) years. It includes the following variables: \emph{season}, \emph{month}, \emph{holiday}, \emph{weekday}, \emph{weather}, \emph{temp}, \emph{humidity}, \emph{windspeed} and \emph{count}. We build a deliberately over-complicated model that includes highly correlated group effects for different grouping factors, such as season, month and weather. Our reference model is \texttt{count} \(\sim\) \texttt{windspeed + temp + humidity + (windspeed + temp + humidity | month) + (windspeed + temp + humidity | weekday) + (windspeed + temp + humidity | weather) + (windspeed + temp + humidity | holiday) + (windspeed + temp + humidity | season)}. We fit the reference model with a regularised horseshoe prior (with global scale \(0.01\)) using \texttt{rstanarm} \citep{rstanarm} and provide it as an input to \texttt{projpred}'s LOO cross-validated selection procedure.
\begin{figure}[pt] \centering \begin{subfigure}{0.49\textwidth} \includegraphics{vsel_bikeshare_plot_new.pdf} \caption{\label{fig:elpd_report_bike}Summaries of incremental projections for the GLMM model for the \texttt{BikeSharing} dataset.} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics{vsel_bikeshare_gamm_plot_new.pdf} \caption{\label{fig:elpd_report_bike_gamm}Summaries of incremental projections for the GAMM model for the \texttt{BikeSharing} dataset.} \end{subfigure} \caption{Summaries for the \texttt{BikeSharing} models} \end{figure} We show ELPD summaries for each incremental projection in Figure \ref{fig:elpd_report_bike}. The optimal projection suggested by \texttt{projpred}'s heuristic method \citep{projpred_package} is \texttt{count} \(\sim\) \texttt{temp + humidity + windspeed + (humidity + temp | month) + (1 | weather) + (1 | season)}, including only \(8\) terms out of \(23\). The remaining variables add only a marginal improvement. \subsubsection{Poisson GAMM model} \label{sec:orge84f887} We now use the same data as in the example above to build, for computational reasons, a simpler GAMM reference model of the form \texttt{count} \(\sim\) \texttt{s(windspeed) + s(temp) + s(humidity) + (1 | month) + (1 | weekday) + (1 | weather) + (1 | holiday) + (1 | season)}, where we add smooth terms for the main effects and group-specific intercepts. Note that fitting GAMMs is usually more expensive and takes longer than fitting plain GLMMs. We again use \texttt{rstanarm} to build this model with a normal prior and provide it as an input to \texttt{projpred}'s LOO cross-validated selection. We show ELPD summaries for each incremental projection in Figure \ref{fig:elpd_report_bike_gamm}. The optimal projection suggested by \texttt{projpred}'s heuristic method \citep{projpred_package} is \texttt{count} \(\sim\) \texttt{s(temp) + s(humidity) + (1 | season) + s(windspeed) + (1 | month)}, including only \(5\) out of \(9\) terms. \section{Discussion} \label{sec:org153ee84} In this work, we have extended projection predictive inference to variable and structure selection in more complex classes of models, namely GLMMs and GAMMs. For these models, the GLM projection solution cannot be directly translated, as it would result in unidentifiable models. Combining \texttt{projpred} with a Laplace approximation yields a good approximation that not only enables accurate variable selection but also scales well to larger numbers of variables and grouping factors. We have validated our proposal by performing extensive simulations that test the boundaries of our method in extreme settings. We also showed that our method works well in real-world scenarios with highly correlated grouping factors. We leave the extension of our current framework to models outside the exponential family for future work. In such cases, the KL minimization in Equation \eqref{eq:kl_minimization} no longer coincides with maximum likelihood estimation. \section*{Acknowledgements} We would like to thank Akash Dhaka, Kunal Ghosh, Charles Margossian and Topi Paananen for helpful comments and discussions. We also acknowledge the computational resources provided by the Aalto Science-IT project. \newpage
{ "timestamp": "2020-10-15T02:19:39", "yymm": "2010", "arxiv_id": "2010.06994", "language": "en", "url": "https://arxiv.org/abs/2010.06994" }
"\\section{\\textbf{Introduction}}\\label{sectIntr}\nPrincipal component analysis (PCA) is a well-kn(...TRUNCATED)
{"timestamp":"2021-10-07T02:12:01","yymm":"2010","arxiv_id":"2010.06851","language":"en","url":"http(...TRUNCATED)
"\\section{Definition of the operators and technical lemmas}\nWe recall here all the definitions of (...TRUNCATED)
{"timestamp":"2020-10-15T02:13:03","yymm":"2010","arxiv_id":"2010.06863","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\n\nIn recent years, there has been increasing interest in electronic syst(...TRUNCATED)
{"timestamp":"2021-02-09T02:19:36","yymm":"2010","arxiv_id":"2010.06853","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\n\\subsection{Motivation}\n\nTracking people in buildings has various imp(...TRUNCATED)
{"timestamp":"2020-10-15T02:20:24","yymm":"2010","arxiv_id":"2010.07028","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\nIt is known from condensed matter physics \\cite{Kane05b,Kane05a,Bernevig0(...TRUNCATED)
{"timestamp":"2020-10-15T02:18:01","yymm":"2010","arxiv_id":"2010.06966","language":"en","url":"http(...TRUNCATED)
"\\section{Training details}\n\\subsection{Pretraining pre-trained models}\n\\label{app:training_fes(...TRUNCATED)
{"timestamp":"2020-10-20T02:34:26","yymm":"2010","arxiv_id":"2010.06866","language":"en","url":"http(...TRUNCATED)
End of preview.