diff --git "a/samples/texts_merged/450057.md" "b/samples/texts_merged/450057.md"
new file mode 100644
--- /dev/null
+++ "b/samples/texts_merged/450057.md"
@@ -0,0 +1,1876 @@
+
+
+Some results on the Weiss-Weinstein bound for
+conditional and unconditional signal models in array
+processing
+
+Dinh Thang Vu, Alexandre Renaux, Remy Boyer, Sylvie Marcos
+
+► To cite this version:
+
+Dinh Thang Vu, Alexandre Renaux, Remy Boyer, Sylvie Marcos. Some results on the Weiss-Weinstein bound for conditional and unconditional signal models in array processing. Signal Processing, Elsevier, 2014, 95 (2), pp.126-148. 10.1016/j.sigpro.2013.08.020. hal-00947784
+
+HAL Id: hal-00947784
+
+https://hal.inria.fr/hal-00947784
+
+Submitted on 17 Feb 2014
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+
+Some results on the Weiss-Weinstein bound for conditional and
+unconditional signal models in array processing
+
+Dinh Thang VU, Alexandre RENAUX, Rémy BOYER, Sylvie MARCOS
+
+Université Paris-Sud 11, CNRS, Laboratoire des Signaux et Systèmes, Supelec, 3 rue Joliot Curie, 91192 Gif-sur-Yvette
+Cedex, France (e-mail: {Vu,Renaux,Remy.Boyer,Marcos}@lss.supelec.fr)
+
+Abstract
+
+In this paper, the Weiss-Weinstein bound is analyzed in the context of source localization with a planar array of sensors. Both conditional and unconditional source signal models are studied. First, some results are given in the multiple-source context without specifying the structure of the steering matrix or of the noise covariance matrix; moreover, the cases of a uniform and of a Gaussian prior are analyzed. Second, these results are applied to the particular case of a single source for two kinds of array geometries: a non-uniform linear array (elevation only) and an arbitrary planar (azimuth and elevation) array.
+
+Keywords: Weiss-Weinstein bound, DOA estimation.
+
+# 1. Introduction
+
+The source localization problem has been widely investigated in the literature, with many applications such as radar, sonar, and medical imaging. One of the objectives is to estimate the direction-of-arrival (DOA) of the sources using an array of sensors.
+
+In array processing, lower bounds on the mean square error are usually used as a benchmark to evaluate the ultimate performance of an estimator. Several lower bounds exist in the literature and, depending on the assumptions made about the parameters of interest, they fall into three main families. When the parameters are assumed to be deterministic (unknown), the main lower bounds on the (local) mean square error are the well-known Cramér-Rao bound and the Barankin bound (more precisely, their approximations [1][2][3][4]). When the parameters are assumed to be random with a known prior distribution, the lower bounds on the global mean square error are called Bayesian bounds [5]. Typical families of Bayesian bounds are the Ziv-Zakai family [6][7][8] and the Weiss-Weinstein family [9][10][11][12]. Finally, when the parameter vector is made of both deterministic and random parameters, so-called hybrid bounds have been developed [13][14][15].
+
+Since DOA estimation is a non-linear problem, the outlier effect can appear and the estimator's mean square error exhibits three distinct behaviors depending on the number of snapshots and/or on the signal-to-noise ratio (SNR) [16]. At high SNR and/or for a high number of snapshots, i.e., in the asymptotic region, the outlier effect can be neglected and the ultimate performance is described by the (classical/Bayesian/hybrid) Cramér-Rao bound. However, when the SNR and/or the number of snapshots decreases, the outlier effect leads to a quick increase of the mean square error: this is the so-called threshold effect. In this region, the behaviors of the lower bounds differ. Some bounds, generally called global bounds (Barankin, Ziv-Zakai, Weiss-Weinstein), can predict the threshold, while others, called local bounds, such as the Cramér-Rao bound or the Bhattacharyya bound, cannot. Finally, at low SNR and/or for a low number of snapshots, i.e., in the no-information region, the deterministic bounds exceed the estimator mean square error because they do not take the parameter support into account. On the contrary, the Bayesian bounds exploit the prior information on the parameters, leading to a "real" lower bound on the global mean square error.
+
+In this paper¹, we are interested in the Weiss-Weinstein bound, which, together with the bounds of the Ziv-Zakai family, is known to be one of the tightest Bayesian bounds. We will study the two main source models used in the literature [17]: the unconditional (or stochastic) model, where the source signals are assumed to be Gaussian, and the conditional (or deterministic) model, where the source signals are assumed to be deterministic. Surprisingly, in the context of array processing, while closed-form expressions of the Ziv-Zakai bound (more precisely, its extension by Bell et al. [18]) were proposed around 15 years ago for the unconditional model, the results concerning the Weiss-Weinstein bound have, most of the time, only been obtained by numerical computation. Concerning the unconditional model, in [19], the Weiss-Weinstein bound has been evaluated numerically and compared to the mean square error of the MUSIC algorithm and of classical beamforming using a particular 8 × 8 element array antenna. In [20], the authors have presented a numerical comparison between the Bayesian Cramér-Rao bound, the Ziv-Zakai bound, and the Weiss-Weinstein bound for DOA estimation. In [21], numerical computations of the Weiss-Weinstein bound to optimize sensor positions for non-uniform linear arrays have been presented. Again in the unconditional model context, in [22], by considering the matched-field estimation problem, the authors have derived a semi-closed-form expression of a simplified version of the Weiss-Weinstein bound for DOA estimation; indeed, the integration over the prior probability density function was not performed. The conditional model (with known waveforms) is studied only in [23], where a closed-form expression of the WWB is given in the simple case of spectral analysis, and in [24], where a simplified version of the bound is considered.
+
+While the primary goal of this paper is to give closed-form expressions of the Weiss-Weinstein bound for the DOA estimation of a single source with an arbitrary planar array of sensors, under both conditional and unconditional source signal models, we also provide partial closed-form expressions of the bound which could be useful for other problems. First, we study the general Gaussian observation model with parameterized mean or parameterized covariance matrix. Indeed, one of the reasons for the success of the Cramér-Rao bound is that, for this observation model, a closed-form expression of the Fisher information matrix is available: this is the so-called Slepian-Bangs formula [25]. Such formulas have been much less investigated in the context of bounds tighter than the Cramér-Rao bound. Second, some results are given in the multiple-source context without specifying the structure of the steering matrix or of the noise covariance matrix. Finally, these results are applied to the particular case of a single source for two kinds of array geometries: the non-uniform linear array (elevation only) and the planar (azimuth and elevation) array. Consequently, the aim of this paper is also to provide a textbook of formulas which could be applied in other fields. The Weiss-Weinstein bound is known to depend on parameters called test points and on other parameters generally denoted $s_i$. One particularity of this paper, in comparison with previous works on the Weiss-Weinstein bound, is that we do not use the assumption $s_i = 1/2, \forall i$.
+
+¹Section 5.2.2 of this paper has been partially presented in [24].
+
+This paper is organized as follows. Section 2 is devoted to the array processing observation model used in the paper. In Section 3, a short background on the Weiss-Weinstein bound is presented and two general closed-form expressions, which will be the cornerstone of our array processing derivations, are derived. In Section 4, we apply these general results to the array processing problem without specifying the structure of the steering matrix. In Section 5, we study the particular cases of the non-uniform linear array and of the planar array, for which we provide closed-form expressions of the bound, for both source models, in the context of a single stationary source in the far-field area. Some simulation results are proposed in Section 6. Finally, Section 7 gives our conclusions.
+
+# 2. Problem setup
+
+In this section, the general observation model used in array signal processing is presented, as well as the different assumptions used in the remainder of the paper. In particular, the so-called conditional and unconditional source models are emphasized.
+
+## 2.1. Observation model
+
+We consider the classical scenario of an array of $M$ sensors which receives $N$ complex bandpass signals $\mathbf{s}(t) = [s_1(t) \ s_2(t) \ \cdots \ s_N(t)]^T$. The output of the array is an $M \times 1$ complex vector $\mathbf{y}(t)$ which can be modelled as follows (see, e.g., [26] or [17])
+
+$$ \mathbf{y}(t) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t) + \mathbf{n}(t), \quad t = 1, \dots, T, \qquad (1) $$
+
+where $T$ is the number of snapshots, where $\boldsymbol{\theta} = [\theta_1 \ \theta_2 \ \cdots \ \theta_q]^T$ is an unknown parameter vector of interest², where $\mathbf{A}(\boldsymbol{\theta})$ is the so-called $M \times N$ steering matrix of the array response to the sources, and where the $M \times 1$ random vector $\mathbf{n}(t)$ is an additive noise.
+
+²Note that one source can be described by several parameters. Consequently, *q* > *N* in general.
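As a concrete illustration of model (1), the following sketch simulates unconditional-model observations for a hypothetical uniform linear array with half-wavelength spacing. The array geometry, dimensions, and parameter values below are ours for illustration only; the paper itself keeps $\mathbf{A}(\boldsymbol{\theta})$ fully generic until Section 5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (ours): M-sensor uniform linear array with
# half-wavelength spacing, N narrowband sources, T snapshots.
M, N, T = 8, 2, 100
theta = np.array([0.3, 1.1])                 # assumed DOAs in radians

# Steering matrix A(theta): element (m, n) is exp(j*pi*m*sin(theta_n)).
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(theta)[None, :])          # M x N

# Unconditional model M1 with R_s = I: i.i.d. circular Gaussian sources.
S = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)

# Circular white Gaussian noise with R_n = sigma_n^2 * I.
sigma2 = 0.5
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Eqn (1), stacked over snapshots: Y = [y(1) ... y(T)].
Y = A @ S + noise
print(Y.shape)  # (8, 100)
```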
+
+## 2.2. Assumptions
+
+* The unknown parameters of interest are assumed to be random with an *a priori* probability density function $p(\theta_i)$, $i = 1, \dots, q$. These random parameters are assumed to be statistically independent, such that the *a priori* joint probability density function is $p(\boldsymbol{\theta}) = \prod_{i=1}^q p(\theta_i)$. Note that this assumption will only be used in Subsections 4.2 and 4.3. We also assume that the parameter space, denoted $\Theta$, is a connected subset of $\mathbb{R}^q$ (see [27]).
+
+* The noise vector is assumed to be complex Gaussian, statistically independent of the parameters, i.i.d., circular, with zero mean and known covariance matrix $E[\mathbf{n}(t)\mathbf{n}^H(t)] = \mathbf{R}_n$. This assumption will be made more restrictive in Section 5 where it will be assumed that $\mathbf{R}_n = \sigma_n^2\mathbf{I}$. In any case, $\mathbf{R}_n$ is assumed to be a full rank matrix.
+
+* The steering matrix $\mathbf{A}(\boldsymbol{\theta})$ is assumed to be such that the observation model is identifiable. From Section 3 to Section 4, the structure of $\mathbf{A}(\boldsymbol{\theta})$ is not specified in order to obtain the most general results.
+
+* Concerning the source signals, two kinds of models have been investigated in the literature (see, e.g., [28] or [17]) and will be alternatively used in this paper.
+
+- $\mathcal{M}_1$: *Unconditional or stochastic model:* $\mathbf{s}(t)$ is assumed to be a complex circular random vector, i.i.d., statistically independent of the noise, Gaussian with zero mean and known covariance matrix $E[\mathbf{s}(t)\mathbf{s}^H(t)] = \mathbf{R}_s$. Note that, in the previous results on the Cramér-Rao bound available in the literature [28], the covariance matrix $\mathbf{R}_s$ is assumed to be unknown. In this paper, we make the simpler assumption that the covariance matrix $\mathbf{R}_s$ is known. These assumptions have already been used for the calculation of bounds more complex than the Cramér-Rao bound (see, e.g., [22], [29], [30]).
+
+- $\mathcal{M}_2$: *Conditional or deterministic model:* $\forall t$, $\mathbf{s}(t)$ is assumed to be deterministic and known. Note that, under the conditional model assumption, the signal waveforms can be assumed either unknown or known. While the conditional model with unknown waveforms seems more challenging, the conditional model with known waveforms, which will be used in this paper, can be found in several applications such as mobile telecommunications and radar (see, e.g., [31], [32], and [33]).
+
+## 2.3. Likelihood of the observations
+
+Let $\mathbf{R}_y = E[(\mathbf{y}(t) - E[\mathbf{y}(t)])(\mathbf{y}(t) - E[\mathbf{y}(t)])^H]$ be the covariance matrix of the observation vector $\mathbf{y}(t)$. According to the aforementioned assumptions, it is easy to see that, under $\mathcal{M}_1$, the observations $\mathbf{y}(t)$ are distributed as a complex circular Gaussian random vector with zero mean and covariance matrix $\mathbf{R}_y(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{R}_s\mathbf{A}^H(\boldsymbol{\theta}) + \mathbf{R}_n$, while under $\mathcal{M}_2$, the observations $\mathbf{y}(t)$ are distributed as a complex circular Gaussian random vector with mean $\mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t)$ and covariance matrix $\mathbf{R}_y = \mathbf{R}_n$. Moreover, in both cases the observations are i.i.d.
+
+Therefore, the likelihood, $p(\mathbf{Y}; \boldsymbol{\theta})$, of the full observations matrix $\mathbf{Y} = [\mathbf{y}(1) \ \mathbf{y}(2) \ \dots \ \mathbf{y}(T)]$ under $\mathcal{M}_1$
+is given by
+
+$$
+p(\mathbf{Y}; \boldsymbol{\theta}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^T} \exp \left( -\sum_{t=1}^{T} \mathbf{y}^H(t) \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}) \mathbf{y}(t) \right), \quad (2)
+$$
+
+where $\mathbf{R}_y(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{R}_s\mathbf{A}^H(\boldsymbol{\theta}) + \mathbf{R}_n$ and the likelihood under $\mathcal{M}_2$ is given by
+
+$$
+p(\mathbf{Y}; \boldsymbol{\theta}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathrm{n}}|^T} \exp \left( -\sum_{t=1}^{T} (\mathbf{y}(t) - \mathbf{A}(\boldsymbol{\theta}) \mathbf{s}(t))^{H} \mathbf{R}_{\mathrm{n}}^{-1} (\mathbf{y}(t) - \mathbf{A}(\boldsymbol{\theta}) \mathbf{s}(t)) \right). \quad (3)
+$$
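For later numerical use, both likelihoods (2) and (3) can be evaluated in a numerically stable way through `slogdet` and `solve` rather than explicit determinants and inverses. A minimal sketch (the function names are ours):

```python
import numpy as np

def loglik_unconditional(Y, Ry):
    # ln p(Y; theta) under M1, Eqn (2): Y is M x T, Ry = A Rs A^H + Rn.
    M, T = Y.shape
    _, logdet = np.linalg.slogdet(Ry)
    quad = np.trace(Y.conj().T @ np.linalg.solve(Ry, Y)).real
    return -M * T * np.log(np.pi) - T * logdet - quad

def loglik_conditional(Y, A, S, Rn):
    # ln p(Y; theta) under M2, Eqn (3): known waveforms S (N x T).
    M, T = Y.shape
    E = Y - A @ S
    _, logdet = np.linalg.slogdet(Rn)
    quad = np.trace(E.conj().T @ np.linalg.solve(Rn, E)).real
    return -M * T * np.log(np.pi) - T * logdet - quad
```

Both functions return the same value when the data are zero-mean with identity covariance, which gives a quick consistency check of the two models.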
+
+# 3. Weiss-Weinstein bound: Generalities
+
+In this Section, we first recall the structure of the Weiss-Weinstein bound on the mean square error and the assumptions used to compute this bound. Second, a general result about the Gaussian observation model with parameterized mean or parameterized covariance matrix which, to the best of our knowledge, does not appear in the literature, is presented. This result will be useful to study both the unconditional model $\mathcal{M}_1$ and the conditional model $\mathcal{M}_2$ in the next Section.
+
+## 3.1. Background
+
+The Weiss-Weinstein bound for a $q \times 1$ real parameter vector $\boldsymbol{\theta}$ is a $q \times q$ matrix denoted **WWB** and is
+given as follows [34]
+
+$$
+\mathbf{WWB} = \mathbf{H}\mathbf{G}^{-1}\mathbf{H}^T, \tag{4}
+$$
+
+where the $q \times q$ matrix $\mathbf{H} = [\mathbf{h}_1 \ \mathbf{h}_2 \dots \mathbf{h}_q]$ contains the so-called test-points $\mathbf{h}_i$, $i = 1, \dots, q$ such that
+$\boldsymbol{\theta} + \mathbf{h}_i \in \Theta \ \forall \mathbf{h}_i$. The $k, l$-element of the $q \times q$ matrix $\mathbf{G}$ is given by
+
+$$
+\{\mathbf{G}\}_{k,l} = \frac{\mathbb{E}\left[\left(L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta}) - L^{1-s_k}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_k, \boldsymbol{\theta})\right)\left(L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta}) - L^{1-s_l}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_l, \boldsymbol{\theta})\right)\right]}{\mathbb{E}\left[L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})\right] \mathbb{E}\left[L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta})\right]}, \quad (5)
+$$
+
+where the expectations are taken over the joint probability density function $p(\mathbf{Y}, \boldsymbol{\theta})$ and where the function $L(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_i, \boldsymbol{\theta})$ is defined by $L(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_i, \boldsymbol{\theta}) = \frac{p(\mathbf{Y}, \boldsymbol{\theta}+\mathbf{h}_i)}{p(\mathbf{Y}, \boldsymbol{\theta})}$. The notation $L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})$ means that $L(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})$ is raised to the power $s_k$. The elements $s_i$ are such that $s_i \in [0, 1]$, $i = 1, \dots, q$.
+
+Note that we have the following order relation [34]
+
+$$
+\operatorname{Cov}(\hat{\boldsymbol{\theta}}) = E\left[(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})^T\right] \succeq \mathbf{WWB}, \quad (6)
+$$
+
+where $\mathbf{A} \succeq \mathbf{B}$ means that the matrix $\mathbf{A} - \mathbf{B}$ is positive semi-definite and where $\operatorname{Cov}(\hat{\boldsymbol{\theta}})$ is the global mean square error (the expectation is taken over the joint pdf $p(\mathbf{Y}, \boldsymbol{\theta})$) of any estimator $\hat{\boldsymbol{\theta}}$ of the parameter vector $\boldsymbol{\theta}$. Finally, in order to obtain a tight bound, one has to maximize $\mathbf{WWB}$ over the test points $\mathbf{h}_i$ and the parameters $s_i$ ($i=1, \dots, q$). Note that this maximization can be done by using the trace of $\mathbf{H}\mathbf{G}^{-1}\mathbf{H}^T$ or with respect to the Loewner partial ordering [35]. In this paper, we will use the trace of $\mathbf{H}\mathbf{G}^{-1}\mathbf{H}^T$, which is enough to obtain tight results.
+
+## 3.2. A general result on the Weiss-Weinstein bound and its application to the Gaussian observation models
+
+An analytical result on the Weiss-Weinstein bound which will be useful in the following derivations and which could be useful for other problems is derived in this part. Note that this result is independent of the parameter vector size *q* and of the considered observation model.
+
+Let us denote $\Omega$ the observation space. By rewriting the elements of matrix $\mathbf{G}$ (see Eqn. (5)) involved in the Weiss-Weinstein bound, one obtains for the numerator denoted by $N_{\{\mathbf{G}\}_{k,l}}$,
+
+$$
+\begin{aligned}
+N_{\{\mathbf{G}\}_{k,l}} &= \mathbb{E} \left[ \left( L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta}) - L^{1-s_k}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_k, \boldsymbol{\theta}) \right) \left( L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta}) - L^{1-s_l}(\mathbf{Y}; \boldsymbol{\theta} - \mathbf{h}_l, \boldsymbol{\theta}) \right) \right] \\
+&= \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k) p^{s_l}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_l)}{p^{s_k+s_l-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta} + \int_{\Theta} \int_{\Omega} \frac{p^{1-s_k}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_k) p^{1-s_l}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_l)}{p^{1-s_k-s_l}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta} \\
+&\quad - \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k) p^{1-s_l}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_l)}{p^{s_k-s_l}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta} - \int_{\Theta} \int_{\Omega} \frac{p^{1-s_k}(\mathbf{Y}, \boldsymbol{\theta} - \mathbf{h}_k) p^{s_l}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_l)}{p^{s_l-s_k}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta},
+\end{aligned}
+\quad (7) $$
+
+and for the denominator denoted by $D_{\{\mathbf{G}\}_{k,l}}$,
+
+$$
+\begin{aligned}
+D_{\{\mathbf{G}\}_{k,l}} &= \mathbb{E}[L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})] \mathbb{E}[L^{s_l}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_l, \boldsymbol{\theta})] \\
+&= \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k)}{p^{s_k-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y} d\boldsymbol{\theta} \int_{\Theta} \int_{\Omega} \frac{p^{s_l}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_l)}{p^{s_l-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y} d\boldsymbol{\theta}.
+\end{aligned}
+\quad (8) $$
+
+Let us now define a function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ as
+
+$$
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \int_{\Theta} \int_{\Omega} \frac{p^{\alpha}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y}d\boldsymbol{\theta},
+\quad (9) $$
+
+where $(\alpha, \beta) \in [0, 1]^2$ and where $(\mathbf{u}, \mathbf{v})$ are two $q \times 1$ vectors such that $\boldsymbol{\theta} + \mathbf{u} \in \Theta$ and $\boldsymbol{\theta} + \mathbf{v} \in \Theta$. The notation $p^\alpha (\mathbf{Y}, \boldsymbol{\theta} + \mathbf{u})$ means that $\alpha$ is the power of $p(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{u})$. By identification, it is easy to see that
+
+$$
+\{\mathbf{G}\}_{k,l} = \frac{\eta(s_k, s_l, \mathbf{h}_k, \mathbf{h}_l) + \eta(1-s_k, 1-s_l, -\mathbf{h}_k, -\mathbf{h}_l) - \eta(s_k, 1-s_l, \mathbf{h}_k, -\mathbf{h}_l) - \eta(1-s_k, s_l, -\mathbf{h}_k, \mathbf{h}_l)}{\eta(s_k, 0, \mathbf{h}_k, \mathbf{0})\, \eta(0, s_l, \mathbf{0}, \mathbf{h}_l)}.
+\quad (10) $$
+
+Note that we choose the arbitrary notation $D_{\{\mathbf{G}\}_{k,l}} = \eta(s_k, 0, \mathbf{h}_k, \mathbf{0}) \eta(0, s_l, \mathbf{0}, \mathbf{h}_l)$ for the denominator. The notation $D_{\{\mathbf{G}\}_{k,l}} = \eta(s_k, 1, \mathbf{h}_k, \mathbf{0}) \eta(1, s_l, \mathbf{0}, \mathbf{h}_l)$ or, even, $D_{\{\mathbf{G}\}_{k,l}} = \eta(s_k, 0, \mathbf{h}_k, \mathbf{v}) \eta(0, s_l, \mathbf{u}, \mathbf{h}_l)$ would lead to the same result.
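This choice can be checked directly: since $p^{0}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{v}) = 1$, the expectation defining the denominator collapses to an instance of $\eta$,

$$
\mathbb{E}\left[L^{s_k}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{h}_k, \boldsymbol{\theta})\right]
= \int_{\Theta} \int_{\Omega} p(\mathbf{Y}, \boldsymbol{\theta}) \left( \frac{p(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k)}{p(\mathbf{Y}, \boldsymbol{\theta})} \right)^{s_k} d\mathbf{Y} d\boldsymbol{\theta}
= \int_{\Theta} \int_{\Omega} \frac{p^{s_k}(\mathbf{Y}, \boldsymbol{\theta} + \mathbf{h}_k)}{p^{s_k-1}(\mathbf{Y}, \boldsymbol{\theta})} d\mathbf{Y} d\boldsymbol{\theta}
= \eta(s_k, 0, \mathbf{h}_k, \mathbf{0}),
$$

and similarly for $\eta(0, s_l, \mathbf{0}, \mathbf{h}_l)$.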
+
+With Eqn. (10), it is clear that the knowledge of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ for a particular problem leads to the Weiss-Weinstein bound (up to the maximization procedure over the test points and over the parameters $s_i$). Surprisingly, this simple expression is given in [34] only for $s_i = 1/2$, $\forall i$, and not for the general case.
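Once $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is available for a given problem, assembling $\mathbf{G}$ from Eqn. (10) and the bound from Eqn. (4) is mechanical. A minimal sketch (the function names are ours; `eta` stands for the problem-specific function of Eqn. (9)):

```python
import numpy as np

def wwb_matrix(eta, H, s):
    # Assemble G element by element from Eqn (10), then WWB = H G^{-1} H^T (Eqn (4)).
    # eta(alpha, beta, u, v) is the problem-specific function of Eqn (9);
    # H is the q x q matrix of test points, s a length-q sequence, s_i in (0, 1).
    q = H.shape[1]
    zero = np.zeros(H.shape[0])
    G = np.empty((q, q))
    for k in range(q):
        for l in range(q):
            hk, hl = H[:, k], H[:, l]
            num = (eta(s[k], s[l], hk, hl)
                   + eta(1 - s[k], 1 - s[l], -hk, -hl)
                   - eta(s[k], 1 - s[l], hk, -hl)
                   - eta(1 - s[k], s[l], -hk, hl))
            den = eta(s[k], 0.0, hk, zero) * eta(0.0, s[l], zero, hl)
            G[k, l] = num / den
    return H @ np.linalg.inv(G) @ H.T
```

A tight bound is then obtained by maximizing, e.g., the trace of the returned matrix over the test points and the $s_i$; a simple grid search suffices in small dimensions.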
+
+Let us now detail the function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$. It can be rewritten as
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \int_{\Theta} \frac{p^{\alpha}(\boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\boldsymbol{\theta})} \int_{\Omega} \frac{p^{\alpha}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\mathbf{Y}; \boldsymbol{\theta})} d\mathbf{Y} d\boldsymbol{\theta} \\
+&= \int_{\Theta} \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \frac{p^{\alpha}(\boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\boldsymbol{\theta})} d\boldsymbol{\theta},
+\end{align}
+\tag{11}
+$$
+
+where we define
+
+$$ \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \int_{\Omega} \frac{p^{\alpha}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{u}) p^{\beta}(\mathbf{Y}; \boldsymbol{\theta} + \mathbf{v})}{p^{\alpha+\beta-1}(\mathbf{Y}; \boldsymbol{\theta})} d\mathbf{Y}. \quad (12) $$
+
+Our aim is to give the most general result. Consequently, we will focus only on $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ since the *a priori* probability density function depends on the considered problem.
+
+An important remark pointed out in [27] is that the integration for the parameter space is with respect to the region $\{\boldsymbol{\theta}: p(\boldsymbol{\theta}) > 0\}$. However, since the functions being integrated are $p(\boldsymbol{\theta})$, $p(\boldsymbol{\theta} + \mathbf{u})$, and $p(\boldsymbol{\theta} + \mathbf{v})$, then the actual region of integration (where all the functions are positive) is the intersection of three regions, $\{\boldsymbol{\theta}: p(\boldsymbol{\theta}) > 0\} \cap \{\boldsymbol{\theta}: p(\boldsymbol{\theta} + \mathbf{u}) > 0\} \cap \{\boldsymbol{\theta}: p(\boldsymbol{\theta} + \mathbf{v}) > 0\}$. Note that, in order to simplify the notation we only use $\Theta$ throughout this paper but this remark will be useful and explicitly specified in Section 4.2.
+
+### 3.2.1. Gaussian observation model with parameterized covariance matrix
+
+One calls (circular, i.i.d.) Gaussian observation model with parameterized covariance matrix a model such that the observations $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_y(\boldsymbol{\theta}))$, where $\boldsymbol{\theta}$ is the parameter vector of interest. Note that $\mathcal{M}_1$ is a special case of this model, since the parameters of interest appear only in the covariance matrix of the observations, which has the particular structure $\mathbf{R}_y(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{R}_s\mathbf{A}^H(\boldsymbol{\theta}) + \mathbf{R}_n$. The closed-form expression of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is given by:
+
+$$
+\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)}}{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta} |\alpha\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}+\mathbf{u}) + \beta\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}+\mathbf{v}) - (\alpha+\beta-1)\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta})|^T}.
+\tag{13}
+$$
+
+The proof is given in Appendix .1. Note that similar expressions are given in [18] (Eqn. (B.15)) and [36] (p. 67, Eqn. (52)) for the particular case where $\alpha = s$ and $\beta = 1-s$.
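For numerical evaluation, Eqn. (13) translates directly into code. A minimal sketch (assuming `Ry` is a user-supplied callable mapping $\boldsymbol{\theta}$ to the covariance matrix; the names are ours, and for large $T$ or ill-conditioned matrices one would work with log-determinants instead):

```python
import numpy as np

def eta_dot_cov(Ry, theta, alpha, beta, u, v, T):
    # Eqn (13): eta_dot for the Gaussian model with parameterized covariance.
    R0, Ru, Rv = Ry(theta), Ry(theta + u), Ry(theta + v)
    # Matrix inside the last determinant of the denominator of (13).
    Mat = (alpha * np.linalg.inv(Ru) + beta * np.linalg.inv(Rv)
           - (alpha + beta - 1.0) * np.linalg.inv(R0))
    num = np.linalg.det(R0).real ** (T * (alpha + beta - 1.0))
    den = (np.linalg.det(Ru).real ** (T * alpha)
           * np.linalg.det(Rv).real ** (T * beta)
           * np.linalg.det(Mat).real ** T)
    return num / den
```

A quick sanity check: for $\mathbf{u} = \mathbf{v} = \mathbf{0}$ the three covariance matrices coincide and the expression reduces to 1, as it must.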
+
+### 3.2.2. Gaussian observation model with parameterized mean
+
+One calls (circular, i.i.d.) Gaussian observation model with parameterized mean a model such that the observations $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{f}_t(\boldsymbol{\theta}), \mathbf{R}_y)$, where $\boldsymbol{\theta}$ is the parameter vector of interest. Note that $\mathcal{M}_2$ is a special case of this model, since the parameters of interest appear only in the mean of the observations, which has the particular structure $\mathbf{f}_t(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t)$ (and $\mathbf{R}_y = \mathbf{R}_n$). The closed-form expression of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is given in this case by
+
+$$
+\begin{equation}
+\begin{split}
+\ln \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = -\sum_{t=1}^{T} \Big[ & \alpha (1-\alpha)\, \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta (1-\beta)\, \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \\
+& + (1-\alpha-\beta)(\alpha+\beta)\, \mathbf{f}_t^H (\boldsymbol{\theta}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) - 2 \operatorname{Re} \big\{ \alpha\beta\, \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \\
+& + \alpha (1-\alpha-\beta)\, \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) + \beta (1-\alpha-\beta)\, \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) \big\} \Big],
+\end{split}
+\tag{14}
+\end{equation}
+$$
+
+or equivalently by
+
+$$
+\begin{equation}
+\begin{split}
+\ln \dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = -\sum_{t=1}^{T} \Big[ & \alpha (1-\alpha-\beta) \left\| \mathbf{R}_{\mathbf{y}}^{-1/2} \left( \mathbf{f}_t(\boldsymbol{\theta}+\mathbf{u}) - \mathbf{f}_t(\boldsymbol{\theta}) \right) \right\|^{2} + \alpha\beta \left\| \mathbf{R}_{\mathbf{y}}^{-1/2} \left( \mathbf{f}_t(\boldsymbol{\theta}+\mathbf{u}) - \mathbf{f}_t(\boldsymbol{\theta}+\mathbf{v}) \right) \right\|^{2} \\
+& + \beta (1-\alpha-\beta) \left\| \mathbf{R}_{\mathbf{y}}^{-1/2} \left( \mathbf{f}_t(\boldsymbol{\theta}+\mathbf{v}) - \mathbf{f}_t(\boldsymbol{\theta}) \right) \right\|^{2} \Big].
+\end{split}
+\tag{15}
+\end{equation}
+$$
+
+The details are given in Appendix .2.
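Eqn. (15) is the form best suited to computation, since it only involves whitened Euclidean norms. A minimal sketch (assuming `f` is a user-supplied callable $(\boldsymbol{\theta}, t) \mapsto \mathbf{f}_t(\boldsymbol{\theta})$; the Cholesky-based whitening and all names are ours):

```python
import numpy as np

def ln_eta_dot_mean(f, Rn, theta, alpha, beta, u, v, T):
    # Eqn (15): ln eta_dot for the Gaussian model with parameterized mean, Ry = Rn.
    L = np.linalg.cholesky(Rn)               # Rn = L L^H
    def whiten(x):                           # ||L^{-1} x||^2 = x^H Rn^{-1} x
        return np.linalg.solve(L, x)
    total = 0.0
    for t in range(T):
        fu, fv, f0 = f(theta + u, t), f(theta + v, t), f(theta, t)
        total += (alpha * (1.0 - alpha - beta) * np.linalg.norm(whiten(fu - f0)) ** 2
                  + alpha * beta * np.linalg.norm(whiten(fu - fv)) ** 2
                  + beta * (1.0 - alpha - beta) * np.linalg.norm(whiten(fv - f0)) ** 2)
    return -total
```

Since (14) and (15) are algebraically identical, either form can be used; (15) avoids forming the cross quadratic forms explicitly.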
+
+# 4. General application to array processing
+
+In the previous Section, it has been shown that the computation of the Weiss-Weinstein bound (or, at least, of the matrix $\mathbf{G}$) reduces to the knowledge of the function $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ given by Eqn. (9). As one can see in Eqn. (10), the elements of the matrix $\mathbf{G}$ depend on $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ for particular values of $\alpha$, $\beta$, $\mathbf{u}$, and $\mathbf{v}$. Consequently, the goal of this Section is to detail these particular functions for our model given by Eqn. (1). Since Eqn. (9) can be decomposed into a *deterministic part* (in the sense that $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ (see Eqn. (12)) only depends on the likelihood function) and a *Bayesian part* (when we have to integrate $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ over the *a priori* probability density function of the parameters), we will first focus on the particular functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ by using the results of the previous Section on the Gaussian observation model with parameterized mean or covariance matrix. Second, we will detail the passage from $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ to $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ in the particular case where $p(\theta_i)$ is a uniform probability density function $\forall i$. Another result will also be given in the case of a Gaussian prior.
+
+## 4.1. Analysis of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$
+
+We now detail the particular functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ involved in the different elements $\{\mathbf{G}\}_{k,l}$, $k, l \in \{1, \dots, q\}$, for both models $\mathcal{M}_1$ and $\mathcal{M}_2$.
+
+### 4.1.1. Unconditional observation model $\mathcal{M}_1$
+
+Under the unconditional model $\mathcal{M}_1$, by using Eqn. (13), one straightforwardly obtains the functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ involved in the elements $\{\mathbf{G}\}_{k,l} = \{\mathbf{G}\}_{l,k}$:
+
+$$
+\left\{
+\begin{aligned}
+\dot{\eta}_{\boldsymbol{\theta}}(s_k, s_l, \mathbf{h}_k, \mathbf{h}_l) &= \frac{|\mathbf{R}_y(\boldsymbol{\theta})|^{T(s_k+s_l-1)}}{|\mathbf{R}_y(\boldsymbol{\theta}+\mathbf{h}_k)|^{Ts_k} |\mathbf{R}_y(\boldsymbol{\theta}+\mathbf{h}_l)|^{Ts_l} |s_k \mathbf{R}_y^{-1}(\boldsymbol{\theta}+\mathbf{h}_k) + s_l \mathbf{R}_y^{-1}(\boldsymbol{\theta}+\mathbf{h}_l) - (s_k+s_l-1) \mathbf{R}_y^{-1}(\boldsymbol{\theta})|^T}, \\
+\dot{\eta}_{\boldsymbol{\theta}}(1-s_k, 1-s_l, -\mathbf{h}_k, -\mathbf{h}_l) &= \frac{|\mathbf{R}_y(\boldsymbol{\theta})|^{T(1-s_k-s_l)}}{|\mathbf{R}_y(\boldsymbol{\theta}-\mathbf{h}_k)|^{T(1-s_k)} |\mathbf{R}_y(\boldsymbol{\theta}-\mathbf{h}_l)|^{T(1-s_l)} |(1-s_k)\mathbf{R}_y^{-1}(\boldsymbol{\theta}-\mathbf{h}_k) + (1-s_l)\mathbf{R}_y^{-1}(\boldsymbol{\theta}-\mathbf{h}_l) - (1-s_k-s_l)\mathbf{R}_y^{-1}(\boldsymbol{\theta})|^T}, \\
+\dot{\eta}_{\boldsymbol{\theta}}(s_k, 1-s_l, \mathbf{h}_k, -\mathbf{h}_l) &= \frac{|\mathbf{R}_y(\boldsymbol{\theta})|^{T(s_k-s_l)}}{|\mathbf{R}_y(\boldsymbol{\theta}+\mathbf{h}_k)|^{Ts_k} |\mathbf{R}_y(\boldsymbol{\theta}-\mathbf{h}_l)|^{T(1-s_l)} |s_k \mathbf{R}_y^{-1}(\boldsymbol{\theta}+\mathbf{h}_k) + (1-s_l)\mathbf{R}_y^{-1}(\boldsymbol{\theta}-\mathbf{h}_l) - (s_k-s_l)\mathbf{R}_y^{-1}(\boldsymbol{\theta})|^T}, \\
+\dot{\eta}_{\boldsymbol{\theta}}(1-s_k, s_l, -\mathbf{h}_k, \mathbf{h}_l) &= \frac{|\mathbf{R}_y(\boldsymbol{\theta})|^{T(s_l-s_k)}}{|\mathbf{R}_y(\boldsymbol{\theta}-\mathbf{h}_k)|^{T(1-s_k)} |\mathbf{R}_y(\boldsymbol{\theta}+\mathbf{h}_l)|^{Ts_l} |(1-s_k)\mathbf{R}_y^{-1}(\boldsymbol{\theta}-\mathbf{h}_k) + s_l \mathbf{R}_y^{-1}(\boldsymbol{\theta}+\mathbf{h}_l) - (s_l-s_k)\mathbf{R}_y^{-1}(\boldsymbol{\theta})|^T}, \\
+\dot{\eta}_{\boldsymbol{\theta}}(s_k, 0, \mathbf{h}_k, \mathbf{0}) &= \frac{|\mathbf{R}_y(\boldsymbol{\theta})|^{T(s_k-1)}}{|\mathbf{R}_y(\boldsymbol{\theta}+\mathbf{h}_k)|^{Ts_k} |s_k \mathbf{R}_y^{-1}(\boldsymbol{\theta}+\mathbf{h}_k) - (s_k-1)\mathbf{R}_y^{-1}(\boldsymbol{\theta})|^T}, \\
+\dot{\eta}_{\boldsymbol{\theta}}(0, s_l, \mathbf{0}, \mathbf{h}_l) &= \frac{|\mathbf{R}_y(\boldsymbol{\theta})|^{T(s_l-1)}}{|\mathbf{R}_y(\boldsymbol{\theta}+\mathbf{h}_l)|^{Ts_l} |s_l \mathbf{R}_y^{-1}(\boldsymbol{\theta}+\mathbf{h}_l) - (s_l-1)\mathbf{R}_y^{-1}(\boldsymbol{\theta})|^T}.
+\end{aligned}
+\right.
+\quad (16)
+$$
+
+The diagonal elements of $\mathbf{G}$ are obtained by letting $k=l$ in the above equations.
+
+### 4.1.2. Conditional observation model $\mathcal{M}_2$
+
Under the conditional model $\mathcal{M}_2$, by using Eqn. (15) with $\mathbf{f}_t(\boldsymbol{\theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t)$ and $\mathbf{R}_{\boldsymbol{y}} = \mathbf{R}_{\boldsymbol{n}}$, one straightforwardly obtains the functions $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ involved in the elements $\{\mathbf{G}\}_{k,l} = \{\mathbf{G}\}_{l,k}$
+
+$$
+\left\{
+\begin{array}{l}
+\ln \dot{\eta}_{\theta}(s_k, s_l, \mathbf{h}_k, \mathbf{h}_l) = s_k (s_k + s_l - 1) \zeta_{\theta}(\mathbf{h}_k, \mathbf{0}) + s_l (s_k + s_l - 1) \zeta_{\theta}(\mathbf{h}_l, \mathbf{0}) - s_k s_l \zeta_{\theta}(\mathbf{h}_k, \mathbf{h}_l), \\
+\\
+\ln \dot{\eta}_{\theta}(1 - s_k, 1 - s_l, -\mathbf{h}_k, -\mathbf{h}_l) = (s_k - 1)(s_k + s_l - 1) \zeta_{\theta}(-\mathbf{h}_k, \mathbf{0}) + (s_l - 1)(s_k + s_l - 1) \zeta_{\theta}(-\mathbf{h}_l, \mathbf{0}) \\
+\qquad - (1 - s_k)(1 - s_l) \zeta_{\theta}(-\mathbf{h}_k, -\mathbf{h}_l), \\
+\\
+\ln \dot{\eta}_{\theta}(s_k, 1 - s_l, \mathbf{h}_k, -\mathbf{h}_l) = s_k (s_k - s_l) \zeta_{\theta}(\mathbf{h}_k, \mathbf{0}) + (1 - s_l)(s_k - s_l) \zeta_{\theta}(-\mathbf{h}_l, \mathbf{0}) + s_k (s_l - 1) \zeta_{\theta}(\mathbf{h}_k, -\mathbf{h}_l), \\
+\\
+\ln \dot{\eta}_{\theta}(1 - s_k, s_l, -\mathbf{h}_k, \mathbf{h}_l) = (s_k - 1)(s_k - s_l) \zeta_{\theta}(-\mathbf{h}_k, \mathbf{0}) + s_l (s_l - s_k) \zeta_{\theta}(\mathbf{h}_l, \mathbf{0}) + (s_k - 1) s_l \zeta_{\theta}(-\mathbf{h}_k, \mathbf{h}_l), \\
+\\
+\ln \dot{\eta}_{\theta}(s_k, 0, \mathbf{h}_k, \mathbf{0}) = s_k (s_k - 1) \zeta_{\theta}(\mathbf{h}_k, \mathbf{0}), \\
+\\
+\ln \dot{\eta}_{\theta}(0, s_l, \mathbf{0}, \mathbf{h}_l) = s_l (s_l - 1) \zeta_{\theta}(\mathbf{h}_l, \mathbf{0}),
+\end{array}
+\right.
+\tag{17}
+$$
+
+where we define
+
+$$
+\zeta_{\theta}(\mu, \rho) = \sum_{t=1}^{T} \| \mathbf{R}_{n}^{-1/2} (\mathbf{A}(\theta + \mu) - \mathbf{A}(\theta + \rho)) \mathbf{s}(t) \|^{2}. \quad (18)
+$$
+
The diagonal elements of $\mathbf{G}$ are obtained by letting $k=l$ in the above equations. Note that, since we work on the matrix $\mathbf{G}$, all the results proposed so far hold whatever the number of test points.
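As a sanity check of Eqn. (18), the function $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ can be evaluated numerically; below is a minimal sketch in which the array geometry, source signals and noise covariance are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Numerical sketch of Eqn. (18). The linear-array geometry r, the source
# signals s and the noise covariance Rn are illustrative assumptions.

def steering_matrix(r, theta, lam=1.0):
    # {A(theta)}_{i,m} = exp(j 2*pi/lam * r_i * theta_m), cf. Eqn. (28).
    return np.exp(1j * 2 * np.pi / lam * np.outer(r, theta))

def zeta(theta, mu, rho, r, s, Rn, lam=1.0):
    # zeta_theta(mu, rho) = sum_t || Rn^{-1/2} (A(theta+mu) - A(theta+rho)) s(t) ||^2
    Rn_inv_sqrt = np.linalg.inv(np.linalg.cholesky(Rn))  # one valid square root
    dA = steering_matrix(r, theta + mu, lam) - steering_matrix(r, theta + rho, lam)
    return np.sum(np.abs(Rn_inv_sqrt @ dA @ s) ** 2)

rng = np.random.default_rng(0)
M, q, T = 6, 2, 10
r = np.sort(rng.uniform(0.0, 3.0, M))                  # hypothetical sensor positions
s = rng.standard_normal((q, T)) + 1j * rng.standard_normal((q, T))
Rn = 0.5 * np.eye(M)
theta = np.array([0.3, -0.2])
mu, rho = np.array([0.05, 0.0]), np.array([0.0, -0.04])

z = zeta(theta, mu, rho, r, s, Rn)
assert z >= 0.0                                        # squared norm
assert np.isclose(z, zeta(theta, rho, mu, r, s, Rn))   # symmetric in (mu, rho)
```

Since $\zeta_{\boldsymbol{\theta}}$ is a squared weighted norm, it is real, non-negative and symmetric in $(\boldsymbol{\mu}, \boldsymbol{\rho})$, which the assertions verify.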
+
## 4.2. Analysis of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ with a uniform prior
+
Of course, the analysis of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ given by Eqn. (11) can only be conducted by specifying the a priori probability density functions of the parameters. Consequently, the results provided here are very specific. Note, however, that this aspect is generally glossed over in the literature: most authors give results without specifying the prior probability density functions and compute the remainder of the bound numerically (see e.g., [22][20][37]).
+
We assume that all the parameters $\theta_i$ have a uniform prior distribution over the interval $[a_i, b_i]$ and are statistically independent. We will also assume one test point per parameter; otherwise, there is no possibility to obtain (pseudo) closed-form expressions. Consequently, the matrix $\mathbf{H}$ is such that
+
+$$
+\mathbf{H} = \mathrm{Diag} ([h_1 h_2 \cdots h_q]), \tag{19}
+$$
+
and the vector $\mathbf{h}_i$, $i = 1, \dots, q$, takes the value $h_i$ at the $i$th row and zero elsewhere. So, in this analysis,
the vector $\mathbf{u}$ takes the value $u_i$ at the $i$th row and zero elsewhere and the vector $\mathbf{v}$ takes the value $v_j$ at the
$j$th row and zero elsewhere (of course, we can have $i = j$). Under these assumptions, $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ can be
rewritten³ for $i \neq j$
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \int_{\Theta} \dot{\eta}_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \frac{p^{\alpha}(\theta_i + u_i) p^{\beta}(\theta_j + v_j) p^{\beta}(\theta_i) p^{\alpha}(\theta_j)}{p^{\alpha+\beta-1}(\theta_i) p^{\alpha+\beta-1}(\theta_j)} \prod_{\substack{k=1 \\ k \neq i, k \neq j}}^{q} p(\theta_k) d\theta \\
+&= \frac{1}{\prod_{k=1}^{q} (b_k - a_k)} \int_{\Theta^{q-2}} \int_{\Theta_j} \int_{\Theta_i} \dot{\eta}_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) d\theta_i d\theta_j d(\theta / \{\theta_i, \theta_j\}), \tag{20}
+\end{align}
+$$
+
+where $\Theta_i = \begin{cases} [a_i, b_i - u_i] & \text{if } u_i > 0, \\ [a_i - u_i, b_i] & \text{if } u_i < 0, \end{cases}$ and $\Theta_j = \begin{cases} [a_j, b_j - v_j] & \text{if } v_j > 0, \\ [a_j - v_j, b_j] & \text{if } v_j < 0, \end{cases}$. For $i=j$, one can have $\mathbf{v} = \pm \mathbf{u}$,
+then one obtains
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = \pm \mathbf{u}) &= \int_{\Theta} \dot{\eta}_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \frac{p^{\alpha}(\theta_i + u_i) p^{\beta}(\theta_i \pm u_i)}{p^{\alpha+\beta-1}(\theta_i)} \prod_{\substack{k=1 \\ k \neq i}}^{q} p(\theta_k) d\theta \\
+&= \frac{1}{\prod_{k=1}^{q} (b_k - a_k)} \int_{\Theta^{q-1}} \int_{\Theta_i} \dot{\eta}_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v} = \pm \mathbf{u}) d\theta_i d(\theta / \{\theta_i\}). \tag{21}
+\end{align}
+$$
+
+In the last equation, if $\mathbf{v} = -\mathbf{u}$, then $\Theta_i = \begin{cases} [a_i + u_i, b_i - u_i] & \text{if } u_i > 0, \\ [a_i - u_i, b_i + u_i] & \text{if } u_i < 0, \end{cases}$ , while, if $\mathbf{v} = \mathbf{u}$, then
+$\Theta_i = \begin{cases} [a_i, b_i - u_i] & \text{if } u_i > 0, \\ [a_i - u_i, b_i] & \text{if } u_i < 0, \end{cases}$.
+
Depending on the structure of $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$, $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ either has to be computed numerically or admits a closed-form expression.
+
Another particular case which sometimes appears is when the function $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ does not depend
on $\boldsymbol{\theta}$ (see [23][5][8][18][20][21][27][29] and Section 5 of this paper). In this case, $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ is denoted
+
³In this case, one has to pay particular attention to the integration domain, as mentioned in Section 3.2. This will not be
the case for the Gaussian prior since the support is ℝ.
+
+$\dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ and one obtains from Eqn. (20)
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \frac{\dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v})}{\prod_{k=1}^{q} (b_k - a_k)} \left( \prod_{\substack{k=1 \\ k \neq i, k \neq j}}^{q} \int_{a_k}^{b_k} d\theta_k \right) \int_{\Theta_i} d\theta_i \int_{\Theta_j} d\theta_j \nonumber \\
+&= \frac{(b_i - a_i - |u_i|)(b_j - a_j - |v_j|)}{(b_i - a_i)(b_j - a_j)} \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}), \tag{22}
+\end{align}
+$$
+
+and from Eqn. (21)
+
+$$
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = \mathbf{u}) = \frac{(b_i - a_i - |u_i|)}{(b_i - a_i)} \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}), \quad (23)
+$$
+
+and
+
+$$
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = -\mathbf{u}) = \frac{(b_i - a_i - 2|u_i|)}{(b_i - a_i)} \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}). \quad (24)
+$$
+
+### 4.3. Analysis of $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ with a Gaussian prior
+
Finally, one can mention that if the prior is now assumed to be Gaussian, i.e., $\theta_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ $\forall i$, and $\dot{\eta}_{\boldsymbol{\theta}}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ does not depend on $\boldsymbol{\theta}$, one obtains after a straightforward calculation
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v}) &= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \int_{\mathbb{R}} \frac{p^{\alpha}(\theta_i + u_i)}{p^{\alpha-1}(\theta_i)} d\theta_i \int_{\mathbb{R}} \frac{p^{\beta}(\theta_j + v_j)}{p^{\beta-1}(\theta_j)} d\theta_j \\
+&= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \exp \left( -\frac{1}{2} \left( \frac{\alpha(1-\alpha)u_i^2}{\sigma_i^2} + \frac{\beta(1-\beta)v_j^2}{\sigma_j^2} \right) \right), \tag{25}
+\end{align}
+$$
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = \mathbf{u}) &= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \int_{\mathbb{R}} \frac{p^{\alpha+\beta}(\theta_i + u_i)}{p^{\alpha+\beta-1}(\theta_i)} d\theta_i \\
+&= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \exp\left(-\frac{(\alpha+\beta)(1-\alpha-\beta)u_i^2}{2\sigma_i^2}\right), \tag{26}
+\end{align}
+$$
+
+and
+
+$$
+\begin{align}
+\eta(\alpha, \beta, \mathbf{u}, \mathbf{v} = -\mathbf{u}) &= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \int_{\mathbb{R}} \frac{p^{\alpha}(\theta_i + u_i) p^{\beta}(\theta_i - u_i)}{p^{\alpha+\beta-1}(\theta_i)} d\theta_i \\
+&= \dot{\eta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) \exp\left(-\frac{(\alpha + \beta - \alpha^2 - \beta^2 + 2\alpha\beta) u_i^2}{2\sigma_i^2}\right). \tag{27}
+\end{align}
+$$
+
+## 5. Specific applications to array processing: DOA estimation
+
We now consider the application of the Weiss-Weinstein bound in the particular context of source localization. Indeed, until now, the structure of the steering matrix $\mathbf{A}(\boldsymbol{\theta})$ for a particular problem has not been used in the proposed (semi) closed-form expressions. Consequently, these previous results can be applied to a large class of estimation problems such as far-field and near-field source localization, passive localization with polarized arrays of sensors, or radar with known waveforms.
+
Here, we want to focus on the direction-of-arrival estimation of a single source in the far-field area with a narrow-band signal. In this case, the steering matrix $\mathbf{A}(\boldsymbol{\theta})$ becomes a steering vector denoted $\mathbf{a}(\boldsymbol{\theta})$ (except for one preliminary result concerning the conditional model, which will be given for any number of sources in Section 5.1.2). The structure of this vector will be specified by the analysis of two kinds of array geometry: the non-uniform linear array, from which only one angle-of-arrival can be estimated ($\boldsymbol{\theta}$ becomes a scalar), and the arbitrary planar array, from which both azimuth and elevation can be estimated ($\boldsymbol{\theta}$ becomes a $2 \times 1$ vector). In any case, the array always consists of $M$ identical, omnidirectional sensors. Both models $\mathcal{M}_1$ and $\mathcal{M}_2$ will be considered, and the noise will be assumed spatially uncorrelated: $\mathbf{R}_n = \sigma_n^2 \mathbf{I}$. Since we focus on the single-source scenario, the variance of the source signal $s(t)$ is denoted $\sigma_s^2$ for the model $\mathcal{M}_1$.
+
+The general structure of the $i^{th}$ element of the steering vector is as follows
+
+$$ \{\mathbf{a}(\boldsymbol{\theta})\}_i = \exp \left( j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\theta} \right), \quad i = 1, \dots, M \qquad (28) $$
+
where $\boldsymbol{\theta}$ represents the parameter vector, $\lambda$ denotes the wavelength, and $\mathbf{r}_i$ denotes the coordinates of the $i^{th}$ sensor position with respect to a given reference frame. In the following, $\mathbf{r}_i$ will be a scalar or a $2 \times 1$ vector depending on the context (linear array or planar array).
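Eqn. (28) translates directly into code; a minimal sketch with a hypothetical $2 \times 2$ square sub-array (positions in units of $\lambda$ are illustrative):

```python
import numpy as np

# Eqn. (28) in code: steering vector of M omnidirectional sensors at 2D
# positions r_i (planar array of Section 5.2). The positions are illustrative.
def steering_vector(R, theta, lam=0.5):
    # R: (M, 2) sensor coordinates, theta = [u, v]^T direction cosines.
    return np.exp(1j * 2 * np.pi / lam * (R @ theta))

R = np.array([[0.0, 0.0], [0.25, 0.0], [0.0, 0.25], [0.25, 0.25]])
a = steering_vector(R, np.array([0.3, -0.4]))
assert np.allclose(np.abs(a), 1.0)                 # each entry is a pure phase
assert np.isclose(np.vdot(a, a).real, R.shape[0])  # hence ||a||^2 = M
```

Each entry of $\mathbf{a}(\boldsymbol{\theta})$ is a pure phase, so $\|\mathbf{a}(\boldsymbol{\theta})\|^2 = M$; this fact is used repeatedly in the sequel.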
+
+## 5.1. Preliminary results
+
Since our analysis is now reduced to the single-source case, we give here some other closed-form expressions which will be useful when we detail the specific linear and planar arrays.
+
+### 5.1.1. Unconditional observation model $\mathcal{M}_1$
+
+In order to detail the set of functions $\eta_{\theta}$ given by Eqn. (16), one has to find closed-form expressions of the determinant $|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta} + \mathbf{u})|$ and of determinants having the following structure: $|m_1\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2)|$ with $m_1 + m_2 = 1$ or $|m_1\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2) + m_3\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_3)|$ with $m_1 + m_2 + m_3 = 1$. Under $\mathcal{M}_1$, the observation covariance matrix is now given by
+
+$$ \mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}) = \sigma_s^2 \mathbf{a}(\boldsymbol{\theta}) \mathbf{a}^H(\boldsymbol{\theta}) + \sigma_n^2 \mathbf{I}_M. \qquad (29) $$
+
+Concerning the calculation of $|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta} + \mathbf{u})|$, it is easy to find
+
+$$ |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta} + \mathbf{u})| = \sigma_n^{2M} \left( 1 + \frac{\sigma_s^2}{\sigma_n^2} \|\mathbf{a}(\boldsymbol{\theta} + \mathbf{u})\|^2 \right). \qquad (30) $$
+
+Moreover, after calculation detailed in Appendix B.3, one obtains for the other determinants
+
$$ |m_1 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2 \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2)| = \frac{1}{(\sigma_n^2)^M} \left( \begin{aligned}[t] & 1 - m_1 \varphi_1 \| \mathbf{a}(\boldsymbol{\theta}_1) \| ^2 - m_2 \varphi_2 \| \mathbf{a}(\boldsymbol{\theta}_2) \| ^2 \\ & - m_1 \varphi_1 m_2 \varphi_2 (\| \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \| ^2 - \| \mathbf{a}(\boldsymbol{\theta}_1) \| ^2 \| \mathbf{a}(\boldsymbol{\theta}_2) \| ^2) \end{aligned} \right) \qquad (31) $$
+
+and
+
+$$
+\begin{align}
+|m_1 \mathbf{R}_{\mathbf{y}}^{-1}(\theta_1) + m_2 \mathbf{R}_{\mathbf{y}}^{-1}(\theta_2) + m_3 \mathbf{R}_{\mathbf{y}}^{-1}(\theta_3)| = & \nonumber \\
+& \frac{1}{(\sigma_n^2)^M} \left( 1 - \sum_{k=1}^3 m_k \varphi_k \| \mathbf{a}(\theta_k) \|^2 - \frac{1}{2} \sum_{k=1}^3 \sum_{\substack{k'=1 \\ k' \neq k}}^3 m_k \varphi_k m_{k'} \varphi_{k'} \left( \| \mathbf{a}^H(\theta_k) \mathbf{a}(\theta_{k'}) \|^2 - \| \mathbf{a}(\theta_k) \|^2 \| \mathbf{a}(\theta_{k'}) \|^2 \right) \right. \nonumber \\
+& \left. - \left( \prod_{k=1}^3 m_k \varphi_k \right) \left( \prod_{k=1}^3 \| \mathbf{a}(\theta_k) \|^2 - \frac{1}{2} \sum_{k=1}^3 \sum_{\substack{k'=1 \\ k' \neq k}}^3 \sum_{\substack{k''=1 \\ k'' \neq k'}}^3 \| \mathbf{a}^H(\theta_k) \mathbf{a}(\theta_{k''}) \|^2 \| \mathbf{a}(\theta_{k''}) \|^2 \right) \right. \nonumber \\
+& \left. + \mathbf{a}^H(\theta_3) \mathbf{a}(\theta_2) \mathbf{a}^H(\theta_1) \mathbf{a}(\theta_3) \mathbf{a}^H(\theta_2) \mathbf{a}(\theta_1) + \mathbf{a}^H(\theta_3) \mathbf{a}(\theta_1) \mathbf{a}^H(\theta_1) \mathbf{a}(\theta_2) \mathbf{a}^H(\theta_2) \mathbf{a}(\theta_3) \right), \tag{32}
+\end{align}
+$$
+
+where
+
+$$
+\varphi_k = \frac{\sigma_s^2}{\sigma_s^2 \|a(\theta_k)\|^2 + \sigma_n^2}, \quad k = 1, 2, 3. \tag{33}
+$$
+
### 5.1.2. Conditional observation model $\mathcal{M}_2$
+
Note that the results proposed here hold for any number of sources. Under the conditional model, the set of functions $\dot{\eta}_{\boldsymbol{\theta}}$ given by Eqn. (17) is linked to the function $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ given by Eqn. (18). In this analysis, the vector $\boldsymbol{\mu}$ takes the value $\mu_i$ at the $i^{th}$ row and zero elsewhere and the vector $\boldsymbol{\rho}$ takes the value $\rho_j$ at the $j^{th}$ row and zero elsewhere (of course, one can have $i = j$). In Appendix B.4, the calculation of the following closed-form expressions for $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ is detailed.
+
• If $(m-1)p+1 \le i,j \le mp$, where $p$ denotes the number of parameters per source, then we have
+
+$$
+\begin{equation}
+\begin{aligned}
+\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = {}& \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_m \|^{2} \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_{\boldsymbol{n}}^{-1}\}_{i,j} \\
+& \times \left( \exp\left(-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\mu}_{m}\right) - \exp\left(-j\frac{2\pi}{\lambda}\mathbf{r}_{i}^{T}\boldsymbol{\rho}_{m}\right) \right) \\
+& \times \left( \exp\left(j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\mu}_{m}\right) - \exp\left(j\frac{2\pi}{\lambda}\mathbf{r}_{j}^{T}\boldsymbol{\rho}_{m}\right) \right)
+\end{aligned}
+\tag{34}
+\end{equation}
+$$
+
• Otherwise, if $(m-1)p+1 \le i \le mp$ and $(n-1)p+1 \le j \le np$, then we have
+
+$$
+\begin{align*}
+\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = & -2 \operatorname{Re} \left( \sum_{t=1}^{T} {\{\mathbf{s}(t)\}_m}^* {\{\mathbf{s}(t)\}_n} \right) \\
+& + \sum_{t=1}^{T} \|{\{\mathbf{s}(t)\}_n}\|^{2} \sum_{i=1}^{M} \sum_{j=1}^{M} {\{\mathbf{R}_{\boldsymbol{n}}^{-1}\}_{i,j}} \\
+& + \sum_{t=1}^{T} \|{\{\mathbf{s}(t)\}_n}\|^{2} \sum_{i=1}^{M} \sum_{j=1}^{M} {\{\mathbf{R}_{\boldsymbol{n}}^{-1}\}_{i,j}} \\
+& + 2 \operatorname{Re} \left( j \frac{2\pi}{\lambda} (\mathbf{r}_j^T \boldsymbol{\theta}_n - \mathbf{r}_i^T \boldsymbol{\theta}_m) \right) \\
+& + 2 \operatorname{Re} (\mathbf{r}_j^T (\boldsymbol{\mu}_m - \boldsymbol{\rho}_m)) \\
+& + 2 \operatorname{Re} (\mathbf{r}_i^T (\boldsymbol{\mu}_n - \boldsymbol{\rho}_n)) \\
+& + 2 \operatorname{Re} (\boldsymbol{\mu}_m - \boldsymbol{\rho}_m) \\
+& + 2 \operatorname{Re} (\boldsymbol{\mu}_n - \boldsymbol{\rho}_n)
+\end{align*}
+$$
+
+$$
+\times
+\sum_{i=1}^{M}
+\sum_{j=1}^{M}
+\{
+\mathbf{R}_{\mathrm{n}}^{-1}
+\}_{i,j}
+\times
+\exp
+\left(
+j
+\frac{2\pi}{\lambda}
+(
+\mathbf{r}_{j}^{T}
+\boldsymbol{\theta}_{n}
+-
+\mathbf{r}_{i}^{T}
+\boldsymbol{\theta}_{m}
+)
+\right)
+\times
+(-j
+\frac{2\pi}{\lambda}
+\mathbf{r}_{i}^{T}
+\boldsymbol{\mu}_{m})
+\times
+(j
+\frac{2\pi}{\lambda}
+\mathbf{r}_{j}^{T}
+\boldsymbol{\rho}_{n})
+.
+\quad (35)
+$$
+
+In particular, if one assumes $\mathbf{R}_n = \sigma_n^2 \mathbf{I}$, then, several simplifications can be done:
+
+• If $(m-1)p+1 \le i,j \le mp$, then
+
+$$
+\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) - \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_m\right) \right\|^2 \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_m \|^2, \quad (36)
+$$
+
+where we note that the function $\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho})$ does not depend on the parameter $\theta$.
+
+• Otherwise, if $(m-1)p+1 \le i \le mp$ and $(n-1)p+1 \le j \le np$, then
+
+$$
+\begin{equation}
+\begin{split}
+\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) &= \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) \right\|^2 \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_m \|^2 + \\
+&\qquad + \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_n\right) \right\|^2 \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_n \|^2 \\
+&\quad - 2 \operatorname{Re} \left( \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \exp\left(j \frac{2\pi}{\lambda} \mathbf{r}_i^T (\boldsymbol{\theta}_n - \boldsymbol{\theta}_m)\right) \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) \exp\left(j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_n\right) \sum_{t=1}^{T} \{\mathbf{s}(t)\}_m^* \{\mathbf{s}(t)\}_n \right)
+\end{split}
+\tag{37}
+\end{equation}
+$$
+
It is clear that the formulas proposed above for both the unconditional and the conditional models can be applied to any kind of array geometry and any number of sources. However, they generally depend on the parameter vector $\boldsymbol{\theta}$. This means that, in general, the calculation of the set of functions $\eta$ will have to be performed numerically (except if one is able to find a closed-form expression of Eqn. (11)). In the following, we present a kind of array geometry for which, fortunately, the set of functions $\dot{\eta}_{\boldsymbol{\theta}}$ does not depend on $\boldsymbol{\theta}$, leading to a straightforward calculation of the bound.
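The simplification (36) obtained for $\mathbf{R}_n = \sigma_n^2\mathbf{I}$ can be checked against a direct evaluation of Eqn. (18); a minimal single-source sketch in which the geometry and signals are illustrative assumptions:

```python
import numpy as np

# Check of the simplification (36): with Rn = sigma_n^2 I and both test points
# on the same (single) source, zeta no longer depends on theta. The geometry
# and signals are illustrative assumptions.
rng = np.random.default_rng(2)
M, T, lam, sigma_n2 = 6, 8, 0.5, 0.4
r = rng.uniform(0.0, 2.0, M)
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)  # single-source snapshots

def a(theta):
    return np.exp(1j * 2 * np.pi / lam * r * theta)

theta, mu, rho = 0.3, 0.07, -0.05
# Direct evaluation of Eqn. (18):
zeta_direct = np.sum(np.abs(np.outer(a(theta + mu) - a(theta + rho), s)) ** 2) / sigma_n2
# Closed form (36):
d = np.exp(-1j * 2 * np.pi / lam * r * mu) - np.exp(-1j * 2 * np.pi / lam * r * rho)
zeta_closed = np.sum(np.abs(d) ** 2) * np.sum(np.abs(s) ** 2) / sigma_n2
assert np.isclose(zeta_direct, zeta_closed)
```

Repeating the comparison for any other value of `theta` gives the same result, which illustrates the $\theta$-independence noted above.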
+
## 5.2. 3D source localization with a planar array
+
We first consider the problem of DOA estimation of a single narrow-band source in the far-field area by using an arbitrary planar array. In fact, we start with this general setting because the non-uniform linear array is clearly a particular case of this array. Without loss of generality, we assume that the sensors of this array lie in the $xOy$ plane with Cartesian coordinates (see Fig. .1). Therefore, the vector $\mathbf{r}_i$ contains the coordinates of the $i^{th}$ sensor position with respect to this reference frame, i.e., $\mathbf{r}_i = [d_{x_i} \ d_{y_i}]^T$, $i = 1, \dots, M$. From (28), the steering vector is given by
+
+$$
+\mathbf{a}(\boldsymbol{\theta}) = \left[ \exp\left(j \frac{2\pi}{\lambda} (d_{x_1} u + d_{y_1} v)\right) \dots \exp\left(j \frac{2\pi}{\lambda} (d_{x_M} u + d_{y_M} v)\right) \right]^T, \quad (38)
+$$
+
+where, as in [18], the parameter vector of interest is $\boldsymbol{\theta} = [u \ v]^T$ where
+
+$$
+\begin{cases}
+u = \sin \varphi \cos \phi, \\
+v = \sin \varphi \sin \phi,
+\end{cases}
+\tag{39}
+$$
+
and where $\varphi$ and $\phi$ represent the elevation and azimuth angles of the source, respectively. The parameter space is such that $u \in [-1, 1]$ and $v \in [-1, 1]$. Therefore, we assume that they both follow a uniform distribution over $[-1, 1]$. Note that, from a physical point of view, it would be more tempting to choose a uniform prior for $\varphi$ and $\phi$. This would lead to non-uniform probability density functions for $u$ and $v$. To the best of our knowledge, this assumption has only been used in the context of lower bounds in [20]. Unfortunately, such a prior leads to an intractable expression of the bound (see Eqn. (21) of [20]). Consequently, other authors have generally not specified the prior, leading to semi closed-form expressions of the bounds (i.e., a numerical integration over the parameters remains to be performed) [20][37][22]. On the other hand, in order to obtain a closed-form expression, authors have generally used a simplifying assumption, i.e., a uniform prior directly on $u$ and $v$ (see, for example, [21][38]). In this paper, we follow the same approach, expecting only a slight modification of performance with respect to the more physical model, in order to be able to obtain closed-form expressions of the bound.
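The change of variables (39) can be illustrated as follows; the angles are arbitrary, and since $u^2 + v^2 = \sin^2\varphi \le 1$, the pair $(u, v)$ always lies in the unit disk, a subset of the square $[-1,1] \times [-1,1]$ used as prior support:

```python
import numpy as np

# Eqn. (39): direction cosines from elevation (varphi) and azimuth (phi).
# The angles below are arbitrary illustrative values.
varphi, phi = np.deg2rad(40.0), np.deg2rad(110.0)
u = np.sin(varphi) * np.cos(phi)
v = np.sin(varphi) * np.sin(phi)
assert np.isclose(u ** 2 + v ** 2, np.sin(varphi) ** 2)
assert u ** 2 + v ** 2 <= 1.0   # (u, v) lies in the unit disk
```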
+
+We choose the matrix of test points such that
+
+$$ \mathbf{H} = [\mathbf{h}_u \quad \mathbf{h}_v] = \begin{bmatrix} h_u & 0 \\ 0 & h_v \end{bmatrix}. \qquad (40) $$
+
Then, we have $\boldsymbol{\theta} + \mathbf{h}_u = [u + h_u \ v]^T$ and $\boldsymbol{\theta} + \mathbf{h}_v = [u \ v + h_v]^T$. Moreover, we now have two elements $s_i \in [0, 1]$, $i = 1, 2$, for which we will prefer the notations $s_u$ and $s_v$, respectively.
+
+### 5.2.1. Unconditional observation model $\mathcal{M}_1$
+
Under $\mathcal{M}_1$, let us set $U_{SNR} = \frac{\sigma_s^4}{\sigma_n^2(M\sigma_s^2+\sigma_n^2)}$. The closed-form expressions of the elements of the matrix $\mathbf{G} = \begin{bmatrix} \{\mathbf{G}\}_{uu} & \{\mathbf{G}\}_{uv} \\ \{\mathbf{G}\}_{vu} & \{\mathbf{G}\}_{vv} \end{bmatrix}$ are given by (see Appendix B.5 for the proof):
+
$$ \{\mathbf{G}\}_{uu} = \frac{\begin{array}{l} \left(1 - \frac{|h_u|}{2}\right) \left(1 + 2s_u(1 - 2s_u)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right\|^2\right)\right)^{-T} \\ + \left(1 - \frac{|h_u|}{2}\right) \left(1 + 2(1-s_u)(2s_u-1)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right\|^2\right)\right)^{-T} \\ - 2\left(1 - |h_u|\right) \left(1 + s_u(1-s_u)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{4\pi}{\lambda}d_{x_k}h_u\right)\right\|^2\right)\right)^{-T} \end{array}}{\left(1 - \frac{|h_u|}{2}\right)^2 \left(1 + s_u(1-s_u)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right\|^2\right)\right)^{-2T}}, \quad (41) $$
+
$$ \{\mathbf{G}\}_{vv} = \frac{\begin{array}{l} \left(1 - \frac{|h_v|}{2}\right) \left(1 + 2s_v(1 - 2s_v)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right\|^2\right)\right)^{-T} \\ + \left(1 - \frac{|h_v|}{2}\right) \left(1 + 2(1-s_v)(2s_v-1)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right\|^2\right)\right)^{-T} \\ - 2\left(1 - |h_v|\right) \left(1 + s_v(1-s_v)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{4\pi}{\lambda}d_{y_k}h_v\right)\right\|^2\right)\right)^{-T} \end{array}}{\left(1 - \frac{|h_v|}{2}\right)^2 \left(1 + s_v(1-s_v)U_{\text{SNR}} \left(M^2 - \left\|\sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right\|^2\right)\right)^{-2T}}, \quad (42) $$
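These closed-form expressions can be cross-checked numerically: the factor $\left(1 + s(1-s)U_{SNR}\left(M^2 - \left\|\sum_k \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right\|^2\right)\right)^{-T}$ is nothing but $\dot{\eta}_{\theta}(s, 0, \mathbf{h}_u, \mathbf{0})$ computed from the determinant form of Eqn. (16). A minimal sketch, in which the planar-array positions and variances are illustrative assumptions:

```python
import numpy as np

# Cross-check, under illustrative parameters, of the single-source factor
# (1 + s(1-s) U_SNR (M^2 - |sum_k exp(-j 2 pi/lam d_xk h_u)|^2))^{-T},
# i.e. eta(s, 0, h_u, 0) of Eqn. (16), against its determinant form.
rng = np.random.default_rng(3)
M, T, lam = 5, 4, 0.5
sigma_s2, sigma_n2 = 1.5, 0.6
D = rng.uniform(0.0, 1.0, (M, 2))        # hypothetical planar positions [d_x, d_y]

def a(theta):
    return np.exp(1j * 2 * np.pi / lam * (D @ theta))

def Ry(theta):
    v = a(theta)
    return sigma_s2 * np.outer(v, v.conj()) + sigma_n2 * np.eye(M)

theta = np.array([0.2, -0.1])
h_u, s = 0.15, 0.3
th_u = theta + np.array([h_u, 0.0])

# Determinant form of eta(s, 0, h_u, 0):
eta_det = (np.linalg.det(Ry(theta)).real ** (T * (s - 1))
           / np.linalg.det(Ry(th_u)).real ** (T * s)
           / np.linalg.det(s * np.linalg.inv(Ry(th_u))
                           - (s - 1) * np.linalg.inv(Ry(theta))).real ** T)

# Closed form used in Eqns. (41)-(43):
U = sigma_s2 ** 2 / (sigma_n2 * (M * sigma_s2 + sigma_n2))
g = np.sum(np.exp(-1j * 2 * np.pi / lam * D[:, 0] * h_u))
eta_closed = (1 + s * (1 - s) * U * (M ** 2 - abs(g) ** 2)) ** (-T)
assert np.isclose(eta_det, eta_closed)
```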
+
+$$
+\begin{equation}
+\left\{
+\begin{aligned}
+& \left(
+ \begin{pmatrix}
+ s_u s_v \left( \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v)) \right\|^2 - M^2 \right) \\
+ +s_u(1-s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_u) \right\|^2 - M^2 \right) \\
+ +s_v(1-s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{y_k} h_v) \right\|^2 - M^2 \right)
+ \end{pmatrix}
+ \right)^{-T} \\
+& \times \left(
+ \begin{pmatrix}
+ -s_u s_v (1-s_u-s_v) \frac{U_{SNR}^2 R^{\sigma_n^2}}{\sigma_n^2} \\
+ \left( \sum_{k=1}^{M} \exp(j \frac{2\pi d_{y_k} h_v}{\lambda}) \sum_{k=1}^{M} \exp(-j \frac{2\pi d_{x_k} h_u}{\lambda}) \sum_{k=1}^{M} \exp(j \frac{2\pi (d_{x_k} h_u - d_{y_k} h_v)}{\lambda}) \right) \\
+ +\sum_{k=1}^{M} \exp(-j \frac{2\pi d_{y_k} h_v}{\lambda}) \sum_{k=1}^{M} \exp(j \frac{2\pi d_{x_k} h_u}{\lambda}) \sum_{k=1}^{M} \exp(-j \frac{2\pi (d_{x_k} h_u - d_{y_k} h_v)}{\lambda}) \\
+ -M \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{y_k} h_v) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_u) \right\|^2 \\
+ -M \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v)) \right\|^2 + M^3
+ \end{pmatrix}
+ \\
+& + \left(
+ \begin{pmatrix}
+ (1-s_u)(1-s_v) \left( \left\| \sum_{k=1}^{M} \exp(j \frac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v)) \right\|^2 - M^2 \right) \\
+ +(1-s_u)(s_u+s_v-1) \left( \left\| \sum_{k=1}^{M} \exp(j \frac{2\pi}{\lambda} d_{x_k} h_u) \right\|^2 - M^2 \right) \\
+ +(1-s_v)(s_u+s_v-1) \left( \left\| \sum_{k=1}^{M} \exp(j \frac{2\pi}{\lambda} d_{y_k} h_v) \right\|^2 - M^2 \right)
+ \end{pmatrix}
+ \right)^{-T} \\
+& + (1-s_u)(1-s_v)(s_u+s_v-1) \frac{U_{SNR}^2 R^{\sigma_n^2}}{\sigma_n^2} \\
+& + (-1-s_u)(1-s_v)(s_u+s_v-1) (1-U_{SNR}) \\
+& + (1-U_{SNR}) \left(
+ \begin{pmatrix}
+ s_u(1-s_v) \left( \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} (d_{x_k} h_u + d_{y_k} h_v)) \right\|^2 - M^2 \right) \\
+ +s_u(s_v-s_u) \left( \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_u) \right\|^2 - M^2 \right) \\
+ +(1-s_v)(s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp(j \frac{2\pi}{\lambda} d_{y_k} h_v) \right\|^2 - M^2 \right)
+ \end{pmatrix}
+ \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR})
+ \end{pmatrix}
+ \\
+& - (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s_u) (1-U_{SNR}) \\
+& + (-s_u)(1-s_v)(s_v-s-u)
+ \frac{U_{SNR}^2 R^{\sigma_n^2}}{\sigma_n^2}
+ \\
+& - (-s-u(1-u_s)) U_{SNR}
+ (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_y_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j\frac{2\pi}{\lambda}\frac{d_x_h v}{h_c}))\\
+& - (\sum_{k=1}^{M}\exp(-j)\frac{\sigma_n^2 R^{\sigma_n^2}}{\sigma_n^4}\\
+&
+ \left(
+ 0
+ \right)^{-T},
+ \\[6ex]
+{\mathbf{\Gamma}}uv =
+ &
+ \left(
+ 0
+ \right)^{-T}
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ )
+ (
+ 0
+ +
+ U_S U_N S R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_R U_T S_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S R_S 
R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-R-S-RRSRSSSRRSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r 
r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_r_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_z_zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzppp_pppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprprpr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr pr p 
pp_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_p_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_o_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_x_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_y_yy__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y__y___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o___o_____
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$$
+
+$${G}}uv = $$
+---PAGE_BREAK---
+
+and, of course, ${\mathbf{G}}_{uv} = {\mathbf{G}}_{vu}$. Consequently, the unconditional Weiss-Weinstein bound is a 2 × 2 matrix given by:
+
+$$
+\begin{align}
+\mathbf{UWWB} &= \mathbf{HG}^{-1}\mathbf{H}^T \nonumber \\
+&= \frac{1}{\{\mathbf{G}\}_{uu}\{\mathbf{G}\}_{vv} - \{\mathbf{G}\}_{uv}^2} \begin{bmatrix}
+h_u^2 \{\mathbf{G}\}_{vv} & -h_u h_v \{\mathbf{G}\}_{uv} \\
+-h_u h_v \{\mathbf{G}\}_{uv} & h_v^2 \{\mathbf{G}\}_{uu}
+\end{bmatrix}, \tag{44}
+\end{align}
+$$
+
+which has to be optimized over $s_u$, $s_v$, $h_u$, and $h_v$. Concerning the optimization over $s_u$ and $s_v$, several other works in the literature have suggested simply using $s_u = s_v = 1/2$. Most of the time, numerical simulations of this simplified bound and of the bound obtained after optimization over $s_u$ and $s_v$ lead to the same results, although there is no formal proof of this fact (see [5], page 41, footnote 17). Note that, thanks to the expressions obtained in the next Section for the linear array, we will be able to prove that $s = 1/2$ is a (possibly non-unique) correct choice for any linear array. For the planar array treated in this Section, we only check this property by simulation.
+
+In the particular case where $s_u = s_v = 1/2$, one obtains the following simplified expressions
+
+$$
+\begin{align}
+\{\mathbf{G}\}_{uu} &= \frac{2\left(1-\frac{|h_u|}{2}\right) - 2(1-|h_u|)\left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{4\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-T}}{\left(1-\frac{|h_u|}{2}\right)^2 \left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{2\pi}{\lambda}d_{x_k}h_u)\right\|^2\right)\right)^{-2T}}, \tag{45} \\
+\{\mathbf{G}\}_{vv} &= \frac{2\left(1-\frac{|h_v|}{2}\right) - 2(1-|h_v|)\left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{4\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-T}}{\left(1-\frac{|h_v|}{2}\right)^2 \left(1+\frac{U_{SNR}}{4}\left(M^2 - \left\|\sum_{k=1}^M \exp(-j\frac{2\pi}{\lambda}d_{y_k}h_v)\right\|^2\right)\right)^{-2T}}, \tag{46}
+\end{align}
+$$
+
+and
+
+$$
+\begin{equation}
+\{\mathbf{G}\}_{uv} = \frac{\begin{aligned}[t] & 2 \left(1 + \frac{U_{SNR}}{4} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v)\right) \right\|^2\right)\right)^{-T} \\ & - 2 \left(1 + \frac{U_{SNR}}{4} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} (d_{x_k} h_u + d_{y_k} h_v)\right) \right\|^2\right)\right)^{-T} \end{aligned}}{\left(1 + \frac{U_{SNR}}{4} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} d_{x_k} h_u\right) \right\|^2\right)\right)^{-T} \left(1 + \frac{U_{SNR}}{4} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} d_{y_k} h_v\right) \right\|^2\right)\right)^{-T}}.
+\tag{47}
+\end{equation}
+$$
+
+Again, the Weiss-Weinstein bound is obtained by using the above expressions in Eqn. (44) and optimizing over the test points. This optimization can be carried out over a search grid, or by using the ambiguity diagram of the array in order to significantly reduce the computational cost (see [14], [22], [30], [39]).
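To make the grid-search option concrete, the following sketch (illustrative only, with artificial placeholder functions standing in for the closed-form elements of $\mathbf{G}$) assembles the 2 × 2 matrix of Eqn. (44) at each test point and retains the one with the largest trace, one common scalarization of the matrix maximization:

```python
import numpy as np

def wwb_grid_search(Guu, Gvv, Guv, h_grid):
    """Assemble the 2 x 2 matrix of Eqn. (44) for every test point (h_u, h_v)
    on a grid and keep the matrix with the largest trace.  Guu, Gvv, Guv are
    callables returning the scalar elements of G for a given test point."""
    best, best_trace = None, -np.inf
    for hu in h_grid:
        for hv in h_grid:
            g_uu, g_vv, g_uv = Guu(hu, hv), Gvv(hu, hv), Guv(hu, hv)
            det = g_uu * g_vv - g_uv ** 2
            if det <= 0.0:                   # skip degenerate test points
                continue
            # Eqn. (44): WWB = H G^{-1} H^T with H = diag(h_u, h_v)
            wwb = np.array([[hu ** 2 * g_vv, -hu * hv * g_uv],
                            [-hu * hv * g_uv, hv ** 2 * g_uu]]) / det
            if np.trace(wwb) > best_trace:
                best, best_trace = wwb, np.trace(wwb)
    return best

# toy run with artificial (non-physical) G elements, for illustration only
bound = wwb_grid_search(lambda hu, hv: 2.0 + hu ** 2,
                        lambda hu, hv: 2.0 + hv ** 2,
                        lambda hu, hv: 0.1 * hu * hv,
                        np.linspace(-0.9, 0.9, 19))
print(bound)
```

In practice the placeholder callables would be replaced by the closed-form elements derived above, and the grid restricted with the help of the ambiguity diagram.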
+---PAGE_BREAK---
+
+5.2.2. Conditional observation model $\mathcal{M}_2$
+
+Under $\mathcal{M}_2$, let us set $C_{SNR} = \frac{1}{\sigma_n^2} \sum_{t=1}^{T} \|s(t)\|^2$. The closed-form expressions of the elements of matrix **G** are given by (see Appendix .6 for the proof):
+
+$$
+\begin{align}
+\{\mathbf{G}\}_{uu} &= \frac{\left( \begin{aligned}[c]
+ &\left(1 - \frac{|h_u|}{2}\right) \exp\left(4s_u(2s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right) \\
+ + &\left(1 - \frac{|h_u|}{2}\right) \exp\left(4(2s_u - 1)(s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right) \\
+ - &2(1 - |h_u|) \exp\left(2s_u(s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_u\right)\right)\right)
+\end{aligned} \right)}{\left(1 - \frac{|h_u|}{2}\right)^2 \exp\left(4s_u(s_u - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right)}, \tag{48}
+\end{align}
+$$
+
+$$
+\begin{equation}
+\begin{split}
+\{\mathbf{G}\}_{vv} = {}& \frac{\left( \begin{aligned}[t]
+ &\left(1 - \frac{|h_v|}{2}\right) \exp\left(4s_v(2s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right) \\
+ + &\left(1 - \frac{|h_v|}{2}\right) \exp\left(4(2s_v - 1)(s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right) \\
+ - &2(1 - |h_v|) \exp\left(2s_v(s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{y_k}h_v\right)\right)\right)
+\end{aligned} \right)}{\left(1 - \frac{|h_v|}{2}\right)^2 \exp\left(4s_v(s_v - 1)C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)},
+\end{split}
+\tag{49}
+\end{equation}
+$$
+
+$$
+\begin{equation}
+\begin{split}
+\{\mathbf{G}\}_{uv} = {}& \Bigg[ \exp\left( \begin{aligned}[t] & 2s_u(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right) \\ & + 2s_v(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right) \\ & - 2s_u s_v C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u-d_{y_k}h_v)\right)\right) \end{aligned} \right) \\
+& + \exp\left( \begin{aligned}[t] & 2(s_u-1)(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right) \\ & + 2(s_v-1)(s_u+s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right) \\ & - 2(1-s_u)(1-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u-d_{y_k}h_v)\right)\right) \end{aligned} \right) \\
+& - \exp\left( \begin{aligned}[t] & 2s_u(s_u-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right) \\ & + 2(1-s_v)(s_u-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right) \\ & + 2s_u(s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u+d_{y_k}h_v)\right)\right) \end{aligned} \right) \\
+& - \exp\left( \begin{aligned}[t] & 2(s_u-1)(s_u-s_v)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right) \\ & + 2s_v(s_v-s_u)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right) \\ & + 2(s_u-1)s_v C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u+d_{y_k}h_v)\right)\right) \end{aligned} \right) \Bigg] \\
+& \times \exp\left(-2s_u(s_u-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right) \exp\left(-2s_v(s_v-1)C_{SNR}\left(M-\sum_{k=1}^{M}\cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right),
+\end{split}
+\tag{50}
+\end{equation}
+$$
+
+and $\{\mathbf{G}\}_{uv} = \{\mathbf{G}\}_{vu}$. Consequently, the conditional Weiss-Weinstein bound is a 2 × 2 matrix obtained by using the above equations in Eqn. (44). As in the unconditional case, setting $s_u = s_v = 1/2$ yields the following simplified expressions
+---PAGE_BREAK---
+
+$$
+\begin{align}
+\{\mathbf{G}\}_{uu} &= \frac{2\left(1 - \frac{|h_u|}{2}\right) - 2(1 - |h_u|)\exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_u\right)\right)\right)}{\left(1 - \frac{|h_u|}{2}\right)^2 \exp\left(-C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right)\right)}, \tag{51} \\
+\{\mathbf{G}\}_{vv} &= \frac{2\left(1 - \frac{|h_v|}{2}\right) - 2(1 - |h_v|)\exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{y_k}h_v\right)\right)\right)}{\left(1 - \frac{|h_v|}{2}\right)^2 \exp\left(-C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)}, \tag{52} \\
+\{\mathbf{G}\}_{uv} &= \frac{2 \exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right)\right)\right) - 2 \exp\left(-\frac{C_{SNR}}{2}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right)\right)\right)}{\exp\left(-\frac{C_{SNR}}{2}\left(2M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right) - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right)\right)}. \tag{53}
+\end{align}
+$$
+
+By using the above expressions in Eqn. (44) and after an optimization over the test points, one obtains
+the Weiss-Weinstein bound.
+
+5.3. Source localization with a non-uniform linear array
+
+We now briefly consider the DOA estimation of a single narrowband source in the far field using a non-uniform linear array. Without loss of generality, let us assume that the linear array lies along the Ox axis of the coordinate system (see Fig. .1); consequently, $d_{y_i} = 0, \forall i$. The sensor position vector is denoted $[d_{x_1} \dots d_{x_M}]$. By letting $\theta = \sin \varphi$, where $\varphi$ denotes the elevation angle of the source, the steering vector is given by
+
+$$
+\mathbf{a}(\theta) = \left[ \exp \left( j \frac{2\pi}{\lambda} d_{x_1} \theta \right) \dots \exp \left( j \frac{2\pi}{\lambda} d_{x_M} \theta \right) \right]^T . \quad (54)
+$$
+
+We assume that the parameter $\theta$ follows a uniform distribution over [-1, 1]. As in Section 4.2 and since
+the parameter of interest is a scalar, matrix **H** of the test points becomes a scalar denoted $h_\theta$. In the
+same way, there is only one element $s_i \in [0, 1]$ which will be simply denoted *s*. The closed-form expressions
+given here follow straightforwardly from the planar-array results above for the element denoted $\{\mathbf{G}\}_{uu}$. We will continue to use the previously introduced notations $U_{SNR} = \frac{\sigma_s^4}{\sigma_n^2 (M\sigma_s^2 + \sigma_n^2)}$
+and $C_{SNR} = \frac{1}{\sigma_n^2} \sum_{t=1}^T \|s(t)\|^2$.
+---PAGE_BREAK---
+
+### 5.3.1. Unconditional observation model $\mathcal{M}_1$
+
+The closed-form expression of the unconditional Weiss-Weinstein bound, denoted UWWB, is given by
+
+$$ \text{UWWB} = \frac{h_{\theta}^{2} \left(1 - \frac{|h_{\theta}|}{2}\right)^{2} \left(1 + s(1-s)U_{SNR} \left(M^{2} - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right) \right\|^2\right)\right)^{-2T}}{\begin{aligned}[t] & \left(1 - \frac{|h_{\theta}|}{2}\right) \left( \left(1 + 2s(1-2s)U_{SNR} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right) \right\|^2\right)\right)^{-T} + \left(1 + 2(1-s)(2s-1)U_{SNR} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right) \right\|^2\right)\right)^{-T} \right) \\ & - 2(1-|h_{\theta}|) \left(1 + s(1-s)U_{SNR} \left(M^2 - \left\| \sum_{k=1}^{M} \exp\left(-j \frac{4\pi}{\lambda} d_{x_k} h_{\theta}\right) \right\|^2\right)\right)^{-T} \end{aligned}} \tag{55} $$
+
+In order to find an optimal value of $s$ that maximizes $\mathbf{HG}^{-1}\mathbf{H}^T$ for all $h_\theta$, we have considered the derivative of $\mathbf{HG}^{-1}\mathbf{H}^T$ w.r.t. $s$. The calculation (not reported here) is straightforward, and it is easy to see that $\left.\frac{\partial \mathbf{HG}^{-1}\mathbf{H}^T}{\partial s}\right|_{s=\frac{1}{2}} = 0$. Consequently, the Weiss-Weinstein bound only has to be optimized over $h_\theta$ and simplifies to
+
+$$ UWWB = \sup_{h_{\theta}} \frac{h_{\theta}^{2} \left(1 - \frac{|h_{\theta}|}{2}\right)^{2} \left(1 + \frac{U_{SNR}}{4} \left(M^{2} - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-2T}}{2 \left(1 - \frac{|h_{\theta}|}{2}\right) - 2(1-|h_{\theta}|) \left(1 + \frac{U_{SNR}}{4} \left(M^{2} - \left\| \sum_{k=1}^{M} \exp(-j \frac{4\pi}{\lambda} d_{x_k} h_{\theta}) \right\|^2\right)\right)^{-T}} . \tag{56} $$
+
+In the classical case of a uniform linear array (i.e., $d_{x_k} = d$), this expression can be still simplified by
+noticing that $\sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_{\theta}) = M \exp(-j \frac{2\pi d}{\lambda} h_{\theta})$.
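To make Eqn. (56) concrete, the sketch below evaluates the unconditional bound for an assumed half-wavelength uniform linear array and illustrative values of $T$ and $U_{SNR}$ (these numbers are ours, not the paper's simulation setup), taking the supremum over a grid of test points $h_\theta$:

```python
import numpy as np

def uwwb_linear_array(dx_over_lambda, T, U_snr, h_grid):
    """Unconditional bound of Eqn. (56): supremum over the test point
    h_theta, for sensor positions given in wavelengths (non-uniform
    linear arrays are allowed)."""
    d = np.asarray(dx_over_lambda, dtype=float)
    M = d.size
    best = 0.0
    for h in h_grid:
        if abs(h) < 1e-9:          # h = 0 is the degenerate 0/0 test point
            continue
        # |sum_k exp(-j 2pi d_k h)|^2 for the 2pi/lambda and 4pi/lambda terms
        s2 = abs(np.exp(-2j * np.pi * d * h).sum()) ** 2
        s4 = abs(np.exp(-4j * np.pi * d * h).sum()) ** 2
        num = h ** 2 * (1 - abs(h) / 2) ** 2 * (1 + U_snr / 4 * (M ** 2 - s2)) ** (-2 * T)
        den = 2 * (1 - abs(h) / 2) - 2 * (1 - abs(h)) * (1 + U_snr / 4 * (M ** 2 - s4)) ** (-T)
        best = max(best, num / den)
    return best

# assumed setup: M = 10 half-wavelength ULA, T = 10 snapshots
d = 0.5 * np.arange(10)
h_grid = np.linspace(-1.95, 1.95, 391)
print(uwwb_linear_array(d, T=10, U_snr=0.1, h_grid=h_grid))
```

As expected, increasing $U_{SNR}$ lowers the bound, since every term $(1 + \cdot)^{-T}$ in Eqn. (56) decreases with $U_{SNR}$.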
+
+### 5.3.2. Conditional observation model $\mathcal{M}_2$
+
+The closed-form expression of the conditional Weiss-Weinstein bound CWWB is given by
+
+$$ CWWB = \frac{h_{\theta}^{2} \left(1 - \frac{|h_{\theta}|}{2}\right)^{2} \exp \left(4s(s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right)}{\begin{aligned}[t] & \left(1 - \frac{|h_{\theta}|}{2}\right) \left( \exp \left(4s(2s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right) + \exp \left(4(2s-1)(s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{2\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right) \right) \\ & - 2(1-|h_{\theta}|) \exp \left(2s(s-1)C_{SNR} \left(M - \sum_{k=1}^{M} \cos \left(\frac{4\pi}{\lambda} d_{x_k} h_{\theta}\right)\right)\right) \end{aligned}} . \tag{57} $$
+
+Again, it is easy to check that $\left.\frac{\partial \mathbf{HG}^{-1}\mathbf{H}^T}{\partial s}\right|_{s=\frac{1}{2}} = 0$. Consequently, one optimal value of $s$ that maximizes $\mathbf{HG}^{-1}\mathbf{H}^T$, $\forall h_\theta$ is $s = \frac{1}{2}$. The Weiss-Weinstein bound is then simplified as follows
+
+$$ CWWB = \sup_{h_\theta} \frac{h_\theta^2 \left(1 - \frac{|h_\theta|}{2}\right)^2 \exp\left(-C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_\theta\right)\right)\right)}{2\left(1 - \frac{|h_\theta|}{2}\right) - 2(1-|h_\theta|)\exp\left(-\frac{1}{2}C_{SNR}\left(M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_\theta\right)\right)\right)}. \tag{58} $$
+---PAGE_BREAK---
+
+In the classical case of a uniform linear array (i.e., $d_{x_k} = d$), this expression can be still simplified by
+noticing that $\sum_{k=1}^{M} \cos(\frac{2\pi}{\lambda}d_{x_k}h_{\theta}) = M \cos(\frac{2\pi d}{\lambda}h_{\theta})$.
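The conditional bound of Eqn. (58) can be evaluated in the same way; the sketch below again assumes a half-wavelength uniform linear array and an illustrative $C_{SNR}$ value of our own choosing:

```python
import numpy as np

def cwwb_linear_array(dx_over_lambda, C_snr, h_grid):
    """Conditional bound of Eqn. (58): supremum over the test point
    h_theta, for sensor positions given in wavelengths."""
    d = np.asarray(dx_over_lambda, dtype=float)
    M = d.size
    best = 0.0
    for h in h_grid:
        if abs(h) < 1e-9:          # h = 0 is the degenerate 0/0 test point
            continue
        c2 = np.cos(2 * np.pi * d * h).sum()   # sum_k cos((2pi/lambda) d_k h)
        c4 = np.cos(4 * np.pi * d * h).sum()   # sum_k cos((4pi/lambda) d_k h)
        num = h ** 2 * (1 - abs(h) / 2) ** 2 * np.exp(-C_snr * (M - c2))
        den = 2 * (1 - abs(h) / 2) - 2 * (1 - abs(h)) * np.exp(-0.5 * C_snr * (M - c4))
        best = max(best, num / den)
    return best

# assumed setup: M = 10 half-wavelength ULA, illustrative C_SNR
d = 0.5 * np.arange(10)
h_grid = np.linspace(-1.95, 1.95, 391)
print(cwwb_linear_array(d, C_snr=2.0, h_grid=h_grid))
```

As in the unconditional case, a large $C_{SNR}$ drives the bound toward small values, while at very low $C_{SNR}$ it approaches the level fixed by the uniform prior on $\theta$.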
+
+**6. Simulation results and analysis**
+
+As an illustration of the previously derived results, we first consider the scenario proposed in Fig. 5 of
+[18], i.e., DOA estimation under the unconditional model using a uniform circular array consisting of
+$M = 16$ sensors with half-wavelength inter-sensor spacing. The number of snapshots is $T = 100$. Since
+the array is symmetric, the estimation performance concerning the parameters $u$ and $v$ is the same, which is
+why only the performance with respect to the parameter $u$ is given in Fig. 2. The Weiss-Weinstein bound
+is computed using Eqn. (45), (46) and (47). The Ziv-Zakai bound is computed using Eqn. (24) in [18]. The
+empirical global mean square error (MSE) of the maximum *a posteriori* (MAP) estimator is obtained over
+2000 Monte Carlo trials. As in Fig. (1b) of [18], one observes that both the Weiss-Weinstein bound and
+the Ziv-Zakai bound are tight w.r.t. the MSE of the MAP estimator and capture the SNR threshold. Note that, in
+Fig. (1b) of [18], the Weiss-Weinstein bound was computed only numerically.
+
+To the best of our knowledge, there are no closed-form expressions of the Ziv-Zakai bound for the
+conditional model available in the literature. In this case, we consider 3D source localization using a V-
+shaped array. Indeed, it has been shown that this kind of array is able to outperform other classical planar
+arrays, more particularly the uniform circular array [40]. This array is made from two branches of uniform
+linear arrays with 6 sensors located on each branch and one sensor located at the origin. We denote by $\Delta$ the
+angle between these two branches. The sensors are equally spaced by a half-wavelength. The number of
+snapshots is $T = 20$. Fig. 3 shows the behavior of the Weiss-Weinstein bound with respect to the opening
+angle $\Delta$. One can observe that, when $\Delta$ varies, the estimation performance concerning the
+parameter $u$ varies only slightly. On the contrary, the estimation performance concerning the
+parameter $v$ depends strongly on $\Delta$. When $\Delta$ increases from 0° to 90°, the Weiss-Weinstein bound on
+$v$ decreases, as does the SNR threshold. Fig. 3 also shows that $\Delta = 90^\circ$ is the optimal value, which
+differs from the optimal value $\Delta = 53.13^\circ$ found in [40] since the assumptions concerning the source signal are
+not the same.
+
+**7. Conclusion**
+
+In this paper, the Weiss-Weinstein bound on the mean square error has been studied in the array processing
+context. In order to analyze the unconditional and conditional signal source models, the structure of the
+bound has been detailed for both Gaussian observation models with parameterized mean or parameterized
+covariance matrix.
+---PAGE_BREAK---
+
+Appendix .1. Closed-form expression of $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ under the Gaussian observation model with parameterized covariance
+
+Since $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}))$, one has,
+
+$$ \eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)}}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta}} \int_{\Omega} \exp \left( -\sum_{t=1}^{T} \mathbf{y}^H(t) \mathbf{\Gamma}^{-1} \mathbf{y}(t) \right) d\mathbf{Y}, \quad (.1) $$
+
+where $\mathbf{\Gamma}^{-1} = \alpha \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta} + \mathbf{v}) - (\alpha + \beta - 1)\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta})$. Then, since
+
+$$ \int_{\Omega} \exp \left\{ -\sum_{t=1}^{T} \mathbf{y}^H(t) \mathbf{\Gamma}^{-1} \mathbf{y}(t) \right\} d\mathbf{Y} = \pi^{MT} |\mathbf{\Gamma}|^T, \quad (.2) $$
+
+one has
+
+$$ \eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)} |\mathbf{\Gamma}|^T}{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta}} = \frac{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta})|^{T(\alpha+\beta-1)}}{|\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{u})|^{T\alpha} |\mathbf{R}_{\mathbf{y}}(\boldsymbol{\theta}+\mathbf{v})|^{T\beta} |\mathbf{\Gamma}^{-1}|^T}. \quad (.3) $$
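The closed form above can be checked numerically in the simplest scalar case ($M = T = 1$), with illustrative (assumed) values of $\alpha$, $\beta$ and the variances:

```python
import numpy as np

# Scalar sanity check (M = T = 1, illustrative values) of the closed form above:
# eta = r^(a+b-1) / (ru^a * rv^b * Gamma^{-1}), with
# Gamma^{-1} = a/ru + b/rv - (a+b-1)/r for scalar covariances r, ru, rv.
a, b = 0.4, 0.35                       # alpha, beta
r, ru, rv = 1.0, 1.5, 0.8              # R_y(theta), R_y(theta+u), R_y(theta+v)

inv_gamma = a / ru + b / rv - (a + b - 1) / r
eta_closed = r ** (a + b - 1) / (ru ** a * rv ** b * inv_gamma)

# direct integral of p_u^a * p_v^b * p^(1-a-b) over the complex plane, in
# polar coordinates (the integrand is radial); p(y; r) = exp(-|y|^2/r)/(pi r)
rho = np.linspace(0.0, 20.0, 200001)
p = lambda rr: np.exp(-rho ** 2 / rr) / (np.pi * rr)
integrand = 2 * np.pi * rho * p(ru) ** a * p(rv) ** b * p(r) ** (1 - a - b)
eta_numeric = float(np.sum(integrand) * (rho[1] - rho[0]))

print(eta_closed, eta_numeric)   # the two values agree closely
```

The same identity holds for any $M$ and $T$, with the scalar variances replaced by the covariance determinants.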
+
+Appendix .2. Closed-form expression of $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ under the Gaussian observation model with parameterized mean
+
+Since $\mathbf{y}(t) \sim \mathcal{CN}(\mathbf{f}_t(\boldsymbol{\theta}), \mathbf{R}_{\mathbf{y}})$, one has
+
+$$ \eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}|^T} \int_{\Omega} \exp \left( -\sum_{t=1}^{T} \xi(t) \right) d\mathbf{Y}, \quad (.4) $$
+
+with⁴
+
+$$
+\begin{align*}
+\xi(t) &= \alpha (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{u}))^H \mathbf{R}_{\mathbf{y}}^{-1} (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{u})) + \beta (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{v}))^H \mathbf{R}_{\mathbf{y}}^{-1} (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta} + \mathbf{v})) \\
+&\quad + (1 - \alpha - \beta) (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta}))^H \mathbf{R}_{\mathbf{y}}^{-1} (\mathbf{y} - \mathbf{f}_t(\boldsymbol{\theta})) \\
+&= \mathbf{y}^H \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{y} + \alpha \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t^H (\boldsymbol{\theta}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) \\
+&\quad - 2 \operatorname{Re}\{\mathbf{y}^H \mathbf{R}_{\mathbf{y}}^{-1} (\alpha \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t (\boldsymbol{\theta}))\}.
+\end{align*}
+\quad (.5)
+$$
+
+Let us set $\mathbf{x} = \mathbf{y} - (\alpha\mathbf{f}_t(\boldsymbol{\theta} + \mathbf{u}) + \beta\mathbf{f}_t(\boldsymbol{\theta} + \mathbf{v}) + (1-\alpha-\beta)\mathbf{f}_t(\boldsymbol{\theta}))$. Consequently,
+
+$$
+\begin{align}
+\mathbf{x}^H \mathbf{R}_{\mathrm{y}}^{-1} \mathbf{x} &= \mathbf{y}^H \mathbf{R}_{\mathrm{y}}^{-1} \mathbf{y} - 2 \operatorname{Re}\{\mathbf{y}^H \mathbf{R}_{\mathrm{y}}^{-1} (\alpha \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t (\boldsymbol{\theta}))\} \\
+&\quad + (\alpha \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t^H (\boldsymbol{\theta})) \mathbf{R}_{\mathrm{y}}^{-1} (\alpha \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) + (1 - \alpha - \beta) \mathbf{f}_t (\boldsymbol{\theta})) . \tag{.6}
+\end{align}
+$$
+
+And $\xi(t)$ can be rewritten as
+
+$$
+\xi(t) = \mathbf{x}^H \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{x} + \dot{\xi}(t), \quad (.7)
+$$
+
+⁴For simplicity, the dependence on $t$ of $\mathbf{y}$ and $\mathbf{x}$ is not emphasized.
+---PAGE_BREAK---
+
+where
+
+$$
+\begin{align}
+\dot{\xi}(t) ={}& \alpha (1-\alpha) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{u}) + \beta (1-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \nonumber \\
+& + (1-\alpha-\beta) (\alpha+\beta) \mathbf{f}_t^H (\boldsymbol{\theta}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) - 2 \operatorname{Re} \left\{ \alpha \beta \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta} + \mathbf{v}) \right. \nonumber \\
+& \qquad \left. + \alpha (1-\alpha-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{u}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) + \beta (1-\alpha-\beta) \mathbf{f}_t^H (\boldsymbol{\theta} + \mathbf{v}) \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{f}_t (\boldsymbol{\theta}) \right\}. \tag{.8}
+\end{align}
+$$
+
+Note that $\dot{\xi}(t)$ is independent of $\mathbf{x}$. By defining $\mathbf{X} = [\mathbf{x}(1), \mathbf{x}(2), \dots, \mathbf{x}(T)]$, the function $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$ becomes
+
+$$
+\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v}) = \frac{1}{\pi^{MT} |\mathbf{R}_{\mathbf{y}}|^T} \int_{\Omega} \exp \left( -\sum_{t=1}^{T} \left( \mathbf{x}^H \mathbf{R}_{\mathbf{y}}^{-1} \mathbf{x} + \dot{\xi}(t) \right) \right) d\mathbf{X} = \exp \left( -\sum_{t=1}^{T} \dot{\xi}(t) \right), \quad (.9)
+$$
+
since $\frac{1}{\pi^{MT} |\mathbf{R_y}|^T} \int_{\Omega} \exp \left(-\sum_{t=1}^{T} \tilde{\mathbf{x}}^H \mathbf{R}_{\mathbf{y}}^{-1} \tilde{\mathbf{x}}\right) d\mathbf{X} = 1$.
+
+Appendix .3. Closed-form expressions of $|m_1\mathbf{R}_y^{-1}(\theta_1) + m_2\mathbf{R}_y^{-1}(\theta_2)|$ and $|m_1\mathbf{R}_y^{-1}(\theta_1) + m_2\mathbf{R}_y^{-1}(\theta_2) + m_3\mathbf{R}_y^{-1}(\theta_3)|$
+
+Note that this calculation is actually an extension of the result obtained in Appendix A of [22] in which $m_1 = m_2 = \frac{1}{2}$ and $m_3 = 0$, but follows the same method. The inverse of $\mathbf{R_y}$ can be deduced from the Woodbury formula
+
+$$
+\mathbf{R}_{\mathrm{y}}^{-1}(\boldsymbol{\theta}) = \frac{1}{\sigma_n^2} \left( \mathbf{I}_M - \frac{\sigma_s^2 \mathbf{a}(\boldsymbol{\theta}) \mathbf{a}^H(\boldsymbol{\theta})}{\sigma_s^2 \| \mathbf{a}(\boldsymbol{\theta}) \|^2 + \sigma_n^2} \right).
+$$
+
+Then,
+
+$$
\sum_{k=1}^{3} m_k \mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_k) = \frac{1}{\sigma_n^2} \sum_{k=1}^{3} m_k \left( \mathbf{I}_M - \frac{\sigma_s^2 \mathbf{a}(\boldsymbol{\theta}_k) \mathbf{a}^H(\boldsymbol{\theta}_k)}{\sigma_s^2 \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 + \sigma_n^2} \right). \quad (.10)
+$$
+
Since the rank of $\mathbf{a}(\boldsymbol{\theta}_k)\mathbf{a}^H(\boldsymbol{\theta}_k)$ is equal to 1 and since $\boldsymbol{\theta}_1 \neq \boldsymbol{\theta}_2 \neq \boldsymbol{\theta}_3$ (except for $\mathbf{h}_k = \mathbf{h}_l = \mathbf{0}$), the above matrix has $M-3$ eigenvalues equal to $\frac{1}{\sigma_n^2}\sum_{k=1}^{3} m_k$ and 3 eigenvalues corresponding to the eigenvectors made from the linear combination of $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$: $\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)$. The determinant is then the product of these $M$ eigenvalues⁵. Let us set
+
+$$
+\varphi_k = \frac{\sigma_s^2}{\sigma_s^2 \|a(\theta_k)\|^2 + \sigma_n^2}, \quad k = 1, 2, 3. \tag{.11}
+$$
+
+Then, the three aforementioned eigenvalues denoted $\lambda$ must satisfy:
+
+$$
+\left( \sum_{k=1}^{3} m_k \mathbf{R}_{\mathbf{y}}^{-1} (\boldsymbol{\theta}_k) \right) (\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)) = \lambda (\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)). \quad (.12)
+$$
+
By using Eqn. (.10) in the above equation and after a factorization with respect to $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$, one obtains
+
⁵Note that we are only interested in the eigenvalues. Consequently, the linear combination of $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$ can be written $\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)$ instead of $r\mathbf{a}(\boldsymbol{\theta}_1) + p\mathbf{a}(\boldsymbol{\theta}_2) + q\mathbf{a}(\boldsymbol{\theta}_3)$.
+---PAGE_BREAK---
+
+$$
+\begin{align}
+& \left( x - m_1 \varphi_1 \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 - p m_1 \varphi_1 \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) - q m_1 \varphi_1 \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_3) \right) \mathbf{a}(\boldsymbol{\theta}_1) \nonumber \\
+& + \left( -m_2 \varphi_2 \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_1) + p (x - m_2 \varphi_2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2) - q m_2 \varphi_2 \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \right) \mathbf{a}(\boldsymbol{\theta}_2) \nonumber \\
+& + \left( -m_3 \varphi_3 \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) - m_3 \varphi_3 p \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_2) + q (x - m_3 \varphi_3 \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2) \right) \mathbf{a}(\boldsymbol{\theta}_3) = 0, \tag{.13}
+\end{align}
+$$
+
+where⁶
+
+$$
x = 1 - \sigma_n^2 \lambda. \quad (.14)
+$$
+
Consequently, the coefficients of $\mathbf{a}(\boldsymbol{\theta}_1)$, $\mathbf{a}(\boldsymbol{\theta}_2)$, and $\mathbf{a}(\boldsymbol{\theta}_3)$ are equal to zero, leading to a system of three equations with two unknowns ($p$ and $q$). Solving the first two equations to find⁷ $p$ and $q$, and applying the solution to the last equation, one obtains the following polynomial equation in $x$
+
+$$
+\begin{equation}
+\begin{split}
+& x^3 - x^2 \sum_{k=1}^{3} m_k \varphi_k \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 - \frac{x}{2} \sum_{k=1}^{3} \sum_{\substack{k'=1 \\ k' \neq k}}^{3} m_k \varphi_k m_{k'} \varphi_{k'} \left( \| \mathbf{a}^H(\boldsymbol{\theta}_k) \mathbf{a}(\boldsymbol{\theta}_{k'}) \|^2 - \| \mathbf{a}(\boldsymbol{\theta}_k) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_{k'}) \|^2 \right) \\
+& - m_1 m_2 m_3 \varphi_1 \varphi_2 \varphi_3 (\| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2 - \| \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \\
& - \| \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_3) \|^2 - \| \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) \|^2 \| \mathbf{a}(\boldsymbol{\theta}_2) \|^2 + \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_2) \, \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_3) \, \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_1) \\
& + \mathbf{a}^H(\boldsymbol{\theta}_3) \mathbf{a}(\boldsymbol{\theta}_1) \, \mathbf{a}^H(\boldsymbol{\theta}_1) \mathbf{a}(\boldsymbol{\theta}_2) \, \mathbf{a}^H(\boldsymbol{\theta}_2) \mathbf{a}(\boldsymbol{\theta}_3) ) = 0.
+\end{split}
+\end{equation}
+$$
+
Since we are only interested in the product of the three eigenvalues, we do not have to solve this polynomial: only the negative of its constant term is required. This leads to Eqn. (31) with $\sum_{k=1}^{3} m_k = 1$. Of course, the closed-form expression of $|m_1\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_1) + m_2\mathbf{R}_{\mathbf{y}}^{-1}(\boldsymbol{\theta}_2)|$ is obtained by letting $m_3 = 0$ and $\sum_{k=1}^{2} m_k = 1$ in Eqn. (32).
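The constant-term claim can be verified numerically: the three non-trivial values $x = 1 - \sigma_n^2\lambda$ are the eigenvalues of the rank-3 operator $\sum_k m_k\varphi_k \mathbf{a}(\boldsymbol{\theta}_k)\mathbf{a}^H(\boldsymbol{\theta}_k)$, so their product should equal the negative of the cubic's constant term. The sketch below is our own check; the random unit-modulus steering vectors stand in for any concrete array geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 7
sigma_s2, sigma_n2 = 1.5, 0.8
m = np.array([0.3, 0.45, 0.25])            # m_1 + m_2 + m_3 = 1

# Three hypothetical steering vectors (unit-modulus random phases).
A = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (3, M)))
n2 = np.linalg.norm(A, axis=1) ** 2
phi = sigma_s2 / (sigma_s2 * n2 + sigma_n2)

def ry_inv(a):
    # Woodbury form of R_y^{-1}(theta_k) from the text.
    return (np.eye(M) - sigma_s2 * np.outer(a, a.conj())
            / (sigma_s2 * np.linalg.norm(a) ** 2 + sigma_n2)) / sigma_n2

S = sum(m[k] * ry_inv(A[k]) for k in range(3))
x = 1.0 - sigma_n2 * np.linalg.eigvalsh(S)   # map eigenvalues lambda to x
idx = np.argsort(np.abs(x))[-3:]             # the three non-trivial roots

g = lambda k, l: A[k].conj() @ A[l]          # a^H(theta_k) a(theta_l)

# Constant term of the cubic; the product of its three roots is -d.
d = -np.prod(m) * np.prod(phi) * (
    n2[0] * n2[1] * n2[2]
    - abs(g(1, 2)) ** 2 * n2[0]
    - abs(g(0, 1)) ** 2 * n2[2]
    - abs(g(2, 0)) ** 2 * n2[1]
    + 2.0 * np.real(g(2, 1) * g(0, 2) * g(1, 0)))

assert np.isclose(np.prod(x[idx]), -d)
```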
+
+Appendix .4. Closed-form expressions of $\zeta_\theta (\mu, \rho)$
+
Recall that the function $\zeta_{\theta} (\mu, \rho)$ is defined by Eqn. (18). Let us define $p$ as the number of parameters per source (assumed to be the same for each source). Then, without loss of generality, the full parameter vector $\theta$ can be decomposed as $\theta = [\theta_1^T ... \theta_N^T]^T$ where $\theta_i = [\theta_{i,1} ... \theta_{i,p}]^T$, $i = 1, ..., N$, with $q = Np$. Recall that $\mu = [0... \mu_i ... 0]^T$ and $\rho = [0... \rho_j ... 0]^T$. There are two distinct cases to study: when both indices $i$ and $j$ are such that $(m-1)p+1 \le i \le mp$, $m=1,...,N$ and $(m-1)p+1 \le j \le mp$, or when
+
+$^{6}$Note that, from Eqn. (16), $\sum_{k=1}^{3} m_k = 1$.
+
+$^7p$ and $q$ are given by
+
+$$
+p = \frac{m_2\varphi_2\mathbf{a}^H(\boldsymbol{\theta}_2)(m_1\varphi_1\mathbf{a}(\boldsymbol{\theta}_1)\mathbf{a}^H(\boldsymbol{\theta}_1) + (x-m_1\varphi_1\|\mathbf{a}(\boldsymbol{\theta}_1)\|^2)\mathbf{I})\mathbf{a}(\boldsymbol{\theta}_3)}{m_1\varphi_1\mathbf{a}^H(\boldsymbol{\theta}_1)(m_2\varphi_2\mathbf{a}(\boldsymbol{\theta}_2)\mathbf{a}^H(\boldsymbol{\theta}_2) + (x-m_2\varphi_2\|\mathbf{a}(\boldsymbol{\theta}_2)\|^2)\mathbf{I})\mathbf{a}(\boldsymbol{\theta}_3)}, \quad (.15)
+$$
+
+and
+
+$$
+q = \frac{(x - m_1 \varphi_1 ||\mathbf{a}(\boldsymbol{\theta}_1)||^2)(x - m_2 \varphi_2 ||\mathbf{a}(\boldsymbol{\theta}_2)||^2) - m_1 \varphi_1 m_2 \varphi_2 ||\mathbf{a}^H(\boldsymbol{\theta}_1)\mathbf{a}(\boldsymbol{\theta}_2)||^2 ||\mathbf{a}(\boldsymbol{\theta}_1)||^2}{m_1 \varphi_1 |\mathbf{a}^H(\boldsymbol{\theta}_1)| (m_2 \varphi_2 |\mathbf{a}(\boldsymbol{\theta}_2)| |\mathbf{a}^H(\boldsymbol{\theta}_2)| + (x - m_2 \varphi_2 ||\mathbf{a}(\boldsymbol{\theta}_2)||^2) |\mathbf{I}| |\mathbf{a}(\boldsymbol{\theta}_3)|)} . \quad (.16)
+$$
+---PAGE_BREAK---
+
+$(m-1)p+1 \le i \le mp, m=1,\dots,N$ and $(n-1)p+1 \le j \le np, n=1,\dots,N$ with $m \ne n$. Therefore let us denote:
+
+$$
+\left\{
+\begin{array}{l}
+\boldsymbol{\mu}_m = [0 \cdots 0 \quad h_i \quad 0 \cdots 0]^T \in \mathbb{R}^p \\
+\boldsymbol{\rho}_m = [0 \cdots 0 \quad h_j \quad 0 \cdots 0]^T \in \mathbb{R}^p
+\end{array}
+\right.
+\quad \text{if } (m-1)p+1 \le i,j \le mp
\qquad (.17)
+$$
+
+and
+
+$$
+\left\{
+\begin{array}{ll}
+\boldsymbol{\mu}_m = [0 \cdots 0 & h_i \quad 0 \cdots 0]^T \in \mathbb{R}^p, \\
+\boldsymbol{\rho}_n = [0 \cdots 0 & h_j \quad 0 \cdots 0]^T \in \mathbb{R}^p,
+\end{array}
+\right.
+\quad
+\text{if }
+\left\{
+\begin{array}{l}
+(m-1)p+1 \le i \le mp, \\
+(n-1)p+1 \le j \le np,
+\end{array}
+\right.
+\quad
+\text{with } m \ne n.
\tag{.18}
+$$
+
+Appendix .4.1. The case where (m − 1) p + 1 ≤ i, j ≤ mp
+
+In this case, one has:
+
+$$
\mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\mu}) - \mathbf{A}(\boldsymbol{\theta} + \boldsymbol{\rho}) = [\mathbf{0} \cdots \mathbf{0} \quad \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\rho}_m) \quad \mathbf{0} \cdots \mathbf{0}] \in \mathbb{C}^{M \times N}, \quad (.19)
+$$
+
+and consequently,
+
+$$
\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \| \mathbf{R}_{\mathrm{n}}^{-1/2} (\mathbf{a}(\boldsymbol{\theta}_{m}+\boldsymbol{\mu}_{m}) - \mathbf{a}(\boldsymbol{\theta}_{m}+\boldsymbol{\rho}_{m})) \|^{2} \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_{m} \|^{2}. \quad (.20)
+$$
+
+Due to Eqn. (28), one has
+
+$$
+\[
+\|\mathbf{R}_{\mathrm{n}}^{-1/2} (\mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\rho}_m))\|^2 =
+\sum_{i=1}^{M} \sum_{j=1}^{M} \left\{ \mathbf{R}_{\mathrm{n}}^{-1} \right\}_{i,j} \exp \left( j \frac{2\pi}{\lambda} (\mathbf{r}_j^T - \mathbf{r}_i^T) \boldsymbol{\theta}_m \right) \left( \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) - \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_m) \right) \\
+\times \left( \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\mu}_m) - \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\rho}_m) \right). \tag{21}
+\]
+$$
+
In particular, in the case where $\mathbf{R}_n = \sigma_n^2 \mathbf{I}_M$ one obtains
+
+$$
\| \mathbf{R}_{\mathrm{n}}^{-1/2} (\mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \mathbf{a}(\boldsymbol{\theta}_m + \boldsymbol{\rho}_m)) \|^{2} = \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \left\| \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m\right) - \exp\left(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_m\right) \right\|^{2}. \quad (.22)
+$$
+
+Appendix .4.2. The case where (m − 1) p + 1 ≤ i ≤ mp and where (n − 1) p + 1 ≤ j ≤ np
+
+Without loss of generality, we assume that $n > m$. Then,
+
+$$
+\begin{align*}
+& A(\boldsymbol{\theta} + \boldsymbol{\mu}) - A(\boldsymbol{\theta} + \boldsymbol{\rho}) = [\boldsymbol{a}(\boldsymbol{\theta}_1) - \boldsymbol{a}(\boldsymbol{\theta}_1) \cdots \boldsymbol{a}(\boldsymbol{\theta}_m + \boldsymbol{\mu}_m) - \boldsymbol{a}(\boldsymbol{\theta}_m) \cdots \boldsymbol{a}(\boldsymbol{\theta}_n) - \boldsymbol{a}(\boldsymbol{\theta}_n + \boldsymbol{\rho}_n) \cdots \boldsymbol{a}(\boldsymbol{\theta}_N) - \boldsymbol{a}(\boldsymbol{\theta}_N)] \\
+& = [\mathbf{0} \cdots \mathbf{0} ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
+$$
+
+$$
+= [\mathbf{0}\cdots\mathbf{0}\quad a(\theta_m+\mu_m)-a(\theta_m)\quad 0\cdots0\quad a(\theta_m)-a(\theta_n+\rho_n)\quad 0\cdots0], \quad (23)
+$$
+
+and consequently,
+
+$$
\zeta_{\theta}(\boldsymbol{\mu}, \boldsymbol{\rho}) = \sum_{t=1}^{T} \left\| \mathbf{R}_{\mathrm{n}}^{-1/2} \left( (\mathbf{a}(\boldsymbol{\theta}_{m}+\boldsymbol{\mu}_{m}) - \mathbf{a}(\boldsymbol{\theta}_{m})) \{\mathbf{s}(t)\}_{m} + (\mathbf{a}(\boldsymbol{\theta}_{n}) - \mathbf{a}(\boldsymbol{\theta}_{n}+\boldsymbol{\rho}_{n})) \{\mathbf{s}(t)\}_{n} \right) \right\|^{2}. \quad (.24)
+$$
+---PAGE_BREAK---
+
+Let us set $\varkappa = \mathbf{R}_n^{-1/2}(\mathbf{a}(\boldsymbol{\theta}_m+\boldsymbol{\mu}_m)-\mathbf{a}(\boldsymbol{\theta}_m))$ and $\boldsymbol{\varrho} = \mathbf{R}_n^{-1/2}(\mathbf{a}(\boldsymbol{\theta}_n)-\mathbf{a}(\boldsymbol{\theta}_n+\boldsymbol{\rho}_n))$. Then, $\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho})$ can be rewritten
+
+$$
+\begin{align*}
+\zeta_{\boldsymbol{\theta}}(\boldsymbol{\mu}, \boldsymbol{\rho}) &= \sum_{t=1}^{T} \| \varkappa \{\mathbf{s}(t)\}_{m} + \boldsymbol{\varrho} \{\mathbf{s}(t)\}_{n} \|^2 \\
+&= \sum_{t=1}^{T} \left( \varkappa^H \varkappa \| \{\mathbf{s}(t)\}_{m} \|^2 + \varkappa^H \boldsymbol{\varrho} \{\mathbf{s}(t)\}_{m}^* \{\mathbf{s}(t)\}_{n} + \boldsymbol{\varrho}^H \varkappa \{\mathbf{s}(t)\}_{m} \{\mathbf{s}(t)\}_{n}^* + \boldsymbol{\varrho}^H \boldsymbol{\varrho} \| \{\mathbf{s}(t)\}_{n} \|^2 \right) \\
&= \varkappa^H \varkappa \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_{m} \|^2 + \boldsymbol{\varrho}^H \boldsymbol{\varrho} \sum_{t=1}^{T} \| \{\mathbf{s}(t)\}_{n} \|^2 + 2 \operatorname{Re} \left( \varkappa^H \boldsymbol{\varrho} \sum_{t=1}^{T} \{\mathbf{s}(t)\}_{m}^* \{\mathbf{s}(t)\}_{n} \right). \tag{.25}
+\end{align*}
+$$
+
+By using the structure of the steering matrix **A**, it leads to
+
+$$
+\left\{
+\begin{aligned}
+\varkappa^H \varkappa &= \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_n^{-1}\}_{i,j} \exp(j \frac{2\pi}{\lambda} (\mathbf{r}_j^T - \mathbf{r}_i^T) \boldsymbol{\theta}_m) \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\mu}_m), \\
+\boldsymbol{\varrho}^H \boldsymbol{\varrho} &= \sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_n^{-1}\}_{i,j} \exp(j \frac{2\pi}{\lambda} (\mathbf{r}_j^T - \mathbf{r}_i^T) \boldsymbol{\theta}_n) \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\rho}_n) \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\rho}_n), \\
+\varkappa^H \boldsymbol{\varrho} &= -\sum_{i=1}^{M} \sum_{j=1}^{M} \{\mathbf{R}_n^{-1}\}_{i,j} \exp(j \frac{2\pi}{\lambda} (\mathbf{r}_j^T \boldsymbol{\theta}_n - \mathbf{r}_i^T \boldsymbol{\theta}_m)) \exp(-j \frac{2\pi}{\lambda} \mathbf{r}_i^T \boldsymbol{\mu}_m) \exp(j \frac{2\pi}{\lambda} \mathbf{r}_j^T \boldsymbol{\rho}_n).
+\end{aligned}
+\right.
\quad (.26)
+$$
+
Appendix .5. Proof of Eqn. (41), (42) and (43)
+
+In fact, one only has to prove Eqn. (43) since Eqn. (41) and (42) can be obtained by letting $h_u = h_v$ and $s_u = s_v$ in Eqn. (43) and by using $(h_u, s_u)$ for Eqn. (41) and $(h_v, s_v)$ for Eqn. (42). By plugging Eqn. (30) and (32) into Eqn. (16), and by considering the following expressions
+
+$$
+\begin{align*}
\mathbf{a}^H(\boldsymbol{\theta} + \mathbf{h}_u)\mathbf{a}(\boldsymbol{\theta} + \mathbf{h}_v) &= \sum_{i=1}^{M} \exp\left(j\frac{2\pi}{\lambda}(d_{y_i} h_v - d_{x_i} h_u)\right) = \left(\mathbf{a}^H(\boldsymbol{\theta} + \mathbf{h}_v)\mathbf{a}(\boldsymbol{\theta} + \mathbf{h}_u)\right)^H, \\
\mathbf{a}^H(\boldsymbol{\theta} \pm \mathbf{h}_u)\mathbf{a}(\boldsymbol{\theta}) &= \sum_{i=1}^{M} \exp\left(\mp j\frac{2\pi}{\lambda}d_{x_i} h_u\right), \quad \text{and} \quad
\mathbf{a}^H(\boldsymbol{\theta} + \mathbf{h}_u)\mathbf{a}(\boldsymbol{\theta} - \mathbf{h}_u) = \sum_{i=1}^{M} \exp\left(-j\frac{4\pi}{\lambda}d_{x_i} h_u\right),
+\end{align*}
+$$
+
one obtains the closed-form expressions for the set of functions $\eta_{\theta}(\alpha, \beta, \mathbf{u}, \mathbf{v})$:
+
+$$
+\eta_{\theta}(s_u, s_v, h_u, h_v) =
+\begin{pmatrix}
+s_u s_v & \left( \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}(d_{x_k}\mathbf{h}_u - d_{y_k}\mathbf{h}_v)) \right\|^2 - M^2 \right) \\
+& + s_u(1-s_u-s_v) & \left( \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{x_k}\mathbf{h}_u) \right\|^2 - M^2 \right) \\
+& + s_v(1-s_u-s_v) & \left( \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{y_k}\mathbf{h}_v) \right\|^2 - M^2 \right) \\
+& - s_u s_v (1-s_u-s_v) & U_{SNR}^2 / (\sigma_s^2) \\
+& + U_{SNR} & - M \\
+& + M & - M \\
+& + M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & - M \\
+& - M & -M\\
+\end{pmatrix}
+^{-T}
+. (27)
+$$
+---PAGE_BREAK---
+
+$$
+\begin{aligned}
+\eta_{\theta}(1 - s_u, 1 - s_v, -\mathbf{h}_u, -\mathbf{h}_v) = & \\
+& \left( 1 - U_{SNR} \left( \begin{array}{@{}l@{}} (1-s_u)(1-s_v) \left( \left\| \sum_{k=1}^{M} \exp \left(j \frac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v)\right) \right\|^2 - M^2 \right) \\ + (1-s_u)(s_u+s_v-1) \left( \left\| \sum_{k=1}^{M} \exp \left(j \frac{2\pi}{\lambda} d_{x_k} h_u\right) \right\|^2 - M^2 \right) \\ + (1-s_v)(s_u+s_v-1) \left( \left\| \sum_{k=1}^{M} \exp \left(j \frac{2\pi}{\lambda} d_{y_k} h_v\right) \right\|^2 - M^2 \right) \end{array} \right)^{-T} \\
+& - (1-s_u)(1-s_v)(s_u+s_v-1) \frac{U_{SNR}^2 \sigma_n^2}{\sigma_s^2} \times \\
+& \times \left( \begin{array}{@{}l@{}} \sum_{k=1}^{M} \exp \left(j \frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp \left(-j \frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp \left(j \frac{2\pi (d_{x_k} h_u - d_{y_k} h_v)}{\lambda}\right) \\ + \sum_{k=1}^{M} \exp \left(-j \frac{2\pi d_{y_k} h_v}{\lambda}\right) \sum_{k=1}^{M} \exp \left(j \frac{2\pi d_{x_k} h_u}{\lambda}\right) \sum_{k=1}^{M} \exp \left(-j \frac{2\pi (d_{x_k} h_u - d_{y_k} h_v)}{\lambda}\right) \\ - M \left\| \sum_{k=1}^{M} \exp \left(-j \frac{2\pi}{\lambda} d_{y_k} h_v\right) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp \left(-j \frac{2\pi}{\lambda} d_{x_k} h_u\right) \right\|^2 \\ - M \left\| \sum_{k=1}^{M} \exp \left(-j \frac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v)\right) \right\|^2 + M^3 \end{array} \right)
+\end{aligned}
+. (28)
+$$
+
+$$
+\begin{aligned}
+\eta_{\theta}(s_u, 1 - s_v, \mathbf{h}_u, -\mathbf{h}_v) = & \\
+& \left( 1 - U_{SNR} \left( s_u(1-s_v) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right)\right\|^2 - M^2 \right) + s_u(s_v-s_u) \left( \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right)\right\|^2 - M^2 \right) + (1-s_v)(s_v-s_u) \left( \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}d_{y_k}h_v\right)\right\|^2 - M^2 \right) \right)^{-T} \\
+& - s_u(1-s_v)(s_v-s_u) \frac{U_{SNR}^2 g_n^2}{g_s^2} \\
+& \times \left( \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{y_k}h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi d_{x_k}h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi(d_{x_k}h_u+d_{y_k}h_v)}{\lambda}\right) + \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{y_k}h_v}{\lambda}\right) \sum_{k=1}^{M} \exp\left(-j\frac{2\pi d_{x_k}h_u}{\lambda}\right) \sum_{k=1}^{M} \exp\left(j\frac{2\pi(d_{x_k}h_u+d_{y_k}h_v)}{\lambda}\right) \\
+& - M \left\| \sum_{k=1}^{M} \exp\left(j\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right\|^2 \\
+& - M \left\| \sum_{k=1}^{M} \exp\left(-j\frac{2\pi}{\lambda}(d_{x_k}h_u+d_{y_k}h_v)\right) \right\|^2 + M^3
+\end{aligned}
+. (29)
+$$
+
+$$
+\eta_\theta(s_u, 0, h_u, 0) = \left( 1 + s_u(1-s_u)U_{SNR} \left( M^2 - \left\| \sum_{k=1}^{M} \exp(-j \frac{2\pi}{\lambda} d_{x_k} h_u) \right\|^2 \right) \right)^{-T}, . (30)
+$$
+
+$$
+\noindent
+\eta_\theta(0, s_v, 0, h_v) = (1 + s_v(1 - s_v)U_{SNR}) (M^2 - (\sum_{k=1}^{M} |\exp(-j\dfrac{2\pi}{\lambda}d_{y_k}h_v)|^2))^{-T}.
+\notag
+$$
+
+. (31)
+---PAGE_BREAK---
+
+$$
+\begin{equation}
+\begin{split}
+\eta_{\theta}(1 - s_u, s_v, -\mathbf{h}_u, \mathbf{h}_v) = {}& \\
+& \left(
+ \begin{aligned}
+ & \left( s_v(1-s_u) \left( \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)) \right\|^2 - M^2 \right) \right) \\
+ & + s_v(s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{x_k}h_u) \right\|^2 - M^2 \right) \\
+ & + (1-s_u)(s_u-s_v) \left( \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{y_k}h_v) \right\|^2 - M^2 \right)
+ \end{aligned}
+ \right)^{-T} \\
+& \times \left(
+ \begin{aligned}
+ & \left( \sum_{k=1}^{M} \exp(j\frac{2\pi d_{y_k} h_v}{\lambda}) \sum_{k=1}^{M} \exp(j\frac{2\pi d_{x_k} h_u}{\lambda}) \sum_{k=1}^{M} \exp(-j\frac{2\pi(d_{x_k}h_u+d_{y_k}h_v)}{\lambda}) \right) \\
+ & + \sum_{k=1}^{M} \exp(-j\frac{2\pi d_{y_k} h_v}{\lambda}) \sum_{k=1}^{M} \exp(-j\frac{2\pi d_{x_k} h_u}{\lambda}) \sum_{k=1}^{M} \exp(j\frac{2\pi(d_{x_k}h_u+d_{y_k}h_v)}{\lambda}) \\
+ & - M \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{y_k}h_v) \right\|^2 - M \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}d_{x_k}h_u) \right\|^2 \\
+ & - M \left\| \sum_{k=1}^{M} \exp(-j\frac{2\pi}{\lambda}(d_{x_k}h_u+d_{y_k}h_v)) \right\|^2 + M^3
+ \end{aligned}
+ \right)
+\end{split}
+\tag{.32}
+\end{equation}
+$$
+
One notices that the set of functions $\eta_\theta(\alpha, \beta, \mathbf{u}, \mathbf{v})$ does not depend on $\theta$. Consequently, it is also easy to obtain the Weiss-Weinstein bound (through the set of functions $\eta(\alpha, \beta, \mathbf{u}, \mathbf{v})$) by using the results of Section 4.2, whatever the prior on $\theta$ (only the integral $\int_\Theta \frac{p^{\alpha+\beta}(\theta+u)}{p^{\alpha+\beta-1}(\theta)} d\theta$ has to be calculated or computed numerically). In our case of a uniform prior, the results are straightforward and lead to Eqn. (41), (42) and (43).
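As an illustration, the single-test-point factor $\eta_\theta(s_u, 0, h_u, 0)$ above is cheap to evaluate. The sketch below (ours, not from the paper) implements it for a hypothetical uniform linear array along the $x$-axis, with $U_{SNR}$ treated as a given positive constant whose definition belongs to the main text; all numerical values are arbitrary. At $h_u = 0$ the squared-modulus term equals $M^2$ and the factor reduces to 1, as expected.

```python
import numpy as np

def eta(s_u, h_u, M=10, T=50, lam=0.5, d=0.25, U_SNR=3.0):
    # eta_theta(s_u, 0, h_u, 0) for a hypothetical ULA: d_x_k = k*d along x.
    d_x = d * np.arange(M)
    g = np.abs(np.sum(np.exp(-1j * 2.0 * np.pi / lam * d_x * h_u))) ** 2
    return (1.0 + s_u * (1.0 - s_u) * U_SNR * (M ** 2 - g)) ** (-T)

assert np.isclose(eta(0.5, 0.0), 1.0)   # h_u = 0: the factor equals 1
assert 0.0 < eta(0.5, 0.1) <= 1.0       # and it never exceeds 1
```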
+
+Appendix .6. *Proof of Eqn. (48), (49) and (50)*
+
The set of functions $\zeta_\theta(\boldsymbol{\mu}, \boldsymbol{\rho})$ is given by Eqn. (18). Since $\mathbf{R}_n = \sigma_n^2 \mathbf{I}$, one obtains

$$
\begin{align*}
\zeta_\theta(\mathbf{h}_u, \mathbf{0}) &= \zeta_\theta(-\mathbf{h}_u, \mathbf{0}) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{x_k}h_u\right) \right), \\
\zeta_\theta(\mathbf{h}_v, \mathbf{0}) &= \zeta_\theta(-\mathbf{h}_v, \mathbf{0}) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}d_{y_k}h_v\right) \right), \\
\zeta_\theta(\mathbf{h}_u, -\mathbf{h}_u) &= \zeta_\theta(-\mathbf{h}_u, \mathbf{h}_u) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{x_k}h_u\right) \right), \\
\zeta_\theta(\mathbf{h}_v, -\mathbf{h}_v) &= \zeta_\theta(-\mathbf{h}_v, \mathbf{h}_v) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{4\pi}{\lambda}d_{y_k}h_v\right) \right), \\
\zeta_\theta(\mathbf{h}_u, \mathbf{h}_v) &= \zeta_\theta(-\mathbf{h}_u, -\mathbf{h}_v) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u - d_{y_k}h_v)\right) \right), \\
\zeta_\theta(\mathbf{h}_u, -\mathbf{h}_v) &= \zeta_\theta(-\mathbf{h}_u, \mathbf{h}_v) = 2C_{SNR} \left( M - \sum_{k=1}^{M} \cos\left(\frac{2\pi}{\lambda}(d_{x_k}h_u + d_{y_k}h_v)\right) \right), \\
\text{and} \quad \zeta_\theta(\boldsymbol{\mu}, \boldsymbol{\mu}) &= 0.
\end{align*}
$$
+
Again, since the set of functions $\zeta_\theta(\mu, \rho)$ does not depend on $\theta$, the set of functions $\eta_\theta(\alpha, \beta, u, v)$ is given by plugging the above equations into Eqn. (17) and does not depend on $\theta$ either. Consequently, as in the unconditional case, the set of functions $\eta(\alpha, \beta, u, v)$ is obtained by using the results of Section 4.2, whatever the prior on $\theta$. In our case of a uniform prior, the results are straightforward and lead to Eqn. (48), (49) and (50).
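The $2C_{SNR}\left(M - \sum_k \cos(\cdot)\right)$ pattern in all of these terms comes from the elementary identity $\sum_{i=1}^{M} \left| e^{j\phi_i} - 1 \right|^2 = 2\left(M - \sum_{i=1}^{M} \cos\phi_i\right)$ for unit-modulus steering entries. A minimal check of that identity (our own, with arbitrary sensor coordinates):

```python
import numpy as np

rng = np.random.default_rng(4)
M, lam, h_u = 8, 0.5, 0.07
d_x = rng.uniform(-1.0, 1.0, M)   # hypothetical x-coordinates of the sensors

phase = 2.0 * np.pi / lam * d_x * h_u
lhs = np.sum(np.abs(np.exp(1j * phase) - 1.0) ** 2)   # |e^{j phi} - 1|^2 summed
rhs = 2.0 * (M - np.sum(np.cos(phase)))               # 2(M - sum cos phi)
assert np.isclose(lhs, rhs)
```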
+---PAGE_BREAK---
+
+References
+
+[1] R. J. McAulay and L. P. Seidman, "A useful form of the Barankin lower bound and its application to PPM threshold analysis," *IEEE Transactions on Information Theory*, vol. 15, no. 2, pp. 273-279, Mar. 1969.
+
+[2] R. J. McAulay and E. M. Hofstetter, "Barankin bounds on parameter estimation," *IEEE Transactions on Information Theory*, vol. 17, no. 6, pp. 669-676, Nov. 1971.
+
+[3] E. Chaumette, J. Galy, A. Quinlan, and P. Larzabal, "A new Barankin bound approximation for the prediction of the threshold region performance of maximum likelihood estimators," *IEEE Transactions on Signal Processing*, vol. 56, no. 11, pp. 5319-5333, Nov. 2008.
+
+[4] K. Todros and J. Tabrikian, "General classes of performance lower bounds for parameter estimation - part I: non-Bayesian bounds for unbiased estimators," *IEEE Transactions on Information Theory*, vol. 56, no. 10, pp. 5045-5063, Oct. 2010.
+
+[5] H. L. Van Trees and K. L. Bell, Eds., *Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking*. New-York, NY, USA: Wiley/IEEE Press, Sep. 2007.
+
+[6] J. Ziv and M. Zakai, "Some lower bounds on signal parameter estimation," *IEEE Transactions on Information Theory*, vol. 15, no. 3, pp. 386-391, May 1969.
+
+[7] S. Bellini and G. Tartara, "Bounds on error in signal parameter estimation," *IEEE Transactions on Communications*, vol. 22, no. 3, pp. 340-342, Mar. 1974.
+
[8] K. L. Bell, Y. Steinberg, Y. Ephraim, and H. L. Van Trees, "Extended Ziv-Zakai lower bound for vector parameter estimation," *IEEE Transactions on Information Theory*, vol. 43, no. 2, pp. 624-637, Mar. 1997.
+
+[9] A. J. Weiss and E. Weinstein, "A lower bound on the mean square error in random parameter estimation," *IEEE Transactions on Information Theory*, vol. 31, no. 5, pp. 680-682, Sep. 1985.
+
[10] I. Rapoport and Y. Oshman, "Weiss-Weinstein lower bounds for Markovian systems. Part I: Theory," *IEEE Transactions on Signal Processing*, vol. 55, no. 5, pp. 2016-2030, May 2007.
+
+[11] A. Renaux, P. Forster, P. Larzabal, C. D. Richmond, and A. Nehorai, "A fresh look at the Bayesian bounds of the Weiss-Weinstein family," *IEEE Transactions on Signal Processing*, vol. 56, no. 11, pp. 5334-5352, Nov. 2008.
+
+[12] K. Todros and J. Tabrikian, "General classes of performance lower bounds for parameter estimation - part II: Bayesian bounds," *IEEE Transactions on Information Theory*, vol. 56, no. 10, pp. 5064-5082, Oct. 2010.
+
[13] Y. Rockah and P. Schultheiss, "Array shape calibration using sources in unknown locations - Part I: Far-field sources," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 35, no. 3, pp. 286-299, Mar. 1987.
+
+[14] I. Reuven and H. Messer, "A Barankin-type lower bound on the estimation error of a hybrid parameter vector," *IEEE Transactions on Information Theory*, vol. 43, no. 3, pp. 1084-1093, May 1997.
+
+[15] S. Bay, B. Geller, A. Renaux, J.-P. Barbot, and J.-M. Brossier, "On the hybrid Cramér-Rao bound and its application to dynamical phase estimation," *IEEE Signal Processing Letters*, vol. 15, pp. 453-456, 2008.
+
+[16] H. L. Van Trees, *Detection, Estimation and Modulation Theory*. New-York, NY, USA: John Wiley & Sons, 1968, vol. 1.
+
+[17] B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, "Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing," in *Radar Array Processing*, S. S. Haykin, J. Litva, and T. J. Shepherd, Eds. Berlin: Springer-Verlag, 1993, ch. 4, pp. 99-151.
+
[18] K. L. Bell, Y. Ephraim, and H. L. Van Trees, "Explicit Ziv-Zakai lower bound for bearing estimation," *IEEE Transactions on Signal Processing*, vol. 44, no. 11, pp. 2810-2824, Nov. 1996.
+
+[19] T. J. Nohara and S. Haykin, "Application of the Weiss-Weinstein bound to a two dimensional antenna array," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 36, no. 9, pp. 1533-1534, Sep. 1988.
+
+[20] H. Nguyen and H. L. Van Trees, "Comparison of performance bounds for DOA estimation," in Proc. of IEEE Workshop on Statistical Signal and Array Processing (SSAP), vol. 1, Jun. 1994, pp. 313-316.
+---PAGE_BREAK---
+
+[21] F. Athley, "Optimization of element positions for direction finding with sparse arrays," in *Proc. of IEEE Workshop on Statistical Signal Processing (SSP)*, vol. 1, 2001, pp. 516–519.
+
+[22] W. Xu, A. B. Baggeroer, and C. D. Richmond, "Bayesian bounds for matched-field parameter estimation," *IEEE Transactions on Signal Processing*, vol. 52, no. 12, pp. 3293–3305, Dec. 2004.
+
+[23] A. Renaux, "Weiss-Weinstein bound for data aided carrier estimation," *IEEE Signal Processing Letters*, vol. 14, no. 4, pp. 283–286, Apr. 2007.
+
+[24] D. T. Vu, A. Renaux, R. Boyer, and S. Marcos, "Closed-form expression of the Weiss-Weinstein bound for 3D source localization: the conditional case," in *Proc. of IEEE Workshop on Sensor Array and Multi-channel Processing (SAM)*, vol. 1, Kibutz Ma'ale Hahamisha, Israel, Oct. 2010, pp. 125–128.
+
+[25] S. M. Kay, *Fundamentals of Statistical Signal Processing: Estimation Theory*. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., Mar. 1993, vol. 1.
+
+[26] H. L. Van Trees, *Detection, Estimation and Modulation theory: Optimum Array Processing*. New-York, NY, USA: John Wiley & Sons, Mar. 2002, vol. 4.
+
+[27] Z. Ben Haim and Y. Eldar, "A comment on the Weiss-Weinstein bound for constrained parameter sets," *IEEE Transactions on Information Theory*, vol. 54, no. 10, pp. 4682–4684, Oct. 2008.
+
[28] P. Stoica and A. Nehorai, "Performance study of conditional and unconditional direction-of-arrival estimation," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 38, no. 10, pp. 1783–1795, Oct. 1990.
+
[29] K. L. Bell, Y. Ephraim, and H. L. Van Trees, "Explicit Ziv-Zakai lower bounds for bearing estimation using planar arrays," in *Proc. of Workshop on Adaptive Sensor Array Processing (ASAP)*. Lexington, MA, USA: MIT Lincoln Laboratory, Mar. 1996.
+
+[30] I. Reuven and H. Messer, "The use of the Barankin bound for determining the threshold SNR in estimating the bearing of a source in the presence of another," in *Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)*, vol. 3, Detroit, MI, USA, May 1995, pp. 1645–1648.
+
[31] J. Li and R. T. Compton, "Maximum likelihood angle estimation for signals with known waveforms," *IEEE Transactions on Signal Processing*, vol. 41, no. 9, pp. 2850–2862, Sep. 1993.
+
+[32] M. Cedervall and R. L. Moses, "Efficient maximum likelihood DOA estimation for signals with known waveforms in presence of multipath," *IEEE Transactions on Signal Processing*, vol. 45, no. 3, pp. 808–811, Mar. 1997.
+
+[33] J. Li, B. Halder, P. Stoica, and M. Viberg, "Computationally efficient angle estimation for signals with known waveforms," *IEEE Transactions on Signal Processing*, vol. 43, no. 9, pp. 2154–2163, Sep. 1995.
+
+[34] E. Weinstein and A. J. Weiss, "A general class of lower bounds in parameter estimation," *IEEE Transactions on Information Theory*, vol. 34, no. 2, pp. 338–342, Mar. 1988.
+
+[35] P. S. La Rosa, A. Renaux, A. Nehorai, and C. H. Muravchik, "Barankin-type lower bound on multiple change-point estimation," *IEEE Transactions on Signal Processing*, vol. 58, no. 11, pp. 5534–5549, Nov. 2010.
+
+[36] H. L. Van Trees, *Detection, Estimation and Modulation Theory: Radar-Sonar Signal Processing and Gaussian Signals in Noise*. New-York, NY, USA: John Wiley & Sons, Sep. 2001, vol. 3.
+
+[37] K. L. Bell, "Performance bounds in parameter estimation with application to bearing estimation," Ph.D. dissertation, George Mason University, Fairfax, VA, USA, 1995.
+
+[38] W. Xu, A. B. Baggeroer, and K. L. Bell, "A bound on mean-square estimation error with background parameter mismatch," *IEEE Transactions on Information Theory*, vol. 50, no. 4, pp. 621–632, Apr. 2004.
+
+[39] J. Tabrikian and J. L. Krolik, "Barankin bounds for source localization in an uncertain ocean environment," *IEEE Transactions on Signal Processing*, vol. 47, no. 11, pp. 2917–2927, Nov. 1999.
+
+[40] H. Gazzah and S. Marcos, "Cramér-Rao bounds for antenna array design," *IEEE Transactions on Signal Processing*, vol. 54, no. 1, pp. 336–345, Jan. 2006.
+---PAGE_BREAK---
+
+Figure .1: 3D source localization using a planar array antenna.
+---PAGE_BREAK---
+
+Figure .2: Ziv-Zakai bound, Weiss-Weinstein bound and empirical MSE of the MAP estimator: unconditional case.
+---PAGE_BREAK---
+
Figure .3: Weiss-Weinstein bounds of the V-shaped array w.r.t. the opening angle $\Delta$.
\ No newline at end of file