\section{Appendix A}\label{App:A}
\begin{proof}[Proof of Theorem \ref{the:min}.]
Let $X=\{ \mathrm{x}_1, \ldots, \mathrm{x}_n \}$.
We write
$$
\mathrm{z}_i= W(\mathrm{x}_i-m), \quad \mathrm{z}_{ij}= \omega_j^T(\mathrm{x}_i-m),
$$
for observation $i$, where $i=1,\ldots,n$ and coordinates $j=1,\ldots,d$.
Let us consider the likelihood function, i.e.
$$
\begin{array}{l}
L(X;\mathrm{m},W,\sigma,\tau) = \prod\limits_{i=1}^{n} GSN_d(\mathrm{x}_i ; \mathrm{m},W,\sigma,\tau) \\[6pt]
=\prod\limits_{i=1}^{n} | \mathrm{det}(W)| \prod\limits_{j=1}^{d} SN( \omega_j^T (\mathrm{x}_i - \mathrm{m}) ; 0 , \sigma_j^2, \tau_j^2)\\[6pt]
=\Big( c_1|\mathrm{det}(W)| \Big)^{n} \Big( \prod\limits_{j=1}^{d} \sigma_j(1+\tau_j) \Big)^{-n}
\prod\limits_{i=1}^{n} \prod\limits_{j=1}^{d} \exp \Big[ -\frac{1}{2\sigma_j^2}z_{ij}^2 (\mathds{1}_{ \{ z_{ij} \leq 0 \} } + \tau_{j}^{-2} \mathds{1}_{ \{ z_{ij} > 0 \} }) \Big],
\end{array}
$$
where
$
c_1=\left( \sqrt{\tfrac{2}{\pi}} \right)^{d}.
$
Now we take the log-likelihood function, i.e.
$$
\begin{array}{l}
\ln(L(X;\mathrm{m},W,\sigma,\tau)) \\[6pt]
=\ln \bigg( \Big( c_1|\mathrm{det}(W)| \Big)^{n} \Big( \prod\limits_{j=1}^{d} \sigma_j(1+\tau_j) \Big)^{-n} \bigg) +
\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{d} \Big[ -\frac{1}{2\sigma_j^2}z_{ij}^2 (\mathds{1}_{ \{ z_{ij} \leq 0 \} } + \tau_{j}^{-2} \mathds{1}_{ \{ z_{ij} > 0 \} })\Big] \\[6pt]
= \ln \bigg( \Big( c_1|\mathrm{det}(W)| \Big)^{n} \Big( \prod\limits_{j=1}^{d} \sigma_j(1+\tau_j) \Big)^{-n} \bigg) -
\frac{1}{2} \sum\limits_{j=1}^{d} \Big( \sigma_j^{-2} \sum\limits_{i \in I_{j}} z_{ij}^2 + \frac{\sigma_j^{-2}}{\tau_{j}^{2} } \sum\limits_{i \in I_{j}^{c}} z_{ij}^2 \Big) \\[6pt]
= \ln \bigg( \Big( c_1|\mathrm{det}(W)| \Big)^{n} \Big( \prod\limits_{j=1}^{d} \sigma_j(1+\tau_j) \Big)^{-n} \bigg) -
\sum\limits_{j=1}^{d} \frac{1}{2\sigma_j^{2}} \Big( s_{1j} + \frac{1}{\tau_{j}^{2} } s_{2j} \Big).
\end{array}
$$
We fix $\mathrm{m}$, $W$ and maximize the log-likelihood function over $\tau$ and $\sigma$.
In such a case we have to solve the following system of equations
$$
\begin{array}{l}
\frac{\partial \ln ( L(X;\mathrm{m},W,\sigma,\tau) ) }{\partial \sigma_j} = -\frac{n}{\sigma_j} + \sigma_j^{-3} (s_{1j} + \tau_j^{-2} s_{2j} )
=0, \\[6pt]
\frac{\partial \ln ( L(X;\mathrm{m},W,\sigma,\tau) ) }{\partial \tau_j} = - \frac{n}{1+\tau_j} + \frac{s_{2j}}{\tau_j^{3}\sigma_j^{2}} =0 ,
\end{array}
$$
for $ j=1,\ldots,d$.
By simple calculations we obtain the expressions for the estimators
\begin{align*}
\hat{\sigma}_j^2(\mathrm{m},W) =
\tfrac{1}{n} s_{1j}^{2/3} g_{j}(\mathrm{m},W), \qquad
\hat{\tau}_{j}(\mathrm{m},W) = \bigg( \frac{s_{2j}}{s_{1j}} \bigg)^{1/3}.
\end{align*}
Substituting these into the likelihood function,
we get
$$
\begin{array}{l}
\hat{L}(\mathrm{m},W) = \bigg( \frac{2}{\pi} \bigg)^{\frac{dn}{2}} |\mathrm{det}(W)|^{n} \cdot \Big( \prod\limits_{j=1}^{d} \frac{1}{\sqrt{n}} g_j(\mathrm{m},W)^{\frac{3}{2}} \Big)^{-n} e^{-\frac{dn}{2}}\\[6pt]
= \bigg( \frac{2n}{\pi e} \bigg)^{\frac{dn}{2}} \Big( \frac{1}{|\mathrm{det}(W)|^{\frac{2}{3}}} \prod\limits_{j=1}^{d} g_j(\mathrm{m},W) \Big)^{-\frac{3n}{2}}.
\end{array}
$$
\end{proof}
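The closed-form estimators above are straightforward to implement. The following Python snippet is an illustrative sketch (the function name and array conventions are ours, not the paper's); it takes $\omega_j$ to be the $j$-th row of $W$, so that $\mathrm{z}_i = W(\mathrm{x}_i-\mathrm{m})$.

```python
import numpy as np

def split_gaussian_estimates(X, m, W):
    """Illustrative sketch of the closed-form estimators above.

    z_ij = w_j^T (x_i - m) with w_j the j-th row of W (so z_i = W(x_i - m));
    s1_j sums z_ij^2 over z_ij <= 0 and s2_j over z_ij > 0.
    """
    Z = (X - m) @ W.T                                # Z[i, j] = z_ij
    n = Z.shape[0]
    s1 = np.sum(np.where(Z <= 0, Z**2, 0.0), axis=0)
    s2 = np.sum(np.where(Z > 0, Z**2, 0.0), axis=0)
    g = s1 ** (1 / 3) + s2 ** (1 / 3)                # g_j(m, W)
    tau_hat = (s2 / s1) ** (1 / 3)                   # tau_j = (s2_j / s1_j)^(1/3)
    sigma2_hat = s1 ** (2 / 3) * g / n               # sigma_j^2 = s1_j^(2/3) g_j / n
    return sigma2_hat, tau_hat
```

For symmetric data $s_{1j}=s_{2j}$, so $\hat{\tau}_j=1$ and $n\hat{\sigma}_j^2$ reduces to $s_{1j}+s_{2j}$, the usual Gaussian variance estimator.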
\section{Appendix B}\label{App:B}
\begin{proof}[Proof of Theorem \ref{ther:grad}.]
Let us start with the partial derivative of $\ln({l})$ with respect to $\mathrm{m}$. We have
$$
\begin{array}{l}
\frac{\partial \ln {l}(X;\mathrm{m},W)}{\partial \mathrm{m}_k} =
\sum \limits_{j=1}^d \frac{\partial \ln ({g}_j(\mathrm{m},W))}{\partial \mathrm{m}_k} = \sum\limits_{j=1}^d \frac{1}{{s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}}} \frac{\partial ({s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}})}{\partial \mathrm{m}_k}
= \sum \limits_{j=1}^d \frac{1}{{s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}}} \bigg(
\frac{1}{3 {s}_{1j}^{\frac{2}{3}}} \frac{\partial {s}_{1j}}{\partial \mathrm{m}_k} +
\frac{1}{3 {s}_{2j}^{\frac{2}{3}}} \frac{\partial {s}_{2j}}{\partial \mathrm{m}_k}
\bigg).
\end{array}
$$
Now, we need $\frac{\partial {s}_{1j}}{\partial \mathrm{m}_k}$ and $\frac{\partial {s}_{2j}}{\partial \mathrm{m}_k}$, therefore
$$
\begin{array}{l}
\frac{\partial {s}_{1j}}{\partial \mathrm{m}_k} =
\sum\limits_{i \in {I}_j} \frac{\partial [\omega^T_j (\mathrm{x}_i - \mathrm{m})]^2}{\partial \mathrm{m}_k} = \sum\limits_{i \in {I}_j} 2 \omega^T_j (\mathrm{x}_i - \mathrm{m}) \frac{\partial \omega^T_j (\mathrm{x}_i - \mathrm{m})}{\partial \mathrm{m}_k} =
\sum\limits_{i \in {I}_j} - 2 \omega^T_j (\mathrm{x}_i - \mathrm{m}) \omega_{jk}.
\end{array}
$$
Analogously we get
$$
\begin{array}{l}
\frac{\partial {s}_{2j}}{\partial \mathrm{m}_k} = \sum\limits_{i \in {I}_j^c} -2 \omega^T_j (\mathrm{x}_i - \mathrm{m}) \omega_{jk}.
\end{array}
$$
Hence
$$
\begin{array}{l}
\frac{\partial \ln {l}}{\partial \mathrm{m}_k} =\sum\limits_{j=1}^d \frac{-1}{{s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}}} \bigg(
\frac{1}{3 {s}_{1j}^{\frac{2}{3}}} \sum\limits_{i \in I_j} 2 \omega_j^T (\mathrm{x}_i - \mathrm{m}) \omega_{jk} +
\frac{1}{3 {s}_{2j}^{\frac{2}{3}}} \sum\limits_{i \in I_j^c} 2 \omega_j^T (\mathrm{x}_i - \mathrm{m}) \omega_{jk}
\bigg).
\end{array}
$$
Now we calculate the partial derivative of $\ln {l}(X;\mathrm{m},W)$ with respect to the matrix $W$. We have
$$
\begin{array}{l}
\frac{\partial \ln {l}(X;\mathrm{m},W)}{\partial \omega_{pk}} = \frac{\partial \ln |\mathrm{det}(W)|^{-\frac{2}{3}}}{\partial \omega_{pk}} + \sum\limits_{j=1}^d \frac{\partial \ln ({g}_j(\mathrm{m},W))}{\partial \omega_{pk}}.
\end{array}
$$
To calculate the derivative of the determinant we use Jacobi's formula (see Lemma \ref{jacobi}).
Hence
$$
\begin{array}{l}
\frac{\partial \ln (\mathrm{det}(W)^{-\frac{2}{3}})}{\partial \omega_{pk}} = \mathrm{det}(W)^{\frac{2}{3}} \Big(-\frac{2}{3}\Big) \mathrm{det}(W)^{-\frac{5}{3}} \frac{\partial \mathrm{det}(W)}{\partial \omega_{pk}} = -\frac{2}{3} \mathrm{det}(W)^{-1} \mathrm{adj}^T(W)_{pk} \\[6pt]
= -\frac{2}{3} \frac{1}{\mathrm{det}(W)} \left[\mathrm{det}(W) (W^{-1})^T_{pk}\right]= -\frac{2}{3} (\omega^{-1})^T_{pk},
\end{array}
$$
where $(\omega^{-1})^T_{pk}$ is the element in the $p$-th row and $k$-th column of the matrix $(W^{-1})^T$. Now we calculate
$$
\begin{array}{l}
\frac{\partial \ln ({g}_j(\mathrm{m},W))}{\partial \omega_{pk}} = \frac{1}{{s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}}} \frac{\partial ({s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}})}{\partial \omega_{pk}}= \frac{1}{{s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}}} \bigg(
\frac{1}{3 {s}_{1j}^{\frac{2}{3}}} \frac{\partial {s}_{1j}}{\partial \omega_{pk}} +
\frac{1}{3 {s}_{2j}^{\frac{2}{3}}} \frac{\partial {s}_{2j}}{\partial \omega_{pk}}
\bigg),
\end{array}
$$
where
$$
\begin{array}{l}
\frac{\partial {s}_{1j}}{\partial \omega_{pk}} = \sum\limits_{ i \in {I}_j} \frac{\partial [\omega^T_j (\mathrm{x}_i - \mathrm{m})]^2}{\partial \omega_{pk}} = \sum\limits_{ i \in {I}_j} 2 \omega^T_j (\mathrm{x}_i - \mathrm{m}) \frac{\partial \omega^T_j (\mathrm{x}_i - \mathrm{m})}{\partial \omega_{pk}}=
\\[6pt]
\left\{ \begin{array}{ll}
0, & \text{if} \; j\neq p\\
\sum\limits_{ i \in {I}_p} 2 \omega^T_p (\mathrm{x}_i - \mathrm{m}) (\mathrm{x}_{ik} - \mathrm{m}_k), & \text{if} \; j=p\\
\end{array} \right.
\end{array}
$$
and $\mathrm{x}_{ik}$ is the $k$-th element of the vector $\mathrm{x}_i$. Analogously we get
$$\frac{\partial {s}_{2j}}{\partial \omega_{pk}} = \left\{ \begin{array}{ll}
0, & \text{if} \; j\neq p,\\
\sum\limits_{ i \in {I}_p^c} 2 \omega^T_p (\mathrm{x}_i - \mathrm{m}) (\mathrm{x}_{ik} - \mathrm{m}_k), & \text{if} \; j=p.
\end{array} \right.
$$
Hence we obtain
$$
\begin{array}{l}
\frac{\partial \ln {l}}{\partial \omega_{pk}} = -\frac{2}{3} (\omega^{-1})^T_{pk} + \frac{1}{{s}_{1p}^{\frac{1}{3}} +{s}_{2p}^{\frac{1}{3}}}
\bigg(
\frac{1}{3} {s}_{1p}^{-\frac{2}{3}} \sum\limits_{ i \in {I}_p} 2 \omega^T_p (\mathrm{x}_i - \mathrm{m}) (\mathrm{x}_{ik} - \mathrm{m}_k)\\[6pt]
+ \frac{1}{3} {s}_{2p}^{-\frac{2}{3}} \sum\limits_{ i \in {I}_p^c} 2 \omega^T_p (\mathrm{x}_i - \mathrm{m}) (\mathrm{x}_{ik} - \mathrm{m}_k) \bigg).
\end{array}
$$
\end{proof}
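The gradient formulas above can be validated numerically against finite differences of $\ln l(X;\mathrm{m},W) = -\frac{2}{3}\ln|\mathrm{det}(W)| + \sum_{j=1}^d \ln g_j(\mathrm{m},W)$. The Python sketch below is our own illustration (names and the row convention for $\omega_j$ are assumptions, not code from the paper):

```python
import numpy as np

def log_l(X, m, W):
    """ln l(X; m, W) = -(2/3) ln|det W| + sum_j ln(s1_j^(1/3) + s2_j^(1/3))."""
    Z = (X - m) @ W.T
    s1 = np.sum(np.where(Z <= 0, Z**2, 0.0), axis=0)
    s2 = np.sum(np.where(Z > 0, Z**2, 0.0), axis=0)
    return -2 / 3 * np.log(abs(np.linalg.det(W))) + np.sum(np.log(s1**(1/3) + s2**(1/3)))

def grad_log_l(X, m, W):
    """Gradients from the proof above: d ln l / d m_k and d ln l / d w_pk."""
    Z = (X - m) @ W.T                                  # Z[i, j] = w_j^T (x_i - m)
    neg, pos = Z <= 0, Z > 0
    s1 = np.sum(np.where(neg, Z**2, 0.0), axis=0)
    s2 = np.sum(np.where(pos, Z**2, 0.0), axis=0)
    g = s1**(1/3) + s2**(1/3)
    # C[i, j] = (2/g_j) * (z_ij 1{z<=0} / (3 s1_j^(2/3)) + z_ij 1{z>0} / (3 s2_j^(2/3)))
    C = 2 * (np.where(neg, Z, 0.0) / (3 * s1**(2/3))
             + np.where(pos, Z, 0.0) / (3 * s2**(2/3))) / g
    grad_m = -C.sum(axis=0) @ W                        # minus sign from d(x_i - m)/dm
    grad_W = -2 / 3 * np.linalg.inv(W).T + C.T @ (X - m)
    return grad_m, grad_W
```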
\section{Conclusion}
In our work we introduce and explore a new approach to ICA which is based on the asymmetry of the data.
Roughly speaking, in our approach, instead of approximating the data by a product of densities with heavy tails, we approximate it by a product of
asymmetric densities -- Split Gaussian distributions.
Contrary to classical approaches, which consider the third or fourth central moments, our algorithm is in practice based on second moments. This is a consequence of the fact that Split Gaussian distributions arise from merging two opposite halves of normal distributions at their common mode. Since we use only second-order moments to describe the skewness of the data, we obtain an effective ICA method which is resistant to outliers.
We verified our approach on images, sound and EEG data.
In the case of source signal reconstruction our approach gives essentially better results (it recovers the original signals more accurately).
The main reasons are that kurtosis is very sensitive to outliers, and that asymmetry is more common than heavy tails in real data sets.
\section*{Acknowledgment}
Research of P. Spurek was supported by the National Center of Science
(Poland) grant no. 2015/19/D/ST6/01472.
Research of J. Tabor was supported by the National Center of Science
(Poland) grant no. UMO-2014/13/B/ST6/01792.
\section{Introduction}
Independent component analysis (ICA) is one of the most popular methods of data analysis and preprocessing. Historically, Herault and Jutten \cite{herault1986space} seem to be the first (around 1983) to have addressed the problem of ICA to separate mixtures of independent signals.
In signal processing ICA is a computational method for separating a multivariate signal into additive subcomponents and has been applied in magnetic resonance \cite{beckmann2004probabilistic}, MRI \cite{beckmann2005tensorial,rodriguez2012noising}, EEG analysis \cite{brunner2007spatial,delorme2007enhanced,zhang2013bayesian},
fault detection \cite{choi2005fault}, financial time series \cite{kiviluoto1998independent} and seismic recordings \cite{haghighi2008ica}.
Moreover, it is hard to overestimate the role of ICA in pattern recognition and image analysis; its applications include face recognition \cite{yang2005kernel,dagher2006face}, facial action recognition~\cite{chuang2006recognizing}, image filtering \cite{tsai2006independent}, texture segmentation \cite{jenssen2003independent}, object recognition~\cite{bressan2003using,tao2016ensemble}, image modeling \cite{kim2005iterative}, embedding graphs in pattern-spaces \cite{luo2003spectral,luo2002independent}, multi-label learning \cite{xu2016local} and feature extraction \cite{lai2014multilinear}. The calculation of ICA is discussed in several papers \cite{secchi2016hierarchical, hyvarinen2004independent,lee1999independent,cardoso1989source,pham1997blind,comon1994independent,du2016hyperspectral}, where the problem is given various names, in particular it is also called ``source separation problem''.
ICA is similar in many aspects to principal component analysis (PCA). In PCA we look for an orthonormal change of basis so that the components are not
linearly dependent (uncorrelated).
ICA can be described as a search for the optimal basis (coordinate system) in which the components are independent. Let us now, for the reader's convenience, describe how
ICA works. The data are represented by the random vector $\mathrm{x}$
and the components by the random vector~$s$.
The aim is to transform the observed data $\mathrm{x}$ into maximally independent components $s$ with respect to some measure
of independence. Typically we use a linear static transformation $W$, called the {\em transformation matrix}, via the formula $s = W \mathrm{x}$.
Most ICA methods are based on the maximization of non-Gaussianity. This follows from the fact that one of the theoretical foundations of ICA is given by the dual view of the Central Limit Theorem \cite{hyvarinen2000independent}, which states that the distribution of the sum (average or linear combination) of $N$ independent random variables approaches a Gaussian as $N\rightarrow \infty$. Obviously, if all source variables are Gaussian, ICA will not work.
\begin{figure}[!h]
\normalsize
\begin{center}
\includegraphics[width=5in]{1.jpg}
\end{center}
\caption{Comparison of image separation by our method (ICA$_{SG}$) with FastICA and ProDenICA.}
\label{fig:image_ICA_int}
\end{figure}
The classical measure of non-Gaussianity is kurtosis (the fourth central moment), which can be either positive or negative. Random variables with negative kurtosis are called subgaussian, and those with positive kurtosis are called supergaussian. Supergaussian random variables typically have a ``spiky'' pdf with heavy tails, i.e. the pdf is relatively large at zero and at large values of the variable, while being small for intermediate values (e.g. the Laplace distribution). Typically non-Gaussianity is measured by the absolute value of kurtosis (the square of kurtosis can also be used).
Thus many methods of finding independent components are based on fitting a density with kurtosis similar to that of the data, and consequently are very sensitive to the existence of outliers. Moreover, data sets are typically bounded, and therefore a credible estimation of the tails is not easy. Another problem with these methods is that they usually assume that the underlying density is symmetric, which is rarely the case.
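The sensitivity of sample kurtosis to outliers is easy to see numerically. The snippet below is a small illustration of this point (our own, not from the paper): a single outlier appended to a flat, light-tailed sample turns its negative excess kurtosis into a large positive value.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardized central moment minus 3."""
    x = np.asarray(x, dtype=float)
    c = x - x.mean()
    return float(np.mean(c**4) / np.mean(c**2) ** 2 - 3.0)

flat = np.linspace(-1.0, 1.0, 101)     # uniform grid, excess kurtosis near -1.2
spiked = np.append(flat, 10.0)         # one outlier dominates the 4th moment
```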
In our work we introduce and explore a new approach ICA$_{SG}$, based on the asymmetry of the data, which can be measured by the third central moment (skewness). Any symmetric data, in particular Gaussian, has skewness equal to zero.
Negative values of skewness indicate data skewed to the left and positive ones indicate data skewed to the right\footnote{By skewed to the left, we mean that the left tail is long relative to the right tail.
Similarly, skewed to the right means that the long tail is on the right-hand side.
}. Consequently, skewness is a natural measure of non-Gaussianity. In our approach, instead of approximating the data by a product of densities
with heavy tails, we approximate it by a product of
asymmetric densities (so-called Split Gaussians).
Contrary to classical approaches, which consider the third or fourth central moments, our algorithm is based on second moments. This is a consequence of the fact that Split Gaussian distributions arise from merging two opposite halves of normal distributions at their common mode (for more information see Section \ref{SGD}). Since we use only second-order moments to describe the skewness of the data, we obtain an effective ICA method which is resistant to outliers.
\begin{figure*}[!h]
\normalsize
\begin{center}
\includegraphics[width=4.5in]{6.jpg}
\end{center}
\caption{MLE estimation for image histograms with respect to Logistic and Split Gaussian distributions.}
\label{fig:MLE}
\end{figure*}
The results of classical ICA and ICA$_{SG}$ \ in the case of image separation (for a more detailed comparison we refer to Section \ref{ex}) are presented in Fig. \ref{fig:image_ICA_int}. In the experiment we mixed two images (see Fig. \ref{fig:image_ICA_int} a) by adding and subtracting them (see Fig. \ref{fig:image_ICA_int} b). Our approach gives essentially better results than the classical FastICA approach, compare Fig. \ref{fig:image_ICA_int} c) to Fig. \ref{fig:image_ICA_int} d) and Fig. \ref{fig:image_ICA_int} f) to Fig. \ref{fig:image_ICA_int} g). In the case of classical ICA we can see artifacts in the background, which means that the method does not separate the signals properly. On the other hand, ProDenICA and ICA$_{SG}$ \ recovered the images almost perfectly, compare Fig. \ref{fig:image_ICA_int} c) to Fig. \ref{fig:image_ICA_int} e) and Fig. \ref{fig:image_ICA_int} f) to Fig. \ref{fig:image_ICA_int} h).
\begin{figure}[!h]
\normalsize
\begin{center}
\includegraphics[width=5in]{10.jpg}
\end{center}
\caption{Comparison between our approach and classical ICA with respect to resistance to outliers.}
\label{fig:out}
\end{figure}
In general, ICA$_{SG}$ \ in most cases gives better results than other ICA methods, see Section \ref{ex}, while its numerical complexity lies below that of the methods which obtain
comparable results, that is ProDenICA and PearsonICA.
This is caused in particular by the fact that asymmetry is more common than heavy tails in real data sets -- we performed a symmetry test using the R package {\tt lawstat} \cite{test} at the 5 percent significance level, and it turned out that all image datasets used in our paper have asymmetric densities.
We also verified this in the case of density estimation for our images. We found the optimal parameters of the Logistic and Split Gaussian distributions and compared the values of the likelihood function in Fig. \ref{fig:MLE}. As we can see, in most cases the Split Gaussian distribution fits the data better than the Logistic one.
Summarizing the results obtained in the paper, our method works better than classical approaches for asymmetric data, and is
more resistant to outliers (see Example \ref{ex:out}).
\begin{example} \label{ex:out}
We consider data with heavy tails (a sample from the Logistic distribution) and skewed data (a sample from the Split Normal distribution). We added outliers to the data, generated uniformly from the
rectangle $[\min(X_1)-\mathrm{sd}(X_1),\max(X_1)+\mathrm{sd}(X_1)]\times[\min(X_2)-\mathrm{sd}(X_2),\max(X_2)+\mathrm{sd}(X_2)]$, where $\mathrm{sd}(X_i)$ is the standard deviation of the $i$-th coordinate of $X$. In Fig.~\ref{fig:out} we present how the absolute value of Tucker's congruence coefficient (a similarity measure of extracted factors, see Section \ref{ex}) changes as we add outliers.
As we can see, ICA$_{SG}$ \ is more stable and deals better with outliers in the data, which
follows from the fact that classical ICA typically depends on moments of order four, while our approach uses moments of order two.
\end{example}
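For two factors $a,b\in\mathbb{R}^d$, Tucker's congruence coefficient is $\varphi(a,b)=a^Tb/\sqrt{(a^Ta)(b^Tb)}$. A minimal sketch (the function name is our own):

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's congruence coefficient phi(a, b) = a.b / sqrt((a.a)(b.b)).

    |phi| close to 1 means the extracted factor matches the reference
    factor up to scale (and sign); values near 0 mean no similarity.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```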
This paper is arranged as follows. In the second section, we discuss related works. In the third, the theoretical background of our approach to ICA is presented. We introduce a cost function which uses the General Split Gaussian distribution and show that it is enough to minimize it respectively to only two parameters: vector $\mathrm{m} \in \mathbb{R}^d$ and $d \times d$ matrix $W$. We also calculate the gradient of the cost function, which is necessary for the efficient use in the minimization procedure.
The last section describes numerical experiments. The effects of our algorithm are illustrated on simulated and real datasets.
\section{Related works}\label{RW}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=5in]{8.jpg}
\end{center}
\caption{Logistic, Split Normal and classical Gaussian distributions fitted to data with heavy tails and to skewed data.}
\label{fig:den_1d}
\end{figure*}
Various ICA methods were discussed in \cite{secchi2016hierarchical, hyvarinen2004independent,lee1999independent,cardoso1989source,pham1997blind,comon1994independent}. Herault and Jutten seem to be the first to have introduced ICA, around 1983. They proposed an iterative real-time algorithm based on a neuro-mimetic architecture, which, nevertheless, can fail to converge
in a number of cases \cite{jutten1991blind}. It is worth mentioning that in their
framework, higher-order statistics were not introduced
explicitly. Giannakis et al. \cite{giannakis1989cumulant} addressed the issue of
identifiability of ICA in 1987 using third-order cumulants. However, the resulting algorithm required an exhaustive
search.
Lacoume and Ruiz \cite{lacoume1992separation} sketched
a mathematical approach to the problem using
higher-order statistics, which can be interpreted as a measure of fitting independent components.
Cardoso \cite{cardoso1991super,cardoso1999high} focused on the algebraic properties of the fourth-order cumulants (kurtosis), which is still a popular approach~\cite{sharma2006subspace}.
Unfortunately, kurtosis has some drawbacks in practice, when its value has to be estimated from a measured sample. The main problem is that kurtosis can be very sensitive to outliers; its value may depend on only a few observations in the tails of the distribution.
In high-dimensional problems, where the separation process combines PCA (for dimension reduction), whitening (for scale normalization), and standard ICA,
this effect is called the small sample size problem \cite{yang2005ica,deng2012small}. This is caused by the fact that for high-dimensional data sets ICA algorithms tend to extract the independent features simply by projections that isolate single or very few samples (outliers). To address this difficulty, random pursuit and locality pursuit methods were applied
\cite{deng2012small}.
Another commonly used solution is to use skewness \cite{stone2002spatiotemporal,kollo2008multivariate,liu2011investigation,karvanen2004independent} instead of kurtosis.
Unfortunately, skewness has received much less attention than kurtosis, and consequently methods based on skewness are usually not well justified theoretically.
One of the most popular ICA methods dedicated to skewed data is PearsonICA \cite{karvanen2000pearson,karvanen2002blind}, which minimizes mutual information using a parametric model based on the Pearson \cite{stuart1968advanced} system. The
model covers a wide class of source distributions,
including skewed distributions.
The Pearson system is defined by the differential equation
$$
f'(x) = \frac{(a_1x - a_0)f(x)}{b_0 + b_1x + b_2x^2},
$$
where $a_0$, $a_1$, $b_0$, $b_1$ and $b_2$ are the parameters of the
distribution.
The parameters of the Pearson system can be estimated
using the method of moments.
Such algorithms therefore have strong limitations connected with the optimization procedure. The main problems are the number of parameters which have to be fitted and the numerical efficiency of the minimization procedure.
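To make the Pearson system concrete: with $a_0=0$, $a_1=-1$, $b_0=1$ and $b_1=b_2=0$ the equation reduces to $f'(x)=-xf(x)$, whose solution is the (unnormalized) standard Gaussian $f(x)=f(0)e^{-x^2/2}$. The sketch below integrates the ODE numerically (illustrative code of our own, not from the paper):

```python
import math

def pearson_rhs(x, f, a0, a1, b0, b1, b2):
    """Pearson system: f'(x) = (a1*x - a0) * f(x) / (b0 + b1*x + b2*x^2)."""
    return (a1 * x - a0) * f / (b0 + b1 * x + b2 * x * x)

def integrate_pearson(a0, a1, b0, b1, b2, x_end=1.0, steps=200_000):
    """Forward Euler integration of the Pearson ODE starting from f(0) = 1."""
    x, f = 0.0, 1.0
    h = x_end / steps
    for _ in range(steps):
        f += h * pearson_rhs(x, f, a0, a1, b0, b1, b2)
        x += h
    return f
```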
An important measure of fitting independent components is given by negentropy \cite{gaeta1990source}. FastICA \cite{hyvarinen1999fast}, one of the most popular implementations of ICA, uses this approach.
Negentropy is based on the information-theoretic quantity of (differential) entropy. This concept leads to the mutual information
which is the natural information-theoretic measure of the independence of random variables. Consequently, one can use it as the criterion for finding the ICA transformation \cite{comon1994independent,bell1995information}.
It can be shown that minimization of the mutual information is roughly equivalent to maximization of negentropy, and the latter is easier to estimate since we do not need additional parameters. ProDenICA \cite{bach2002kernel,hastie2009elements} is based not on a
single nonlinear function, but on an entire function space of candidate nonlinearities. In particular, the method works with functions in a reproducing kernel Hilbert space, and makes use of the ``kernel trick'' to search over this space efficiently. The use of a function space makes it possible to adapt to a variety of sources and thus makes ProDenICA more robust to varying source distributions.
A somewhat similar approach to ICA is based on the maximum likelihood estimation~\cite{pham1997blind}. It is closely connected to the infomax principle since the likelihood is proportional to the negative of mutual information.
In recent publications, maximum likelihood estimation is one of the most popular \cite{hyvarinen2004independent,harroy1996maximum,comon2010handbook,samworth2012independent,zarzoso2006optimal,murillo2004sinusoidal,cardoso2006maximum} approaches to ICA. The maximum likelihood approach requires the source pdf.
In classical ICA it is common to use the super-Gaussian logistic density or other heavy-tailed distributions.
In this paper we present ICA$_{SG}$, a method which joins the positive aspects of
classical ICA approaches with recent ones like ProDenICA or PearsonICA.
First of all we use the General Split Gaussian distribution, which uses second-order moments to describe skewness in the dataset, and is therefore relatively robust to noise and outliers. The GSG distribution can be fitted by minimizing a simple function which depends on only two parameters, $\mathrm{m} \in \mathbb{R}^d$ and $W \in \mathcal{M}(\mathbb{R}^d)$, see Theorem \ref{the:min}. Moreover, we calculate its gradient, and therefore we can use numerically efficient gradient-type algorithms, see Theorem~\ref{ther:grad}.
\section{Theoretical justification} \label{se:theor}
\begin{figure*}[!t]
\normalsize
\begin{center}
\includegraphics[width=5in]{7.jpg}
\end{center}
\caption{Logistic and General Split Normal distributions fitted to data with heavy tails and to skewed data.}
\label{fig:den_2d}
\end{figure*}
Let us describe the idea\footnote{In fact it is one of the possible approaches, as there are many explanations which lead to similar formula.} behind ICA \cite{hyvarinen2000independent}. Suppose that we have a random vector $X$
in $\mathbb{R}^d$ which is generated by the model with the density $F$. Then it is well-known that components of $X$ are independent iff there exist one-dimensional densities $f_1,\ldots,f_d \in \mathcal{D}_\mathbb{R}$, where by $\mathcal{D}_\mathbb{R}$ we denote the set of densities on $\mathbb{R}$, such that
$$
F(\mathrm{x})=f_1(x_1) \cdot \ldots \cdot f_d(x_d), \mbox{ for }
\mathrm{x}=(x_1,\ldots,x_d) \in \mathbb{R}^d.
$$
Now suppose that the components of $X$ are not independent, but that
we know (or suspect) that there is a basis $A$ (we put $W=A^{-1}$) such that in that basis the
components of $X$ become independent. This may be formulated in the form
\begin{equation} \label{eq:gen}
F(\mathrm{x})=\mathrm{det}(W) \cdot f_1(\omega_1^T(\mathrm{x}-\mathrm{m})) \cdot \ldots \cdot f_d(\omega_d^T(\mathrm{x}-\mathrm{m})) \mbox{ for } x \in \mathbb{R}^d,
\end{equation}
where $\omega_i^T(\mathrm{x}-\mathrm{m})$ is the $i$-th coefficient of $\mathrm{x}-\mathrm{m}$ (the basis is centered in $\mathrm{m}$) in the basis $A$ ($\omega_i$ denotes the $i$-th column of $W$).
Observe that, for a fixed family of one-dimensional densities $\mathcal{F} \subset \mathcal{D}_\mathbb{R}$, the set of all densities given by \eqref{eq:gen} for $f_i \in \mathcal{F}$ forms an affine invariant set of densities.
Thus, if we want to find such a basis that components become independent, we need to search for a matrix $W$ and one-dimensional densities such that the approximation
$$
F(\mathrm{x}) \approx \mathrm{det}(W) \cdot f_1(\omega_1^T(\mathrm{x}-\mathrm{m})) \cdot \ldots \cdot f_d(\omega_d^T(\mathrm{x}-\mathrm{m})), \mbox{ for } \mathrm{x} \in \mathbb{R}^d,
$$
is optimal. However, before proceeding to practical implementations, we need to make precise:
\begin{enumerate}
\item how to measure the above approximation,
\item how to deal with data $X$, since we do not have the density,
\item how to work with the family of all possible densities.
\end{enumerate}
The answer to the first point is simple and is given by the Kullback-Leibler divergence, which is defined to be the integral:
$$
D_{\mathrm{KL}}(P\|Q) = \int_{-\infty}^\infty p(x) \, \log\frac{p(x)}{q(x)} \, {\rm d}x,
$$
where $p$ and $q$ denote the densities of $P$ and $Q$. This can be written
as
$$
D_{\mathrm{KL}}(P\|Q)=-h(P)-MLE(P,Q),
$$
where $h$ is the classical Shannon entropy and $MLE(P,Q)=\int p(x)\ln(q(x))\,{\rm d}x$ is the expected log-likelihood.
Thus to minimize the Kullback-Leibler divergence, we can equivalently maximize
the MLE. This is helpful, since for a discrete data set $X$ we have a natural estimator of the expected log-likelihood:
$$
LE(X,Q)=\frac{1}{|X|} \sum_{\mathrm{x} \in X} \ln(q(\mathrm{x})).
$$
Thus we arrive at the following problem.
\medskip
\noindent{\bf Problem [reduced]. }{\em
Let $X$ be a data set. Find an unmixing matrix $W$, center $\mathrm{m}$, and densities $f_1,\ldots,f_d \in \mathcal{D}_\mathbb{R}$ so that the value
$$
\begin{array}{l}
LE(X,f_1,\ldots,f_d,\mathrm{m},W)=\\[6pt]
\frac{1}{|X|} \sum \limits_{\mathrm{x} \in X} \ln(f_1(\omega_1^T(\mathrm{x}-\mathrm{m})) \ldots f_d(\omega_d^T(\mathrm{x}-\mathrm{m})))+\ln(\mathrm{det}(W)) \!= \\[6pt]
\frac{1}{|X|}\sum \limits_{i=1}^d \sum \limits_{\mathrm{x} \in X} \ln(f_i(\omega_i^T(\mathrm{x}-\mathrm{m})))+\ln(\mathrm{det}(W))
\end{array}
$$
is maximized.
}
However, there is still a problem with the last point, as the search over the space of all densities $\mathcal{D}_\mathbb{R}$ is not feasible. Thus, we naturally have to reduce our search to a subclass of densities $\mathcal{F}$ (which should be parametrized by a finite number of parameters).
\medskip
\noindent{\bf Problem [final]. }{\em
Let $X \subset \mathbb{R}^d$ be a data set and $\mathcal{F} \subset \mathcal{D}_\mathbb{R}$ be a set of densities. Find an unmixing matrix $W$, center $\mathrm{m}$, and densities $f_1,\ldots,f_d \in \mathcal{F}$ so that the value
$$
\frac{1}{|X|}\sum_{i=1}^d \sum_{\mathrm{x} \in X} \ln(f_i(\omega_i^T(\mathrm{x}-\mathrm{m})))+\ln(\mathrm{det}(W))
$$
is maximized.
}
It may seem that the most natural choice is Gaussian densities. However, this is not the case, as Gaussian densities are affine invariant and therefore do not ``prefer'' any fixed choice of coordinates\footnote{In fact, one can observe that the choice of Gaussian densities leads to PCA if we restrict ourselves to the case of orthonormal bases.}. In other words, we have to choose a family of densities which is distant from the Gaussian ones.
In the classical ICA approach it is common to use the super-Gaussian logistic distribution:
$$
f(x; \mu,s) = \frac{e^{\frac{x-\mu}{s}}} {s\left(1+e^{\frac{x-\mu}{s}}\right)^2} =\frac{1}{4s} \operatorname{sech}^2\!\left(\frac{x-\mu}{2s}\right).
$$
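The two expressions above are algebraically identical; a quick numerical sketch (ours) confirms this and the normalization:

```python
import numpy as np

def logistic_pdf(x, mu=0.0, s=1.0):
    # First form: e^z / (s * (1 + e^z)^2) with z = (x - mu) / s.
    z = (x - mu) / s
    return np.exp(z) / (s * (1.0 + np.exp(z)) ** 2)

def logistic_pdf_sech(x, mu=0.0, s=1.0):
    # Second form: (1 / (4s)) * sech^2((x - mu) / (2s)).
    return 1.0 / (4.0 * s * np.cosh((x - mu) / (2.0 * s)) ** 2)

xx = np.linspace(-40.0, 40.0, 8001)
same = np.allclose(logistic_pdf(xx, 1.0, 0.7), logistic_pdf_sech(xx, 1.0, 0.7))
```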
The main difference between a Gaussian and a super-Gaussian density is the presence of heavy tails, which can also be viewed as a difference in the fourth moment (kurtosis).
However, such a choice has some negative consequences: the model is very sensitive to outliers. Moreover, if the data are not symmetric, the approximation may not give the expected results, as the model consists only of
symmetric densities.
The idea behind this paper was to choose a model of densities which avoids
both of the above disadvantages. So, instead of choosing a family which differs from the Gaussians by the size of the tails (fourth moment), we chose a family which allows the estimation of asymmetric densities -- the Split Gaussian distribution \cite{gibbons1973estimation}.
\begin{example} \label{ex:2}
In Fig. \ref{fig:den_1d} and Fig. \ref{fig:den_2d} we present a comparison between the Logistic and the Split Normal distribution in 1D and 2D, respectively. In the experiments we use the classical skew dataset Lymphoma \cite{maier2007allelic,pyne2009automated} and the classical heavy-tails dataset Australian athletes \cite{clauset2009power}. In the case of heavy tails both methods work well, since the real dataset exhibits heavy tails which
are not symmetric, and the skew model is able to detect this. On the other hand, in the case of skew data the Split Normal gives essentially better results.
\end{example}
\section{Split Gaussian distribution}\label{SGD}
\begin{figure*}[!t]
\normalsize
\begin{center}
\includegraphics[width=5in]{11.jpg}
\end{center}
\caption{Level sets of the General Split Normal distribution with different parameters.}
\label{fig:ex_level_s}
\end{figure*}
In this section we present our density model.
A natural direction for extending the normal distribution is the introduction of some skewness, and several proposals have indeed emerged, both in the univariate and multivariate case, see \cite{azzalini1985class,azzalini1996multivariate,villani2006multivariate}.
One of the most popular approaches is the Split Normal (SN) distribution, or the Split Gaussian (SG) distribution \cite{gibbons1973estimation}. In our paper we use a generalization of this model, which we call the General Split Normal (GSN) distribution.
We start from the one-dimensional case. After that we present a possible generalization of this definition to the multidimensional setting, which corresponds with the formula (\ref{eq:gen}). Contrary to the Split Gaussian distribution, we skip the assumption of the orthogonality of coordinates (often called principal components), and obtain an ICA model.
\subsection{One-dimensional case}
The density of the one-dimensional Split Gaussian distribution is given by the formula
$$
SN(x;m,\sigma^2,\tau^2) = \left\{ \begin{array}{ll}
c \cdot \exp[-\frac{1}{2\sigma^2}(x-m)^2], & \textrm{for $x\leq m$},\\
c \cdot \exp[-\frac{1}{2\tau^2\sigma^2}(x-m)^2], & \textrm{for $x>m$},\\
\end{array} \right.
$$
where $c=\sqrt{\frac{2}{\pi}}\sigma^{-1}(1+\tau)^{-1}$.
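A direct sketch of this density (our illustration; the parameter values are arbitrary), checking that $c$ indeed normalizes it:

```python
import numpy as np
from scipy.integrate import quad

def split_normal_pdf(x, m, sigma2, tau2):
    """1D Split Normal: two Gaussian halves with scales sigma^2 and tau^2*sigma^2,
    glued at the common mode m and normalized by c = sqrt(2/pi)/(sigma*(1+tau))."""
    sigma, tau = np.sqrt(sigma2), np.sqrt(tau2)
    c = np.sqrt(2.0 / np.pi) / (sigma * (1.0 + tau))
    x = np.asarray(x, dtype=float)
    var = np.where(x <= m, sigma2, tau2 * sigma2)
    return c * np.exp(-((x - m) ** 2) / (2.0 * var))

# The density integrates to 1 even for a strongly asymmetric choice of tau^2.
total, _ = quad(lambda t: float(split_normal_pdf(t, m=1.0, sigma2=0.5, tau2=4.0)),
                -np.inf, np.inf)
```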
As we see, the split normal distribution arises from merging two opposite halves of the probability density functions of two normal distributions at their common mode.
In general, the use of the Split Gaussian distribution (even in 1D) allows one to fit data with better precision (from the likelihood function point of view). In 1982, John \cite{john1982three} showed that the likelihood function can be expressed in a concentrated form, in which the scale parameters $\sigma$ and $\tau$ are functions of the location parameter $m$ (see Theorem 3.1 of \cite{villani2006multivariate}).
Thanks to this theorem we can maximize the likelihood function numerically with respect to the single parameter $m$ only. The remaining parameters are then given explicitly by simple formulas.
\subsection{Multidimensional Split Gaussian distribution }
A natural generalization of the univariate split normal distribution to the multivariate settings was presented by \cite{villani2006multivariate}.
Roughly speaking, the authors assume that a vector $\mathrm{x} \in \mathbb{R}^d$ follows the multivariate Split Normal distribution if its principal components are orthogonal and follow the one-dimensional Split Normal distribution.
\begin{definition}[Definition 2.2. \cite{villani2006multivariate}]\label{def:SN}
A density of the multivariate Split Normal distribution is given by
$$
SN_{d}(\mathrm{x}; \mathrm{m}, \Sigma,\tau)= \prod_{j=1}^{d} SN(\omega_j^T(\mathrm{x}-\mathrm{m});0,\sigma_j^2,\tau_j^2),
$$
where $\omega_{j}$ is the eigenvector corresponding to the $j$-th largest eigenvalue in the spectral decomposition of $\Sigma = W \mathcal{A} W^{T}$ and $\mathrm{m} = [m_1, \ldots, m_d]^T$, $\mathcal{A} = \mathrm{diag}(\sigma_{1}^2,\ldots,\sigma_{d}^2)$ and $\tau=[\tau_{1}^2,\ldots,\tau_{d}^2]$.
\end{definition}
One can easily observe that the principal components $\omega_j^T\mathrm{x}$ are independent.
For this generalization a theorem similar to the one-dimensional case is valid. We can obtain the maximum likelihood estimate by maximizing the function with respect to two parameters: $\mathrm{m} \in \mathbb{R}^d$ and $W \in \mathcal{M}_{d}(\mathbb{R})$, where the columns of $W$ are orthonormal vectors ($\mathcal{M}_{d}(\mathbb{R})$ denotes the set of $d$-dimensional square matrices).
We may use this theorem for the numerical maximization of the likelihood function w.r.t. $\mathrm{m}$ and $W$. Unfortunately, the optimization process on the Stiefel manifold (the set of orthogonal matrices), studied by \cite{absil2009optimization}, is numerically inefficient and requires additional tools. This problem can be avoided by using Eulerian angles, as described by \cite{khatri1977mises}. In the two-dimensional case, $W$ is explicitly parametrized as
$$
W=
\begin{bmatrix}
\cos(\theta) & \sin(\theta) \\
-\sin(\theta) & \cos(\theta)
\end{bmatrix}, \quad -\frac{\pi}{2} < \theta \leq \frac{\pi}{2}.
$$
In such a case we can straightforwardly apply a standard numerical optimization algorithm.
Both of these solutions can be applied. Nevertheless, the unnatural assumption of orthogonality of the principal components causes two negative effects: the optimization process is time-consuming, and a model restricted to orthogonal coordinates cannot accommodate data as well as the general one.
Therefore, in this article we use a more flexible model -- the General Split Normal distribution \cite{spurek2017general}:
\begin{definition}\label{def:GSN}
A density of the multivariate General Split Normal distribution is given by
$$
GSN_{d}(\mathrm{x}; \mathrm{m},W, \sigma^2,\tau^2)=\mathrm{det}(W) \prod_{j=1}^{d} SN(\omega_j^T(\mathrm{x}-\mathrm{m});0,\sigma_j^2,\tau_j^2),
$$
where $\omega_{j}$ is the $j$-th column of non-singular matrix $W$, $\mathrm{m} = (m_1, \ldots, m_d)^T$, $\sigma = (\sigma_{1},\ldots,\sigma_{d})$ and $\tau=(\tau_{1},\ldots,\tau_{d})$.
\end{definition}
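A sketch of this definition in code (ours, not the authors' implementation; we use $|\det W|$ so that the density stays nonnegative for an arbitrary non-singular $W$):

```python
import numpy as np

def sn1d(t, sigma2, tau2):
    # 1D Split Normal density with mode at 0.
    sigma, tau = np.sqrt(sigma2), np.sqrt(tau2)
    c = np.sqrt(2.0 / np.pi) / (sigma * (1.0 + tau))
    var = np.where(t <= 0.0, sigma2, tau2 * sigma2)
    return c * np.exp(-t ** 2 / (2.0 * var))

def gsn_pdf(x, m, W, sigma2, tau2):
    """GSN_d(x; m, W, sigma^2, tau^2) = |det W| * prod_j SN(w_j^T (x - m); 0, s_j^2, t_j^2),
    where w_j is the j-th column of the non-singular matrix W."""
    x, m, W = np.asarray(x, float), np.asarray(m, float), np.asarray(W, float)
    t = W.T @ (x - m)  # t_j = w_j^T (x - m)
    return float(abs(np.linalg.det(W)) *
                 np.prod(sn1d(t, np.asarray(sigma2, float), np.asarray(tau2, float))))

# With W = I and sigma^2 = tau^2 = 1 the GSN reduces to the standard normal,
# so its value at the mode m equals 1/(2*pi) in two dimensions.
val = gsn_pdf([0.5, -1.0], m=[0.5, -1.0], W=np.eye(2), sigma2=[1.0, 1.0], tau2=[1.0, 1.0])
```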
Our model is a natural generalization of the multivariate Split Normal distribution proposed in \cite{villani2006multivariate} (see Definition \ref{def:SN}) and is given in the form formulated by \eqref{eq:gen}
for the set of Split Gaussian densities.
Clearly every Split Normal distribution is a General
Split Normal distribution.
The above generalization is flexible and allows one to fit data with greater precision, see Fig. \ref{fig:rev_1}. The level sets of the GSN distribution with different parameters are presented in Fig.~\ref{fig:ex_level_s}.
We drop the constraint of orthogonality of the principal components. Consequently, we can apply a standard optimization procedure directly. In the next section we discuss how to fit data in our model.
\begin{figure}[!t]
\normalsize
\begin{center}
\includegraphics[width=5in]{2.jpg}
\end{center}
\caption{Comparison between fitting Gaussian, Split Gaussian and General Split distribution on \citep{maier2007allelic,pyne2009automated}. Observe that, contrary to Split Gaussian, General Split Gaussian does not have orthogonal basis.}
\label{fig:rev_1}
\end{figure}
\section{Maximum likelihood estimation}
In the previous section we introduced the GSN distribution.
Now we show how to use likelihood estimation in our setting. As mentioned above, we have to maximize the likelihood function with respect to four parameters. In the case of the General Split Normal distribution (contrary to the classical Gaussian one) we do not have explicit formulas, and consequently we have to solve an optimization problem.
In the first subsection, we reduce our problem to the simpler one by introducing the function~${l}$. Minimization of~${l}$~is equivalent to maximization of the likelihood function.
In the second subsection we present how to minimize our function by using the gradient method.
\subsection{Optimization problem}
The density of the GSN distribution depends on four parameters $\mathrm{m} \in \mathbb{R}^d$, $W \in \mathcal{M}(\mathbb{R}^d)$, $\sigma \in \mathbb{R}^d$, $\tau \in \mathbb{R}^d$.
We can find them by minimizing a simpler function, which depends only on $\mathrm{m} \in \mathbb{R}^d$ and $W \in \mathcal{M}(\mathbb{R}^d)$. The other parameters are given by explicit formulas.
\begin{theorem}\label{the:min}
Let $\mathrm{x}_1,\ldots,\mathrm{x}_n$ be given.
Then the likelihood maximized w.r.t. $\sigma$ and $\tau$ is
\begin{equation}\label{eq:1}
\hat{L}(X;\mathrm{m},W) = \bigg( \frac{2n}{\pi e} \bigg)^{dn/2} \bigg( \frac{1}{|\mathrm{det}(W)|^{\frac{2}{3}}} \prod_{j=1}^{d} g_{j}(\mathrm{m},W) \bigg)^{-3n/2},
\end{equation}
where
$$
\begin{array}{c}
{g}_{j}(\mathrm{m},W) = {s}_{1j}^{1/3} + {s}_{2j}^{1/3},
\\[1ex]
{s}_{1j}= \! \sum\limits_{i \in I_j}[ \omega_{j}^T (\mathrm{x}_i-\mathrm{m})]^2, {I}_j=\{ i = 1,\ldots,n \colon \omega_{j}^T (\mathrm{x}_i-\mathrm{m}) \leq 0 \},
\\[1ex]
{s}_{2j}= \! \sum\limits_{i \in I_j^c}[ \omega_{j}^T (\mathrm{x}_i-\mathrm{m})]^2, {I}_j^c=\{ i = 1,\ldots,n \colon \omega_{j}^T (\mathrm{x}_i-\mathrm{m}) > 0 \},
\end{array}
$$
and the maximum likelihood estimators of $\sigma_{j}^2$ and $\tau_{j}$ are
$$\hat \sigma_j^2(\mathrm{m},W) = \tfrac{1}{n} s_{1j}^{2/3} g_{j}(\mathrm{m},W), \quad
\hat \tau_{j}(\mathrm{m},W)=\left(\frac{s_{2j}}{s_{1j}}\right)^{1/3}.
$$
\end{theorem}
\begin{proof}
See Appendix \ref{App:A}.
\end{proof}
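The closed-form part of the theorem is easy to exercise numerically; a sketch (ours), which for symmetric Gaussian data should return $\hat\tau_j \approx 1$ and $\hat\sigma_j^2 \approx 1$:

```python
import numpy as np

def split_estimators(X, m, W):
    """Compute s_{1j}, s_{2j}, g_j and the ML estimators of sigma_j^2 and tau_j
    for fixed m and W (rows of X are observations, columns of W are the w_j)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    T = (X - m) @ W  # T[i, j] = w_j^T (x_i - m)
    s1 = np.sum(np.where(T <= 0.0, T, 0.0) ** 2, axis=0)
    s2 = np.sum(np.where(T > 0.0, T, 0.0) ** 2, axis=0)
    g = s1 ** (1.0 / 3.0) + s2 ** (1.0 / 3.0)
    sigma2_hat = s1 ** (2.0 / 3.0) * g / n
    tau_hat = (s2 / s1) ** (1.0 / 3.0)
    return s1, s2, g, sigma2_hat, tau_hat

rng = np.random.default_rng(0)
Xs = rng.standard_normal((20000, 2))  # symmetric data: no split expected
s1, s2, g, sigma2_hat, tau_hat = split_estimators(Xs, np.zeros(2), np.eye(2))
```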
\begin{figure*}[t!]
\normalsize
\begin{center}
\includegraphics[width=5in]{3.jpg}
\end{center}
\caption{Results of image separation with the uses of various ICA algorithms.}
\label{fig:image_ICA_1}
\end{figure*}
Thanks to the above theorem, instead of looking for the maximum of the full likelihood function it is enough to maximize the simpler function~(\ref{eq:1}), which depends on only two parameters $\mathrm{m} \in \mathbb{R}^d$ and $W \in \mathcal{M}(\mathbb{R}^d)$. Define
\begin{equation}\label{equ:ll}
{l}(X;\mathrm{m},W) = \frac{1}{|\mathrm{det}(W)|^{\frac{2}{3}}} \prod_{j=1}^{d} {g}_{j}(\mathrm{m},W),
\end{equation}
where $\omega_{j}$ stands for the $j$-th column of matrix $W$.
Consequently, maximization of (\ref{eq:1}) is equivalent to minimization of (\ref{equ:ll}), see the following corollary.
\begin{corollary}\label{c2}
Let $X \subset \mathbb{R}^d$, $\mathrm{m} \in \mathbb{R}^d$, $W \in \mathcal{M}(\mathbb{R}^d)$ be given, then
$$
\operatornamewithlimits{argmax}_{\mathrm{m},W} \hat{L}(X;\mathrm{m},W) = \operatornamewithlimits{argmin}_{\mathrm{m},W} {l}(X;\mathrm{m},W).
$$
\end{corollary}
\subsection{Gradient}
One of the possible methods of optimization is the gradient method. Since the minimum of ${l}$ is equal to the minimum of $\ln({l})$, in this subsection we calculate the gradient of $\ln({l})$.
Before we prove suitable Theorem \ref{ther:grad}, we recall the following lemma.
\begin{lemma}\label{jacobi}
Let $A = (a_{ij})_{1 \leq i,j \leq d}$ be a $d \times d$ real matrix. Then
\begin{equation}
\frac{\partial \mathrm{det}(A)}{\partial a_{ij}} = \mathrm{adj}^T(A)_{ij},
\end{equation}
where $\mathrm{adj}(A)$ stands for the adjugate of $A$, i.e. the transpose of the cofactor matrix.
\end{lemma}
\begin{proof}
By the Laplace expansion $\mathrm{det} A = \sum\limits_{j=1}^{d} (-1)^{i+j} a_{ij} M_{ij}$ where $M_{ij}$ is the minor of the entry in the $i$-th row and $j$-th column. Hence
$$\frac{\partial \mathrm{det} A}{\partial a_{ij}} = (-1)^{i+j} M_{ij} = \mathrm{adj}^T(A)_{ij}.$$
\end{proof}
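The lemma can be checked numerically (our sketch; the adjugate is computed via $\mathrm{adj}(A) = \det(A)\,A^{-1}$, valid for invertible $A$):

```python
import numpy as np

def adjugate(A):
    # adj(A) = det(A) * A^{-1} for invertible A (transpose of the cofactor matrix).
    return np.linalg.det(A) * np.linalg.inv(A)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
i, j, eps = 1, 2, 1e-6

# Central finite difference of det(A) with respect to the entry a_ij ...
E = np.zeros_like(A)
E[i, j] = eps
fd = (np.linalg.det(A + E) - np.linalg.det(A - E)) / (2.0 * eps)

# ... which the lemma says equals adj(A)^T at position (i, j).
lemma_value = adjugate(A).T[i, j]
```

Since the determinant is affine in each single entry, the central difference here is exact up to rounding.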
Now we are ready to calculate the gradient of our cost function.
\begin{theorem}\label{ther:grad}
Let $X \subset \mathbb{R}^d$, $\mathrm{m} = (\mathrm{m}_1, \ldots, \mathrm{m}_d)^T \in \mathbb{R}^d$, $W = (\omega_{ij})_{1 \leq i,j \leq d}$ non-singular be given.
Then
$\nabla_{\mathrm{m}} \ln {l}(X;\mathrm{m},W) = \left( \frac{\partial \ln {l}(X;\mathrm{m},W)}{\partial \mathrm{m}_1}, \ldots, \frac{\partial \ln {l}(X;\mathrm{m},W)}{\partial \mathrm{m}_d} \right)^T$,
where
$$
\begin{array}{l}
\frac{\partial \ln {l}(X;\mathrm{m},W)}{\partial \mathrm{m}_k} =
\sum \limits_{j=1}^d \frac{-1}{{s}_{1j}^{\frac{1}{3}} + {s}_{2j}^{\frac{1}{3}}} \bigg(
\frac{1}{3 {s}_{1j}^{\frac{2}{3}}} \sum \limits_{i \in I_j} 2 \omega_j^T (\mathrm{x}_i - \mathrm{m}) \omega_{jk} +
\frac{1}{3 {s}_{2j}^{\frac{2}{3}}} \sum \limits_{i \in I_j^c} 2 \omega_j^T (\mathrm{x}_i - \mathrm{m}) \omega_{jk}
\bigg).
\end{array}
$$
Moreover,
$
\nabla_{W} \ln {l}(X;\mathrm{m},W) = \left[ \frac{\partial \ln l(X;\mathrm{m},W)}{\partial \omega_{pk}} \right]_{1 \leq p,k \leq d},
$
where
$$
\begin{array}{l}
\frac{\partial \ln l(X;\mathrm{m},W)}{\partial \omega_{pk}} =
-\frac{2}{3} (W^{-1})^T_{pk} +
\frac{1}{{s}_{1p}^{\frac{1}{3}} +{s}_{2p}^{\frac{1}{3}}}
\bigg(
\frac{1}{3} {s}_{1p}^{-\frac{2}{3}} \sum \limits_{i \in {I}_p} 2 \omega^T_p (\mathrm{x}_i - \mathrm{m}) (\mathrm{x}_{ik} - \mathrm{m}_k) + {} \\[6pt]
\frac{1}{3} {s}_{2p}^{-\frac{2}{3}} \sum \limits_{i \in {I}_p^c} 2 \omega^T_p (\mathrm{x}_i - \mathrm{m}) (\mathrm{x}_{ik} - \mathrm{m}_k) \bigg),
\end{array}
$$
and
$$
\begin{array}{c}
{s}_{1j}= \! \sum\limits_{i \in I_j}[ \omega_{j}^T (\mathrm{x}_i-\mathrm{m})]^2, {I}_j=\{ i = 1,\ldots,n \colon \omega_{j}^T (\mathrm{x}_i-\mathrm{m}) \leq 0 \},
\\[1ex]
{s}_{2j}= \! \sum\limits_{i \in I_j^c}[ \omega_{j}^T (\mathrm{x}_i-\mathrm{m})]^2, {I}_j^c=\{ i = 1,\ldots,n \colon \omega_{j}^T (\mathrm{x}_i-\mathrm{m}) > 0 \}.
\end{array}
$$
\end{theorem}
\begin{proof}
See Appendix \ref{App:B}.
\end{proof}
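As a sanity check (our sketch, not the authors' code), the $\mathrm{m}$-gradient of the theorem can be compared against finite differences of $\ln l$:

```python
import numpy as np

def log_l(X, m, W):
    # ln l(X; m, W) with l = |det(W)|^{-2/3} * prod_j (s1j^{1/3} + s2j^{1/3}).
    T = (X - m) @ W
    s1 = np.sum(np.where(T <= 0.0, T, 0.0) ** 2, axis=0)
    s2 = np.sum(np.where(T > 0.0, T, 0.0) ** 2, axis=0)
    return (-(2.0 / 3.0) * np.log(abs(np.linalg.det(W)))
            + np.sum(np.log(s1 ** (1.0 / 3.0) + s2 ** (1.0 / 3.0))))

def grad_m(X, m, W):
    # The theorem's formula: d ln l / d m_k = sum_j coef_j * w_jk.
    T = (X - m) @ W  # T[i, j] = w_j^T (x_i - m)
    s1 = np.sum(np.where(T <= 0.0, T, 0.0) ** 2, axis=0)
    s2 = np.sum(np.where(T > 0.0, T, 0.0) ** 2, axis=0)
    g = s1 ** (1.0 / 3.0) + s2 ** (1.0 / 3.0)
    a = 2.0 * np.sum(np.where(T <= 0.0, T, 0.0), axis=0)  # sum over I_j of 2 w_j^T(x_i - m)
    b = 2.0 * np.sum(np.where(T > 0.0, T, 0.0), axis=0)   # same over the complement
    coef = -(a / (3.0 * s1 ** (2.0 / 3.0)) + b / (3.0 * s2 ** (2.0 / 3.0))) / g
    return W @ coef

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
m = np.array([0.1, -0.2, 0.05])
W = np.eye(3) + 0.1 * rng.standard_normal((3, 3))

eps = 1e-6
fd = np.array([(log_l(X, m + eps * e, W) - log_l(X, m - eps * e, W)) / (2.0 * eps)
               for e in np.eye(3)])
```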
Thanks to the above theorem we can use gradient descent, a first-order optimization algorithm. To find a local minimum of the cost function $\ln(l)$ using gradient descent, one takes steps proportional to the negative of the gradient of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function, see Algorithm \ref{alg1}.
\begin{algorithm}[!h]
\caption{ICA$_{SG}$ algorithm}
\label{alg1}
\begin{algorithmic}
\STATE {\bf Input}
\STATE\hspace\algorithmicindent data set $X$
\STATE {\bf Initial conditions}
\STATE\hspace\algorithmicindent initialization of mean vector $\mathrm{m}=\mathrm{mean}(X)$
\STATE\hspace\algorithmicindent initialization of matrix $W = \mathrm{cov}(X)$
\STATE {\bf Gradient algorithm}
\STATE\hspace\algorithmicindent obtain new values of $\mathrm{m}$ and $W$ by applying the gradient method to the function $\log(l)$ (see formula (\ref{equ:ll})):
\STATE\hspace\algorithmicindent $$(\mathrm{m},W) =\operatornamewithlimits{argmin}\limits_{\bar{\mathrm{m}},\bar{W}} \log({l}(X;\bar{\mathrm{m}},\bar{W})), $$
\STATE\hspace\algorithmicindent where
\STATE\hspace\algorithmicindent $$\nabla_{\mathrm{m}} \ln {l}(X;\mathrm{m},W)$$ $$\nabla_{W} \ln {l}(X;\mathrm{m},W)$$
\STATE\hspace\algorithmicindent are given by Theorem \ref{ther:grad}
\STATE\hspace\algorithmicindent calculate $\sigma \in \mathbb{R}^d$ and $\tau \in \mathbb{R}^d$ by using Theorem \ref{the:min}
\STATE {\bf Return value}
\STATE\hspace\algorithmicindent return optimal ICA basis $(\mathrm{m},W) $.
\end{algorithmic}
\end{algorithm}
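A compact sketch of the algorithm above (ours; a generic derivative-free optimizer from scipy stands in for the gradient iteration, while the initialization follows the algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def log_l(X, m, W):
    # ln l(X; m, W): the quantity minimized (equivalent to maximum likelihood).
    T = (X - m) @ W
    s1 = np.sum(np.where(T <= 0.0, T, 0.0) ** 2, axis=0)
    s2 = np.sum(np.where(T > 0.0, T, 0.0) ** 2, axis=0)
    return (-(2.0 / 3.0) * np.log(abs(np.linalg.det(W)))
            + np.sum(np.log(s1 ** (1.0 / 3.0) + s2 ** (1.0 / 3.0))))

def fit_icasg(X, maxiter=2000):
    """Initialize m = mean(X), W = cov(X) as in the algorithm, then minimize
    ln l over (m, W) jointly; sigma and tau follow from the closed-form estimators."""
    n, d = X.shape
    m0, W0 = X.mean(axis=0), np.cov(X.T)
    z0 = np.concatenate([m0, W0.ravel()])
    obj = lambda z: log_l(X, z[:d], z[d:].reshape(d, d))
    res = minimize(obj, z0, method="Nelder-Mead", options={"maxiter": maxiter})
    return res.x[:d], res.x[d:].reshape(d, d), res.fun

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 2)) * np.array([1.0, 3.0])  # toy anisotropic data
m_hat, W_hat, val = fit_icasg(X)
init = log_l(X, X.mean(axis=0), np.cov(X.T))
```

The optimizer can only improve on the initialization; a gradient method using the formulas of the previous section would converge faster in higher dimensions.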
At the end of this section we present a comparison of computational efficiency
between ICA$_{SG}$ \ and various ICA methods, see Fig. \ref{fig:time_1}. In our experiment we consider the classical image separation problem, where we mix two images by adding and subtracting them. We use ten pairs of images, each pair scaled to different sizes. In Fig. \ref{fig:time_1} we present the mean computation time. FastICA, Infomax and JADE are the most efficient but do not solve the image separation problem sufficiently well, see Tab. \ref{tab:congru_img_1}. On the other hand, ProDenICA, which gives results comparable to ICA$_{SG}$, is much slower.
\begin{figure*}[!t]
\normalsize
\begin{center}
\includegraphics[width=5in]{9.jpg}
\end{center}
\caption{Comparison of computational efficiency between ICA$_{SG}$ \ and various ICA methods.}
\label{fig:time_1}
\end{figure*}
\section{Experiments and analysis}\label{ex}
To compare our method to classical ones we use
Tucker's congruence coefficient \cite{lorenzo2006tucker}
(uncentered correlation) defined by
$$
Cr(\mathrm{s}, \bar{\mathrm{s}}) = \frac{ \sum_{i=1}^d \mathrm{s}_i \bar{\mathrm{s}}_i}{ \sqrt{\sum_{i=1}^d \mathrm{s}_i^2}\sqrt{ \sum_{i=1}^d \bar{\mathrm{s}}^2_i } }.
$$
Its values range between $-1$ and $+1$. It can be used to study the similarity of extracted factors across different samples. Generally, a congruence coefficient of $0.9$ indicates a high degree of factor similarity, while a coefficient of $0.95$ or higher indicates that the factors are virtually identical.
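The coefficient is straightforward to compute; a sketch (ours), which also illustrates its behaviour under rescaling and sign flips:

```python
import numpy as np

def tucker_congruence(s, s_bar):
    """Tucker's congruence coefficient (uncentered correlation) of two signals."""
    s, s_bar = np.asarray(s, float), np.asarray(s_bar, float)
    return float(np.dot(s, s_bar) / (np.linalg.norm(s) * np.linalg.norm(s_bar)))

s = np.array([1.0, 2.0, 3.0])
cr_scaled = tucker_congruence(s, 2.0 * s)   # positive rescaling: coefficient stays 1
cr_flipped = tucker_congruence(s, -s)       # sign flip only flips the sign
```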
In the case of ICA methods, multiplying any of the sources by a scalar does not change the results. Therefore the sign of the congruence coefficient is not important, and we compare the absolute values of Tucker's congruence.
We evaluate our method in the context of images, sound, hyperspectral unmixing and EEG data.
For comparison we use R package {\tt ica} \cite{ica}, {\tt PearsonICA} \cite{pearsonica}, {\tt ProDenICA} \cite{prodenica}, {\tt tsBSS} \cite{tsBSS}.
The most popular method used in practice is the FastICA algorithm \cite{hyvarinen1999fast,helwig2013critique}, which uses negentropy. In this context we can use three different functions to estimate negentropy:
logcosh, exp and kurtosis.
We also compare our method with algorithm using Information-Maximization (Infomax) approach \cite{bell1995information}. Similarly to FastICA we consider three possible nonlinear functions: hyperbolic tangent, logistic and extended Infomax.
We also consider an algorithm which uses Joint Approximate Diagonalization of Eigenmatrices (JADE), proposed by Cardoso and Souloumiac \cite{cardoso1993blind,helwig2013critique}.
One of the most popular ICA methods dedicated to skew data is PearsonICA \cite{karvanen2000pearson,karvanen2002blind}, which minimizes mutual information using a Pearson \cite{stuart1968advanced} system-based parametric model. Another model we consider is ProDenICA \cite{bach2002kernel,hastie2009elements}, which is based not on a
single nonlinear function, but on an entire function space of candidate nonlinearities. In particular, the method works with functions in a reproducing kernel Hilbert space and makes use of the ``kernel trick'' to search over this space efficiently.
We also compare our method with FixNA \cite{shi2009blind}, a method for the blind source separation problem.
\subsection{Separation of images}
One of the most popular applications of ICA is the separation of images. In our experiments we use four images from the USC-SIPI Image Database of size $256 \times 256$ pixels (4.1.01, 4.1.06, 4.1.02, 4.1.03) and eight of size $512 \times 512$ pixels (4.2.04, 4.2.02, boat.512, elaine.512, 5.2.10, 5.2.08, 5.3.01, 4.2.03). We also use 8 images from the Berkeley Segmentation Dataset of size $482 \times 321$ with indexes (\#119082, \#42049, \#43074, \#38092, \#157055, \#220075, \#295087, \#167062). We make random pairs of the above images and use them as source signals, combined by the mixing matrix $A = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $. From a practical point of view, we simply obtain two new images by adding and subtracting the source pictures. Our goal is to reconstruct the original images by using only the knowledge of the mixed ones. A visualization of this process is presented in Fig. \ref{fig:image_ICA_1}. The results of this experiment are presented in Tab.~\ref{tab:congru_img_1}, where we exhibit Tucker's congruence coefficients.
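The mixing step of this experiment can be sketched as follows (a toy illustration with synthetic vectors standing in for flattened images; with $A$ known the unmixing is exact, which is the ideal an ICA method approximates from the mixtures alone):

```python
import numpy as np

rng = np.random.default_rng(4)
# Two synthetic "source images", flattened to vectors for simplicity.
S = rng.uniform(size=(2, 256 * 256))

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])   # mixing matrix: sum and difference of the sources

X = A @ S                     # the two observed (mixed) images
S_rec = np.linalg.inv(A) @ X  # exact unmixing when A is known
```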
In terms of the Tucker's congruence coefficient measure we obtain better results in almost all situations. The ICA$_{SG}$ \ method recovers the original signals essentially better. In Fig.~\ref{fig:image_ICA_1} we can see that ICA$_{SG}$ \ recovers the source signals almost perfectly.
\begin{table*}[!t]
\centering
\scalebox{0.575}{
\begin{tabular}{ | c | c | c c c | c c c | c | c | c | c | }
\multicolumn{1}{c}{} & \multicolumn{1}{c}{ICA$_{SG}$} & \multicolumn{3}{c}{FastICA} & \multicolumn{3}{c}{Infomax} & \multicolumn{1}{c}{JADE} & \multicolumn{1}{c}{PearsonICA} & \multicolumn{1}{c}{ProDenICA} & \multicolumn{1}{c}{FixNA} \\
& & logcosh & exp & kurtosis & tanh & tangent & logistic & & & & \\
\hline
4.1.01 & \bf-0.9818 & 0.5481 & -0.5457 & -0.5485 & 0.548 & -0.5484 & -0.548 & -0.5492 & -0.5308 & -0.0013 & 0.5503 \\
4.1.02 & \bf0.992 & 0.6696 & 0.6644 & 0.6707 & 0.6695 & 0.6705 & 0.6695 & 0.6726 & 0.6696 & -0.0981 & -0.6761 \\ \hline
4.1.06 & \bf-0.9609 & -0.4297 & -0.4297 & -0.4296 & -0.4297 & -0.4297 & -0.4296 & -0.4296 & -0.4297 & 0.4297 & 0.0148 \\
4.1.03 & \bf0.5664 & 0.2062 & 0.2062 & 0.2057 & 0.2061 & 0.206 & 0.2058 & 0.2058 & -0.2062 & 0.207 & 0.0127 \\ \hline
4.2.04 & \bf-0.5034 & 0.0506 & 0.0528 & -0.0499 & 0.0505 & -0.0512 & 0.0508 & 0.0397 & 0.3123 & -0.3164 & 0.1461 \\
5.2.10 & 0.2893 & -0.0719 & -0.0749 & 0.0709 & -0.0717 & 0.0727 & -0.0722 & -0.057 & -0.4275 & \bf0.4334 & -0.1979 \\ \hline
4.2.02 & \bf0.2305 & -0.0376 & 0.0203 & -0.0017 & 0.0377 & 0.0265 & 0.0061 & -0.0093 & -0.1228 & 0.1282 & 0.1235 \\
5.2.08 & \bf0.5717 & 0.1037 & -0.0625 & -0.0097 & -0.1039 & -0.0773 & -0.0285 & 0.0086 & -0.2913 & -0.3091 & -0.2931 \\ \hline
boat.512 & \bf 0.3593 & 0.0351 & 0.0314 & -0.056 & 0.0343 & -0.0449 & 0.0298 & 0.0356 & -0.1046 & -0.0461 & 0.3175 \\
5.3.01 & 0.4316 & 0.0078 & 0.0138 & -0.0262 & 0.0091 & -0.008 & 0.0164 & 0.007 & 0.1061 & 0.0486 & \bf -0.5303 \\ \hline
elaine.512 & \bf0.5874 & 0.32 & 0.32 & -0.32 & 0.32 & -0.32 & 0.32 & 0.32 & -0.32 & 0.0287 & 0.2282 \\
4.2.03 & -0.0226 & -0.3196 & -0.3196 & 0.3201 & -0.3196 & 0.3199 & -0.3196 & \bf -0.3202 & -0.3195 & -0.048 & -0.2554 \\ \hline
119082 & \bf0.9987 & 0.5736 & 0.5736 & 0.5731 & 0.5737 & 0.5733 & 0.5735 & 0.5735 & -0.032 & 0.5744 & 0.3695 \\
157055 & \bf0.389 & -0.3619 & -0.3619 & -0.3618 & -0.3619 & -0.3619 & -0.3619 & -0.3619 & 0.0046 & 0.3619 & -0.2446 \\ \hline
42049 & \bf -0.7493 & 0.3009 & 0.3028 & -0.299 & -0.3005 & -0.3031 & -0.3007 & -0.2898 & 0.2596 & 0.0421 & 0.142 \\
220075 & 0.4359 & -0.5087 & -0.5154 & 0.503 & 0.5074 & \bf0.5168 & 0.5081 & 0.4789 & 0.4838 & -0.0645 & -0.1839 \\ \hline
43074 & \bf-0.7371 & 0.0344 & 0.0323 & 0.0429 & 0.0348 & 0.0404 & 0.0342 & 0.0324 & 0.0891 & 0.3925 & 0.2458 \\
295087 & -0.3997 & -0.048 & -0.0458 & -0.0566 & -0.0484 & -0.0541 & -0.0478 & -0.0459 & -0.1035 & \bf 0.4015 & -0.2406 \\ \hline
38092 & \bf -0.5949 & 0.0555 & 0.0564 & 0.031 & -0.0553 & 0.041 & 0.0557 & 0.0375 & 0.0535 & 0.4036 & 0.2614 \\
167062 & 0.3255 & -0.0025 & -0.0041 & 0.0425 & 0.0021 & 0.0241 & -0.0029 & 0.0306 & 0.0011 & \bf 0.7404 & -0.5495 \\ \hline
\end{tabular}
}
\caption{The Tucker's congruence coefficient measure between original images and results of different ICA algorithms.}
\label{tab:congru_img_1}
\end{table*}
\subsection{Cocktail-party problem}
In this subsection we compare our method with classical ones in the case of cocktail-party problem.
Imagine that you are in a room where two people are speaking simultaneously. You have two microphones, which you hold in different locations. The microphones give you two recorded time signals, which we can interpret as the mixed signal $\mathrm{x}$. Each of these recorded signals is a weighted sum of the speech signals emitted by the two speakers, which we denote by $\mathrm{s}$.
The cocktail-party problem is to estimate the two original speech signals.
In our experiments we use signals obtained by mixing synthetic sources\footnote{We use signals from \url{http://research.ics.aalto.fi/ica/cocktail/cocktail_en.cgi}.} (as before, we use the mixing matrix $A = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $).
A comparison between the methods is presented in Tab. \ref{tab:congru_sound_1}. In the case of the cocktail-party problem our method recovers
the source signals better than the classical methods.
\begin{table*}[!t]
\centering
\scalebox{0.6}{
\begin{tabular}{ | c | c | c c c | c c c | c | c | c | c | }
\multicolumn{1}{c}{} & \multicolumn{1}{c}{ICA$_{SG}$} & \multicolumn{3}{c}{FastICA} & \multicolumn{3}{c}{Infomax} & \multicolumn{1}{c}{JADE} & \multicolumn{1}{c}{PearsonICA} & \multicolumn{1}{c}{ProDenICA} & \multicolumn{1}{c}{FixNA} \\
& & logcosh & exp & kurtosis & tanh & tangent & logistic & & & & \\
\hline
source 1 & \bf 0.1597 & 0.1097 & 0.1096 & 0.1101 & 0.1097 & 0.11 & 0.1097 & 0.1101 & 0.1097 & 0.1412 & 0.109 \\
source 2 & 0.7739 & 0.7705 & 0.7713 & 0.7672 & 0.7705 & 0.7685 & 0.7705 & 0.7704 & 0.7704 & \bf 0.9998 & 0.7751 \\ \hline
source 2 & \bf0.1388 & 0.0899 & 0.0899 & 0.0908 & 0.0899 & 0.0899 & 0.0899 & 0.0908 & 0.0899 & 0.0984 & 0.0907 \\
source 3 & 0.9435 & 0.9075 & 0.9076 & 0.898 & 0.9074 & 0.907 & 0.9074 & 0.9075 & 0.9075 & \bf0.9989 & 0.8988 \\ \hline
source 3 & \bf0.1985 & 0.079 & 0.0791 & 0.079 & 0.079 & 0.079 & 0.079 & 0.079 & 0.0789 & 0.0843 & 0.0791 \\
source 4 & 0.8453 & 0.8887 & 0.8882 & 0.8889 & 0.8887 & \bf0.8892 & 0.8887 & 0.8898 & 0.8898 & 0.8459 & 0.8882 \\ \hline
source 4 & \bf0.232 & 0.0989 & 0.0989 & 0.099 & 0.0989 & 0.0989 & 0.0989 & 0.099 & 0.0989 & 0.1153 & 0.0989 \\
source 5 & 0.7679 & 0.7798 & 0.7799 & 0.7793 & 0.7798 & 0.7798 & 0.7798 & 0.7801 & 0.7801 & \bf0.9344 & 0.7796 \\ \hline
source 5 & \bf0.1728 & 0.0989 & 0.099 & 0.0988 & 0.0989 & 0.0989 & 0.0989 & 0.0989 & 0.0989 & 0.0963 & 0.0987 \\
source 6 & 0.9424 & 0.9245 & 0.9243 & 0.9256 & 0.9246 & 0.925 & 0.9246 & 0.9245 & 0.9245 & \bf0.9729 & 0.9273 \\ \hline
source 6 & \bf0.15 & 0.0404 & 0.0404 & 0.0402 & 0.0404 & 0.0404 & 0.0404 & 0.0402 & 0.0404 & 0.0567 & 0.0402 \\
source 7 & 0.7417 & 0.7129 & 0.7134 & 0.707 & 0.7132 & 0.7125 & 0.7129 & 0.7124 & 0.7124 & \bf0.9998 & 0.7099 \\ \hline
source 7 & \bf0.1036 & 0.0839 & 0.084 & 0.0839 & 0.0839 & 0.0839 & 0.0839 & 0.0839 & 0.084 & 0.093 & 0.0836 \\
source 8 & 0.908 & 0.9016 & 0.9015 & 0.9019 & 0.9019 & 0.9019 & 0.9017 & 0.9014 & 0.9014 & \bf0.9999 & 0.9056 \\ \hline
source 8 & 0.1166 & 0.1153 & 0.1156 & 0.1145 & 0.1152 & 0.1148 & 0.1153 & 0.1155 & 0.1149 & \bf0.1427 & 0.1147 \\
source 9 & 0.8212 & 0.8136 & 0.8116 & 0.8195 & 0.8141 & 0.8174 & 0.8138 & 0.8165 & 0.8165 & \bf0.9996 & 0.8176 \\ \hline
\end{tabular}
}
\caption{The Tucker's congruence coefficient measure between original sound and results of different ICA algorithms in the case of cocktail-party problem.}
\label{tab:congru_sound_1}
\end{table*}
\subsection{Hyperspectral Unmixing}
Independent component analysis has recently been
applied to hyperspectral unmixing as a result of its low
computation time and its ability to operate without prior information.
However, when applying ICA to hyperspectral unmixing,
the independence assumption in the ICA model conflicts with
the abundance sum-to-one constraint and the abundance nonnegativity
constraint in the linear mixture model, which affects the
hyperspectral unmixing accuracy. Nevertheless, ICA was recently applied in this area \cite{wang2015abundance,caiafa2008blind}. In this subsection we present a simple example which shows that our method can be used for spectral data.
Urban data \cite{fyzhu2014IJPRSSSNMF,fyzhu2014TIPDgSNMF,fyzhu2014JSTSPRRLbSF} is one of the most widely used hyperspectral data sets in hyperspectral unmixing studies. The image has $307 \times 307$ pixels, each of which corresponds to a $2 \times 2$ m area. In this image, there are 210 wavelengths ranging from 400 nm to 2500 nm, resulting in a spectral resolution of 10 nm. After channels 1--4, 76, 87, 101--111, 136--153 and 198--210 are removed (due to dense water vapor and atmospheric effects), 162 channels remain (this is a common preprocessing step for hyperspectral unmixing analyses). The ground truth \cite{fyzhu2014IJPRSSSNMF,fyzhu2014TIPDgSNMF,fyzhu2014JSTSPRRLbSF} contains 4 channels: \#1 Asphalt, \#2 Grass, \#3 Tree and \#4 Roof.
In this experiment a highly mixed area of size $200 \times 150$ pixels is cut from the original data set (a similar example was shown in \cite{wang2015abundance}).
In our experiment we apply various ICA methods and report the Tucker's congruence coefficient measure between each layer and the closest reference channel, see Fig. \ref{fig:spec_1}. ICA$_{SG}$ \ and ProDenICA give layers which contain more information than the other approaches. The distances between the four best channels and the reference ones are presented in Tab. \ref{tab:spec}.
\begin{table*}[!t]
\centering
\scalebox{0.7}{
\begin{tabular}{ | c | c | c c c | c c c | c | c | }
\multicolumn{1}{c}{} & \multicolumn{1}{c}{ICA$_{SG}$} & \multicolumn{3}{c}{FastICA} & \multicolumn{3}{c}{Infomax} & \multicolumn{1}{c}{PearsonICA} & \multicolumn{1}{c}{ProDenICA} \\
& & logcosh & exp & kurtosis & tanh & tangent & logistic & & \\
\hline
\#1 Asphalt &\bf 0.6774 & 0.2859 & 0.2864 & -0.2595 & -0.2972 & -0.2954 & -0.2972 & 0.20978 & 0.4928 \\
\#2 Grass & \bf -0.7784 & -0.2746 & -0.2605 & -0.2798 & -0.2814 & -0.2816 & -0.2814 & -0.2412 & -0.4323 \\
\#3 Tree & \bf 0.7267 & 0.2338 & 0.2717 & -0.2547 & 0.2441 & 0.2354 & 0.2442 & 0.2482 & -0.5961 \\
\#4 Roof &\bf 0.6666 & -0.4256 & 0.4279 & 0.4167 & -0.4244 & 0.4301 & -0.4244 & 0.4193 & -0.6128 \\
\end{tabular}
}
\caption{The Tucker's congruence coefficient measure between reference layers and results of different ICA algorithms in the case of the urban data set.}
\label{tab:spec}
\end{table*}
\begin{figure*}[!t]
\normalsize
\begin{center}
\includegraphics[width=5in]{5.jpg}
\end{center}
\caption{Congruence distance between layers obtained by different ICA algorithms and the closest reference channel.}
\label{fig:spec_1}
\end{figure*}
\subsection{EEG}
At the end of this section we present how our method works in the case of EEG signals. In this context, ICA is applied to many different tasks, such as removing
eye movement, blink, muscle, heart and line noise artifacts.
In this experiment we concentrate on eye movement and blink artifacts.
Our goal here is to demonstrate that our method is capable of
finding artifacts in real EEG data. However,
we emphasize that it does not provide a complete solution
to any of these practical problems. Such a solution usually
entails a significant amount of domain-specific knowledge
and engineering. Nevertheless, from these preliminary
results with EEG data, we believe that
the method presented in this paper provides a reasonable
solution for signal separation, which is simple and
effective enough to be easily customized for a broad range
of practical problems.
For EEG analysis, the rows of the input matrix $\mathrm{x}$ are the EEG signals recorded at
different electrodes, the rows of the output data matrix $\mathrm{s} = W\mathrm{x}$ are the time courses of
activation of the ICA components, and the columns of the inverse matrix $W^{-1}$ give
the projection strengths of the respective components onto the scalp sensors.
The EEG data set used in the analysis was collected from 40 scalp electrodes (see Fig. \ref{fig:EEG} a)). The second and third electrodes are located very close to the eyes and can serve as reference signals (we can use them for removing eye-blink artifacts). In Fig. \ref{fig:EEG} b) we present the components obtained by ICA$_{SG}$. Although the scale of this figure is large, we can identify the components whose spikes occur at exactly the same positions as in the two reference signals (see Fig. \ref{fig:EEG} c)). After removing the selected components and back-projecting to the original channel space, we obtain signals without eye-blink artifacts (see Fig. \ref{fig:EEG} d), and compare Fig. \ref{fig:EEG} a) with Fig. \ref{fig:EEG} d)).
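The removal-and-back-projection procedure just described (zero out the artifact rows of $\mathrm{s} = W\mathrm{x}$ and reconstruct the channels with $W^{-1}$) can be sketched as follows. This is a minimal synthetic NumPy illustration: for brevity we use the exact inverse of a known mixing matrix in place of an unmixing matrix estimated by ICA$_{SG}$ or any other ICA algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
# Two "neural" sources plus one blink-like spike train.
sources = np.vstack([
    np.sin(2 * np.pi * 7 * t),
    np.sign(np.sin(2 * np.pi * 3 * t)),
    (np.abs(t - 0.5) < 0.02).astype(float),  # eye-blink artifact
])
A = rng.normal(size=(3, 3))      # mixing matrix (channels x components)
x = A @ sources                  # simulated multi-channel recording

# In practice W is estimated by an ICA algorithm; here we use the exact
# inverse so the demonstration focuses on the removal step itself.
W = np.linalg.inv(A)
s = W @ x                        # component activations
s[2] = 0.0                       # suppress the blink component
x_clean = np.linalg.inv(W) @ s   # back-project to channel space
```

On real recordings $W$ would come from the ICA algorithm, and the artifact components would be selected by comparison with the electrodes placed near the eyes (e.g. via congruence) rather than by construction.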
\begin{figure*}[!t]
\normalsize
\begin{center}
\includegraphics[width=5in]{4.jpg}
\end{center}
\caption{Results of ICA$_{SG}$ in the case of EEG data.}
\label{fig:EEG}
\end{figure*}
\textwidth 160mm
\textheight 220mm
\allowdisplaybreaks
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\begin{document}
\topmargin 0pt
\oddsidemargin 0mm
\def\be{\begin{equation}}
\def\ee{\end{equation}}
\def\bea{\begin{eqnarray}}
\def\eea{\end{eqnarray}}
\def\ba{\begin{array}}
\def\ea{\end{array}}
\def\ben{\begin{enumerate}}
\def\een{\end{enumerate}}
\def\nab{\bigtriangledown}
\def\tpi{\tilde\Phi}
\def\nnu{\nonumber}
\newcommand{\eqn}[1]{(\ref{#1})}
\newcommand{\half}{{\frac{1}{2}}}
\newcommand{\vs}[1]{\vspace{#1 mm}}
\newcommand{\dsl}{\pa \kern-0.5em /}
\def\a{\alpha}
\def\b{\beta}
\def\g{\gamma}\def\G{\Gamma}
\def\d{\delta}\def\D{\Delta}
\def\ep{\epsilon}
\def\et{\eta}
\def\z{\zeta}
\def\t{\theta}\def\T{\Theta}
\def\l{\lambda}\def\L{\Lambda}
\def\m{\mu}
\def\f{\phi}\def\F{\Phi}
\def\n{\nu}
\def\p{\psi}\def\P{\Psi}
\def\r{\rho}
\def\s{\sigma}\def\S{\Sigma}
\def\ta{\tau}
\def\x{\chi}
\def\o{\omega}\def\O{\Omega}
\def\k{\kappa}
\def\pa {\partial}
\def\ov{\over}
\def\nn{\nonumber\\}
\def\ud{\underline}
\begin{flushright}
\end{flushright}
\begin{center}
{\large{\bf Anisotropic SD2 brane: accelerating cosmology\\ and
Kasner-like space-time from compactification}}
\vs{10}
{Kuntal Nayek\footnote{E-mail: kuntal.nayek@saha.ac.in} and Shibaji Roy\footnote{E-mail: shibaji.roy@saha.ac.in}}
\vs{4}
{\it Saha Institute of Nuclear Physics\\
1/AF Bidhannagar, Calcutta 700064, India\\}
\vs{4}
and
\vs{4}
{\it Homi Bhabha National Institute\\
Training School Complex, Anushakti Nagar, Mumbai 400085, India}
\end{center}
\vs{15}
\begin{abstract}
Starting from an anisotropic (in all directions including the time direction of the brane) non-susy D2 brane solution of type IIA string theory
we construct an anisotropic space-like D2 brane (or SD2 brane, for short) solution by the standard trick of double Wick rotation. This solution
is characterized by five independent parameters. We show that compactification of this SD2 brane solution on a six dimensional hyperbolic space (H$_6$) of time dependent
volume leads to accelerating cosmologies (for some time $t\sim\,t_0$, with $t_0$ some characteristic time) where
both the expansions and the accelerations are
different in three spatial directions of the resultant four dimensional universe. On the other hand at early times ($t \ll t_0$)
this four dimensional space, in certain situations, leads to four dimensional Kasner-like cosmology, with two additional scalars, namely, the dilaton
and a volume scalar of H$_6$.
Unlike in the standard four dimensional Kasner cosmology here all three Kasner exponents could be positive definite, leading to expansions in
all three directions.
\end{abstract}
\newpage
\noindent{\it 1. Introduction} : It is well-known \cite{Townsend:2003fx} that cosmological solutions of the higher dimensional vacuum Einstein
equation can give rise to interesting
four dimensional cosmologies (with a period of accelerated expansion) upon time dependent hyperbolic space compactification \cite{Kaloper:2000jb}.
This process, therefore,
evades a no-go theorem \cite{Gibbons:1985,Maldacena:2000mw} of obtaining such accelerated expansion in standard time-independent compactifications.
Similar cosmologies also follow
if one includes fluxes \cite{Ohta:2003pu} and/or a dilaton field \cite{Roy:2003nd} in the higher dimensional theories such as M/string theory.
The M/string theory solution which gives rise
to four dimensional accelerating cosmologies upon time dependent hyperbolic space compactification is called the space-like M2 (SM2) brane (for M theory)
or space-like D2 (SD2) brane (for string theory). Space-like branes \cite{Gutperle:2002ai} are topological defects localized on a space-like
hypersurface and exist for a moment
in time. So, they are time dependent solutions of field theories or M/string theory with an isometry ISO($p+1$) $\times$ SO($d-p-2,1$) for an S$p$ brane
in $d$ space-time dimensions \cite{Chen:2002yq,Bhattacharya:2003sh}. The original motivation for constructing these solutions was to understand
the time-dependent processes in field and M/string
theory \cite{Gutperle:2002ai,Sen:1999mg} and also to have a better understanding of the dS/CFT correspondence \cite{Strominger:2001pn}. The cosmological
implication leading to four dimensional accelerated expansion
from these solutions has been elucidated in refs.\cite{Townsend:2003fx,Ohta:2003pu,Roy:2003nd,Ohta:2003ie}.
The previous S2 brane solutions considered in the literature \cite{Ohta:2003pu,Roy:2003nd,Chen:2002yq,Bhattacharya:2003sh} were isotropic in the brane
directions and so the four dimensional accelerating cosmologies
obtained from these solutions were isotropic. In this paper we will construct an anisotropic SD2 brane solution of type IIA string theory and try to see
whether similar four dimensional accelerating cosmologies can be obtained in all three spatial directions upon compactification. Another motivation
to look at the anisotropic SD2 brane solution is to see whether one can get a four dimensional Kasner-like \cite{Kasner:1921zz} solution from it upon
compactification
where one can get expansions in all three spatial directions, which is not possible for the conventional Kasner solution of the four dimensional vacuum Einstein
equation. The construction of the anisotropic SD2 brane solution follows from the standard double Wick rotation \cite{Lu:2004ms} of the known anisotropic
non-susy D2 brane
solution \cite{Lu:2007bu} of type IIA string theory. This solution is characterized by five independent parameters. We then cast the solution in a
suitable time-like coordinate, in which it is given in terms of a single harmonic function containing a characteristic time $t_0$. Next, we compactify the space-time on a six dimensional
hyperbolic space with time dependent volume. The resultant metric when expressed in Einstein frame gives us a four dimensional FLRW type space-time with
three different scale factors in three spatial directions. We find that when $t \sim t_0$, we can get accelerating cosmologies in all three directions
when the other parameters of the solution take some specific values. Although the expansions and the accelerations in the three directions are not the same,
they do not
differ drastically, and the accelerations are all transient. However, when $t \ll t_0$, the resultant four dimensional metric takes a Kasner-like form
when the parameters characterizing the solution satisfy certain conditions.
But because of the presence of the dilaton as well as the volume scalar of the six dimensional hyperbolic space, all the Kasner exponents could be
positive definite, leading to expansions in all three spatial directions. However, the expansions in this case are decelerating. This
can be contrasted with the standard four dimensional Kasner space-time \cite{Kasner:1921zz} (obtained from the solution of vacuum Einstein equation)
where expansions in all three directions are not possible.
This paper is organized as follows. In the next section we give the construction of anisotropic SD2 brane solution from its time-like counterpart
and cast the solution in a coordinate system suitable for our purpose. In section 3, we obtain the anisotropic accelerating cosmologies from this
solution upon compactification on six dimensional hyperbolic space of time dependent volume. In section 4, we show how a four dimensional Kasner-like
geometry arises from this string theory solution, where all the Kasner exponents could be positive definite leading to expansions in all three spatial
directions unlike the standard Kasner solution in four dimensions. Finally, we conclude in section 5.
\vspace{.5cm}
\noindent{\it 2. Anisotropic SD2 brane solutions} :
In this section we construct the anisotropic SD2 brane solution from the known anisotropic non-susy D2 brane solution of
type IIA string theory. In \cite{Lu:2007bu}, we have constructed an anisotropic non-susy D$p$ brane solution and showed how it nicely interpolates
between a black D$p$ brane and a Kaluza-Klein ``bubble of nothing'' when some of the parameters of the solution are varied continuously
and interpreted this interpolation as closed string tachyon condensation. Here we make use of that solution and write the anisotropic
non-susy D2 brane solution below, obtained by putting $p=2$ in eq.(4) of the above mentioned
reference \cite{Lu:2007bu} (we have replaced $\d_0$ by $\d_3$ for convenience),
\bea\label{dp1}
ds^2 &=& F(r)^{\frac{3}{8}} \left(H(r)\tilde{H}(r)\right)^{\frac{2}{5}}\left(\frac{H(r)}{\tilde{H}(r)}\right)^{\frac{\delta_1}{4}
+\frac{\delta_2}{10}+\frac{\delta_3}{10}}\left(dr^2 + r^2 d\Omega_6^2\right)\nnu\\
& & + F(r)^{-\frac{5}{8}}\Big\{-\left(\frac{H(r)}{\tilde{H}(r)}\right)^{\frac{\delta_1}{4}+\frac{\delta_2}{2}+\frac{\delta_3}{2}}dt^2
+\left(\frac{H(r)}{\tilde{H}(r)}\right)^{-\frac{3\delta_1}{4}-\frac{3\delta_2}{2}+\frac{\delta_3}{2}}(dx^1)^2\nnu\\
& & + \left(\frac{H(r)}{\tilde{H}(r)}\right)^{-\frac{3\delta_1}{4}+\frac{\delta_2}{2}-\frac{3\delta_3}{2}}(dx^2)^2\Big\}\\
e^{2(\phi-\phi_0)} &=& F(r)^{\frac{1}{2}}\left(\frac{H(r)}{\tilde{H}(r)}\right)^{\delta_1-2\delta_2-2\delta_3}, \qquad\qquad F_{[6]} \,\,=\,\,
\hat Q {\rm Vol}(\Omega_{6})
\eea
The metric in the above is given in the Einstein frame. The various functions appearing in the solution are defined as,
\bea\label{functions1}
F(r) &=& \left(\frac{H(r)}{\tilde{H}(r)}\right)^{\alpha} \cosh^2 \theta - \left(\frac{\tilde {H}(r)}{H(r)}\right)^{\beta} \sinh^2\theta\nnu\\
H(r) &=& 1 + \frac{\omega^5}{r^5},\qquad\qquad \tilde{H}(r)\,\,=\,\, 1 - \frac{\omega^5}{r^5}
\eea
Note that the solution has eight parameters $\alpha,\,\beta,\,\delta_1,\,\delta_2,\,\delta_3,\,\theta,\,\omega,$ and
$\hat Q$. $\phi_0$ is the asymptotic value of the dilaton, $F_{[6]}$ is a six-form, and $\hat Q$ is the magnetic charge associated
with the D2 brane. The solution becomes isotropic in the brane directions when $\d_1 = -2\d_2 = -2\d_3$. So, in that sense these
parameters can be called anisotropy parameters. Now for the consistency of the field equations the eight parameters of the solution
must satisfy the following relations \cite{Lu:2007bu},
\bea\label{relations1}
& & \alpha - \beta\,\,=\,\, -\frac{3}{2}\delta_1\nn
& & \frac{1}{2}\delta_1^2 + \frac{1}{2}\alpha(\alpha+\frac{3}{2}\delta_1)+\frac{2}{5}\delta_2\delta_3\,\,=\,\, \frac{6}{5}\left(1-\delta_2^2-\delta_3^2\right)\nn
& & \hat Q \,\,=\,\, 5 \omega^5(\alpha+\beta)\sinh2\theta
\eea
These three relations reduce the number of independent parameters from eight to five, which are $\omega$,
$\theta$, and the anisotropy parameters $\delta_1,\,\delta_2$ and $\delta_3$.
Using the second and the first relations in \eqref{relations1}, we can express $\a$ and $\b$ in terms of the other
parameters as,
\bea\label{alphabeta}
& & \alpha=-\frac{3}{4}\delta_1\pm \frac{1}{2}\sqrt{\frac{48}{5}(1-\delta_2^2-\delta_3^2)-\frac{7}{4}\delta_1^2-\frac{16}{5}\delta_2\delta_3}\nn
& & \beta=\frac{3}{4}\delta_1\pm \frac{1}{2}\sqrt{\frac{48}{5}(1-\delta_2^2-\delta_3^2)-\frac{7}{4}\delta_1^2-\frac{16}{5}\delta_2\delta_3}
\eea
The form of the harmonic function $\tilde{H}(r)$ in \eqref{functions1} indicates that there is a naked singularity of the solution at $r=\omega$
and therefore, the solution is well defined only for $r>\omega$. Now we apply the double Wick rotation \cite{Lu:2004ms} $r \to i\tau$, $t \to -ix^3$ to the
solution \eqref{dp1} along with $\omega \to i\omega$, $\theta \to i\theta$ and $\theta_1 \to i\theta_1$, where $\theta_1$ is one of the angular coordinates
of the sphere $\Omega_6$ of the transverse space. This operation gives us the anisotropic space-like D2 brane from the anisotropic static non-susy D2 brane, and
the change in the angular
coordinate converts the spherical $\Omega_6$ to the hyperbolic $H_6$. Thus the transformed solution is,
\bea\label{sdp1}
ds^2 &=& F(\tau)^{\frac{3}{8}} \left(H(\tau)\tilde{H}(\tau)\right)^{\frac{2}{5}}\left(\frac{H(\tau)}{\tilde{H}(\tau)}\right)^{\frac{\delta_1}{4}
+\frac{\delta_2}{10}+\frac{\delta_3}{10}}\left(-d\tau^2 + \tau^2 dH_6^2\right)\nn
& & + F(\tau)^{-\frac{5}{8}}\Big\{\left(\frac{H(\tau)}{\tilde{H}(\tau)}\right)^{\frac{\delta_1}{4}+\frac{\delta_2}{2}+\frac{\delta_3}{2}}(dx^3)^2
+\left(\frac{H(\tau)}{\tilde{H}(\tau)}\right)^{-\frac{3\delta_1}{4}-\frac{3\delta_2}{2}+\frac{\delta_3}{2}}(dx^1)^2\\
& & + \left(\frac{H(\tau)}{\tilde{H}(\tau)}\right)^{-\frac{3\delta_1}{4}+\frac{\delta_2}{2}-\frac{3\delta_3}{2}}(dx^2)^2\Big\}\nn
e^{2(\phi-\phi_0)} &=& F(\tau)^{\frac{1}{2}}\left(\frac{H(\tau)}{\tilde{H}(\tau)}\right)^{\delta_1-2\delta_2-2\delta_3}, \qquad\qquad F_{[6]} \,\,=\,\, \hat Q {\rm Vol}(H_{6})
\eea
The various functions associated with the solution are also changed under the above rotation and are given below,
\bea\label{functions2}
F(\tau) &=& \left(\frac{H(\tau)}{\tilde{H}(\tau)}\right)^{\alpha} \cos^2 \theta + \left(\frac{\tilde {H}(\tau)}{H(\tau)}\right)^{\beta} \sin^2\theta\nn
H(\tau) &=& 1 + \frac{\omega^5}{\tau^5},\qquad\qquad \tilde{H}(\tau)\,\,=\,\, 1 - \frac{\omega^5}{\tau^5}
\eea
Thus we see that the anisotropic static non-susy D2 brane has been converted to anisotropic time dependent or space-like D2 brane. For the former
solution the radial coordinate $r$ was transverse to the D2 brane's world-volume, whereas, for the latter the timelike coordinate $\tau$ is
transverse to the SD2 brane's
world-volume. The metric of the transverse sphere $d\Omega_6^2$ has been converted to negative of the metric of the hyperbolic space $dH_6^2$.
The hyperbolic functions $\sinh^2\theta$ and $\cosh^2\theta$ become $-\sin^2\theta$ and $\cos^2\theta$ respectively; therefore,
the relative sign
of the two terms of the function $F(\tau)$ is flipped. The form field keeps its form, with $\hat Q\rightarrow-\hat Q$. Thus the
first two parameter
relations in \eqref{relations1} remain the same, while the last relation changes to $\hat Q=5\omega^5(\alpha+\beta)\sin2\theta$.
Now for our purpose we will make a coordinate transformation from $\tau$ to $t$ given by,
\be\label{trans}
\tau \,\,=\,\, t\left(\frac{1+\sqrt{g(t)}}{2}\right)^{\frac{2}{5}}, \qquad {\rm where,}\qquad g(t)\,\, =\,\, 1+\frac{4\omega^5}{t^5} \equiv 1 + \frac{t_0^5}{t^5}
\ee
Under this coordinate change we have,
\bea\label{fnschange}
& & H(\tau) = 1 + \frac{\omega^5}{\tau^5} = \frac{2\sqrt{g(t)}}{1+\sqrt{g(t)}}, \qquad \tilde{H}(\tau) = 1 - \frac{\omega^5}{\tau^5}
= \frac{2}{1+\sqrt{g(t)}},\nn
& & H(\tau)\tilde{H}(\tau) = \frac{4\sqrt{g(t)}}{(1+\sqrt{g(t)})^2}, \qquad \frac{H(\tau)}{\tilde{H}(\tau)} = \sqrt{g(t)},\nn
& & -d\tau^2 + \tau^2 dH_6^2 = g(t)^{\frac{1}{5}}\left(-\frac{dt^2}{g(t)} + t^2 dH_6^2\right)
\eea
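As a cross-check, the identities in \eqref{fnschange} implied by the transformation \eqref{trans} can be verified numerically (a small sketch with arbitrary illustrative values of $\omega$ and $t$):

```python
import numpy as np

omega, t = 0.7, 1.3                          # arbitrary test values, t > 0
g = 1.0 + 4.0 * omega**5 / t**5              # g(t), using t0^5 = 4 omega^5
tau = t * ((1.0 + np.sqrt(g)) / 2.0) ** (2.0 / 5.0)

H = 1.0 + omega**5 / tau**5                  # H(tau)
Ht = 1.0 - omega**5 / tau**5                 # \tilde{H}(tau)
sq = np.sqrt(g)

# The four identities of eq. (fnschange):
assert np.isclose(H, 2.0 * sq / (1.0 + sq))
assert np.isclose(Ht, 2.0 / (1.0 + sq))
assert np.isclose(H * Ht, 4.0 * sq / (1.0 + sq) ** 2)
assert np.isclose(H / Ht, sq)
```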
Using \eqref{fnschange} we can rewrite the anisotropic SD2 brane solution given in \eqref{sdp1} as follows,
\bea\label{sdp2}
ds^2 &=& F(t)^{\frac{3}{8}} g(t)^{\frac{\delta_1}{8}+\frac{\delta_2}{20}+\frac{\delta_3}{20}+\frac{1}{5}}\left(-\frac{dt^2}{g(t)} +
t^2 dH_6^2\right)
+ F(t)^{-\frac{5}{8}}\Big[g(t)^{-\frac{3\delta_1}{8}-\frac{3\delta_2}{4}+\frac{\delta_3}{4}}(dx^1)^2\nn
&& +g(t)^{-\frac{3\delta_1}{8}+\frac{\delta_2}{4}-\frac{3\delta_3}{4}}(dx^2)^2+g(t)^{\frac{\delta_1}{8}+\frac{\delta_2}{4}+\frac{\delta_3}{4}}(dx^3)^2\Big]\nn
e^{2(\phi-\phi_0)} &=& F(t)^{\frac{1}{2}} g(t)^{\frac{\delta_1}{2}-\delta_2-\delta_3}, \qquad\qquad F_{[6]} \,\,=\,\, \hat{Q} {\rm Vol}(H_6)
\eea
where $g(t)$ is as given in \eqref{trans} and $F(t)$ is given by,
\be\label{ft}
F(t) = g(t)^{\frac{\alpha}{2}} \cos^2\theta + g(t)^{-\frac{\beta}{2}} \sin^2\theta
\ee
It is important to note that in the new coordinate, the original singularity at $\tau=\omega$ has been
shifted to $t=0$. Also note that as $t \gg t_0$, $g(t),\, F(t) \to 1$ and therefore the solution reduces to flat space. In the next
two sections we will consider the regimes $t \sim t_0$ and $t \ll t_0$ of the solution \eqref{sdp2} to see how one gets
an accelerating cosmology in the first case and a Kasner-like cosmology in the second, in (3+1) dimensions, upon compactification.
\vspace{.5cm}
\noindent{\it 3. Compactification and accelerating cosmology} : In this section we will compactify the anisotropic SD2 brane solution
given in \eqref{sdp2} on a six dimensional hyperbolic space of time dependent volume and write the resultant four dimensional metric
in the Einstein frame\footnote{Here one might ask: since hyperbolic spaces are in general non-compact, in what sense are we
compactifying the ten dimensional space on a six dimensional hyperbolic space and studying the four dimensional cosmology? To address this question
we remark that it is well-known how to construct compact hyperbolic manifolds (CHMs) from hyperbolic spaces, and there is a vast
mathematical literature, some of which is given in \cite{Kaloper:2000jb}. In short, the CHMs are obtained from $H_d$ (with $d\geq 2$), the
universal covering space of the $d$ dimensional hyperbolic manifolds, by modding out by an appropriate freely acting discrete subgroup of the isometry group
SO$(1,d)$ of $H_d$. CHMs have many interesting properties and we refer the reader to some of the original literature given in \cite{Kaloper:2000jb}
for details.}. This four dimensional metric will have the standard FLRW form whose cosmology we want to study.
We rewrite the metric in \eqref{sdp2} in a four dimensional part and the transverse six dimensional part as,
\be\label{compct1}
ds^2 = ds_4^2+e^{2\psi}dH_6^2
\ee
where $\psi$ is the radion field and $e^{2\psi}=F(t)^{\frac{3}{8}} g(t)^{\frac{\delta_1}{8}+\frac{\delta_2}{20}+\frac{\delta_3}{20}+\frac{1}{5}}t^2$. The
four dimensional metric $ds_4^2$ is given as,
\bea\label{4dmetric}
ds_4^2 &=& -F(t)^{\frac{3}{8}} g(t)^{\frac{\delta_1}{8}+\frac{\delta_2}{20}+\frac{\delta_3}{20}-\frac{4}{5}}dt^2
+ F(t)^{-\frac{5}{8}}\Big[g(t)^{-\frac{3\delta_1}{8}-\frac{3\delta_2}{4}+\frac{\delta_3}{4}}(dx^1)^2\nn
&&+g(t)^{-\frac{3\delta_1}{8}+\frac{\delta_2}{4}-\frac{3\delta_3}{4}}(dx^2)^2+g(t)^{\frac{\delta_1}{8}+\frac{\delta_2}{4}+\frac{\delta_3}{4}}(dx^3)^2\Big]
\eea
The compactified four dimensional metric \eqref{4dmetric} when expressed in Einstein frame takes the form \cite{Roy:2003nd},
\bea\label{einstein4d}
ds_{4E}^2 &=& e^{6\psi} ds_4^2\nn
& = & -F(t)^{\frac{3}{2}}g(t)^{-{\frac{1}{5}}+{\frac{\delta_1}{2}}+{\frac{\delta_2}{5}}+{\frac{\delta_3}{5}}}t^6dt^2 + F(t)^{\half}g(t)^{{\frac{3}{5}}-{\frac{3\delta_2}{5}}+{\frac{2\delta_3}{5}}}t^6dx_1^2 \nn
&& + F(t)^{\half}g(t)^{{\frac{3}{5}}+{\frac{2\delta_2}{5}}-{\frac{3\delta_3}{5}}}t^6dx_2^2 + F(t)^{\half}g(t)^{{\frac{3}{5}}+{\frac{\delta_1}{2}}+{\frac{2\delta_2}{5}}+{\frac{2\delta_3}{5}}}t^6dx_3^2\nn
& = & -A(t)^2dt^2+\sum_{i=1}^3S_i(t)^2dx_i^2
\eea
where the various time-dependent coefficients are
\bea
A(t) &=& F(t)^{\frac{3}{4}}g(t)^{-{\frac{1}{10}}+{\frac{\delta_1}{4}}+{\frac{\delta_2}{10}}+{\frac{\delta_3}{10}}}t^3, \quad\quad S_1(t)=F(t)^{\frac{1}{4}}g(t)^{{\frac{3}{10}}
-{\frac{3\delta_2}{10}}+{\frac{\delta_3}{5}}}t^3\nn
S_2(t) &= & F(t)^{\frac{1}{4}}g(t)^{{\frac{3}{10}}+{\frac{\delta_2}{5}}-{\frac{3\delta_3}{10}}}t^3, \quad\quad\qquad S_3(t)=F(t)^{\frac{1}{4}}g(t)^{{\frac{3}{10}}
+{\frac{\delta_1}{4}}+{\frac{\delta_2}{5}}+{\frac{\delta_3}{5}}}t^3
\eea
Note that in the compactified four dimensional space there are three fields, namely $g_{\mu\nu},\,\,\phi,\,\,\psi$. Now we perform another
coordinate transformation
\be
d\eta^2=F(t)^{\frac{3}{2}}g(t)^{-{\frac{1}{5}}+{\frac{\delta_1}{2}}+{\frac{\delta_2}{5}}+{\frac{\delta_3}{5}}}t^6dt^2\,\,\,
\Rightarrow\,\,\eta= \int F(t)^{\frac{3}{4}}g(t)^{-{\frac{1}{10}}+{\frac{\delta_1}{4}}+{\frac{\delta_2}{10}}+{\frac{\delta_3}{10}}}t^3dt
\ee
and rewrite the Einstein frame metric $ds_{4E}^2$ in the standard flat FLRW form as
\be\label{flrw4}
ds_{4E}^2 = - d\eta^2 + \sum_{i=1}^{3} s_i^2(\eta)\, (dx^i)^2
\ee
with $\eta$ being the canonical time and the scale factor $s_i(\eta)\equiv S_i(t)$. Note that since $s_i(\eta)$ are different for each $i$, the cosmology here
will be anisotropic. Now because of the complicated relation between $t$ and $\eta$ let us define \cite{ourpaper}
\bea\label{mandn}
m_i(t) & \equiv & \frac{d\ln S_i(t)}{dt}\nn
n_i(t) & \equiv &\left[\frac{d^2}{dt^2}\ln(S_i(t))+\frac{d}{ dt}\ln(S_i(t))
\frac{d}{dt}\ln\left(\frac{S_i(t)}{A(t)}\right)\right]
\eea
and with these one can easily see that $m_i(t) > 0$ implies that $ds_i(\eta)/d\eta > 0$, amounting to expansion of our universe, and similarly,
$n_i(t) > 0$ implies that $d^2 s_i(\eta)/d\eta^2 > 0$, amounting to acceleration of our universe.
Therefore, from \eqref{mandn} it is clear that
in the four-dimensional spacetime \eqref{einstein4d} we get accelerated expansion in the $i$-th coordinate direction only if
$m_i(t)$ and $n_i(t)$ are simultaneously positive. It can be checked that for $t \ll t_0$ accelerated
expansion is not possible in any direction; it occurs only when $t \sim t_0$, in which case the first term in
the harmonic function $g(t)$ given in \eqref{trans} is of the same order as the second. The other parameters of the solution, namely,
$\d_1$, $\d_2$ and $\d_3$, cannot be completely arbitrary: from eq.\eqref{alphabeta} we see that the reality of $\a$
and $\b$ imposes some restriction on the values of these three anisotropy parameters. It can also be checked that changing the value of
$\theta$ does not alter the cosmological behavior of the solution very much. We have therefore chosen some typical values of these parameters
(as given in the caption of Figure 1) and plotted the functions $m_i(t)$ and $n_i(t)$ in Figure 1, to show that it is indeed possible to have accelerating
expansions in all three directions.
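The functions $m_i(t)$ and $n_i(t)$ of \eqref{mandn} are straightforward to evaluate numerically from $g(t)$, $F(t)$ and the scale factors $S_i(t)$ given above. The sketch below (simple central differences, with the parameter values quoted in the caption of Figure 1) confirms that all three $m_i$ are positive near $t=t_0$; $n_i(t)$ can be computed analogously with second differences.

```python
import numpy as np

# Parameter values from the caption of Figure 1.
th, d1, d2, d3, t0 = np.pi / 6, -0.5, 0.2, 0.4, 2.0
# alpha, beta from eq. (alphabeta), taking the plus sign.
root = 0.5 * np.sqrt(48/5 * (1 - d2**2 - d3**2)
                     - 7/4 * d1**2 - 16/5 * d2 * d3)
al, be = -0.75 * d1 + root, 0.75 * d1 + root

g = lambda t: 1 + t0**5 / t**5                       # eq. (trans)
F = lambda t: (g(t)**(al/2) * np.cos(th)**2          # eq. (ft)
               + g(t)**(-be/2) * np.sin(th)**2)

# g-exponents of the scale factors S_1, S_2, S_3 of eq. (einstein4d).
exps = [3/10 - 3*d2/10 + d3/5,
        3/10 + d2/5 - 3*d3/10,
        3/10 + d1/4 + d2/5 + d3/5]
S = lambda t, i: F(t)**0.25 * g(t)**exps[i] * t**3

def m(t, i, h=1e-6):
    """m_i(t) = d ln S_i/dt via a central difference."""
    return (np.log(S(t + h, i)) - np.log(S(t - h, i))) / (2 * h)

m_values = [m(t0, i) for i in range(3)]  # all positive: expansion in each direction
```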
\begin{figure}[ht]
\begin{center}
\subfloat[along $x_1$ direction]{\includegraphics[width=0.33\textwidth]{expaccx1.eps}\label{pic1}}
\subfloat[along $x_2$ direction]{\includegraphics[width=0.33\textwidth]{expaccx2.eps}\label{pic2}}\subfloat[along $x_3$ direction]
{\includegraphics[width=0.33\textwidth]{expaccx3.eps}\label{pic3}}
\end{center}
\caption{The plots of $m_i(t)$ (solid blue line) and $n_i(t)$ (dashed red line) in the three spatial coordinate directions for
$\theta=\pi/6$, $\delta_1=-0.5$, $\delta_2=0.2$, $\delta_3=0.4$ and $t_0=2.0$.}
\end{figure}
As shown in panels (a), (b) and (c) of Figure 1, we always get an expanding universe (solid
blue line) in all three directions, but the expansion is accelerating only for a short period of time, i.e., the acceleration is transient
(dashed red line). Also note that since $m_i(t)$ and $n_i(t)$ differ for different $i$, the cosmology is anisotropic;
however, the anisotropy is not large.
To understand the accelerating expansion, we can write down the four dimensional compactified action from the original ten dimensional one
and obtain the form of the potential of the dilaton and the radion field \cite{Roy:2003nd}. The ten dimensional action has the form,
\be\label{10daction}
S = \int d^{10}x \sqrt{-g}\left[R - \half (\partial\phi)^2 - \frac{1}{2 \cdot 6!} e^{-\phi/2} F_{[6]}^2\right]
\ee
Reducing the action on a six dimensional hyperbolic space $H_6$, we get the four dimensional action\footnote{Here the reduction on the
hyperbolic space $H_6$ to obtain the four dimensional action is done in the sense described in footnote 3. This has also been done
in the references \cite{Garriga:2000cv,Emparan}.} \cite{ourpaper,Garriga:2000cv,Emparan}
\be\label{4daction}
S_4 = \int d^4 x \sqrt{-g_{4E}} \left[R_{4E} - \half (\partial \phi)^2 - 24 (\partial \psi)^2 - V(\phi,\psi)\right]
\ee
where,
\be\label{potential}
V(\phi,\psi) = \frac{\hat{Q}^2}{2} e^{-\frac{\phi}{2} - 18\psi} + 30 e^{-8\psi}.
\ee
Here $\hat{Q}$ is the magnetic charge of the D2 brane given in \eqref{dp1}. Note that because of the hyperbolic space compactification the
potential is always positive irrespective of the charge and therefore there is always a possibility that the system will be driven
to an accelerating phase \cite{Emparan}.
\vspace{.5cm}
\noindent{\it 4. Compactification and Kasner-like solution} :
In this section we will show how a four dimensional Kasner-like cosmological solution follows from the anisotropic SD2 brane solution
upon six dimensional hyperbolic space compactification discussed in the previous section. The compactified action expressed in
Einstein frame is given in \eqref{einstein4d}. We take this four dimensional metric and express it at early times, $t \ll t_0$.
In this case the function $g(t)$ can be approximated as,
\be\label{gtau}
g(t)= 1 + \frac{t_0^5}{t^5} \approx \frac{t_0^5}{t^5} \sim t^{-5}.
\ee
Also since we want to express the metric components in \eqref{einstein4d} as some powers of $t$, we note from the form of $F(t)$ in
\eqref{ft} that this can be done (assuming $\a>0$ without any loss of generality) in three ways as follows. (a) Put $\theta=0$, with $\a$, $\b$
as given in \eqref{alphabeta}, (b) put $\a=-\b = -(3/4)\d_1$, with $\theta$ arbitrary and (c) both $\a > 0$, $\b > 0$, with $\theta$ arbitrary.
There is another possibility with $\theta = \pi/2$ and $\b < 0$, but this case can be seen to be equivalent to case (a).
Note that for case (a) and (b) we have $\hat{Q}=0$ (since $\hat{Q}=5\omega^5(\a+\b)\sin2\theta$), however, for case (c) $\hat{Q}$ is non-zero
and the non-susy brane is magnetically charged.
In either case (a) or (b) we have
\be\label{Ft}
F(t) \sim t^{-\frac{5\alpha}{2}}
\ee
In the above we have absorbed $t_0$ in $t$. But for case (c) $F(t)$ has an additional $\cos^2\theta$ factor which can be absorbed in $t$ as well
as in $x^{1,2,3}$. Thus in all cases $F(t)$ has the form as given in \eqref{Ft}.
So,
in this near region, the space-time metric \eqref{einstein4d}, the dilaton and the radion fields take the forms,
\bea\label{kasner1}
ds^2 &=& -\,t^{2\left(\frac{7}{2}-\frac{15\alpha}{8}-\frac{5\delta_1}{4}-\frac{\delta_2}{2}-\frac{\delta_3}{2}\right)} dt^2 + t^{2\left(\frac{3}{2}-\frac{5\alpha}{8}+\frac{3\delta_2}{2}
-\delta_3\right)} (dx^1)^2\nn
& &+\,t^{2\left(\frac{3}{2}-\frac{5\alpha}{8}-\delta_2+\frac{3\delta_3}{2}\right)} (dx^2)^2+t^{2\left(\frac{3}{2}-\frac{5\alpha}{8}-\frac{5\delta_1}{4}-\delta_2-\delta_3\right)} (dx^3)^2\nn
e^{2(\phi-\phi_0)} &=& t^{2\left(-\frac{5\alpha}{8}-\frac{5\delta_1}{4}+\frac{5\delta_2}{2}+\frac{5\delta_3}{2}\right)}, \qquad
e^{2\psi} = t^{2\left({\half}-\frac{15}{32}\alpha-\frac{5}{16}\delta_1-\frac{\delta_2}{8}-\frac{\delta_3}{8}\right)}
\eea
Now since we are taking $t\ll 1$ here, we have to be careful about the validity of the gravity solution. The gravity solution will
be valid as long as the dilaton remains small and the curvature of the transverse space in string units also remains small. These two
conditions impose certain restrictions on the parameters of the solution and they are given as,
\bea\label{constraints}
& & 5 \a + 5\d_1 - 4\d_2 - 4\d_3 > 4\nn
& & \a + 2\d_1 - 4\d_2 - 4\d_3 \leq 0
\eea
where $\a$ is as given in \eqref{alphabeta}. Furthermore, the reality of $\a$ also restricts the parameters as
\be\label{cons}
\frac{35}{4}\d_1^2 + 48 \d_2^2 + 48 \d_3^2 + 16 \d_2\d_3 \leq 48
\ee
We have checked numerically that all three conditions can be satisfied simultaneously for a certain range of values of the parameters
$\d_1$, $\d_2$ and $\d_3$, and only for those values do we have a valid gravity solution \eqref{kasner1}. We would like to remark here that,
even though we are considering $t\ll 1$, the validity of the supergravity solution also requires that we cannot take $t$ arbitrarily close to zero.
In fact, $t$ has to be much larger than the string scale if the supergravity solution is to remain valid. This can be seen by calculating $\dot{\phi}^2$,
$\dot{\psi}^2$ and the scalar curvature for the solution given in \eqref{kasner1}. All of these quantities come out proportional to
$1/t^2$, so when $t \ll 1$ they can become very large, invalidating the supergravity solution, and stringy corrections must then be included.
To avoid this we require $\sqrt{\a'} \ll t \ll t_0$, or in terms of the scaled $t$, $\sqrt{\a'}/t_0 \ll t \ll 1$.
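The claim that the conditions \eqref{constraints} and \eqref{cons} can be met simultaneously is easy to confirm numerically; for example, the point $\d_1=0$, $\d_2=\d_3=0.3$ (an illustrative choice, not one singled out in the text) satisfies all three:

```python
import numpy as np

def alpha(d1, d2, d3):
    # alpha from eq. (alphabeta), plus branch (so alpha > 0)
    return -0.75 * d1 + 0.5 * np.sqrt(
        48/5 * (1 - d2**2 - d3**2) - 7/4 * d1**2 - 16/5 * d2 * d3)

def valid(d1, d2, d3):
    """True when eq. (cons) and both conditions of eq. (constraints) hold."""
    if 35/4 * d1**2 + 48 * d2**2 + 48 * d3**2 + 16 * d2 * d3 > 48:
        return False                           # alpha would be complex: eq. (cons)
    a = alpha(d1, d2, d3)
    return (5*a + 5*d1 - 4*d2 - 4*d3 > 4       # first condition of eq. (constraints)
            and a + 2*d1 - 4*d2 - 4*d3 <= 0)   # second condition of eq. (constraints)
```

A grid scan over the ellipsoid \eqref{cons} with the same function maps out the full allowed region.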
Now, keeping those restrictions in mind, we can rewrite the solution in terms of canonical time
$\eta \equiv \frac{8t^{\frac{9}{2}-\frac{15\alpha}{8}-\frac{5\delta_1}{4}-\frac{\delta_2}{2}-\frac{\delta_3}{2}}}
{36-15\alpha-10\delta_1-4\delta_2-4\delta_3}$ as,
\bea\label{kasner2}
ds^2 &=& -d\eta^2+\eta^{2p_1}(dx^1)^2+\eta^{2p_2}(dx^2)^2+\eta^{2p_3}(dx^3)^2\nn
e^{2(\phi-\phi_0)} &=& C(\d_1,\d_2,\d_3)\,\eta^{2\gamma_\phi}\qquad\qquad e^{2\psi}= D(\d_1,\d_2,\d_3)\eta^{2\gamma_\psi}
\eea
Note that in writing the metric in \eqref{kasner2} we have rescaled the coordinates $x^1$, $x^2$ and $x^3$ by some constant factors involving
the parameters $\d_1$, $\d_2$, $\d_3$. Also, in the dilaton and the radion field, $C$ and $D$ are constants involving these
parameters whose explicit forms will not be important. It can easily be checked from the defining relation of $\eta$ that the coefficient
in front of $t$ is always positive definite, which also ensures that as $t \to 0$, $\eta \to 0$.
The Kasner exponents $p_1$, $p_2$ and $p_3$ in the metric and $\gamma_\phi$,
$\gamma_\psi$ are defined as,
\bea\label{kasnercoeff}
& & p_1=\frac{12-5\alpha+12\delta_2-8\delta_3}{36-15\alpha-10\delta_1-4\delta_2-4\delta_3}\nn
& & p_2=\frac{12-5\alpha-8\delta_2+12\delta_3}{36-15\alpha-10\delta_1-4\delta_2-4\delta_3}\nn
& & p_3=\frac{12-5\alpha-10\delta_1-8\delta_2-8\delta_3}{36-15\alpha-10\delta_1-4\delta_2-4\delta_3}\nn
& & \gamma_\phi=\frac{-5\alpha-10\delta_1+20\delta_2+20\delta_3}{36-15\alpha-10\delta_1-4\delta_2-4\delta_3}\nn
& & \gamma_\psi=\frac{1}{4}\frac{16-15\alpha-10\delta_1-4\delta_2-4\delta_3}{36-15\alpha-10\delta_1-4\delta_2-4\delta_3}
\eea
Now since this is a solution to the compactified four dimensional action given in \eqref{4daction}, it must satisfy the equations of motion.
The Einstein equation, the dilaton and radion equations following from \eqref{4daction} have the forms,
\bea\label{einsteineq}
& & R_{\mu\nu,E}-{\half}\partial_\mu\phi\partial_\nu\phi-24\partial_\mu\psi\partial_\nu\psi=0\nn
& & \frac{1}{\sqrt{-g_E}}\partial_\mu\left(\sqrt{-g_E}g^{\mu\nu}_E \partial_\nu\phi\right) = 0, \qquad
\frac{1}{\sqrt{-g_E}}\partial_\mu\left(\sqrt{-g_E}g^{\mu\nu}_E \partial_\nu\psi\right) = 0
\eea
Here $\mu,\,\nu$ run over the $(1+3)$-dimensional space-time. Note that since we have $t \ll 1$, the potential in \eqref{potential} is trivial
(the first term is zero even when $\hat{Q} \neq 0$ because the exponential factor effectively goes to zero due to the relations given in \eqref{constraints}
and similarly the exponential in the second term also effectively goes to zero because of \eqref{constraints}).
Substituting the above solution \eqref{kasner2} in \eqref{einsteineq}, we get two conditions
\be\label{pcond1}
p_1+p_2+p_3=1, \qquad {\rm and} \qquad p_1^2+p_2^2+p_3^2=1-\half \gamma_\phi^2-24\gamma_\psi^2
\ee
The first condition of \eqref{pcond1} can be seen to be satisfied trivially from \eqref{kasnercoeff}. On the other hand when we
substitute the parameter values from \eqref{kasnercoeff} to the second condition of \eqref{pcond1}, we find that it gives the same
parametric relation as the second relation of \eqref{relations1} verifying the consistency of the solution. This therefore shows how
one can get a four dimensional Kasner-like
solution from the ten dimensional anisotropic SD2 brane solution by six dimensional hyperbolic space compactification. It is well-known that
the standard Kasner solution \cite{Kasner:1921zz} obtained as the solution of vacuum Einstein equation, does not lead to expansions in all spatial directions.
The reason is that in standard Kasner cosmology the Kasner exponents satisfy $p_1+p_2+p_3=1$ and $p_1^2+p_2^2+p_3^2=1$. Since these two
conditions can not be satisfied together when $p_i$'s are all positive, the expansions can not occur in all the directions. However, for the
four-dimensional Kasner cosmology we obtained from string theory
solutions, the parameters $p_i$ can all be positive definite because the second condition in \eqref{pcond1} is different. This is
essentially the reason that we can have expansions in all the directions; however, it can be easily checked that the expansions are decelerating.
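The identities used above can be checked symbolically. The following is a minimal sketch in Python with SymPy (variable names are ours); it verifies that the exponent of $t$ in the defining relation of $\eta$ equals one eighth of the common denominator of the Kasner exponents, and that the first condition of \eqref{pcond1} holds identically:

```python
import sympy as sp

a, d1, d2, d3 = sp.symbols('alpha delta_1 delta_2 delta_3')
den = 36 - 15*a - 10*d1 - 4*d2 - 4*d3

# Exponent of t in the defining relation of eta: it equals den/8, so it is
# positive exactly when the common denominator of the Kasner exponents is.
exponent = sp.Rational(9, 2) - sp.Rational(15, 8)*a - sp.Rational(5, 4)*d1 \
    - sp.Rational(1, 2)*d2 - sp.Rational(1, 2)*d3
assert sp.simplify(exponent - den/8) == 0

# Kasner exponents from (kasnercoeff)
p1 = (12 - 5*a + 12*d2 - 8*d3) / den
p2 = (12 - 5*a - 8*d2 + 12*d3) / den
p3 = (12 - 5*a - 10*d1 - 8*d2 - 8*d3) / den

# The first condition of (pcond1) holds identically in the parameters
assert sp.simplify(p1 + p2 + p3 - 1) == 0
```

The second condition of \eqref{pcond1} is not identically satisfied; it imposes the parametric relation mentioned in the text.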
\vspace{.5cm}
\noindent{\it 5. Conclusion} : To summarize, in this paper we have constructed an anisotropic SD2 brane solution starting from
an anisotropic non-susy D2 brane solution of type IIA string theory by the standard trick of double Wick rotation. We wanted to see whether
it is possible to generate accelerating cosmologies in all the directions, as is known for the isotropic SD2 brane solution upon
compactification on six dimensional hyperbolic space of time dependent volume. Indeed we found that when the resultant four dimensional
metric is expressed in the Einstein frame, there are some windows of the parameters
of the solution where one can get accelerating cosmologies in all the directions, as discussed in section 3. Here both the expansions
and the accelerations we found are anisotropic. But, in order to get accelerating expansions we noted that the anisotropy can not
be too drastic in three different directions. We also noted that accelerations are possible only for $t \sim t_0$, where $t_0$ is some
characteristic time given as one of the parameters of the solution. Next, we looked at the four dimensional metric at early times,
i.e., for $t \ll t_0$ and found that in a suitable coordinate and under certain conditions on the parameters of the solution, it can be expressed
in a standard four dimensional Kasner-like form.
But unlike in the standard Kasner cosmology, where expansions in all three directions are not possible, here we can get expansions
in all the three directions. The reason is that in this case the relations among the Kasner exponents get modified due to the presence
of the dilaton and the radion field. It would be interesting to see what effect such a modification of the Kasner solution at early times
has on the cosmological singularities \cite{Engelhardt:2014mea,Chatterjee:2016bhj}.
\vspace{.5cm}
{
"timestamp": "2017-07-04T02:12:38",
"yymm": "1701",
"arxiv_id": "1701.09158",
"language": "en",
"url": "https://arxiv.org/abs/1701.09158"
}
\section{Introduction}
Tuberculosis (TB) is an infectious disease caused by \emph{Mycobacterium
tuberculosis} (Mtb). According to the World Health Organization (WHO),
Mtb is the second leading cause of death worldwide
from a single infectious agent, after the human immunodeficiency virus
\cite{WHO:TB:report:2013}. TB is present in all regions of the world. Most of
the estimated number of cases in 2013 occurred in Asia ($56\%$) and the African
region ($29\%$); smaller proportions of cases occurred in the Eastern
Mediterranean region ($8\%$), the European region (4\%) and the region
of the Americas ($3\%$) \cite{WHO:TB:report:2014}.
Migration plays an important role in TB spread; e.g., according to the International
Organization for Migration (IOM), TB is a social disease and migration, as a social
determinant of health, increases TB-related morbidity and mortality among migrants
and surrounding communities \cite{IOM:migration:TB}. Migrants of specific legal
and social status, such as workers, undocumented migrants, trafficked and detained
persons, face particular TB vulnerabilities. Among migrant workers with a legal
status, their access to TB diagnosis and care is subject to their ability to access
health care services and health insurance coverage, provided either by the state
or the employer. Undocumented migrants face particular challenges, such as fear of
deportation, that delay or limit their access to diagnostic and treatment services.
Deportation while on treatment or poor compliance with treatment may lead to drug
resistant infection and increased chances of spreading TB in countries of origin,
transit and destination \cite{IOM:migration:TB}.
Mathematical models are an important tool in analyzing the spread and control
of infectious diseases \cite{Hethcote:1001models,Hethcote:SIAM:Rev}. There are
many mathematical dynamic models for TB, see, e.g.,
\cite{Blower:etall:1996,Castillo:Feng:1998,Cohen:Murray:2004,Gomes:etall:JTB:2007,Vynnycky:Fine:1997}
and references cited therein. There are also models dedicated to study TB
transmission dynamics in immigrants and local population. Usually, these models
divide the total population into two subgroups: immigrants and local subpopulation.
Each subgroup is divided into several epidemiological compartments: susceptible,
latent, infectious, recovered, or other, depending on the type of the model, see, e.g.,
\cite{Mod:TB:immigration:Driessche:2001,Mod:immigrat:TB:TPB:2008,Mod:Immig:Netherland:IJTLD:2002,TB:immigration:JTB:2008}.
In general, compartment models written with ordinary differential equations tend
to be good approximations of the true scenario with a rather simple formulation,
e.g., with five state-space variables and a (non)autonomous quadratic vector field,
because of numerical and analytic limitations and the tradeoff between complexity
and the relevant information that they can provide. In particular, heterogeneous
situations may be studied using such models. However, no interaction between
individuals in the different groups is considered in such models. We are interested
in understanding how the flux and distribution of individuals affects TB on a
host country. As a case-study, we have considered the situation of Angola
and Portugal, although the techniques may be applied to any similar situation.
Angola is the seventh-largest country in Southern Africa with a total population
of approximately $24.3$ million \cite{INE2014}. WHO predicts that by 2017 the
TB case rate may rise significantly in Angola. A natural question is to try to
understand how this may affect the rest of the world. According to Celestino
Teixeira, the Coordinator of the \emph{Fight Against Tuberculosis Programme},
in 2013 Angola reported a total of $60,807$ cases of TB in all forms,
an increase of $11\%$ over the previous year \cite{news:TB:ANGOLA:increase}.
Portugal is a country in Southwest Europe with a total population of approximately
$10.5$ million \cite{INE2014}. In 2014, for the first time, the incidence of TB
in Portugal was estimated to be lower than $20$ new cases per $100,000$ inhabitants,
placing Portugal among the countries with low TB incidence. However, there are
still some regions (Lisbon and Porto) with much higher TB incidences
\cite{relatorioTB:PT:2015}. Portugal is a geographically relevant area of study
for TB because its infection behaviour is not similar to that of the rest of Europe,
in the sense that it has a higher incidence of tuberculosis. Aside from the
independence period, Angola is characterized by reduced emigration and is
gradually becoming an attractive region, receiving migrants from different
regions, including Portugal \cite{livro:migracao:PT:2011}. Following the
Portuguese Emigration Observatory, in 2014 there were 126,356 Portuguese
emigrants living in Angola \cite{url:obsEmigraPT}. According to
the Organisation for Economic Co-operation and Development (OECD)
\cite{OCDE:migration:PT:2014}, for the first time in five years, 2012 saw the
number of long-term entry visas grow. Visas to Angolans doubled in 2012, mainly for study.
According to the Portuguese Foreigners and Borders Service, in 2012 there were
20,177 Angolan citizens living in Portugal \cite{reportImigrants:PT:2013}.
Although Angolans living in Portugal are dispersed throughout the country,
there is a very high concentration in the district of Lisbon, followed by
Set\'{u}bal and Porto \cite{ImigAngolanosPT}.
In this paper, we propose and study a new mathematical model for TB that
generalises the one proposed in \cite{TBportugalGomesRodrigues}. We consider
three different populations: people living in a high TB incidence country (A),
people living in a low TB incidence country in a semi-closed community of the
high incidence country natives (G), and the other persons living in the low
incidence country (C). Each of these three population groups is subdivided
into the five epidemiological categories considered in the model from
\cite{TBportugalGomesRodrigues}. Our model considers the movement of persons
from the high TB incidence country to the low TB incidence country and vice-versa.
We assume that the individuals that arrive and depart from the low TB incidence
country are split into the ones that enter/leave the semi-closed community
of the high TB incidence country natives and the ones that enter/leave other
regions of the low TB incidence country. Our model is quite different from
\cite{TBportugalGomesRodrigues} and other TB models in the literature,
since it has internal transfer of individuals between the subgroups,
high TB incidence country, semi-closed community of high TB incidence
country natives and other persons living in the low TB incidence country.
We consider a case study where the low TB incidence country is represented
by Portugal and the high TB incidence country is represented by Angola.
The paper is organized as follows. In Section~\ref{sec:model}, we explain how
we construct our model. The basic reproduction number is algebraically and
numerically computed in Section~\ref{sec:R0} for the autonomous case. This
section also includes a sensitivity analysis of the basic reproduction number
with respect to TB transmission rates, transfer of individuals and ratio
of individuals that stay in the community versus spread in the host country.
Section~\ref{sec:numeric} is devoted to numerical simulations, which help us
to make a qualitative sensitivity analysis for each epidemiological category
of the subgroups Angola, semi-closed community of Angola natives and other
persons living in Portugal, when relevant TB parameters are perturbed.
We end with Section~\ref{sec:conclusions} of conclusions and future work.
\section{Mathematical model}
\label{sec:model}
We construct a model with three components, based on \cite{TBportugalGomesRodrigues},
where there exists seasonal flux of population between some of the components.
The model from \cite{TBportugalGomesRodrigues} divides the
total population $N$ in five epidemiological compartments: susceptible individuals
($\TS$) that never have been in contact with \emph{(Mtb)}, primary infected individuals
($\TP$) that have been infected by \emph{(Mtb)} but it is not certain if the disease
will progress, actively infected and infectious individuals ($\TI$) that are
not yet in treatment, latent infected individuals ($\TL$) and under treatment
individuals ($\TT$). Susceptible individuals become primary infected at a rate
$\lambda = \beta \nu \TI N^{-1} \, yrs^{-1}$, where $\beta$ is the transmission coefficient
and $\nu$ is the proportion of pulmonary TB cases. Proportions $\phi$ and
$(1-\phi)$ of individuals in the class $\TP$ are transferred to the classes $\TI$
and $\TL$, respectively, at a rate $\delta \, yrs^{-1}$. Each year, a proportion
$k$ of individuals in the class $\TI$ is detected and starts TB treatment
at a rate $\tau \, yrs^{-1}$, entering the class $\TT$. It is assumed that
individuals in the class $\TT$ are neither infectious nor susceptible to reinfection.
A fraction $\phi_T$ of individuals in class $\TT$ is transferred to class $\TI$
due to either treatment failure or default, while the remaining $(1-\phi_T)$
are successfully treated and enter the class $\TL$. The inverse of the treatment
length is denoted by $\delta_T$. In \cite{TBportugalGomesRodrigues}, birth and
death rates are assumed equal; here we assume that they can be different and we
denote the recruitment rate by $\eta \, yrs^{-1}$ and the death rate by
$\mu \, yrs^{-1}$. The reinfection factor is denoted by $\sigma$
(see \cite{TBportugalGomesRodrigues} for more details). Optimal control strategies
for such model were studied in \cite{MR3266821,SilvaTorresTBAngola,MR3101449}.
Let $\TS\equiv \TS(t)$, $\TP\equiv \TP(t)$, $\TI\equiv \TI(t)$,
$\TL\equiv \TL(t)$, $\TT\equiv \TT(t)$, where $t$ represents time in years.
The model described above is given by the following system
of ordinary differential equations:
\myeq{eq1}{
\left\{\begin{array}{l}
\dot{\TS}= \eta N - \left(\lambda(t)+\mu\right)\TS,\\
\dot{\TP}=\lambda(t)\TS+\sigma\lambda(t)\TL-\left(\delta+\mu\right)\TP,\\
\dot{\TI}=\phi\delta \TP+\omega \TL+\phi_T\delta_T \TT-\left(\tau k + \mu\right)\TI,\\
\dot{\TL}=(1-\phi)\delta \TP+(1-\phi_T)\delta_T \TT
-\left(\sigma\lambda(t)+\omega+\mu\right)\TL,\\
\dot{\TT}=\tau k \TI-\left(\delta_T+\mu\right)\TT.
\end{array}\right.
}
We have $N=\TS+\TP+\TI+\TL+\TT$ and $\lambda(t)=\beta\nu \TI N^{-1}$. Then
$$
\dot{\lambda}=\beta\nu\left(\dot{\TI}N^{-1}-\TI\,N^{-2}\dot{N}\right).
$$
On the other hand, $\dot{N}=(\eta-\mu)N$, so if $\eta=\mu$ then the population
is constant. The system can be written in a matrix form as
\myeq{eq1a}{
\dot{\TX}=\left(\beta\nu \TI \mathcal{A}+\mathcal{B}\right)\TX+\mathcal{C},
}
where $\TX=(\TS,\TP,\TI,\TL,\TT)$,
$$ \mathcal{A}=\left(\begin{array}{ccccc}
-1 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & \sigma & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -\sigma & 0\\
0 & 0 & 0 & 0 & 0
\end{array}\right),
\quad
\mathcal{B}=\left(\begin{array}{ccccc}
-\mu & 0 & 0 & 0 & 0\\
0 & -(\delta+\mu) & 0 & 0 & 0\\
0 & \phi\delta & -(\tau k + \mu) & \omega & \phi_T\delta_T\\
0 & (1-\phi)\delta & 0 & -(\omega+\mu) & (1-\phi_T)\delta_T\\
0 & 0 & \tau k & 0 & -(\delta_T+\mu)
\end{array}\right),
$$
and $\mathcal{C}=(\eta N, 0, 0, 0, 0)$. One can verify that the matrix
$\lambda(t)\mathcal{A}+\mathcal{B}$ is diagonalizable, so there is a
semi-closed form solution for the problem (it is not closed a priori because
$\lambda$ still depends on $\TI$ and $N$).
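As a quick sanity check, system \myref{eq1} can be integrated numerically. The following is a minimal sketch with SciPy; all parameter values are hypothetical placeholders chosen only for illustration, not fitted values from this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values (rates in yr^-1), for illustration only
beta, nu = 100.0, 0.3      # transmission coefficient, pulmonary proportion
delta, phi = 12.0, 0.05    # exit rate from P, fast-progression fraction
omega = 0.0002             # endogenous reactivation rate
tau, k = 2.0, 0.8          # treatment rate, detection fraction
delta_T, phi_T = 2.0, 0.1  # inverse treatment length, failure fraction
sigma = 0.25               # reinfection factor
eta = mu = 1.0 / 70        # equal birth and death rates -> constant N

def rhs(t, x):
    S, P, I, L, T = x
    N = S + P + I + L + T
    lam = beta * nu * I / N                      # force of infection lambda(t)
    dS = eta * N - (lam + mu) * S
    dP = lam * S + sigma * lam * L - (delta + mu) * P
    dI = phi * delta * P + omega * L + phi_T * delta_T * T - (tau * k + mu) * I
    dL = ((1 - phi) * delta * P + (1 - phi_T) * delta_T * T
          - (sigma * lam + omega + mu) * L)
    dT = tau * k * I - (delta_T + mu) * T
    return [dS, dP, dI, dL, dT]

x0 = [0.99, 0.0, 0.01, 0.0, 0.0]   # mostly susceptible, 1% infectious
sol = solve_ivp(rhs, (0.0, 50.0), x0, rtol=1e-8)

# With eta = mu the total population is conserved along the solution
assert abs(sum(sol.y[:, -1]) - 1.0) < 1e-4
```

The conservation check reflects the fact that the right-hand sides of \myref{eq1} sum to $(\eta-\mu)N$.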
Suppose this system interacts with (a convex combination of) another two similar
systems $\tilde{X}_1$ and $\tilde{X}_2$, in the following way: there exist
functions $\gamma(t), \tilde{\gamma}(t)\in[0,1]$
and a value $\zeta\in[0,1]$ such that
\myeq{eq2}{
\left\{\begin{array}{l}
\dot{\TS}= \eta N - \left(\lambda(t)+\gamma(t)+\mu\right)\TS
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TS}_1+\zeta\tilde{\TS}_2\right),\\
\dot{\TP}=\lambda(t)\TS+\sigma\lambda(t)\TL-\left(\delta+\gamma(t)+\mu\right)\TP
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TP}_1+\zeta\tilde{\TP}_2\right),\\
\dot{\TI}=\phi\delta \TP+\omega \TL+\phi_T\delta_T \TT-\left(\tau k + \gamma(t)
+\mu\right)\TI+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TI}_1+\zeta\tilde{\TI}_2\right),\\
\dot{\TL}=(1-\phi)\delta \TP+(1-\phi_T)\delta_T \TT-\left(\sigma\lambda(t)
+\omega+\gamma(t)+\mu\right)\TL+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TL}_1
+\zeta\tilde{\TL}_2\right),\\
\dot{\TT}=\tau k \TI-\left(\delta_T+\gamma(t)+\mu\right)\TT
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TT}_1+\zeta\tilde{\TT}_2\right).
\end{array}\right.
}
Adding $N=\TS+\TP+\TI+\TL+\TT$ as a new state variable, we have
\myeq{eq3}{
\left\{\begin{array}{l}
\dot{\TS}= \eta N - \left(\lambda +\gamma(t)+\mu\right)\TS
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TS}_1+\zeta\tilde{\TS}_2\right),\\
\dot{\TP}=\lambda \TS+\sigma\lambda \TL-\left(\delta+\gamma(t)+\mu\right)\TP
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TP}_1+\zeta\tilde{\TP}_2\right),\\
\dot{\TI}=\phi\delta \TP+\omega \TL+\phi_T\delta_T \TT-\left(\tau k + \gamma(t)
+\mu\right)\TI+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TI}_1+\zeta\tilde{\TI}_2\right),\\
\dot{\TL}=(1-\phi)\delta \TP+(1-\phi_T)\delta_T \TT-\left(\sigma\lambda+\omega
+\gamma(t)+\mu\right)\TL+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TL}_1
+\zeta\tilde{\TL}_2\right),\\
\dot{\TT}=\tau k \TI-\left(\delta_T+\gamma(t)+\mu\right)\TT
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{\TT}_1+\zeta\tilde{\TT}_2\right),\\
\dot{N}=\left(\eta-\gamma(t)-\mu\right)N
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{N}_1+\zeta\tilde{N}_2\right).
\end{array}\right.
}
Let $S=\TS N^{-1}$, $P=\TP N^{-1}$, $I=\TI N^{-1}$, $L=\TL N^{-1}$,
$T=\TT N^{-1}$. These variables now represent the fraction of the
population in each state, i.e., $S+P+I+L+T=1$. Since
\myeqAN{
\dot{S}&=\dot{\TS}N^{-1}-\TS N^{-2}\dot{N}\\
&=\dot{\TS}N^{-1}-S N^{-1}\left(\left(\eta-\gamma(t)
-\mu\right)N+\tilde{\gamma}(t)\left((1-\zeta)\tilde{N}_1
+\zeta\tilde{N}_2\right)\right)\\
&=\dot{\TS}N^{-1}-\left(\eta-\gamma(t)-\mu+\tilde{\gamma}(t)\left((1-\zeta)
\tilde{N}_1+\zeta\tilde{N}_2\right){N}^{-1}\right)S,\\
&=\dot{\TS}N^{-1}-\left(M(t)-\gamma(t)-\mu\right)S,
}
with $M(t)\mdef\eta+\left((1-\zeta)\tilde{N}_1
+\zeta\tilde{N}_2\right)\tilde{\gamma}(t)N^{-1}$,
where the calculations for the other variables are similar, and adding
$\lambda(t)=\beta\nu I$ as a new state variable, we have
\myeq{eq4}{
\left\{\begin{array}{l}
\dot{S}= \eta - \left(\lambda+M(t)\right)S
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{S}_1+\zeta\tilde{S}_2\right),\\
\dot{P}=\lambda S+\sigma\lambda L-\left(\delta+M(t)\right)P
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{P}_1+\zeta\tilde{P}_2\right),\\
\dot{I}=\phi\delta P+\omega L+\phi_T\delta_T T-\left(\tau k +M(t)\right)I
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{I}_1+\zeta\tilde{I}_2\right),\\
\dot{L}=(1-\phi)\delta P+(1-\phi_T)\delta_T T-\left(\sigma\lambda+\omega
+M(t)\right)L+\tilde{\gamma}(t)\left((1-\zeta)\tilde{L}_1+\zeta\tilde{L}_2\right),\\
\dot{T}=\tau k I-\left(\delta_T+M(t)\right)T
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{T}_1+\zeta\tilde{T}_2\right),\\
\dot{\lambda}=\beta\nu\dot{I}=\beta\nu\left(\phi\delta P+\omega L
+\phi_T\delta_T T-\left(\tau k+M(t)\right)I
+\tilde{\gamma}(t)\left((1-\zeta)\tilde{I}_1+\zeta\tilde{I}_2\right)\right),\\
\dot{N}=\left(M(t)-\gamma(t)-\mu\right)N.
\end{array}\right.
}
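The chain-rule step used above to pass from $\TS$ to $S=\TS N^{-1}$ can be verified symbolically. A short SymPy sketch, treating $M(t)$ and $\gamma(t)$ as frozen symbols at a fixed instant (which suffices for the pointwise identity):

```python
import sympy as sp

t = sp.symbols('t')
N = sp.Function('N')(t)       # total population N(t)
TS = sp.Function('TS')(t)     # unnormalized susceptible compartment
M, gamma, mu = sp.symbols('M gamma mu')  # M(t), gamma(t) frozen at time t

S = TS / N                    # normalized variable S = TS / N
# Substitute dN/dt = (M - gamma - mu) N, the last equation of the system
dS = sp.diff(S, t).subs(sp.Derivative(N, t), (M - gamma - mu) * N)

# This is exactly dS = TS'/N - (M - gamma - mu) S, as in the derivation above
assert sp.simplify(dS - (sp.Derivative(TS, t) / N - (M - gamma - mu) * S)) == 0
```

The calculations for $P$, $I$, $L$ and $T$ are identical, since only the quotient rule and the equation for $\dot{N}$ are used.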
Using the above model, we consider different population groups: people living
in a high TB incidence country (A) and people living in a low TB incidence
country (B), where (B) is subdivided into a community (G) with a high percentage
of people from (A) and the remaining population (C) of (B).
We consider that the values of $\beta$, $\nu$, $\phi_T$ of the group (G) are
different from the values of the group (C). The flux of population follows
the distribution functions $\gamma_{A}$, from (A) to (B), and $\gamma_{B}$,
from (B) to (A). We assume that the persons that arrive at and depart from (B)
are split in the following proportions: $\zeta$ goes to (G) and $(1-\zeta)$ goes
to (C), with $\zeta\in[0,1]$ a fixed proportion in this model.
This model accounts for average moving numbers of persons $a^A$, $a^B$ that
increase/decrease in time with the slopes $b^A$, $b^B$ and have a seasonal
variation modeled by $p^A$, $p^B$, $\theta^A$, $\theta^B$. The flux
of population will be modeled by the following functions:
\myeq{eq5a}{
\gamma_A(t) = a^A + b^A t + a^A p^A \cos\left(\theta^A t\right),
\quad \mbox{ and } \quad
\gamma_B(t) = a^B + b^B t + a^B p^B \cos\left(\theta^B t\right),
}
for constants $a^A, a^B, b^A, b^B, p^A, p^B, \theta^A, \theta^B\in\bkR$
chosen to ensure that $0\leq \gamma_A(t),\gamma_B(t)\leq 1$ for all $t$
of the simulation.
The flux of population $\gamma_A(t)$, $\gamma_B(t)$ can be incorporated
as state-space variables. In our case, the functions $\gamma_A$, $\gamma_B$
are solutions of the system of ODEs
$$
\left\{\begin{array}{l}
\dot{\gamma}_A = z^A,\\
\dot{z}_A = -(\theta^A)^2(\gamma_A- a^A- b^A\,t),
\end{array}\right.
\quad \mbox{ and } \quad
\left\{\begin{array}{l}
\dot{\gamma}_B = z^B,\\
\dot{z}_B = -(\theta^B)^2(\gamma_B- a^B- b^B\,t),
\end{array}\right.
$$
which we add to the model~\myref{eqPA}--\myref{eqPGamma}, obtaining the complete model
with $25$ state-space variables. Note that if $V_N = (N_A,N_C,N_G)$, then
\myeq{eq6}{
\dot{V}_N=\mathcal{A}(t) V_N,
}
where
$$
\mathcal{A}(t)=\left(\begin{array}{ccc}
\eta^A-\mu^A-\gamma_A(t) & \gamma_B(t)(1-\zeta) & \gamma_B(t)\zeta\\
\gamma_A(t) (1-\zeta) & \eta^C-\mu^C-\gamma_B(t) & 0\\
\gamma_A(t) \zeta & 0 & \eta^C-\mu^C-\gamma_B(t)
\end{array}\right).
$$
So the population evolution depends only on the moving distribution
functions $\gamma_A$, $\gamma_B$, the birth rates $\eta$, and the natural death rates $\mu$.
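That the oscillator equations above reproduce the closed-form flux functions \myref{eq5a} can be checked numerically. A minimal sketch with SciPy, using illustrative (hypothetical) constants and dropping the $A$/$B$ labels:

```python
import math
from scipy.integrate import solve_ivp

# Hypothetical constants for one flux function:
# average a, linear trend b, seasonal fraction p, angular frequency theta
a, b, p, theta = 0.02, 0.001, 0.5, 2 * math.pi

def gamma_exact(t):
    # Closed form gamma(t) = a + b t + a p cos(theta t), as in (eq5a)
    return a + b * t + a * p * math.cos(theta * t)

def oscillator(t, y):
    # Forced harmonic oscillator: gamma'' = -theta^2 (gamma - a - b t)
    g, z = y
    return [z, -(theta ** 2) * (g - a - b * t)]

y0 = [gamma_exact(0.0), b]  # gamma(0) = a(1 + p), gamma'(0) = b
sol = solve_ivp(oscillator, (0.0, 5.0), y0, t_eval=[5.0],
                rtol=1e-10, atol=1e-12)

# The ODE trajectory reproduces the closed-form flux function
assert abs(sol.y[0, -1] - gamma_exact(5.0)) < 1e-5
```

The initial condition $\dot{\gamma}(0)=b$ follows from differentiating the closed form, since the sine term vanishes at $t=0$.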
Hence, we obtain the complete model composed of the four subsystems
\myref{eqPA}--\myref{eqPGamma}: (i) the variables
of the high incidence TB country
\myeq{eqPA}{
\left\{\begin{array}{l}
\dot{S}_A= \eta^A - \left(\lambda_A +M_A\right)S_A+\gamma_B\left((1-\zeta)
S_C+\zeta S_G\right),\\
\dot{P}_A=\lambda_A S_A+\sigma^A\lambda_A L_A-\left(\delta^A+M_A\right)P_A
+\gamma_B\left((1-\zeta) P_C+\zeta P_G\right),\\
\dot{I}_A=\phi^A\delta^A P_A+\omega^A L_A+\phi^A_T\delta^A_T T_A
-\left(\tau^A k^A +M_A\right)I_A+\gamma_B\left((1-\zeta) I_C
+\zeta I_G\right),\\
\dot{L}_A=(1-\phi^A)\delta^A P_A+(1-\phi^A_T)\delta^A_T T_A
-\left(\sigma^A\lambda_A+\omega^A+M_A\right)L_A+\gamma_B\left((1-\zeta) L_C
+\zeta L_G\right),\\
\dot{T}_A=\tau^A k^A I_A-\left(\delta^A_T+M_A\right)T_A
+\gamma_B\left((1-\zeta) T_C+\zeta T_G\right),\\
\dot{\lambda}_A=\beta^A\nu^A\left(\phi^A\delta^A P_A+\omega^A L_A
+\phi^A_T\delta^A_T T_A-\left(\tau^A k^A+M_A\right)I_A
+\gamma_B\left((1-\zeta) I_C+\zeta I_G\right)\right),\\
\dot{N}_A=\left(M_A-\gamma_A-\mu^A\right)N_A,\\
\end{array}\right.
}
(ii) the variables associated with the community in the host country
\myeq{eqPG}{
\left\{\begin{array}{l}
\dot{S}_G= \eta^C - \left(\lambda_G +M_G\right)S_G+\gamma_A\zeta S_A,\\
\dot{P}_G=\lambda_G S_G+\sigma^C\lambda_G L_G-\left(\delta^C+M_G\right)P_G
+\gamma_A\zeta P_A,\\
\dot{I}_G=\phi^C\delta^C P_G+\omega^C L_G+\phi^G_T\delta^C_T T_G
-\left(\tau^C k^C +M_G\right)I_G+\gamma_A\zeta I_A,\\
\dot{L}_G=(1-\phi^C)\delta^C P_G+(1-\phi^G_T)\delta^C_T T_G
-\left(\sigma^C\lambda_G+\omega^C+M_G\right)L_G+\gamma_A\zeta L_A,\\
\dot{T}_G=\tau^C k^C I_G-\left(\delta^C_T+M_G\right)T_G+\gamma_A\zeta T_A,\\
\dot{\lambda}_G=\beta^G\nu^G\left(\phi^C\delta^C P_G+\omega^C L_G
+\phi^G_T\delta^C_T T_G-\left(\tau^C k^C+M_G\right)I_G+\gamma_A\zeta I_A\right),\\
\dot{N}_G=\left(M_G-\gamma_B-\mu^C\right)N_G,\\
\end{array}\right.
}
(iii) the variables related with the population of the host country excluding
the community
\myeq{eqPC}{
\left\{\begin{array}{l}
\dot{S}_C= \eta^C - \left(\lambda_C +M_C\right)S_C+\gamma_A(1-\zeta) S_A,\\
\dot{P}_C=\lambda_C S_C+\sigma^C\lambda_C L_C-\left(\delta^C+M_C\right)P_C
+\gamma_A(1-\zeta) P_A,\\
\dot{I}_C=\phi^C\delta^C P_C+\omega^C L_C+\phi^C_T\delta^C_T T_C
-\left(\tau^C k^C +M_C\right)I_C+\gamma_A(1-\zeta) I_A,\\
\dot{L}_C=(1-\phi^C)\delta^C P_C+(1-\phi^C_T)\delta^C_T T_C
-\left(\sigma^C\lambda_C+\omega^C+M_C\right)L_C+\gamma_A(1-\zeta) L_A,\\
\dot{T}_C=\tau^C k^C I_C-\left(\delta^C_T+M_C\right)T_C+\gamma_A(1-\zeta) T_A,\\
\dot{\lambda}_C=\beta^C\nu^C\left(\phi^C\delta^C P_C+\omega^C L_C
+\phi^C_T\delta^C_T T_C-\left(\tau^C k^C+M_C\right)I_C+\gamma_A(1-\zeta) I_A\right),\\
\dot{N}_C=\left(M_C-\gamma_B-\mu^C\right)N_C,\\
\end{array}\right.
}
(iv) and the variables measuring the flux of population
\myeq{eqPGamma}{
\left\{\begin{array}{l}
\dot{\gamma}_A = z^A,\\
\dot{z}_A = -(\theta^A)^2(\gamma_A- a^A-b^A\,t),\\
\dot{\gamma}_B = z^B,\\
\dot{z}_B = -(\theta^B)^2(\gamma_B- a^B- b^B\,t),
\end{array}\right.
}
where for presentation convenience we define
\myeqAN{
M_A&=\eta^A+\left((1-\zeta) N_C+\zeta N_G\right)\gamma_B N_A^{-1},\\
M_C&=\eta^C+(1-\zeta) \gamma_A N_A N_C^{-1},\\
M_G&=\eta^C+\zeta \gamma_A N_A N_G^{-1}.
}
Note that
\myeqAN{
\dot{N}_A+\dot{N}_C+\dot{N}_G &=(\eta^A-\mu^A)N_A+(\eta^C-\mu^C)(N_C+N_G).
}
Again, if $\eta^A=\mu^A$ and $\eta^C=\mu^C$, then the total population is constant.
Moreover, if $b^A=b^B=p^A=p^B= 0$, then system \myref{eqPA}--\myref{eqPGamma} is autonomous. For
notational clarity, all parameters (i.e., constant values) have upper indices,
whereas state variables have lower indices.
\begin{figure}
\centering
\scalebox{0.90}
{
\begin{pspicture}(0,-3.1489062)(8.3828125,3.1289062)
\psframe[linewidth=0.04,dimen=outer](2.8609376,2.3489063)(1.2809376,1.5089062)
\psframe[linewidth=0.04,dimen=outer](2.8609376,0.34890625)(1.2809376,-0.49109375)
\psframe[linewidth=0.04,dimen=outer](6.6609373,0.32890624)(5.0809374,-0.51109374)
\psframe[linewidth=0.04,dimen=outer](6.6809373,-2.0310938)(5.1009374,-2.8710938)
\psframe[linewidth=0.04,dimen=outer](2.8809376,-2.0310938)(1.3009375,-2.8710938)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(2.0809374,1.4489063)(2.0809374,0.42890626)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(2.0609374,-0.55109376)(2.0809374,-2.0510938)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(5.8809376,-0.55109376)(5.9009376,-2.0510938)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(2.9609375,0.14890625)(5.0409374,0.14890625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(2.9609375,-0.27109376)(5.0409374,-0.27109376)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(2.9409375,-2.6710937)(5.0209374,-2.6710937)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(2.9809375,-2.2710938)(5.0609374,-2.2710938)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(2.0809374,3.1089063)(2.1009376,2.3689063)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(5.3409376,-0.55109376)(2.3809376,-1.9710938)
\usefont{T1}{ptm}{m}{n}
\rput(2.1023438,1.9189062){$S$}
\usefont{T1}{ptm}{m}{n}
\rput(2.0723438,-0.06109375){$P$}
\usefont{T1}{ptm}{m}{n}
\rput(5.8923435,-0.10109375){$L$}
\usefont{T1}{ptm}{m}{n}
\rput(2.0223436,-2.4210937){$I$}
\usefont{T1}{ptm}{m}{n}
\rput(5.882344,-2.4410937){$T$}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(1.1809375,2.1489062)(0.4409375,2.1489062)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(1.2209375,0.12890625)(0.4809375,0.12890625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(1.2409375,-2.2710938)(0.5009375,-2.2710938)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(1.2209375,1.9089062)(0.4809375,1.9089062)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(1.2609375,-0.09109375)(0.5209375,-0.09109375)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(1.2809376,-2.4710937)(0.5409375,-2.4710937)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(7.5209374,0.14890625)(6.7809377,0.14890625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(7.5009375,-2.2510939)(6.7609377,-2.2510939)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(7.5209374,-2.4910936)(6.7809377,-2.4910936)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(7.5209374,-0.07109375)(6.7809377,-0.07109375)
\usefont{T1}{ptm}{m}{n}
\rput(0.85234374,2.3989062){$M$}
\usefont{T1}{ptm}{m}{n}
\rput(0.83234376,0.33890626){$M$}
\usefont{T1}{ptm}{m}{n}
\rput(0.8723438,-2.0410938){$M$}
\usefont{T1}{ptm}{m}{n}
\rput(7.2323437,-1.9810938){$M$}
\usefont{T1}{ptm}{m}{n}
\rput(7.2323437,0.37890625){$M$}
\usefont{T1}{ptm}{m}{n}
\rput(0.8123438,1.6789062){$\tilde{S}$}
\usefont{T1}{ptm}{m}{n}
\rput(0.78234375,-0.34109375){$\tilde{P}$}
\usefont{T1}{ptm}{m}{n}
\rput(0.73234373,-2.7210937){$\tilde{I}$}
\usefont{T1}{ptm}{m}{n}
\rput(7.202344,-0.32109374){$\tilde{L}$}
\usefont{T1}{ptm}{m}{n}
\rput(7.2123437,-2.7210937){$\tilde{T}$}
\usefont{T1}{ptm}{m}{n}
\rput(2.4623437,0.93890625){$\lambda$}
\usefont{T1}{ptm}{m}{n}
\rput(4.092344,-0.48109376){$\sigma \lambda$}
\usefont{T1}{ptm}{m}{n}
\rput(3.9523437,0.37890625){$(1-\phi)\delta$}
\usefont{T1}{ptm}{m}{n}
\rput(6.8123436,-1.2810937){$(1-\phi_T)\delta_T$}
\usefont{T1}{ptm}{m}{n}
\rput(1.7923437,-1.1610937){$\phi \delta$}
\usefont{T1}{ptm}{m}{n}
\rput(4.032344,-2.9210937){$\phi_T \delta_T$}
\usefont{T1}{ptm}{m}{n}
\rput(3.9923437,-2.0610938){$\tau k$}
\usefont{T1}{ptm}{m}{n}
\rput{25.0}(-0.101552695,-1.7434878){\rput(3.8623438,-1.0810938){$\omega$}}
\usefont{T1}{ptm}{m}{n}
\rput(2.3223438,2.7389061){$\eta$}
\end{pspicture}
}
\caption{Model for TB transmission.}
\label{fig:model:flow1}
\end{figure}
\begin{figure}
\centering
\scalebox{0.90}
{
\begin{pspicture}(0,-1.39)(10.063281,1.39)
\psframe[linewidth=0.04,dimen=outer](9.82,0.79)(8.22,-0.81)
\usefont{T1}{ptm}{m}{n}
\rput(4.5746875,0.005){\large $A$}
\usefont{T1}{ptm}{m}{n}
\rput(0.79468745,-0.035){\large $G$}
\usefont{T1}{ptm}{m}{n}
\rput(9.024688,0.045){\large $C$}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(4.64,1.37)(4.64,0.87)
\usefont{T1}{ptm}{m}{n}
\rput(4.9028125,1.2){$\eta^A$}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(3.76,0.23)(1.74,0.23)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{<-}(8.12,0.21)(5.58,0.21)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(8.0,-0.21)(5.54,-0.21)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(3.74,-0.21)(1.72,-0.21)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(0.92,1.35)(0.92,0.85)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(9.08,1.29)(9.08,0.79)
\usefont{T1}{ptm}{m}{n}
\rput(9.3828125,1.16){$\eta^C$}
\usefont{T1}{ptm}{m}{n}
\rput(1.2128125,1.2){$\eta^G$}
\usefont{T1}{ptm}{m}{n}
\rput(2.7728124,-0.46){$\zeta \gamma_A X_A$}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(0.86,-0.83)(0.86,-1.33)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(4.64,-0.87)(4.64,-1.37)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,
arrowinset=0.4]{->}(9.04,-0.85)(9.04,-1.35)
\usefont{T1}{ptm}{m}{n}
\rput(1.2428125,-1.04){$M_G$}
\usefont{T1}{ptm}{m}{n}
\rput(5.0128126,-1.08){$M_A$}
\usefont{T1}{ptm}{m}{n}
\rput(9.452812,-1.04){$M_C$}
\usefont{T1}{ptm}{m}{n}
\rput(2.7728124,0.46){$\zeta \gamma_B X_G$}
\usefont{T1}{ptm}{m}{n}
\rput(6.732813,0.46){$(1-\zeta)\gamma_A X_A$}
\usefont{T1}{ptm}{m}{n}
\rput(6.742812,-0.44){$(1-\zeta)\gamma_B X_C$}
\psframe[linewidth=0.04,dimen=outer](5.42,0.79)(3.82,-0.81)
\psframe[linewidth=0.04,dimen=outer](1.6,0.77)(0.0,-0.83)
\end{pspicture}
}
\caption{Flow chart between a high TB incidence country (A), natives of the high
TB incidence country living in communities (G) in a low TB incidence country,
and the remainder of the population living in the low TB incidence country (C).}
\label{fig:model:flow2}
\end{figure}
\section{Reproduction number and its sensitivity analysis for the autonomous case}
\label{sec:R0}
The transmissibility of an infection can be asymptotically quantified by its
reproduction number $R_0$ (for autonomous models), defined as the mean number
of secondary infections seeded by a typical infective into a susceptible
population. Since $R_0$ governs the asymptotic stability of solutions
around a disease free equilibrium point, this value determines a threshold:
whenever $R_0 > 1$, a typical infective gives rise, on average, to more than
one secondary infection, leading to an epidemic. In contrast, when $R_0 < 1$,
infectives typically give rise, on average, to less than one secondary infection,
and the prevalence of infection cannot increase.
A key point is that the model \myref{eqPA}--\myref{eqPGamma} is \emph{a priori} nonautonomous,
due to the population fluxes $\gamma_A$ and $\gamma_B$. For this reason,
from now on we assume that $\gamma_A(t)\equiv a^A$ and $\gamma_B(t)\equiv a^B$,
i.e., $b^A=b^B=p^A=p^B=0$ in~\myref{eq5a}, so that model \myref{eqPA}--\myref{eqPGamma} becomes
autonomous and we can apply the standard method from \cite{Driessche-Watmough}.
The fully nonautonomous situation will be considered in a future work.
The reproduction number $R_0$ of system~\myref{eq1} can be analytically
determined and, when $\eta=\mu$, is given by
\myeq{eqR0s}{
R_0 = \frac{\beta\nu\delta(\delta_T+\mu)(\phi\mu+\omega)}{\mu(\delta+\mu)[
(\mu+\omega)(\tau k+\delta_T+\mu)+\delta_T \tau k(1-\phi_T)]},
}
see, e.g., \cite{TBportugalGomesRodrigues}. Hence, $R_0$ is proportional to
$\beta$, $\nu$, $\phi$, $\phi_T$ ($0<\phi_T<1$) and inversely proportional to
$\tau$ and $k$. In the no-transfer situation, i.e., $\gamma_A\equiv \gamma_B
\equiv 0$, our model reduces to the disjoint coupling of the (sub)systems
$(A)$, $(C)$ and $(G)$ similar to~\myref{eq1}, so we can compute the
reproduction numbers for the subsystems (using the fixed parameters
from Table~\ref{table:parameters:PT:AN}) in the no-transfer situation
using~\myref{eqR0s}, which gives
$$
R_0^A=\numD{6.784924946},\quad R_0^C=\numD{1.116995163},
\quad R_0^G=\numD{2.365451295},
$$
where $R_0^A$, $R_0^C$ and $R_0^G$ denote the basic reproduction numbers for
populations (A), (C) and (G), respectively, when they are completely independent
from each other (no flux of population between the compartments). For the
complete system \myref{eqPA}--\myref{eqPGamma} the basic reproduction number will be denoted
by $R_0^T$. Note that the coupling of only $(C)$ and $(G)$ (again in the
no-transfer situation and without the components associated to $(A)$) is known
in the literature as a model for heterogeneous infection risk
\cite{Gomes:etall:2012,TBportugalGomesRodrigues}.
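As a quick numerical cross-check, the closed-form expression~\myref{eqR0s} can be evaluated directly with the fixed parameter values of Table~\ref{table:parameters:PT:AN}. A minimal sketch (parameter values transcribed from the Angola and Portugal columns of the table):

```python
# Sketch: evaluate the closed-form R0 of eq. (eqR0s) with the fixed
# parameter values from the table (Angola and Portugal columns).
def R0(beta, nu, mu, delta, phi, omega, tau, k, delta_T, phi_T):
    num = beta * nu * delta * (delta_T + mu) * (phi * mu + omega)
    den = mu * (delta + mu) * (
        (mu + omega) * (tau * k + delta_T + mu)
        + delta_T * tau * k * (1 - phi_T)
    )
    return num / den

# Angola (subsystem A): should reproduce R0^A ~ 6.78
R0_A = R0(beta=150, nu=0.937, mu=1/51, delta=2, phi=0.05, omega=0.0003,
          tau=2.13, k=0.79, delta_T=1.36, phi_T=0.219)

# Portugal (subsystem C): should reproduce R0^C ~ 1.12
R0_C = R0(beta=72.358, nu=0.75, mu=1/80, delta=2, phi=0.05, omega=0.0003,
          tau=4.26, k=0.87, delta_T=1.36, phi_T=0.04)
```

Evaluating the same function with the community parameters ($\beta^G$, $\phi^G_T$, and the Portuguese values for the rest) recovers $R_0^G$ in the same way.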
The complete system~\myref{eqPA}--\myref{eqPGamma}, although a generalization of previous models,
is quite different from systems like~\myref{eq1}, because it has internal
transfer of individuals between the subsystems $(A)$, $(C)$ and $(G)$,
so $R_0^T$ is not expected to follow the same expression~\myref{eqR0s}.
It is therefore relevant to understand how $R_0^T$ is affected by variations of the
parameters. In order to verify the validity and to obtain the value of $R_0^T$,
depending on the parameters chosen, we follow the approach in~\cite{Driessche-Watmough}.
Let $x$ represent the state-space variables (in a special order) that group the
individuals in each disease state and group compartment, i.e.,
$$
x=(P_A,P_C,P_G,I_A,I_C,I_G,L_A,L_C,L_G,T_A,T_C,T_G,S_A,S_C,S_G)\in\bkR^{15}_+.
$$
Note that there exists an equilibrium point with $I_A=I_C=I_G=0$,
if $\lambda_A=\lambda_C=\lambda_G=0$ and
$$
\left\{\begin{array}{l}
\eta^A- M_A S_A+a^B\left((1-\zeta) S_C+\zeta S_G\right)=0,\\
\eta^C- M_C S_C+a^A(1-\zeta) S_A=0,\\
\eta^G- M_G S_G+a^A\zeta S_A=0,\\
-\left(\delta^A+M_A\right)P_A+a^B\left((1-\zeta) P_C+\zeta P_G\right)=0,\\
-\left(\delta^C+M_C\right)P_C+a^A(1-\zeta) P_A=0,\\
-\left(\delta^C+M_G\right)P_G+a^A\zeta P_A=0,\\
\phi^A\delta^A P_A+\omega^A L_A+\phi^A_T\delta^A_T T_A=0,\\
\phi^C\delta^C P_C+\omega^C L_C+\phi^C_T\delta^C_T T_C=0,\\
\phi^C\delta^C P_G+\omega^C L_G+\phi^G_T\delta^C_T T_G=0,\\
(1-\phi^A)\delta^A P_A+(1-\phi^A_T)\delta^A_T T_A
-\left(\omega^A+M_A\right)L_A+a^B\left((1-\zeta) L_C
+\zeta L_G\right)=0,\\
(1-\phi^C)\delta^C P_C+(1-\phi^C_T)\delta^C_T T_C
-\left(\omega^C+M_C\right)L_C+a^A(1-\zeta) L_A=0,\\
(1-\phi^C)\delta^C P_G+(1-\phi^G_T)\delta^C_T T_G
-\left(\omega^C+M_G\right)L_G+a^A\zeta L_A=0,\\
-\left(\delta^A_T+M_A\right)T_A+a^B\left((1-\zeta) T_C+\zeta T_G\right)=0,\\
-\left(\delta^C_T+M_C\right)T_C+a^A(1-\zeta) T_A=0,\\
-\left(\delta^C_T+M_G\right)T_G+a^A\zeta T_A=0.
\end{array}\right.
$$
From the last three equations, we have
$$ \left(\begin{array}{ccc}
-\delta^A_T-M_A & a^B(1-\zeta) & a^B\zeta\\
a^A(1-\zeta) & -\delta^C_T-M_C & 0\\
a^A\zeta & 0 & -\delta^C_T-M_G
\end{array}\right)
\left(\begin{array}{c}
T_A\\ T_C\\T_G
\end{array}\right)
=
\left(\begin{array}{c}
0 \\ 0 \\ 0
\end{array}\right)
\:\:\Rightarrow\:\:T_A=T_C=T_G=0.
$$
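The implication above rests on the coefficient matrix of this homogeneous linear system being nonsingular. A quick numerical sanity check, with purely illustrative values for $a^A$, $a^B$, $\zeta$, $\delta_T$ and a common removal rate $M$ (placeholders, not the paper's calibrated parameters):

```python
import numpy as np

# Sketch: the linear system for (T_A, T_C, T_G) only forces T = 0 if its
# coefficient matrix is nonsingular. Illustrative (not calibrated) values:
aA, aB, zeta = 0.05, 0.05, 0.5
dT, M = 1.36, 0.05   # treatment exit rate and a common removal rate M_X

C = np.array([
    [-dT - M, aB * (1 - zeta), aB * zeta],
    [aA * (1 - zeta), -dT - M, 0.0],
    [aA * zeta, 0.0, -dT - M],
])

# Nonzero determinant => the homogeneous system has only the trivial solution
assert abs(np.linalg.det(C)) > 1e-6
```

Since the diagonal entries dominate the off-diagonal transfer rates, the determinant stays away from zero for any reasonable parameter choice.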
In the same way we can see, from the fourth to sixth equations,
that $P_A=P_C=P_G=0$ and, from the remaining equations,
that $L_A=L_C=L_G=0$. Since $\eta^A, \eta^C\neq0$ and
\myeqAN{
M_A S_A&=\eta^A S_A+\left((1-\zeta) S_C+\zeta S_G\right)a^B,\\
M_C S_C&=\eta^C S_C+(1-\zeta) a^A S_A,\\
M_G S_G&=\eta^C S_G +\zeta a^A S_A,
}
from the first three equations, we have $S_A= S_C = S_G = 1$. Hence,
the disease free equilibrium point (DFE) is unique and given by
$$
x_0=(0,0,0,0,0,0,0,0,0,0,0,0,1,1,1),
$$
and it makes sense to define the set of all disease free states $X_s$ as
$$
X_s=\{(0,0,0,0,0,0,0,0,0,0,0,0,S_A,S_C,S_G)\in\bkR^{15}\::\: S_A, S_C, S_G\geq 0\}.
$$
In our model individuals make their first contact with the infection in the
states $P_A, P_C, P_G$. We have $m=12$ states in which individuals have different
degrees of infection and $3$ disease free states. The vector field $X$
in~\myref{eqPA}--\myref{eqPGamma} is now divided as $X=\mathcal{F}-(\mathcal{V}^--\mathcal{V}^+)$,
where $\mathcal{F}$ is the rate of appearance of new infections, $\mathcal{V}^+$
is the rate of in-transfers of individuals by other means, and $\mathcal{V}^-$
is the rate of out-transfers of individuals by other means. We have
{\footnotesize
$$
\mathcal{F}_{1-3}(x)
=\left(\begin{array}{c}
\beta^A\nu^A I_A \left(S_A+\sigma^A L_A\right)
+a^B \left((1-\zeta) P_C+\zeta P_G\right)\\
\beta^C\nu^C I_C \left(S_C+\sigma^C L_C\right)+a^A(1-\zeta) P_A\\
\beta^G\nu^G I_G \left(S_G+\sigma^C L_G\right)+a^A\zeta P_A
\end{array}\right), \quad \mathcal{F}_j(x) = 0
\mbox { for } j\in\{4,\cdots, 15\},
$$
$$
\mathcal{V}^+(x)- \mathcal{V}^-(x)=\left(\begin{array}{c}
0\\
0\\
0\\
\phi^A\delta^A P_A+\omega^A L_A+\phi^A_T\delta^A_T T_A
+a^B \left((1-\zeta) I_C+\zeta I_G\right)\\
\phi^C\delta^C P_C+\omega^C L_C+\phi^C_T\delta^C_T T_C+ a^A (1-\zeta) I_A\\
\phi^C\delta^C P_G+\omega^C L_G+\phi^G_T\delta^C_T T_G+ a^A\zeta I_A\\
(1-\phi^A)\delta^A P_A+(1-\phi^A_T)\delta^A_T T_A
+a^B \left((1-\zeta) L_C+\zeta L_G\right)\\
(1-\phi^C)\delta^C P_C+(1-\phi^C_T)\delta^C_T T_C+a^A (1-\zeta) L_A\\
(1-\phi^C)\delta^C P_G+(1-\phi^G_T)\delta^C_T T_G+a^A \zeta L_A\\
\tau^A k^A I_A+a^B \left((1-\zeta) T_C+\zeta T_G\right)\\
\tau^C k^C I_C+a^A (1-\zeta) T_A\\
\tau^C k^C I_G+a^A \zeta T_A\\
\eta^A +a^B \left((1-\zeta) S_C+\zeta S_G\right)\\
\eta^C +a^A (1-\zeta) S_A\\
\eta^C +a^A \zeta S_A\\
\end{array}\right)-\left(\begin{array}{c}
\left(\delta^A+M_A\right)P_A\\
\left(\delta^C+M_C\right)P_C\\
\left(\delta^C+M_G\right)P_G\\
\left(\tau^A k^A +M_A\right)I_A\\
\left(\tau^C k^C +M_C\right)I_C\\
\left(\tau^C k^C +M_G\right)I_G\\
\left(\sigma^A\lambda_A+\omega^A+M_A\right)L_A\\
\left(\sigma^C\lambda_C+\omega^C+M_C\right)L_C\\
\left(\sigma^C\lambda_G+\omega^C+M_G\right)L_G\\
\left(\delta^A_T+M_A\right)T_A\\
\left(\delta^C_T+M_C\right)T_C\\
\left(\delta^C_T+M_G\right)T_G\\
\left(\lambda_A +M_A\right)S_A\\
\left(\lambda_C +M_C\right)S_C\\
\left(\lambda_G +M_G\right)S_G\\
\end{array}\right).$$
}
Note that $\mathcal{F}_{1-3}$ denotes the entries of $\mathcal{F}$ from $1$
to $3$. Then $\mathcal{F}$ and $\mathcal{V}= \mathcal{V}^+- \mathcal{V}^-$
satisfy the following assumptions:
\begin{itemize}
\item[$(A_1)$] if $x\geq0$, then $\mathcal{F}(x)$, $\mathcal{V}^+(x)$,
$\mathcal{V}^-(x)\geq 0$ (each function represents a direct transfer of individuals);
\item[$(A_2)$] if $x_i=0$, then $\mathcal{V}_i^-(x)=0$
(if the compartment is empty, then there cannot be out-transfers of individuals);
\item[$(A_3)$] $\mathcal{F}_i(x)=0$ for $i>12$;
\item[$(A_4)$] if $x\in X_s$, then $\mathcal{F}_i(x)=0$ and $\mathcal{V}_i^+(x)=0$
for $1\leq i\leq 12$ (if the population is free of disease,
then it will remain free of disease);
\item[$(A_5)$] when $\mathcal{F}(x)=0$ we have that $DX(x_0)$ is a Hurwitz
matrix, i.e., all eigenvalues have negative real part
(the equilibrium point $x_0$ is asymptotically stable).
\end{itemize}
Only assumption $(A_5)$ creates some difficulty, since the other assumptions
are evident. We numerically checked~$(A_5)$ (in all calculations made)
using the Routh--Hurwitz criterion, which states that the matrix $A=D X(x_0)$
is Hurwitz if and only if all the principal subdeterminants of a special matrix,
constructed with the coefficients of the characteristic polynomial of $A$,
are strictly positive.
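Numerically, assumption $(A_5)$ can also be verified by inspecting the eigenvalues of the Jacobian directly, which is equivalent to the Routh--Hurwitz test. A minimal sketch of such a check (the toy matrices below are illustrations, not the model's $15\times 15$ Jacobian):

```python
import numpy as np

def is_hurwitz(A, tol=1e-12):
    """A real square matrix is Hurwitz iff every eigenvalue
    has strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < -tol))

# Toy illustrations:
assert is_hurwitz(np.array([[-1.0, 2.0], [0.0, -3.0]]))     # stable
assert not is_hurwitz(np.array([[0.5, 0.0], [0.0, -1.0]]))  # unstable
```

In practice one builds $DX(x_0)$ with the chosen parameter values and calls `is_hurwitz` on it.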
By Lemma~1 in~\cite{Driessche-Watmough}, the derivatives $D\mathcal{F}(x_0)$
and $D\mathcal{V}(x_0)$ are partitioned as
$$
D\mathcal{F}(x_0)=\left(\begin{array}{cc}F & 0\\0 & 0\end{array}\right)
\quad\mbox{ and }\quad
D\mathcal{V}(x_0)=\left(\begin{array}{cc}V & 0\\J_3 & J_4\end{array}\right),
$$
where $F$ and $V$ are $m\times m$-matrices. Hence,
we have $F_{i,j}(x)=0$, if $i>m$ or $j>m$, and
$$
F_{1-6,1-6}=\left(\begin{array}{cccccc}
0 & a^B (1-\zeta) & a^B \zeta & \beta^A\nu^A & 0 & 0\\
a^A(1-\zeta) & 0 & 0 & 0 & \beta^C\nu^C & 0\\
a^A\zeta & 0 & 0 & 0 & 0 & \beta^G\nu^G\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}\right).
$$
The critical threshold function $R_0^T$ is then given as the spectral radius
of the matrix $A=FV^{-1}$. We have that $A$ has all entries zero except
\myeqAN{
A_{1,i}&= a^B (1-\zeta)V_{2,i}^{-1}+a^B \zeta V_{3,i}^{-1}
+\beta^A\nu^A V_{4,i}^{-1},\\
A_{2,i}&= a^A (1-\zeta) V_{1,i}^{-1}+\beta^C\nu^C V_{5,i}^{-1},\\
A_{3,i}&= a^A \zeta V_{1,i}^{-1}+\beta^G\nu^G V_{6,i}^{-1}.
}
Considering the algebraic complexity of computing the spectral radius of $A$,
in the next subsection we proceed numerically, studying how $R_0^T$ varies
with the parameters.
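The numerical recipe is the standard next-generation construction: assemble $F$ and $V$ at the DFE and take the spectral radius of $FV^{-1}$. A self-contained sketch on a toy two-compartment (exposed/infectious) model, where the known answer is $R_0=\beta\sigma/((\sigma+\mu)(\gamma+\mu))$ (generic symbols, not the paper's parameters):

```python
import numpy as np

def next_generation_R0(F, V):
    """R0 as the spectral radius of the next-generation matrix F V^{-1}."""
    K = F @ np.linalg.inv(V)
    return float(max(abs(np.linalg.eigvals(K))))

# Toy SEIR-type check: new infections enter E at rate beta*I,
# E -> I at rate sigma, I removed at rate gamma (mu = 0 here),
# so the expected answer is beta/gamma.
beta, sigma, gamma = 0.5, 0.2, 0.1
F = np.array([[0.0, beta], [0.0, 0.0]])
V = np.array([[sigma, 0.0], [-sigma, gamma]])
assert abs(next_generation_R0(F, V) - beta / gamma) < 1e-12
```

For the complete model the same function is applied to the $12\times 12$ blocks $F$ and $V$ described above.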
\subsection{Sensitivity analysis: numerical simulations}
The values of the parameters $\beta$, $\nu$, $\mu$, $\delta$, $\phi$, $\sigma$,
$\omega$, $\tau$, $k$, $\delta_T$ and $\phi_T$ estimated for Portugal are based
on the values proposed in \cite{TBportugalGomesRodrigues}, as are the initial
conditions $N(0)$, $\TS(0)$, $\TP(0)$, $\TL(0)$, $\TI(0)$, $\TT(0)$. We assume
that the Portuguese total population will decrease ($\eta < \mu N$), based on
the projections for resident population in Portugal from \emph{Statistics Portugal}
\cite{INE2014} and the value for TB induced death that comes from \cite{Styblo_1991}.
We assume that the reference value for the transmission coefficient in Angola
is $\beta = 150$ based on \cite{noticiasAngola2}. According to the World Bank,
the natural death rate in Angola is equal to $\mu = 1/51 \, yrs^{-1}$
\cite{worldbankdT}. The value for the TB induced death rate is based
on \cite{Styblo_1991}. The proportion of pulmonary TB cases in Angola is equal
to $\nu = 0.937$ and the fraction of treatment default and failure for individuals
under treatment is equal to $\phi_T = 0.219$ \cite{noticiasAngola1}. We assume
that the reinfection factor $\sigma$ in Angola takes the value proposed
in \cite{TBportugalGomesRodrigues}. According to WHO, the proportion
of detected cases in a year is equal to $k=0.79$ \cite{WHO:TB:report:2013}.
The rate at which infectious individuals enter treatment is estimated to be
$\tau = 2.13 \, yrs^{-1}$. The values of the parameters $\delta$, $\phi$,
$\omega$ and $\delta_T$ are taken from \cite{TBportugalGomesRodrigues}.
The recruitment rate value $\eta = 1287900$ is based on the population
projections from Population Reference Bureau \cite{PopulationRankings}.
The initial conditions $N(0)$, $\TS(0)$, $\TP(0)$, $\TL(0)$, $\TI(0)$,
$\TT(0)$ are based on data from \cite{SilvaTorresTBAngola,wikiAngola,noticiasAngola2}.
All previous values are summarized in Table~\ref{table:parameters:PT:AN}.
\begin{table}[!htb]
\centering
\begin{tabular}{|l | l | l | l |}
\hline
{\scriptsize{Symbol}} & {\scriptsize{Description}} & {\scriptsize{Portugal}}
& {\scriptsize{Angola}}\\
\hline
{\scriptsize{$\beta$}} & {\scriptsize{Transmission coefficient}}
& {\scriptsize{variable ($72.358 \, yrs^{-1}$) }}
& {\scriptsize{variable ($150 \, yrs^{-1}$)}}\\
{\scriptsize{$\nu$}} & {\scriptsize{Proportion of pulmonary TB cases}}
& {\scriptsize{$0.75$}} & {\scriptsize{$0.937$}}\\
{\scriptsize{$\mu$}} & {\scriptsize{Natural death rate}}
& {\scriptsize{$1/80 \, yrs^{-1}$}} & {\scriptsize{$1/51 \, yrs^{-1}$}}\\
{\scriptsize{$\delta$}} & {\scriptsize{Rate at which individuals leave P compartment}}
& {\scriptsize{$2 \, yrs^{-1}$}} & {\scriptsize{$2 \, yrs^{-1}$}}\\
{\scriptsize{$\phi$}} & {\scriptsize{Fraction of infected population developing active TB}}
& {\scriptsize{$0.05$}} & {\scriptsize{$0.05$}}\\
{\scriptsize{$\sigma$}} & {\scriptsize{Reinfection (exogenous) factor for latent}}
& {\scriptsize{$0.5$}} & {\scriptsize{$0.5$}}\\
{\scriptsize{$\omega$}} & {\scriptsize{Rate of endogenous reactivation for latent infections}}
& {\scriptsize{$0.0003 \, yrs^{-1}$}} & {\scriptsize{$0.0003 \, yrs^{-1}$}}\\
{\scriptsize{$\tau$}} & {\scriptsize{Rate at which infectious individuals enter treatment}}
& {\scriptsize{$4.26 \, yrs^{-1}$}} & {\scriptsize{$2.13 \, yrs^{-1}$}}\\
{\scriptsize{$k$}} & {\scriptsize{Proportion of detected cases in a year}}
& {\scriptsize{$0.87$}} & {\scriptsize{$0.79$}}\\
{\scriptsize{$\delta_T$}} & {\scriptsize{Inverse of treatment length}}
& {\scriptsize{$1.36 \, yrs^{-1}$ }} & {\scriptsize{$1.36 \, yrs^{-1}$}}\\
{\scriptsize{$\phi_T$}} & {\scriptsize{Fraction of treatment default and failure}}
& {\scriptsize{$0.04$}} & {\scriptsize{$0.219$}}\\
{\scriptsize{$\eta$}} & {\scriptsize{Recruitment rate}}
& {\scriptsize{$78672$ }} & {\scriptsize{$1287900$ }} \\
{\scriptsize{$d_T$}} & {\scriptsize{TB induced death rate}}
& {\scriptsize{$1/5 \, yrs^{-1}$ }} & {\scriptsize{$1/8 \, yrs^{-1}$}} \\
{\scriptsize{$N(0)$}} & {\scriptsize{Initial total population}}
& {\scriptsize{10560000}} & {\scriptsize{24300000}}\\
{\scriptsize{$\TS(0)$}} & {\scriptsize{Initial susceptible population}}
& {\scriptsize{$8947300$}} & {\scriptsize{$9618729$}}\\
{\scriptsize{$\TP(0)$}} & {\scriptsize{Initial primary infected with TB population}}
& {\scriptsize{$11000$}} & {\scriptsize{$24300$}}\\
{\scriptsize{$\TI(0)$}} & {\scriptsize{Initial actively infected (and infectious) population}}
& {\scriptsize{$500$}} & {\scriptsize{$16164$}}\\
{\scriptsize{$\TL(0)$}} & {\scriptsize{Initial latent infected population}}
& {\scriptsize{$1600000$}} & {\scriptsize{$14580000$}}\\
{\scriptsize{$\TT(0)$}} & {\scriptsize{Initial under treatment population}}
& {\scriptsize{$1200$}} & {\scriptsize{$60807$}}\\
\hline
\end{tabular}
\caption{Estimated parameters and initial conditions values for Portugal and Angola.}
\label{table:parameters:PT:AN}
\end{table}
If we first keep all parameters fixed (see Table~\ref{table:parameters:PT:AN}),
we have
$$
R_0^T= \numD{6.359799999}.
$$
Then we vary one of the parameters $\beta^A$, $\beta^C$, $\beta^G$, $k^C$,
$\phi^G_T$, $a^A$, $a^B$, or $\zeta$ in the ranges
$$
\begin{array}{cc}
150(1-\theta)\leq\beta^A\leq 150(1+\theta), &
72.358(1-\theta)\leq\beta^C\leq 72.358(1+\theta), \\
\beta^C=72.358\leq \beta^G\leq 150=\beta^A, &
0.87(1-\theta)\leq k^C\leq 0.87(1+\theta), \\
\phi_T^C=0.04\leq \phi_T^G\leq 0.219=\phi_T^A, &
0\leq a^A\leq 0.1, \\
0\leq a^B \leq 0.1, &
0\leq \zeta\leq 1, \\
\end{array}
$$
where $\theta=0.2$. Each simulation gives a curve $x\mapsto R_0^T(x)$,
where $x$ is one of the above parameters, for which we find a best fitting
curve in one of the models
\myeq{fitmodels}{
P_n(x)=a_0 +a_1 x+ a_2 x^2+\dots+a_n x^n, \:\:n\in\{0,\dots,99\},
\quad\mbox{ and }\quad \frac{a_0+a_1 x + a_2 x^2}{b_0+b_1 x + b_2 x^2},
}
for some constants $a_0,\dots, a_n,b_0, b_1, b_2\in\bkR$.
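A fit of this kind can be reproduced with a least-squares polynomial solver, scoring the result by $SQR=\sqrt{\sum_i r_i^2}$ as in the tables below. A minimal sketch on synthetic data (not the paper's $R_0^T$ samples):

```python
import numpy as np

# Sketch: fit P_n(x) by least squares and score it with SQR = sqrt(sum r_i^2).
x = np.linspace(0.0, 1.0, 50)
y = 6.38 - 0.115 * x + 0.122 * x**2          # synthetic "R0(x)" samples

coeffs = np.polyfit(x, y, deg=2)             # highest-degree coefficient first
residuals = y - np.polyval(coeffs, x)
sqr = float(np.sqrt(np.sum(residuals**2)))

# An exact quadratic is recovered up to rounding error
assert sqr < 1e-8
assert abs(coeffs[0] - 0.122) < 1e-6         # x^2 coefficient
```

Rational fits of the form $(a_0+a_1x+a_2x^2)/(b_0+b_1x+b_2x^2)$ require a nonlinear solver instead (e.g., `scipy.optimize.curve_fit`), but the $SQR$ scoring is identical.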
\begin{table}[!htb]
\centering
\begin{tabular}{| c | c | c | c |}
\hline
{\scriptsize Parameter} & {\scriptsize Type} & {\scriptsize Curve Fitting}
& {\scriptsize $\log_{10}(SQR)$}\\
\hline
{\scriptsize $\beta^A$} & {\scriptsize best} & {\scriptsize $R_0^T
=\numD{0.009021048}+\numD{0.0422905212}\, \beta^A
+\numD{4.39929508e-7}\, (\beta^A)^2$} & \ \\
\ & \ & {\scriptsize $\numD{-7.99262125e-10}\, (\beta^A)^3$}
& {\scriptsize $\numD{-5.278627639}$}\\
\ & {\scriptsize as in $R_0^A$} & {\scriptsize $R_0^T=\numD{.042398419}\, \beta^A$}
& {\scriptsize $\numD{-2.2700918}$}\\
\ & \ & \ & \ \\
{\scriptsize $\beta^C$} & {\scriptsize best} & {\scriptsize $R_0^T
=\numD{6.3577692}+\numD{2.2968396e-005}\, \beta^C
+\numD{6.545004e-008}\, (\beta^C)^2$} & \ \\
\ & \ & {\scriptsize $+\numD{6.933227073e-12}\, (\beta^C)^3
+\numD{8.83463551e-13}\, (\beta^C)^4$} & {\scriptsize \numD{-7.055330529}} \\
\ & {\scriptsize as in $R_0^C$} & {\scriptsize $R_0^T
=\numD{.08671477}\, \beta^C$} & {\scriptsize $\numD{.8692731}$}\\
\ & \ & \ & \ \\
{\scriptsize $\beta^G$} & {\scriptsize best}& {\footnotesize $R_0^T
=\frac{\numD{-3639.13063363}+\numD{1285.78172120}\, \beta^G
-\numD{4.168861981}\, (\beta^G)^2}{\numD{-572.32092506}+\numD{202.21340477}\,
\beta^G -\numD{0.65585384}\, (\beta^G)^2}$} & {\scriptsize $\numD{-7.0129799}$} \\
\ & {\scriptsize as in $R_0^G$} & {\scriptsize $R_0^T=\numD{.0551009}\, \beta^G$}
& {\scriptsize $\numD{1.097186}$}\\
\ & \ & \ & \ \\
{\scriptsize $k^C$} & {\scriptsize best} & {\scriptsize $R_0^T=P_{89}(k^C)$}
& {\scriptsize $\numD{-7.058232835}$}\\
\ & {\scriptsize not best} & {\footnotesize $R_0^T=\frac{\numD{-27.075657}
+\numD{53.2692494}\,k^C+\numD{215.9660609}\, (k^C)^2}{\numD{-4.2640340}
+\numD{8.36667709}\, k^C +\numD{33.97750881}\, (k^C)^2}$}
& {\scriptsize $\numD{-6.6542336}$}\\
\ & \ & \ & \ \\
{\scriptsize $\phi_T^G$} & {\scriptsize best} & {\scriptsize $R_0^T=P_{21}(\phi_T^G)$}
& {\scriptsize $\numD{-7.007144237}$}\\
\ & {\scriptsize not best} & {\footnotesize $R_0^T=\frac{\numD{1103.5908788}
-\numD{1522.87667500}\, \phi_T^G}{\numD{173.537338}-\numD{239.505221}\, \phi_T^G}$}
& {\scriptsize $\numD{-7.002127}$}\\
\ & \ & \ & \ \\
{\scriptsize $a^A$} & {\scriptsize best} & {\footnotesize $R_0^T
=\frac{\numD{38.07598473}+\numD{2747.0900415}\, a^A+\numD{42528.207079}\,
(a^A)^2}{\numD{6.0248383}+\numD{419.985378}\, a^A+\numD{6317.049255}\, (a^A)^2}$}
& {\scriptsize $\numD{-4.27964}$}\\
\ & \ & \ & \ \\
{\scriptsize $a^B$} & {\scriptsize best} & {\scriptsize $R_0^T
=\numD{6.78217128}-\numD{77.5566782}\,a^B +\numD{2571.6270531}\, (a^B)^2$} & \ \\
\ & \ & {\scriptsize $-\numD{68307.97202742} \, (a^B)^3+ \numD{1194841.268572}\,
(a^B)^4- \numD{12711602.9588141}\, (a^B)^5$} & \ \\
\ & \ & {\scriptsize $+\numD{73922994.9730162}\,(a^B)^6-\numD{179395541.509671}\,(a^B)^7$}
& {\scriptsize $\numD{-2.1925449}$}\\
\ & \ & \ & \ \\
{\scriptsize $\zeta$} & {\scriptsize best} & {\scriptsize $R_0^T=P_{91}(\zeta)$}
& {\scriptsize $\numD{-7.11446753}$}\\
\ & {\scriptsize not best} & {\scriptsize $R_0^T=\numD{6.383321621}
-\numD{.1147570445}\, \zeta+\numD{.1218747529}\, \zeta^2$}
& {\scriptsize $\numD{-3.0198063}$}\\
\hline
\end{tabular}
\caption{Curve fitting of $R_0^T$.}
\label{R0TFit}
\end{table}
Table~\ref{R0TFit} shows several curve fittings for the map $x\mapsto R_0^T(x)$.
By ``best fitting'' we mean a model, chosen among the
models~\myref{fitmodels}, for which the square root of the sum of squares of the
residuals, $SQR=\sqrt{\sum_i r_i^2}$, attains a minimum value or is smaller than
the precision with which $R_0^T$ is determined, i.e., $10^{-8}$. The same
procedure applied to $R_0^A, R_0^C, R_0^G$ gave results compatible
with the analytic formula~\myref{eqR0s}.
\subsubsection{Variation of the TB transmission rates
(i.e., changing $\beta^A$, $\beta^C$ and $\beta^G$)}
A variation of $20\%$ in the value of $\beta^A$ implies a variation of
approximately $20\%$ in $R_0^T$. However, the same variation of $20\%$ in the
values of $\beta^C$ and $\beta^G$ affects $R_0^T$ by less than $1\%$. Contrary
to~\myref{eqR0s}, the parameters $\beta^A, \beta^C, \beta^G$ do not appear
linearly in the calculation of $R_0^T$, although locally $R_0^T$ looks similar
to an affine function, see Fig.~\ref{R0Tbeta}.
\begin{figure}[!htb]
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{R0-Abeta-T-cropped.eps}
\SFcaption{$R_0^T \in [5.1, 7.6]$ vs $\beta^A \in [120, 180]$}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{R0-Cbeta-T-cropped.eps}
\SFcaption{$R_0^T \in [6.3593, 6.3603]$ vs $\beta^C \in [78, 87]$}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{R0-Gbeta-T-cropped.eps}
\SFcaption{$R_0^T \in [6.3592, 6.3607]$ vs $\beta^G \in [70,150]$}
\end{subfigure}
\caption{$R_0^T$ when varying $\beta^A$, $\beta^C$ and $\beta^G$, respectively.}
\label{R0Tbeta}
\end{figure}
The variation of $\beta^A$ also has a significant impact on the community and
the host country, namely, on the number of infected and infectious individuals
after $5$ years, see Fig.~\ref{CIGI_ABETA}. Defining
$I_X(t, s)=\left.I_X(t)\right|_{\beta^A=s}$ with $X\in\{C,G\}$, we have
$$
\frac{I_C(5,180)}{I_C(5,150)}\approx 1.24,
\quad \frac{I_C(5,120)}{I_C(5,150)}\approx 0.70,
\quad\frac{I_G(5,180)}{I_G(5,150)}\approx 1.20,
\quad \frac{I_G(5,120)}{I_G(5,150)}\approx 0.77.
$$
An increase (decrease) of $20\%$ in $\beta^A$ implies, after $5$ years, an
increase of approximately $20\%$ (a decrease of approximately $30\%$) in both
$I_C$ and $I_G$. This reinforces the importance of additional efforts to treat
TB in countries with high TB incidence, not only to improve the health of their
own populations, but also because of the implications for the health of
individuals in host countries.
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{Abeta-CI-cropped.eps}
\SFcaption{$I_C(t) \in [0, 0.0006]$ vs $t \in [0, 5]$}
\end{subfigure}
\hspace*{0.1cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{Abeta-GI-cropped.eps}
\SFcaption{$I_G(t) \in [0, 0.0004]$ vs $t \in [0, 5]$}
\end{subfigure}
\caption{$I_C(t)$ and $I_G(t)$ when varying $\beta^A$
(box: $\beta^A=120$, solid: $\beta^A=150$, cross: $\beta^A=180$).}
\label{CIGI_ABETA}
\end{figure}
\subsubsection{Variation in the transfer of individuals (i.e., changing $a^A$ and $a^B$)}
The transfer of individuals between (A) and (C)+(G) (i.e., (B)) is determined by the
functions $\gamma_A(t)$ and $\gamma_B(t)$, which are here assumed to be equal
to the parameters $a^A$ and $a^B$. From Fig.~\ref{R0Ta} it is clear, as expected,
that an increment in the flux of individuals moving from areas of lower
TB incidence to areas of higher TB incidence reduces $R_0^T$ and,
on the contrary, an increment in the flux of individuals moving from areas
of high TB incidence to areas of lower TB incidence increases $R_0^T$.
Note that $R_0^T$ grows very fast for small values of $a^A$ and then tends
to stabilize with the flux of persons coming from the high TB incidence area.
An interesting phenomenon when varying $a^A$ appears in the variable $I_G$,
i.e., the number of infected individuals in (G) (the community), see Fig.~\ref{R0Tb}.
It tells us that it is better for the community to have some moderate exchange
of persons with the high TB incidence region. This behavior, and its reversal
after some time, seems to be related to the chosen value of $\zeta$
(discussed in the next subsection). It also implies that a careful study of the
seasonal distribution of persons traveling between (A) and (B) may be more relevant
for (G) than expected a priori. From the host country's viewpoint, this phenomenon
is not noticeable, as one can see from the evolution of the total number of infected
individuals in the host country, i.e., $I_C(t)\,N_C(t)+I_G(t)\,N_G(t)$, see Fig.~\ref{R0Tb}.
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{R0-aA-T-cropped.eps}
\SFcaption{$R_0^T \in [6.3, 6.7]$ vs $a^A \in [0, 0.1]$}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{R0-aP-T-cropped.eps}
\SFcaption{$R_0^T \in [4.8, 6.8]$ vs $a^B \in [0, 0.1]$}
\end{subfigure}
\caption{$R_0^T$ when varying $a^A$ and $a^B$, respectively.}
\label{R0Ta}
\end{figure}
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{aA-GI-cropped.eps}
\SFcaption{$I_G(t) \in [0, 0.0008]$ vs $t \in [0, 5]$}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{aA-TPI-cropped.eps}
\SFcaption{$\mathcal{N} \in [0, 35000]$ vs $t \in [0, 5]$}
\end{subfigure}
\caption{$I_G(t)$ and total number $\mathcal{N}$ of infected individuals
in $(C)+(G)$ when varying $a^A$ (box: $a^A=0$, solid: $a^A=0.05$, cross: $a^A=0.1$).}
\label{R0Tb}
\end{figure}
\subsubsection{About the ratio of individuals that stay in the community
versus spread in the host country (i.e., changing $\zeta$)}
In what follows we analyze the impact of the existence of a community
of immigrants coming from a high TB incidence area on the host country,
the country of origin, and the global situation.
Recall that $\zeta$ is the fraction of traveling persons that come/go
specifically to (G), versus the complementary (C). Hence, $\zeta=0$
means that all persons traveling between Angola and Portugal come/go
to (C) and none to (G). On the contrary, $\zeta=1$ means that all persons
traveling between Angola and Portugal come/go to (G). From the analysis of
Fig.~\ref{FigZETA} (right), it is clear that the existence of a community
of immigrants coming from a high TB incidence area is convenient for the host
country in order to better control TB spread. From the point of view
of Angola, a change in $\zeta$ is not significant, as one can see
in Table~\ref{Dvalues}: $I_A$ is not affected by a change in $\zeta$.
From a global viewpoint, a change in $\zeta$ has a big impact on the reproduction
number $R_0^T$, see Fig.~\ref{FigZETA} (left), for which the existence
of communities also turns out to be convenient. In fact, the function attains
a minimum value that can be estimated from the approximate fitting
by a parabolic function as
$$
R_0^T(\zeta) = \numD{6.383321621} - \numD{.1147570445}\, \zeta
+ \numD{.1218747529}\, \zeta^2,
$$
see Table~\ref{R0TFit}. Hence, we may say that the optimal value for $\zeta$
is approximately
$$
\operatorname*{arg\,min}_{0\leq\zeta\leq 1}R_0^T(\zeta)
= \frac{\numD{.1147570445}}{2\times \numD{.1218747529}}\approx 0.47.
$$
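The minimizer quoted above is just the vertex of the fitted parabola; a one-line check with the coefficients from the $\zeta$ row of Table~\ref{R0TFit} (note the negative linear coefficient there):

```python
# Sketch: vertex of the fitted parabola a0 + a1*zeta + a2*zeta^2,
# coefficients taken from the zeta row of the curve-fitting table.
a0, a1, a2 = 6.383321621, -0.1147570445, 0.1218747529

zeta_star = -a1 / (2 * a2)                       # argmin (a2 > 0)
R0_min = a0 + a1 * zeta_star + a2 * zeta_star**2  # minimum value itself
```

This places the minimizer near $\zeta\approx 0.47$, with a corresponding minimum $R_0^T\approx 6.356$.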
\begin{figure}[ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{R0-ZETA-T-cropped.eps}
\SFcaption{$R_0^T \in [6.35, 6.39]$ vs $\zeta \in [0, 1]$}
\end{subfigure}
\hspace*{0.1cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{ZETA-TPI-cropped.eps}
\SFcaption{$\mathcal{H}(t) \in [0, 7500]$ vs $t \in [0, 5]$}
\end{subfigure}
\caption{$R_0^T$ versus $\zeta$ and total number
$\mathcal{H}(t) = I_C(t)\,N_C(t)+I_G(t)\,N_G(t)$ of infected
individuals in the host country versus $t$ when changing $\zeta$
(box: $\zeta=0$; solid: $\zeta=0.5$; cross: $\zeta=1$).}
\label{FigZETA}
\end{figure}
\section{Numerical results and discussion}
\label{sec:numeric}
Regarding the sensitivity analysis, we numerically simulated the
system~\myref{eqPA}--\myref{eqPGamma} by considering all parameters fixed except one chosen
parameter, for which we consider three possible values according to
$$
\begin{array}{ll}
\beta^A \in \left\{150(1-\theta), 150, 150(1+\theta)\right\}, &
\beta^C \in \left\{72.358(1-\theta), 72.358, 72.358(1+\theta)\right\}, \\
\beta^G \in \left\{\beta^C, \frac{\beta^C+\beta^A}{2},\beta^A\right\}, &
k^C \in \left\{0.87(1-\theta), 0.87, 0.87(1+\theta)\right\}, \\
\phi_T^G \in \left\{\phi_T^C, \frac{\phi_T^C+\phi_T^A}{2}, \phi_T^A\right\}, &
a^A \in \left\{0, 0.05, 0.1\right\}, \\
a^B \in \left\{0, 0.05, 0.1\right\}, &
\zeta \in \left\{0, 0.5, 1\right\}, \\
\end{array}
$$
where $\theta=0.2$ (i.e., a variation of $\pm 20\%$). The middle levels are
the values considered when the parameters are fixed.
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{Abeta-AL-cropped.eps}
\SFcaption{$L_A(t) \in [0.35, 0.66]$ vs $t \in [0, 5]$}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{ZETA-GI-cropped.eps}
\SFcaption{$I_G(t) \in [0, 0.0006]$ vs $t \in [0, 5]$}
\end{subfigure}
\caption{$L_A(t)$ when varying $\beta^A$ and $I_G(t)$ when varying
$\zeta$ (box: smaller level, solid: middle level, cross: higher level).}
\label{LAGIZETA}
\end{figure}
Considering that system~\myref{eqPA}--\myref{eqPGamma} has $15$ relevant state-space variables and
we are perturbing $8$ parameters (with $3$ levels each), even with overlapping of the
levels on the same graphic, such analysis implies the study of $360$ functions
aggregated in $120$ graphics. We want to quantify and describe the qualitative
behavior of, and the differences between, the evolutions when comparing the
different levels. Additionally, a direct visual interpretation of the plots may
be biased, since the plots are not on the same scale, which may give a quite
erroneous impression of disparity between functions when, in fact, the difference
may be small, e.g., see Fig.~\ref{LAGIZETA}. To deal with such issues,
in a precise and normalized way, we considered the following procedure.
Let $F_{Y,P,1}(t), F_{Y,P,2}(t), F_{Y,P,3}(t)$ be the evolution functions
associated to one of the state-variables
$$
Y \in \{S_A, S_C, S_G, P_A, P_C, P_G, I_A, I_C, I_G, L_A, L_C, L_G, T_A, T_C, T_G\}
$$
and to one of the three variation levels of a parameter
$P\in\{\beta^A, \beta^C, \beta^G, k^C, \phi_T^G, a^A, a^B, \zeta\}$.
Let $\mathcal{T}>0$ denote the total time of simulation. Define
$$
\vartheta(t)= \frac{1}{2}\left(\max_{i\in\mathcal{L}}{F_{Y,P,i}(t)}
+\min_{i\in\mathcal{L}}{F_{Y,P,i}(t)}\right)
\quad\mbox{ and }\quad
\varrho(t)= \frac{1}{4}\left(\max_{i\in\mathcal{L}}{F_{Y,P,i}(t)}
-\min_{i\in\mathcal{L}}{F_{Y,P,i}(t)}\right)^2
$$
for $t\in [0,\mathcal{T}]$ and $\mathcal{L}=\{1,2,3\}$. We divide the analysis
of the graphics, like in Fig.~\ref{LAGIZETA}, in three regions of time:
\textit{beginning} for $t\in \mathcal{B}=[0,\frac{1}{3}\mathcal{T}]$;
\textit{middle} when $t\in\mathcal{M}=[\frac{1}{3}\mathcal{T},\frac{2}{3}\mathcal{T}]$;
and \textit{end} when $t\in\mathcal{E}=[\frac{2}{3}\mathcal{T},\mathcal{T}]$.
The time set for the complete graph is denoted by $\mathcal{A}=[0, \mathcal{T}]$.
Hence, we define
$$
\xi_\mathcal{S}=\frac{\int_\mathcal{S}\varrho(s)\,ds}{\int_0^{\mathcal{T}}
\vartheta(s)\,ds} \:\mbox{ with }\:
\mathcal{S}\in\left\{\mathcal{B},\mathcal{M},\mathcal{E}\right\}.
$$
It is clear, from the linearity of the integral, that
$\xi_\mathcal{A}=\xi_\mathcal{B}+\xi_\mathcal{M}+\xi_\mathcal{E}$.
To understand what $\xi_\mathcal{A}$ measures, consider the hypothetical
situation where $F_{Y,P,1}(t)\equiv m+\theta$, $F_{Y,P,2}(t)\equiv m$ and
$F_{Y,P,3}(t)\equiv m-\theta$ for some $m>0$ and $\theta>0$. Then,
$$
\vartheta(t)\equiv m,\:\:\varrho(t)\equiv \theta^2
\quad\Rightarrow\quad \xi_\mathcal{A}=\frac{\theta^2}{m}.
$$
So, although different, $\xi_\mathcal{A}$ plays a role similar to that of the
variance divided by the mean: it indicates how spread out the functions are
around their average value at each instant of time. The definition of
$\xi_\mathcal{A}$ is also invariant under scale factors, which is quite useful
to eliminate the erroneous interpretations of plots that may occur without
such a measuring tool.
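These quantities are straightforward to check numerically. The sketch below is a hypothetical illustration (it is not part of the paper's \textit{sDL}/Matlab toolchain; all names are ours): it approximates the integrals by Riemann sums on a uniform grid, where the common grid step cancels in the ratio, and reproduces $\xi_\mathcal{A}=\theta^2/m$ for the constant functions $m+\theta$, $m$, $m-\theta$.

```python
import numpy as np

def xi_regions(F):
    """Given three sampled level curves F (shape (3, n), uniform grid
    on [0, T]), return (xi_B, xi_M, xi_E). The grid step dt cancels
    in the ratio of Riemann sums, so it is not needed explicitly."""
    F = np.asarray(F, dtype=float)
    vartheta = 0.5 * (F.max(axis=0) + F.min(axis=0))       # mid-range vartheta(t)
    varrho = 0.25 * (F.max(axis=0) - F.min(axis=0)) ** 2   # squared half-range varrho(t)
    denom = vartheta.sum()
    # Split the grid into the three consecutive thirds B, M, E.
    return tuple(part.sum() / denom for part in np.array_split(varrho, 3))

# Hypothetical constant evolutions m + theta, m, m - theta:
m, theta = 10.0, 2.0
t = np.linspace(0.0, 30.0, 300)
F = [np.full_like(t, m + theta), np.full_like(t, m), np.full_like(t, m - theta)]
xi_B, xi_M, xi_E = xi_regions(F)
xi_A = xi_B + xi_M + xi_E   # equals theta**2 / m = 0.4 for these curves
```

Since the curves are constant, each third contributes the same amount, so $\xi_\mathcal{B}=\xi_\mathcal{M}=\xi_\mathcal{E}=\theta^2/(3m)$ and the additivity $\xi_\mathcal{A}=\xi_\mathcal{B}+\xi_\mathcal{M}+\xi_\mathcal{E}$ is visible directly.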
For a qualitative description of the variability of the evolution functions,
we introduce the following tagging notation, based on concrete specifications:
\begin{itemize}
\item[1.] (cases $\mathcal{A}_{--}$, $\mathcal{A}_{+-}$, $\mathcal{A}_{++}$)
if $\max(\xi_{\mathcal{B}}, \xi_{\mathcal{M}}, \xi_{\mathcal{E}})<0.4$;
\item[2.] (cases $\mathcal{B}_{--}$, $\mathcal{B}_{+-}$, $\mathcal{B}_{++}$)
if $\mathcal{S}\neq\mathcal{A}$ and $\max(\xi_{\mathcal{B}}, \xi_{\mathcal{M}},
\xi_{\mathcal{E}})=\xi_{\mathcal{B}}$;
\item[3.] (cases $\mathcal{M}_{--}$, $\mathcal{M}_{+-}$, $\mathcal{M}_{++}$)
if $\mathcal{S}\neq\mathcal{A}$ and $\max(\xi_{\mathcal{B}}, \xi_{\mathcal{M}},
\xi_{\mathcal{E}})=\xi_{\mathcal{M}}$;
\item[4.] (cases $\mathcal{E}_{--}$, $\mathcal{E}_{+-}$, $\mathcal{E}_{++}$)
if $\mathcal{S}\neq\mathcal{A}$ and $\max(\xi_{\mathcal{B}}, \xi_{\mathcal{M}},
\xi_{\mathcal{E}})=\xi_{\mathcal{E}}$;
\item[5.] (cases $\mathcal{S}_{--}$ with $\mathcal{S}\in\left\{\mathcal{B},
\mathcal{M},\mathcal{E},\mathcal{A}\right\}$) if $\xi_{\mathcal{A}}< 0.01$;
\item[6.] (cases $\mathcal{S}_{+-}$ with $\mathcal{S}\in\left\{\mathcal{B},
\mathcal{M},\mathcal{E},\mathcal{A}\right\}$) if it is not $\mathcal{S}_{--}$
and $\xi_{\mathcal{A}}< 0.25$;
\item[7.] (cases $\mathcal{S}_{++}$ with $\mathcal{S}\in\left\{\mathcal{B},
\mathcal{M},\mathcal{E},\mathcal{A}\right\}$)
if it is neither $\mathcal{S}_{--}$ nor $\mathcal{S}_{+-}$.
\end{itemize}
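The tagging rules above can be sketched as a small classifier. This is one reading of the specification, with illustrative names of our own: the region letter is $\mathcal{A}$ when no region value reaches $0.4$ and otherwise the region attaining the maximum (items 1--4), while the subscript comes from comparing $\xi_\mathcal{A}$ with the thresholds $0.01$ and $0.25$ (items 5--7).

```python
def tag(xi_B, xi_M, xi_E):
    """Qualitative tag: a region letter A/B/M/E plus a subscript --/+-/++."""
    xi_A = xi_B + xi_M + xi_E
    regions = {"B": xi_B, "M": xi_M, "E": xi_E}
    # Items 1-4: region A when all three values stay below 0.4,
    # otherwise the region attaining the maximum.
    letter = "A" if max(regions.values()) < 0.4 else max(regions, key=regions.get)
    # Items 5-7: subscript determined by the total xi_A.
    if xi_A < 0.01:
        sub = "--"
    elif xi_A < 0.25:
        sub = "+-"
    else:
        sub = "++"
    return letter + sub

print(tag(0.01, 0.03, 0.45))  # E++ : end-dominated, large total variation
print(tag(0.10, 0.05, 0.05))  # A+- : no dominant region, moderate total
```

Under this reading, variations with $\xi_\mathcal{A}<0.1$ would additionally be filtered out as not numerically significant before tabulation.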
If $\xi_{\mathcal{A}}<0.1$, then we consider that the variation is not
numerically significant, and it is not discussed. Table~\ref{Dvalues}
summarizes the sensitivity analysis; the only tags that appear are
$\mathcal{B}_{+-}$, $\mathcal{M}_{+-}$, $\mathcal{E}_{+-}$, and $\mathcal{E}_{++}$.
\begin{table}[!htb]
\centering
\begin{tabular}{| c | c | c | c | c | l |}
\hline
\ & {\scriptsize $\mathcal{B}_{+-}$} & {\scriptsize $\mathcal{M}_{+-}$}
& {\scriptsize $\mathcal{E}_{+-}$} & {\scriptsize $\mathcal{E}_{++}$}
& {\scriptsize $\xi_\mathcal{A}$ values, respectively}\\
\hline
{\scriptsize $S_A$} & {\scriptsize $a^B$} & {\scriptsize $\beta^A$}
& & & {\scriptsize $\numD{1.17e-01}, \numD{2.64e-01}$} \\
{\scriptsize $P_A$} & & & {\scriptsize $a^B$}
& {\scriptsize $\beta^A$} & {\scriptsize $\numD{2.34e-01}, \numD{3.79e-01}$}\\
{\scriptsize $I_A$} & & & {\scriptsize $a^B$}
& {\scriptsize $\beta^A$} & {\scriptsize $\numD{2.28e-01}, \numD{4.23e-01}$}\\
{\scriptsize $L_A$} & & & {\scriptsize $\beta^A$}
& & {\scriptsize $\numD{1.26e-01}$} \\
{\scriptsize $T_A$} & & & {\scriptsize $a^B$} & {\scriptsize $\beta^A$}
& {\scriptsize $\numD{2.18e-01}, \numD{4.58e-01}$}\\
{\scriptsize $S_C$} & & & {\scriptsize $a^A$}
& & {\scriptsize $\numD{1.33e-01}$}\\
{\scriptsize $P_C$} & & & {\scriptsize $k^C, a^B$}
& {\scriptsize $\beta^A, \beta^C, a^A, \zeta$}
& {\scriptsize $\numD{2.47e-01}, \numD{1.82e-01}, \numD{3.78e-01},
\numD{2.84e-01}, \numD{8.43e-01}, \numD{1}$}\\
{\scriptsize $I_C$} & & & {\scriptsize $\beta^C, k^C, a^B$}
& {\scriptsize $\beta^A, a^A, \zeta$}
& {\scriptsize $\numD{1.81e-01}, \numD{3.72e-01}, \numD{1.84e-01},
\numD{3.76e-01}, \numD{8.72e-01}, \numD{9.87e-01}$}\\
{\scriptsize $L_C$} & & & {\scriptsize $a^A$}
& & {\scriptsize $\numD{2.43e-01}$} \\
{\scriptsize $T_C$} & & & {\scriptsize $\beta^C, a^B$}
& {\scriptsize $\beta^A, a^A, \zeta$}
& {\scriptsize $\numD{1.39e-01}, \numD{1.73e-01}, \numD{3.97e-01},
\numD{9.28e-01}, \numD{9.74e-01}$}\\
{\scriptsize $S_G$} & & {\scriptsize $a^A$} &
& {\scriptsize $\zeta$} & {\scriptsize $\numD{3.95e-01}, \numD{2.88e-01}$} \\
{\scriptsize $P_G$} & & & {\scriptsize $\beta^A, \phi_T^G, a^A$}
& {\scriptsize $\beta^G, k^C, \zeta$}
& {\scriptsize $\numD{1.80e-01}, \numD{1.54e-01}, \numD{2.26e-01},
\numD{7.64e-01}, \numD{4.21e-01}, \numD{3.90e-01}$}\\
{\scriptsize $I_G$} & & & {\scriptsize $\beta^A, \beta^G, \phi_T^G, \zeta$}
& {\scriptsize $k^C, a^A$}
& {\scriptsize $\numD{1.97e-01}, \numD{6.53e-01}, \numD{1.97e-01},
\numD{2.15e-01}, \numD{5.15e-01}, \numD{3.75e-01}$}\\
{\scriptsize $L_G$} & & {\scriptsize $a^A$}
& {\scriptsize $\beta^G$} & {\scriptsize $\zeta$}
& {\scriptsize $\numD{2.38e-01}, \numD{9.46e-02}, \numD{2.59e-01}$}\\
{\scriptsize $T_G$} & & & {\scriptsize $\beta^A, \phi_T^G$}
& {\scriptsize $k^C, a^A, \zeta$}
& {\scriptsize $\numD{2.02e-01}, \numD{1.72e-01}, \numD{2.82e-01},
\numD{4.56e-01}, \numD{1.94e-01}$}\\ \hline
\end{tabular}
\caption{Qualitative sensitivity analysis.}
\label{Dvalues}
\end{table}
Table~\ref{Dvalues} is self-explanatory: it shows the relations between
parameter perturbations and epidemiological compartments in a mathematically
precise and visually simple way. The variation of some parameters simply
produces the expected behavior, which indicates that the proposed model is
suitable for the situation under study. On the other hand, it also shows that
some parameters to which, a priori, we would not pay much attention, such as
the distribution of persons between (G) and (C) (i.e., $\zeta$), play an
important role in TB spread.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we propose and analyze a new mathematical model for TB transmission
that considers the internal transfer of individuals. As a case study, we consider
a situation with three populations, namely, Angola (a country with high TB incidence),
people living in a semi-closed community of Angolan natives, and the other persons
living in Portugal (a country with low TB incidence). Each of these subsystems
is divided into five epidemiological categories, which follow the TB transmission
dynamics found in \cite{TBportugalGomesRodrigues}.
For the analysis and verification of the results presented in this paper,
we developed a software tool, called \textit{sDL} \cite{SDL}, that combines in the
same framework the power of pre-processing systems (such as \textit{m4} \cite{m4}
and \textit{cpp} \cite{cpp}), logical verification tools for classical
and hybrid systems (such as \textit{SMT} \cite{smt} or \textit{KeYmaera} \cite{Platzer}),
a computer algebra system (such as \textit{Maple} \cite{maple}), and a numerical
computing language (such as \textit{Matlab} \cite{matlab}). The pre-processing systems
allow the use of a single, general file, where constants and ODEs are defined
at two hierarchical levels, across all tools.
The verification tool and the computer algebra system allowed us to test the
validity of some assumptions and to verify the correctness of analytic/algebraic
formulae. As expected, the numerical computing language allowed us to carry out
the numerical simulations and generate the corresponding plots. Given the potential
of the software tool \textit{sDL}, in a forthcoming publication we intend to
study real situations that are modeled by purely hybrid systems, e.g.,
transmission coefficients that are discontinuous functions varying with
climate and seasonal conditions.
Simulations and sensitivity analysis show that variations of the transmission
coefficient in the origin country have a strong influence on the number of infected
(and infectious) individuals in the community and in the host country. This reinforces
the importance of an additional effort to treat TB and improve health conditions
in countries with high TB incidence, since these remarkably affect, in the long term,
the health of individuals in other countries. As expected, an increment in the
flux of individuals moving from areas of lower TB incidence to areas of higher
TB incidence reduces the global reproduction number, while an increment in the flux
of individuals moving from areas of high TB incidence to areas of lower TB
incidence increases the global reproduction number; both also introduce
modifications in the evolution of each disease category that are not linearly
proportional to the flux rate. From the community point of view, it is better to
have some moderate exchange of persons with the high TB incidence region.
The seasonal distribution of persons traveling between Angola and Portugal
has an important impact on the number of infected (and infectious) individuals
in the community.
The main conclusion is that, contrary to some beliefs, the existence of a
community of immigrants coming from a high TB incidence area seems to be
beneficial from a global point of view, as well as for the host country,
in order to better control TB spread. On the other hand, it does not affect
the TB incidence in the origin country of the immigrant community.
By nonexistence of the community of immigrants we mean the situation where
the traveling individuals are spread uniformly over the host country. As shown
above, a key parameter in such an analysis is the percentage of persons traveling
from the high TB incidence area who stay in the community. This parameter
has an optimal value for TB control, in the sense of minimizing
the global reproduction number, near $47\%$.
The obtained results are valid under the hypothesis of
a semi-closed community. Further studies are necessary
for the situation without any flux restrictions.
\bigskip
\noindent \textbf{Acknowledgments}.
{\small Work partially supported by Portuguese funds through the Center for Research
and Development in Mathematics and Applications (CIDMA) and the Portuguese
Foundation for Science and Technology (FCT), within project UID/MAT/04106/2013.
Rocha is also supported by the FCT project ``DALI -- Dynamic logics for cyber-physical
systems: towards contract based design''
with reference P2020-PTDC/EEI-CTP/4836/2014;
Silva by the FCT post-doc fellowship SFRH/BPD/72061/2010;
Silva and Torres by project TOCCATA, reference PTDC/EEI-AUT/2933/2014, funded by Project
3599 -- Promover a Produ\c{c}\~ao Cient\'{\i}fica e Desenvolvimento
Tecnol\'ogico e a Constitui\c{c}\~ao de Redes Tem\'aticas (3599-PPCDT)
and FEDER funds through COMPETE 2020, Programa Operacional
Competitividade e Internacionaliza\c{c}\~ao (POCI), and by national
funds through FCT. The authors are grateful to two referees
for useful comments and suggestions.}