\section{Introduction}
\label{sec:intro}
Deep neural networks (DNNs) have played a transformative role in reshaping our interactions with the digital world \citep{AlexNet,ResNet,SPM18,Revhashnet}. Beyond performing human-aiding tasks, DNNs can also generate new digital objects/images. Recently, generative adversarial networks (GANs) have been used to generate photo-realistic fake face photos that easily fool human eyes \citep{GAN14,proGAN,styleGAN,styleGAN2}. In Fig. \ref{fig:fakeFaces}, we show several human face images, some captured from real people and some generated by advanced GANs. Can you pick out the GAN-generated fake face photos in Fig. \ref{fig:fakeFaces}? (\textit{Answer}: Images in the first two rows are fake face images from StyleGAN \citep{styleGAN} and StyleGAN2 \citep{styleGAN2}, respectively, while those in the last row are real ones from the Flickr face dataset \citep{styleGAN}.)
\begin{figure}[htp]
\centering
\includegraphics[width=0.45\textwidth]{figures/fakeFaces.jpg}
\caption{Example images for fake face imagery detection. Question: Which images are from real persons and which ones are generated from GAN? Image samples are from \citep{styleGAN,styleGAN2}.}
\label{fig:fakeFaces}
\end{figure}
The spread of such visually realistic fake face images may pose security concerns \citep{GANforen18,Wface18,fakeNet18}. For example, the Washington Post reported that spies created social media accounts with AI-generated fake face images to connect with politicians for malicious purposes \citep{news}. Fake face photos may also be used to falsify identity information and create fake news. Therefore, accurate and reliable detection of such fake face images is important.
In \citep{GANforen18}, the authors fed features from a pretrained VGG network \citep{VGG} to steganalysis classifiers \citep{steg12} to distinguish fake face images from real ones. In \citep{marra2019gans,yu2019attributing}, the authors studied the existence of GAN fingerprints to distinguish fake images generated by different GAN models. In \citep{colClue18}, the authors analyzed the structure of GAN architectures and proposed to use saturation statistics as features, which were then classified with a support vector machine.
In \citep{fakeNet18}, a deep learning-based forensic detector was designed and achieved high average accuracy, i.e., over 98\% on fake face detection. The authors first cast color images to the residual domain with high-pass filters; a set of convolutional modules was then applied for feature extraction and classification. More recently, in \citep{wang2020cnn}, the authors proposed a general fake face detector which was shown to generalize well to fake images from unseen GAN models. The authors of \citep{color_disp} investigated discernible color disparities between GAN-generated and real face photos; ensemble steganalysis classifiers were then employed using features extracted from a third-order co-occurrence matrix. Among non-deep learning-based methods, the method in \citep{color_disp} achieved superior forensic accuracy on fake face imagery detection.
While existing forensic methods can successfully identify GAN-generated fake face images, there is a concern that fake face imagery detectors might be easily bypassed by anti-forensic methods. Image anti-forensics counters image forensics by manipulating discernible traces to reduce the performance of forensic detectors \citep{forenReview13,pivaReview13}. Existing anti-forensic methods often target specific forensic detectors, e.g., JPEG compression detection \citep{jpeg_anti14}, and are not directly applicable to our fake face detection task.
Though rarely investigated so far, studying fake face imagery anti-forensics is meaningful since it exposes possible vulnerabilities of forensic detectors. In turn, anti-forensic studies encourage researchers to propose more reliable and robust detectors, which is critical in safety-related forensic tasks. The contributions of this study are summarized as follows:
1. We introduce adversarial attacks as an automatic anti-forensic approach for GAN-generated fake face detection. Our study shows that both deep learning-based and non-deep learning-based methods can be vulnerable to such adversarial perturbations.
2. We investigate the perturbation residues of existing forensic models in both the $RGB$ and $YC_bC_r$ domains. Our analysis shows that existing gradient-based attacks display strong correlations between perturbations in the $R$, $G$, and $B$ channels, while such correlations are reduced in the $YC_bC_r$ domain. The perturbation concentrates mainly in the $Y$ component, leading to severe visual distortion.
3. We propose a novel adversarial attack algorithm with perception constraints in the $YC_bC_r$ domain, allocating more perturbation to the $C_b$ and $C_r$ channels and less to $Y$. \textit{More imperceptible} and \textit{transferable}, the proposed method significantly improves the visual quality and the attack success rate compared with baseline attacks. We have released our code, datasets, and additional results on GitHub: \url{https://github.com/enkiwang/Imperceptible-fake-face-antiforensic}.
4. Finally, this study reveals several interesting observations. For example, perturbations crafted for fake face images are significantly more transferable than those for real face images across all attacks we evaluated, which is worthy of further investigation.
\section{Related Work}
\label{sec:related}
\textbf{GAN-generated fake face imagery}. GAN is formulated as a two-player game between a generator and a discriminator \citep{GAN14,DCGAN}. In theory, the generator can generate visually realistic images by capturing the underlying distribution of real data when the GAN reaches an equilibrium. In practice, vanilla GAN models often suffer from training instability. Subsequent studies therefore sought to stabilize GAN training (e.g., \citep{miyato2018spectral,zhang2019self,durall2020watch}). Specific to fake face imagery generation, progressive GAN (ProGAN) \citep{proGAN} was the first GAN model to generate high-resolution fake face images with relatively good visual quality. Karras et al. then developed StyleGAN \citep{styleGAN}, which can generate human face photos with impressively realistic visual quality. Recently, StyleGAN2 \citep{styleGAN2} was proposed, achieving state-of-the-art performance in fake face generation.
\textbf{Adversarial attacks}. Recent studies show that DNNs are vulnerable to adversarial perturbations, termed \textit{adversarial examples} \citep{opt14,FGSM,blackBox1,madry2017towards,MIFGSM}. Adversarial examples crafted on one network can possibly fool an unknown model, which makes them potential threats to deployed safety-critical systems built on DNNs. Despite active study in the computer vision area, the existence of adversarial examples has received relatively little attention in the forensic community \citep{marra2018vulnerability,transfer_icassp19}, which requires forensic detection to be both accurate and secure. For instance, a fake face image detection model is potentially meaningless if it is susceptible to carefully crafted perturbations.
Compared with general adversarial attacks, anti-forensics for GAN-generated fake face imagery detection has unique characteristics. Generally, a higher perturbation budget indicates stronger attack ability but degrades visual quality. For natural-scene, texture-rich images, relatively large perturbations do not seriously impair the perceptual quality. In fake face imagery anti-forensics, however, facial images are very sensitive to adversarial perturbations due to their large smooth regions. To avoid being spotted, the crafted perturbation should look \textit{\textbf{imperceptible}} to human eyes; otherwise, perturbed images can easily be caught by a visual sanity check.
For adversaries, another desirable property is that anti-forensic manipulations be \textit{\textbf{transferable}} to unseen forensic models. \textit{Transferability} means that anti-forensic perturbations designed for a specific forensic model can also reduce the detection performance of other, unknown forensic models. This property poses severe threats to fake face forensic detectors.
In \citep{marra2018vulnerability}, the authors employed existing attack methods \citep{FGSM,madry2017towards} to study the adversarial vulnerability of deep learning-based classifiers for camera model identification. In \citep{transfer_icassp19}, the authors examined adversarial attacks in median filtering and image resizing forensic tasks, and concluded that adversarial examples are generally not transferable in image forensics. However, the conventional attack methods they used are less transferable and cause perceptual issues in our specific anti-forensic task. Therefore, in this study we propose a novel perception-aware attack method that provides both imperceptible visual quality and higher transferability than existing methods.
\section{Method}
\label{sec:antiForensic}
\subsection{The adversarial attack problem}
Assume a forensic detector $f:\mathcal{D} \subseteq \mathbb{R}^d \mapsto \mathbb{R}^K$, where $\mathcal{D}=[0, 255]^d$. Given a data sample $\boldsymbol{x} \in \mathbb{R}^d$, the detector correctly predicts its label as $y \in \mathcal{Y}$, i.e., $y=\mathop{\rm arg\,max}\limits_{k=1,\cdots,K} f_k(\boldsymbol{x})$.
The adversarial attack problem seeks an $\epsilon$-ball bounded perturbation $||\boldsymbol{\delta}||_p \leq \epsilon$ within the vicinity of $\boldsymbol{x}$, which makes the forensic detector fail with a high probability. Here $||\cdot||_p$ denotes the $\ell_p$ norm constraint. Then the perturbed data $\boldsymbol{x}^{adv}:= \boldsymbol{x} + \boldsymbol{\delta} $ is an adversarial example w.r.t the threat model if the following conditions are satisfied,
\begin{equation}
\mathop{\rm arg\,max}\limits_{k=1,\cdots,K} \; f_k(\boldsymbol{x} + \boldsymbol{\delta}) \neq y, \quad ||\boldsymbol{\delta}||_p \leq \epsilon \quad \textrm{and}\quad \boldsymbol{x} + \boldsymbol{\delta} \in \mathcal{D}
\end{equation}
Denoting a surrogate function as $\mathcal{L}$, we define the constrained optimization problem as,
\begin{equation}
\mathop{\rm arg\,max}\limits_{\boldsymbol{\delta}} \; \mathcal{L}(f(\boldsymbol{x}+\boldsymbol{\delta}), y) \quad \textrm{s.t.} \; ||\boldsymbol{\delta}||_p \leq \epsilon, \; \boldsymbol{x} + \boldsymbol{\delta} \in \mathcal{D}
\label{eq:obj_fun}
\end{equation}
In this work, we use the $\ell_\infty$ norm constraint, a popular $\ell_p$ norm in the literature. The surrogate function $\mathcal{L}$ is chosen as the binary cross-entropy function in our setting.
To solve Eq.(\ref{eq:obj_fun}), \citep{FGSM} proposed the Fast Gradient Sign Method (FGSM), a one-step gradient-based perturbation which utilizes the sign of the gradient w.r.t. the input data,
\begin{equation}
\boldsymbol{\delta}_{FGSM} = \epsilon \cdot \textrm{sign}(\nabla_{\boldsymbol{x}} \mathcal{L} (f(\boldsymbol{x}), y))
\label{eq:fgsm}
\end{equation}
where the element-wise $\textrm{sign}(\cdot)$ function gives $+1$ for positive values, $-1$ for negative values, and $0$ otherwise.
The FGSM method was designed under the assumption that the decision boundary is linear around the input data. For neural networks with nonlinear activation functions, this assumption does not hold, so the FGSM attack generally ``underfits'' the model, which compromises its attack ability. To increase the attack ability, adversaries can apply Eq.(\ref{eq:fgsm}) iteratively \citep{madry2017towards}. \citep{MIFGSM} further incorporates momentum into the gradient update at each iteration, yielding the Momentum Iterative FGSM (MIM). We use FGSM (single-step) and MIM (multi-step) as our baseline attacks.
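For concreteness, the following PyTorch-style sketch shows how these two baselines can be implemented under the $\ell_\infty$ constraint. It is our illustration rather than the released code; the model $f$, the cross-entropy loss, NCHW tensors, and the $[0,255]$ value range are assumptions carried over from the setup above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Single-step FGSM: delta = eps * sign(grad_x L(f(x), y)).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 255).detach()

def mim(model, x, y, eps, K=10, mu=1.0):
    # Momentum Iterative FGSM with step size eps / K.
    alpha, g = eps / K, torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(K):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Accumulate the L1-normalized gradient with momentum mu.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 255)
    return x_adv.detach()
\end{verbatim}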
\subsection{Perturbation analysis in $YC_bC_r$ domain}
In this section, we investigate the spatial correlations of adversarial perturbations in the $R$, $G$, and $B$ channels. We then show that for existing fake face forensic models (trained in the $RGB$ domain) under baseline attacks, the perturbation energy concentrates more in the $Y$ component than in the $C_b$ and $C_r$ components.
For simplicity, we analyze adversarial perturbations generated by FGSM, the single-step attack whose perturbation is the sign of the gradient. For a single pixel in an image, we denote the sign of the gradient in the $R$, $G$, $B$ components by three random variables $\boldsymbol{S} = (s^r, s^g, s^b)^T$, where $s^r, s^g, s^b$ follow Bernoulli-type distributions. The statistical correlations of these three components are captured by the covariance matrix $\Sigma_{\boldsymbol{S}}$, which can be estimated from observations of the random variable $\boldsymbol{S}$,
\begin{equation}
\boldsymbol{\Sigma_{\boldsymbol{S}}} \approx \frac{1}{N} \sum_{i=1}^{N} (\boldsymbol{S}_i - \bar{\boldsymbol{S}}) \cdot (\boldsymbol{S}_i - \bar{\boldsymbol{S}})^T
\label{eq:cov_mat}
\end{equation}
where $N$ denotes the number of observations of $\boldsymbol{S}$, and $\bar{\boldsymbol{S}}$ represents the sample mean of $\boldsymbol{S}$.
The conversion from the $RGB$ domain to the $YC_bC_r$ domain is an affine transformation,
\begin{equation}
\boldsymbol{S}' = \boldsymbol{A} \boldsymbol{S} + \boldsymbol{b}
\label{eq:ycbcr}
\end{equation}
where $\boldsymbol{S}'=(s^y, s^{C_b}, s^{C_r})^T$ denotes the transformed random variables in the $YC_bC_r$ domain, and $\boldsymbol{A}, \boldsymbol{b}$ denote the transformation matrix and bias, respectively, with
\[\boldsymbol{A} = \left[
\begin{matrix}
0.2568 & 0.5041 & 0.0979 \\
-0.1482 & -0.2910 & 0.4392 \\
0.4392 & -0.3678 & -0.0714 \\
\end{matrix}
\right]
\]
and \[
\boldsymbol{b} = \left(16, 128, 128 \right)^T
\]
Then we can obtain the covariance matrix of $\boldsymbol{S}'$ as,
\begin{equation}
\Sigma_{\boldsymbol{S}'} = \boldsymbol{A} \boldsymbol{\Sigma_{\boldsymbol{S}}} \boldsymbol{A}^T
\end{equation}
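As a sanity check, the estimation in Eq.(\ref{eq:cov_mat}) and its propagation to the $YC_bC_r$ domain can be reproduced with a few lines of NumPy. This is our illustrative sketch, not code from the paper; note that \texttt{np.cov} uses the $1/(N-1)$ normalization rather than $1/N$, which is immaterial for large $N$.
\begin{verbatim}
import numpy as np

# RGB -> YCbCr transformation matrix A from Eq.(4);
# the bias b drops out of the covariance.
A = np.array([[ 0.2568,  0.5041,  0.0979],
              [-0.1482, -0.2910,  0.4392],
              [ 0.4392, -0.3678, -0.0714]])

def ycbcr_covariance(sign_grads):
    # sign_grads: N x 3 array of per-pixel sign-gradients in {-1, 0, +1}.
    cov_rgb = np.cov(np.asarray(sign_grads, float), rowvar=False)
    cov_ycc = A @ cov_rgb @ A.T   # Sigma_{S'} = A Sigma_S A^T
    return cov_rgb, cov_ycc
\end{verbatim}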
In Fig. \ref{fig:cov_mat}, we illustrate the covariance matrices of $\boldsymbol{S}$ and $\boldsymbol{S}'$ estimated with $N=10, 10^2, 10^3$, and $10^4$ pixels on StyleGAN \citep{styleGAN}. Clearly, the $s^r, s^g, s^b$ components are highly correlated, while the correlations are reduced after applying the $YC_bC_r$ transform in Eq.(\ref{eq:ycbcr}). We also notice that the variances of $s^r, s^g, s^b$ are almost identical, while the variance of $s^y$ is significantly larger than those of $s^{C_b}$ and $s^{C_r}$ (around $3$ times larger). This indicates that the perturbation energy concentrates more in the $Y$ component than in the $C_b$ and $C_r$ components.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/covariance_mat.pdf}
\caption{Illustration of estimated covariance matrices $\hat{\Sigma}_{\boldsymbol{S}}$ ($1^{st}$ row) and $\hat{\Sigma}_{\boldsymbol{S}'}$ ($2^{nd}$ row) with $N=10,10^2,10^3$ and $10^4$ respectively. Here $\epsilon=5.5$.}
\label{fig:cov_mat}
\end{figure}
To validate the analysis, we generate adversarial examples using FGSM and MIM and show the histograms of perturbations in the $YC_bC_r$ domain in Fig. \ref{fig:res_fgsm_mim}. For both attacks, we observe that perturbation residues mainly cluster at $\pm 5.5$ in $Y$, while the perturbations peak around $0$ in the $C_b$ and $C_r$ components. We observed similar phenomena when attacking existing forensic models on the StyleGAN2 \citep{styleGAN2} and ProGAN \citep{proGAN} datasets. Since the human visual system is more sensitive to perturbations in the $Y$ component than in the $C_b$ and $C_r$ components, this intuitively explains why $RGB$-domain attacks are prone to visual distortion.
\begin{figure}[h]
\centering
\includegraphics[width=8.2cm, height=4.5cm]{figures/res_fgsm_mim.pdf}
\caption{Example perturbation histograms of FGSM ($1^{st}$ row) and MIM ($2^{nd}$ row) attacks in the $YC_bC_r$ domain. The histogram is generated by using 5000 adversarial samples of StyleGAN-generated fake face images. The perturbation bounds are $\epsilon=5.5$ and $\epsilon=6$ for FGSM and MIM, respectively.}
\label{fig:res_fgsm_mim}
\end{figure}
\vspace{-3mm}
\subsection{Proposed adversarial attack}
Based on the perturbation analysis above, and as an alternative to existing attacks in the $RGB$ domain, we are motivated to perform adversarial attacks with explicit perturbation constraints in the $YC_bC_r$ domain. Exploiting these perception characteristics, we propose to directly allocate more perturbation to the $C_b$ and $C_r$ components than to the $Y$ component, producing more visually pleasing adversarial examples.
Denote by $\mathcal{T}$ the transformation operator from the $RGB$ domain to the $YC_bC_r$ domain (see Eq.(\ref{eq:ycbcr})), and by $\mathcal{T}^{-1}$ its inverse transformation back to the $RGB$ domain. The proposed loss function is expressed as,
\begin{equation}
\begin{split}
\mathcal{L}\left( f \left(\mathcal{T}^{-1} (\mathcal{T} \boldsymbol{x} + \boldsymbol{ \zeta}) \right), y \right) = & -y \cdot \textrm{log} \left(f_y\left(\mathcal{T}^{-1} (\mathcal{T} \boldsymbol{x} + \boldsymbol{ \zeta}) \right) \right) \\
& - (1-y) \cdot \textrm{log} \left( f_{1-y} \left(\mathcal{T}^{-1} (\mathcal{T} \boldsymbol{x} + \boldsymbol{ \zeta}) \right) \right)
\end{split}
\label{eq:prop_loss}
\end{equation}
where $\boldsymbol{\zeta}$ denotes the perturbation that is directly optimized in the $YC_bC_r$ domain.
Now our constrained optimization problem becomes,
\begin{equation}
\begin{split}
\mathop{\rm arg\,max}\limits_{\boldsymbol{\zeta}} \; & \mathcal{L}\left( f \left(\mathcal{T}^{-1} (\mathcal{T} \boldsymbol{x} + \boldsymbol{ \zeta}) \right), y \right) \\
& \textrm{s.t.} \; ||\boldsymbol{\zeta}^{[c]}||_{\infty} \leq \epsilon^{[c]}, \; c\in \left\{Y, C_b, C_r \right\} , \\
& \textrm{and} \quad \boldsymbol{x} + \mathcal{T}^{-1} \boldsymbol{\zeta} \in \mathcal{D}
\label{eq:obj_fun_prop}
\end{split}
\end{equation}
where $\boldsymbol{\zeta}^{[c]}$ and $\epsilon^{[c]}$ denote the constrained perturbation and its perturbation budget in channel $c, c\in \{Y, C_b, C_r \}$, respectively. To alleviate the visual distortion caused by the perturbation $\boldsymbol{\zeta}$, it is desirable to assign larger values to $\epsilon^{[C_b]}, \epsilon^{[C_r]}$ than to $\epsilon^{[Y]}$. Assuming that we have access to the forensic detector (or a substitute model), we can use a gradient-based approach to solve Eq.(\ref{eq:obj_fun_prop}).
Denote any pixel in an image by $\boldsymbol{P}_{i,j}=\left( R(i,j), G(i,j), B(i,j) \right)^T$, and its counterpart in the $YC_bC_r$ domain as $\boldsymbol{P}_{i,j}'=\left( Y{(i,j)}, C_b{(i,j)}, C_r{(i,j)} \right)^T$. We can propagate the gradient from the $RGB$ to the $YC_bC_r$ domain,
\begin{equation}
\begin{split}
\nabla_{\boldsymbol{P}_{i,j}'} \mathcal{L}\left( f \left(\mathcal{T}^{-1} (\mathcal{T} \boldsymbol{x} + \boldsymbol{ \zeta}) \right), y \right) &= \left( \boldsymbol{1} \oslash \boldsymbol{A} \right) \; \cdot \\
& \nabla_{\boldsymbol{P}_{i,j}} \mathcal{L} \left( f \left(\mathcal{T}^{-1} (\mathcal{T} \boldsymbol{x} + \boldsymbol{ \zeta}) \right), y \right)
\end{split}
\label{eq:grad_cal}
\end{equation}
where $\nabla_{\boldsymbol{P}_{i,j}'} \mathcal{L}= \left(\frac{\partial \mathcal{L}}{\partial Y(i,j)}, \frac{\partial \mathcal{L}}{\partial C_b(i,j)}, \frac{\partial \mathcal{L}}{\partial C_r(i,j)} \right)^T$ and $\nabla_{\boldsymbol{P}_{i,j}} \mathcal{L}= \left(\frac{\partial \mathcal{L}}{\partial R(i,j)}, \frac{\partial \mathcal{L}}{\partial G(i,j)}, \frac{\partial \mathcal{L}}{\partial B(i,j)} \right)^T$ denote the partial derivatives of the loss function $\mathcal{L}(\cdot)$ in the $YC_bC_r$ and $RGB$ domains, respectively; $\oslash$ denotes element-wise division.
The proposed attack method is described in detail in Algorithm \ref{alg:ycc_prop}.
\begin{algorithm}[ht]
\footnotesize
\SetAlgoLined
\KwData{A clean image $\boldsymbol{x}$ with label $y$, a fake-face forensic model $f$, channel-wise perturbation budget $\epsilon^{[c]}, c\in\{Y, C_b, C_r\}$, iteration number $K$ and hyperparameter $\mu$.}
\KwResult{Optimized perturbation $\boldsymbol{\zeta}$ that satisfies $\left\{\boldsymbol{\zeta} \; | \; \{||\boldsymbol{\zeta}^{[c]}||_{\infty} \leq \epsilon^{[c]} \} \cap \{\boldsymbol{x} + \mathcal{T}^{-1} \boldsymbol{\zeta} \in \mathcal{D} \} \right\}$, and the perturbed image $\boldsymbol{x}^{adv}$.}
Initialize $\alpha^{[c]}={\epsilon^{[c]}}/{K}, c\in\{ Y, C_b, C_r \}$, $\boldsymbol{\zeta}_{(0)}=\boldsymbol{0}$, $\boldsymbol{g}_{(0)}'=\boldsymbol{0}$\;
\For{$k=0$ \KwTo $K-1$}{
Input $\boldsymbol{x}_{(k)}$ to the forensic model $f$, and compute the gradient w.r.t. $\boldsymbol{x}_{(k)}$: $\nabla_{\boldsymbol{x}_{(k)}} \mathcal{L}$\;
Compute gradients w.r.t. $\mathcal{T}\boldsymbol{x}_{(k)}$ using Eq.(\ref{eq:grad_cal}): $\nabla_{\mathcal{T}\boldsymbol{x}_{(k)}} \mathcal{L}$\;
Compute accumulated gradients w.r.t. $\mathcal{T}\boldsymbol{x}_{(k)}$:
$\boldsymbol{g}_{(k+1)}'=\mu \cdot \boldsymbol{g}_{(k)}' + \nabla_{\mathcal{T}\boldsymbol{x}_{(k)}} \mathcal{L} / ||\nabla_{\mathcal{T}\boldsymbol{x}_{(k)}} \mathcal{L}||_1$\;
Compute perturbation $\boldsymbol{\zeta}_{(k+1)}$: $\boldsymbol{\zeta}^{[c]}_{(k+1)}=\boldsymbol{\zeta}^{[c]}_{(k)}+\alpha^{[c]} \cdot \textrm{sign} \left( \boldsymbol{g}_{(k+1)}' \right), c\in \{Y, C_b, C_r \}$\;
Project $\boldsymbol{\zeta}_{(k+1)}$ onto the channel-wise $\epsilon^{[c]}$-ball: $\boldsymbol{\zeta}^{[c]}_{(k+1)}=\textrm{max} \left( \textrm{min} \left(\boldsymbol{\zeta}^{[c]}_{(k+1)}, \epsilon^{[c]} \right), -\epsilon^{[c]} \right), c\in \{Y, C_b, C_r \}$\;
Update adversarial example $\boldsymbol{x}_{(k+1)}$: $\boldsymbol{x}_{(k+1)}= \boldsymbol{x} + \mathcal{T}^{-1}\boldsymbol{\zeta}_{(k+1)} $\;
Project $\boldsymbol{x}_{(k+1)}$ onto the feasible set $\mathcal{D}$: $\boldsymbol{x}_{(k+1)}=\textrm{Proj}_{\mathcal{D}} \left( \boldsymbol{x}_{(k+1)} \right)$\;
}
\textbf{Return}: Optimized perturbation $\boldsymbol{\zeta}=\boldsymbol{\zeta}_{(K)}$ and the perturbed image $\boldsymbol{x}^{adv} = \boldsymbol{x}_{(K)}$.
\caption{The proposed algorithm of adversarial attacks in the $YC_bC_r$ domain.}
\label{alg:ycc_prop}
\end{algorithm}
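A minimal PyTorch-style sketch of Algorithm \ref{alg:ycc_prop} follows. It is our illustration under stated assumptions: the colour transform is implemented as a differentiable linear map so that autograd applies the exact chain rule through $\mathcal{T}^{-1}$, images are NCHW tensors in $[0,255]$, and \texttt{eps} is a length-3 tensor of per-channel budgets $(\epsilon^{[Y]}, \epsilon^{[C_b]}, \epsilon^{[C_r]})$.
\begin{verbatim}
import torch
import torch.nn.functional as F

A = torch.tensor([[ 0.2568,  0.5041,  0.0979],
                  [-0.1482, -0.2910,  0.4392],
                  [ 0.4392, -0.3678, -0.0714]])
A_inv = torch.linalg.inv(A)  # T^{-1}; the bias b cancels in x + T^{-1} zeta

def ycc_attack(model, x, y, eps, K=10, mu=1.0):
    alpha = (eps / K).view(1, 3, 1, 1)   # per-channel step sizes
    bound = eps.view(1, 3, 1, 1)
    zeta = torch.zeros_like(x)           # perturbation in the YCbCr domain
    g = torch.zeros_like(x)
    for _ in range(K):
        zeta = zeta.detach().requires_grad_(True)
        delta_rgb = torch.einsum('ij,bjhw->bihw', A_inv, zeta)
        loss = F.cross_entropy(model((x + delta_rgb).clamp(0, 255)), y)
        grad = torch.autograd.grad(loss, zeta)[0]
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        with torch.no_grad():
            zeta = zeta + alpha * g.sign()
            zeta = torch.max(torch.min(zeta, bound), -bound)  # channel-wise
    x_adv = x + torch.einsum('ij,bjhw->bihw', A_inv, zeta.detach())
    return x_adv.clamp(0, 255)
\end{verbatim}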
\vspace{-3mm}
\section{Experiments}
\label{sec:experiment}
\subsection{Experimental setup}
\textbf{Datasets:} We create two face image datasets for the fake face imagery detection task, Dataset 1 and Dataset 2. Dataset 1 consists of 40,000 real face photos and 40,000 StyleGAN-generated photo-realistic facial images \citep{styleGAN}. In Dataset 2, the fake face images are from StyleGAN2 \citep{styleGAN2}. For both real and fake images in both datasets, the splits are 30,000 images for model training, 5,000 for validation, and the remaining 5,000 for testing. To reduce the computational complexity, all images are resized to $128\times128$.
\textbf{Models:} We study seven effective fake face identification models \citep{fakeNet18,VGG, DCGAN, AlexNet, MobileNetV2, wang2020cnn, color_disp}, trained from scratch on the face datasets described above. For deep learning-based models (trained in the $RGB$ domain), the hyperparameters are as follows: the learning rate is $10^{-4}$ with weight decay $5\times 10^{-4}$, the batch size is 64, and training runs for 20 epochs with early stopping. For non-deep learning-based fake face detection models \citep{colClue18,color_disp}, we consider the state-of-the-art method proposed in \citep{color_disp}. For brevity, we denote the deep learning-based forensic models by ${m}_i, i=1,2,\cdots, 6$ for the six architectures from \citep{fakeNet18,VGG, DCGAN, AlexNet, MobileNetV2, wang2020cnn}, respectively, and the selected non-deep learning forensic model by ``\textit{NDL}'' \citep{color_disp}.
To verify that the forensic models work well (e.g., detection accuracy $\ge 90\%$), we adopt the true positive rate ($TPR$) and the true negative rate ($TNR$) as performance measures, defined as:
\begin{equation}
TPR = \frac{TP}{TP + FN}, \quad TNR = \frac{TN}{TN + FP}
\end{equation}
where $TP$, $TN$, $FP$ and $FN$ denote the numbers of correctly identified fake face images, correctly detected real face images, misclassified real face samples and misclassified fake face images, respectively. A good detector provides high $TPR$ and $TNR$ simultaneously. After proper training, all forensic models achieve high $TPR$ and $TNR$ values on both datasets, as shown in Table \ref{tab:model_acc}. We have released these pretrained models to the public.
\begin{table}[htb]
\caption{Pretrained forensic models we evaluated and their performances measured by $TPR$ and $TNR$ on Dataset 1 and Dataset 2, respectively.}
\centering
\begin{adjustbox}{width=0.46\textwidth}
\begin{tabular}{ccccccccc}
\toprule
Datasets & models & $m_1$ & $m_2$ & $m_3$ & $m_4$ & $m_5$ & $m_6$ & $NDL$ \\ \hline
\multirow{2}{*}{Dataset 1} & TPR (\%) & 98.6 & 94.1 & 91.4 & 95.8 & 90.8 & 99.6 & 98.6 \\ \cline{2-9}
& TNR (\%) & 98.7 & 96.9 & 94.6 & 98.0 & 94.4 & 99.9 & 98.7 \\ \midrule[0.25mm]
\multirow{2}{*}{Dataset 2} & TPR (\%) & 98.8 & 99.0 & 98.1 & 98.5 & 96.2 & 99.9 & 99.5 \\ \cline{2-9}
& TNR (\%) & 99.2 & 99.4 & 98.5 & 98.5 & 97.2 & 99.9 & 99.4 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:model_acc}
\end{table}
\textbf{Parameters:} In the following experiments, following the baseline MIM method \citep{MIFGSM}, we set the iteration number $K$ to 10 and the momentum decay factor $\mu$ to 1 for iterative attacks. The perturbation bound $\epsilon$ is often chosen as 16 in the literature. However, this bound is generally too large for the fake face anti-forensic task, since it severely degrades visual quality. To obtain a good trade-off between visual quality and attack success rate, we set lower perturbation bounds; e.g., on Dataset 1 we use $\epsilon$ of $5.5$ and $6$ for the FGSM and MIM attacks, respectively. For the proposed method, we set larger values for $\epsilon^{[C_b]}$ and $\epsilon^{[C_r]}$ than for $\epsilon^{[Y]}$ for better visual imperceptibility.
\subsection{Attack success rate comparison}
The attack success rate ($ASR$) is defined as the accuracy reduction of a forensic model after applying an adversarial attack. Concretely, for the fake face detection problem, denote by $TPR'$ the true positive rate after attacking the fake face images. Then $ASR^{[p]}$ on this fake face image subset (5,000 images in total) is calculated as,
\begin{equation}
ASR^{[p]} = TPR - TPR'
\label{eq:ASR_tpr}
\end{equation}
Similarly, we can define the attack success rate on real images as $ASR^{[n]} = TNR - TNR'$, where $TNR'$ denotes the true negative rate after attacking the real face image subset. Clearly, the stronger the adversary, the higher the attack success rates.
For visual quality evaluation, we use three popular image quality assessment (IQA) metrics: the ``Naturalness Image Quality Evaluator'' (NIQE) \citep{mittal2012making}, a no-reference IQA metric that evaluates the naturalness of images (lower indices indicate more natural visual quality); the ``Learned Perceptual Image Patch Similarity'' (LPIPS) \citep{zhang2018unreasonable}, a DL-based IQA metric for semantic similarity measurement (lower values suggest closer semantic similarity); and the feature similarity index ($\textrm{FSIM}_c$) \citep{zhang2011fsim}, a full-reference IQA metric based on the human visual system (normalized within $[0,1]$; the higher the index, the better the visual quality).
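Of these metrics, LPIPS has a widely used reference implementation; a minimal sketch of scoring an adversarial image against its clean counterpart is shown below. The \texttt{lpips} package is a real library, but the tensor names and the $[0,255]$ input range are our assumptions; NIQE and $\textrm{FSIM}_c$ implementations vary across toolboxes.
\begin{verbatim}
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')  # expects NCHW tensors in [-1, 1]

def lpips_distance(clean_255, adv_255):
    # Rescale [0, 255] images to [-1, 1] before scoring (single image).
    return loss_fn(clean_255 / 127.5 - 1.0,
                   adv_255 / 127.5 - 1.0).item()
\end{verbatim}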
As an adversary, we focus on attacking fake face images, whose reliable detection is vital for forensic models. First, assuming full access to $m_1$, we craft adversarial perturbations based on this model. On Dataset 1, $\epsilon$ is set to 5.5 and 6 for the FGSM and MIM attacks, respectively. To obtain comparable average ASRs, the proposed method adopts $\epsilon^{[Y]}=2.5, \; \epsilon^{[C_b]}=6, \; \epsilon^{[C_r]}=6$. Similarly, on Dataset 2, we use $\epsilon$ of 6 and 7.5 for FGSM and MIM, and $\epsilon^{[Y]}=2, \; \epsilon^{[C_b]}=6, \; \epsilon^{[C_r]}=6$ for the proposed method, for a fair comparison. The comparison results on both datasets are reported in Table \ref{tab:ASR_fake}. The effects of adversarial perturbations crafted from other deep learning models are investigated in Section \ref{sec:transfer}.
\begin{table*}[htb]
\caption{Performance comparisons of the attack success rate $(\%)$ and the visual quality when applying FGSM, MIM and the proposed method on fake face images from Dataset 1 and Dataset 2. The source model is $m_1$. On Dataset 1, $\epsilon$ is 5.5, 6 for FGSM and MIM attacks respectively; and on Dataset 2, $\epsilon$ is 6, 7.5 for FGSM and MIM, respectively. For the proposed method, $\epsilon^{[c]}$ are $2.5/6/6$ on Dataset 1 and $2/6/6$ on Dataset 2. The best performances are marked in bold.}
\centering
\begin{adjustbox}{width=0.85\textwidth}
\begin{tabular}{ccccccccccccc}
\toprule
Datasets & Attack & $m_1$ & $m_2$ & $m_3$ & $m_4$ & $m_5$ & $m_6$ & $NDL$ & avg. $ASR^{[p]}$ & $\textrm{NIQE}$ & $\textrm{LPIPS}$ & $\textrm{FSIM}_c$ \\ \hline
\multirow{3}{*}{Dataset 1} & FGSM & 98.6 & 90.9 & 73.4 & 58.9 & 20.6 & 72.5 & 97.6 & 73.2 & 1.188 & 0.026 & 0.955 \\ \cline{2-13}
& MIM & 98.6 & \textbf{91.6} & 77.4 & 63.4 & 21.1 & \textbf{81.2} & 98.6 & 76.0 & 1.032 & 0.028 & 0.952 \\ \cline{2-13}
& \textbf{Prop.} & \textbf{98.6} & 91.2 & \textbf{83.3} & \textbf{78.3} & \textbf{40.9} & 63.3 & \textbf{98.6} & \textbf{79.2} & \textbf{0.798} & \textbf{0.020} & \textbf{0.984} \\ \midrule[0.2mm]
\multirow{3}{*}{Dataset 2} & FGSM & 98.8 & 98.9 & 84.2 & 94.6 & 5.0 & 2.6 & 62.5 & 63.8 & 1.728 & 0.034 & 0.969 \\ \cline{2-13}
& MIM & 98.8 & 99.0 & 97.5 & 97.5 & 6.4 & \textbf{23.3} & 41.0 & 66.2 & 1.660 & 0.036 & 0.965 \\ \cline{2-13}
& \textbf{Prop.} & \textbf{98.8} & \textbf{99.0} & \textbf{98.1} & \textbf{98.5} & \textbf{12.9} & 14.4 & \textbf{92.6} & \textbf{73.5} & \textbf{1.029} & \textbf{0.018} & \textbf{0.992} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:ASR_fake}
\end{table*}
Table \ref{tab:ASR_fake} shows that, with comparable average $ASR^{[p]}$ on both datasets, the perceptual quality of the proposed method improves over the FGSM and MIM attacks by a large margin, as quantitatively measured by the three IQA metrics. On Dataset 2 in particular, the proposed method achieves considerably improved visual performance together with $9.7\%$ and $7.3\%$ higher average attack success rates on fake face imagery anti-forensics.
\subsection{Visual quality comparison}
As shown in Table \ref{tab:ASR_fake}, compared with the baseline attacks, the proposed method quantitatively achieves much better IQA indices as measured by NIQE, LPIPS, and $\textrm{FSIM}_c$; i.e., adversarial images produced by the proposed attack algorithm remain semantically and visually closer to the clean images.
In Fig. \ref{fig:vis_fake_face}, we show several perturbed fake face image examples from Dataset 1. The first row shows clean images, while the remaining three rows display their perturbed versions using FGSM, MIM, and the proposed method, respectively. By zooming in on Fig.\ref{fig:vis_fake_face}, we can easily spot texture-like visual distortions from the FGSM and MIM attacks, both in facial regions and in the background. By contrast, adversarial images from the proposed method remain smooth and appear more natural and more \textit{\textbf{imperceptible}} compared with the clean images. More comparison examples can be found on the project website.
Moreover, we conducted a human subjective preference study to further validate the visual/quantitative comparison results. For each dataset, we randomly chose 50 comparison groups, each consisting of a clean image and its adversarial versions generated by FGSM, MIM, and the proposed method. For each surveyed group, we prepared two questions: (a) Is it hard to tell which perturbed image is the ``cleanest''? If yes, we proceed to the next group; otherwise, we ask the participant to (b) choose the ``cleanest'' of the three adversarial images. Overall, on each dataset we received 500 answers (10 volunteers per dataset), and our subjective study shows that all participants perceive the adversarial images from the proposed method as the cleanest. With this human preference survey, we can safely conclude that the proposed method indeed considerably improves the perceptual quality of adversarial images.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/fake_face_compare.pdf}
\caption{Examples of fake face images for visual quality comparison of FGSM, MIM, and the proposed method. For FGSM and MIM, $\epsilon$ is 5.5 and 6, respectively; for the proposed method, $\epsilon^{[c]}$ is $2.5/6/6$ for the $Y, C_b, C_r$ channels. We recommend zooming in on the digital images for better visual comparison. }
\label{fig:vis_fake_face}
\end{figure}
\subsection{Adversarial transferability}
\label{sec:transfer}
In Fig.\ref{fig:vis_transfer}, we visualize, on Dataset 1 and Dataset 2, the transfer matrices of adversarial examples crafted from different forensic models using FGSM, MIM, and the proposed method. In each matrix, each row corresponds to the source model used to craft adversarial examples and each column to the target model on which they are evaluated.
\begin{figure}[h]
\centering
\subfloat{\includegraphics[width=8.8cm, height=2.7cm]{figures/heatmap_styleGAN.pdf}} \\
\subfloat{\includegraphics[width=8.8cm, height=2.7cm]{figures/heatmap_styleGAN2.pdf}}
\caption{Comparisons of adversarial transferability of FGSM, MIM and the proposed method on fake face image forensic models on Dataset 1 (the $1^{st}$ row) and Dataset 2 (the $2^{nd}$ row), respectively.}
\label{fig:vis_transfer}
\end{figure}
For each source model, the proposed method achieves higher average attack success rates than FGSM and MIM on both datasets. Beyond this, we make several interesting observations. First, the $NDL$ model is also likely to be fooled in the presence of anti-forensic perturbations; on Dataset 1 in particular, it fails almost completely. This indicates that even non-deep learning-based forensic models can be vulnerable to adversarial perturbations crafted from deep forensic models, which \textit{necessitates further security investigation into conventional forensic models} under adversarial attacks. Second, adversarial transferability can be quite asymmetric between forensic models. For instance, adversarial perturbations crafted from $m_1$ transfer effectively to $m_4$ for all three attacks, whereas adversarial examples created from the source model $m_4$ hardly transfer to $m_1$. This intriguing phenomenon might be related to the sophisticated decision landscapes of DL models (which differ in network modules and depth).

We also observe that, by careful selection of source forensic models, \textit{adversaries can build more transferable attacks} with the same attack method. To explore this direction, in a preliminary study we ensemble different forensic models to compose new source models and evaluate their attack performance. As an example, using grid search, we combine three forensic models into $m_{ens(i,j,k)}$ with $i, j, k \in \{1, \cdots, 6 \}$, fuse their scores with equal weights (see the sketch below), and generate adversarial perturbations with the proposed method. The average ASRs of some ensemble source models are reported in Table \ref{tab:ASR_ens}. Though the optimal ensemble selection (i.e., in terms of the number of models and their weights) remains unclear, we find that some combinations indeed generate more transferable attacks, e.g., the ensemble model $m_{ens(1,4,6)}$ on Dataset 1 and $m_{ens(3,4,5)}$ on Dataset 2. We will investigate this phenomenon further in the future.
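The equal-weight score fusion can be sketched as follows. This is our illustration; \texttt{models} stands for the list of selected source forensic networks.
\begin{verbatim}
import torch

def ensemble_logits(models, x):
    # Equal-weight fusion of the source models' scores; gradients for
    # the attack flow through the averaged logits.
    return torch.stack([m(x) for m in models]).mean(dim=0)
\end{verbatim}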
\begin{table}[htb]
\caption{ Average $ASR^{[p]}$ results (\%) from example combinations of the source models on Dataset 1 (\#1) and Dataset 2 (\#2).}
\centering
\begin{adjustbox}{width=0.48\textwidth}
\begin{tabular}{cccccc}
\toprule
Source model & $m_{ens(1,2,5)}$ & $m_{ens(1,4,5)}$ & $m_{ens(1,4,6)}$ & $m_{ens(2,3,4)}$ & $m_{ens(3,4,5)}$ \\ \hline
avg. $ASR^{[p]}$ (\#1) & 78.5 & 78.5 & \textbf{82.2} & 72.4 & 57.1 \\ \hline
avg. $ASR^{[p]}$ (\#2) & 83.3 & 83.2 & 72.5 & 72.5 & \textbf{84.3} \\ \bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:ASR_ens}
\end{table}
\subsection{Perturbation residues}
In Fig. \ref{fig:res_ycc}, we show the perturbations generated by the proposed method in the $YC_bC_r$ domain. The $\textrm{1}^{st}$ and $\textrm{2}^{nd}$ rows illustrate the perturbation histograms on Dataset 1 and Dataset 2 with the same parameters as in Table \ref{tab:ASR_fake}. Compared with Fig.\ref{fig:res_fgsm_mim}, perturbations in the $Y$ component approach $\pm 2.5$ on Dataset 1 ($\pm 2$ on Dataset 2), while perturbations in the $C_b$ and $C_r$ components spread away from 0 and concentrate around $\pm 6$. This observation aligns well with our expectation and helps explain the more imperceptible image quality of the proposed attack method. Besides the two datasets reported here, we also experimented on ProGAN \citep{proGAN} and StyleGAN (with image resolution $512 \times 512$) and reached a similar conclusion: the proposed method again substantially improves the visual quality over the baseline attacks. Due to the page limit, please find more comparison results on the project page.
\begin{figure}[h]
\centering
\includegraphics[width=8.2cm, height=4.5cm]{figures/res_ycc_dataset_1_2.pdf}
\caption{Example perturbation histograms of the proposed method in the $YC_bC_r$ domain on two datasets: $\epsilon^{[Y]}=2.5, \epsilon^{[C_b]}=6, \epsilon^{[C_r]}=6$ on Dataset 1; and $\epsilon^{[Y]}=2, \epsilon^{[C_b]}=6, \epsilon^{[C_r]}=6$ on Dataset 2. The histogram is generated using 5000 adversarial samples of fake face images. }
\label{fig:res_ycc}
\end{figure}
\vspace{-5mm}
\subsection{Comparison on different parameters}
In Fig.\ref{fig:para_sensitivity}, we show the average attack success rates and perceptual quality for different choices of $\epsilon$ for the FGSM and MIM attacks on the fake face image subset, where the source model is $m_1$. Generally, as the perturbation bound $\epsilon$ increases, the $ASR$ of both attacks increases at the cost of visual degradation. At comparably high visual quality, e.g., $\textrm{FSIM}_c$ of 0.984, the average $ASR$s of FGSM and MIM are only about $49.0\%$ and $46.7\%$, respectively, whereas the proposed method achieves $79.2\%$ ($\epsilon^{[c]}=2.5/6/6$), which is $\textbf{30.2}\%$ and $\textbf{32.5}\%$ higher than FGSM and MIM. Similarly, on Dataset 2 with $\textrm{FSIM}_c$ set to 0.992, the approximate $ASR$ improvements over FGSM and MIM are \textbf{35.1}\% and \textbf{28.5}\%. These results further demonstrate the superiority of the proposed method over the baseline attacks.
\begin{figure}[h]
\centering
\includegraphics[width=8.8cm, height=4.0cm]{figures/para_sensitivity.pdf}
\caption{Illustration of the averaged attack success rate and visual quality with different $\epsilon$ values for FGSM and MIM attacks on Dataset 1 ($1^{st}$ row) and Dataset 2 ($2^{nd}$ row). (a) and (c): $ASR^{[p]}$ vs. $\epsilon$; (b) and (d): $\textrm{FSIM}_c$ vs. $\epsilon$.}
\label{fig:para_sensitivity}
\end{figure}
\vspace{-3mm}
\section{Discussion}
\vspace{-2mm}
\label{sec:discussion}
\textbf{Attacking real face images:} Although attacking fake face images poses a greater threat to forensic models, we also report the attack success rates $ASR^{[n]}$ on the real face image subset (5,000 images in total) \cite{styleGAN} in Table \ref{tab:ASR_real}. Consistent with the conclusion on fake face images, the proposed method achieves the highest average $ASR^{[n]}$ and $\textrm{FSIM}_c$ compared with the FGSM and MIM attacks. Interestingly, for every attack method we observe a much sharper drop in attack success rates on the real face image subset than on fake face images. In particular, the adversarial perturbations may fail completely against the non-deep learning-based method \citep{color_disp}. We will further explore this phenomenon in future work.
\begin{table}[htb]
\caption{The comparisons of the attack success rate $(\%)$ and visual quality between FGSM, MIM and the proposed method on real face images. The source model is $m_1$. $\epsilon$ is 5.5 for FGSM and 6 for MIM; $\epsilon^{[c]}$ are $4/7/7$ for the proposed method for $Y, C_b, C_r$ channels. The best performances are marked in bold.}
\centering
\begin{adjustbox}{width=0.485\textwidth}
\begin{tabular}{cccccccccc}
\toprule \noalign{\smallskip}
Attack & $m_1$ & $m_2$ & $m_3$ & $m_4$ & $m_5$ & $m_6$ & $NDL$ & avg. $ASR^{[n]}$ & $\textrm{FSIM}_c$ \\
\noalign{\smallskip}
\hline
FGSM & 98.7 & 75.4 & 20.1 & 10.0 & 4.6 & 48.8 & 0 & 41.8 & 0.955 \\
\noalign{\smallskip}
\hline
MIM & 98.7 & 90.8 & 43.5 & \textbf{20.6} & 7.6 & 81.9 & 0 & 52.2 & 0.955 \\
\noalign{\smallskip}
\hline
\textbf{Prop.} & \textbf{98.7} & \textbf{94.4} & \textbf{47.3} & 18.0 & \textbf{11.3} & \textbf{88.9} & 0 & \textbf{53.9} & \textbf{0.965} \\
\noalign{\smallskip}
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:ASR_real}
\end{table}
\textbf{Attacks in the $HSV$ domain:} In addition to adversarial attacks in the $YC_bC_r$ domain, we also explored attacks in the $HSV$ domain, since recent studies show relatively large discriminative statistics in the $HSV$ domain for fake face forensics. Our preliminary study shows that attacks in the $HSV$ domain perform worse than those in the $YC_bC_r$ domain. One possible explanation is that there is no clear relationship between $HSV$ channels and the sensitivity of the human visual system, which makes it challenging to find adversarial examples with both high attack success rates and imperceptible visual quality.
\vspace{-3mm}
\section{Conclusion and Future Work}
\label{sec:conclusion}
\vspace{-2mm}
In this work, we studied imperceptible anti-forensics for GAN-generated fake face imagery detection based on an improved adversarial attack method. For existing attacks, our analysis of perturbation residues shows significantly reduced perturbation correlations in the $YC_bC_r$ channels compared with the $RGB$ channels, with the perturbations concentrating more in the $Y$ channel than in the $C_b$ and $C_r$ channels. Such perturbations can severely degrade the perceptual quality of facial images, which have large smooth regions, making existing attacks ineffective as a meaningful anti-forensic method. Taking the perception constraint into account, we propose a novel adversarial attack method better suited to fake face imagery anti-forensics. Specifically, we allocate larger perturbations to the $C_b$ and $C_r$ channels, to which human perception is less sensitive. Simple yet effective, the proposed method achieves both higher adversarial transferability and significantly improved visual quality compared with baseline attacks. Moreover, we observe that the proposed method can also fool non-deep learning-based forensic detectors with a high attack success rate. This study raises security concerns about existing fake face forensic methods.
Beyond fake face imagery anti-forensics, we believe all safety-critical forensic models should be evaluated against such anti-forensics based on adversarial attacks. Being more imperceptible and transferable, the proposed anti-forensic algorithm can serve as a good candidate for evaluating the adversarial vulnerability of forensic models. We have released our code to the forensic community for convenient use. In the future, we will further explore the feasibility of anti-forensics in related forensic tasks and develop improved algorithms to counter such anti-forensics.
\vspace{-3mm}
\section*{Acknowledgments}
\vspace{-1mm}
We acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC), and Yongwei Wang acknowledges financial support from the China Scholarship Council (CSC).
\vspace{-3mm}
\bibliographystyle{elsarticle-num}
\section*{Introduction and main results}
This paper constructs a quadratic algorithm that decides the embeddability of a graph with additional structure into the Möbius band. The history of the problem is presented in Remark \ref{c:hist}. The main result of the paper is Theorem \ref{t:main}. The following algebraic lemma is closely related to it.
\begin{lemma}\label{l:main}
Let $M$ be a symmetric matrix whose entries are zeros and ones. Then the following conditions are equivalent:
\begin{enumerate}
\item One can apply the same permutation to the rows and columns\footnote{This means that the rows and columns are numbered consecutively from $1$ to $n$ and a permutation~$f$ of these numbers is chosen; the rows and columns are permuted so that the $(i,j)$-th entry becomes the $(f(i),f(j))$-th one.} of the matrix $M$ and change some entries on the main diagonal so that the upper left corner of the resulting matrix contains a submatrix filled with ones, outside of which all entries are zeros.
\item
One cannot apply the same permutation to the rows and columns of the matrix $M$ so that the upper left corner of the resulting matrix contains a submatrix of the form
$$P = \begin{pmatrix}
*& 1& 1\\
1& *& 0\\
1& 0& *
\end{pmatrix}
{\rm \text{ or } }
Q = \begin{pmatrix}
*& 1& 0& 0\\
1& *& 0& 0\\
0& 0& *& 1\\
0& 0& 1& *\\
\end{pmatrix},
$$ where $*$ denotes arbitrary (possibly distinct) entries.
\footnote{The following condition is equivalent to conditions (1)--(2): by changing some entries on the main diagonal of $M$, one can obtain a matrix whose rank is at most $1$. The equivalence of this condition to condition (1) is obvious.}
\end{enumerate}
\end{lemma}
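In algorithmic terms, condition (1) amounts to the off-diagonal ones of $M$ being exactly the off-diagonal part of a clique on some subset of indices. A small Python sketch of this check (our illustration, not part of the paper) might read:
\begin{verbatim}
import numpy as np

def condition_1(M):
    # Condition (1): after a simultaneous row/column permutation and
    # some diagonal changes, M becomes an all-ones block padded with
    # zeros. Equivalently, the indices having an off-diagonal 1 must
    # be pairwise joined by ones (M is assumed symmetric).
    M = np.asarray(M)
    n = M.shape[0]
    core = [i for i in range(n)
            if any(M[i, j] for j in range(n) if j != i)]
    return all(M[i, j] == 1 for i in core for j in core if i != j)
\end{verbatim}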
The implication $(1) \Longrightarrow (2)$ follows easily from the fact that, for any assignment of zeros and ones to the main diagonal of either of the matrices $P$ and $Q$, the top and bottom rows of the resulting matrix are nonzero and distinct.
The implication $(2) \Longrightarrow (1)$ of Lemma \ref{l:main} is in effect proved within the proof of the implication $(4) \Longrightarrow (3)$ of Theorem \ref{t:main}; see Remark \ref{r:tl}.
\bigskip
We call a \textbf{hieroglyph} a cyclic word of length $2n$ in $n$ distinct letters in which every letter occurs exactly twice (standard terms: chord diagram, or one-vertex multigraph with rotations).
Take a convex polygon in the plane. Mark pairwise disjoint segments on its boundary polyline and label them with the letters of the word, in the order in which the letters appear in the word. For each letter, join the corresponding two segments by a ribbon (not necessarily lying in the plane) so that distinct ribbons do not intersect. (The ribbons may be twisted or untwisted.)
Any object obtained in this way is called a \textbf{disk with ribbons} corresponding to the given hieroglyph; see Fig. \ref{f:ababcdcd_1}.
\begin{figure}[ht]
\center{\includegraphics[scale=0.5, width=250pt]{ababcdcd_sample_2.png}}
\caption{An example of a disk with ribbons corresponding to the hieroglyph $ababcdcd$}
\label{f:ababcdcd_1}
\end{figure}
We call a hieroglyph \textbf{weakly realizable} on the Möbius band if some disk with ribbons corresponding to this hieroglyph can be cut out of the Möbius band (cf. Remark \ref{c:hist}).
\begin{theorem}[proved below]\label{t:alg}
There is a quadratic algorithm that checks the weak realizability of a hieroglyph on the Möbius band.
\end{theorem}
We say that two distinct letters $a$ and $b$ of a hieroglyph \textbf{cross} if they occur in the hieroglyph in alternating order ($abab$ rather than $aabb$).
Two hieroglyphs are considered equal if one is obtained from the other by a bijective renaming of letters or by an axial symmetry.
\begin{theorem}\label{t:main}
Let $H$ be a hieroglyph. The following conditions are equivalent:
\begin{enumerate}
\item
The hieroglyph $H$ is weakly realizable on the Möbius band.
\item The letters of the hieroglyph $H$ can be colored red and blue so that any two red letters cross, while no blue letter crosses any other letter.
\item
By operations of deleting both occurrences of a letter that crosses no other letter of the hieroglyph $H$, one can reduce $H$ to a hieroglyph of the form
$a_1 a_2 \ldots a_m a_1 a_2 \ldots a_m$ (possibly with $m=0$).
\item Neither the hieroglyph $abcacb$ nor $ababcdcd$ can be obtained from $H$ by operations of deleting both occurrences of a letter.
\footnote{The following condition is equivalent to conditions (1)--(4): there is an $n\times n$ matrix over $\mathbb{Z}_2$ of rank at most $1$, where $n$ is the number of distinct letters of the hieroglyph $H$, whose entry outside the main diagonal is $0$ if the corresponding letters of $H$ do not cross, and $1$ otherwise. The equivalence of this condition to condition (1) is a special case of Mohar's Theorem \ref{t:mohar}, stated below.}
\end{enumerate}
\end{theorem}
In Theorem \ref{t:main}, the implications (3) $\Longleftrightarrow$ (2) $\Longrightarrow$ (1) are obvious. The implications (1) $\Longrightarrow$ (4) $\Longrightarrow$ (3) are proved below.
\begin{comment}[\textbf{history of the problem}]\label{c:hist}
Basic references on this topic are \cite{MT01}, {\cite[$\S 2$]{Sk20}}, \cite{LZ}. For the generalization to arbitrary graphs see \cite{MT01}, {\cite[$\S 2$]{Sk20}}, \cite{LZ} and \cite{Ko20}. For the connection with integrable Hamiltonian systems see \cite{BFM90}.
Polynomial algorithms are known that decide whether a specific two-dimensional manifold can be cut out of the Möbius band, for example, algorithms using the Euler characteristic or Mohar's Theorem \ref{t:mohar}.
However, the existence of these algorithms does not imply the existence of a polynomial algorithm deciding weak realizability of a hieroglyph on the Möbius band. Indeed, to a hieroglyph with $n$ pairs of letters there corresponds not one but $2^n$ two-dimensional manifolds (disks with ribbons), since each ribbon may be either twisted or untwisted. Weak realizability of a hieroglyph on the Möbius band is equivalent to at least one of them being cuttable out of the Möbius band.
\end{comment}
\section*{Proofs and an open problem}
\begin{proof}[\textbf{Proof of the implication (4) $\Longrightarrow$ (3) of Theorem \ref{t:main}}]
Suppose a hieroglyph $H$ satisfies condition (4). Denote by $H_1$ the hieroglyph obtained from $H$ by deleting all letters that cross no other letter. Condition (4) also holds for the hieroglyph $H_1$. Suppose that $H_1$ contains a pair of non-crossing letters $a$ and $c$.
If there is a letter $b$ crossing both $a$ and $c$, then, deleting from $H_1$ all letters except $a$, $b$, and $c$, we obtain the hieroglyph $abacbc$, which equals the hieroglyph $abcacb$, contradicting condition (4). Hence every letter distinct from $a$ and $c$ crosses at most one of them.
Consequently, since every letter of $H_1$ crosses at least one other letter, there are two distinct letters $b$ and $d$ such that $b$ crosses $a$ but not $c$, and $d$ crosses $c$ but not $a$. Therefore, if $b$ and $d$ do not cross, then, deleting all letters except $a$, $b$, $c$, and $d$, we obtain the hieroglyph $ababcdcd$. If $b$ and $d$ do cross, then, deleting all letters except $a$, $b$, and $d$, we obtain the hieroglyph $badbda$, which equals the hieroglyph $abcacb$. In all cases we obtain a contradiction with condition (4), so $H_1$ contains no pair of non-crossing letters. Hence any two letters of $H_1$ cross, and then $H_1$ has the form $a_1 a_2 \ldots a_n a_1 a_2 \ldots a_n$. Consequently, the hieroglyph $H$ satisfies condition (3).
\end{proof}
The \textbf{crossing matrix} of a disk $D$ with $n$ ribbons is the $n \times n$ matrix in which
$\bullet$ the entry on the main diagonal is $1$ if the corresponding ribbon is twisted, and $0$ otherwise, and
$\bullet$ the entry outside the main diagonal, at the intersection of row $i$ and column $j$, is $0$ if the corresponding ribbons do not cross, and $1$ otherwise.
For example, the crossing matrix of any disk with ribbons corresponding to the hieroglyph $abcacb$ is a matrix of the form $P$ (see Lemma \ref{l:main}), in which the entries on the main diagonal depend on which ribbons are twisted.
\notpaper{
$$\begin{pmatrix}
*& 1& 1\\
1& *& 0\\
1& 0& *
\end{pmatrix}$$
(the entries on the main diagonal depend on which ribbons are twisted.)}
\begin{figure}[h]
\center{\includegraphics[scale=0.5, width=200pt]{Scop.png}}
\caption{A disk with three Möbius bands}
\label{f:disk}
\end{figure}
A \textbf{disk with $m$ Möbius bands} (see Fig. \ref{f:disk}) is the union of a disk and $m$ ribbons in which
$\bullet$ each ribbon is glued along two segments to the boundary circle $S$ of the disk, and the directions on these segments induced by an arbitrary direction on~$S$ are ``codirected along the ribbon'',
$\bullet$ the ribbons are ``separated'', i.e., glued to $2m$ pairwise disjoint segments on~$S$.
The \textbf{rank over $\mathbb{Z}_2$} of a matrix of zeros and ones is the dimension of its row space over the field $\mathbb{Z}_2$.
The following theorem was stated and applied in \cite{Mo89}.
\begin{theorem}\label{t:mohar}
A disk with ribbons can be cut out of a disk with $m$ Möbius bands if and only if the rank over $\mathbb{Z}_2$ of the crossing matrix of this disk with ribbons is at most $m$.
\end{theorem}
Theorem \ref{t:mohar} can be proved, for example, by a straightforward application of the intersection form of a two-dimensional manifold. See also {\cite[Assertion 2.8.8(c)]{Sk20}} and {\cite[Assertion 6.7.7]{Sk20}}.
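For completeness, here is a small Python sketch (our illustration) computing the rank over $\mathbb{Z}_2$ by Gaussian elimination, with rows packed into integers:
\begin{verbatim}
def rank_gf2(matrix):
    # matrix: list of equal-length rows of 0/1 entries.
    pivots = {}  # leading-bit position -> reduced pivot row
    for row in matrix:
        v = int("".join(map(str, row)), 2)
        while v:
            lead = v.bit_length()
            if lead in pivots:
                v ^= pivots[lead]   # eliminate the leading 1
            else:
                pivots[lead] = v    # record a new pivot
                break
    return len(pivots)
\end{verbatim}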
\begin{proof}[\textbf{Proof of the implication (1) $\Longrightarrow$ (4) of Theorem \ref{t:main}}]
For any assignment of zeros and ones to the main diagonal of either of the matrices $P$ and $Q$ from Lemma \ref{l:main}, the two top rows of the resulting matrix are nonzero and distinct; over $\mathbb{Z}_2$, two distinct nonzero rows are linearly independent, so the rank of the resulting matrix is at least $2$.
The matrices $P$ and $Q$ are the crossing matrices of disks with ribbons corresponding to the hieroglyphs $abcacb$ and $ababcdcd$. Therefore, by Mohar's theorem (applied with $m=1$, the Möbius band being a disk with one Möbius band), the hieroglyphs $abcacb$ and $ababcdcd$ are not weakly realizable on the Möbius band. Since deleting both occurrences of a letter corresponds to removing a ribbon, weak realizability is preserved under such deletions, and the implication follows.
\end{proof}
\begin{comment}[\textbf{Relation between Lemma \ref{l:main} and Theorem \ref{t:main}}]\label{r:tl}
Theorem \ref{t:main} is a topological version of Lemma \ref{l:main}. Here condition (1) of the lemma is the analogue of condition (2) of the theorem, and condition (2) of the lemma is the analogue of condition (3) of the theorem. The interlacement matrices of any disks with ribbons corresponding to the hieroglyph $H$ of the theorem are obtained from one another by the same permutations of columns and by changing some entries on the main diagonal. To any of these matrices one can associate the matrix $M$ of the lemma. Then applying to the hieroglyph $H$ several operations of deletion of identical letters corresponds to taking the upper left corner of a matrix obtained from $M$ by the same permutations of columns and by changing some entries on the main diagonal. Each letter of the hieroglyph corresponds to a row and a column of $M$ that are symmetric with respect to the main diagonal. The matrices $P$ and $Q$ in part $(2)$ of the lemma are the analogues of the hieroglyphs $abcacb$ and $ababcdcd$ in part $(4)$ of the theorem. The proof of the implication $(2) \Longrightarrow (1)$ of Lemma \ref{l:main} is analogous to the proof of the implication $(4) \Longrightarrow (3)$ of Theorem \ref{t:main}.
\end{comment}
\begin{proof}[\textbf{Proof of Theorem \ref{t:alg}}]
Condition (3) of Theorem \ref{t:main} can be checked in time quadratic in the length of the hieroglyph. Let us prove this. Construct a graph $G$ whose vertices correspond to the letters of the hieroglyph $H$, two vertices being joined by an edge if the corresponding letters interlace. This construction takes time quadratic in the length of the hieroglyph. Condition (3) is then equivalent to $G$ being the union of a clique and, possibly, several isolated vertices. Checking every vertex for isolation takes quadratic time; after that, verifying that all non-isolated vertices form a clique also takes quadratic time.
\end{proof}
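The check just described is short to implement. Here is a minimal Python sketch; for transparency it uses a simple cubic scan instead of the quadratic bookkeeping, and the string encoding of a hieroglyph is an illustrative choice.
\begin{verbatim}
from itertools import combinations

def interlace(word, a, b):
    # a and b interlace iff exactly one occurrence of b lies strictly
    # between the two occurrences of a.
    i, j = word.index(a), word.rindex(a)
    return sum(1 for k, c in enumerate(word) if c == b and i < k < j) == 1

def satisfies_condition_3(word):
    letters = sorted(set(word))
    adj = {a: {b for b in letters if b != a and interlace(word, a, b)}
           for a in letters}
    non_isolated = [a for a in letters if adj[a]]
    # G must be a clique on its non-isolated vertices.
    return all(b in adj[a] for a, b in combinations(non_isolated, 2))

print(satisfies_condition_3("abab"))    # True
print(satisfies_condition_3("abcacb"))  # False
\end{verbatim}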
\begin{problem}\label{problem}
Let $M$ be an $n \times n$ matrix of zeros and ones with zeros on the diagonal.
Denote by $R(M)$ the smallest rank over $\mathbb Z_2$ of a matrix obtained from $M$ by changing some of its diagonal entries. Find a fast (in $n$) algorithm computing $R(M)$.
\end{problem}
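For small $n$ the quantity $R(M)$ can of course be computed by exhausting all $2^n$ diagonal assignments. The sketch below does exactly that; it is illustrative and exponential, not the fast algorithm the problem asks for.
\begin{verbatim}
from itertools import product

def gf2_rank(rows):               # as in the earlier sketch
    pivots = []
    for row in rows:
        cur = row
        for p in pivots:
            cur = min(cur, cur ^ p)
        if cur:
            pivots.append(cur)
    return len(pivots)

def R(M):
    n = len(M)
    best = n
    for diag in product([0, 1], repeat=n):
        rows = [sum((diag[i] if i == j else M[i][j]) << (n - 1 - j)
                    for j in range(n)) for i in range(n)]
        best = min(best, gf2_rank(rows))
    return best

M = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # off-diagonal part of P
print(R(M))                             # 2
\end{verbatim}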
\begin{corollary}[of Theorem \ref{t:mohar}]\label{c:R}
A disk with $n$ ribbons with interlacement matrix $M$ is realizable on a disk with $m$ Möbius bands if and only if $R(M) \leq m$.
\end{corollary}
In the paper {\cite{Ko20}}, for every fixed nonnegative integer $m$, a polynomial (in $n$) algorithm checking the inequality $R(M)\leq m$ is given. This and other related results are presented in \cite{VGDNPS}.
The following fact shows that the problem of realizability of a hieroglyph on the Möbius band reduces only to a special case of Problem \ref{problem}.
\begin{claim}\label{c:matrix}
Not every matrix of zeros and ones is the interlacement matrix of some disk with ribbons.
\end{claim}
\begin{proof}[\textbf{Proof}]
Consider the adjacency matrix of the graph $G$ shown in Fig. \ref{f:matrix}.
\begin{figure}[h]
\center{\includegraphics[scale=0.5, width=150pt]{graph3.png}}
\caption{Illustration for Claim \ref{c:matrix}}
\label{f:matrix}
\end{figure}
Its vertices $A$, $B$, $C$ are pairwise non-adjacent. If there were a disk with ribbons whose interlacement matrix equals the adjacency matrix of the graph $G$, then the endpoints of the ribbons corresponding to the vertices $A$, $B$, $C$ would occur in it in one of the $4$ following cyclic orders: $aabbcc$, $abccba$, $acbbca$, $baccab$. In the case of the cyclic order $aabbcc$, the ribbon corresponding to the vertex $G$ cannot interlace all three ribbons corresponding to the vertices $A$, $B$, $C$. In the case of the cyclic order $abccba$, the ribbon corresponding to the vertex $F$ cannot simultaneously interlace the ribbons corresponding to the vertices $A$ and $C$ while not interlacing the ribbon corresponding to the vertex $B$. The remaining two cases are analogous. We obtain a contradiction.
\end{proof}
Any two disks with ribbons having the same interlacement matrix are homeomorphic; however, they may correspond to different hieroglyphs: for example, the disks with ribbons corresponding to the hieroglyphs $aabbcc$ and $abbacc$, respectively (in each of them no two letters interlace).
\section*{Appendix: a direct proof of the implication (1) $\Longrightarrow$ (4) of Theorem \ref{t:main}}
The hieroglyphs $abcacb$ and $ababcdcd$ are not weakly realizable on the Möbius band by Betti's Theorem \ref{t:betty} and Lemma \ref{l:curve}.
\begin{theorem}\label{t:betty} The union of any two distinct closed polygonal curves on the Möbius band separates it.
\end{theorem}
\begin{proof}[\textbf{Proof}]
Theorem \ref{t:betty} follows from the Euler inequality for the Möbius band \cite[2.8.2]{Sk20} and the Riemann theorem for the Möbius band \cite[2.8.3.(b)]{Sk20}. Let us give the details.
The union of two disjoint closed polygonal curves separates the Möbius band by the theorem of \cite[2.8.3.(b)]{Sk20}.
Now suppose the given curves intersect. We may assume that their vertices and edges form a connected graph.
For every connected graph with $V$ vertices and $E$ edges drawn without self-intersections on the Möbius band and dividing it into $F$ faces, the Euler inequality $V-E+F\geq 1$ holds. It is easy to check that in our case $V<E$, so $F>1$; hence the curves separate the Möbius band.
\end{proof}
\begin{lemma}\label{l:curve}
On every disk with ribbons corresponding to the hieroglyph $abcacb$ or $ababcdcd$ there are two curves, intersecting in a finite number of points, that do not separate this disk with ribbons.
\end{lemma}
\begin{lemmaproof}
Figures \ref{f:abcacb} and \ref{f:ababcdcd} show examples of such pairs of curves on the disks with untwisted ribbons corresponding to the hieroglyphs $abcacb$ and $ababcdcd$. Let us show that if one or several ribbons of either of these disks with ribbons are replaced by twisted ones, the corresponding pairs of curves still do not separate the resulting disks with ribbons.
\begin{figure}[h]
\center{\includegraphics[scale=0.5, width=250pt]{top1.png}}
\caption{The disk with ribbons corresponding to the hieroglyph $abcacb$, and two curves on it.}
\label{f:abcacb}
\end{figure}
Suppose that in the disk with ribbons shown in Fig. \ref{f:abcacb} some ribbons are replaced by twisted ones. Clearly, the ribbon $c$ lies in a single connected component of the complement of the curves in the disk with ribbons. Then the regions $I$ and $II$ lie in this component. Hence the ribbon $a$ lies entirely in it. Therefore all the regions $I$, $II$, $III$, $IV$ lie in this component, and then the ribbon $b$ lies in it as well. Thus the curves do not separate the disk with ribbons.
\begin{figure}[h]
\center{\includegraphics[scale=0.5, width=250pt]{ababcdcd.png}}
\caption{The disk with ribbons corresponding to the hieroglyph $ababcdcd$, and two curves on it.}
\label{f:ababcdcd}
\end{figure}
For the hieroglyph $ababcdcd$ and the disk with ribbons shown in Fig. \ref{f:ababcdcd}, the following holds. If the ribbon $b$ is replaced by a twisted one while the ribbon $a$ is left untwisted, then each of the parts into which the curve divides the ribbon $a$ lies in the same connected component as the center of the disk. Thus the ribbon $a$ lies entirely in one connected component, and then so does the ribbon $b$ (each of its parts is not separated from some part of the ribbon $a$). The same holds if the ribbon $a$ is replaced by a twisted one while the ribbon $b$ is left untwisted. If both ribbons $a$ and $b$ are replaced by twisted ones, the connectedness can be checked directly for both ribbons. For the ribbons $c$ and $d$ a completely analogous argument applies.
\end{lemmaproof}
\section{Introduction} \label{sec:intro}
Observations of cosmic neutral hydrogen emission hold the promise of
making precision measurements of the universe's evolution at intermediate
to late redshifts ($30 > z > 0.5$) \citep{Madau97,Tozzi00,LiuShaw20,PUMA}, providing an observable to trace the growth of massive structure from the time of the Cosmic Microwave Background (CMB), as well as to constrain the physics of Reionization ($10 > z > 6$) \citep{Furlanetto06, PritchardLoeb12, MoralesWyithe10}.
Measurement of the 21cm line relies on intensity mapping, in which large fractions of the sky are observed to capture wide-field statistics, instead of resolving individual sources \citep[for an overview see e.g.][]{MoralesWyithe10, LiuShaw20}. This technique has already been used to place constraints on the formation of the first stars and galaxies at $z \sim 9$ \citep{edges2017}, but the precise nature of the Epoch of Reionization (EoR) remains unknown \citep{Furlanetto06}. 21cm intensity maps also promise to be a tracer of three-dimensional, large-scale structure growth at later redshifts $z \lesssim 4$, linking late-stage structure to underlying gravitational theory and the primordial density \citep{hall-2013, camera-2013}. Upcoming experiments, such as the Square Kilometer Array (SKA) promise to trace EoR physics to large-scale structure formation using this single observable \citep{ska2020}.
The greatest challenge for these measurements is mitigating systematics and
removing enormous foreground contamination from galactic radio sources such as synchrotron and free-free emission, as well as
extragalactic features like point sources \citep{MoralesWyithe10, PritchardLoeb12}. These contaminants tend to be three to
four orders of magnitude brighter than the interesting cosmological signal
\citep{haslam1982, MoralesWyithe10}. Furthermore, foregrounds lack detailed analytic descriptions, making 21cm likelihoods hard to specify \citep{Alonso_sim_2014}.
The foreground signals have different statistical properties than the
cosmological signal, a phenomenon thoroughly covered in the literature \citep{Di_Matteo_2002, Peng_Oh_2003, santos-fg, wang_poly_2006, Morales_2006, Jeli__2008, Gleser_2008, Bernardi_2009, Bernardi_2010, Moore_2013}, with several proposed methods for signal separation \citep[][]{Liu_2009, Wolz_2014, Liu_2011, Masui_2013, Shaw_2014, Shaw_2015}.
Most foreground contaminants are forecast to be spectrally smooth in frequency, motivating the application of blind signal separation
techniques, such as Principal Component Analysis (PCA)
\citep[e.g.][]{pca_oliveira2008,Alonso_pca_2014}, which require no prior
knowledge of the expected signals. However, blind separation techniques
are not linked to physical processes and therefore make no use of our
physical understanding of these foregrounds. This means that there exists
information in the observed signal that is not fully exploited for
separation. Blind subtraction for single-dish experiments irretrievably removes the mean of the HI intensity spectrum and can also distort the signal anisotropy \citep{Cunnington_2019, 2015ApJ_signal_loss_singledish, 2018ApJ_signalloss_inter}, placing the focus of analyses on interpreting compressed,
sometimes biased \citep[see e.g.][]{Wolz_2014, Spinelli_2019, Alonso_pca_2014}, summary
statistics derived from these maps, such as power spectra. With clean 21cm intensity
maps, more fundamental large-scale structure analyses and parameter extraction would be
possible using the maps themselves \citep[][for a review]{deeplearn-eor-Gillet_2019, Mangena_2020, Weltman_2020}.
We address these problems by constructing a convolutional neural network
to recover cosmological 21cm maps from PCA-reduced inputs. Deep learning
architectures are ideally suited to similar high-dimensional problems such
as image segmentation, classification, and computer vision tasks (see e.g.
\cite{GoodBengCour16} for a review, and e.g. \cite{shirleyd3m},
\cite{prob-unet}, \cite{Jay_2020} for applications), in which patterns and higher-order
correlations must be captured over a large set of input data. To
incorporate an estimate of uncertainties of the separated maps we train an
ensemble of networks on a suite of simulated radio skies. We then test
these architectures on simulations with altered foreground parameters to
assess how well the approach generalizes beyond the fiducial choice of model.
{Recent studies have incorporated deep learning techniques to analyze EoR
cosmology \citep{deeplearn-eor-Gillet_2019, deeplearn-21-Mangena_2020,
Kwon-deeplearn-21_2020, Pablo_2020, Hassan2020, Chardin_2019, List_2020}, but have largely focused on retrieving compressed cosmological statistics or higher-order correlations from clean
intensity maps.
Some novel methods for astrophysical foreground removal include \citet{bayesian-semi-blind}, who present a Bayesian method for power spectrum recovery that effectively limits blind subtraction bias by exploiting the isotropy and homogeneity of the 21cm signal. Deep learning foreground removal techniques have also been investigated, e.g. for CMB maps \citep{yao_2018}, interferometric 21cm foregrounds \citep{Li_2019}, and far-field radio astrophysics \citep{Pablo_2020}.}
However, our study is (to our knowledge) the first to leverage a fully 3D UNet architecture to separate clean 21cm maps from radio foregrounds directly from simulated single-dish observations. These
clean maps can then be leveraged in existing 21cm analyses.
The layout of this paper is as follows: in Section \ref{sec:formalism}, we
present the physical formalism behind HI intensity mapping and astrophysical
foregrounds. In Section \ref{sec:methods}, we present the foreground
subtraction techniques we employ to train our network. We detail the
architecture choice and the UNet training procedure in Section \ref{sec:deep21}.
Results for both blind subtraction and our network are presented in Section
\ref{sec:results}, followed by tests on foregrounds with altered simulation
parameters. We discuss the successes of our network, as well as failure modes
in Section \ref{sec:conclusions}. The cosmology we assume in our study is the
standard flat $\Lambda$CDM, in agreement with the results of \citet{planck2016},
with fiducial parameters $\{ \Omega_m, \Omega_b, h, n_s, \sigma_8 \}=$ $\{ 0.315,
0.049,$ $0.67, 0.96, 0.83 \}$.
\section{Methods and Formalism}\label{sec:formalism}
In this section we describe the theoretical formalism behind the three main
components of the observed HI 21cm sky: the cosmological signal itself, the
various galactic and extra-galactic foregrounds, and observational noise.
For a deeper discussion of various aspects of these components, we refer
the reader to one of the several review papers on the subject such as
\cite{Furlanetto06,MoralesWyithe10,PritchardLoeb12} and \cite{LiuShaw20}.
We then describe how these various components are created in the simulated
skies that we use to train our machine learning framework.
\subsection{Cosmological HI Signal}
The component of the 21cm sky that we care about most is the redshifted
HI signal itself. This signal is often described in terms of a brightness
temperature, $T_b$, which relates the observed intensity of the signal at
a given sky position and frequency to a temperature \citep{field_58, field_59}. In
the Rayleigh-Jeans limit ($\hbar \nu_{21} \ll k_B T_b$), this brightness
temperature can be related to the underlying cosmology at a given line of
sight $\hat{\textbf{n}}$, and frequency, $\nu$ as \citep{ Madau97}
\begin{equation}\label{eq:temp-from-density}
T_b(\hat{\textbf{n}}, \nu) = \frac{3 \hbar c^3 A_{21}}{16 k_B \nu^2_{21}} \frac{(1+z)^2}{H(z)} {n}_{\rm HI}(z, \hat{\textbf{n}}) \, ,
\end{equation}
where $n_{\rm HI} \propto (1 + \delta_{\rm HI})$ is the comoving number
density of HI, $H(z)$ is the Hubble parameter as a function of redshift,
$z$, $k_B$ is Boltzmann's constant, $\nu_{21}$ is
the frequency associated with the HI hyperfine line, $\hbar$ is the
reduced Planck's constant, and $A_{21} = 2.876\times10^{-15}\ \rm Hz$ is
the 21cm line Einstein emission coefficient. Using the standard values
for the various constants along with a standard flat $\Lambda$CDM cosmology for the evolution
of $H(z)$, this expression can be written in terms of the HI overdensity,
$\delta_{\rm HI}=\rho_{\rm HI}/\bar{\rho}_{\rm HI}-1$, redshift, and
cosmological parameters \citep{Madau97, Furlanetto06} as
\begin{equation}\label{eq:temp-hi-density}
\begin{split}
T_{b}&(\hat{\textbf{n}}, z) ={} 0.19055\ \times \\
&\frac{\Omega_b h (1 + z)^2 x_{\rm HI}(z)}{\sqrt{\Omega_m(1+z)^3 + \Omega_{\Lambda}}} (1 + \delta_{\rm HI}\left(\hat{\textbf{n}}, z \right))~\text{mK}
\end{split}
\end{equation}
where $h\equiv H_0/(100 \, {\rm km/s/Mpc})$ is the dimensionless Hubble
constant, $x_{\rm HI}$ is the fraction of baryonic mass comprised of HI,
and $\Omega_b$ and $\Omega_m$ are the baryon and total matter fractions,
respectively. The observed 21cm signal can thus be related to the underlying
cosmological model and relevant parameters for inference studies
\citep{Abdalla_2005, scott-dark-ages, bull-late-stage-cosmo-21, paco-2016, Paco_2018}. The 21cm signal
can thus be used as a tracer of the large-scale structure of the Universe.
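As a concrete illustration, Equation \ref{eq:temp-hi-density} is straightforward to evaluate numerically. The short Python sketch below assumes the fiducial parameters quoted in the Introduction and a purely illustrative constant value of $x_{\rm HI}$.
\begin{verbatim}
import numpy as np

def mean_Tb_mK(z, x_HI=0.01, Omega_m=0.315, Omega_b=0.049, h=0.67):
    """Mean brightness temperature (delta_HI = 0) in mK, flat LCDM.
    The constant x_HI = 0.01 is an illustrative placeholder."""
    Omega_L = 1.0 - Omega_m
    return (0.19055 * Omega_b * h * (1.0 + z)**2 * x_HI
            / np.sqrt(Omega_m * (1.0 + z)**3 + Omega_L))

print(mean_Tb_mK(np.array([0.5, 1.0, 2.5])))
\end{verbatim}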
The \texttt{CRIME} simulation code, described in \cite{Alonso_sim_2014},
generates a dark matter field and then utilizes a log-normal model to
generate the cosmological HI intensity maps. Concisely, Gaussian density
and velocity perturbations are generated on a Cartesian grid with no
redshift effects. The ``observer'' is placed at the center of the grid,
and the signal is projected onto the observer's light cone. The Gaussian
density field then undergoes localized log-normal transformations to
generate the non-uniform HI density field. The results are projected onto
spherical sky maps at different frequencies corresponding to the redshift of the structure from the observer, using the \texttt{HEALPix}
pixelization scheme \citep{healpix}. The brightness temperature $T_b$ is
related to the underlying HI number density $n_{\rm HI}$, as shown in
Equation \ref{eq:temp-from-density}. The simulations are generated in a box
of size 8850 $h^{-1}$ Mpc per side with 3072$^3$ box cells. The interpolated
sky maps were generated using a \texttt{HEALPix} resolution of
$N_\text{side}=256$, which corresponds to a per-pixel frequency-independent
resolution of $\theta_{\text{\rm pix}}\approx 14'$.
\subsection{Foregrounds}\label{sec:foregrounds}
21cm foregrounds currently lack a detailed analytic description, making
them difficult to separate from the cosmological HI signal. However,
descriptive numerical simulations exist \citep[e.g.][]{hammurabi-gal-fg}, largely extrapolated from
observed radio maps, such as the Haslam and Planck maps \citep{haslam1982, planck2016}. The foregrounds that we hope to remove from the
observed sky can be separated into galactic and extra-galactic components.
Extragalactic foreground sources are expected to be distributed according
to a clear power spectrum \citep{Di_Matteo_2002, cohen_2004, santos-fg}, while galactic foregrounds, such as synchrotron
emission, are expected to be localized, particularly in the galactic plane
\citep{santos-fg, hammurabi-gal-fg}.
For galactic sources, the \texttt{CRIME} simulations extrapolate foregrounds
from the 408 MHz map of \citet{haslam1982} to the relevant frequencies, as
described in \citet{santos-fg}. For weaker foregrounds such as
point sources and free-free emission, as well as for synchrotron effects on
small scales, we adopt Gaussian realizations of the generic power-spectrum
based model
\begin{equation}\label{eq:fg-cl-model}
\begin{split}
C_\ell (\nu_1, \nu_2) & ={} A \left(\frac{\ell_{\rm ref}}{\ell}\right)^{\beta} \left( \frac{\nu^2_{\rm ref}}{\nu_1 \nu_2} \right)^\alpha \exp{\left(\frac{-\log^2(\nu_1/\nu_2)}{2 \xi^2} \right)}
\end{split}
\end{equation}
with values given in Table \ref{tab:iso-params} (see \cite{santos-fg} for
details). While we train our network on these fiducial values, we explore
model generalization to new foreground parameters in
Section \ref{sec:generalization}.
\begin{table}[t!]
\begin{center}
\begin{tabular}{l c c c c}
\toprule
{Foreground Component} & $A\ \rm [mK^2]$ & $\beta$ & $\alpha$ & $\xi$ \\
\midrule
Galactic Synchrotron & 1100 & 3.3 & 2.80 & 4.0 \\
Point Sources & 57 & 1.1 & 2.07 & 1.0 \\
Galactic free-free & 0.088 & 3.0 & 2.15 & 35 \\
Extragalactic free-free & 0.014 & 1.0 & 2.10 & 35 \\
\bottomrule
\end{tabular}
\caption{Fiducial foreground $C_\ell(\nu_1, \nu_2)$ model parameters used in this study, adapted from \cite{santos-fg} for the pivot values $\ell_{\rm ref} = 1000$ and
reference frequency $\nu_{\rm ref}=130$ MHz.}\label{tab:iso-params}
\end{center}
\end{table}
\subsubsection{Polarized Foregrounds}
Foreground polarization arises when synchrotron emitting electrons traverse
the Milky Way's magnetic fields, changing their polarization angles due to
Faraday rotation \citep[see e.g.][]{RybickLightman86}. Despite some empirical observations
\citep[e.g.][]{wolleben-polar2006A&A...448..411W, deBruyn2006, schnitzeler2009}, this effect on radio foregrounds
is poorly understood, except perhaps at very low radio frequencies, as reported by the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) \citep{edges-2018}. Several models have been proposed to describe
this phenomenon. The \texttt{CRIME} simulation package defines the Faraday
depth at a distance $s$ along a line of sight (LOS), $\hat{\textbf{n}}$, as:
\begin{equation}\label{eq:faraday-depth}
\psi(s, \hat{\textbf{n}}) = \frac{e^3}{2\pi (m_e c^2)} \int_0^s ds' n_e(s', \hat{\textbf{n}})B_\parallel(s', \hat{\textbf{n}})
\end{equation}
where $m_e$ is the electron mass, and $n_e(s, \hat{\textbf{n}})$ and $B_\parallel(s, \hat{\textbf{n}})$ are the
number density of electrons and the galactic magnetic field contribution for the
given LOS. As shown in \cite{Alonso_sim_2014}, the correlation over frequency
for the polarization leakage field $\mu$ can then be written as:
\begin{equation}
\langle \mu_{\ell m}(\psi) \mu^*_{\ell'm'}(\psi')\rangle \propto \delta_{\ell\ell'} \delta_{mm'} \left( \frac{\ell_{\rm ref}}{\ell} \right) e^{-\frac{1}{2}\left[\frac{\psi - \psi'}{\xi_{\rm polar}}\right]^2}
\end{equation}
where the correlation length, $\xi_{\rm polar}$, and amplitude are free
parameters. The rest of the numerical values are given in
\cite{Alonso_sim_2014}. The polarized emission correlation length is usually phenomenologically chosen. For example, \cite{polar-Shaw_2015} use an equivalent correlation length scale $\xi_{\rm polar}$ of
$0.1-0.05\ \rm rad\ m^{-2}$ to fit observations, while \cite{Alonso_sim_2014} choose a typical value of $0.5\ \rm rad\ m^{-2}$ to match simulations obtained by the \texttt{Hammurabi} code \citep{hammurabi-gal-fg}. We vary this parameter to probe failure modes
in our foreground subtraction method in Section \ref{sec:polar}.
\subsection{Observational Noise}
The third component of the observed 21cm signal is (largely thermal)
observational noise. Radio observational noise can be simply modeled
as zero-centered Gaussian noise for single-dish experiments
\citep{bull-late-stage-cosmo-21, LiuShaw20}.
We modify the white noise model with a stochastic component in order to train and test cleaning methods on a wide range of possible observational thermal noise, capturing a range of possible current and future intensity mapping configurations. For each full-sky simulation, we adopt a frequency-dependent hierarchical noise model, which has the added advantage of better training our networks (see Section \ref{sec:deep21}):
\begin{subequations}\label{eq:noise-model}
\begin{align}
\alpha_{\rm noise} &\curvearrowleft \log \mathcal{U}(0.05, 0.5) \\
\sigma_{\rm noise} &= \alpha_{\rm noise}\ \langle T_b(\nu) \rangle \\
\epsilon_{b,i} &\curvearrowleft\mathcal{N}(0, \sigma_{\rm noise} ) \\
\hat{T}_{b,i} &= T_{b,i} + \epsilon_{b,i} \ \ \ \
\end{align}
\end{subequations}
where we relate the variance of the noise to the average fiducial cosmological temperature at a given frequency, $\langle T_b(\nu) \rangle$. The observed signal at pixel $i$, written as $\hat{T}_{b,i}$, is then given by the true signal, $T_{b,i}$, with the addition of the Gaussian noise $\epsilon_{b,i}$. For comparison, the noise models employed by \cite{Alonso_pca_2014} and \cite{paco-2016} correspond to an amplitude of $0.025 < \alpha_{\rm noise} < 0.12$. By sampling a large and competitive range of amplitudes for realizations of the per-pixel Gaussian noise, we allow the network to learn despite a variable range of noise.
It should be noted that, strictly speaking, our noise model allows for the observed signal $\hat{T}_{b,i}$ to be negative,
which is unphysical, but not an uncommon observable in radio astronomy as a result of readout noise \citep[see e.g.][]{wilson2011techniques}. However, given the range of $\alpha_{\rm noise}$
values taken above, this is a very rare occurrence.
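A minimal sketch of this hierarchical noise model follows; the array shapes and the choice of random generator are illustrative.
\begin{verbatim}
import numpy as np

def add_noise(T_b, mean_Tb_nu, rng=None):
    """T_b: (N_nu, N_pix) noiseless maps; mean_Tb_nu: (N_nu,) mean signal.
    Draws alpha_noise ~ logU(0.05, 0.5), then adds per-pixel Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    alpha = np.exp(rng.uniform(np.log(0.05), np.log(0.5)))  # alpha_noise
    sigma = alpha * mean_Tb_nu                              # per-frequency std
    eps = rng.normal(0.0, 1.0, size=T_b.shape) * sigma[:, None]
    return T_b + eps
\end{verbatim}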
\section{Foreground Removal Methods}\label{sec:methods}
Since foreground contaminants are several orders of magnitude brighter than
cosmological signal, the biggest challenge for upcoming observational data
analysis will be to separate the two signals \citep{Di_Matteo_2002, pca_oliveira2008}. In this section we review blind
data preprocessing for foreground subtraction and introduce our improved
method.
\subsection{Blind Foreground Subtraction}\label{sec:blind-techniques}
Fortunately, foregrounds are expected to be spectrally smooth in frequency \citep{Tegmark_2000, tegmark-liu12, planck2016},
while the cosmological signal is expected to vary with frequency according
to Equation \ref{eq:temp-from-density} \citep{Di_Matteo_2002, LiuShaw20}.
Current separation techniques therefore rely on the statistical
distinctions between 21cm spectral components.
As we outlined in the last section, the observed 21cm signal can be modeled
as the sum of three components: cosmological, noise, and foreground modes.
Formally, we write:
\begin{equation}\label{eq:sum_components}
\begin{split}
T_\text{obs}(\nu, \hat{\textbf{n}}) = T_\text{fg}(\nu, \hat{\textbf{n}}) + T_\text{cosmo}(\nu, \hat{\textbf{n}}) + T_\text{noise}(\nu)
\end{split}
\end{equation}
where each component is described with respect to a given frequency, $\nu$,
and line of sight direction, $\hat{\textbf{n}}$. In a system of discrete frequencies,
we can write a linear system for each line of sight over frequency:
\begin{equation}
\textbf{x} = \boldsymbol{\hat{A}} \cdot \textbf{s} + \boldsymbol{\mathcal{C}_0}
\end{equation}
where each $x_i = T_{\rm obs}(\nu_i, \hat{\textbf{n}})$, and $A_{ik} = f_k(\nu_i)$ and $s_k = S_k(\hat{\textbf{n}})$ are linearly separable basis functions and foreground sky components, respectively. The cosmological signal and thermal noise can then be packaged as $\boldsymbol{\mathcal{C}_0} = T_\text{cosmo}(\nu, \hat{\textbf{n}}) + T_\text{noise}(\nu)$. We see that in these terms, foreground subtraction becomes a residual learning problem, such that we aim to reconstruct $ \boldsymbol{\mathcal{C}_0} = \textbf{x} - \boldsymbol{\hat{A}} \cdot \textbf{s}$ as accurately as possible.
\subsection{PCA Residual Analysis}
\label{section:pca}
Principal Component Analysis (PCA) makes use of the statistical properties
of foreground signals by simultaneously fitting foreground sky maps, $s_k$,
and foreground functions, $A_{ik}$ \citep{pca_oliveira2008, Alonso_pca_2014}. Intuitively, PCA can be thought
of as fitting a multidimensional ellipsoid to a feature space, with
orthogonal axes (eigenvectors) pointing in the directions of the largest
variance. PCA is an orthogonal transformation which maps the data from a set
of basis vectors in which the data is correlated, to a basis in which the
data is linearly uncorrelated. Since foregrounds are expected to be smooth
and highly correlated in frequency \citep{Di_Matteo_2002, PritchardLoeb12,
pca_oliveira2008}, removing the components of largest eigenvalue (see Section
\ref{sec:polar} in this work and Figure 1 in \citet{Alonso_pca_2014}) is expected
to preserve the cosmological signal on large angular scales relevant to
cosmology. The method has been employed for foreground cleaning in both
simulated and real 21cm data
\citep{chang-pca-real, Switzer_2015_pca_real, Masui_2013}.
\begin{figure*}[tb]
\centering
\includegraphics[width=\textwidth]{figures/panel-comparison.png}
\caption{2D slices from input foreground (\textit{left}) and output cosmological (\textit{right}) voxels for the \texttt{deep21}{} network. Each full-sky simulation is comprised of 192 \texttt{HEALPix} pixels at 690 different frequencies. We first diagonalize each sky in frequency and remove the first 3 principal components. We then take 64 frequencies from the first bin in Table \ref{tab:freq-bins} to generate 3D voxels of dimension $64^3$ for \texttt{deep21}{} to process in batches of 16. Each epoch we process $80\times 192$ training and $10\times 192$ validation voxels.}
\label{fig:inputs}
\end{figure*}
For our analysis, we repeat
\citet{Alonso_pca_2014}'s PCA removal procedure here. First we bin our
observed maps (foreground and cosmological signal) into $N_{\nu} = 64$
frequency bands. We define a correlation matrix, $\textbf{C}$ in frequency
for all $N_{\rm pix}$ pixels in our simulation:
\begin{equation}\label{eq:pca-covariance}
C_{ij} = \frac{1}{N_{\rm pix}} \sum_{n=1}^{N_{\rm pix}}\ \frac{T(\nu_i, \hat{\textbf{n}}_n)\ T(\nu_j, \hat{\textbf{n}}_n)}{\sigma_i \sigma_j},
\end{equation}
where $T(\nu_i, \hat{\textbf{n}}_i)$ is the observed 21cm map signal and
$\sigma_i$ are root-mean-square fluctuations of $\boldsymbol{\mathcal{C}_0}$
in mK, in the $i$th frequency band. Each $\sigma_i$ is estimated iteratively
from the data \citep{Alonso_pca_2014, bishop_prob_pca}. The covariance
$\textbf{C}$ can then be diagonalized via eigenvalue decomposition:
\begin{equation}
\boldsymbol{\Lambda} = \textbf{U}\textbf{C} \textbf{U}^T = \textbf{diag}(\lambda_1, \dots, \lambda_{N_\nu}),
\end{equation}
where $\boldsymbol{\Lambda}$ is the diagonal eigenvalue matrix for
$\textbf{C}$, and $\textbf{U}$ is an orthogonal matrix comprised of
the corresponding eigenvectors. $\boldsymbol{\Lambda}$ is ordered by
decreasing eigenvalues (principal components). For every pixel $n$ we
compute the spectrum in frequency and project it onto the eigenbasis
$\textbf{U}$. We then remove the first $N_{\rm comp}$
components and generate a filtered spectrum from the remainder,
$\boldsymbol{\mathcal{C}_0}$, which is then assigned to the pixel as
the output.
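The procedure above amounts to a few lines of linear algebra. Here is a compact \texttt{numpy} sketch, simplified in that the per-band $\sigma_i$ are taken directly from the data rather than estimated iteratively.
\begin{verbatim}
import numpy as np

def pca_clean(T, n_comp=3):
    """T: (N_nu, N_pix) observed maps. Returns the residual after
    projecting out the n_comp leading frequency eigenmodes."""
    sigma = T.std(axis=1)                  # rms per frequency band
    X = T / sigma[:, None]
    C = X @ X.T / T.shape[1]               # frequency covariance matrix
    _, U = np.linalg.eigh(C)               # eigenvalues in ascending order
    fg = U[:, -n_comp:]                    # leading principal components
    X = X - fg @ (fg.T @ X)                # remove them from every pixel
    return X * sigma[:, None]
\end{verbatim}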
For our analyses, we preprocess observed radio sky maps and remove both
the first three and first six principal components from our foreground
maps. Henceforth, the notation PCA$-N_{\rm comp}$ corresponds to signal
for which the first $N_{\rm comp}$ principal components have been removed.
Despite its ability to recover cosmological statistics in some frequency
ranges, as demonstrated in \cite{Alonso_pca_2014, pca_oliveira2008}, it
is important to realize that blind subtraction techniques do not guarantee that HI overdensity anisotropy will be preserved at all scales, since they are not based on a signal likelihood. {PCA is best suited to removal of bright
foregrounds that occupy smoother, low-rank modes compared with the cosmological signal.} Unfortunately, the
cosmological signal
exhibits similar, smoothly varying structure on large scales, meaning PCA will remove
cosmological clustering information needed for different studies (e.g.
primordial non-Gaussianities \citep{gaussSekiguchi_2019}). Furthermore,
polarization leakage from galactic synchrotron emission can create choppy
foreground signal that is difficult for blind methods to distinguish from
the cosmological signal (see \cite{villaescusanavarro2014crosscorrelating}
and Section \ref{sec:polar}). This motivates finding an approach in which a separation
scheme is informed of the signal and foreground patterning.
\subsection{\texttt{deep21}{} Neural Network}
\label{sec:deep21}
In this section we present our novel method for cleaning foregrounds from
21cm maps. We adopt a convolutional neural network (CNN) architecture based on the UNet model of
\cite{unet_review}, which maps images to images via a symmetric
encoder-decoder convolution scheme. {This simulation-trained learning approach seeks to make a more informed separation of signals since the network has access to both foreground and cosmological axes in training.}
\subsubsection{Input Preprocessing}
\begin{figure*}[tp]
\includegraphics[width=\textwidth]{figures/unet-alt.pdf}
\caption{UNet Architecture and training scheme. We first remove the first three principal components from the simulated observed maps. We then split each map via the \texttt{HEALPix} pixelization scheme for the network to process. On the encoder side, input data undergo \texttt{w}=3 convolutions (\textit{black prism}) at each level and are subsequently downsampled (\textit{green prism}) \texttt{h}=6 times, halving the spatial dimensionality while simultaneously doubling the number of filters at each level. The data are then decoded via symmetric transposed convolutions (\textit{purple prism}). Skip connections
concatenate features at each depth, allowing the network to learn a specific correlation scale at a time.}
\label{fig:experiment}
\end{figure*}
Since PCA has been shown to effectively remove the majority of foregrounds \citep{Alonso_pca_2014, pca_oliveira2008}, we focus our analysis on recovering the physical, cosmological signal from the PCA residuals. We feed in PCA-3 residual maps, processing input maps using \texttt{scikit-learn} \citep{scikit-learn}, which implements \cite{bishop_prob_pca}'s probabilistic PCA algorithm. Removing the first three components centers input signal on zero and drastically reduces input amplitudes, but does not remove too much small-scale cosmological clustering, as shown in \cite{Alonso_pca_2014}. PCA preprocessing thus has the added benefit of scaling inputs appropriately for neural networks, which perform best for inputs in the range $[-1, 1]$ \citep{GoodBengCour16}.
Unlike previous analyses (see e.g. \cite{Alonso_pca_2014, paco-2016, Cunnington_2019, pca_oliveira2008}), we do not perform instrument-dependent Gaussian beam smoothing as a preprocessing step, since we want to test the ability of our model to recover signal in the limit of pixel-size resolution.
\subsubsection{Dataset Assembly}
To test our foreground separation methods, we generated a suite of 100 full-sky cosmological and foreground \texttt{CRIME} simulations over 690 frequencies, $350 < \nu < 1044$ MHz, each separated by $\Delta \nu \sim 1 \rm MHz$. We then add the foreground and cosmological maps together, and split the dataset into 80 training simulations, 10 validation, and 10 (hidden) test simulations.
The UNet architecture is ideally suited to image-like data. For this reason, we split each simulation into 192 equal-area windows via the \texttt{HEALPix} pixelization scheme \citep{healpix}. We then stack maps in frequency, drawing 64 frequencies evenly within the designated redshift bin, yielding 192 cubic voxels of dimension $(N_{\theta_x}, N_{\theta_y}, N_\nu) = (64, 64, 64)$ pixels, shown schematically in Figure \ref{fig:inputs}. According to the train-validation-test split, the network sees $80\times192=15,360$ training voxels each epoch, followed by $10\times192=1,920$ validation voxels. We set aside 10 hidden test simulations with which to assess our cleaning methods.
The UNet architecture maps input voxels to output voxels via a contracting path, in which input dimensionality is halved and feature channels doubled iteratively via stride-2 downsampling convolutional layers, depicted schematically by green prisms in Figure \ref{fig:experiment}. At each depth of the network, we perform \texttt{w} convolutions. Data are then upsampled via transposed convolutions through a symmetric upsampling path. Skip connections concatenate features from one side of the network to the other, allowing each depth of the network to focus on learning a specific scale of correlations at a time. These correlations are then summed as the network upsamples on the output side. We employ batch normalization and \texttt{ReLU} activation between convolutional layers, except for the last convolutional block on the output side. Once voxels have been processed by the network, we reconstruct the full-sky cosmological maps to compute power spectra and clustering statistics.
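For concreteness, a scaled-down Keras sketch of this encoder--decoder pattern is shown below. The framework choice, filter counts, and the reduced depth $\texttt{h}=3$ are illustrative assumptions, not the trained \texttt{deep21}{} configuration.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, w=2):
    # w convolutions, each followed by batch normalization and ReLU.
    for _ in range(w):
        x = layers.Conv3D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def unet3d(input_shape=(64, 64, 64, 1), base=8, h=3, w=2):
    inp = tf.keras.Input(shape=input_shape)
    x, skips = inp, []
    for level in range(h):                      # contracting path
        x = conv_block(x, base * 2**level, w)
        skips.append(x)                         # feature map for the skip
        x = layers.Conv3D(base * 2**(level + 1), 3,
                          strides=2, padding="same")(x)   # downsample
    x = conv_block(x, base * 2**h, w)           # bottleneck
    for level in reversed(range(h)):            # expanding path
        x = layers.Conv3DTranspose(base * 2**level, 3,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[level]])       # skip connection
        x = conv_block(x, base * 2**level, w)
    out = layers.Conv3D(1, 1, padding="same")(x)  # last block: no BN/ReLU
    return tf.keras.Model(inp, out)
\end{verbatim}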
\subsubsection{Loss Function}
To train the networks, we minimize a pixel-wise loss function of the form $\mathcal{L}(p,t) = \sum_i L(|p_i - t_i|)$ between the prediction, $p$, and the simulation target, $t$, where the sum runs over the pixels $i$ of each voxel and $L(x)$ is a per-pixel penalty. We found empirically that the standard Mean Square Error (MSE) loss was volatile early in training. For this reason we selected the Log-Cosh loss function
\begin{equation}\label{eq:lossfn}
\mathcal{L}(p,t) = \sum_i \log \cosh (p_i - t_i)\, .
\end{equation}
This function behaves much like the L1 norm for poor predictions (large values of $|p_i - t_i|$), making it robust to outliers, and approaches $(p_i - t_i)^2 / 2$ for small residuals. For network performance validation and test statistics we look at the Log-Cosh loss, as well as the standard MSE metric.
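A framework-agnostic \texttt{numpy} sketch of Equation \ref{eq:lossfn} follows, written with the numerically stable identity $\log \cosh x = x + \log(1 + e^{-2x}) - \log 2$; the stable form is our choice, made to avoid overflow in $\cosh$.
\begin{verbatim}
import numpy as np

def log_cosh_loss(p, t):
    """Sum of log(cosh(p_i - t_i)) over all pixels, via the identity
    log(cosh(x)) = x + log(1 + exp(-2x)) - log(2)."""
    x = p - t
    return np.sum(x + np.logaddexp(0.0, -2.0 * x) - np.log(2.0))
\end{verbatim}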
\subsection{Training Procedure}
In training our selected architecture we make use of a combination of the \texttt{AdamW} optimizer \citep{adamw} and a step-wise learning rate reduction conditioned on validation data. This choice of learning routine provides weight decay regularization, as well as a fine-tuning of network optima.
Every training epoch the network sees 80 simulations of foregrounds added to cosmological signal, subject to a new observational noise realization for a sampled $\alpha_{\rm noise}$. The observed maps are first preprocessed by the PCA-3 subtraction and then pixelized into \texttt{HEALPix} voxels described above. The PCA-3 residuals are then processed by the UNet network.
We split our frequency range to test our foreground cleaning in the context of analyses such as \cite{paco-2016}. Redshift shells and corresponding co-moving distances are shown in Table \ref{tab:freq-bins}. For our analysis we focus on evaluating network performances in the co-moving shell of lowest frequency, since these high-redshift regions are interesting for both Baryonic Acoustic Oscillation (BAO) measurement, as well as post-EoR structure analysis \citep{PritchardLoeb12}. Furthermore, these regions have consistently proven difficult for blind foreground techniques to clean, especially in angular power spectrum recovery \citep{Alonso_sim_2014, paco-2016}.
\subsection{Hyperparameter Tuning}
In choosing our network architecture, we first compared architectures with 2D and 3D convolutional kernels at different network depths. We anticipated that 3D convolutional kernels would perform better than 2D convolutions, since inputs in this scenario are treated as full 3D volumes, capturing frequency patterning in $\nu$ as well as angular patterns in $\theta_x$ and $\theta_y$.
Other important hyperparameters we considered were UNet depth, \texttt{h}, or the number of down-convolutions (denoted by green prisms in Figure \ref{fig:experiment}), and the number of convolutions at a given dimension, \texttt{w} (convolutional block width). To test hyperparameters, we developed a dynamic UNet model compatible with the \texttt{HyperOpt} Python library \citep{hyperopt}. Our architecture draws hyperparameters from proposal distributions and trains the resulting architecture on a smaller set of training data. We selected the hyperparameter combination that yielded the lowest validation loss after testing 550 trial architectures. The priors from which we drew our hyperparameters are listed in Appendix \ref{sec:appendix}. This search yielded the interesting result that deeper, wider UNets outperformed shallower networks (see \cite{network-depth-mhaskar2016deep} for a theoretical investigation).
We found that in particular, architectures with network width $2 < \texttt{w} < 4$ and height satisfying
\begin{equation}\label{eq:height}
\texttt{h} = \log_{\rm \texttt{stride}} \texttt{n}_{\rm filters}; \ \ \ \rm \texttt{stride} = 2
\end{equation} consistently yielded the lowest losses. Too many convolutions per block frequently obscured the sharp $T_b$ distribution (see Figure \ref{fig:temp_comp}), and Equation \ref{eq:height} guarantees an architecture that compresses inputs down to dimension $1^3$ for stride-2 down-convolutions, meaning the network learned correlations down to pixel-sized scales. Our optimized architecture, henceforth \texttt{deep21}, is displayed graphically in Figure \ref{fig:experiment}.
Neural network outputs are generally not probabilistically interpretable \citep{charnock2020bayesian}, and no tractable image-producing Bayesian neural networks currently exist,
so quantifying the variability of foreground cleaning on a given dataset is not possible with a single UNet.
Motivated by the methodology of deep ensembles \citep{deep_ens, deep_ens_loss_land} we train an ensemble of $M=9$ networks with independently Glorot-Uniform-initialized weights \citep{Glorot10understandingthe} for 300 epochs in parallel.
The \texttt{HEALPix} data inputs are also subject to the stochastic noise model as before, as well as random sky-sized rotations on the sphere, such that the networks learn to denoise pixels independently of orientation.
To gauge the uncertainty of the cleaning method, each ensemble member then performs foreground separation on ten test simulations, subject to competitive observational noise with a fixed $\alpha_{\rm noise} = 0.25$, falling in the upper range of our amplitude prior and roughly twice the maximum noise amplitude utilized by \citet{Alonso_pca_2014}. This practice allows us to estimate the epistemic uncertainty on the summary statistics obtained from the predicted full-sky maps.
\section{Results and Analysis}\label{sec:results}
\begin{figure*}[htpb]
\centering
\includegraphics[width=\textwidth]{figures/cutouts/alt-cutout2x2.png}
\caption{
{2D slices at $\nu=392\rm\ MHz$ comparing raw foreground signal (\textit{top left}), PCA-3 UNet inputs
(\textit{top right}), to the UNet ensemble prediction (\textit{lower left}) and target cosmological signal
(\textit{lower right}). \texttt{deep21}{} is able to correctly reconstruct the signal from minimal PCA-3 subtracted
inputs. }
}
\label{fig:slice-comp}
\end{figure*}
\subsection{Visual Inspection}
\label{section:visual}
Most current and upcoming cosmological experiments are aimed at reconstructing relevant clustering statistics from brightness temperature maps. Therefore it is prudent to ensure that our network outputs clean maps that capture temperature distributions at given frequencies. An initial check of \texttt{deep21}'s performance is a qualitative one: Figure \ref{fig:slice-comp} shows each cleaning method's performance on a \texttt{HEALPix} pixel slice drawn from the test set. The top two panels display the observed input signal and the initial PCA-3 preprocessed network inputs. The bottom row compares the network-cleaned panel with the target cosmological signal. \texttt{deep21}{} is able to recover intricate cosmological signal, almost indistinguishable from the simulated targets, from heavily corrupted inputs.
\begin{figure*}[htpb]
\centering
\includegraphics[width=\textwidth]{figures/cutouts/alt-residual3-comp.png}
\caption{Temperature map residuals for PCA-6 (\textit{middle}) and \texttt{deep21}{} (\textit{right}) compared to the scaled intensity of the simulated cosmological signal (\textit{left}) for the same 2D slices as Figure \ref{fig:slice-comp}. The UNet ensemble recovers a much more accurate tomography than the PCA method, the latter failing to capture details around high-intensity regions and voids. Removal of the first moment in the PCA method results in a significant deviation ($\sim 200 \%$) from the true signal, while \texttt{deep21}{} predictions yield small, positive residuals.
}
\label{fig:res-slice-comp}
\end{figure*}
In Figure \ref{fig:res-slice-comp}, we compare the per-pixel scaled cosmological signal (greyscale panel) to the relative residual error for cosmological brightness temperature $T_b$,
\begin{equation}
\delta_{\text{err}, i} = \frac{p_{i} - t_i }{ {t}_{i}}
\end{equation} where $p_i$ and $t_i$ are the pixel-wise predicted and target signals, respectively. Here we compare the best-case (PCA-6) blind subtraction to \texttt{deep21}{}. We note that PCA predictions over-subtract the signal, particularly in low-density regions. \texttt{deep21}{}'s residuals are all well within order unity of the target, with some deviations above zero.
\subsection{Clustering Statistics}
\label{sec:clustering-stats}
The most important cosmological parameter constraints from HI intensity mapping will most likely come from power spectra of the 21cm brightness temperature, since two-point correlation functions contain the vast majority of information regarding the underlying cosmology on large, linear scales. For this study, we consider angular and radial power spectra separately, capturing clustering patterning on the sky and along each line of sight, respectively.
For a fixed frequency and assuming a full-sky survey, the angular power spectrum of the brightness fluctuations $\Delta T_b$ is computed first by calculating the spherical harmonic components:
\begin{equation}
a_{\ell m}(\nu) = \int d^2 \hat{\textbf{n}}\, \Delta T_b(\nu, \hat{\textbf{n}})\, Y^*_{\ell m}(\hat{\textbf{n}}),
\end{equation}
where $Y_{\ell m}(\hat{\textbf{n}})$ are the spherical harmonic basis functions. We can then estimate the power spectrum by averaging over the moduli of the harmonics:
\begin{equation}\label{eq:Cl-def}
\widetilde{C}_\ell = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} |a_{\ell m}|^2
\end{equation}
where small $\ell$ correspond to the largest scales. We calculated the angular power spectra for our maps using the \texttt{healpy} Python library \citep{healpy}.
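The estimator in Equation \ref{eq:Cl-def} is what \texttt{healpy.anafast} returns; a small sketch with stand-in maps (the random maps are purely illustrative) follows.
\begin{verbatim}
import healpy as hp
import numpy as np

nside = 256
rng = np.random.default_rng(0)
map_true = rng.normal(size=hp.nside2npix(nside))  # stand-in Delta T_b map
map_pred = map_true + 0.1 * rng.normal(size=map_true.size)

cl_pred  = hp.anafast(map_pred)            # auto-spectrum of the cleaned map
cl_cross = hp.anafast(map_pred, map_true)  # cross-spectrum (used below)
ell = np.arange(cl_pred.size)
\end{verbatim}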
To capture radial clustering in an HI survey independent of redshift effects, one must make two assumptions, namely 1) each line-of-sight (\texttt{HEALPix} pixel) window satisfies the flat-sky assumption \citep{Alonso_sim_2014}, and 2) that the redshift bin under consideration is narrow enough that no significant cosmological expansion occurs between the edges of the bin. The resulting power spectrum then describes the clustering distribution independently of cosmological expansion effects. Given these two assumptions, we can average over all possible radial lines of sight, $i = 1, \dots, N_\theta$, to obtain the radial power spectrum:
\begin{equation}\label{eq:radial-def}
P_\parallel({k_{\parallel}}) = \frac{\Delta \chi}{2 \pi N_\theta} \sum^{N_\theta}_{i=1} | \widetilde{\Delta T_b}(\hat{\textbf{n}}, {k_{\parallel}}) |^2
\end{equation}
where the Fourier coefficients $\widetilde{\Delta T_b}$ are estimated using the Fast Fourier Transform (FFT) over each line of sight, and $\Delta \chi = \chi(z_\text{max}) - \chi(z_\text{min})$ is the comoving width of the given redshift bin. Given a constant frequency interval $\delta \nu$ separating the spherical surfaces, the corresponding fundamental interval in the conjugate space is $\delta k_\nu = 2\pi / \Delta \nu$, where $\Delta\nu$ is the total bandwidth of the bin. The radial coordinate, ${k_{\parallel}}$, can then be defined as \citep{Alonso_sim_2014}:
\begin{equation}
{k_{\parallel}} = \frac{\nu_{21} H(z_{\rm eff})}{(1 + z_{\rm eff})^2} k_\nu,
\end{equation}
where $z_{\rm eff}$ is the effective redshift for the comoving volume under consideration.
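A \texttt{numpy} sketch of the radial estimator in Equation \ref{eq:radial-def} follows; the input shapes and the handling of the FFT normalization convention are illustrative.
\begin{verbatim}
import numpy as np

def radial_power(dT, delta_chi):
    """dT: (N_theta, N_nu), one row per line of sight within a voxel.
    Returns P_par(k_par) up to the chosen FFT normalization."""
    n_theta = dT.shape[0]
    ft = np.fft.rfft(dT, axis=1)       # Fourier coefficients per sight line
    return delta_chi / (2.0 * np.pi * n_theta) * (np.abs(ft)**2).sum(axis=0)
\end{verbatim}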
\begin{table}[t!]
\centering
\adjustbox{width=0.55\textwidth}{%
\begin{tabular}{c c c c c}
\toprule
{$\nu$ (MHz)} & {z} & ${\langle z \rangle}$ & Vol. $(h^{-1} \text{Gpc})^3$ & $N_\text{side}$ \\
\hline
$[886-1044]$ & $[0.36-0.60]$ &0.47 & 12 & 256 \\
$[667-886]$ & $[0.60-1.12]$ & 0.87 & 50 & 256 \\
$[476-667]$ & $[1.12-1.88]$ & 1.50 & 108 & 256 \\
$[350-491]$ & $[1.88-3.05]$ & 2.47 & 187 & 256 \\
\bottomrule
\end{tabular}
}
\caption{Comparison of the four redshift co-moving shells used in the UNet assessments. Co-moving shells were chosen by splicing the simulation into bins with equal numbers of frequency channels. Our analysis focuses on the highest-redshift bin because this is where blind foreground techniques such as PCA reduction have been shown to perform the worst in the literature.}
\label{tab:freq-bins}
\end{table}
To capture uncertainty over the space of the \texttt{deep21}{} ensemble, we compute a weighted average, $\Bar{Z}_w$, and standard deviation, $\sigma_w(Z)$, of each network's independent estimate of a given statistic, $Z$. We employ proper scoring weights by computing the inverse globally-averaged MSE for each network's prediction over test data: $w_{\rm m} = \frac{1}{\langle {\rm MSE}\rangle}$ for $m=1,\dots 9$ independent networks. Here we do not explicitly assume a Gaussian form, so $\sigma$ does not correspond to the 68\% inclusion interval. We note that to apply this procedure in a real observational setting, scores for each ensemble member would need to be computed for a set of validation simulations, and then used to compute statistics obtained from the cleaned observed sky.
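A short sketch of this inverse-MSE weighting over the ensemble members follows; the array conventions are illustrative.
\begin{verbatim}
import numpy as np

def ensemble_stats(preds, mses):
    """preds: (M, ...) stack of each network's estimate of a statistic Z;
    mses: (M,) globally averaged test MSE per network."""
    w = 1.0 / np.asarray(mses)              # w_m = 1 / <MSE>_m
    w = w / w.sum()
    zbar = np.tensordot(w, preds, axes=1)   # weighted average of Z
    var = np.tensordot(w, (preds - zbar)**2, axes=1)
    return zbar, np.sqrt(var)               # weighted mean and sigma_w
\end{verbatim}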
\begin{figure*}[htp]
\centering
\includegraphics[width=\textwidth]{figures/power-spec/res_spec.png}
\caption{Comparison of angular (\textit{left}) and radial (\textit{right}) residual map power spectra, as a fraction of the target cosmological signal, for the \texttt{deep21} UNet ensemble (\textit{green}), blind PCA reduction (\textit{purple}), and the noise realization (\textit{black}) with $\alpha_{\rm noise} = 0.25$ over a single full-sky test simulation. Confidence intervals corresponding to $\pm 2 \sigma_w$ over the space of ensemble parameters are estimated in shaded green. We display \texttt{deep21}{}'s angular resolution via the black dashed vertical line.
The angular power spectrum shown is computed for a single frequency, $\nu = 357\ \rm MHz$, while the radial power spectrum is computed for the lowest frequency bin, with mean redshift $\langle z \rangle = 2.5$. The network successfully learns to marginalize out additive observational noise in the radial direction at smaller scales ($k_\parallel > 0.015$).}
\label{fig:res-power-spec}
\end{figure*}
We consider the power spectra calculated for the residual maps for each cleaning method, defined for $P \in \{C_\ell, P_\parallel \}$ as:
\begin{equation}
\rho_{\rm res} = \frac{P_{\rm res}}{P_{\rm cosmo}} = \frac{P(p - t)}{P(t)}
\end{equation}
where $t$ is the target cosmological signal and $p \in \{ T_{\rm PCA}, T_{\rm \texttt{deep21}{}}\}$ is the given cleaning method's predicted map. We additionally consider $\rho_{\rm res}$ computed for the noise map, $P(T_{\rm noise})$, generated for the test simulation.
This statistic quantifies each cleaning method's residuals as a fraction of the true cosmological signal in both the angular and radial directions. The noise residual demonstrates to what degree observational error obscures the structure estimate.
We compare \texttt{deep21}{} to the PCA-6 residual and to the noise realization generated with $\alpha_{\rm noise} = 0.25$ in Figure \ref{fig:res-power-spec}. \texttt{deep21}{} (green) outperforms PCA (purple) in both the angular and radial directions, and successfully fits the cosmological signal at small scales despite the high observational noise contribution (black). We interpret this result as a successful marginalization of the observational noise: through training, \texttt{deep21}{} has learned to distinguish cosmological clustering from noise fluctuations at small scales.
By contrast, the PCA-6 residual asymptotically approaches the noise boundary in both plots, indicating a limit to the blind foreground cleaning at small scales.
\texttt{deep21}{} also substantially reduces the loss of signal at large radial scales incurred by the PCA method. The larger PCA-6 residual at small $k_\parallel$ indicates large-scale information lost to the foreground subtraction as demonstrated in \cite{Alonso_pca_2014}.
\subsection{Intensity Distributions}
\begin{figure*}[htb]
\centering
\includegraphics[width=\textwidth]{figures/temp_comparison.png}
\caption{Comparison of the distribution of pixel temperatures from the PCA-6 (\textit{purple}) and \texttt{deep21}{} (\textit{green}) cleaned maps to those of the cosmological simulations (\textit{black}) at several different frequencies. {\texttt{deep21}{} captures the target asymmetric temperature PDF much more effectively than the PCA.}}
\label{fig:temp_comp}
\end{figure*}
We also compare how well each cleaning method captures the distribution of the cosmological signal at each frequency. Figure \ref{fig:temp_comp} compares cosmological temperature distributions at several frequencies throughout a single test simulation. We see that the PCA method reproduces a more symmetric temperature distribution that is zero-centered. This result is anticipated by the definition of the method (which removes the mean of the distribution to diagonalize the signal in frequency). {It is clear from comparison to the true signal that \texttt{deep21}{} captures the asymmetric distribution much more accurately than the PCA.}
\subsection{Power Spectrum Recovery}\label{section:clustering}
We additionally report clustering statistics based on the analysis by \cite{Alonso_pca_2014}. We introduce the power spectrum correlation statistic
{\begin{equation}\label{eq:epsilon}
r(k) =
\frac{P_{\rm p \times t}(k)}{\sqrt{P_{\rm p}(k) P_{\rm t}(k)}}
\end{equation}
where $P_{\rm p}$ is the predicted auto-power spectrum from the foreground cleaning method under consideration, $P_{\rm t}$ is the auto-power spectrum generated by the target cosmological map, and $P_{\rm p \times t}$ is the cross-spectrum between the predicted and target maps. This statistic quantifies discrepancies in phases introduced by each cleaning method. We also define the transfer function for coordinate $k \in \{ \ell, {k_{\parallel}} \}$ }
\begin{equation}\label{eq:transfer-fn}
{\rm T}(k) = \sqrt{\frac{P_{p}(k)}{P_{\rm t}(k)}}
\end{equation}
which quantifies discrepancies in amplitude as a function of scale $k$ between cleaned and target maps. For a perfect foreground cleaning, both $r(k)$ and $T(k)$ approach 1.
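Given auto- and cross-spectra, computed for instance as in the \texttt{healpy} sketch above, both statistics take one line each; a minimal sketch:
\begin{verbatim}
import numpy as np

def r_and_T(P_pred, P_true, P_cross):
    r = P_cross / np.sqrt(P_pred * P_true)  # phase agreement
    T = np.sqrt(P_pred / P_true)            # amplitude transfer function
    return r, T
\end{verbatim}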
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/power-spec/power-spec-r-statistic.png}
\caption{{Residual variance $1-r^2(k)$ for both angular (\textit{left}) and radial (\textit{right}) power spectra. The angular spectrum is computed at $\nu = 357\ \rm MHz$, with the angular resolution of a \texttt{deep21}{} input voxel shown via the black dashed vertical line. \texttt{deep21}{} (\textit{green}) outperforms the PCA-6 subtraction on all angular scales. In the radial direction, \texttt{deep21}{} offers a significant improvement in preserving phase information on intermediate scales.}}
\label{fig:power-spec-res}
\end{figure*}
{The statistic $1 - r^2(k)$ describes the fraction of variance in the cleaned maps that is not accounted for by the target map. We compare this variance as a function of scale for angular and radial power spectra in Figure \ref{fig:power-spec-res}. \texttt{deep21}{} (green) outperforms the PCA-6 consistently at both large and small angular and radial scales, capturing the correct phase to $\sim 80\%$ accuracy within the comoving bin. This statistic demonstrates the results displayed in Figure \ref{fig:temp_comp} as a function of scale: network-based cleaning captures the non-Gaussian cosmological signal with much higher accuracy, particularly in the angular direction. This is because, unlike the PCA, \texttt{deep21}{}'s convolutional filters have access to neighboring pixels in the spatial axes, meaning foreground separation can be learned simultaneously in $\{\theta_x, \theta_y, \nu \}$}.
The consistency of the deep learning approach should be emphasized here: the \texttt{deep21}{} method is largely scale-independent in foreground removal and signal reconstruction. Moreover, these results do not rely on statistical marginalization over many data realizations like the analyses done in \cite{Alonso_pca_2014} and \cite{paco-2016}, meaning fewer observations would need to be made in a realistic setting in order to achieve a consistent, successful foreground removal with estimated uncertainties. Furthermore, with more computational power, larger voxels could be processed in the future, which we expect to improve large-scale recovery by the network.
\subsection{Noise Performance and Instrument Effects}
\texttt{deep21}{}'s training procedure includes variable levels of observational noise to improve test performance and incorporate uncertainty regarding noise levels in upcoming intensity mapping experiments. To test whether or not the ensemble learned to marginalize out this effect, we tasked the trained network with cleaning the same foreground simulation with a variable noise amplitude, $\alpha_{\rm noise}$. We plot the corresponding MSE metric in Figure \ref{fig:noise-test}. The network's test MSE remains fairly constant until we exceed the noise threshold, $\max\lbrace\alpha_{\rm noise}\rbrace = 0.5$, encountered in training. This shows that \texttt{deep21}{} indeed captures the statistical properties of the observational noise it encountered during training, and subsequently marginalizes over it. In contrast, the blind PCA subtraction, whose MSE is dominated by the removal of the first moment of the signal, does not increase in MSE until $\alpha_{\rm noise} = 1$, i.e. a noise level of order unity relative to the mean cosmological signal at a given frequency (see Equation \ref{eq:noise-model}). {This result shows that a network ensemble can be trained to be robust to observational noise and more complicated instrument effects, so long as a statistical model is specified and varied enough during training, a significant advantage over blind approaches with no access to this information.
}
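As an illustration of this training-time augmentation, the sketch below adds white noise with a randomly drawn amplitude $\alpha_{\rm noise} \in [0, 0.5]$, scaled per frequency slice. This is a simplified stand-in with illustrative names; Equation \ref{eq:noise-model} gives the exact noise model used in this work.
\begin{verbatim}
# Sketch: variable-amplitude noise injection during training.
# Simplified stand-in for the paper's noise model (Eq. eq:noise-model).
import numpy as np

rng = np.random.default_rng(0)

def add_observational_noise(voxel, alpha_max=0.5):
    """Add white noise with amplitude alpha ~ U(0, alpha_max).

    The per-frequency RMS is tied to the mean signal in each
    frequency slice (axis 0 assumed to be frequency), so the
    ensemble sees many noise levels and learns to marginalize
    over them.
    """
    alpha = rng.uniform(0.0, alpha_max)
    mean_per_nu = np.abs(voxel).mean(axis=(1, 2), keepdims=True)
    return voxel + alpha * mean_per_nu * rng.standard_normal(voxel.shape)
\end{verbatim}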
\begin{figure}
\centering
\includegraphics[]{figures/noise-test.png}
\caption{Noise performance testing for \texttt{deep21}{} and PCA-6 cleaning routines. 20 maps were generated with increasing noise amplitude $\alpha_{\rm noise}$. As anticipated, \texttt{deep21}{} performs consistently for $\alpha_{\rm noise} < \max\lbrace\alpha_{\rm noise}\rbrace = 0.5$, since this is the highest noise level the network encountered during training according to Equation \ref{eq:noise-model}. The network's error quickly increases in variance and magnitude beyond this threshold.
}
\label{fig:noise-test}
\end{figure}
\subsection{Generalization to new foreground parameters}
\label{sec:generalization}
\begin{figure*}[htpb]
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{figures/generalization/beta-comp-new.png}
\caption{Varying galactic synchrotron $\ell$ dependence.}
\label{fig:beta}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{figures/generalization/alpha-comp-new.png}
\caption{Varying galactic synchrotron $\nu$ dependence.}
\label{fig:alpha}
\end{subfigure}
\caption{Generalization testing for \texttt{deep21}{}. We compare two-point statistics on test data by varying foreground parameters, keeping training data and trained network ensembles constant. We vary the galactic synchrotron $\beta$ and $\alpha$ in Figures \ref{fig:beta} and \ref{fig:alpha}, respectively, according to Equation \ref{eq:fg-cl-model}. {We display power spectra (\textit{top}), the transfer function, i.e. the square root of the ratio of the predicted to the true spectrum (\textit{middle}), and the residual variance fraction $1-r^2(k)$ (\textit{bottom}) for both angular (at $\nu=357\ \rm MHz$) and radial statistics.}
}
\label{fig:generalization}
\end{figure*}
Despite training \texttt{deep21}{} on many foreground and cosmological realizations, we assumed the same fiducial model when generating all training data. For a foreground cleaning experiment with real data, one would ideally train \texttt{deep21}{} on a range of foreground and cosmological models. As a final test of our trained UNet ensemble, we task the network, trained on fiducial simulation parameters, with cleaning observed signal generated with different input parameters.
To do this, we chose to alter the galactic synchrotron foregrounds via the \texttt{CRIME} simulation package, since these represent the largest of the foreground contaminants. Since the PCA preprocessing removes synchrotron amplitude information, we elected to alter the synchrotron correlation structure according to Equation \ref{eq:fg-cl-model}, namely its 1) angular scale dependence and 2) frequency dependence, as well as 3) switching on polarization effects. We display recovered power spectra for the correlation structure analysis in Figure \ref{fig:generalization}.
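For reference, the galactic synchrotron correlation structure varied here follows a Santos--Cooray--Knox-type parameterization of the kind implemented in \texttt{CRIME}. The sketch below is illustrative only: the constants shown ($A$, $\ell_{\rm ref}$, $\nu_{\rm ref}$, $\xi$) are assumptions for the example, and the authoritative form is Equation \ref{eq:fg-cl-model}.
\begin{verbatim}
# Sketch: SCK-style foreground angular power spectrum model.
# Illustrative constants; the exact form is Eq. eq:fg-cl-model.
import numpy as np

def foreground_cl(ell, nu1, nu2, A=700.0, beta=3.3, alpha=2.8,
                  xi=4.0, ell_ref=1000.0, nu_ref=130.0):
    """C_ell(nu1, nu2) for a diffuse foreground component (mK^2).

    ell: multipole(s), assumed > 0. beta sets the angular slope,
    alpha the frequency slope, and xi the frequency-frequency
    correlation length. Fiducial galactic synchrotron values in
    the text: beta_o = 3.3, alpha_o = 2.8.
    """
    angular = (ell_ref / ell) ** beta
    frequency = (nu_ref**2 / (nu1 * nu2)) ** alpha
    decorr = np.exp(-np.log(nu1 / nu2) ** 2 / (2.0 * xi**2))
    return A * angular * frequency * decorr
\end{verbatim}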
\subsubsection*{Varying galactic synchrotron angular correlation dependence} Having trained \texttt{deep21}{} on foregrounds with fiducial galactic synchrotron $\beta_o = 3.3$, we vary the synchrotron $\ell$ dependence according to Equation \ref{eq:fg-cl-model} by $\pm 30\%$, to $\beta = 1.3\,\beta_o \approx 4.3$ and $\beta = 0.7\,\beta_o \approx 2.3$, generating a new full-sky simulation of 192 \texttt{HEALPix} voxels for each case. Increasing $\beta$ yields a smaller foreground correlation in $\ell$, reflected in \texttt{deep21}{}'s over-estimate of the power spectrum in Figure \ref{fig:beta}. Conversely, the network under-estimates the power spectrum for the lower $\beta = 2.3$. This indicates that the network has indeed captured the fiducial foreground model in the training data.
\subsubsection*{Varying galactic synchrotron frequency correlation dependence} We repeated the same analysis for the $\alpha$ parameter, varying the galactic synchrotron $C_\ell$ model's dependence on frequency by $\pm 35\%$. We recovered a similar trend in \texttt{deep21}{} performance in Figure \ref{fig:alpha}. Here, decreasing $\alpha$ results in a smaller correlation amplitude as a function of frequency: \texttt{deep21}{} over-estimates the radial power spectrum, since it was trained on the fiducial $\alpha_o = 2.8$. Increasing $\alpha$ to $1.35\,\alpha_o$ produces little difference in the ensemble estimate, indicating that the model may generalize well in this regime.
In contrast, PCA-6 cleaning (dashed orange) yields almost identical results for changes in the correlation parameters, indicating that the blind method is robust to changes in correlation structure. We display foreground cleaning results at different scales for our test and generalization cases in Table \ref{tab:results}.
\subsubsection*{Polarized foregrounds}\label{sec:polar}
We also tested simulations contaminated by a galactic synchrotron polarization leakage of 1\%. Polarized foregrounds due to the Milky Way's magnetic fields could have a catastrophic effect on signal recovered using blind techniques, since poorly understood polarization leakage could cause foregrounds to interfere with cosmological signal modes \citep{Alonso_sim_2014, LiuShaw20}. To motivate a follow-up study, we enabled galactic synchrotron polarization within the \texttt{CRIME} simulation package, varying the polarized correlation length $\xi_{\rm polar}$, and cleaned the resulting observed maps with our technique. We recovered substantial differences in performance between PCA-6 and \texttt{deep21}{}, as shown in Figure \ref{fig:polar-comp}. Reducing the correlation length $\xi_{\rm polar}$ makes leaked galactic synchrotron foreground emission behave similarly to the cosmological signal in frequency (see the left-hand plot for $\xi_{\rm polar} = 0.01$, and Figure 8 of \cite{Alonso_sim_2014}). The PCA subtraction fails because the synchrotron foregrounds can no longer be smoothly resolved from the choppy cosmological signal. This is shown formally in Figure 1 of \cite{Alonso_pca_2014}, where decreasing the polarization correlation length spreads foreground contamination across the diagonalized PCA eigenvalues, making it more difficult to know when foregrounds have been successfully removed. As expected, \texttt{deep21}{} does not generalize well to polarized foregrounds, since the PCA-3 preprocessing is unable to remove the 1\% synchrotron leakage. We do, however, note that \texttt{deep21}{} produces a slight improvement in radial power spectrum recovery (Figure \ref{fig:polar-comp}, right-hand side).
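For readers unfamiliar with the blind baseline discussed throughout, the following is a minimal sketch of PCA-$N$ subtraction along the frequency axis; the function and array names are illustrative, and the production pipeline follows the preprocessing described earlier in the paper.
\begin{verbatim}
# Sketch: blind PCA-N foreground subtraction along frequency.
import numpy as np

def pca_clean(cube, n_components=6):
    """Remove the n_components leading frequency modes.

    cube: array of shape (N_nu, N_pix). Smooth foregrounds
    dominate the leading eigenvectors of the nu-nu covariance;
    projecting them out (along with the mean, i.e. the first
    moment) leaves an estimate of the cosmological signal.
    """
    x = cube - cube.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    _, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    modes = eigvecs[:, -n_components:]    # dominant modes
    return x - modes @ (modes.T @ x)
\end{verbatim}
When polarization leakage shortens the frequency correlation length, foreground power spreads across many eigenvectors, which is why even $N = 21$ components fail to isolate the signal here.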
\begin{figure}[htpb]
\centering
\includegraphics[width=\textwidth]{figures/generalization/polar-comp.png}
\caption{Foreground cleaning errors due to 1\% galactic synchrotron polarization leakage. (\textit{Left}) \texttt{deep21}{} temperature recovery compared with several PCA subtractions for $\xi_{\rm polar} = 0.01$. Removing as many as 21 PCA components does not resolve the cosmological signal in the presence of polarization leakage. (\textit{Right}) Transfer function for \texttt{deep21}{} recovery of the radial power spectrum for various values of $\xi_{\rm polar}$. Here the PCA-6 subtraction is shown for the given simulation as a dashed line, and shaded contours show $\pm 2\sigma_w$ \texttt{deep21}{} estimates. Shrinking the polarization correlation length makes leaked galactic synchrotron foregrounds behave similarly to the cosmological signal in frequency, rendering both blind methods and \texttt{deep21}{} ineffective.}
\label{fig:polar-comp}
\end{figure}
For a more complete understanding of polarized foregrounds and \texttt{deep21}{}'s effectiveness in removing them, a follow-up study is warranted. Existing codes such as \texttt{Hammurabi} \citep{hammurabi-gal-fg} make use of detailed three-dimensional Milky Way simulations to model the magnetic fields responsible for polarization leakage. Training \texttt{deep21}{} on these detailed foregrounds is a necessary next step in preparing for real data.
\begin{table}[htb]
\centering
\adjustbox{width=\textwidth, center}{
\begin{tabular}{cccccccccc}
\toprule
{} & MSE & {${\rm T}(\ell)$} & {${\rm T}(k_\parallel)$} & {$\rho_{\rm res}(\ell)$} & {$\rho_{\rm res}(k_\parallel)$} & {${\rm T}(\ell)$} & {${\rm T}(k_\parallel)$} & {$\rho_{\rm res}(\ell)$} & {$\rho_{\rm res}(k_\parallel)$} \\
{} & (global) & $\ell=50$ & $k_\parallel = 0.02$ & $\ell=50$ & $k_\parallel = 0.02$ & $\ell=550$ & $k_\parallel = 0.15$ & $\ell=550$ & $k_\parallel = 0.15$ \\
\midrule
\textbf{test phase} & & & & & & & & & \\
$\texttt{deep21}$ & 0.877$\pm$0.156 & 0.899$\pm$0.186 & 0.728$\pm$0.087 & 3.857$\pm$3.178 & 1.099$\pm$0.307 & 0.848$\pm$0.085 & 0.868$\pm$0.088 & 1.099$\pm$0.13 & 0.648$\pm$0.115 \\
PCA-6 & 17.15 & 0.611 & 1.095 & 8.2 & 1.184 & 0.814 & 1.195 & 2.417 & 1.005 \\
\midrule
$\boldsymbol{\beta = 2.3}$ & & & & & & & & & \\
$\texttt{deep21}$ & 0.872$\pm$0.152 & 0.895$\pm$0.212 & 0.728$\pm$0.088 & 4.545$\pm$4.414 & 1.076$\pm$0.3 & 0.839$\pm$0.086 & 0.867$\pm$0.088 & 1.063$\pm$0.135 & 0.649$\pm$0.115 \\
PCA-6 & 17.15 & 0.542 & 1.093 & 11.49 & 1.183 & 0.773 & 1.194 & 2.397 & 1.005 \\
\midrule
$\boldsymbol{\beta = 4.3}$ & & & & & & & & & \\
$\texttt{deep21}$ & 1.044$\pm$0.289 & 0.977$\pm$0.19 & 0.738$\pm$0.081 & 5.83$\pm$5.094 & 1.297$\pm$0.46 & 0.853$\pm$0.114 & 0.874$\pm$0.103 & 1.344$\pm$0.291 & 0.666$\pm$0.132 \\
PCA-6 & 17.16 & 0.607 & 1.092 & 9.025 & 1.21 & 0.763 & 1.195 & 2.805 & 1.006 \\
\midrule
$\boldsymbol{\alpha = 1.3}$ & & & & & & & & & \\
$\texttt{deep21}$ & 1.764$\pm$1.056 & 1.57$\pm$0.505 & 0.844$\pm$0.081 & 23.761$\pm$23.8 & 1.909$\pm$0.785 & 0.867$\pm$0.111 & 0.87$\pm$0.114 & 1.588$\pm$0.321 & 0.691$\pm$0.143 \\
PCA-6 & 17.15 & 0.573 & 1.058 & 11.35 & 1.298 & 0.781 & 1.195 & 2.707 & 1.005 \\
\midrule
$\boldsymbol{\alpha = 3.3}$ & & & & & & & & & \\
$\texttt{deep21}$ & 0.871$\pm$0.152 & 0.907$\pm$0.229 & 0.727$\pm$0.088 & 3.135$\pm$3.146 & 1.073$\pm$0.3 & 0.823$\pm$0.083 & 0.867$\pm$0.088 & 1.164$\pm$0.151 & 0.649$\pm$0.114 \\
PCA-6 & 17.15 & 0.534 & 1.095 & 7.945 & 1.186 & 0.755 & 1.194 & 2.536 & 1.005 \\
\bottomrule
\end{tabular}}
\caption{Summary of foreground cleaning results. ${\rm T}$ denotes the transfer function and $\rho_{\rm res}$ the residual variance $1-r^2$; all residual and MSE metrics are normalized to the corresponding statistic computed for observational noise generated with $\alpha_{\rm noise} = 0.25$. Angular power spectra are computed for a slice at $\nu = 357\ \rm MHz$. Uncertainty intervals for \texttt{deep21}{} were computed as $\pm 2 \sigma_w$ for each statistic.}\label{tab:results}
\end{table}
\begin{figure}[htpb]
\centering
\includegraphics[width=\textwidth]{figures/power-spec/r-cls.png}
\caption{{Mean (\textit{left}) and variance (\textit{right}) of the angular power spectrum correlation $r(\ell)$, computed over frequencies in the first comoving shell for the PCA (purple) and \texttt{deep21}{} (green, shaded $\pm 2\sigma_w$) methods. The \texttt{deep21}{} ensemble's correlation with the target signal is consistent with unity at all but the smallest scales, showing a distinct improvement at large scales. The variance plot shows that the network method is over an order of magnitude more consistent, as a function of scale, in capturing map phase information than the more variable PCA.}}
\label{fig:var-cls}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this study we developed a deep learning-based method to improve foreground cleaning techniques for single-dish 21cm cosmology. Our method outputs clean intensity maps instead of derived summary statistics. Training an ensemble of independent UNet architectures on simulated foregrounds and cosmological signal resulted in improved recovery of intensity maps and of angular and radial power spectra. {Figure \ref{fig:var-cls} summarizes the network's performance as a function of angular scale across frequency. The left-hand panel shows that, on average over frequency, \texttt{deep21}{}'s $r(\ell)$ is of order unity, offering a distinct improvement on the largest scales. The right-hand panel demonstrates that \texttt{deep21}{} is over an order of magnitude more consistent in capturing phase information over frequency than PCA-6.} In addition, the ensemble method provides an estimate of uncertainty for the summary statistics of the resulting maps. We show that \texttt{deep21}{} effectively marginalizes out observational noise at small angular and radial scales, demonstrating a marked improvement in the sensitivity limit of foreground subtraction over PCA. This suggests that a learned separation approach could be hardened against instrument effects, presenting a significant advantage over blind methods. We show that \texttt{deep21}{} is sensitive to foreground physics by changing test-data simulation parameters, meaning that deep networks trained on more detailed (and varied) simulations will likely be able to remove foregrounds effectively in the absence of a formal foreground likelihood. We also investigated \texttt{deep21}{}'s failure modes, namely in the presence of galactic synchrotron polarization leakage. Improved networks trained on polarized foregrounds (including those which require no PCA preprocessing) will be the subject of future work to mitigate these effects on radio map retrieval. Our method demonstrates that cosmological analyses of previously irretrievable 21cm intensity maps may be possible in an observational setting.
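As a schematic of how ensemble uncertainties of the kind quoted above can be assembled, the sketch below forms a weighted mean and standard deviation of a summary statistic across ensemble members. The inverse-validation-loss weighting is an assumption made for illustration; the definition of $\sigma_w$ given earlier in the paper is authoritative.
\begin{verbatim}
# Sketch: weighted ensemble mean and spread for a summary statistic.
# Inverse-validation-loss weights are assumed for illustration only.
import numpy as np

def ensemble_statistic(preds, val_losses):
    """preds: (N_members, ...) per-member statistics;
    val_losses: (N_members,) used to form the weights."""
    w = 1.0 / np.asarray(val_losses)
    w /= w.sum()
    mean = np.tensordot(w, preds, axes=1)
    var = np.tensordot(w, (preds - mean) ** 2, axes=1)
    return mean, np.sqrt(var)
\end{verbatim}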
\subsection{Future Work}
The approach outlined here uses simulation-based deep learning to retrieve intensity maps for radio cosmology. These techniques pave the way for more fundamental studies of the 21cm signal captured by upcoming SKA experiments, as well as future studies at even higher resolution. The ability to retrieve intensity maps will allow future studies to probe structure formation and EoR physics beyond the power spectrum statistic.
However, before \texttt{deep21}{} can be applied to real 21cm data, several key aspects should be addressed in follow-up studies:
\begin{itemize}
\item {Rigorous quantification of network errors: although \texttt{deep21}{} achieves a more consistent foreground cleaning than blind methods, the error in the network method is still not fully understood. One way to quantify this effect is to incorporate a \texttt{deep21}{}-like cleaning method in a simulation-based inference pipeline for compressed cosmological parameters \citep[see e.g.][]{Alsing_2019, Charnock_2018}. If foreground cleaning is performed as a step in a simulation pipeline, cleaning method errors can be propagated to parameter measurement using approximate Bayesian computation (ABC) methods.}
\item {BAO application: a future study might train \texttt{deep21}{} on cosmological data without the BAO signal, and then ask the network to clean data that contains it. Training a network with a fiducial BAO radius would introduce a bias, but a BAO-free network might leave an unbiased BAO residual signal in the predicted map, since the BAO signal is expected to be statistically distinct from the foreground and broadband cosmological signal \citep{Wyithe_2007, paco-2016}.}
\item A more realistic noise model: the white noise model considered here, while frequency-dependent, will not extend to more complicated intensity map datasets, such as those obtained via interferometry \citep{Shaw_2014}. Thus the impact of nontrivial noise correlations on network performance must be considered before real signals can be reliably separated. A \texttt{deep21}{}-like study on systematics would additionally benefit from varying observed sky fraction, as is done in \citet{paco-2016} for 21cm BAO recovery.
\item Varying astrophysical parameters: here we trained \texttt{deep21}{} on thousands of input voxels derived from the same fiducial cosmological and foreground parameters. While the network performed relatively well in some generalization cases, \texttt{deep21}{} would ideally be trained on a range of simulation parameters, as is done by \citet{Pablo_2020}, before being asked to clean real data (a minimal parameter-sampling sketch follows this list).
\item Training on polarized foregrounds: we additionally probed a failure mode of the network in the presence of 1\% galactic synchrotron polarization leakage. Here the PCA preprocessing fails to separate the leaked and cosmological signals, making it harder for \texttt{deep21}{} to pick out the cosmological signal. A follow-up study might train on more realistic polarized simulations, such as those produced by \texttt{Hammurabi}, as well as bypass the need for a blind preprocessing step.
\item Increasing input sizes: \texttt{deep21}{}'s input voxel sizes were limited by available GPU memory. With improved deep learning computational resources (or a detailed tiling strategy such as the one employed by \citet{shirleyd3m} for N-body analyses), increasing the input size will likely improve foreground removal on large scales, since the network will have access to a larger context of information. Ideally, entire maps would be split into only a handful of UNet input units. An increase in input volume might also allow a larger frequency range to be assessed, which would aid the network in distinguishing cosmological and polarized signals, as well as improve radial power spectrum recovery.
\end{itemize}
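As referenced in the varying-astrophysical-parameters item above, a minimal sketch of per-simulation parameter sampling might look as follows; the uniform spreads are illustrative assumptions, not proposals for physical priors.
\begin{verbatim}
# Sketch: drawing foreground parameters per training simulation.
# Uniform spreads are illustrative, not physical priors.
import numpy as np

rng = np.random.default_rng(42)

def sample_synchrotron_params(beta_o=3.3, alpha_o=2.8, spread=0.3):
    """Draw (beta, alpha) within +/- spread of the fiducial values."""
    beta = beta_o * rng.uniform(1.0 - spread, 1.0 + spread)
    alpha = alpha_o * rng.uniform(1.0 - spread, 1.0 + spread)
    return beta, alpha
\end{verbatim}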
\subsection*{Code Availability}
The code used for training and generation of results is publicly available at \url{https://github.com/tlmakinen/deep21} \faGithub.
A browser-based tutorial for the experiment and UNet module is available via the accompanying \href{https://colab.research.google.com/drive/1wQnmelM33Qjq-nHeVD9JkTHXER1PAJM0?hl=en#scrollTo=yyqq36iyWJ6g}{Colab notebook} \faGoogle.
\section{Acknowledgements}
The authors would like to thank the referee for their constructive comments guiding this revision. Many thanks to Nick Carriero and the Flatiron Institute's HPC support team, without whom this work would not be possible. Thank you also to David Alonso for simulation guidance and to Ben Wandelt for helpful discussions. FVN acknowledges funding from the WFIRST program through NNG26PJ30C and NNN12AA01c. The work of SH and DNS has been supported by the Simons Foundation. TLM completed a large portion of this work to satisfy requirements for the Degree of Bachelor of Arts at Princeton University.
\bibliographystyle{plainnat}