\section{Introduction}
A transfer line is a part of an accelerator facility without accelerating cavities, used
to transport the beam between accelerators or from an accelerator to experiments.
The main constituents of a beamline are dipoles, quadrupoles and steering magnets; other magnet types are rarely used.
Dipole settings are defined by the geometry of the beamline.
Quadrupoles, together with beam properties at the entrance of the beamline, define what is called \textit{beam optics}.
At GSI, the network of beamlines starting at the SIS18 heavy-ion synchrotron is called HEST (from German "Hochenergie-Strahlführung") \cite{HESTweb}.
The beamlines bifurcate, cross and reunite in a particular, complex pattern adapted to the experiments.
HADES \cite{HADES} is one of the largest experiments and
it is placed at the end of an about 160 meter long beamline starting at the magnetic septum of SIS18.
The beamline contains 21 individually powered quadrupoles and two active dipoles tilted by \SI{21.7}{\degree} to bring the beam to the elevated position of the experimental area \cite{Sapinski:2019mvl, Sapinski:2017}. HADES is designed to work in two main modes, using either primary or secondary particles.
Here the first case is studied, in which the beam is focused on an internal target inside the experiment. In both modes HADES accepts slowly extracted beams; the slow extraction is realized by incrementally moving the tune towards a third-order resonance.
The required beam spot size is about
$\sigma=\SI{0.4}{\milli\meter}$ \cite{Rost2019}.
Only about \SI{1}{\percent} of ions interact with the target and the rest of the beam is dumped
downstream of the experiment.
A typical ion optics used in operation, called \textit{BEAMTIME2019}, is visualized in \hbox{Figure \ref{fig:HADES_optics2019}}.
\begin{figure}[htb]
\centering
\includegraphics[width=.55\textwidth]{plots/MADX_BEAMTIME19.png}
\caption{Example of ion optics for HADES beamline used during the beam time in 2019. The experimental target is located at \SI{158}{\meter}.}
\label{fig:HADES_optics2019}
\end{figure}
Setting up a beamline optics involves fulfilling various constraints: typically the beam should be transported with maximum transmission
and focused on the experimental target.
These constraints can be fulfilled by various quadrupole configurations.
Therefore, an analysis of the set of all possible configurations can provide interesting insights about the beamline capabilities.
The relevant configuration space has 21 dimensions corresponding to the 21 quadrupoles.
Because of its high dimensionality, the volume of this space is extremely large, and thus it is not possible to generate and compute all possible configurations. A randomly sampled subset of configurations can, however, be chosen to represent the full population.
This paper is structured as follows: Section \ref{sec:Sample} describes the probing of the configuration space and the generation of the data sets, Section \ref{sec:Optics} discusses the general properties of the optics functions, Section \ref{sec:ConfigurationSpace} analyzes the configuration space, Section \ref{sec:DimensionalityReduction} investigates the potential for dimensionality reduction, and Section \ref{sec:Microstructure} examines the microstructure of the configuration space.
Section \ref{sec:PCA} presents a Principal Component Analysis of the $\rm k_1$-values, and Section \ref{sec:Clustering} discusses a grouping of the configurations. Section \ref{sec:Robustness} addresses the stability of the optics, considering quadrupole gradient errors as well as shifts of the Twiss parameters at the entrance of the beamline. Finally, the last section discusses the choice of the operational ion optics in view of the previous results.
\section{Samples}
\label{sec:Sample}
Three data sets with \num{10000} configurations each have been generated. They are based on randomly sampled $\rm k_1$-values for the 21 quadrupoles, which serve as starting points for a subsequent MADX \cite{MADX} matching procedure.
The three data sets differ in the matching constraints on the beta function which are described in Table \ref{tab:samples}. In addition to these constraints the $\beta_{h,v}$ on the beam dump, downstream of the
experimental target, is constrained to less than \SI{3000}{\meter}.
The Levenberg-Marquardt algorithm \cite{lmdiff}, which is implemented as \textit{LMDIF} in MADX, is used as the matching method with the maximum number of function evaluations set to \num{1d7} and a tolerance of \num{1d-16}.
Only \SI{0.03}{\percent} of the randomly chosen starting points converged during the matching procedure. Table \ref{tab:finish-reasons} gives an overview of the various termination reasons of the LMDIF algorithm. The data sets contain only the successfully converged configurations, and the further analysis concentrates on the $\mathcal{D}^{\,500}_{\,1.0}$ data set.
\begin{table}[!hbt]
\centering
\caption{Summary of data sets and associated matching constraints.}
\begin{tabular}{l|cc|}
\toprule
& \multicolumn{2}{|c|}{\textbf{Constraints}} \\
\textbf{data set} & \textbf{beamline} & \textbf{target} \\
\midrule
$\mathcal{D}^{\,500}_{\,1.0}$ & $\beta_{h,v} < \SI{500}{\meter}$ & $\beta_{h,v} < \SI{1.0}{\meter}$ \\
$\mathcal{D}^{\,500}_{\,0.2}$ & $\beta_{h,v} < \SI{500}{\meter}$ & $\beta_{h,v} < \SI{0.2}{\meter}$ \\
$\mathcal{D}^{\,250}_{\,1.0}$ & $\beta_{h,v} < \SI{250}{\meter}$ & $\beta_{h,v} < \SI{1.0}{\meter}$ \\
\bottomrule
\end{tabular}
\label{tab:samples}
\end{table}
\begin{table}[!hbt]
\centering
\caption{Overview of the various LMDIF termination reasons that occurred during probing of the configuration space. Here "unstable" means that the six-dimensional orbit vector grew too large along the beamline during the matching process.}
\begin{tabular}{lr}
\toprule
\textbf{Reason} & \textbf{Fraction} \\
\midrule
unstable & \SI{93.8370}{\percent} \\
converged without success & \SI{ 3.2336}{\percent} \\
variables too close to limit & \SI{ 2.8957}{\percent} \\
converged successfully & \SI{ 0.0336}{\percent} \\
call limit & \SI{ 0.0002}{\percent} \\
\bottomrule
\end{tabular}
\label{tab:finish-reasons}
\end{table}
The matching constraints lead to strong limitations and interrelations with respect to the $\rm k_1$-values.
The results of the matching procedure depend not only on the matching conditions but also on the used algorithm.
The Levenberg-Marquardt minimization is gradient-based and stops as soon as all constraints are fulfilled. Thus a given result could possibly be optimized further towards even smaller values of the beta function, but this possibility is not explored by the optimizer.
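This stopping behavior can be illustrated with a small sketch. The toy constraints below are hypothetical stand-ins (not the actual MADX matching conditions): if the residuals are defined as hinge functions that vanish once a constraint is met, a MINPACK-style Levenberg-Marquardt solver (LMDIF, exposed through SciPy) stops as soon as it enters the feasible region and does not push further inside.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for matching constraints: require x_i**2 < 1 for each variable.
# The residual is zero whenever a constraint is satisfied, mimicking how the
# matching penalty vanishes once the beta function stays below its threshold.
def residuals(x):
    return np.maximum(0.0, x**2 - 1.0)

start = np.array([2.0, -1.5])                       # infeasible starting point
res = least_squares(residuals, start, method="lm")  # MINPACK LMDIF/LMDER

print(res.x)     # ends just inside (or on) the feasible boundary
print(res.cost)  # ~0: the solver stops once all constraints are met
```

Because the residual (and hence its gradient) is identically zero inside the feasible region, the optimizer has no incentive to move towards, say, an even smaller beta function, mirroring the behavior described above.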
\section{Optics properties}
\label{sec:Optics}
The minimization procedure stops as soon as all constraints are fulfilled, including cases where some of the constrained quantities end up well below their associated thresholds. This becomes apparent on the left plot of Figure \ref{fig:optics_properties}, where a large number of configurations
clearly over-fulfill their constraints. Imposing stronger constraints leaves a smaller possible margin, and the constrained quantities remain closer to their threshold values, as can be seen on the right plot of Figure \ref{fig:optics_properties}. This suggests that the constraints of the $\mathcal{D}^{\,500}_{\,0.2}$ case are close to the limits of the beamline.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/betas_line_target_500_1_v2.png}
\includegraphics[width=.45\textwidth]{plots/betas_line_target_500_02_v2.png}
\caption{Values of the final beta function along the beamline (maximum value, blue) and at the target location (red) for $\mathcal{D}^{\,500}_{\,1.0}$ (left) and $\mathcal{D}^{\,500}_{\,0.2}$ (right).}
\label{fig:optics_properties}
\end{figure}
The dispersion at the target is another relevant property of the optics, which, however, was not constrained during the matching process. The vertical dispersion is small but non-zero due to the presence of the tilted dipoles. A small horizontal dispersion is desirable because the beam momentum changes during the spill, resulting in a potential movement of the beam spot on the target. This movement can be counteracted by ramping the first steering dipoles after the extraction septum during the spill; therefore zero dispersion at the target is not strictly necessary, but desirable nonetheless.
The distribution of dispersion at the target is shown on the left plot of Figure
\ref{fig:dispersion}. The right plot of this figure presents another interesting
aspect of the minimization procedure. The phase advance on the target tends to prefer values corresponding to certain angles (or their 180-degree rotations). For instance, the peaks of the vertical phase advance are located around $\pi$, $1.5\pi$ and $2\pi$ for $\mathcal{D}^{\,500}_{\,0.2}$. For the other data sets the peaks are less pronounced. This shows that the whole beamline must be tuned more precisely when a stronger constraint on the target focusing is imposed.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/dhy_target_v2}
\includegraphics[width=.45\textwidth]{plots/muhv_target_v2}
\caption{Left: Distribution of horizontal and vertical dispersion on the target. Right: Distribution of the phase advance on the target for the $\mathcal{D}^{\,500}_{\,0.2}$ sample; for $\mathcal{D}^{\,500}_{\,1.0}$ a similar structure occurs but the peaks are less pronounced.}
\label{fig:dispersion}
\end{figure}
\section{Configuration space}
\label{sec:ConfigurationSpace}
The configuration space spans across 21 dimensions defined by $\rm k_1$-values
of the quadrupole magnets. The beamline contains two types of quadrupole magnets and their main properties are shown in Table \ref{tab:quads}.
\begin{table}[!hbt]
\centering
\caption{Quadrupole types installed on the beamline.}
\begin{tabular}{lccc}
\toprule
\textbf{Type} & \textbf{Length} & $\mathbf{max\,|k_1|}$ & \textbf{Count} \\
\midrule
QPK & \SI{0.6}{\meter} & $\SI{0.37}{\per\square\meter}$ & 6 \\
QPL & \SI{1.0}{\meter} & $\SI{0.60}{\per\square\meter}$ & 15 \\
\bottomrule
\end{tabular}
\label{tab:quads}
\end{table}
The configuration space is a 21-orthotope with \num{2097152} vertices. The corresponding volume is
\begin{equation}
V_{21} = \prod_{i=1}^{21} \max |k_{1,i}| = \SI{1.21d-6}{\meter^{-42}}
\label{eq:hyperrvolume}
\end{equation}
The notion of volume in 21 dimensions is far from intuitive; a better grasp is probably provided by the longest diagonal of the configuration space, i.e. the maximum Euclidean distance between its vertices, which is \SI{2.49}{\per\square\meter}.
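Both numbers follow directly from the magnet limits in Table \ref{tab:quads}; a quick numerical sketch reproducing Equation \ref{eq:hyperrvolume} and the diagonal:

```python
import math

# Magnet limits from the quadrupole table: 6 QPK and 15 QPL magnets.
k_max = [0.37] * 6 + [0.60] * 15    # max |k_1| in 1/m^2

# Volume of the 21-orthotope (product of the per-magnet spans)
volume = math.prod(k_max)

# Longest diagonal: maximum Euclidean distance across the orthotope
diagonal = math.sqrt(sum(k**2 for k in k_max))

print(f"V_21 = {volume:.3g} m^-42")      # ~1.21e-06
print(f"diagonal = {diagonal:.3g} m^-2") # ~2.49
```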
The available configuration space is strongly reduced by the constraints listed in Table \ref{tab:samples}.
The distribution of resulting $\rm k_1$-values is approximately continuous and it does not peak at the extremes of $\rm k_1$-values,
as shown, for a selection of magnets, on the left plot of Figure \ref{fig:k1_dist}.
Therefore, the volume occupied by the
converged configurations can be estimated from the eigenvalues ($\rm\lambda_i$) of the covariance matrix using Equation \ref{eq:phasesvol}. The eigenvalues of the covariance matrix represent the variances of the projections of the 21-dimensional distribution onto the eigenvectors. If these projected distributions are Gaussian, then the width of each distribution is $\sigma_{gauss} = \sqrt{\lambda}$. The distributions are not always Gaussian, especially for the eigenvectors with the largest variance (see Section \ref{sec:PCA}), so the proposed estimate is an approximation.
For non-Gaussian distributions, Chebyshev's inequality can be used to estimate which fraction of configurations lies within a given number of standard deviations: in the range $\pm 2\sqrt{\lambda}$ there should be between \SI{75}{\percent} (Chebyshev's inequality) and \SI{95}{\percent} (normal distribution) of the configurations.
In addition, due to presence of substructures in the configuration space, this approach overestimates the actual space of valid parameters. This will be discussed further in Section \ref{sec:Microstructure}.
\begin{equation}
V_{21} = 4^{21} \prod_{i=1}^{21} \sqrt{ \lambda_i}
\label{eq:phasesvol}
\end{equation}
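Equation \ref{eq:phasesvol} amounts to assigning each principal axis a length of $4\sqrt{\lambda_i}$, i.e. a $\pm 2\sigma$ box along each eigenvector. A minimal sketch of the estimate, using random placeholder data in place of the actual $\rm k_1$ samples:

```python
import numpy as np

def occupied_volume(k1_samples: np.ndarray) -> float:
    """Estimate the occupied volume from the covariance eigenvalues:
    V = prod_i 4*sqrt(lambda_i), i.e. a +/- 2 sigma box along each
    principal axis."""
    cov = np.cov(k1_samples, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)          # variances along eigenvectors
    return float(np.prod(4.0 * np.sqrt(eigvals)))

# Placeholder data: 10000 configurations in 21 dimensions (not real k1 values)
rng = np.random.default_rng(0)
samples = rng.normal(scale=0.05, size=(10_000, 21))
print(occupied_volume(samples))
```

For isotropic Gaussian data with $\sigma = 0.05$ the estimate is close to $(4\cdot 0.05)^{21}\approx 2\cdot 10^{-15}$, which serves as a sanity check of the implementation.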
The covariance matrix is visualized on the right plot of Figure \ref{fig:k1_dist}. The quadrupole triplet and two doublets with largest values of matrix elements constitute the first two principal components (see Section \ref{sec:PCA}).
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/k1_dist_v1.png} %
\includegraphics[width=.45\textwidth]{plots/k1_cov_matrix.png} %
\caption{Left: distribution of $\rm k_1$-values for a selection of magnets.
Right: covariance matrix of the $\rm k_1$-values; correlations between the quadrupole doublets are visible.}
\label{fig:k1_dist}
\end{figure}
The volume of the configuration space which contains matched configurations is about $\rm 10^{-4}$ of the total volume.
As one can see from Table \ref{tab:phaspac}, constraining the beamline beta function by a factor of 2 (from \SI{500}{\meter} to \SI{250}{\meter}) decreases the configuration space volume by a factor of about 10, and constraining the beta function on the target by a factor of 5 decreases it by a factor of about 50.
\begin{table}[!hbt]
\centering
\caption{Configuration space volume calculated with Equation \ref{eq:phasesvol}.}
\begin{tabular}{lc}
\toprule
\textbf{sample} & \textbf{Volume [$\rm m^{-42}$]} \\
\midrule
$\mathcal{D}^{\,500}_{\,1.0}$ & $\rm 7.93\cdot 10^{-10}$ \\
$\mathcal{D}^{\,250}_{\,1.0}$ & $\rm 6.29\cdot 10^{-11}$ \\
$\mathcal{D}^{\,500}_{\,0.2}$ & $\rm 1.29\cdot 10^{-11}$ \\
\bottomrule
\end{tabular}
\label{tab:phaspac}
\end{table}
The distribution of distances between initial and final configurations, and the distances to the closest and the most distant configuration, are shown in Figure \ref{fig:k1_distances}. The average distance to the closest configuration is \SI{0.41}{\per\square\meter} before the matching procedure and drops to \SI{0.29}{\per\square\meter} for the converged configurations. This distance is never zero; the configurations do not overlap.
The average distance between corresponding initial and final configurations is
\SI{0.46}{\per\square\meter} which is significantly larger than the distance to the final nearest neighbour.
The maximum stretch of the reduced configuration space is only \SI{1.51}{\per\square\meter}.
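The nearest- and farthest-neighbor distances summarized below can be computed efficiently with a k-d tree; a sketch with placeholder data (the real analysis would use the 21-dimensional $\rm k_1$ samples):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist

def neighbor_distances(configs: np.ndarray):
    """Return (nearest, farthest) neighbor distance for every configuration."""
    tree = cKDTree(configs)
    # k=2: the closest match of each point is the point itself (distance 0),
    # so the true nearest neighbor is the second column.
    d, _ = tree.query(configs, k=2)
    nearest = d[:, 1]
    # farthest neighbor via the full pairwise distance matrix
    farthest = cdist(configs, configs).max(axis=1)
    return nearest, farthest

rng = np.random.default_rng(1)
configs = rng.uniform(-0.5, 0.5, size=(1000, 21))   # placeholder k1 values
nn, fn = neighbor_distances(configs)
print(nn.mean(), fn.mean())
```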
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/distances_configurations.png} %
\includegraphics[width=.45\textwidth]{plots/distances_configurations_max_v2.png} %
\caption{Left: distribution of minimum distances between configurations.
Right: distribution of maximum distances between configurations.}
\label{fig:k1_distances}
\end{figure}
The last characteristic length is related to the local size of a configuration, which will be discussed in more detail in Section \ref{sec:Microstructure}. It describes the local region which fulfills the matching conditions.
Due to the high dimensionality of the problem, the precise shape of this region is difficult to estimate.
Increasing the sampling density would require computing power beyond what was available for this study. Instead, two approaches are used: in the first, small selected areas of the configuration space are sampled with very fine granularity; in the second, a set of 42 variations is created for each configuration.
For each variation one of the $\rm k_1$-values is changed by $\pm r_{21}$.
The left plot of Figure \ref{fig:r21_investigation} shows the distribution of mean and maximum values
of $\beta/\beta_0$ ratio on target for $r_{21} = \SI{0.002}{\per\square\meter}$.
The right plot shows the evolution of the $\beta/\beta_0$ as a function of
investigated distance $r_{21}$ from the original configuration.
From this analysis we can conclude that the typical size of the region occupied by a valid configuration is
about \SIrange{0.002}{0.005}{\per\square\meter}, although closer analysis reveals much larger structures.
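The second approach, building the 42 one-at-a-time variations per configuration, can be sketched as follows:

```python
import numpy as np

def variations(k1: np.ndarray, r21: float = 0.002) -> np.ndarray:
    """For a 21-dim configuration, build the 42 variations in which exactly
    one k1-value is shifted by +r21 or -r21."""
    n = k1.size
    out = np.repeat(k1[None, :], 2 * n, axis=0)
    for i in range(n):
        out[2 * i, i] += r21       # +r21 variation of quadrupole i
        out[2 * i + 1, i] -= r21   # -r21 variation of quadrupole i
    return out

base = np.zeros(21)                # placeholder configuration
vs = variations(base)
print(vs.shape)                    # (42, 21)
```

Each of the 42 rows is then tracked through the optics code to evaluate the $\beta/\beta_0$ ratio on target.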
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/dbtarget_r21_002_500_1.png} %
\includegraphics[width=.45\textwidth]{plots/bb0_r21.png} %
\caption{Left plot: Average and maximum change of the $\beta$ function on target for configurations in a distance of $\rm 0.002~m^{-2}$ from the configurations found by the minimization procedure. Right plot: mean value of $\beta/\beta_{0}$ for various $\rm r_{21}$. }
\label{fig:r21_investigation}
\end{figure}
The various characteristic lengths are summarized in Table \ref{tab:lengths}.
\begin{table}[!hbt]
\centering
\caption{Overview of various characteristic lengths (measured as Euclidean distances). "fn" stands for "farthest neighbor" and "nn" stands for "nearest neighbor" in configuration space. "initial-final" denotes the distance of corresponding initial and final configuration pairs. Distances are given in units of \si{\per\square\meter}.}
\begin{tabular}{lrrrrrrr}
\toprule
{} & mean & std & min & \SI{25}{\percent} & \SI{50}{\percent} & \SI{75}{\percent} & max \\
\midrule
fn-initial & 1.275 & 0.078 & 1.023 & 1.222 & 1.274 & 1.328 & 1.571 \\
nn-initial & 0.410 & 0.044 & 0.243 & 0.381 & 0.410 & 0.439 & 0.634 \\
fn-final & 1.163 & 0.090 & 0.899 & 1.100 & 1.160 & 1.225 & 1.507 \\
nn-final & 0.296 & 0.042 & 0.136 & 0.268 & 0.295 & 0.323 & 0.479 \\
initial-final & 0.456 & 0.124 & 0.038 & 0.368 & 0.446 & 0.535 & 0.992 \\
\bottomrule
\end{tabular}
\label{tab:lengths}
\end{table}
\section{Configuration space dimensionality reduction}
\label{sec:DimensionalityReduction}
The application of various constraints confines the valid configurations to a region of the configuration space that potentially needs fewer dimensions to be described (rather than the full number of 21 dimensions of the original configuration space). Since the structure of this region is neither identified nor apparent, a dedicated method for identification of the corresponding dimensionality is needed. We are using the method presented in \cite{number-of-intrinsic-dimensions} which relies solely on the distances of the two nearest neighbors for each data point. This has the advantage that the bias due to curvature or density variations is reduced in the estimate.
Table \ref{tab:intrinsic-dimensions} shows the resulting estimates for the three data sets. These indicate that further constraining the beta function along the beamline or at the target location, beyond the values of \SI{500}{\meter} and \SI{1}{\meter}, does not decrease the number of intrinsic dimensions. Hence we use the $\mathcal{D}^{\,500}_{\,1.0}$ data set as a representative for the further analysis.
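A sketch of a maximum-likelihood two-nearest-neighbor estimator in the spirit of the cited method is given below (the exact estimator used in \cite{number-of-intrinsic-dimensions} may differ in detail): for each point, the ratio $\mu = r_2/r_1$ of the distances to its second and first nearest neighbors approximately follows a Pareto law whose exponent is the intrinsic dimension.

```python
import numpy as np
from scipy.spatial import cKDTree

def two_nn_dimension(points: np.ndarray) -> float:
    """Maximum-likelihood two-NN estimate: d = N / sum(log(r2/r1))."""
    tree = cKDTree(points)
    # k=3: self (distance 0), first and second nearest neighbors
    d, _ = tree.query(points, k=3)
    mu = d[:, 2] / d[:, 1]
    return len(points) / float(np.sum(np.log(mu)))

# Sanity check on data with a known intrinsic dimension: a 2-d plane
# embedded in 21 dimensions should give an estimate close to 2.
rng = np.random.default_rng(2)
flat = np.zeros((3000, 21))
flat[:, :2] = rng.uniform(size=(3000, 2))
print(two_nn_dimension(flat))   # close to 2
```

Because only the two nearest neighbors enter, the estimate is largely insensitive to curvature and slowly varying density, which is the advantage mentioned above.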
\begin{table}[!hbt]
\centering
\caption{Estimation of number of intrinsic dimensions for the different data sets.}
\begin{tabular}{l|cc}
\toprule
\textbf{data set} & \multicolumn{2}{c}{\textbf{intrinsic dimension}} \\
& \textbf{initial} & \textbf{final} \\
\midrule
$\mathcal{D}^{\,500}_{\,1.0}$ & 16.23 & 12.93 \\
$\mathcal{D}^{\,500}_{\,0.2}$ & 16.27 & 12.07 \\
$\mathcal{D}^{\,250}_{\,1.0}$ & 16.05 & 12.62 \\
\bottomrule
\end{tabular}
\label{tab:intrinsic-dimensions}
\end{table}
\section{Configuration space microstructure}
\label{sec:Microstructure}
The accuracy of the power supplies used in HEST is about 100 ppm and the precision is about 200 ppm.
Therefore the total relative uncertainty of magnet current setting is about $\delta=\Delta I/I = \num{3d-4}$ \cite{AStaf} which also applies to the relative uncertainty of each $\rm k_1$-value. The potential difference between theoretical and actual setting in terms of distance in $\rm k_1$-space depends on the $\rm k_1$-values and has the following upper bound:
\begin{equation}
\Delta k_{1, tot} = \delta\cdot\sqrt{6\cdot (\max|k_{1, QPK}|)^2 + 15\cdot (\max|k_{1,QPL}|)^2} = \SI{7.5d-4}{\per\square\meter}
\end{equation}
In order to investigate the configuration space microstructure, for each configuration in the data set $\mathcal{D}^{\,500}_{\,1.0}$, a set of \num{5000} additional configurations was generated inside the 21-dimensional ball with radius \SI{0.001}{\per\square\meter} around that original configuration. These additional configurations were filtered according to the matching constraints, and those that fulfill the constraints, in the following called \textit{leaf configurations}, are used for further analysis. The leaf configurations are the result of pure Monte Carlo sampling without any matching procedure; hence they do not reflect any properties of the previously used LMDIF matching algorithm.
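Uniform sampling inside the 21-dimensional ball can be done by drawing an isotropic Gaussian direction and scaling the radius with $u^{1/21}$; a minimal sketch (the subsequent filtering by the matching constraints requires the optics code and is not shown):

```python
import numpy as np

def sample_ball(center: np.ndarray, radius: float, n: int,
                rng: np.random.Generator) -> np.ndarray:
    """Draw n points uniformly from the d-ball of given radius around center."""
    d = center.size
    directions = rng.normal(size=(n, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # u**(1/d) compensates for the volume growth towards the surface
    radii = radius * rng.uniform(size=n) ** (1.0 / d)
    return center + directions * radii[:, None]

rng = np.random.default_rng(3)
seed_config = np.zeros(21)                      # placeholder configuration
cloud = sample_ball(seed_config, 0.001, 5_000, rng)
print(cloud.shape)                              # (5000, 21)
```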
The distribution of the number of leaf configurations is shown on the left plot of Figure \ref{fig:disp_size}. The configuration space structure at this scale is very rich. In some areas the sampling method found no leaf configurations, which means that the original configuration is vulnerable to small errors in the quadrupole settings. On the other hand, there are a few areas filled with many leaf configurations; these areas are tolerant towards quadrupole errors.
An example of a region with a large number of good configurations is shown on the right plot of Figure \ref{fig:disp_size}. The data for this plot was obtained with the above-mentioned technique of probing the configuration space around each seed configuration in $\mathcal{D}^{\,500}_{\,1.0}$, with additional sampling of consecutive shells, each with the same thickness of \SI{0.001}{\per\square\meter} and containing \num{5000} samples, until no leaf configurations are found anymore. Since the volume of a shell grows with a high power of its radius while the number of samples per shell is fixed, the fraction of samples falling into the region of valid configurations, and thus the number of leaf configurations per shell, decreases if that region locally spans fewer than 21 dimensions. Hence, even though the number of found leaf configurations goes to zero, this is no evidence that the region of valid configurations is bounded at that scale; it only means that the probability to sample a configuration in the intersection of that region with the hypershell decreases accordingly.
Nevertheless the thus obtained data gives an idea about the spatial distribution of leaf configurations around particularly good seed configurations.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/N0dist.png}
\includegraphics[width=.45\textwidth]{plots/conf9159_pc01space_v3.png}
\caption{Left: Number of leaf configurations found within the 21-ball of radius \SI{0.001}{\per\square\meter} around each configuration in the data set $\mathcal{D}^{\,500}_{\,1.0}$. Right: Example of a distribution of leaf configurations corresponding to a particularly large region of valid configurations. The two-dimensional distribution is obtained by projecting onto the plane of largest variance (first two principal components). The seed configuration is marked with a red "x".}
\label{fig:disp_size}
\end{figure}
\section{Principal Component Analysis}
\label{sec:PCA}
Principal component analysis identifies the main degrees of freedom of the studied system and thus potentially allows a reduction of the dimensionality of the configuration space.
The aspect of dimensionality reduction is useful for visualization and processing of high-dimensional data sets.
The optimal number of principal components can be found using the method from \cite{Minka}. Applied to the data set $\mathcal{D}^{\,500}_{\,1.0}$, the method shows that the first two components are responsible for about \SI{30}{\percent} of the total variance in the data, while the remaining components vary significantly less, as illustrated in the left plot of Figure \ref{fig:pca_components}.
Imposing a stronger constraint on target focusing makes the first two principal
components more pronounced, i.e. responsible for a larger fraction of the total variance.
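The explained-variance fractions shown in the left plot of Figure \ref{fig:pca_components} can be obtained directly from the covariance eigenvalues; a minimal sketch with placeholder data (the real input would be the matched $\rm k_1$ samples):

```python
import numpy as np

def explained_variance_fractions(k1_samples: np.ndarray) -> np.ndarray:
    """Fraction of total variance carried by each principal component,
    sorted in decreasing order."""
    cov = np.cov(k1_samples, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]     # decreasing order
    return eigvals / eigvals.sum()

# Placeholder data with two dominant directions, loosely mimicking the
# situation where PC1 and PC2 together carry ~30% of the variance.
rng = np.random.default_rng(4)
data = rng.normal(size=(10_000, 21)) * np.array([2.0, 1.8] + [1.0] * 19)
frac = explained_variance_fractions(data)
print(frac[:2].sum())    # fraction explained by the first two components
```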
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/explained_variance_fraction_v4.png}
\includegraphics[width=.45\textwidth]{plots/pca_kval_weights_10k_v3.png}
\caption{Left: Fraction of the variance in the data explained by the given principal components - the first two principal components explain about \SI{30}{\percent} of the variance. Right: Composition of the first two principal components for the $\mathcal{D}^{\,500}_{\,1.0}$ data set (for other data sets similar results are obtained). The horizontal axis shows the name of the quadrupole magnet along the beamline and the vertical axis shows the weight associated with corresponding $\rm k_1$-value. }
\label{fig:pca_components}
\end{figure}
The weights associated to the first and second principal components, here called PC1 and PC2, are shown on the right plot of Figure \ref{fig:pca_components}.
The first principal component mainly consists of the contribution of the two quadrupole doublets $\textrm{GHADQD}(11|12)$ and $\textrm{GHADQD}(21|22)$ while the second component corresponds to the quadrupole triplet
$\textrm{GTE2QT}(11|12|13)$ at the beginning of the beamline.
In order to better understand the meaning of the principal components, Figure \ref{fig:pca12_composition} shows the $\rm k_1$-values
of all quadrupole magnets for those configurations that exhibit extreme values of PC1 and PC2. Negative values of PC1 correspond to strong focusing in the GHADQD zone and positive values of PC2 correspond to strong focusing in the GTE2QT zone.
Figure \ref{fig:k1meansigma} shows the mean value as well as the standard deviation of the absolute $\rm k_1$-values along the beamline. The final focusing magnets have large $\rm k_1$-values but a rather small spread of values.
The plot also shows the BEAMTIME2019 optics configuration which is commonly used in daily operation.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/pca1explained_v5.png}
\includegraphics[width=.45\textwidth]{plots/pca2explained_v5.png}
\caption{Visualization of configurations corresponding to large magnitudes of the first (left plot) and second (right plot) principal components.}
\label{fig:pca12_composition}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/k1variation_v5.png}
\caption{ Mean and standard deviation of $\rm k_1$-values along the beamline. The BEAMTIME2019 configuration, used in operation, is shown as well.}
\label{fig:k1meansigma}
\end{figure}
Figure \ref{fig:pca_mountains} shows a density map of all configurations projected onto the plane corresponding to the first two principal components. This is referred to as \textit{PC-space} in the following.
The left plot shows a density map corresponding to initial $\rm k_1$-values which were used as starting points for the matching procedure; only those values that could be successfully optimized are included. The right plot shows the corresponding $\rm k_1$-values after the matching procedure converged.
The configurations are spread out, but small negative values of the first and second principal components are clearly preferred.
More than \SI{50}{\percent} of the configurations lie in the area defined by \hbox{$-0.2 < \textrm{PC1, PC2} < 0.0$}.
\begin{figure}[!htb]
\centering
\includegraphics[width=.45\textwidth]{plots/pca_density_map_init_10k_nist.png}
\includegraphics[width=.45\textwidth]{plots/pca_density_map_10k_nist_star.png}
\caption{Density of models in \textit{PC-space} for the initial (left plot) $\rm k_1$-value settings and the ones after the matching procedure converged (right plot).
The star on the right plot shows the position of ion optics settings currently used in operation (BEAMTIME2019, see Figure \ref{fig:HADES_optics2019}).}
\label{fig:pca_mountains}
\end{figure}
\section{Clustering}
\label{sec:Clustering}
The goal of clustering is to investigate if the optics configurations can be divided into groups of common features.
Several algorithms were tested, but the only interesting results were obtained using the k-means algorithm.
This algorithm performs a partitioning of the data: it does not find an optimal number of clusters but divides the data into a predefined number of partitions.
The elbow curve, shown on the left plot of Figure \ref{fig:k-means-elbow},
can be used to estimate the optimal number of clusters.
The choice of the number of clusters is less straightforward than the choice of the minimum number of principal components.
Visually, three clusters seem to be a better choice than two or four.
The subsequent results were obtained requesting three clusters.
The right plot of Figure \ref{fig:k-means-elbow} shows the division of PC-space into three clusters.
Those clusters are clearly distinguishable and contain \SI{21}{\percent}, \SI{47}{\percent} and \SI{32}{\percent} of the configurations, respectively.
If two clusters are requested, the partitioning reveals a clear distinction between negative and positive values of the second principal component (PC2), i.e. configurations are split between strong and weak focus in the GTE2QT segment.
If more than three clusters are chosen, the resulting partitions overlap in the PC1-PC2 projection of the configuration space.
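The clustering step can be sketched with a minimal NumPy implementation of Lloyd's k-means algorithm; `K1` below is a toy stand-in for the matrix of matched $\rm k_1$-values (one 21-dimensional row per configuration), not the actual data set:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm; returns labels and inertia (within-cluster SSE)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    # final assignment and within-cluster sum of squared distances
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    inertia = float((d[np.arange(len(X)), labels] ** 2).sum())
    return labels, inertia

# toy stand-in for the (n_models x 21) matrix of matched k1-values,
# built from three artificial groups of settings
rng = np.random.default_rng(1)
K1 = np.vstack([rng.normal(mu, 0.05, size=(200, 21)) for mu in (-1.0, 0.0, 1.0)])

# elbow curve: within-cluster SSE versus the requested number of clusters
inertias = {k: kmeans(K1, k)[1] for k in range(1, 6)}
```

Plotting `inertias` against `k` reproduces the shape of the elbow curve: the inertia drops steeply until the natural number of groups is reached and flattens afterwards.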
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/k-means-elbow_10k_v3.png}
\includegraphics[width=.45\textwidth]{plots/3_Cluster_K-Means_10k_v4.png}
\caption{Left: elbow curve for k-means algorithm. Right: cluster coverage in principal component space.}
\label{fig:k-means-elbow}
\end{figure}
The clustering algorithm is applied in $\rm k_1$-space, but the same results are obtained when it is applied in PC-space. This means that the features detected by the algorithm are present in the first two Principal Components and not in the other components.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/3_Cluster_absk1_10k_v4.png}
\includegraphics[width=.45\textwidth, height=.312\textwidth]{plots/3_Cluster_betaH_10k_v3.png}
\caption{Left: the distribution of quadrupole k-values along the beamline for the 3-cluster analysis. For better visibility the absolute values of the k-values are displayed. Right: horizontal beta function along the line for the three clusters. The lines represent the mean values and the bands represent half of the standard deviation.}
\label{fig:k-val-cluster}
\end{figure}
Figure \ref{fig:k-val-cluster} presents the distribution of $\rm k_1$-values (left plot) and the horizontal optics functions (right plot) for the three clusters.
It is interesting to note that the unsupervised k-means algorithm finds three main strategies
for ion optics on the HADES beamline:
\begin{itemize}
\item cluster 0, with strong focus in GTE2QT segment,
\item cluster 1, with weak focus in GTE2QT segment and strong focus in GHADQD segment,
\item cluster 2, with weak focus in GTE2QT segment and weak focus in GHADQD segment.
\end{itemize}
\section{Stability of optics configurations}
\label{sec:Robustness}
The stability of an optics configuration can refer to two different aspects.
One aspect is the change of the beta functions along the beamline and at the target location as a function of a change of the Twiss parameters at the entrance of the beamline. Such a shift of the lattice parameters can lead to an increase of the beam spot size at the target, hence it is desirable that a configuration is \textit{robust} against such shifts.
The other aspect concerns quadrupole gradient errors. Small changes in the $\rm k_1$-values might lead to an increase of the beam spot size at the target location, hence it is desirable that a configuration is \textit{tolerant} towards such gradient errors.
Here we define the \textit{robustness} and \textit{tolerance} of configurations as follows. The robustness score is given by the formula:
\begin{equation}
\textrm{Robustness} = \sqrt{\max\left(\frac{\Delta\beta_{h, \textrm{target}}}{\beta_{h, \textrm{target}}}, 0\right)^2 + \max\left(\frac{\Delta\beta_{v, \textrm{target}}}{\beta_{v, \textrm{target}}}, 0\right)^2}
\end{equation}
The $\Delta\beta_{(h,v), \textrm{target}}$ is the result of a shift of the Twiss parameters at the entrance of the beamline. The $\max$ part ensures that only an increase in the beta functions is taken into account, hence a robustness score of zero indicates that the configuration is robust against the shift, i.e. it will not increase the beta function at the target location. A robustness score greater than zero indicates an increase in the beta function by the corresponding magnitude.
The tolerance of a configuration towards quadrupole gradient errors can be assessed via the number of valid configurations found in a ball of a given radius around that configuration. This is similar to the sampled leaf configurations from Section \ref{sec:Microstructure}, where the distribution of tolerance is presented in Figure \ref{fig:disp_size}. Here we define the tolerance of a configuration as the fraction of leaf configurations:
\begin{equation}
\textrm{Tolerance} = \frac{N_{leaves}}{N_{samples}} \hspace{1cm} \textrm{within } R = \SI{0.001}{\per\square\meter}
\end{equation}
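Both scores follow directly from the definitions above; in this sketch the relative beta-function changes (`dbh_rel`, `dbv_rel`) and the leaf counts are assumed to be already available from the optics recomputation and the ball sampling:

```python
import math

def robustness(dbh_rel, dbv_rel):
    """Robustness score: quadrature sum of the *increases* of the relative
    beta-function changes at the target; zero means fully robust."""
    return math.hypot(max(dbh_rel, 0.0), max(dbv_rel, 0.0))

def tolerance(n_leaves, n_samples):
    """Tolerance: fraction of valid leaf configurations found in the ball
    of radius R = 0.001 1/m^2 around the configuration."""
    return n_leaves / n_samples

# a configuration whose beta functions shrink in both planes is fully robust
print(robustness(-0.02, -0.01))   # -> 0.0
print(robustness(0.03, 0.04))     # -> 0.05
print(tolerance(480, 500))        # -> 0.96
```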
Figure \ref{fig:k1ff_tolerance} presents a particular and non-trivial property of optics configurations: the most tolerant configurations have the $\rm k_1$-values of the two last focusing quadrupoles close to their average value. Therefore, in order to speed-up the search for high-tolerance configurations, one could restrict the $\rm k_1$-values of these magnets.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/ff_tolerance_v0.png}
\caption{Strength of the final focusing quadrupoles as a function of tolerance.}
\label{fig:k1ff_tolerance}
\end{figure}
The HADES experiment requires slowly extracted beams, and the optics of the synchrotron changes during the quadrupole-driven slow extraction process, which causes the lattice parameters at the entrance of the transfer line to change as well.
According to dedicated optics calculations \cite{SSorge} the values of $\beta_{h}$, $\beta_{v}$
and horizontal dispersion $D_{h}$ at the entrance of the beamline vary during the spill by \SI{3}{\percent}, \SI{1}{\percent} and \SI{3.5}{\percent} respectively.
For the following analysis we recompute the beamline lattice functions with modified values at the entrance of the beamline
and observe the change of the beta function at the experimental target location for each of the configurations in the data set $\mathcal{D}^{\,500}_{\,1.0}$. The change of the beta functions at the target location is denoted by $\Delta\beta_{h}$ and $\Delta\beta_{v}$.
As the beamline lattice is mostly linear, the expected change of lattice parameters at the end of the line is of the same order of magnitude as the variation at the entrance of the beamline.
The relative changes of beta functions at the target location are shown on the left plot of Figure \ref{fig:stability}.
The vertical change is about \SI{30}{\percent} of the horizontal one, as expected from the changes imposed at the entrance of the beamline and from the fact that
part of the beamline is tilted, which leads to a coupling of the two planes.
The distribution has clear maxima at non-zero deviations of the optics functions at the target; however, a small population of configurations
lies in a wide minimum of the configuration space, where it is almost independent of the variation of the beam parameters at the beginning of the beamline.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/dbx_dby_hexbin}
\includegraphics[width=.45\textwidth]{plots/robustness_tolerance_scatter}
\caption{Left: relative change of the beta functions at the experimental target due to the variation of the beam parameters at the beginning of the beamline. Right: robustness score versus tolerance of the configurations.}
\label{fig:stability}
\end{figure}
The results show that \SI{22}{\percent} of the models are robust against a shifting of the lattice functions with a few models leading even to a significant decrease of the beta function at the target location in both planes.
\section{Ion optics choice}
\label{sec:OpticsChoice}
An interesting observation is that most of the historically used ion optics settings of the
beamline are located in the same region of PC-space, at about $\rm PC1 \in (0.42,0.44)$ and $\rm PC2 \in (-0.05,0.05)$ (see Figures \ref{fig:pca_mountains}, \ref{fig:ionoptics2}). In this region the focus of the GHADQD magnets is weak and that of the GTE2QT magnets is moderate (belonging to cluster 2).
As mentioned before, there is no single best solution for ion optics of a multi-purpose beamline. However, the configurations found in the exhaustive scan of the available configuration space reveal various levels of tolerance towards quadrupole errors, robustness with respect to shifting of lattice functions and are characterized by various levels of dispersion at the target location.
Two particular configurations have been investigated as a potential substitute for historically used settings. The first, no. 336, has a robustness score of zero (full robustness) and maximal tolerance. The second one, no. 5741, is chosen with similar criteria but selecting only from configurations which have very small horizontal dispersion on the target: $D_x < \SI{0.1}{\meter}$.
These configurations are shown in Figures \ref{fig:ionoptics1} and \ref{fig:ionoptics2}.
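The selection of the two candidate configurations can be sketched as a filter over tabulated scores; the records and column names below are illustrative, with values chosen merely so that configurations \#336 and \#5741 come out of the filter:

```python
# hypothetical per-configuration records: id, robustness score, tolerance, Dx [m]
configs = [
    {"id": 336,  "robustness": 0.00, "tolerance": 0.95, "Dx": 0.35},
    {"id": 5741, "robustness": 0.00, "tolerance": 0.90, "Dx": 0.05},
    {"id": 12,   "robustness": 0.10, "tolerance": 0.99, "Dx": 0.02},
]

# first candidate: fully robust (score zero), then maximal tolerance
robust = [c for c in configs if c["robustness"] == 0.0]
best = max(robust, key=lambda c: c["tolerance"])

# second candidate: same criteria, restricted to small horizontal dispersion
low_disp = [c for c in robust if c["Dx"] < 0.1]
best_low_disp = max(low_disp, key=lambda c: c["tolerance"])

print(best["id"], best_low_disp["id"])   # -> 336 5741
```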
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/new_optics.png}
\includegraphics[width=.45\textwidth]{plots/madx_config_336.png}
\caption{Left: comparison of $\rm k_1$-values for operational ion optics settings with two new settings proposed as an outcome of the study. Right: optics functions for configuration \#336.}
\label{fig:ionoptics1}
\end{figure}
The right plot of Figure \ref{fig:ionoptics2} suggests that good configurations are spread across the {\it PC-space}, without any particular regularity or preference.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/madx_config_5741.png}
\includegraphics[width=.45\textwidth]{plots/conf_onPC12.png}
\caption{Left: optics functions for configuration \#5741. Right: location of the discussed configurations in principal component space.}
\label{fig:ionoptics2}
\end{figure}
\section{Conclusions}
The analyzed beamline is designed to be flexible, as other beamlines bifurcate from it and the main experiment, HADES, situated at the end of this line, operates in various modes.
The methodology of the study relies on generating a large number of configurations spread over the available configuration space and on executing the matching procedure with constraints that focus the beam on the target while keeping the beam envelope within the beamline acceptance.
\newpage
The following conclusions are the highlights of the study:
\begin{itemize}
\item When stronger constraints are imposed on the target focusing, more quadrupoles along the beamline are employed to meet the constraint, which is expressed by the selection of particular values of the total phase advance.
\item The configuration space of matched optics fills a small, but continuous region of the total possible configuration space.
\item However, this region is not uniform: there are configurations which are tolerant to changes of the $\rm k_1$-values and others which are not; this is expressed as a variation of the density of matched configurations.
\item Principal Component Analysis reveals that two beamline sections, one at the beginning and one in the middle, carry most of the variance, while the settings of the other magnets are more constrained.
\item Partitioning the available configurations shows that three approaches to the construction of the beamline optics can be distinguished, based on the values of the first two Principal Components.
\item Configurations also have a varying tolerance to changes of the initial Twiss parameters.
\item Neither the tolerance to varying initial Twiss parameters nor the robustness of the configurations was found to favour a particular region of configuration space.
\item A choice of new optics configurations can be made based on the selection of particularly robust and tolerant configurations.
\end{itemize}
The presented analysis allows for an investigation of the possible ion optics settings of a beamline. It reveals what types of optics are possible and gives indications about the
sensitivity of the optics to various errors, such as the uncertainty of the beam parameters at the entrance
of the beamline or quadrupole errors.
In future developments, it is planned to test other matching procedures, especially gradient-free ones, and to perform more precise studies of the configuration space microstructure.
\section{Convergence of matching process}
Here we look at the distance between the initial, randomly chosen set of k-values and the final one obtained after the matching procedure.
It must be stressed that only a small fraction of the randomly chosen sets of k-values converge. This is a property
of the beamline but also of the minimization algorithm.
In our exercise this fraction was only 0.034\%.
In the other cases the matching process ends, after a predefined maximum number of steps, without convergence.
Here we show results for the simple 21-dimensional Euclidean distance between the initial k-value point and the final one.
The left plot in Figure \ref{fig:euclidean_distance} shows the distance distribution.
The right plot shows the PC phase space colored according to the movement of the configurations during the matching process.
Red areas are areas which lose configurations during the matching, and blue areas are 'attractors'.
The plot is essentially the difference of the maps shown in Figure \ref{fig:pca_mountains}.
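The distance on the left plot is the plain Euclidean norm in the 21-dimensional k-value space; a toy sketch with randomly generated stand-ins for the initial and converged settings:

```python
import numpy as np

rng = np.random.default_rng(0)
k1_init = rng.uniform(-2.0, 2.0, size=(100, 21))            # toy initial settings
k1_final = k1_init + rng.normal(0.0, 0.1, size=(100, 21))   # toy converged settings

# one 21-dimensional Euclidean distance per configuration
dist = np.linalg.norm(k1_final - k1_init, axis=1)
print(dist.shape)  # -> (100,)
```

A histogram of `dist` corresponds to the distribution on the left plot of the figure below.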
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{plots/if_euclidean.png}
\includegraphics[width=.45\textwidth]{plots/pca_attractors_10k_nist.png}
\caption{Left: distribution of Euclidean distances between the initial and final $\rm k_1$-values.
Right: red areas are areas which the initial models leave, blue areas are attractors.}
\label{fig:euclidean_distance}
\end{figure}
\section{Introduction}
\vspace*{-0.15cm}
Millimeter wave (mmWave) systems have emerged as a promising candidate for high data rate communication in 5G wireless networks. One of the major obstacles in the implementation of mmWave systems is the high energy consumption \cite{rangan2014millimeter,walden1999analog,Murmann2015}. One way to reduce power consumption in mmWave systems is to use low resolution analog to digital converters (ADCs) (e.g. one-bit threshold ADCs) at the receiver \cite{MIMO1,mo2015capacity,alkhateeb2014mimo,abbasISIT2018,rini2017generalITW,mo2016ADC,dutta2020capacity,mezghani2012capacity,Dutta2019}. However, this inflicts a rate-loss due to the large quantization noise caused by coarse quantization.
There has been a large body of work dedicated to characterizing the capacity of point-to-point (PtP) MIMO systems in the presence of low resolution ADCs at the receiver
\cite{abbasISIT2018,rini2017generalITW,mo2016ADC,dutta2020capacity}. These works consider \textit{`analog-one-shot'} receivers, where at each channel-use the received signal goes through analog processing prior to being fed to the one-bit ADCs. The receiver then performs blockwise signal processing on the stored digital signal to decode the message. In contrast, \cite{abbasPtPISIT2019} and \cite{abbasMtISIT2019} propose two new classes of receivers with low resolution ADCs, called \textit{analog-blockwise} and \textit{adaptive threshold} receivers, respectively, which generalize analog-one-shot receivers and achieve higher performance in terms of communication rates for a given set of one-bit ADCs. These receivers incorporate delay elements to perform analog blockwise processing which is not possible with analog-one-shot receivers. More specifically, the adaptive threshold receiver changes the threshold of the ADCs adaptively based on their outputs in previous channel uses. Note that receivers with successive approximation register (SAR) ADCs, used for low power consumption applications \cite{5746277,5711005,5433830,6043594}, also belong to the family of adaptive threshold receivers.
A fundamental question which arises in the context of low resolution receivers is the best way to allocate a total of $m$ bits among the receiver antennas in order to maximize the achievable rate. Unlike analog-one-shot receivers which require pre-set ADCs of different resolutions for bit allocation among the antennas, the adaptive threshold receiver can form $m$-bit quantization using $m$ one-bit ADCs and allocate the bits to the antennas in any desired fashion.
This flexibility allows the receiver to switch between tasks which require different bit allocations among antennas such as channel estimation and data communication, where the latter could depend on the estimated channel. Another advantage of the adaptive threshold receiver is its optimality (in terms of achievable rates) in the high SNR regime for the single and multi-user uplink (UL) and downlink (DL) communication scenarios \cite{abbasMtISIT2019}. The proposed transmission schemes in \cite{abbasMtISIT2019} employ singular value decomposition (SVD) to transform the MIMO channel into a set of subchannels. The achievable region is then characterized in terms of single-letter mutual informations optimized over all possible ADC and power allocations among the subchannels.
In this paper, recognizing the high complexity of the optimal ADC allocation scheme for the adaptive threshold receiver \cite{abbasMtISIT2019}, we compare various low-complexity algorithms for transmit power and ADC allocation among subchannels taking into account practical constraints such as limited modulation levels and realistic mmWave channel models. We show through simulations that simple power and ADC allocation strategies are able to achieve near optimal rates for PtP communication in practical mmWave cellular networks. Additionally, we demonstrate that with the adaptive threshold receiver, using few one-bit ADCs is enough to achieve near optimal performance in terms of throughput.
In addition, we relax the idealistic assumption made in \cite{abbasMtISIT2019} that both transmitter and receiver have access to full channel state information (CSI), and consider practical channel estimation using low resolution ADCs configured with the adaptive threshold receiver. We note that prior works have considered channel estimation
when analog-one-shot receivers are used \cite{mezghani2010multiple,zeitler2012bayesian,dab2010ches,mo2018channel,shlezinger2018asymptotic}.
We show that in practical DL mmWave communication scenarios with imperfect CSI and a limited number of one-bit ADCs, the achievable rate distribution is close to that obtained with perfect CSI and a fully digital receiver with high resolution ADCs employing time division multiple access (TDMA) with equal time-shares.
We demonstrate that the proposed adaptive threshold based TDMA in \cite{abbasMtISIT2019} significantly outperforms conventional TDMA in terms of system throughput.
\begin{figure*}[t]
\centering
\includegraphics[width =0.55\textwidth ,draft=false]{PtP_Arch2_3.pdf}
\caption{An adaptive threshold receiver with $n_q$ one-bit ADCs is shown where the analog linear combiner, the delay network operation, and adaptive threshold coefficient vector set at channel-use $i$ are characterized by the matrix $\textbf{V}$, binary matrix $\textbf{B}(i)$, and set $\{(\textbf{u}^l_{1}(i),\textbf{u}^r_{1}(i)), \cdots, (\textbf{u}^l_{n_q}(i),\textbf{u}^r_{n_q}(i))\}$, respectively.}
\label{fig:PtP}
\vspace*{-0.4cm}
\end{figure*}
\vspace*{-0.2cm}
\section{System Model and Preliminaries}
\vspace*{-0.2cm}
\label{sec:System Model}
\subsection{Channel Model}
We consider the DL communication in a mmWave single-cell system consisting of one base station (BS) and $n_u$ users, where the $i$th user is equipped with $n_{q,i}$ one-bit ADCs. The received signal at the $i$th user is represented by
\begin{align}
\label{eq:channel}
\textbf{y}_{i} = \textbf{H}_{i}\textbf{x} + \textbf{z}_{i},
\end{align}
where $\textbf{x}\in \mathbb{C}^{n_b}$ is the vector of transmit signal from the BS with $E[||\textbf{x}||^2] \leq P$, where $P$ is the average transmit power of the BS, $\textbf{y}_i$ is the vector of the received signal at the $i$th user, $\textbf{z}_i$ is a vector of independent, zero-mean and unit-variance complex Gaussian noise, and $\textbf{H}_i\in \mathbb{C}^{ n_{i}\times n_b}$ is the complex channel gain matrix between the BS and $i$th user, where $n_i$ and $n_b$ are the number of antennas at the $i$th user and BS, respectively.
To model the channel in mmWave bands, we adopt a standard multipath clustered channel model described in \cite{akdeniz2014millimeter}. To elaborate, the channel between the BS (with $n_t$ antennas) and a user (with $n_r$ antennas) consisting of $n_c$ clusters (also called paths), where the $j$th cluster includes $n_{\textrm{p},j}$ rays (or sub-paths) is defined as follows
\begin{align}
\begin{aligned}
\textbf{H} = \sum_{j=1}^{n_{c}} \sum_{k=1}^{n_{\textrm{p,j}}} \beta \times g_{j, k} \times \textbf{a$^{\textrm{r}}$}(&\varphi^{\textrm{r}}_{j, k}, \theta^{\textrm{r}}_{j, k})
\textbf{a$^{\textrm{t}}$}(\varphi^{\textrm{t}}_{j, k}, \theta^{\textrm{t}}_{j, k})^*,
\end{aligned}
\end{align}
where $\beta$ denotes the large-scale fading coefficient modeling distance-dependent path loss and shadowing and $g_{j,k}$ is the complex small-scale fading gain. Also, $\textbf{a$^{\textrm{r}}$} \in \mathbb{C}^{n_r}$ and $\textbf{a$^{\textrm{t}}$}\in \mathbb{C}^{n_t}$ denote the array response vectors of the user and BS, respectively.
Furthermore, $\varphi^{\textrm{r}}_{j,k}$, $ \theta^{\textrm{r}}_{j,k}$, $\varphi^{\textrm{t}}_{j,k}$, and $\theta^{\textrm{t}}_{j,k}$ are the azimuth angle of arrival (AoA), elevation AoA, azimuth angle of departure (AoD), and elevation AoD associated with the $k$th ray of the $j$th cluster, respectively.
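One realization of this clustered channel can be drawn as in the following sketch, which uses half-wavelength uniform linear arrays for simplicity (the planar arrays and empirical parameter values used later are replaced by illustrative numbers):

```python
import numpy as np

def ula_response(n, angle):
    """Half-wavelength uniform linear array steering vector for azimuth `angle` (rad)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

def clustered_channel(n_r, n_t, n_c, n_p, beta, rng):
    """Sum over clusters and rays of rank-one outer products, as in the model above."""
    H = np.zeros((n_r, n_t), dtype=complex)
    for _ in range(n_c):                       # clusters (paths)
        center_r, center_t = rng.uniform(-np.pi / 2, np.pi / 2, 2)
        for _ in range(n_p):                   # rays (sub-paths) around the cluster center
            phi_r = center_r + rng.normal(0, 0.05)   # AoA of the ray
            phi_t = center_t + rng.normal(0, 0.05)   # AoD of the ray
            g = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # small-scale gain
            H += beta * g * np.outer(ula_response(n_r, phi_r),
                                     ula_response(n_t, phi_t).conj())
    return H

rng = np.random.default_rng(0)
H = clustered_channel(n_r=16, n_t=64, n_c=2, n_p=20, beta=1.0, rng=rng)
print(H.shape)  # -> (16, 64)
```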
\vspace*{-0.2cm}
\subsection{Receiver Architecture}
\vspace*{-0.1cm}
For each user equipped with $n_r$ antennas and $n_q$ one-bit ADCs, we use the adaptive threshold receiver proposed in \cite{abbasMtISIT2019}. Here, we provide a brief description of this receiver and refer the reader to \cite{abbasMtISIT2019} for more details. The block diagram is shown in Fig. \ref{fig:PtP}. In this receiver, the output of the ADCs at the $i$th channel-use is
\begin{align}
\label{eq:ADC_out}
\widehat{\textbf{w}}(i) = Q(\textbf{w}(i)+\widetilde{\textbf{t}}(i)+\textbf{t}),
\end{align}
where $\textbf{w}(i) = \textbf{B}_{(n_q \times b n_r)}(i) \widetilde{\textbf{y}}(i)$
holds $n_q$ elements of the analog signal vector $\widetilde{\textbf{y}}(i)$ that are selected using the binary matrix $\textbf{B}(i)$ and are fed to the ADCs in the $i$th channel-use.
The vector $\widetilde{\textbf{y}}(i) = (\widehat{\textbf{y}}^T(b k+1), \widehat{\textbf{y}}^T(b k+2),\cdots,\widehat{\textbf{y}}^T(b (k+1)))^T$, where $\widehat{\textbf{y}}(i)=\textbf{V}\textbf{y}(i)$ and $k = \textrm{mod}_b(i)-1$, represents the concatenation of $b$ consecutive channel outputs which are processed through the linear analog combiner matrix $\textbf{V}$. These channel outputs are buffered in the delay network and are jointly processed in the analog domain at the receiver.
The vector $\tilde{\textbf{t}}$ represents the adaptive part of the ADC thresholds whose elements follow the equation
\begin{align}
\tilde{t}_k(i) = \textbf{u}^l_{k}(i)\widehat{\textbf{W}}(i)\textbf{u}^r_{k}(i), \text{ for } k \in[n_q].
\end{align}
The matrix $\widehat{\textbf{W}}(i) = [\widehat{\textbf{w}}(kb+1), \widehat{\textbf{w}}(kb+2)\cdots, \widehat{\textbf{w}}(i-1)]$ represents the ADC outputs from the channel-uses $kb + 1$ to $i-1$, where $k = \textrm{mod}_{b}(i)$.
The vector set $\{ (\textbf{u}^l_{1}(i),\textbf{u}^r_{1}(i)),(\textbf{u}^l_{2}(i),\textbf{u}^r_{2}(i)), \cdots, (\textbf{u}^l_{n_q}(i)$ $,\textbf{u}^r_{n_q}(i))\}$ is called the \textit{adaptive threshold coefficient vector set} and denotes the linear rule which determines the threshold of the ADCs at the $i$th channel-use with respect to the ADC outputs in the previous channel uses. The vector $\textbf{t}$ in Equation \eqref{eq:ADC_out} represents the fixed part of the ADC thresholds.
In \cite{abbasMtISIT2019}, it is shown that the adaptive threshold receiver allows for $n_q$-bit quantization using $n_q$ one-bit ADCs where the $n_q$ bits can be allocated to the antennas.
Furthermore, it is proved that in the high SNR regime, this receiver achieves a transmission rate of $n_q$ bits per channel-use, which is optimal among all receivers with the same number of one-bit ADCs. Moreover, this optimal rate is achieved using practical modulation schemes such as pulse amplitude modulation (PAM) and quadrature amplitude modulation (QAM). This is explained in more detail in Section \ref{subsec:PtP}.
\vspace*{-0.2cm}
\section{Communication Schemes and Channel Estimation}
\label{sec:PtP}
In this section, we first consider communication over a PtP MIMO system with adaptive threshold receiver and perfect CSI available at both transmitter and receiver terminals. Then, we investigate a DL scenario. We further investigate channel estimation using one-bit ADCs and the impact of imperfect CSI on the proposed schemes.
\vspace*{-0.2cm}
\subsection{PtP and DL Communication with Perfect CSI}
\label{subsec:PtP}
\textbf{PtP communication:} Consider a PtP MIMO system with the adaptive threshold receiver, where the transmitter is equipped with $n_t$ antennas, the receiver is equipped with $n_r$ antennas and $n_q$ one-bit ADCs, and the channel is represented as in Equation \eqref{eq:channel} with channel matrix \textbf{H}. It is assumed that both transmitter and receiver terminals have perfect CSI. We consider the communication scheme described in {\cite[Theorem 1]{abbasMtISIT2019}} which is summarized in the following.
In the first step, singular value decomposition (SVD) is performed in the analog domain to transform the complex channel \textbf{H} into $s$ parallel real subchannels. Let $\sigma_{k},~k \in [s]$ represent the singular values associated with each real dimension of the channel gain matrix ${\textbf{H}}$ (i.e., $\textrm{Re} \left (\textbf{H}\right)$ and $\textrm{Im} \left (\textbf{H}\right) $). Fix $n_{q,1}, n_{q,2}, \cdots, n_{q,s}\in \mathbb{N}\cup \{0\}$ and $P_1,P_2,\cdots,P_s\in \mathbb{R}^{+}\cup\{0\}$ such that $\sum_{k\in [s]} n_{q,k}=n_q$ and $\sum_{k\in [s]}P_k=P$, where $n_{q,k}$ and $P_k$ are the number of one-bit ADCs and the transmit power allocated to the $k$th subchannel, respectively. In Section \ref{sec:numerical_results}, we provide several low-complexity algorithms for power and ADC allocation and compare their performances in terms of achievable rates under realistic channel models. The matrices \textbf{V} and \textbf{B}, and the adaptive threshold coefficient vector set are taken so as to ensure that the $n_{q,k}$ one-bit ADCs allocated to the $k$th subchannel perform $n_{q,k}$-bit quantization following \cite{abbasMtISIT2019}. The transmitter uses $2^{n_{q,k}}$-PAM signaling for each real subchannel. An example of this receiver is provided below.
\vspace*{-0.1cm}
\begin{Example}
For a PtP single-input single-output (SISO) system with channel gain $\textbf{H} = 1$, and $n_q= 2$, a possible set of values of the receiver parameters are as follows: $\textbf{V} = 1$, $\textbf{u}^l_j = \textbf{1}_j, j\in\{1,2\}$
\begin{gather}
\textbf{t} = \begin{bmatrix}0 \\ 0 \end{bmatrix},
\textbf{B} = \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix},\textbf{u}^r_1(i) = \textbf{u}^r_2(i) = \begin{cases} 0 & \text{$i$ is even}\\
-\frac{1}{2} & \text{$i$ is odd}
\end{cases},
\end{gather}
where $\textbf{1}_j$ is the indicator vector whose $j$th element is one and the rest are zero. To elaborate on how this receiver operates, let us consider the first three channel-uses. After two channel-uses, the thresholds of the ADCs are set to zero and the channel outputs in the first and second channel-uses are fed to the first and second ADC, respectively. In the third channel-use, the thresholds of the ADCs are set to half of their outputs in the previous channel-use and a delayed version of the first two channel outputs is fed to the ADCs. The receiver operates in a manner similar to the second and third channel-uses for the rest of the communication.
\end{Example}
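The operation in this example amounts to a successive-approximation (SAR-like) 2-bit quantization: the first one-bit comparison resolves the sign, and the second uses a threshold shifted by half of the first output. A minimal sketch (the threshold step of $1/2$ follows the example; the unit scaling is illustrative):

```python
def one_bit(x):
    """One-bit ADC: sign comparison against zero."""
    return 1.0 if x >= 0 else -1.0

def two_bit_adaptive(y):
    """Two one-bit comparisons with an adaptively shifted threshold:
    first bit = sign(y); the second comparison is shifted by half of the
    first output, giving effective thresholds at -1/2, 0, +1/2 (2 bits)."""
    b1 = one_bit(y)             # channel-uses 1-2: threshold 0
    b2 = one_bit(y - 0.5 * b1)  # channel-use 3: threshold shifted by b1/2
    return b1, b2

# the four output pairs identify the four intervals
# (-inf,-0.5), (-0.5,0), (0,0.5), (0.5,inf)
for y in (-0.8, -0.2, 0.3, 0.9):
    print(y, two_bit_adaptive(y))
```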
\noindent\textbf{DL communication: }
In DL communication, a TDMA protocol is used where users are scheduled in a round robin fashion so that the BS transmits to one user at each channel-use. We assume the BS transmits to each user once every $n_u$ channel-uses. Each user activates its adaptive threshold receiver for all of the $n_u$ channel-uses, i.e. the user's ADCs are active even when the BS does not transmit to it \cite{abbasMtISIT2019}. As in the PtP scenario, SVD, power and ADC allocation across subchannels are performed to optimize the communication rates. Let $n_{q,i,k}$ denote the number of one-bit ADCs allocated to the $k$th subchannel of the $i$th user. The BS uses $2^{n_u n_{q,i,k}}$-PAM signaling over that subchannel to compensate for the $n_u-1$ channel-uses it does not send information over that subchannel.
Compared to a naive time-sharing strategy where a user's receiver is only active when the BS transmits to that user, the proposed TDMA strategy increases the communication rate.
Under the proposed scheme, at high SNR, each user achieves the optimal transmission rate, which is $n_{q,i}$ bits per channel-use for the $i$th user.
In contrast, the naive TDMA strategy leads to a high SNR achievable rate of $n_{q,i}/{n_u}$ bits per channel-use.
We elaborate more on this in Section \ref{sec:numerical_results}. An example of the receiver parameters is provided below.
\begin{table}[t]
\centering
\label{tab:sim-param}
\caption{Simulation Parameters}
\begin{footnotesize}
\begin{tabular}{ll} \toprule
$\bf Parameter$& $\bf Value$\\ \midrule
Cell radius& $10$ to $50$ m\\
Carrier frequency& $28$ GHz\\
Bandwidth& $1$ GHz\\
Noise spectral density& $-174$ dBm/Hz\\
Noise figure& $6$ dB\\
BS antenna & $8$x$8$ uniform planar array\\
User antenna & $4$x$4$ uniform planar array\\
BS transmit power & $30$ dBm\\
Path loss (LOS) in dB & $61.4+20\log_{10}(d~\text{in m}) + {\cal{N}}(0, 5.8^2)$\\
Path loss (NLOS) in dB & $72+29.2\log_{10}(d~\text{in m}) + {\cal{N}}(0, 8.7^2)$\\
Probability of LOS & $\textrm{exp}(-0.0149d~\text{in m})$\\
\bottomrule\\
\end{tabular}
\end{footnotesize}
\vspace*{-0.4cm}
\end{table}
\vspace*{-0.2cm}
\subsection{Channel Estimation}
\label{subsec:ch}
While Section \ref{subsec:PtP} considers perfect CSI, the effects of one-bit ADCs on the accuracy of channel estimation must also be taken into account. We assume that at the beginning of each coherence interval, the users perform channel estimation using the expectation maximization generalized approximate message passing (EM-GAMP) algorithm investigated in \cite{mo2018channel}, which exploits the sparsity of the angular representation of the mmWave channel to reduce the estimation overhead and enhance the quality of the channel estimate. In \cite{mo2018channel}, the EM-GAMP algorithm for mmWave channel estimation with fully digital receivers equipped with an $m$-bit ADC for each dimension (real and imaginary) of the receiver antennas is studied. \textcolor{black}{Furthermore, the computational complexity of the algorithm is analyzed and methods for complexity reduction are provided.}
Since we assume no prior knowledge about the channel such as long-term statistics, the best worst case allocation of the one-bit ADCs among antennas is the uniform one.
Therefore, to perform channel estimation, we configure the adaptive threshold receiver of the $i$th user to form a fully digital receiver with an $m_i$-bit ADC for each dimension of each antenna, where $m_i = n_{q,i}/(2n_i)$.
After channel estimation, the users send their CSI to the BS. Once the channel estimation step is completed, each user is configured for DL transmission as described in Section \ref{subsec:PtP} using the estimated CSI. In Section \ref{sec:numerical_results}, we demonstrate through several simulations of practical scenarios that using estimated channel through this method does not lead to significant rate-loss when the adaptive threshold receiver is used.
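The uniform ADC split and the resulting per-dimension quantization of the pilot observations can be sketched as follows; the midrise quantizer and its step size are illustrative stand-ins for the receiver configuration, not the EM-GAMP algorithm itself:

```python
import numpy as np

def bits_per_dimension(n_q, n_r):
    """Uniform split of n_q one-bit ADCs over the 2*n_r real dimensions."""
    return n_q // (2 * n_r)

def uniform_quantize(x, m, step=0.5):
    """Elementwise midrise uniform m-bit quantizer (illustrative step size)."""
    levels = 2 ** m
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

rng = np.random.default_rng(0)
n_r, n_q = 16, 32                   # 16 user antennas, 32 one-bit ADCs
m = bits_per_dimension(n_q, n_r)    # 1 bit per real dimension here
y = rng.normal(size=n_r) + 1j * rng.normal(size=n_r)   # toy pilot observation
y_q = uniform_quantize(y.real, m) + 1j * uniform_quantize(y.imag, m)
print(m, y_q.shape)
```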
\vspace*{-0.2cm}
\section{Simulation Results}
\vspace*{-0.2cm}
\label{sec:numerical_results}
In this section, we provide various simulations to establish the performance of the architectures proposed in Section \ref{sec:PtP}
in practical scenarios. We consider a small-cell scenario with a three dimensional network model consisting of a BS and ten users operating at 28 GHz. Users are distributed uniformly in a ring around the BS with inner and outer radii of $10$ and $50$ meters, respectively. The maximum transmit powers of the users and the BS are set to $23$ dBm and $30$ dBm, respectively. We consider the clustered channel model described in Section \ref{sec:System Model}. The values of the parameters in this model are adopted from \cite{akdeniz2014millimeter}, where an empirical approach is taken to estimate these parameters. To elaborate, we assume that the number of clusters in the channel of each user follows a Poisson-max distribution (i.e. $n_{c,j}=\max\{1,\textrm{Poisson}(\lambda)\}$) with mean $\lambda=1.8$ and $20$ rays per cluster. Furthermore, we assume that the BS and users are equipped with $8\times 8$ and $4\times 4$ uniform rectangular antenna arrays, respectively. We consider a maximum spectral efficiency of 8 bps/Hz, which is equivalent to the use of 256 QAM modulation (i.e. 16 PAM per real dimension) as envisioned for the 5G NR standard \cite{5G-NR}. We assume that the coherence time of the channel is $n_c = 10240$ as in \cite{mo2018channel}.
Table I
lists the details of the simulation parameters chosen.
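As a small numerical illustration (not part of the paper's simulation code), the Poisson-max cluster count $n_{c,j}=\max\{1,\mathrm{Poisson}(\lambda)\}$ with $\lambda=1.8$ can be sampled as follows:

```python
import numpy as np

def sample_num_clusters(lam=1.8, size=1, rng=None):
    """Draw cluster counts n_c = max(1, Poisson(lam)),
    i.e., the Poisson-max model for the clustered mmWave channel."""
    rng = np.random.default_rng() if rng is None else rng
    return np.maximum(1, rng.poisson(lam, size=size))

counts = sample_num_clusters(lam=1.8, size=10000,
                             rng=np.random.default_rng(0))
# Every channel has at least one cluster; the empirical mean sits
# slightly above lambda because zero draws are clipped up to one.
```

Note that the mean of the clipped draw is $\lambda + e^{-\lambda} \approx 1.97$ rather than $\lambda$ itself, since zero-cluster draws are raised to one.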
\begin{figure}[t]
\centering
\includegraphics[width = 0.45\textwidth]{fig_PtP_Oct18.pdf}
\caption{Empirical CDF of the achievable rate of the PtP system with perfect CSI when the receiver is equipped with $n_q = 8$ one-bit ADCs.}
\label{fig:PtP_1}
\vspace*{-0.4cm}
\end{figure}
\vspace*{-0.2cm}
\subsection{Power and ADC Allocation with Perfect CSI}
The transmission schemes described in Section \ref{subsec:PtP} use SVD to transform the MIMO channel into a set of parallel subchannels and then distribute the transmit power and ADCs among them. Finding the optimal distribution of the transmit power and ADCs is equivalent to solving a mixed integer programming problem which is known to be NP-hard \cite{bixby2004mixed}. Here, we investigate the performance of several practical heuristic power and ADC allocation approaches. To this end, in this section we consider a PtP scenario with perfect CSI and
$n_q = 8$. We compare the achievable rate for the following power and ADC allocation strategies: \\
$\bullet$ \textbf{WP-UA (Waterfilling Power/Uniform ADCs):} This heuristic employs waterfilling \cite{cover2012elements} for power allocation among subchannels and assigns the ADCs to each subchannel uniformly. Note that this may result in a non-uniform ADC assignment to receive antennas.\\
$\bullet$ \textbf{UP-UA (Uniform Power/Uniform ADCs):} In this approach, both the transmit power at the transmitter and ADCs at the receiver are distributed uniformly among the subchannels.\\
$\bullet$ \textbf{SP-SA (Selection Diversity):} This approach allocates all the power and ADCs to the strongest subchannel.
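The waterfilling step in WP-UA can be sketched as follows (an illustrative implementation, not the authors' code): the water level $\mu$ is found by bisection so that the allocated powers $p_k = \max(0, \mu - 1/g_k)$ sum to the power budget.

```python
import numpy as np

def waterfilling(gains, total_power, iters=100):
    """Allocate total_power across subchannels with the given power
    gains via bisection on the water level mu. Returns per-subchannel
    powers p_k = max(0, mu - 1/g_k)."""
    inv = 1.0 / np.asarray(gains, dtype=float)   # inverse gains 1/g_k
    lo, hi = inv.min(), inv.max() + total_power  # bracket for mu
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu  # water level too high: allocated too much power
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

p = waterfilling([2.0, 1.0, 0.1], total_power=1.0)
```

For gains $[2, 1, 0.1]$ and a unit power budget, the weakest subchannel falls below the water level and receives no power, matching the intuition that waterfilling concentrates power on strong subchannels.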
Fig. \ref{fig:PtP_1} illustrates the empirical cumulative distribution function (CDF) of the achievable rates for the described power and ADC allocation methods for $n_q = 8$. As a performance benchmark and an upper bound on the achievable rates, we consider the truncated Shannon capacity of the MIMO channel, $\min\{ C,n_q\}$, where $C$ is the Shannon capacity. Fig. \ref{fig:PtP_1} suggests that WP-UA outperforms the other approaches, while in the power-limited regime (low rates) the SP-SA approach achieves comparable performance.
We also note that the modulation cap restricts the performance of the SP-SA approach. Furthermore, the WP-UA heuristic performs close to the truncated Shannon upper bound, implying that complex optimization for joint power and ADC allocation would only lead to incremental improvements in the achievable rates. We note that for users with low and intermediate SNRs (for which the achieved data rates are up to $4$ bps/Hz), as well as for users with high SNRs, using only a few one-bit ADCs (e.g., one or two one-bit ADCs per real subchannel as in Fig. \ref{fig:PtP_1}) leads to near-optimal data rates.
\vspace*{-0.3cm}
\subsection{Impact of Imperfect CSI}
\vspace*{-0.1cm}
In this section, we investigate the impact of imperfect channel estimation on the performance of the proposed architectures for PtP and DL scenarios described in Section \ref{subsec:PtP}.
To estimate the channel matrix of each user, we proceed as explained in Section \ref{subsec:ch}. The BS transmits a pilot sequence of length $n_p = 512$, and the users perform channel estimation using three one-bit ADCs per dimension (real and imaginary) of each antenna, which are configured as a $3$-bit ADC using the adaptive threshold receiver. Note that using three one-bit ADCs instead of a $3$-bit ADC can potentially reduce the power consumption at the receiver. One possible avenue for future work is to use knowledge of the long-term channel statistics to reduce the number of required ADCs while achieving similar performance in channel estimation.
\begin{figure}[t]
\centering
\includegraphics[width = 0.45\textwidth]{fig_MAC_Oct18_b3_512.pdf}
\caption{Empirical CDF of the achievable rate of the PtP system with $n_q= 8$ one-bit ADCs at the receiver.}
\label{fig:MAC}
\vspace*{-0.5cm}
\end{figure}
Although we use three one-bit ADCs per dimension ($16\times 2\times 3 =96$ in total) during channel estimation, motivated by Fig. \ref{fig:PtP_1}, we do not need that many ADCs to achieve near-optimal performance during data transmission. The reason is that we can use the CSI (available after channel estimation) to exploit the sparsity of the channel. Moreover, using fewer ADCs leads to lower power consumption during the data transmission phase. Therefore, we assume that the users use only $n_{q,i} = 8$ one-bit ADCs during data transmission. Also, for ADC and power allocation we use the WP-UA heuristic. \textcolor{black}{To calculate the achievable rate of the system, we design the system parameters, such as the modulation points and the analog linear combiner, as described in Section \ref{subsec:PtP} using the estimated channel. Next, we numerically calculate the transition probability matrix of the corresponding discrete-input discrete-output system. Then, to determine the achievable rate, we compute the mutual information given a uniform prior on the input.}
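The last step, computing the mutual information of a discrete-input discrete-output system under a uniform input prior, can be sketched as follows (an illustrative implementation, not the authors' code):

```python
import numpy as np

def mutual_information_uniform(P):
    """I(X;Y) in bits for a channel with transition probability
    matrix P (rows: inputs, columns: outputs), assuming a uniform
    prior on the input symbols."""
    P = np.asarray(P, dtype=float)
    px = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform input prior
    py = px @ P                                 # output marginal
    # I = sum_{x,y} p(x) P(y|x) log2(P(y|x) / p(y)), with 0 log 0 := 0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(P > 0, P / py[None, :], 1.0)
    return float((px[:, None] * P * np.log2(ratio)).sum())

# A noiseless binary channel carries exactly 1 bit per channel use,
# while a completely noisy one carries 0 bits.
I_clean = mutual_information_uniform(np.eye(2))
I_noisy = mutual_information_uniform(np.full((2, 2), 0.5))
```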
The empirical CDF of the PtP achievable rates with perfect and imperfect CSI is illustrated in Fig. \ref{fig:MAC}. As an upper bound, we use $\frac{n_c-n_p}{n_c}\min\{ C, n_q \}$, where $C$ is the Shannon capacity with perfect CSI and high-resolution ADCs. We note that in the presence of channel estimation error, the MIMO subchannels after performing SVD interfere with each other, which degrades the performance. Comparing the CDFs of the achievable rates with perfect and imperfect CSI in Fig. \ref{fig:MAC}, we observe that while the performance loss is small for intermediate and high SNRs, it is larger in the low SNR regime. This is because at low SNR the channel estimation error is high.
Fig. \ref{fig:BC} depicts the empirical CDF of the DL per-user achievable rates with perfect and imperfect CSI. As an upper bound, we use $ \frac{n_c-n_p}{n_c}\min\{C_{t}, n_{q,i}\}$, where $C_t$ is the Shannon capacity with TDMA of equal time-shares in the presence of perfect CSI and high-resolution ADCs. Since $n_{q,i} = 8$, the truncation effect cannot be observed in the range of the plot. We observe that the performance loss caused by the estimation error is small in the intermediate and high SNR regimes. Furthermore, we see that the proposed TDMA approach for DL with the adaptive threshold receiver discussed in Section \ref{subsec:PtP} provides a performance close to the upper bound. Note that, as discussed in Section \ref{subsec:PtP}, while the proposed TDMA strategy leads to higher power consumption at the users compared to naive TDMA, since their ADCs are active in all the channel-uses, it significantly increases the system's achievable rate (up to $4\times$ over naive TDMA).
\begin{figure}[t]
\centering
\includegraphics[width = 0.45\textwidth]{fig_BC_Oct18_b3_512.pdf}
\caption{Empirical CDF of the achievable rates of the users for DL transmission when $n_u = 10$ and the users are equipped with $n_{q,i}=8$ one-bit ADCs. SC, P-CSI, and E-CSI denote Shannon capacity, perfect CSI, and estimated CSI, respectively. }
\label{fig:BC}
\vspace*{-0.5cm}
\end{figure}
\vspace*{-0.3cm}
\section{Conclusion}
\vspace*{-0.2cm}
\label{sec:conclusion}
In this paper, we have considered energy-efficient multiuser communication in a practical mmWave DL scenario where the receivers are equipped with one-bit ADCs. We have compared low-complexity algorithms for power allocation at the transmitter and ADC allocation at the receivers. We have shown that, under practical mmWave settings with limits on the modulation levels and imperfect channel estimation, using low-resolution ADCs with the adaptive threshold receiver and simple allocation algorithms does not notably degrade the performance in terms of achievable rates.
We have provided simulations of multiuser DL communication scenarios which show that the achievable rate of the proposed architecture is close to the optimal Shannon rate of a TDMA protocol with equal time-shares and high-resolution ADCs.
\bibliographystyle{IEEEbib}
|
{
"timestamp": "2020-02-12T02:08:35",
"yymm": "2002",
"arxiv_id": "2002.04221",
"language": "en",
"url": "https://arxiv.org/abs/2002.04221"
}
|
\section{Introduction}
Influenced by real-world dynamics and hardware uncertainty, robots inevitably fail in task executions.
Robot abnormal behaviors result in various hazards, including economic loss, threats to human safety, and decreased social acceptance of robots.
Failure avoidance is an urgent need for improving robot performance \cite{c28} \cite{c29}, yet it is a challenging practice in the real world. First, it is hard for a robot to realize that its performance is abnormal \cite{c27}. Accurate and prompt failure detection is difficult due to the high requirements for both advanced sensing systems and reasoning algorithms. It is challenging to design a reasoning system that both plans task executions and simultaneously monitors execution abnormalities \cite{c30}\cite{c31}. Moreover, even when a robot can realize its abnormalities, it is difficult for it to identify the abnormal executions and correct them correspondingly \cite{c32}\cite{c64}. Lastly, it is expensive to correct robot failures. The extra perceiving, reasoning, and action systems increase costs of robot system design and deployment \cite{c90}.
\begin{figure}[!t]
\centering
\includegraphics [width=0.92 \linewidth ]{images/trans.png}
\caption{An illustration of the attention transfer using the developed \textit{\textbf{H2R-AT}} model. The attention regions of a human (from the human observation perspective) and of the robot (from the robot perceiving perspective) are highlighted. By using \textit{\textbf{H2R-AT}}, attention to abnormal robot executions is transferred from a human to a robot to alert the robot to failures at an early stage, before they happen.}
\label{illustration}
\vspace{-0.6cm}
\end{figure}
\begin{figure*}[ht!]
\centering
\includegraphics [width=0.8 \linewidth ]{images/attention_mapping.png}
\caption{The framework of \textbf{\textit{H2R-AT}} using human verbal reminders for robot failure avoidance. Human attention is embedded in verbal reminders. Feature vectors extracted from human verbal reminders and feature vectors from robot visual perceiving are combined to get confined attention. With the confined attention, the robot can correct its abnormal behavior accordingly.}
\label{H2R-AT}
\vspace{-1em}
\end{figure*}
To address these challenges, a novel human-to-robot attention transfer (\textbf{\textit{H2R-AT}}) method, as shown in Figure \ref{illustration}, was developed in this paper, by introducing human intelligence to detect abnormal robot executions at an early stage and subsequently correct executions to avoid failure. Human attention is reflected in concern for a specific area of what they perceive. When abnormal behaviors occur, humans immediately direct their concern to the faulty area and infer the possible cause based on their domain knowledge. Through \textbf{\textit{H2R-AT}}, human intent is transferred to robots and helps them perceive the abnormal execution. In this research, we envision a human-guided robotic system: a human monitors robot execution and verbally alerts the robot to abnormal executions. The research makes two contributions:
\begin{itemize}
\item {A novel attention transfer method was developed to transfer human attention on unsatisfactory robot executions to a robot, alerting the robot to its abnormality.}
\item {An attention-supported failure correction method was developed to help identify abnormal robot executions for performance improvement.}
\end{itemize}
\section{Related Work}
Attention sharing was widely investigated in the robotics field to indicate human preference and clarify robot confusion \cite{p6}\cite{p7}. The attention mechanism was used in both daily and industrial scenarios to increase robot execution efficiency, improve robot execution accuracy, ensure human safety, and increase robot social acceptance \cite{c34}\cite{c38}\cite{c63}. For example, a social robot used human-like gestures according to human attention in a conversation to increase human engagement in interactions \cite{p3}\cite{c37}\cite{c35}; a service robot changed its trajectory by estimating human-intended places to avoid collision with the human \cite{e1}\cite{p4}; an industrial robot followed human head orientations to find the intended place to improve object search and delivery accuracy \cite{c40}\cite{c39}. Even though attention mechanisms have been used in robotics research, there is minimal work focusing on robot failure avoidance. The \textit{\textbf{H2R-AT}} presented in this paper targets failure avoidance by utilizing an attention mechanism to involve a human who sends timely alerts for abnormal robot behaviors.
Current attention transfer methods require prior user training which is expensive and time-consuming. Non-verbal attention was used to express human expectations to guide robot executions. Safety concern attention was delivered by using human gaze to indicate the cared human location to avoid collision \cite{c45}\cite{c47}. Social etiquette attention was delivered by recognizing facial expressions to suggest human willingness to cooperate \cite{c41}\cite{c42}\cite{c43}. Human preference attention was delivered by using hand gestures to point to the human-desired personal items for daily assistance \cite{c46}\cite{c49}.
Though non-verbal attention is effective in delivering human instructions to robots, it requires extra perceiving devices and reasoning algorithms, such as computer vision systems and image intelligence methods, to extract human instructions, making it expensive for robotic system design. Also, non-verbal attention allows only limited interaction patterns, restricting the content of human instructions sent to a robot and further limiting the implementation scope of robotic systems. In this work, the proposed \textbf{\textit{H2R-AT}} enables a robot to directly process human verbal instructions with an accurate understanding, supporting natural human guidance for robot failure avoidance with no requirements for prior user training or complex vision and sensor systems, thus reducing the cost of robot failure avoidance.
\section{Attention Transfer Model}
When abnormal executions occur, the human gives a verbal alert to correct robot behaviors. The transfer of attention can help the robot understand the human alert and correspondingly identify abnormal robot executions by localizing human attention regions onto robot-perceived actions.
As shown in Figure \ref{H2R-AT}, human attention to suspicious robot behaviors is expressed through verbal reminders. Based on Stacked Attention Networks \cite{san}, we designed a new model, \textbf{\textit{H2R-AT}}, combining analysis methods for human verbal reminder processing and visual feature extraction of abnormal robot executions. The first-layer attention, generated by combining these two factors, is then multiplied with the robot perceiving features and added to the human reminder feature to form the new reminder input for the second-layer attention. Human attention is thereby correlated with specific regions in the robot perceiving, which are directly correlated with certain robot executions, finally identifying the abnormal robot executions according to human attention.
\subsection{Interpreting Human Intention from Verbal Reminders}
Human verbal alerts describe the location and type of abnormal robot executions. Based on a Long Short-Term Memory (LSTM) model, a model suitable for sequential input and commonly used for linguistic data, the semantic meaning embedded in human verbal alerts can be extracted.
The Natural Language Processing (NLP) module can identify different reminders (concerns) accurately because of the use of an LSTM and word embeddings. The LSTM has strong temporal modeling capability for extracting meaning from sequential human verbal instructions, which suits dynamic scenarios where a human gives a continuous description.
The NLP module uses an LSTM instead of other semantic analysis methods because human reminders do not have a fixed length and usually vary both in form and in meaning. Using an LSTM means less training and better accuracy. Human reminders are usually short, fewer than 15 words, which also makes an LSTM suitable.
Consider a human natural language reminder $r = [ r_1 , r_2 , ... r_I ]$, where $I$ represents the length of the reminder and $r_i$ represents a ``one-hot" vector of the $i^{th}$ word of the reminder.
Let $M_{we}$ represent the word embedding matrix, which can show the robot the relations of different words. The matrix is used to convert the words to vectors $M_i$.
\begin{equation}\label{eq:1}
M_i = M_{we} \cdot r_i ,\quad i \in \{1,2,\dots,I\}
\end{equation}
Then the resulting word vectors are fed to the LSTM in sequence, and the hidden vector of the last word, $R_I$, represents the whole-sentence reminder.
\begin{equation}\label{eq:2}
R_i = LSTM(M_i) ,\quad i \in \{1,2,\dots,I\}
\end{equation}
With this algorithm, a robot combines the meaning of a single word and the context of the whole reminder to help it to extract attention-related patterns from human reminders.
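As a concrete sketch of Equations \ref{eq:1} and \ref{eq:2} (illustrative code with random weights standing in for trained parameters, not the paper's implementation), the snippet below embeds one-hot words and runs a minimal LSTM cell, keeping only the last hidden state as the whole-reminder vector $R_I$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_state(one_hot_words, M_we, W, b, hidden=16):
    """Embed each one-hot word r_i as M_i = M_we @ r_i (Eq. 1) and run
    a single LSTM cell over the sequence (Eq. 2); return the last
    hidden state, used as the whole-reminder vector R_I."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for r_i in one_hot_words:
        m_i = M_we @ r_i                      # word embedding lookup
        z = W @ np.concatenate([m_i, h]) + b  # all four gates at once
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

rng = np.random.default_rng(0)
vocab, emb, hid = 10, 8, 16
words = [np.eye(vocab)[k] for k in (3, 1, 4)]   # a 3-word "reminder"
R_I = lstm_last_state(words, rng.normal(size=(emb, vocab)),
                      rng.normal(size=(4 * hid, emb + hid)) * 0.1,
                      np.zeros(4 * hid), hidden=hid)
```

The last hidden state mixes the meaning of each word with the context accumulated over the sentence, which is the property the module relies on.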
\subsection{Locating Robot Attention in Visual Perceiving}
The moment the human raises a reminder is when the robot shows visually observable abnormal executions. The robot records a video at this specific moment from its own perspective to describe the abnormal executions. The visual features of abnormal robot executions are extracted by the following method.
Each frame of the video is turned into a $448\times448$ raw image $I$. The images are then converted into a $14\times14\times512$ feature map $V_f$ by a Convolutional Neural Network (CNN), VGGNet-16 \cite{vgg}. The $14\times14$ dimensions represent the 196 regions of the $448\times448$ picture, and each region, denoted by $F_i$, $i \in [0,195]$, has $32\times32$ pixels. The $512$ is the dimension of the features of each region. In order to combine the word vectors with the image matrix, a perceptron is used to convert $V_f$ to the same dimension as the reminder vectors.
\begin{equation}\label{eq:3}
F_{I} = CNN_{VGG}(I)
\end{equation}
\begin{equation}\label{eq:4}
V_{I} = tanh(W_i \cdot F_I + b_i)
\end{equation}
In Equation \ref{eq:4}, $V_I$ is a matrix whose $i^{th}$ column is the visual feature vector of the $i^{th}$ region of the image.
\subsection{H2R-AT for Attention Transfer}
The \textit{\textbf{H2R-AT}} combines the human reminder and the robot view. By using two layers of attention, the most critical region in the robot view is identified as the actual attention of the robot.
When robots are showing abnormal executions, a robot uses an \textit{\textbf{H2R-AT}} model to gradually filter out unrelated areas within its perceiving scope to focus on the abnormal regions.
We use this stacked attention network instead of other methods because it achieves better alignment between the image and natural language and adapts better to a dynamic process. With two stacked attention layers, it achieves better accuracy than alternative methods.
Given the robot visual perceiving feature matrix $V_I$ from the robot vision and the reminder vector $R_I$ from a human supervisor, the robot can reason by the \textit{\textbf{H2R-AT}} model, as shown in Figure \ref{H2R-AT}.
There are two layers in our \textit{\textbf{H2R-AT}} model. In the first layer, a single layer neural network and a softmax function are used to generate the distribution of robot attention to its view.
\begin{equation}\label{eq:5}
h_{1} = tanh((W_{V_I} \cdot V_I) \oplus (W_{R_I} \cdot R_I + b_{R_I}))
\end{equation}
\begin{equation}\label{eq:6}
p_{1} = softmax(W_{p_{1}}\cdot h_{1}+b_{p_{1}})
\end{equation}
$V_I \in R^{m \times d}$ represents the features of the robot visual perceiving, where $m$ is the dimension of the features of a region and $d$ is the number of regions in the robot image perceiving. The vector $R_I\in R^{m}$ represents the reminder features and is an $m$-dimensional vector. Suppose the dimensions of $W_{R_I}$ and $W_{V_I}$ are $k \times m$ and the dimension of $W_{p_1}$ is $\textit{1}\times k$; then $p_1$ is a $d$-dimensional vector representing the attention distribution of the first layer. $\oplus$ denotes the addition between an $m$-dimensional vector and an $m \times d$ matrix, i.e., adding the vector to each column of the matrix.
Then the robot perceiving features $V_I$ are combined into an $m$-dimensional vector $v$ according to the attention distribution $p_1$, and $v$ is combined with $R_I$ to form a vector $u_1$ which carries both the information of the robot visual perceiving and of the reminder.
\begin{equation}\label{eq:7}
v = p_1 \cdot V_I
\end{equation}
\begin{equation}\label{eq:8}
u_1 = v + R_I
\end{equation}
Because of the use of attention, the more relevant the region is to the abnormal execution, the more likely that a robot will focus on it, which will lead to a more informative $u_1$ and thus a higher accuracy compared to the robot using a full view to reason. However, in a complicated case, one attention layer is not enough to locate the region which is most relevant to the abnormal execution, so the previous attention generating process is iterated by feeding the result of the first attention layer to the second layer, leading to a more fine-grained attention distribution.
\begin{equation}\label{eq:9}
h_{2} = tanh((W_{V'_I} \cdot V'_I) \oplus (W'_{R_I} \cdot u_1 + b'_{R_I}))
\end{equation}
\begin{equation}\label{eq:10}
p_{2} = softmax(W_{p_{2}}\cdot h_{2}+b_{p_{2}})
\end{equation}
Then a new vector $v'$ is generated from $p_2$ in the same way as $v$ and added to $u_1$ to generate a more feature-distinctive vector $u_2$, which again carries both the visual information and the information from the reminder.
\begin{equation}\label{eq:11}
v' = p_2 \cdot V'_I
\end{equation}
\begin{equation}\label{eq:12}
u_2 = v' + u_1
\end{equation}
The generated $u_2$ is used to infer which kind of abnormal execution the robot is making.
\begin{equation}\label{eq:13}
p_{ans} = softmax(W_{u}\cdot u_{2} + b_{u} )
\end{equation}
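One attention layer of Equations \ref{eq:5}--\ref{eq:8} can be sketched as follows (an illustrative NumPy implementation with randomly initialized weights standing in for trained ones; dimensions follow the text, $V_I \in R^{m\times d}$, $R_I \in R^m$):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_layer(V_I, R_I, W_v, W_r, b_r, W_p, b_p):
    """One attention layer: combine region features V_I (m x d) with
    a reminder vector R_I (m,), produce an attention distribution p
    over the d regions, and return the fused vector u = v + R_I."""
    # Broadcast-add the reminder term to every region column (the
    # oplus operation), then squash with tanh.
    h = np.tanh(W_v @ V_I + (W_r @ R_I + b_r)[:, None])   # k x d
    p = softmax((W_p @ h + b_p).ravel())                  # d regions
    v = V_I @ p                                           # m-dim
    return p, v + R_I

# Toy dimensions and random weights standing in for trained ones.
rng = np.random.default_rng(1)
m, d, k = 8, 5, 6
V_I, R_I = rng.normal(size=(m, d)), rng.normal(size=m)
p1, u1 = attention_layer(V_I, R_I,
                         rng.normal(size=(k, m)), rng.normal(size=(k, m)),
                         rng.normal(size=k), rng.normal(size=(1, k)),
                         rng.normal(size=1))
# A second, identical layer takes u1 in place of R_I, yielding the
# refined distribution p2 and the vector u2 used for classification.
```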
\subsection{Attention Supported Failure Avoidance}
Based on this attention transfer, a correction mechanism is supported: when abnormal actions are detected by H2R-AT, the correct actions for failure avoidance are recommended to improve robot performance.
\begin{equation}\label{eq:14}
\hat{\alpha} = \mathop{\arg\max}_{\alpha_i}P(\alpha_i | \alpha_{attention}),
\quad i \in \{1,2,\dots\}
\end{equation}
\section{Validation}
The effectiveness of the \textit{\textbf{H2R-AT}} model was evaluated by both its accuracy and reliability in transferring human attention for robot failure avoidance. The performance of the model is evaluated by comparing the human attention distribution and the model attention distribution.
\begin{figure*}[!ht]
\centering
\includegraphics [scale=0.45 ]{images/visualization.png}
\caption{Visualization of the attention transfer. The \textbf{Baseline} is human attention generated by user study. The three lines in this figure show the simulated robot abnormal execution, robot attention (recommended by \textbf{\textit{H2R-AT}}), and human attention (baseline) respectively. For all four cases, robot attention is highly consistent with human attention, validating the accuracy of \textbf{\textit{H2R-AT}}. Due to the model uncertainties and vague human descriptions, robot attention is slightly more sparse than human attention.}
\label{visualization}
\vspace{-1em}
\end{figure*}
\subsection{Experiment Settings: Robot Task Scenarios and Human User Study}
\textbf{Robot Task Design}. To learn and validate the effectiveness of the \textit{\textbf{H2R-AT}} model in guiding robot failure avoidance, two representative task scenarios, ``serve water for a human in a kitchen" and ``pick up a defective gear in a factory", were designed. To represent typical failures in robot executions, four types of basic abnormal robot executions were designed: ``wrong action, wrong pose, wrong region, and wrong spatial relation". All other abnormal robot executions can be composed from these four basic types. Task scenarios were designed with a JACO robot arm mounted with an HD camera using the simulation platform CRAIhri, which is developed on top of the open-access software V-REP \cite{p16}, a widely-used simulation platform in robotics research \cite{p1}\cite{p2}.
In our experiment, a robot arm JACO completed tasks while monitored by a human instructor. The instructor was asked to give verbal reminders to alert the robot when the robot showed abnormal executions. At the moment the human sends alerts, the visual observation from robot perspective was recorded as video training samples. By using the \textbf{\textit{H2R-AT}} model to align both robot visual perceiving and human verbal alerts at the moment robots showed abnormal executions, the nonlinear relation between human attention and robot attention was modeled to guide robot failure avoidance in an early stage. The robot perceiving was recorded from the mounted HD camera.
\textbf{Human User Study}. To learn and validate the \textit{\textbf{H2R-AT}} model, a human user study was conducted to collect verbal instructions describing abnormal executions and suggestions for robot execution corrections. The user study was conducted on the crowd-sourcing platform Amazon Mechanical Turk \cite{c70}. In total, $252$ English-speaking volunteers were recruited with a $1.5$ dollar payment each. They were required to watch a 10-second video containing abnormal executions and to provide abnormality descriptions, correction suggestions, and the area they paid most attention to at the moment of detecting the abnormal execution. Since the volunteers were asked to give abnormality descriptions, they had to pay attention to the most suspicious area showing the abnormal robot behavior. Thus, by collecting the regions they were paying attention to, the user attention distribution, which serves as the evaluation baseline for robot attention, was generated.
After filtering the questionnaires, about $12000$ verbal reminders were collected to label $12000$ most-typical images of abnormal robot executions.
We divided the collected data equally into two parts, one for training and one for testing; both parts contain all four basic abnormal execution types and both scenarios.
\subsection{H2R-AT Performance in Attention Transfer}
\textbf{H2R-AT Model Accuracy}. As shown in Figure \ref{visualization}, human attention was successfully transferred to robot attention. The three lines denote four types of abnormal executions, model-transferred robot attention, and actual human attention (baseline), respectively.
The accuracy of the \textit{\textbf{H2R-AT}} model in attention transfer is calculated as the average of the precision and the recall. Various confidence-score thresholds for the predictions made by the \textit{\textbf{H2R-AT}} model were used to accept or reject the true positives. On each curve, one dot denotes a recall-precision pair for one threshold; one curve denotes the prediction performance of \textit{\textbf{H2R-AT}} for one category of task scenarios. Setting the confidence threshold to 0.5 as a reference, the average precision was about $73.73\%$ and the average recall about $73.63\%$. The precision for the four cases is $76.73\%$, $72.46\%$, $73.77\%$, and $71.99\%$; the recall is $70.07\%$, $78.75\%$, $70.90\%$, and $74.79\%$. The stable performance and the P-R curves, which are close to the upper-right corner, show the effectiveness of the \textit{\textbf{H2R-AT}} model in transferring human attention to a robot in various scenarios.
\subsection{H2R-AT Reliability Analysis}
\textbf{Definition of Reliability.} Unlike human attention, which concentrates on one area, robot attention is distributed in several regions due to the model and data uncertainty.
For example, in some ``wrong pose" cases,
the robot mapped some of its attention onto the elbow, while the correct attention was on the fingers. The most popular attention regions selected by volunteers in the user study were set as the baseline to measure the model reliability of attention transfer. If the model-recommended attention regions are inconsistent with the human attention region, the predicted attention is unreliable in supporting robot failure avoidance.
\textbf{Analysis of Model Instability.} Vague descriptions caused false attention mapping: the robot attention consistently focused on the same undesired region because the description was not clear enough to point out the exact part causing the error.
There are also cases in which the generated robot attention focused on random parts that are unrelated to the robot. Such cases mislead the robot to an undesired result. A similarity in features between different parts of the robot perceiving makes these cases unavoidable by simply retraining the model.
Thus, in order to reduce attention focusing on robot-unrelated things, a filter is designed to remove these distractions, such as walls and carpet. The critical part often appears near the robot and the center of the view, so the edge parts of the robot view were filtered out to help the robot with accurate attention mapping. It turns out that with the filter, the generated attention is more focused on the robot and the object.
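A minimal sketch of such an edge filter (a hypothetical version, assuming a $14\times14$ attention map as produced earlier; not the paper's implementation) zeroes out the border regions and renormalizes:

```python
import numpy as np

def center_filter(attention, border=2):
    """Zero out attention in the border cells of an attention map and
    renormalize, so the remaining mass concentrates near the robot
    and object at the center of the view (hypothetical edge filter)."""
    a = np.asarray(attention, dtype=float).copy()
    mask = np.zeros_like(a)
    mask[border:-border, border:-border] = 1.0  # keep only the center
    a *= mask
    return a / a.sum()

att = np.random.default_rng(2).random((14, 14))
filtered = center_filter(att, border=2)
# Border cells carry zero attention; the map still sums to one.
```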
\section{Conclusion}
In this paper, \textit{\textbf{H2R-AT}}, a novel model using human attention to avoid robot execution failure was proposed. The robot was enabled to identify its abnormal executions by interpreting human verbal reminders. Four types of robot abnormal executions - wrong action, wrong region, wrong pose, and wrong spatial relation - in both daily and industrial scenarios were designed. Volunteers were recruited to provide verbal reminders for the robots and labeled their concerned executions for training the \textit{\textbf{H2R-AT}}. With an average accuracy of $73.68\%$ in transferring human attention and $67.04\%$ performance improvement, the feasibility of verbally transferring human attention to robots for failure avoidance was validated, showing the great potential in using this \textit{\textbf{H2R-AT}} model for naturally integrating human intelligence for robot failure avoidance, in scenarios from daily assistance to cooperative manufacturing.
Though this work was based on a simulated environment, there is good reason to believe the model can be used in a real-world environment. To start with, robot executions were the only simulated part; the model itself and the instructions collected from the volunteers were real. That means that, without considering mechanical failures, the environment was no different from a real-world environment. Moreover, the four tasks validated in this work covered the four basic abnormal robot actions; thus, every abnormal robot behavior in the real world is essentially one of the tasks in this work.
To implement this model to practically guide robot executions in a real-world environment, real-world data, i.e., appropriate visual observations of practical robot behaviors as well as human verbal descriptions of the abnormal robot executions, need to be provided to train a practical model for guiding real-world human-robot interaction. In the future, novel attention-based correction methods will be designed to accurately correct robot executions after human reminders. Also, the attention region identification can be improved by using rule-based methods to narrow down the search.
\addtolength{\textheight}{-4cm}
\section{Introduction}
In the last 20 years, the study of cubic fourfolds has been a central research topic, due, for instance, to their rich associated hyperk\"ahler geometry and the still open question of whether they are rational or irrational. One foundational work is \cite{Hassett}, where Hassett studied special cubic fourfolds, i.e.\ cubic fourfolds containing a surface which is not homologous to a complete intersection. Special cubic fourfolds form divisors in the moduli space of cubic fourfolds parametrized by a positive even integer $d$ called the discriminant. Moreover, depending on the value of
$d$, the cubic fourfold is related to a degree-$d$ polarized K3 surface via Hodge theory.
In order to study this relation on the level of period domains and moduli spaces, Hassett introduced the notions of marked and labelled special cubic fourfolds. Depending on $d$, the moduli space of discriminant $d$ marked cubic fourfolds is either isomorphic to or a two-to-one covering of the moduli space of discriminant $d$ labelled cubic fourfolds. Moreover, if $d$ is such that an associated K3 surface exists, this is used to construct an either generically injective or degree-two rational map from the moduli space of degree-$d$ polarized K3 surfaces to the divisor of discriminant-$d$ special cubic fourfolds. This difference was further investigated in \cite{BrakkeeTwoK3}, where the geometry of the covering involution arising in the second case is completely described.
\medskip
In this paper, we deal with similar questions in the case of Gushel--Mukai fourfolds. These are smooth Fano fourfolds obtained generically as quadric sections of linear sections of the Grassmannian $\text{Gr}(2,5)$, and they share many similarities with cubic fourfolds. After defining marked and labelled GM fourfolds of discriminant $d$ and their associated moduli stacks in Section \ref{section-markedlabelled}, we show that they provide equivalent notions in this case.
\begin{theorem}[Corollary \ref{IsoModStacks}]
\label{thm_isomodstacks}
The moduli stacks of labelled and marked Hodge-special GM fourfolds are isomorphic.
\end{theorem}
As for cubic fourfolds, this result is particularly interesting when we specialize to GM fourfolds with Hodge-associated K3 surfaces, as defined in \cite{debarre_iliev_manivel_2015}. Recall that a GM fourfold has an associated K3 surface if and only if its discriminant satisfies a certain numerical condition \eqref{eq_astast} -- see Sections \ref{section_HspGM}
and \ref{DefRationalMap}. For these values of the discriminant, applying Theorem \ref{thm_isomodstacks}, we interpret the condition of having an associated K3 surface on the level of moduli stacks as follows.
\begin{theorem}
\label{cor_ratmap}
Let $d$ be a positive integer satisfying condition \eqref{eq_astast}. Then there exists a dominant rational map defined in \eqref{eq_ratmap} from the moduli stack of Hodge-special GM fourfolds with discriminant $d$ to the moduli space of degree-$d$ polarized K3 surfaces that sends a GM fourfold to a Hodge-associated K3 surface.
\end{theorem}
As an application, we can count fibers of the period map for GM fourfolds whose elements are Fourier--Mukai partners. By \cite{kuznetsov_perry} the bounded derived category of a GM fourfold $X$ has a semiorthogonal decomposition of the form
\begin{equation*}
\text{D}^b(X)= \langle \text{Ku}(X), \mathcal{O}_X, \mathcal{U}_X^*, \mathcal{O}_X(1), \mathcal{U}_X^*(1)\rangle,
\end{equation*}
where $\mathcal{U}_X^*$ is the restriction to $X$ of the dual of the tautological rank-$2$ subbundle on $\text{Gr}(2,5)$ and $\text{Ku}(X)$, defined as the orthogonal complement of the exceptional collection $\mathcal{O}_X, \mathcal{U}_X^*, \mathcal{O}_X(1), \mathcal{U}_X^*(1)$, is a subcategory of K3 type. We say that a GM fourfold $X'$ is a Fourier--Mukai partner of $X$ if there is an equivalence $\text{Ku}(X) \xrightarrow{\sim} \text{Ku}(X')$ of Fourier--Mukai type. As shown in \cite[Theorem 4.4]{debarre_iliev_manivel_2015}, the period map of GM fourfolds has smooth $4$-dimensional fibers, so we cannot expect a finite number of Fourier--Mukai partners as in the case of K3 surfaces \cite{BriMac} or cubic fourfolds \cite[Theorem 1.1]{Huy}. Nevertheless, Theorem \ref{cor_ratmap} allows us to prove a counting formula for the number of period points of Fourier--Mukai partners of very general GM fourfolds with a Hodge-associated K3 surface. See \cite{Oguiso} and \cite{Pert1} for the analogous statements for K3 surfaces and cubic fourfolds, respectively.
\begin{theorem}[Proposition \ref{prop_FMpGM}]
\label{thm_FMp}
Let $X$ be a very general Hodge-special GM fourfold with discriminant $d$ satisfying \eqref{eq_astast}. Let $m$ be the number of non-isomorphic Fourier--Mukai partners of its Hodge-associated K3 surface. Then when $d \equiv 4 \mod 8$ (resp.\ $d \equiv 2 \mod 8$), there are $m$ (resp.\ $2m$) fibers of the period map of GM fourfolds such that, when non-empty, their elements are Fourier--Mukai partners of $X$. Moreover, all Fourier--Mukai partners of $X$ are obtained in this way.
\end{theorem}
We end by proving the analogue of Theorem \ref{cor_ratmap} for GM fourfolds with an associated twisted K3 surface. Recall that by \cite[Theorem 1.1]{Pert2} this is equivalent to having discriminant of the form $d'=dr^2$ with $d$ satisfying $(\ast\ast)$ -- see Section \ref{section_GMvstwistedK3}. On the other hand, the moduli space of polarized twisted K3 surfaces with fixed degree and order was recently constructed in \cite{BrakkeeTwistedK3}.
\begin{theorem}[Corollary \ref{RatMapTwisted}]
\label{thm_ratmaptwisted}
Let $d'$ be a positive integer such that a very general GM fourfold of discriminant $d'$ admits an associated polarized twisted K3 surface of degree $d$ and order $r$. There is a dominant rational map from the moduli stack of Hodge-special GM fourfolds of discriminant $d'$ to a component of the moduli space of twisted K3 surfaces of degree $d$ and order $r$, sending a GM fourfold of discriminant $d'$ to an associated twisted K3 surface.
\end{theorem}
Finally, as in the untwisted setting, we apply Theorem \ref{thm_ratmaptwisted} to study Fourier--Mukai partners of a very general GM fourfold with associated twisted K3 surface.
\begin{theorem}[Proposition \ref{prop_FMptwistedcase}]
\label{thm_FMptwisted}
Let $d'$ be a positive integer such that a very general GM fourfold $X$ of discriminant $d'$ admits an associated polarized twisted K3 surface $(S,\alpha)$ of degree $d$ and order $r$. Let $m'$ be the number of non-isomorphic Fourier--Mukai partners of $(S,\alpha)$ of order $r$.
Then when $d' \equiv 0 \mod 4$ (resp.\ $d' \equiv 2 \mod 8$), there are at least $m'$ (resp.\ $2m'$) fibers of the period map of GM fourfolds such that, when non-empty, their elements are Fourier--Mukai partners of $X$.
\end{theorem}
\begin{plan}
In Section \ref{section_introGM} we recall the definition of (Hodge-special) GM fourfolds and some results concerning their Hodge theory. In Section \ref{section-markedlabelled} we define marked and labelled Hodge-special GM fourfolds and we prove Theorem \ref{thm_isomodstacks}. Section \ref{section_GMvsK3} is devoted to the construction of the rational map of Theorem \ref{cor_ratmap} and the proof of Theorem \ref{thm_FMp}. Finally, in Section \ref{section_GMandtwistedK3} we recall the construction of moduli spaces of twisted K3 surfaces with fixed order and degree, and we prove Theorems \ref{thm_ratmaptwisted} and \ref{thm_FMptwisted}.
\end{plan}
\begin{notation}
Given a lattice $L$, we denote by $\Disc L:=L^{\vee}/L$ its discriminant group and we set $\widetilde{\tO}(L) := \ker(\tO(L) \to \tO(\Disc L))$. For any integer $m \neq 0$ we denote by $L(m)$ the lattice $L$ with the intersection form multiplied by $m$.
We denote by $I_1$ the lattice $\mathbb{Z}$ with bilinear form $(1)$ and set $I_{r,s}:=I_1^{\oplus r} \oplus I_1(-1)^{\oplus s}$ and $A_1:=I_1(2)$. We write $U$ for the hyperbolic plane $\left(\mathbb{Z}^{\oplus 2},\bigl(\begin{smallmatrix}
0 & 1 \\
1 & 0
\end{smallmatrix}\bigr)\right)$ and $E_8$ for the unique even unimodular lattice of signature $(8,0)$.
For $3 \geq i \geq j \geq 0$ the Schubert cycles on the Grassmannian $\text{Gr}(2,5)$ are denoted by $\sigma_{i,j} \in \HH^{2(i+j)}(\text{Gr}(2,5),\mathbb{Z})$ and we set $\sigma_i:=\sigma_{i,0}$.
\end{notation}
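As a small worked example of this notation (our illustration, not from the text): for $A_1 = I_1(2)$, i.e.\ the lattice $\mathbb{Z}e$ with $e^2=2$, the dual lattice is $\tfrac{1}{2}\mathbb{Z}e$, so

```latex
\Disc A_1 = \tfrac{1}{2}\mathbb{Z}e \,/\, \mathbb{Z}e \cong \mathbb{Z}/2\mathbb{Z},
```

and since $\tO(\mathbb{Z}/2\mathbb{Z})$ is trivial, $\widetilde{\tO}(A_1)=\tO(A_1)=\{\pm\id\}$.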
\begin{ack}
We thank Gerard van der Geer and Mingmin Shen for their interest, and Thorsten Beckmann for useful discussions. We are grateful to Daniel Huybrechts, Alex Perry and Paolo Stellari for suggestions on the preliminary version of this work.
This work started when the second author was visiting the Max-Planck-Institut f\"ur Mathematik in Bonn whose hospitality is gratefully acknowledged.
The first author is supported by
NWO Innovational Research Incentives Scheme 016.Vidi.189.015. The second author is supported by the ERC Consolidator Grant ERC-2017-CoG-771507, Stab-CondEn.
\end{ack}
\section{Gushel--Mukai fourfolds}
\label{section_introGM}
In this section, we review the definition of Gushel--Mukai fourfolds and some known results concerning their Hodge theory. Our main references are \cite{debarre_iliev_manivel_2015, debarre_kuznetsov_2019}. We assume the base field is $\mathbb{C}$.
\subsection{Cohomology and period domain of Gushel--Mukai fourfolds}
Let $V_5$ be a $5$-dimensional $\mathbb{C}$-vector space and denote by $\text{CGr}(2,V_5)$ the cone over the Grassmannian $\text{Gr}(2,V_5)$ with vertex $\nu:=\mathbb{P}(\mathbb{C})$, embedded in $\mathbb{P}(\mathbb{C} \oplus \bigwedge^2 V_5) \cong \mathbb{P}^{10}$ via the Pl\"ucker embedding of $\text{Gr}(2,V_5) \subset \mathbb{P}(\bigwedge^2 V_5)$.
\begin{definition}
A \emph{Gushel--Mukai (GM) fourfold} is a smooth $4$-dimensional intersection
$$X:=\text{CGr}(2,V_5) \cap Q$$
where $Q \subset \mathbb{P}(W)$ is a quadric hypersurface in a linear space $\mathbb{P}(W) \cong \mathbb{P}^8 \subset \mathbb{P}(\mathbb{C} \oplus \bigwedge^2 V_5)$.
\end{definition}
Since $X$ is smooth, the linear projection $\gamma_X\colon X \to \text{Gr}(2,V_5)$ from the vertex $\nu$ is a regular map.
The restriction of the hyperplane class on $\mathbb{P}(\mathbb{C} \oplus \bigwedge^2 V_5)$ defines a natural polarization $H:=\gamma_X^*\sigma_1$ on $X$ with degree $H^4=10$. By the adjunction formula, the canonical divisor is $K_X=-2H$, so $X$ is a Fano fourfold of degree $10$ and index $2$.
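The degree can be checked by a short Schubert-calculus computation (a sketch using the classical fact $\deg \text{Gr}(2,5)=\sigma_1^6=5$ in the Pl\"ucker embedding, which is not spelled out in the text): the cone has the same degree as the Grassmannian, and cutting with the quadric $Q$ doubles it,

```latex
H^4 = \deg X = \deg Q \cdot \deg \mathrm{CGr}(2,V_5) = 2 \cdot \sigma_1^{6} = 2 \cdot 5 = 10.
```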
The moduli stack $\mathcal{M}_4$ of GM fourfolds is a smooth, irreducible Deligne--Mumford stack of finite type over $\mathbb{C}$ of dimension $24$ \cite[Prop.\ 2.4]{kuznetsov_perry}.
\medskip
By \cite[Lemma 4.1]{Iliev_Maniv} the Hodge diamond of $X$ is
\[
\begin{tabular}{ccccccccccccccc}
&&&&&&&1\\
&&&&&&0&&0&\\
&&&&&0&&1&&0\\
&&&&0&&0&&0&&0\\
&&&0&&1&&22&&1&&0.
\end{tabular}
\]
By \cite[Proposition 5.1]{debarre_iliev_manivel_2015} there is an isomorphism of lattices
$$\HH^4(X,\mathbb{Z}) \cong \Lambda:= I_{22,2}.$$
Note that the rank-2 lattice $\HH^4(\text{Gr}(2,V_5),\mathbb{Z})$ embeds into $\HH^4(X,\mathbb{Z})$ via $\gamma_X^*$. The \emph{vanishing lattice} of $X$ is the sublattice
$$\HH^4(X,\mathbb{Z})_{00}:=\left\lbrace x \in \HH^4(X,\mathbb{Z}) \mid x \cdot \gamma_X^{*}(\HH^4(\text{Gr}(2,V_5),\mathbb{Z}))=0 \right\rbrace.$$
By \cite[Proposition 5.1]{debarre_iliev_manivel_2015} it is isomorphic to
$$\Lambda_{00}:=E_8^{\oplus 2} \oplus U^{\oplus 2} \oplus A_1^{\oplus 2}.$$
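As a consistency check (ours, not from the cited reference), the ranks and signatures add up:

```latex
\rk \Lambda_{00} = 2\cdot 8 + 2\cdot 2 + 2\cdot 1 = 22, \qquad
\operatorname{sign} \Lambda_{00} = 2\,(8,0) + 2\,(1,1) + 2\,(1,0) = (20,2),
```

which is compatible with $\Lambda_{00}$ being the orthogonal complement in $\Lambda \cong I_{22,2}$ of the rank-$2$ positive definite sublattice $\gamma_X^*(\HH^4(\text{Gr}(2,V_5),\mathbb{Z}))$, since $(20,2)+(2,0)=(22,2)$.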
Note that the intersection form on $\gamma_X^*(\HH^4(\text{Gr}(2,V_5),\mathbb{Z}))$ with respect to the basis $\gamma_X^*\sigma_{1,1}, \gamma_X^*\sigma_2$ is represented by the matrix $\begin{pmatrix}
2 & 2\\
2 & 4
\end{pmatrix}$. Fixing a primitive embedding of $\Lambda_{00}$ into $\Lambda$, we set $\Lambda_G:=\Lambda_{00}^{\perp}\subset\Lambda$ and we can find two generators $\lambda_1$ and $\lambda_2$ of $\Lambda_G$ such that the intersection matrix is
$\begin{pmatrix}
2 & 0\\
0 & 2
\end{pmatrix}$.
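For instance (one explicit choice; the text only asserts existence of such generators), one can take $\lambda_1 := \gamma_X^*\sigma_{1,1}$ and $\lambda_2 := \gamma_X^*(\sigma_2 - \sigma_{1,1})$. Abbreviating $\gamma_X^*\sigma$ by $\sigma$ and using the Gram matrix above,

```latex
\lambda_1^2 = 2, \qquad
\lambda_1\cdot\lambda_2 = \sigma_{1,1}\cdot\sigma_2 - \sigma_{1,1}^2 = 2-2 = 0, \qquad
\lambda_2^2 = \sigma_2^2 - 2\,\sigma_{1,1}\cdot\sigma_2 + \sigma_{1,1}^2 = 4-4+2 = 2.
```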
The period domain of GM fourfolds is the complex manifold
\begin{equation*}
\label{locperiodom}
\Omega(\Lambda_{00}):= \lbrace w \in \mathbb{P}(\Lambda_{00} \otimes \mathbb{C}) \mid w \cdot w =0, w \cdot \bar{w}<0 \rbrace.
\end{equation*}
Note that the group $\widetilde{\tO}(\Lambda_{00})$ acts properly discontinuously on $\Omega(\Lambda_{00})$ and it is isomorphic to
\[\Gamma:=\{g\in\tO(\Lambda)\mid g|_{\Lambda_G}=\id_{\Lambda_G}\}.\]
The quotient
$$\mathcal{D}:=\Omega(\Lambda_{00})/\widetilde{\tO}(\Lambda_{00})$$
is an irreducible quasi-projective variety of dimension $20$ and by \cite[Theorem 4.4]{debarre_iliev_manivel_2015} the period map $p\colon \mathcal{M}_4 \to \mathcal{D}$ is dominant as a map of stacks with smooth $4$-dimensional fibers. The period point of $X$ is $p(X) \in \mathcal{D}$.
\subsection{Hodge-special Gushel--Mukai fourfolds}
\label{section_HspGM}
A very general GM fourfold $X$ satisfies $\rk \HH^{2,2}(X,\mathbb{Z})=2$.
We call $X$ \emph{Hodge-special} if $\HH^{2,2}(X,\mathbb{Z})$ contains a rank-three primitive sublattice containing $\gamma_X^*(\HH^4(\text{Gr}(2,V_5),\mathbb{Z}))$.
Period points of Hodge-special GM fourfolds lie in codimension-$1$ Noether--Lefschetz loci in $\mathcal{D}$.
Indeed, let $L_d \subset \Lambda$ be a primitive rank-three positive definite sublattice containing $\Lambda_G$, with discriminant $d$. By \cite[Lemma 6.1]{debarre_iliev_manivel_2015} we have $d \equiv 0,2$ or $4 \mod 8$.
Consider the codimension-$1$ locus
$$\Omega(L_d^\perp):=\mathbb{P}(L_d^\perp \otimes \mathbb{C}) \cap \Omega(\Lambda_{00})$$
where $L_d^\perp$ is the orthogonal complement of $L_d$ in $\Lambda$. Let
$$\mathcal{D}_{L_d}\subset\mathcal{D}$$
be the image of $\Omega(L_d^{\perp})$ under the
map $\Omega(\Lambda_{00})\to \mathcal{D}$. Then the period of any Hodge-special GM fourfold lies in $\mathcal{D}_{L_d}$ for some $L_d$.
By \cite[Proposition 6.2]{debarre_iliev_manivel_2015},
the lattice $L_d$ only depends on the discriminant $d$, and depending on $d$, there are one or two embeddings of $L_d$ into $\Lambda$ up to composition with elements of $\widetilde{\tO}(\Lambda_{00})$.
To be precise, up to the action of $\widetilde{\tO}(\Lambda_{00})$, there exists $\tau \in L_d$ such that $\lambda_1, \lambda_2, \tau$ is a basis for $L_d$ with intersection matrix given by
\[
\begin{pmatrix}
2 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 2k
\end{pmatrix} \quad \text{if }d=8k,
\]
\[
\begin{pmatrix}
2 & 0 & 1 \\
0 & 2 & 0 \\
1 & 0 & 2k+1
\end{pmatrix} \quad \text{or} \quad
\begin{pmatrix}
2 & 0 & 0 \\
0 & 2 & 1 \\
0 & 1 & 2k+1
\end{pmatrix} \quad \text{if }d=2+8k,
\]
\[
\begin{pmatrix}
2 & 0 & 1 \\
0 & 2 & 1 \\
1 & 1 & 2k+2
\end{pmatrix} \quad \text{if }d=4+8k.
\]
In the case $d=2+8k$, denote by $\mathcal{D}'_d$ and $\mathcal{D}''_d$ the divisors $\mathcal{D}_{L_d}$ corresponding to the first and second embedding of $L_d$, respectively.
It follows from \cite[Corollary 6.3]{debarre_iliev_manivel_2015} that the periods of Hodge-special GM fourfolds are contained in the union of
\begin{enumerate}[noitemsep,label=(\roman*)]
\item the irreducible hypersurfaces $\mathcal{D}_d:=\mathcal{D}_{L_d}\subset \mathcal{D}$ for all $d\equiv 0\mod 4$;
\item the unions $\mathcal{D}_d:=\mathcal{D}'_d\cup\mathcal{D}''_d$ for all $d\equiv 2\mod 8$.
\end{enumerate}
Moreover, there exists an involution $r \in \tO(\Lambda_{00})$ which is not in $\widetilde{\tO}(\Lambda_{00})$, inducing an involution $r_{\mathcal{D}}$ on $\mathcal{D}$ which exchanges $\mathcal{D}_d'$ and $\mathcal{D}_d''$ when $d \equiv 2 \mod 8$.
\medskip
We say that a Hodge-special GM fourfold $X$ has discriminant $d$ if its period point belongs to $\mathcal{D}_d$.
The moduli stack of Hodge-special GM fourfolds of discriminant $d$ is $\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d$. Note that a very general $X \in \mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d$ satisfies $\text{rk}\HH^{2,2}(X,\mathbb{Z})=3$.
It is known that each of the irreducible divisors $\mathcal{D}_{L_d}$ intersects the image of $p$ for $d > 8$ \cite[Theorem 8.1]{debarre_iliev_manivel_2015}, so $\mathcal{D}_{L_d}\cap\im(p)$ contains an open dense subset of $\mathcal{D}_{L_d}$.
It follows that the restriction $p\colon \mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d\to\mathcal{D}_d$ is still dominant when $d>8$.
\section{Marked and labelled Gushel--Mukai fourfolds}
\label{section-markedlabelled}
In analogy to \cite[Definition 3.1.3]{Hassett} for cubic fourfolds, we give the following definition. Let $L_d$ be a rank-$3$ positive definite lattice containing $\Lambda_G$.
\begin{definition}
A \emph{marked} Hodge-special GM fourfold is a GM fourfold $X$ together with a primitive embedding $\varphi\colon L_d\hookrightarrow \HH^{2,2}(X,\mathbb{Z})$ preserving the classes $\lambda_1$ and $\lambda_2$.
A \emph{labelled} Hodge-special GM fourfold is a GM fourfold $X$ together with a primitive sublattice $L_d\subset \HH^{2,2}(X,\mathbb{Z})$.
\end{definition}
So a labelling of a GM fourfold is the image of a marking.
\medskip
Two marked GM fourfolds
$(X,\varphi\colon L_d\hookrightarrow \HH^{2,2}(X,\mathbb{Z}))$ and ${(X',\varphi'\colon L_d\hookrightarrow \HH^{2,2}(X',\mathbb{Z}))}$
are isomorphic if there is an isomorphism $f\colon X\to X'$ such that $f^*\colon \HH^4(X',\mathbb{Z})\to \HH^4(X,\mathbb{Z})$ satisfies
$f^*\circ \varphi = \varphi'$.
Two labelled GM fourfolds
$(X,L_d\subset \HH^{2,2}(X,\mathbb{Z}))$ and ${(X', L_d\subset \HH^{2,2}(X',\mathbb{Z}))}$
are isomorphic if there exists an isomorphism $f\colon X\to X'$ such that $f^*$ preserves $L_d$.
\begin{remark}
Consider the sets of isomorphism classes of marked and labelled GM fourfolds. There is a map
\[\{\text{ marked GM 4-folds }\}/_{\cong} \to \{\text{ labelled GM 4-folds }\}/_{\cong}\]
sending $(X,\varphi\colon L_d\hookrightarrow \HH^{2,2}(X,\mathbb{Z}))$ to
$(X,\varphi(L_d)\subset \HH^{2,2}(X,\mathbb{Z}))$. It is surjective but, a priori, need not be injective:
the lattice $L_d$ could have non-trivial automorphisms fixing the $\lambda_i$.
\end{remark}
Fix an embedding $L_d\hookrightarrow \Lambda$.
Recall that $\mathcal{D}_{L_d}$ is the image of $\Omega(L_d^{\perp})$ under
${\Omega(\Lambda_{00})\to\mathcal{D}=\Omega(\Lambda_{00})/\Gamma}$.
Let
\begin{align*}
G(L_d) &:= \{g\in\Gamma : g(L_d) = L_d\}\\
H(L_d) &:= \{g\in G(L_d): g|_{L_d} = \id_{L_d}\}
\end{align*}
and define
\begin{align*}
\mathcal{D}_{L_d}^{\lab} &:= \Omega(L_d^{\perp})/G(L_d)\\
\mathcal{D}_{L_d}^{\mar} &:= \Omega(L_d^{\perp})/H(L_d).
\end{align*}
Then we have surjective maps
\[\mathcal{D}_{L_d}^{\mar}\to \mathcal{D}_{L_d}^{\lab}\to\mathcal{D}_{L_d}\subset\mathcal{D}.\]
When $d\equiv 0\mod 4$, we set $\mathcal{D}_d^{\lab}:=\mathcal{D}_{L_d}^{\lab}$ and
$\mathcal{D}_d^{\mar}:=\mathcal{D}_{L_d}^{\mar}$. When $d\equiv 2\mod 8$, we have two embeddings $\mathcal{D}_{L_d}\xrightarrow{\cong}\mathcal{D}_d'\subset\mathcal{D}$ and $\mathcal{D}_{L_d}\xrightarrow{\cong}\mathcal{D}_d''\subset\mathcal{D}$; let $(\mathcal{D}_d')^{\lab}$ and $(\mathcal{D}_d'')^{\lab}$ be the corresponding spaces $\mathcal{D}_{L_d}^{\lab}$ over $\mathcal{D}_d'$ and $\mathcal{D}_d''$, respectively.
Note that if $x\in\mathcal{D}_d'\cap\mathcal{D}_d''$, then there are two embeddings of $L_d$ into the (2,2)-part of the corresponding Hodge structure on $\Lambda_{00}$ that are in different $\widetilde{\tO}(\Lambda_{00})$-orbits. So $x$ has two labellings, giving rise to one point in $(\mathcal{D}_d')^{\lab}$ and one in $(\mathcal{D}_d'')^{\lab}$. Accordingly, we let $\mathcal{D}_d^{\lab}$ be the disjoint union
\[\mathcal{D}_d^{\lab}:=(\mathcal{D}_d')^{\lab}\coprod(\mathcal{D}_d'')^{\lab}.\]
Analogously, define $\mathcal{D}_d^{\mar}:=(\mathcal{D}_d')^{\mar}\coprod(\mathcal{D}_d'')^{\mar}$.
Then the moduli stacks of labelled and marked Hodge-special GM fourfolds of discriminant $d$ are $\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d^{\lab}$ and
$\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d^{\mar}$, respectively.
\medskip
In the rest of this section, we analyze the natural surjective morphisms
\[\mathcal{D}_{L_d}^{\mar}\to \mathcal{D}_{L_d}^{\lab}\to\mathcal{D}_{L_d}.\]
\begin{lemma}
\label{lemma_mapnu}
The natural map $\nu\colon \mathcal{D}_{L_d}^{\lab}\twoheadrightarrow\mathcal{D}_{L_d}$ is the normalization map.
\end{lemma}
\begin{proof}
The argument is the same as in the case of cubic fourfolds in \cite[Section~2.3]{BrakkeeTwoK3}.
\end{proof}
Note that a non-normal point in $\mathcal{D}_{L_d}$ has two different labellings by $L_d$. In particular, the integral (2,2)-part of the corresponding Hodge structure has rank bigger than 3.
\begin{proposition}\label{MainResult}
The map $\mathcal{D}_{L_d}^{\mar}\twoheadrightarrow\mathcal{D}_{L_d}^{\lab}$ is an isomorphism.
\end{proposition}
It follows that $\mathcal{D}_d^{\mar}\to\mathcal{D}_d^{\lab}$ is an isomorphism.
\medskip
In order to prove Proposition \ref{MainResult}, we will show that $G(L_d)/H(L_d)\cong\mathbb{Z}/2\mathbb{Z}$ and that it is generated by an element
that restricts to $-\id$ on $L_d^{\perp}$.
Since $-\id_{L_d^{\perp}}$ induces the trivial action on $\Omega(L_d^{\perp})$,
Proposition \ref{MainResult} then follows.
\medskip
Let $G'(L_d) := \{g\in\tO(L_d): g(\lambda_i) = \lambda_i\}$.
Then $G(L_d)/H(L_d)$ is isomorphic to $G'(L_d)$ via restriction to $L_d$.
\begin{lemma}
\label{lemma_descrofG'}
The group $G'(L_d)$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$, generated by an element that acts on $\Disc(L_d)$ as $-\id$.
\end{lemma}
\begin{proof}
Let $g\in G'(L_d)$.
Assume $d=8k$, so $L_d$ has a basis $\lambda_1,\lambda_2,\tau$ with corresponding intersection matrix
\[\begin{pmatrix} 2 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & d/4
\end{pmatrix}\]
(see Section \ref{section_HspGM}). Then either $g(\tau)=\tau$, so $g=\id_{L_d}$, or
$g(\tau)=-\tau$. In the second case, $g$ acts on the discriminant group
\[\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/(d/4)\mathbb{Z} = \left\langle \frac{\lambda_1}{2},\frac{\lambda_2}2,\frac{\tau}{d/4}\right\rangle\]
of $L_d$ by
\[\left(\frac{\lambda_1}{2},\frac{\lambda_2}2,\frac{\tau}{d/4}\right)\mapsto
\left(\frac{\lambda_1}{2},\frac{\lambda_2}2,-\frac{\tau}{d/4}\right)\equiv
-\left(\frac{\lambda_1}{2},\frac{\lambda_2}2,\frac{\tau}{d/4}\right).
\]
Next, assume $d=2+8k$, so $L_d$ is isomorphic to the lattice with basis $\lambda_1,\lambda_2,\tau$ and intersection matrix
\[\begin{pmatrix} 2 & 0 & 0 \\
0 & 2 & 1 \\
0 & 1 & (d+2)/4
\end{pmatrix}.\]
Write $g(\tau) = a\lambda_1+b\lambda_2+c\tau$.
It follows from $g(\lambda_i) = \lambda_i$ that $a=0$ and $c=1-2b$, and solving $(g(\tau))^2 = (\tau)^2$ gives
$(b-b^2)d=0$. Hence we either have $b=0$,
so $g=\id_{L_d}$, or $b=1$ and $c=-1$, so $g(\tau) = \lambda_2-\tau$.
In the second case, the action on the discriminant group
\[\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/(d/2)\mathbb{Z} = \left\langle \frac{\lambda_1}{2},\frac{\lambda_2-2\tau}{d/2}\right\rangle\]
of $L_d$ is given by
\[\left(\frac{\lambda_1}{2},\frac{\lambda_2-2\tau}{d/2}\right)\mapsto
\left(\frac{\lambda_1}{2},\frac{-\lambda_2+2\tau}{d/2}\right)\equiv
-\left(\frac{\lambda_1}{2},\frac{\lambda_2-2\tau}{d/2}\right).
\]
Finally, assume $d=4+8k$; then there is a basis $\lambda_1,\lambda_2,\tau$ for $L_d$ with intersection matrix
\[\begin{pmatrix} 2 & 0 & 1 \\
0 & 2 & 1 \\
1 & 1 & (d+4)/4
\end{pmatrix}
\]
and write $g(\tau) = a\lambda_1+b\lambda_2+c\tau$.
Now $g(\lambda_i) = \lambda_i$ implies $a=b$ and $c = 1-2a$, and solving $(g(\tau))^2 = (\tau)^2$ gives
$(a-a^2)d=0$. Hence we either get $a=0$,
so $g=\id_{L_d}$, or $a=1$ and $c=-1$, so $g(\tau) = \lambda_1+\lambda_2-\tau$.
In the second case, the action on the discriminant group
\[\mathbb{Z}/d\mathbb{Z} = \left\langle \frac{\lambda_1+\lambda_2-2\tau}{d}\right\rangle\]
of $L_d$ is given by
\[\frac{\lambda_1+\lambda_2-2\tau}{d}\mapsto
\frac{\lambda_1+\lambda_2-2(\lambda_1+\lambda_2-\tau)}{d}=
-\frac{\lambda_1+\lambda_2-2\tau}{d}. \qedhere
\]
\end{proof}
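As a quick numerical sanity check (our own illustrative script, not part of the text), the three intersection matrices appearing in the proof above, with $\tau^2$-entry $d/4$, $(d+2)/4$ and $(d+4)/4$ respectively, indeed have determinant equal to the discriminant $d$:

```python
def det3(m):
    """Determinant of a 3x3 integer matrix given as nested lists."""
    a, b, c = m[0]
    d_, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d_ * i - f * g) + c * (d_ * h - e * g)

def gram(d):
    """Gram matrix of L_d in the basis lambda_1, lambda_2, tau (as in the proof)."""
    if d % 8 == 0:
        return [[2, 0, 0], [0, 2, 0], [0, 0, d // 4]]
    if d % 8 == 2:
        return [[2, 0, 0], [0, 2, 1], [0, 1, (d + 2) // 4]]
    if d % 8 == 4:
        return [[2, 0, 1], [0, 2, 1], [1, 1, (d + 4) // 4]]
    raise ValueError("d must be 0, 2 or 4 mod 8")

# Each Gram matrix has determinant d, i.e. disc(L_d) = d.
for d in [8, 16, 10, 26, 12, 20]:
    assert det3(gram(d)) == d
```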
\begin{proof}[Proof of Proposition~\ref{MainResult}]
By Lemma \ref{lemma_descrofG'}, the generator $\gamma'$ of $G'(L_d)$ acts as $-\id$ on $\Disc L_d$. Then $-\id_{L_d^{\perp}}\oplus\gamma'$ extends to an element $\gamma$ of $\tO(\Lambda)$ by \cite[Corollary 1.5.2 and Proposition 1.6.1]{Nikulin}, which generates $G(L_d)/H(L_d)$. Since by definition $\gamma$ restricts to
$-\id$ on $L_d^{\perp}$, we conclude that $\gamma$ acts trivially on $\Omega(L_d^\perp)$. This implies the statement.
\end{proof}
As a direct consequence of Proposition \ref{MainResult}, we get the following identification between moduli stacks of marked and labelled Hodge-special GM fourfolds.
\begin{cor}\label{IsoModStacks}
We have an isomorphism $\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d^{\mar} \cong \mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d^{\lab}$.
\end{cor}
\section{Gushel--Mukai fourfolds with associated K3 surface}
\label{section_GMvsK3}
In this section we prove Theorem \ref{cor_ratmap} and Theorem \ref{thm_FMp}.
\subsection{Rational maps to moduli spaces of K3 surfaces}\label{DefRationalMap}
The aim of this section is to construct the rational map of Theorem \ref{cor_ratmap}. Let $X$ be a Hodge-special GM fourfold whose period lies in $\mathcal{D}_d$, that is, there are a rank-$3$ positive definite lattice $L_d$ of discriminant $d$ containing $\Lambda_G$ and a primitive embedding $L_d\hookrightarrow \HH^{2,2}(X,\mathbb{Z})$. As in \cite[Section 6.2]{debarre_iliev_manivel_2015}, we say that a quasi-polarized K3 surface $(S,l)$ is \emph{Hodge-associated} to $X$ if there is a Hodge isometry
\[\HH^2(S,\mathbb{Z})\supset l^{\perp}\cong L_d^{\perp}\subset\HH^4(X,\mathbb{Z})\]
up to a sign and a Tate twist. In particular, $(S,l)$ has degree $d$. By \cite[Prop.~6.5]{debarre_iliev_manivel_2015}
$X$ has a Hodge-associated quasi-polarized K3 surface if and only if $d$ satisfies
\begin{equation}\tag{$\ast\ast$}
\label{eq_astast}
d\equiv 2,4\mod 8 \text{ and } p\centernot| d\text{ for every prime }p\equiv 3\mod 4.
\end{equation}
Moreover, when the period does not lie in $\mathcal{D}_d\cap\mathcal{D}_8$, then the quasi-polarized K3 surface is actually polarized.
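Condition \eqref{eq_astast} is purely arithmetic and easy to test; the following short script (our own illustration, with hypothetical function names) lists the discriminants up to $50$ satisfying it:

```python
def satisfies_astast(d):
    """Condition (**): d = 2 or 4 mod 8, and no prime p = 3 mod 4 divides d."""
    if d % 8 not in (2, 4):
        return False
    n, p = d, 2
    while p * p <= n:  # trial division over the prime factors of d
        while n % p == 0:
            if p % 4 == 3:
                return False
            n //= p
        p += 1
    # n is now 1 or the largest prime factor of d
    return n == 1 or n % 4 != 3

admissible = [d for d in range(1, 51) if satisfies_astast(d)]
print(admissible)  # [2, 4, 10, 20, 26, 34, 50]
```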
\medskip
Denote by $\Lambda_d:=E_8(-1)^{\oplus 2} \oplus U^{\oplus 2} \oplus I_1(-d)$ the lattice isomorphic to the primitive middle cohomology $\HH^2(S,\mathbb{Z})_{\prim}:=l^{\perp}\subset\HH^2(S,\mathbb{Z})$ of a polarized K3 surface $(S,l)$ of degree $d$. Then condition \eqref{eq_astast} on $d$ is equivalent to the existence of an isomorphism of lattices
$L_d^{\perp}\cong\Lambda_d(-1)$.
Under this isomorphism, the group $\widetilde{\tO}(\Lambda_d(-1))$ is identified with $\widetilde{\tO}(L_d^{\perp}) \cong H(L_d)$. Fix an embedding $L_d\hookrightarrow \Lambda$.
By Proposition \ref{MainResult}, we obtain the following commutative diagram:
\begin{equation}
\label{eq_diagram}
\xymatrix@C=0em{\Omega(\Lambda_d(-1))\ar[d] \ar[rrrrr]^{\cong} &&&&& \Omega(L_d^{\perp})\ar[d]\ar@{^{(}->}[rrrrr] &&&&& \Omega(\Lambda_{00})\ar[d]\\
\Omega(\Lambda_d(-1))/\widetilde{\tO}(\Lambda_d(-1))\ar[rrrr]^-{\cong} &&&& \Omega(L_d^{\perp})/H(L_d)= \!\!\!\!\! &\mathcal{D}_{L_d}^{\lab}\ar[rrrr]^-{\nu} &&&& \mathcal{D}_{L_d}\ar@{^{(}->}[r]&\mathcal{D}
}
\end{equation}
By Lemma \ref{lemma_mapnu}, the map $\nu$ is birational. It follows from the diagram above that there exists a birational map
\[\mathcal{D}\supset\mathcal{D}_{L_d}\dashrightarrow \Omega(\Lambda_d(-1))/\widetilde{\tO}(\Lambda_d(-1))
\cong \Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d).\]
In particular, we obtain a rational map
\begin{equation}
\label{eq_rationalmaponD}
\mathcal{D}_d\dashrightarrow\Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)
\end{equation}
which is birational when $d\equiv 4\mod 8$ and generically two-to-one when $d\equiv 2\mod 8$. Indeed, for a generic $x \in \mathcal{D}_d'$ there exists $g\in\tO(L_d^{\perp})$ such that $x$ and $(g\circ r)_{\mathcal{D}}(x)$ are mapped to the same point in $\Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)$.
The quotient $\Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)$ can be viewed as the moduli space of degree-$d$ quasi-polarized K3 surfaces \cite[Section~5]{HulekPloog}.
The map above induces a rational map
\begin{equation*}
\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d\dashrightarrow \Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)
\end{equation*}
sending a GM fourfold to an associated quasi-polarized K3 surface.
Now denote by $\tM_d$ the moduli space of polarized K3 surfaces of degree $d$.
The period map induces an open immersion
$\tM_d\hookrightarrow \Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)$.
When restricted to points outside $\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_8$, the image of the above rational map lies in $\tM_d$. We obtain a dominant rational map
\begin{equation}
\label{eq_ratmap}
\gamma_d\colon\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d\dashrightarrow \tM_d
\end{equation}
sending a GM fourfold to an associated polarized K3 surface.
This proves Theorem \ref{cor_ratmap}.
Note that $\gamma_d(X)$ is defined whenever $\rk\HH^{2,2}(X,\mathbb{Z})=3$.
\begin{remark}
Note that $\gamma_d$ is not unique,
since the map $\mathcal{D}_{L_d}\dashrightarrow \Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)$
depends on the choice of an isomorphism $L_d^{\perp}\cong\Lambda_d(-1)$.
To be precise, $\gamma_d$ is unique up to
$\tO(\Lambda_d)/\widetilde{\tO}(\Lambda_d)$ when $d\equiv 4\mod 8$,
and up to $(\tO(\Lambda_d)/\widetilde{\tO}(\Lambda_d))^2$ when $d\equiv 2\mod 8$.
\end{remark}
\subsection{Fibers of Fourier--Mukai partners}
We now apply the results in the previous sections to study Fourier--Mukai partners of GM fourfolds. In analogy to \cite{Huy, Pert1} for cubic fourfolds, we say that a \emph{Fourier--Mukai partner} of a GM fourfold $X$ is a GM fourfold $X'$ such that there exists an exact equivalence $\text{Ku}(X) \xrightarrow{\sim} \text{Ku}(X')$ of Fourier--Mukai type, i.e.\ the composition $\text{D}^b(X) \to \text{Ku}(X) \xrightarrow{\sim} \text{Ku}(X') \to \text{D}^b(X')$ has a Fourier--Mukai kernel. Note that by \cite[Theorem 1.6]{kuznetsov_perry_cones} non-isomorphic GM fourfolds in the same fiber of the period map are Fourier--Mukai partners.
\medskip
Before proving Theorem \ref{thm_FMp}, we need to make the following remark. In analogy to \cite{AddTho}, the \emph{Mukai lattice} for $\text{Ku}(X)$ has been defined in \cite[Section 3.1]{Pert2} as the abelian subgroup
$$\widetilde{\mathrm{H}}(\text{Ku}(X),\mathbb{Z}):=\lbrace \kappa \in \mathrm{K}(X)_{\text{top}}: \chi([\mathcal{O}_X(i)],\kappa)=\chi([\mathcal{U}_X^*(i)],\kappa)=0 \, \text{ for }i=0,1 \rbrace$$
of the topological K-theory of $X$, with the Euler form $\chi$ with reversed sign and the weight-$2$ Hodge structure induced by pulling back via the isomorphism
$$\widetilde{\mathrm{H}}(\text{Ku}(X),\mathbb{Z}) \otimes \mathbb{C} \rightarrow \HH^\bullet(X,\mathbb{C})$$
given by the Mukai vector $v(-)=\text{ch}(-).\sqrt{\text{td}(X)}$. As a lattice, $\widetilde{\mathrm{H}}(\text{Ku}(X),\mathbb{Z}) \cong U^{\oplus 4} \oplus E_8(-1)^{\oplus 2}$ by \cite[Theorem 1.2]{debarre_kuznetsov_2019}. We set
$$\widetilde{\mathrm{H}}^{1,1}(\text{Ku}(X),\mathbb{Z}):= \widetilde{\mathrm{H}}^{1,1}(\text{Ku}(X))\cap \widetilde{\mathrm{H}}(\text{Ku}(X),\mathbb{Z}).$$
By \cite[Lemma 2.27]{kuznetsov_perry} there are two classes in $\widetilde{\mathrm{H}}^{1,1}(\text{Ku}(X),\mathbb{Z})$ spanning a lattice $A_1^{\oplus 2}$. By \cite[Proposition 3.1]{Pert2}, there is a Hodge isometry
$$(A_1^{\oplus 2})^{\perp} \cong \HH^4(X,\mathbb{Z})_{00},$$
up to a sign and a Tate twist, where the orthogonal complement is taken in $\widetilde{\mathrm{H}}(\text{Ku}(X),\mathbb{Z})$.
In particular, a very general Hodge-special GM fourfold $X$ with discriminant $d$ satisfies $\text{rk}\widetilde{\mathrm{H}}^{1,1}(\text{Ku}(X),\mathbb{Z})=3$, and the lattice $\widetilde{\mathrm{H}}^{1,1}(\text{Ku}(X),\mathbb{Z})$ has discriminant $d$. Moreover, we have the following property.
\begin{lemma}
\label{lemma_eqinduceisometry}
Every equivalence $\emph{Ku}(X) \xrightarrow{\sim} \emph{Ku}(X')$ of Fourier--Mukai type induces a Hodge isometry $\widetilde{\mathrm{H}}(\emph{Ku}(X),\mathbb{Z}) \cong \widetilde{\mathrm{H}}(\emph{Ku}(X'),\mathbb{Z})$.
\end{lemma}
\begin{proof}
The statement follows by arguing as in \cite[Proposition 3.3]{Huy}.
\end{proof}
By the above lemma, every Fourier--Mukai partner of a very general Hodge-special GM fourfold with discriminant $d$ is a very general Hodge-special GM fourfold of the same discriminant.
\medskip
We are now ready to prove the next proposition which implies Theorem \ref{thm_FMp}.
Denote by $\tau(d)$ the number of distinct primes that divide $d/2$.
\begin{proposition}
\label{prop_FMpGM}
Let $d$ be a positive integer satisfying condition \eqref{eq_astast}. If $X$ is a very general Hodge-special GM fourfold with discriminant $d \equiv 4 \mod 8$ (resp.\ $d \equiv 2 \mod 8$), then there are $2^{\tau(d)-1}$ (resp.\ $2^{\tau(d)}$) fibers of the period map $p$ such that, when non-empty, their elements are Fourier--Mukai partners of $X$. Moreover, all Fourier--Mukai partners of $X$ are obtained in this way.
\end{proposition}
\begin{proof}
We fix a choice of the rational map $\gamma_d\colon\mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_d \dashrightarrow \tM_d $ of Section \ref{DefRationalMap}.
Let $X$ be a GM fourfold as in the statement and consider $\gamma_d(X)=(S,l)$, a degree-$d$ polarized K3 surface associated to $X$. Note that $S$ has Picard rank $1$. Moreover, by \cite[Theorem 3.6]{Pert2} and \cite[Theorem 1.9]{PPZ} there exists an exact equivalence $\text{Ku}(X) \xrightarrow{\sim} \text{D}^b(S)$. By \cite[Proposition 1.10]{Oguiso}, $S$ has $m:=2^{\tau(d)-1}$ non-isomorphic Fourier--Mukai partners.
Choose $m$ K3 surfaces $S_1:=S, S_2, \dots, S_m$ as representatives for each isomorphism class of Fourier--Mukai partners endowed with the unique degree-$d$ polarizations $l_1:=l, \dots, l_m$, respectively. These polarized K3 surfaces determine $m$ distinct points in $\Omega(\Lambda_d)/\widetilde{\tO}(\Lambda_d)$, which we still denote by $(S_i,l_i)$ for $1 \leq i \leq m$.
As summarized in diagram \eqref{eq_diagram}, their image via \eqref{eq_rationalmaponD} defines $m$ (resp.\ $2m$) period points in $\mathcal{D}_d$ if $d \equiv 4 \mod 8$ (resp.\ $d \equiv 2 \mod 8$).
We denote by $x_i \in \mathcal{D}_d$ the period point defined by $(S_i,l_i)$ if $d \equiv 4 \mod 8$, and by $x_i' \in \mathcal{D}_d'$, $x_i'' \in \mathcal{D}_d''$ those defined by $(S_i,l_i)$ if $d \equiv 2 \mod 8$.
Assume that $x_i$ (resp.\ $x_i'$ or $x_i''$) is in the image of the period map $p$ and consider a GM fourfold $X'$ in the fiber of $p$ over this point.
Then $$\text{Ku}(X') \xrightarrow{\sim} \text{D}^b(S_i) \xrightarrow{\sim} \text{D}^b(S) \xrightarrow{\sim} \text{Ku}(X).$$
Finally, by Lemma \ref{lemma_eqinduceisometry}, if $X'$ is a Fourier--Mukai partner of $X$, then $X'$ is a very general Hodge-special GM fourfold of the same discriminant.
Thus $\gamma_d(X')$ is a well-defined element in $\tM_d$. But then $\gamma_d(X')$ is a degree-$d$ polarized K3 surface which is a Fourier--Mukai partner of $(S,l)$, hence isomorphic to $(S_i,l_i)$ for some $1 \leq i \leq m$. This implies the statement.
\end{proof}
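As a purely numerical illustration of the count in the proposition (the helper names below are ours, and nothing is asserted about non-emptiness of the fibers), one can compute $\tau(d)$ and the number of relevant fibers for a given discriminant:

```python
def tau(d):
    # Number of distinct primes dividing d/2 (GM discriminants are even).
    n, count, p = d // 2, 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        count += 1
    return count

def fm_fiber_count(d):
    # Fibers of the period map whose elements are Fourier-Mukai partners:
    # 2^(tau(d)-1) fibers when d = 4 (mod 8), 2^tau(d) fibers when d = 2 (mod 8).
    if d % 8 == 4:
        return 2 ** (tau(d) - 1)
    if d % 8 == 2:
        return 2 ** tau(d)
    raise ValueError("d must be congruent to 2 or 4 mod 8")

# e.g. d = 20: d/2 = 10 = 2*5, so tau(20) = 2, and 20 = 4 (mod 8) gives 2 fibers.
```

The sample values of $d$ are chosen only to exercise the two congruence cases of the statement.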
\begin{remark}
Note that the image of the period map is not known \cite[Question 9.1]{debarre_iliev_manivel_2015}. Thus some of the fibers of Proposition \ref{prop_FMpGM} could a priori be empty.
\end{remark}
\section{Gushel--Mukai fourfolds and twisted K3 surfaces}
\label{section_GMandtwistedK3}
In this section we recall the definition of the moduli spaces of twisted polarized K3 surfaces with fixed order and degree introduced in \cite{BrakkeeTwistedK3}, and then we prove Theorem \ref{thm_ratmaptwisted} and Theorem \ref{thm_FMptwisted}.
\subsection{Moduli and periods of twisted K3 surfaces}
We summarize the relevant results of \cite{BrakkeeTwistedK3}.
Recall that for a complex K3 surface $S$, the Brauer group $\br(S)$ is isomorphic to the cohomological Brauer group
\[\HH^2_{\text{\'et}}(S,\mathbb{G}_m)\cong\HH^2(S,\mathcal{O}_S^*)_{\tors}\cong (\mathbb{Q}/\mathbb{Z})^{\oplus 22-\rho(S)}.\]
Let $T(S):=\ns(S)^{\perp}\subset\HH^2(S,\mathbb{Z})$ be the transcendental lattice of $S$. Then there is an isomorphism
\[\br(S)\cong\hom(T(S),\mathbb{Q}/\mathbb{Z}).\]
We denote by $\br(S)[r]$ the group of elements in $\br(S)$ whose order divides $r$. There exists a surjection
\[\widetilde{\br}(S)[r]:=\hom(\HH^2(S,\mathbb{Z})_{\prim},\mathbb{Z}/r\mathbb{Z})\twoheadrightarrow \hom(T(S),\mathbb{Z}/r\mathbb{Z})\cong \br(S)[r]\]
which is an isomorphism if and only if $\rho(S)=1$.
\begin{theorem}[{\cite[Theorem~1]{BrakkeeTwistedK3}}]
There exists a scheme $\tM_d[r]$ which is a coarse moduli space for triples $(S,l,\alpha)$ consisting of a polarized K3 surface $(S,l)$ of degree $d$ and an element $\alpha\in\widetilde{\br}(S)[r]$.
There exists a subscheme $\tM_d^r\subset\tM_d[r]$ which is a coarse moduli space for those triples for which $\alpha$ has order $r$.
\end{theorem}
The spaces $\tM_d[r]$ and $\tM_d^r$ are constructed as follows.
Let $\tM_d^{\mar}$ be the (fine) moduli space of triples $(S,l,\varphi)$
where $(S,l)$ is as before and $\varphi$ is an isomorphism
$\HH^2(S,\mathbb{Z})_{\prim}\cong\Lambda_d$. Note that $\varphi$ induces an isomorphism $\varphi_r\colon \br(S)[r]\to\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$. The group $\widetilde{\tO}(\Lambda_d)$ induces an action on $\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$,
and on the product
\[\tM_d^{\mar}[r]:=\tM_d^{\mar}\times\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})\]
by
\[g(S,l,\varphi,\alpha) = (S,l,g\circ\varphi, \varphi^{-1}_rg\varphi_r(\alpha)).\]
The space $\tM_d[r]$ is the quotient $\tM_d^{\mar}[r]/\widetilde{\tO}(\Lambda_d)$.
For $w\in\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$,
denote by $\stab(w)\subset\widetilde{\tO}(\Lambda_d)$ its stabilizer under the action of $\widetilde{\tO}(\Lambda_d)$.
Then $\tM_d[r]$ is a disjoint union
$\coprod_{[w]}\tM_w$
where $[w]\in \hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})/\widetilde{\tO}(\Lambda_d)$
and
\[\tM_w=(\tM_d^{\mar}\times\{w\})/\stab(w).\]
Each component $\tM_w$ is an irreducible quasi-projective variety with at most finite quotient singularities \cite[Corollary~2.2]{BrakkeeTwistedK3}.
It parametrizes triples $(S,l,\alpha)$ that admit a marking $\varphi$ such that $\varphi_r(\alpha) = w$.
The space $\tM_d^r$ is the union of those $\tM_w$ for which $w$ has order $r$.
\medskip
Given $(S,l)\in\tM_d$ and $\alpha\in\widetilde{\br}(S)[r] \cong\frac1r\HH^2(S,\mathbb{Z})_{\prim}^{\vee}/\HH^2(S,\mathbb{Z})_{\prim}^{\vee}$, there is an associated Hodge structure $\widetilde{\HH}(S,\alpha,\mathbb{Z})$ of K3 type on the full cohomology $\HH^*(S,\mathbb{Z})$ of $S$.
Namely, fix a lift of $\alpha$ to $\frac1r\HH^2(S,\mathbb{Z})_{\prim}^{\vee}\subset\HH^2(S,\mathbb{Q})$, which we will also denote by $\alpha$.
Then $\widetilde{\HH}(S,\alpha,\mathbb{Z})$ is defined by
\[\widetilde{\HH}{}^{2,0}(S,\alpha):=\mathbb{C}[\sigma+\alpha\wedge\sigma]\subset\HH^*(S,\mathbb{C}),\]
where $\sigma$ is a non-degenerate holomorphic 2-form on $S$.
If $\alpha$ maps to $\alpha'$ under $\widetilde{\br}(S)[r]\twoheadrightarrow\br(S)[r]$, then $\widetilde{\HH}(S,\alpha,\mathbb{Z})$ is isomorphic to the Hodge structure $\widetilde{\HH}(S,\alpha',\mathbb{Z})$ defined by $\alpha'$ as in \cite[Section~4]{GeneralizedCY}.
\medskip
Denote by $\widetilde{\Lambda}$ the extended K3 lattice.
Let $w\in\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$ and denote by $T_w$ the finite-index sublattice $\ker(w)\subset\Lambda_d$.
Using the above one shows that, up to an identification of $T_w$ with $\exp(w)\Lambda_d\cap\widetilde{\Lambda}$ (see \cite[Section~3.1]{BrakkeeTwistedK3}), there is a holomorphic, injective period map
\begin{align*}
\tM_d^{\mar}\times\{w\}&\to\Omega(T_w)\\
(S,l,\varphi,w)&\mapsto \widetilde{\varphi}\left(\widetilde{\HH}{}^{2,0}(S,\varphi_r^{-1}(w))\right).
\end{align*}
It induces an algebraic embedding
$\tM_w\hookrightarrow\Omega(T_w)/\stab(w)$.
\medskip
For later use, we define the \emph{Picard group} of a twisted K3 surface as
\[\pic(S,\alpha):=\widetilde{\HH}{}^{1,1}(S,\alpha)\cap \widetilde{\HH}(S,\alpha,\mathbb{Z})\] and its \emph{transcendental lattice} $T(S,\alpha)$ as the orthogonal complement of $\pic(S,\alpha)$ in $\widetilde{\HH}(S,\alpha,\mathbb{Z})$. When $\alpha$ is trivial, we have $\pic(S,\alpha) = \HH^0(S,\mathbb{Z})\oplus\HH^{1,1}(S,\mathbb{Z})\oplus\HH^4(S,\mathbb{Z})$ and $T(S,\alpha) = T(S)$. One can show that there is an isomorphism of lattices $T(S,\alpha)\cong \ker(\alpha\colon T(S)\to\mathbb{Q}/\mathbb{Z})$.
\subsection{Twisted K3 surfaces associated to GM fourfolds}
\label{section_GMvstwistedK3}
Recall \cite[Definition~3.11]{Pert2} that if $X$ is a Hodge-special GM fourfold, a twisted K3 surface $(S,\alpha)$ is said to be associated to $X$ when
there is a Hodge isometry
\[\widetilde{\HH}(\text{Ku}(X),\mathbb{Z})\cong \widetilde{\HH}(S,\alpha,\mathbb{Z}).\]
Note that if $d$ is the degree and $r$ the order of $(S,\alpha)$,
then $X$ has discriminant $dr^2$.
One can show \cite[Theorem 1.1]{Pert2} that $X$ has an associated twisted K3 surface if and only if the period point of $X$ lies in $\mathcal{D}_{d'}$ for some $d'$ satisfying
\begin{equation}\tag{$\ast\ast'$}
d' = \prod_i p_i^{n_i} \text{ with } n_i\equiv 0 \mod 2\text{ for }p_i\equiv 3\mod 4
\end{equation}
where the $p_i$ are distinct primes. Note that this is equivalent to the following: $d'$ is of the form $dr^2$ for some integers $d$ and $r$, where $d$ satisfies $(\ast\ast)$. This decomposition $d'=dr^2$ is however not unique.
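Condition $(\ast\ast')$ is straightforward to test numerically. The following sketch (with hypothetical helper names of our own) checks that every prime $p_i\equiv 3 \bmod 4$ divides $d'$ to an even power:

```python
def factorize(n):
    # Prime factorization as a dict {prime: exponent}, by trial division.
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def satisfies_double_star_prime(dprime):
    # (**'): the exponent n_i is even whenever p_i = 3 (mod 4).
    return all(e % 2 == 0
               for p, e in factorize(dprime).items() if p % 4 == 3)

# 20 = 2^2 * 5 passes (no prime = 3 mod 4 occurs);
# 12 = 2^2 * 3 fails (3 appears to an odd power).
```

This only checks the stated congruence condition; it says nothing about which decompositions $d'=dr^2$ have $d$ satisfying $(\ast\ast)$.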
We will prove that $(\ast\ast')$ is equivalent to a condition on the lattice $L_{d'}^{\perp}$.
We need the following lemma. For $w\in\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$ of order $r$, let $T_w\subset\Lambda_d$ be the index-$r$ sublattice $\ker(w)$.
We embed $T_w$ primitively into $\widetilde{\Lambda}$ using the map $\exp(w)$ (see \cite[Section~3.1]{BrakkeeTwistedK3}).
\begin{lemma}\label{OrthGpSurj}
Let $S_w:=T_w^{\perp}\subset\widetilde{\Lambda}$. The canonical map
$\tO(S_w)\to\tO(\Disc S_w)$ is surjective.
\end{lemma}
\begin{proof}
Note that $S_w$ has rank 3 and its discriminant group is isomorphic to $\Disc T_w$.
By \cite[Proposition~6.5]{debarre_iliev_manivel_2015}, this group is either cyclic or isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2\times\mathbb{Z}/(d'/4)\mathbb{Z}$. In the first case, the statement follows from \cite[Theorem~1.14.2]{Nikulin}. In the second case, it follows from \cite[Corollary~VIII.7.8]{MiMo}.
\end{proof}
\begin{cor}\label{EquivAssociated}
Consider an integer $d'>8$ with $d' \equiv 0,2,4 \mod 8$. Then $d'$ satisfies $(\ast\ast')$ if and only if for some decomposition $d'=dr^2$ with $d$ satisfying $(\ast\ast)$, there is a $w\in\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$ and a lattice isometry $L_{d'}^{\perp}(-1)\cong T_w$.
\end{cor}
\begin{proof}
Suppose $d'$ satisfies $(\ast\ast')$. Let $X$ be a very general GM fourfold with period point in $\mathcal{D}_{d'}$, which exists by \cite[Theorem 8.1]{debarre_iliev_manivel_2015},
and let $(S,\alpha)$ be a twisted K3 surface associated to $X$.
Let $d$ be the degree of $S$ and $r$ the order of $\alpha$, so $d'=dr^2$.
Then the Hodge isometry
$\widetilde{\HH}(\text{Ku}(X),\mathbb{Z})\cong\widetilde{\HH}(S,\alpha,\mathbb{Z})$ induces a lattice isometry of the transcendental parts:
\[L_{d'}^{\perp}(-1)\cong T(S,\alpha)\cong\ker(\alpha\colon\HH^2(S,\mathbb{Z})_{\prim}\to\mathbb{Z}/r\mathbb{Z}).\]
Now any marking $\HH^2(S,\mathbb{Z})_{\prim}\cong\Lambda_d$ induces an isometry
$L_{d'}^{\perp}(-1)\cong T_w$ for some $w\in\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$.
Vice versa, assume we have an isometry $L_{d'}^{\perp}(-1)\cong T_w$ as above. Then the associated period domains $\Omega(L_{d'}^{\perp}(-1))\cong \Omega(L_{d'}^{\perp})$ and $\Omega(T_w)$ are also isomorphic. It follows that if $X$ is a very general GM fourfold of discriminant $d'$, then $L_{d'}^{\perp}\subset\HH^4(X,\mathbb{Z})$ is Hodge isometric, up to a sign and a Tate twist, to $T(S,\alpha)$ for some twisted K3 surface $(S,\alpha)$ of degree $d$ and order $r$. By Lemma \ref{OrthGpSurj}, this can be extended to a Hodge isometry
$\widetilde{\HH}(\text{Ku}(X),\mathbb{Z})\cong \widetilde{\HH}(S,\alpha,\mathbb{Z})$. Since $X$ is very general in $\mathcal{D}_{d'}$, its period lies in $\mathcal{D}_e$ if and only if $e=d'$, and it follows that $d'$ satisfies $(\ast\ast')$.
\end{proof}
Assume we are in the situation of Corollary \ref{EquivAssociated}, so $d'$ satisfies $(\ast\ast')$ and we have a fixed $w\in\hom(\Lambda_d,\mathbb{Z}/r\mathbb{Z})$ such that $L_{d'}^{\perp}(-1)$ is isomorphic to $T_w$.
This induces an isomorphism
$\Omega(T_w)/\widetilde{\tO}(T_w)\cong\Omega(L_{d'}^{\perp})/H(L_{d'})=\mathcal{D}_{L_{d'}}^{\lab}$.
Next, note that $\widetilde{\tO}(T_w)$ is a subgroup of $\stab(w)$ \cite[Lemma~4.1]{BrakkeeTwistedK3}.
Summarized in a diagram, we have
\begin{equation}
\label{cd_twistedperiod}
\xymatrix@C=0em{&&& \Omega(T_w)\ar[d] \ar[rrr]^{\cong} &&& \Omega(L_{d'}^{\perp})\ar[d]\ar@{^{(}->}[rrrrr] &&&&& \Omega(\Lambda_{00})\ar[d]\\
&&& \Omega(T_w)/\widetilde{\tO}(T_w)\ar[d]_{\pi}\ar[rrr]^-{\cong} &&& \mathcal{D}_{L_{d'}}^{\lab}\ar[rrrr]^-{\nu} &&&& \mathcal{D}_{L_{d'}}\ar@{^{(}->}[r]&\mathcal{D} \\
\tM_w\ar@{^{(}->}[rrr] &&& \Omega(T_w)/\stab(w) &&&&&&&&&
}
\end{equation}
where $\pi\colon\Omega(T_w)/\widetilde{\tO}(T_w)\to \Omega(T_w)/\stab(w)$ is a finite map. As in Section \ref{DefRationalMap}, we obtain a finite dominant rational map $\mathcal{D}_{L_{d'}}\dashrightarrow\tM_w$.
\begin{cor}\label{RatMapTwisted}
There is a dominant rational map
\[\delta_{d'}\colon \mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_{d'}\dashrightarrow\tM_w\]
which sends a very general Hodge-special GM fourfold $X$ of discriminant $d'$ to a polarized twisted K3 surface associated to $X$.
\end{cor}
The map is defined whenever $\rk\HH^{2,2}(X,\mathbb{Z})=3$. When $\rk \HH^{2,2}(X,\mathbb{Z})>3$
and the map is defined at $X$, the image of $X$ is a triple $(S,l,\alpha)\in \tM_w$ such that, if $\alpha'\in\br(S)[r]$ denotes the image of $\alpha\in\widetilde{\br}(S)[r]$, then $(S,\alpha')$ is associated to $X$.
Namely, by Lemma \ref{OrthGpSurj}, the Hodge isometry
\[\HH^4(X,\mathbb{Z})_{00}\supset L_{d'}^{\perp}\cong T_w\subset \widetilde{\HH}(S,\alpha,\mathbb{Z})\]
extends to a Hodge isometry
$\widetilde{\HH}(\text{Ku}(X),\mathbb{Z})\cong\widetilde{\HH}(S,\alpha,\mathbb{Z})=\widetilde{\HH}(S,\alpha',\mathbb{Z})$.
\subsection{Fourier--Mukai partners in the twisted case}
In this section we apply Corollary \ref{RatMapTwisted} to construct Fourier--Mukai partners of a very general GM fourfold with a twisted associated K3 surface.
First, we need the following lemma, which is the analogue of \cite[Lemma 2.3]{Huy} in the case of cubic fourfolds.
\begin{lemma}
\label{lemma_inverseorientation}
The Mukai lattice $\widetilde{\HH}(\emph{Ku}(X),\mathbb{Z})$ of a GM fourfold $X$ has an orientation reversing Hodge isometry.
\end{lemma}
\begin{proof}
Denote by $\lambda_1$ and $\lambda_2$ the standard generators of $A_1^{\oplus 2} \subset \widetilde{\HH}(\text{Ku}(X),\mathbb{Z})$. Consider the isometry $g \in \tO(A_1^{\oplus 2})$ defined by
$$g(\lambda_1)=-\lambda_1 \text{ and }g(\lambda_2)=\lambda_2.$$
Since $g$ acts trivially on the discriminant group of $A_1^{\oplus 2}$, by \cite[Proposition 1.6.1 and Corollary 1.5.2]{Nikulin} there is an isometry $\tilde{g}$ of $\widetilde{\Lambda}:=U^{\oplus 4} \oplus E_8(-1)^{\oplus 2}$ extending $g$ and acting trivially on $(A_1^{\oplus 2})^{\perp}$. By definition $\tilde{g}$ reverses the orientation of the two positive directions in $A_1^{\oplus 2}$ and preserves the orientation of the two positive directions in $(A_1^{\oplus 2})^{\perp}$. Moreover, $\tilde{g}$ preserves the Hodge structure, as it acts trivially on $(A_1^{\oplus 2})^{\perp}$. This implies the statement.
\end{proof}
\begin{remark}
Note that there is an autoequivalence of $\text{Ku}(X)$ which induces the Hodge isometry described in Lemma \ref{lemma_inverseorientation}. Indeed, consider the composition $\mathbb{L}_{\langle \mathcal{O}_X, \mathcal{U}_X^*,\mathcal{O}_X(1) \rangle} \circ (\mathbb{D}(-) \otimes \mathcal{O}_X(1))$, where $\mathbb{D}(-):=\textrm{R}Hom(-,\mathcal{O}_X)$ and $\mathbb{L}_{\langle \mathcal{O}_X, \mathcal{U}_X^*,\mathcal{O}_X(1) \rangle}$ is the left mutation functor through $\mathcal{O}_X, \mathcal{U}_X^*,\mathcal{O}_X(1)$. One can check that this composition induces an autoequivalence when restricted to $\text{Ku}(X)$, acting on the Mukai lattice as required.
\end{remark}
We can now prove the following proposition, which implies Theorem \ref{thm_FMptwisted}. Denote by $\varphi(r)$ the Euler totient function evaluated at $r$. Recall that a Fourier--Mukai partner of order $r$ of a twisted K3 surface $(S,\alpha)$ is a twisted K3 surface $(S',\alpha')$ with $\alpha'$ of order $r$ such that there is an equivalence $\text{D}^b(S,\alpha) \xrightarrow{\sim} \text{D}^b(S',\alpha')$.
\begin{proposition}
\label{prop_FMptwistedcase}
Let $d'=dr^2$ be a positive integer such that a very general GM fourfold $X$ of discriminant $d'$ admits an associated polarized twisted K3 surface of degree $d$ and order $r$. If $d' \equiv 0 \mod 4$ (resp.\ $d' \equiv 2 \mod 8$), then there are $m'$ (resp.\ $2m'$) fibers of the period map $p$ such that, when non-empty, their elements are Fourier--Mukai partners of $X$, where
\begin{equation}
\label{eq_m'}
m'=
\begin{cases}
\varphi(r)2^{\tau(d)-1} & \text{if } r=2 \text{ or } d>2 \\
\varphi(r)/ 2 & \text{if } r>2 \text{ and } d=2.
\end{cases}
\end{equation}
\end{proposition}
\begin{proof}
We fix a rational map $\delta_{d'}\colon \mathcal{M}_4\times_{\mathcal{D}}\mathcal{D}_{d'}\dashrightarrow\tM_w$ as in Corollary \ref{RatMapTwisted}.
Let $X$ be a GM fourfold as in the statement and consider the twisted degree-$d$ polarized K3 surface $\delta_{d'}(X)=(S,\alpha)$ with $\text{ord}(\alpha)=r$. Note that $S$ has Picard rank $1$ and by \cite[Theorem 1.1]{Pert2} and \cite[Theorem 1.9]{PPZ} there is an equivalence $\text{Ku}(X) \xrightarrow{\sim} \text{D}^b(S,\alpha)$. Let $m'$ be the number of Fourier--Mukai partners of $(S,\alpha)$ of order $r$.
By Lemmas \ref{OrthGpSurj} and \ref{lemma_inverseorientation}, arguing as in \cite[Proposition 4.4]{Pert1}, one can show that $m'$ is equal to the upper bound given in \cite[Proposition 4.3]{Ma}. Moreover, by \cite[Proposition 4.7]{Pert1} this number is given by \eqref{eq_m'} as in the statement. Then using diagram \eqref{cd_twistedperiod} and arguing as in Proposition \ref{prop_FMpGM}, we deduce the statement.
\end{proof}
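As a numerical illustration of the count \eqref{eq_m'} (the helper names are ours and purely illustrative), $m'$ can be evaluated from the Euler totient and $\tau(d)$:

```python
def tau(d):
    # Number of distinct primes dividing d/2.
    n, count, p = d // 2, 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        count += 1
    return count

def euler_phi(r):
    # Euler totient, computed from the prime factorization of r.
    result, n, p = r, r, 2
    while p * p <= n:
        if n % p == 0:
            result -= result // p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result -= result // n
    return result

def m_prime(d, r):
    # The case split of the displayed formula for m'.
    if r == 2 or d > 2:
        return euler_phi(r) * 2 ** (tau(d) - 1)
    return euler_phi(r) // 2  # r > 2 and d = 2

# e.g. d = 2, r = 3: d' = 18 = 2 (mod 8), and m' = phi(3)/2 = 1.
```

As in the preceding remark, these numbers count only the order-$r$ twisted partners; nothing is claimed about partners of other orders.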
\begin{remark}
Note that a GM fourfold as in Proposition \ref{prop_FMptwistedcase} could have other Fourier--Mukai partners. Indeed, they could be obtained from Fourier--Mukai partners of $(S,\alpha)$ with order different from $r$.
\end{remark}
\begin{remark}
The construction of the rational map in \cite[Section~4]{BrakkeeTwistedK3} can be used in the case of cubic fourfolds to simplify the proof of \cite[Theorem 1.2]{Pert1}. More precisely, the rational map allows one to skip the computation in \cite[Section 4.1]{Pert2}. As a consequence, it is possible to remove the assumption in \cite[Theorem 1.2]{Pert1} that $9$ does not divide the discriminant, giving a more complete statement.
\end{remark}